Prompts

Arch’s primary design point is to securely accept, process, and handle prompts. To do that effectively, Arch relies on Envoy’s HTTP connection management subsystem and on its own prompt handler subsystem, engineered with purpose-built LLMs to implement critical functionality on your behalf so that you can stay focused on business logic.

Arch’s prompt handler subsystem interacts with the model subsystem through Envoy’s cluster manager to ensure a robust, resilient, and fault-tolerant experience when managing incoming prompts.

See also

Read more about the model subsystem and how the LLMs are hosted in Arch.

Messages

Arch accepts messages directly from the body of the HTTP request in a format that follows the Hugging Face Messages API. This design allows developers to pass a list of messages, where each message is represented as a dictionary containing two key-value pairs:

  • Role: Defines the role of the message sender, such as “user” or “assistant”.

  • Content: Contains the actual text of the message.
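
For example, a request body carrying a short conversation might look like the following sketch (the values are illustrative):

# Illustrative request payload in the Hugging Face Messages format.
payload = {
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "The capital of France is Paris."},
        {"role": "user", "content": "And how many people live there?"},
    ]
}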

Prompt Guard

Arch is engineered with Arch-Guard, an industry-leading safety layer powered by a compact, high-performing LLM that monitors incoming prompts to detect and reject jailbreak attempts, ensuring that unauthorized or harmful behaviors are intercepted early in the process.

To add jailbreak guardrails, see the example below:

Example Configuration
version: v0.1

listener:
  address: 0.0.0.0 # or 127.0.0.1
  port: 10000
  # Defines how Arch should parse the content from application/json or text/plain Content-Type in the HTTP request
  message_format: huggingface

# Centralized way to manage LLMs: access keys, retry logic, failover, and limits
llm_providers:
  - name: OpenAI
    provider: openai
    access_key: $OPENAI_API_KEY
    model: gpt-4o
    default: true
    stream: true

# default system prompt used by all prompt targets
system_prompt: You are a network assistant that just offers facts, not advice on manufacturers or purchasing decisions.

prompt_guards:
  input_guards:
    jailbreak:
      on_exception:
        message: Looks like you're curious about my abilities, but I can only provide assistance within my programmed parameters.

Note

As a roadmap item, Arch will expose the ability for developers to define custom guardrails via Arch-Guard, and will add support for additional developer-defined safety checks and hazardous categories like violent crimes, privacy violations, hate speech, etc. To offer feedback on our roadmap, please visit our GitHub page.

Prompt Targets

Once a prompt passes any configured guardrail checks, Arch processes the contents of the incoming conversation and identifies where to forward it via its prompt target primitive. Prompt targets are the endpoints that receive prompts processed by Arch. For example, Arch enriches incoming prompts with metadata, such as detecting when a user’s intent has changed, so that you can build faster, more accurate RAG apps.

Configuring prompt_targets is simple. See the example below:

Example Configuration
version: v0.1

listener:
  address: 0.0.0.0 # or 127.0.0.1
  port: 10000
  # Defines how Arch should parse the content from application/json or text/plain Content-Type in the HTTP request
  message_format: huggingface

# Centralized way to manage LLMs: access keys, retry logic, failover, and limits
llm_providers:
  - name: OpenAI
    provider: openai
    access_key: $OPENAI_API_KEY
    model: gpt-4o
    default: true
    stream: true

# default system prompt used by all prompt targets
system_prompt: You are a network assistant that just offers facts, not advice on manufacturers or purchasing decisions.

prompt_guards:
  input_guards:
    jailbreak:
      on_exception:
        message: Looks like you're curious about my abilities, but I can only provide assistance within my programmed parameters.

prompt_targets:
  - name: information_extraction
    default: true
    description: handle all scenarios that are question-and-answer in nature, like summarization, information extraction, etc.
    endpoint:
      name: app_server
      path: /agent/summary
    # Arch uses the default LLM and treats the response from the endpoint as the prompt to send to the LLM
    auto_llm_dispatch_on_response: true
    # override system prompt for this prompt target
    system_prompt: You are a helpful information extraction assistant. Use the information that is provided to you.

  - name: reboot_network_device
    description: Reboot a specific network device
    endpoint:
      name: app_server
      path: /agent/action
    parameters:
      - name: device_id
        type: str
        description: Identifier of the network device to reboot.
        required: true
      - name: confirmation
        type: bool
        description: Confirmation flag to proceed with reboot.
        default: false
        enum: [true, false]

error_target:
  endpoint:
    name: error_target_1
    path: /error

# Arch load-balances round-robin across multiple endpoints, managed via the cluster subsystem
endpoints:
  app_server:
    # value could be an IP address or a hostname with port
    # this could also be a list of endpoints for load balancing
    # for example endpoint: [ ip1:port, ip2:port ]
    endpoint: 127.0.0.1:80
    # max time to wait for a connection to be established
    connect_timeout: 0.005s
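
To make the flow concrete, here is a minimal sketch of what the app_server referenced above could look like, assuming FastAPI. The paths match the prompt target config, but the request and response shapes are assumptions for illustration, not Arch’s exact schema:

# Minimal app_server sketch (FastAPI). Request/response shapes are
# assumptions for illustration, not Arch's exact schema.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class RebootRequest(BaseModel):
    device_id: str
    confirmation: bool = False

@app.post("/agent/summary")
async def summarize(body: dict):
    # With auto_llm_dispatch_on_response: true, Arch treats the text this
    # endpoint returns as the prompt to send to the default LLM.
    return "Summarize the following network facts: ..."

@app.post("/agent/action")
async def reboot_device(req: RebootRequest):
    # Parameters extracted by Arch (device_id, confirmation) arrive as the
    # JSON body of this call.
    return {"status": f"reboot initiated for {req.device_id}"}

# Run with, e.g.: uvicorn app:app --host 127.0.0.1 --port 80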

See also

Check Prompt Target for more details!

Intent Matching

Arch uses fast text-embedding and intent-recognition approaches to first detect the intent of each incoming prompt. This intent-matching phase analyzes the prompt’s content and matches it against the predefined prompt targets, ensuring that each prompt is forwarded to the most appropriate endpoint. Arch’s intent-matching framework considers both the name and description of each prompt target, and uses a composite score combining embedding similarity and intent-classification scores to enhance accuracy in forwarding decisions.

  • Intent Recognition: NLI techniques further refine the matching process by evaluating the semantic alignment between the prompt and potential targets.

  • Text Embedding: By embedding the prompt and comparing it to known target vectors, Arch effectively identifies the closest match, ensuring that the prompt is handled by the correct downstream service.
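
As an illustrative sketch (not Arch’s actual implementation), the composite score can be thought of as a weighted blend of embedding similarity and an NLI-style classification score; the weighting and helper functions below are assumptions:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def composite_score(prompt_emb, target_emb, nli_score, alpha=0.5):
    # Hypothetical blend of embedding similarity and intent-classification
    # (NLI) score; alpha is an assumed weight, not an Arch parameter.
    return alpha * cosine_similarity(prompt_emb, target_emb) + (1 - alpha) * nli_score

def match_target(prompt_emb, targets, alpha=0.5):
    # targets: list of (name, target_embedding, nli_score) tuples (illustrative).
    # Returns the name of the prompt target with the highest composite score.
    return max(targets, key=lambda t: composite_score(prompt_emb, t[1], t[2], alpha))[0]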

Agentic Apps via Prompt Targets

To support agentic apps via prompts, like scheduling travel plans or sharing comments on a document, Arch uses its function-calling abilities to extract the critical information a downstream backend API or function call needs from the incoming prompt (or set of prompts) before calling it directly. For more details on how you can build agentic applications using Arch, see our full guide.

Note

Arch-Function is a collection of dedicated agentic models engineered in Arch to extract information from a (set of) prompts and execute the necessary backend API calls. This allows for efficient handling of agentic tasks, such as scheduling or data retrieval, by dynamically interacting with backend services. Arch-Function achieves state-of-the-art performance, comparable with frontier models like Claude 3.5 Sonnet and GPT-4, while being 100x cheaper ($0.05 per 1M tokens, hosted) and 10x faster (p50 latencies of 200ms).
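
For instance, given the reboot_network_device prompt target configured earlier, a prompt like “Please reboot device sw-01” could be resolved into a structured call resembling the sketch below; the field names are illustrative, not Arch’s actual wire format:

# Illustrative only: the kind of structured call Arch-Function derives
# before invoking the backend endpoint (/agent/action).
extracted_call = {
    "name": "reboot_network_device",
    "parameters": {
        "device_id": "sw-01",   # extracted from the prompt
        "confirmation": False,  # default from the prompt target config
    },
}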

Prompting LLMs

Arch is a single piece of software designed to manage both ingress and egress prompt traffic, drawing its distributed proxy nature from the robust Envoy. This makes it extremely efficient and capable of handling upstream connections to LLMs. If your application originates calls to an API-based LLM, simply use the OpenAI client and point it at Arch. By sending traffic through Arch, you can propagate traces, manage and monitor traffic, apply rate limits, and utilize a large set of traffic management capabilities in a centralized way.

Attention

When you start Arch, it automatically creates a listener port for egress calls to upstream LLMs. This is based on the llm_providers configuration section in the arch_config.yml file. Arch binds itself to a local address such as 127.0.0.1:12000.

Example: Using OpenAI Client with Arch as an Egress Gateway

from openai import OpenAI

# Point the OpenAI client at the Arch egress gateway instead of api.openai.com.
# The /v1 suffix is assumed here so the client's standard request paths resolve.
client = OpenAI(
    base_url="http://127.0.0.1:12000/v1",
    api_key="n/a",  # Arch injects the real key from the llm_providers config
)

# Use the OpenAI client as usual
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

print("OpenAI Response:", response.choices[0].message.content.strip())

In these examples, the OpenAI client is used to send traffic directly through the Arch egress proxy to the LLM of your choice, such as OpenAI. The OpenAI client is configured to route traffic via Arch by setting the proxy to 127.0.0.1:12000, assuming Arch is running locally and bound to that address and port. This setup allows you to take advantage of Arch’s advanced traffic management features while interacting with LLM APIs like OpenAI.