RAG Application

The following section describes how Arch helps you build faster, smarter, and more accurate Retrieval-Augmented Generation (RAG) applications.

Parameter Extraction for RAG

To build RAG applications, you can configure prompt targets with parameters, enabling Arch to extract critical information from the conversation in a structured way for downstream processing. This improves both the retrieval quality and the speed of your application: with the extracted parameters in hand, you can pull exactly the right chunks from a vector database or SQL-like data store. With Arch, you can streamline data retrieval and processing to build more efficient and precise RAG applications.

Step 1: Define Prompt Targets

Prompt Targets
prompt_targets:
  - name: get_device_statistics
    description: Retrieve and present the relevant data based on the specified devices and time range
    path: /agent/device_summary
    parameters:
      - name: device_ids
        type: list
        description: A list of device identifiers (IDs) for which to retrieve statistics.
        required: true
      - name: time_range
        type: int
        description: The number of days in the past over which to retrieve device statistics
        required: false
        default: 7
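
For illustration, if a user asked "Show me stats for devices dev-001 and dev-002 over the last 30 days", Arch would extract the configured parameters and forward them in the request body to your endpoint. The device IDs and values below are hypothetical:

# Hypothetical example: for the prompt
#   "Show me stats for devices dev-001 and dev-002 over the last 30 days"
# Arch would extract the parameters and POST a body equivalent to:
extracted_request_body = {
    "device_ids": ["dev-001", "dev-002"],  # from the prompt
    "time_range": 30,                      # from "last 30 days"
}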

Step 2: Process Request Parameters in Flask

Once the prompt targets are configured as above, handling those parameters in your application server is straightforward. The Flask endpoint below validates the extracted parameters and returns the requested statistics.

Parameter handling with Flask
from flask import Flask, request, jsonify

app = Flask(__name__)


@app.route("/agent/device_summary", methods=["POST"])
def get_device_summary():
    """
    Endpoint to retrieve device statistics based on device IDs and an optional time range.
    """
    data = request.get_json()

    # Validate 'device_ids' parameter
    device_ids = data.get("device_ids")
    if not device_ids or not isinstance(device_ids, list):
        return (
            jsonify({"error": "'device_ids' parameter is required and must be a list"}),
            400,
        )

    # Validate 'time_range' parameter (optional, defaults to 7)
    time_range = data.get("time_range", 7)
    if not isinstance(time_range, int):
        return jsonify({"error": "'time_range' must be an integer"}), 400

    # Simulate retrieving statistics for the given device IDs and time range
    # In a real application, you would query your database or external service here
    statistics = []
    for device_id in device_ids:
        # Placeholder for actual data retrieval
        stats = {
            "device_id": device_id,
            "time_range": f"Last {time_range} days",
            "data": f"Statistics data for device {device_id} over the last {time_range} days.",
        }
        statistics.append(stats)

    response = {"statistics": statistics}

    return jsonify(response), 200


if __name__ == "__main__":
    app.run(debug=True)
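
To sanity-check the endpoint before placing it behind Arch, you can POST a payload shaped like the one Arch forwards. This is a minimal local test sketch; the port, device IDs, and values are assumptions:

import requests

# Hypothetical local test; assumes the Flask app above is running on port 5000.
resp = requests.post(
    "http://localhost:5000/agent/device_summary",
    json={"device_ids": ["dev-001", "dev-002"], "time_range": 30},
)
print(resp.status_code)  # 200
print(resp.json())       # {"statistics": [...]}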

[Coming Soon] Drift Detection via Arch Intent-Markers

Developers struggle to efficiently handle follow-up or clarification questions. Specifically, when users ask for changes or additions to previous responses, their AI applications often generate entirely new responses instead of adjusting previous ones. Arch offers intent tracking so that developers can detect when the user has shifted away from a previous intent, letting them dramatically improve retrieval accuracy, lower overall token cost, and speed up responses back to users.

Arch uses its built-in lightweight NLI and embedding models to determine if the user has steered away from an active intent. Arch’s intent-drift detection mechanism is based on its prompt target primitive: Arch tries to match an incoming prompt to one of the prompt_targets configured in the gateway. Once it detects that the user has moved away from an active intent, Arch adds the x-arch-intent-marker header to the request before sending it to your application servers.

Intent Detection Example
@app.route("/process_rag", methods=["POST"])
def process_rag():
    # Extract JSON data from the request
    data = request.get_json()

    user_id = data.get("user_id")
    if not user_id:
        return jsonify({"error": "User ID is required"}), 400

    client_messages = data.get("messages")
    if not client_messages or not isinstance(client_messages, list):
        return jsonify({"error": "Messages array is required"}), 400

    # Extract the intent change marker from Arch's headers if present for the current prompt
    intent_changed_header = request.headers.get("x-arch-intent-marker", "").lower()
    if intent_changed_header in ["", "false"]:
        intent_changed = False
    elif intent_changed_header == "true":
        intent_changed = True
    else:
        # Invalid value provided
        return (
            jsonify({"error": "Invalid value for x-arch-intent-marker header"}),
            400,
        )

    # Update user conversation based on intent change
    memory = update_user_conversation(user_id, client_messages, intent_changed)

    # Retrieve messages since last intent change for LLM
    messages_for_llm = get_messages_since_last_intent(memory.chat_memory.messages)

    # Forward messages to upstream LLM
    llm_response = forward_to_llm(messages_for_llm)

    # Prepare the messages to return
    messages_to_return = []
    for message in memory.chat_memory.messages:
        role = "user" if isinstance(message, HumanMessage) else "assistant"
        content = message.content
        metadata = message.additional_kwargs.get("metadata", {})
        message_entry = {
            "uuid": metadata.get("uuid"),
            "timestamp": metadata.get("timestamp"),
            "role": role,
            "content": content,
            "intent_changed": metadata.get("intent_changed", False),
        }
        messages_to_return.append(message_entry)

    # Prepare the response
    response = {
        "user_id": user_id,
        "messages": messages_to_return,
        "llm_response": llm_response,
    }

    return jsonify(response), 200
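
The handler above calls a forward_to_llm helper that is not defined in this example. A minimal sketch of what it might look like, assuming an OpenAI-compatible upstream reached via the openai Python client (the model name is a placeholder):

from langchain.schema import HumanMessage
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def forward_to_llm(messages):
    """Convert LangChain messages to chat format and call the upstream LLM."""
    chat_messages = [
        {
            "role": "user" if isinstance(m, HumanMessage) else "assistant",
            "content": m.content,
        }
        for m in messages
    ]
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=chat_messages,
    )
    return completion.choices[0].message.content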

Note

Arch is (mostly) stateless so that it can scale in an embarrassingly parallel fashion. So, while Arch offers intent-drift detection, you still have to maintain conversational state with intent drift as metadata. The following code snippets show how easily you can build and enrich conversational history with LangChain (in Python), so that you can use the most relevant prompts for your retrieval and for prompting upstream LLMs.

Step 1: Define ConversationBufferMemory

from flask import Flask, request, jsonify
from datetime import datetime
import uuid
from langchain.memory import ConversationBufferMemory
from langchain.schema import AIMessage, HumanMessage

app = Flask(__name__)

# Global dictionary to keep track of user memories
user_memories = {}


def get_user_conversation(user_id):
    """
    Retrieve the user's conversation memory using LangChain.
    If the user does not exist, initialize their conversation memory.
    """
    if user_id not in user_memories:
        user_memories[user_id] = ConversationBufferMemory(return_messages=True)
    return user_memories[user_id]

Step 2: Update ConversationBufferMemory with Intents

def update_user_conversation(user_id, client_messages, intent_changed):
    """
    Update the user's conversation memory with new messages using LangChain.
    Each message is augmented with a UUID, timestamp, and intent change marker.
    Only new messages are added to avoid duplication.
    """
    memory = get_user_conversation(user_id)
    stored_messages = memory.chat_memory.messages

    # Determine the number of stored messages
    num_stored_messages = len(stored_messages)
    new_messages = client_messages[num_stored_messages:]

    # Process each new message
    for index, message in enumerate(new_messages):
        role = message.get("role")
        content = message.get("content")
        metadata = {
            "uuid": str(uuid.uuid4()),
            "timestamp": datetime.utcnow().isoformat(),
            "intent_changed": False,  # Default value
        }

        # Mark the intent change on the last message if detected
        if intent_changed and index == len(new_messages) - 1:
            metadata["intent_changed"] = True

        # Create a new message with metadata
        if role == "user":
            memory.chat_memory.add_message(
                HumanMessage(content=content, additional_kwargs={"metadata": metadata})
            )
        elif role == "assistant":
            memory.chat_memory.add_message(
                AIMessage(content=content, additional_kwargs={"metadata": metadata})
            )
        else:
            # Handle other roles if necessary
            pass

    return memory
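
For illustration, here is one way the helper might be invoked; the message dicts mirror the messages array that the /process_rag endpoint receives, and the user ID and contents are hypothetical:

# Hypothetical invocation; message dicts mirror the request body of /process_rag.
sample_messages = [
    {"role": "user", "content": "Show me stats for device dev-001."},
    {"role": "assistant", "content": "Here are the stats for dev-001..."},
    {"role": "user", "content": "Actually, reboot it instead."},  # new intent
]
memory = update_user_conversation("user-123", sample_messages, intent_changed=True)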

Step 3: Get Messages Based on Latest Drift

def get_messages_since_last_intent(messages):
    """
    Retrieve messages from the last intent change onwards using LangChain.
    """
    messages_since_intent = []
    for message in reversed(messages):
        # Insert message at the beginning to maintain correct order
        messages_since_intent.insert(0, message)
        metadata = message.additional_kwargs.get("metadata", {})
        # Stop once we reach the message where the intent changed
        if metadata.get("intent_changed", False):
            break

    return messages_since_intent

You can use the last set of messages that match an intent to prompt an LLM, combine them with a vector DB for improved retrieval, and more. With Arch and a few lines of code, you can improve retrieval accuracy, lower overall token cost, and dramatically improve the speed of responses back to users.
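
As a concrete sketch of the retrieval step, you could embed only the prompts accumulated since the last intent change and query a vector store with them. The snippet below assumes a local chromadb collection named docs that already holds your document chunks; the client and collection are illustrative, not part of Arch:

import chromadb
from langchain.schema import HumanMessage

# Assumption: a local Chroma collection named "docs" already holds your chunks.
chroma_client = chromadb.Client()
collection = chroma_client.get_or_create_collection("docs")


def retrieve_context(messages_since_intent, k=3):
    """Query the vector store with only the prompts from the active intent."""
    query_text = " ".join(
        m.content for m in messages_since_intent if isinstance(m, HumanMessage)
    )
    results = collection.query(query_texts=[query_text], n_results=k)
    # Flatten the matched chunks into a single context string for the LLM prompt
    return "\n".join(results["documents"][0])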