Quickstart
Follow this guide to learn how to quickly set up Arch and integrate it into your generative AI applications.
Prerequisites
Before you begin, ensure you have the following:
- Docker installed on your system
- Python installed on your system
- API keys for LLM providers (if using external LLMs)
The fastest way to get started with Arch is to use the katanemo/archgw pre-built binaries. You can also build it from source.
Step 1: Install Arch
Arch’s CLI allows you to manage and interact with the Arch gateway efficiently. To install the CLI, simply run the following command:
$ pip install archgw
This will install the archgw command-line tool globally on your system.
Tip
We recommend that developers create a new Python virtual environment to isolate dependencies before installing Arch. This ensures that archgw and its dependencies do not interfere with other packages on your system.
To create and activate a virtual environment, you can run the following commands:
$ python -m venv venv
$ source venv/bin/activate # On Windows, use: venv\Scripts\activate
$ pip install archgw
Step 2: Configure Arch
Arch operates based on a configuration file where you can define LLM providers, prompt targets, guardrails, and more. Below is an example configuration to get you started, including:
- endpoints: Specifies where Arch listens for incoming prompts.
- system_prompts: Defines predefined prompts to set the context for interactions.
- llm_providers: Lists the LLM providers Arch can route prompts to.
- prompt_guards: Sets up rules to detect and reject undesirable prompts.
- prompt_targets: Defines endpoints that handle specific types of prompts.
- error_target: Specifies where to route errors for handling.
version: v0.1
listen:
  address: 0.0.0.0 # or 127.0.0.1
  port: 10000
  # Defines how Arch should parse the content from application/json or text/plain Content-Type in the http request
  message_format: huggingface

# Centralized way to manage LLMs: keys, retry logic, failover, and limits
llm_providers:
  - name: OpenAI
    provider: openai
    access_key: $OPENAI_API_KEY
    model: gpt-4o
    default: true
    stream: true

# default system prompt used by all prompt targets
system_prompt: You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.

prompt_targets:
  - name: reboot_devices
    description: Reboot specific devices or device groups
    path: /agent/device_reboot
    parameters:
      - name: device_ids
        type: list
        description: A list of device identifiers (IDs) to reboot.
        required: false
      - name: device_group
        type: str
        description: The name of the device group to reboot
        required: false

# Arch creates a round-robin load balancing between different endpoints, managed via the cluster subsystem.
endpoints:
  app_server:
    # value could be an ip address or a hostname with port
    # this could also be a list of endpoints for load balancing
    # for example endpoint: [ ip1:port, ip2:port ]
    endpoint: 127.0.0.1:80
    # max time to wait for a connection to be established
    connect_timeout: 0.005s
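The reboot_devices prompt target above forwards structured parameters to the application server at 127.0.0.1:80. The handler below is a hypothetical sketch of what that app server could look like, not part of Arch itself; the function name and response shape are illustrative assumptions, and only the path and parameter names come from the configuration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def reboot_devices(device_ids=None, device_group=None):
    """Illustrative handler: pretend to reboot and return a summary
    the gateway can pass back to the LLM."""
    if device_ids:
        return {"status": "ok", "rebooted": device_ids}
    if device_group:
        return {"status": "ok", "rebooted_group": device_group}
    return {"status": "error", "message": "device_ids or device_group is required"}

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Only serve the path declared in the prompt target above
        if self.path != "/agent/device_reboot":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(reboot_devices(
            device_ids=params.get("device_ids"),
            device_group=params.get("device_group"),
        )).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve on the endpoint configured above, run:
# HTTPServer(("127.0.0.1", 80), AgentHandler).serve_forever()
```

Any web framework works here; the only contract is that the server accepts the parameters defined under prompt_targets at the configured path.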
Step 3: Start Arch Gateway
$ archgw up [path_to_config]
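Once the gateway is running, your application sends prompts to the listener address from the configuration (port 10000 above). The snippet below is a sketch that assumes an OpenAI-style chat-completions payload and path; the exact route and schema depend on your Arch release, so verify them against the docs:

```python
import json
import urllib.request

# Build an OpenAI-style chat request. The path below is an assumption;
# check the route exposed by your Arch release.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Reboot devices d101 and d102"}],
}
req = urllib.request.Request(
    "http://127.0.0.1:10000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With the gateway running, send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Arch inspects the prompt, applies any guardrails, and either routes it to the default LLM provider or dispatches it to a matching prompt target such as reboot_devices.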
For detailed usage, please refer to the archgw CLI documentation <https://github.com/katanemo/arch/blob/main/arch/tools/README.md#setup-instructionsuser-archgw-cli>.
Next Steps
Congratulations! You’ve successfully set up Arch and made your first prompt-based request. To further enhance your GenAI applications, explore the following resources:
Full Documentation: Comprehensive guides and references.
GitHub Repository: Access the source code, contribute, and track updates.
Support: Get help and connect with the Arch community.
With Arch, building scalable, fast, and personalized GenAI applications has never been easier. Dive deeper into Arch’s capabilities and start creating innovative AI-driven experiences today!