AnythingLLM is an all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more. TrueFoundry integrates seamlessly with AnythingLLM, providing enterprise-grade AI features including cost tracking, security guardrails, and access controls. TrueFoundry’s AI Gateway routes all of AnythingLLM’s LLM calls through a single, OpenAI-compatible endpoint, keeping your AI applications secure, compliant, and cost-effective.

Prerequisites

Before integrating AnythingLLM with TrueFoundry, ensure you have:
  1. TrueFoundry Account: Create a TrueFoundry account with at least one model provider configured, and generate a Personal Access Token by following the instructions in Generating Tokens
  2. AnythingLLM Installation: Set up AnythingLLM using either the Desktop application or Docker deployment

Integration Steps

This guide assumes you have AnythingLLM installed and running, and have obtained your TrueFoundry AI Gateway base URL and authentication token.

Step 1: Generate Your TrueFoundry Access Token

  1. Navigate to your TrueFoundry dashboard and go to Access Management.
  2. Click New Personal Access Token to create a new token.
  3. Copy and securely store your Personal Access Token; you’ll need it for the AnythingLLM configuration (you can sanity-check it with the sketch below).
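
Before moving on, you can optionally confirm the token works against the Gateway. This is a minimal sketch using the `openai` Python package, reusing the example base URL and model name shown in Step 3; substitute the values from your own TrueFoundry setup.

```python
from openai import OpenAI

# Example values from this guide -- replace the URL, token, and model name
# with the ones from your own TrueFoundry setup.
client = OpenAI(
    base_url="https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai",
    api_key="<your-truefoundry-personal-access-token>",
)

response = client.chat.completions.create(
    model="openai-main/gpt-4o",
    messages=[{"role": "user", "content": "Reply with 'ok' if you can read this."}],
)
print(response.choices[0].message.content)
```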

Step 2: Access AnythingLLM LLM Settings

  1. Launch your AnythingLLM application (Desktop or Docker).
  2. Navigate to Settings and go to LLM Preference.

Step 3: Configure Generic OpenAI Provider

  1. In the LLM provider search box, type “Generic OpenAI” and select it from the available options.
  2. Configure the TrueFoundry connection with the following settings (a request sketch mirroring these fields follows the list):
    • Base URL: Enter your TrueFoundry Gateway base URL (e.g., https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai)
    • API Key: Enter your TrueFoundry Personal Access Token
    • Chat Model Name: Enter the model name from the unified code snippet (e.g., openai-main/gpt-4o)
    • Token Context Window: Set based on your model’s limits (e.g., 16000, 128000)
    • Max Tokens: Configure according to your needs (e.g., 1024, 2048)
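
The Generic OpenAI provider speaks the standard OpenAI chat completions protocol, so you can exercise the same values outside AnythingLLM with a plain HTTP request. The sketch below uses the `requests` package and the standard `/chat/completions` path under the Base URL; it is an illustration of how the fields above map onto a request, not AnythingLLM’s exact internal call.

```python
import requests

BASE_URL = "https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai"  # Base URL field
API_KEY = "<your-truefoundry-personal-access-token>"                                 # API Key field

payload = {
    "model": "openai-main/gpt-4o",  # Chat Model Name field
    "max_tokens": 1024,             # Max Tokens field
    "messages": [{"role": "user", "content": "Hello from the TrueFoundry Gateway"}],
}

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```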

Step 4: Get Configuration from TrueFoundry

Get both the base URL and model name from the unified code snippet in our playground, and make sure you use the model name exactly as it is written there.


Copy the base URL and model name from the snippet and paste them into AnythingLLM’s Base URL and Chat Model Name fields.
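
If you want to double-check the model name before pasting it in, you can list the models visible through the Gateway. This sketch assumes the Gateway exposes the standard OpenAI-compatible `/models` endpoint; the playground’s unified code snippet remains the authoritative source for the exact model name.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai",
    api_key="<your-truefoundry-personal-access-token>",
)

# List the model IDs visible through the Gateway and check the one you copied.
model_ids = [m.id for m in client.models.list().data]
print("openai-main/gpt-4o" in model_ids)
print(f"{len(model_ids)} models available")
```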

Step 5: Test Your Integration

  1. Save your configuration in AnythingLLM.
  2. Create a new workspace or open an existing one to test the integration.
  3. Send a test message to verify that AnythingLLM is successfully communicating with TrueFoundry’s AI Gateway (if anything fails, the sketch below can help isolate whether the issue lies in AnythingLLM or at the Gateway).
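
If the test chat fails or stalls, it helps to rule out the Gateway by calling the same model directly, including streaming, which AnythingLLM’s chat UI typically relies on. A minimal streaming sketch with the `openai` package, reusing the example values from Step 3:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai",
    api_key="<your-truefoundry-personal-access-token>",
)

# Stream a short reply chunk by chunk to confirm the Gateway handles
# streaming chat completions end to end.
stream = client.chat.completions.create(
    model="openai-main/gpt-4o",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```
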
Your AnythingLLM application is now integrated with TrueFoundry’s AI Gateway and ready for AI chat, RAG, and agent operations.