AnythingLLM is an all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more. TrueFoundry integrates seamlessly with AnythingLLM, providing enterprise-grade AI features including cost tracking, security guardrails, and access controls. TrueFoundry's AI Gateway routes all LLM calls through a single governed endpoint, keeping your AI applications secure, compliant, and cost-effective.

Prerequisites

Before integrating AnythingLLM with TrueFoundry, ensure you have the following:
  1. Authentication Token: Create a Personal Access Token in TrueFoundry by following the instructions in Generating Tokens. This token will authenticate your AnythingLLM application to the TrueFoundry Gateway.
  2. Gateway Base URL: Locate your TrueFoundry Gateway base URL, which follows the format <control plane url>/api/llm. The control plane URL is where your TrueFoundry dashboard is hosted (e.g., https://company.truefoundry.cloud, giving a base URL of https://company.truefoundry.cloud/api/llm).
  3. AnythingLLM Installation: Set up AnythingLLM using either the Desktop application or Docker deployment.
  4. Model Access: Ensure you have access to the AI models you want to use through TrueFoundry’s model catalog.
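The URL composition described in item 2 can be sketched in a few lines. The host `company.truefoundry.cloud` is a placeholder, and the `/api/inference/openai` suffix is the OpenAI-compatible path that appears later in this guide:

```python
# Sketch of how the Gateway URLs are composed (placeholder host).
CONTROL_PLANE_URL = "https://company.truefoundry.cloud"  # where your dashboard is hosted

# Gateway base URL follows the format <control plane url>/api/llm
gateway_base_url = f"{CONTROL_PLANE_URL}/api/llm"

# OpenAI-compatible endpoint used by AnythingLLM's Generic OpenAI provider
openai_compatible_url = f"{gateway_base_url}/api/inference/openai"

print(openai_compatible_url)
# → https://company.truefoundry.cloud/api/llm/api/inference/openai
```
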

Integration Steps

This guide assumes you have AnythingLLM installed and running, and have obtained your TrueFoundry AI Gateway base URL and authentication token.

Step 1: Generate Your TrueFoundry Access Token

  1. Navigate to your TrueFoundry dashboard and go to Access Management.
  2. Click New Personal Access Token to create a new token.
  3. Copy and securely store your Personal Access Token; you'll need it when configuring AnythingLLM.

Step 2: Access AnythingLLM LLM Settings

  1. Launch your AnythingLLM application (Desktop or Docker).
  2. Navigate to Settings and go to LLM Preference.

Step 3: Configure Generic OpenAI Provider

  1. In the LLM provider search box, type “Generic OpenAI” and select it from the available options.
  2. Configure the TrueFoundry connection with the following settings:
    • Base URL: Enter your TrueFoundry Gateway's OpenAI-compatible endpoint (e.g., https://company.truefoundry.cloud/api/llm/api/inference/openai)
    • API Key: Enter your TrueFoundry Personal Access Token
    • Chat Model Name: Enter your model ID from TrueFoundry (e.g., openai-main/gpt-4o)
    • Token Context Window: Set based on your model’s limits (e.g., 16000, 128000)
    • Max Tokens: Configure according to your needs (e.g., 1024, 2048)
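With these settings, AnythingLLM issues standard OpenAI-style chat-completions requests against the gateway. The sketch below shows what such a request body looks like; the model ID and token cap are the example values from this guide, not fixed defaults:

```python
import json

# Hedged sketch of the OpenAI-style request body AnythingLLM sends
# through the Generic OpenAI provider. Substitute your own model ID
# (Chat Model Name) and limits from TrueFoundry.
payload = {
    "model": "openai-main/gpt-4o",   # Chat Model Name field
    "max_tokens": 1024,              # Max Tokens field
    "messages": [
        {"role": "user", "content": "Hello from AnythingLLM"},
    ],
}

print(json.dumps(payload, indent=2))
```
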

Step 4: Get Model Information from TrueFoundry

  1. Navigate to your TrueFoundry AI Gateway to find the correct model identifier.
  2. Copy the model ID (e.g., openai-main/gpt-4o) and paste it into AnythingLLM's Chat Model Name field.
  3. Get the Base URL from the unified code snippet provided by TrueFoundry.

Step 5: Test Your Integration

  1. Save your configuration in AnythingLLM.
  2. Create a new workspace or open an existing one to test the integration.
  3. Send a test message to verify that AnythingLLM is successfully communicating with TrueFoundry's AI Gateway.

Your AnythingLLM application is now integrated with TrueFoundry's AI Gateway and ready for AI chat, RAG, and agent operations.
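If the test message fails, the gateway can be exercised independently of AnythingLLM with a raw OpenAI-compatible request. This is a sketch only: the URL, token, and model ID are placeholders to fill in with your own values before sending.

```python
import json
import urllib.request

# Placeholders -- substitute your own gateway URL, token, and model ID.
BASE_URL = "https://company.truefoundry.cloud/api/llm/api/inference/openai"
TOKEN = "<your-personal-access-token>"

body = json.dumps({
    "model": "openai-main/gpt-4o",
    "messages": [{"role": "user", "content": "ping"}],
}).encode()

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=body,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment with real credentials to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])

print(req.full_url)
```

A successful response here but a failure inside AnythingLLM usually points at the Base URL or API Key fields in the Generic OpenAI settings.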