Dify is an open-source platform for building AI applications with built-in LLMOps, workflow automation, and agent capabilities. TrueFoundry integrates seamlessly with Dify, providing enterprise-grade AI features including cost tracking, security guardrails, and access controls. TrueFoundry's AI Gateway routes all LLM calls through a single control point, ensuring your AI applications are secure, compliant, and cost-effective.

Prerequisites

Before integrating Dify with TrueFoundry, ensure you have the following:
  1. Authentication Token: Create a Personal Access Token in TrueFoundry by following the instructions in Generating Tokens. This token will authenticate your Dify applications to the TrueFoundry Gateway.
  2. Gateway Base URL: Locate your TrueFoundry Gateway base URL, which follows the format <control plane url>/api/llm. The control plane URL is where your TrueFoundry dashboard is hosted (e.g., https://company.truefoundry.cloud/api/llm).
  3. Dify Installation: Set up Dify using either the cloud version or self-hosted deployment with Docker.
  4. Model Access: Ensure you have access to the AI models you want to use through TrueFoundry’s model catalog.
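The Gateway base URL format from prerequisite 2 can be sketched as a small helper. This is a minimal illustration, not a TrueFoundry utility; the hostname is a placeholder, so substitute your own control plane URL.

```python
def gateway_base_url(control_plane_url: str) -> str:
    """Append the /api/llm path that the TrueFoundry Gateway base URL uses."""
    return control_plane_url.rstrip("/") + "/api/llm"

# "company.truefoundry.cloud" is a placeholder control plane host.
print(gateway_base_url("https://company.truefoundry.cloud"))
# https://company.truefoundry.cloud/api/llm
```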

Integration Steps

This guide assumes you have Dify installed and running, and have obtained your TrueFoundry AI Gateway base URL and authentication token.

Step 1: Generate Your TrueFoundry Access Token

  1. Navigate to your TrueFoundry dashboard and go to Access Management.
  2. Click New Personal Access Token to create a new token.
  3. Copy and securely store your Personal Access Token; you'll need it when configuring Dify.

Step 2: Access Dify Model Provider Settings

  1. Log into your Dify workspace (cloud or self-hosted).
  2. Navigate to Settings and go to Model Provider.

Step 3: Install OpenAI-API-Compatible Provider

  1. In the Model Provider section, look for OpenAI-API-compatible and click Install.
  2. Configure the provider with your TrueFoundry details, filling in the following fields:
  • Model Name: Enter your TrueFoundry model ID (e.g., openai-main/gpt-4o)
  • Model display name: Enter a display name (e.g., Gpt-4o)
  • API Key: Enter your TrueFoundry Personal Access Token
  • API endpoint URL: Enter your TrueFoundry Gateway base URL followed by the OpenAI-compatible path (e.g., https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai)
  • Model name for API endpoint: Enter the model name the endpoint expects (e.g., chatgpt4.0)
  • Completion mode: Select Chat
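Because the provider is OpenAI-API-compatible, the request Dify sends can be reproduced with any OpenAI-style client. The sketch below only assembles the request; the token, endpoint URL, and model ID are placeholder values taken from the examples above, so replace them with your own before sending anything.

```python
import json

# Placeholder values; substitute your own token, endpoint URL, and model ID.
API_KEY = "tfy-..."  # TrueFoundry Personal Access Token
BASE_URL = "https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai"
MODEL_ID = "openai-main/gpt-4o"  # model ID from the TrueFoundry catalog

# Standard OpenAI-compatible chat-completions request shape.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": MODEL_ID,
    "messages": [{"role": "user", "content": "Hello from Dify"}],
}
url = f"{BASE_URL}/chat/completions"

print(url)
print(json.dumps(payload, indent=2))
```

Any HTTP client (or the official OpenAI SDK pointed at `BASE_URL`) can send this request to verify the Gateway outside of Dify.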

Step 4: Get Configuration Details from TrueFoundry

  1. Navigate to your TrueFoundry AI Gateway to find the correct model identifier.
  2. Copy the model ID (e.g., openai-main/gpt-4o) and paste it into Dify's Model Name field.
  3. Get the API endpoint URL from the unified code snippet provided by TrueFoundry.

Step 5: Save and Test Your Configuration

  1. Click Save to apply your configuration in Dify.
  2. Create a simple LLM application or workflow and run a test prompt to verify that Dify is communicating with TrueFoundry's AI Gateway.
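Since the Gateway returns responses in the OpenAI-compatible schema, a successful test call can be checked with a small parser. The stubbed response below is illustrative only; a real response comes from the chat-completions endpoint.

```python
def first_reply(resp: dict) -> str:
    """Extract the assistant message from an OpenAI-style chat response."""
    return resp["choices"][0]["message"]["content"]

# Stub mirroring the OpenAI-compatible response shape; a real call to the
# Gateway's chat-completions endpoint returns this same structure.
sample = {
    "choices": [{"message": {"role": "assistant", "content": "pong"}}]
}
assert first_reply(sample) == "pong"
```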
Your Dify workspace is now integrated with TrueFoundry’s AI Gateway and ready for building AI applications, workflows, and agents.