TrueFoundry AI Gateway Playground provides a comprehensive environment to experiment with AI models, tune model configurations, set up safety guardrails, and create reusable prompts. This guide will help you get started quickly with all the essential features.

How to Access the Playground

To access the AI Gateway Playground:
  1. Log into your TrueFoundry dashboard
  2. Navigate to the AI Gateway section in the main menu
  3. Select Playground from the dropdown options
[Screenshot: TrueFoundry AI Gateway Playground navigation]

Playground Interface Overview

The Playground interface combines simplicity with powerful functionality, organized into three main sections:
  1. Top Bar: Contains Recent History, prompt management, and code generation options
  2. Left Sidebar: Access to model selection, guardrails, MCP servers, and configuration options
  3. Main Chat Area: Where you interact with the model and view responses

Key Components

  • Model Selection: Browse and select from over 1,000 models across 15+ providers
  • Interactive Chat Interface: Send prompts and view model responses in real-time
  • Configuration Controls: Adjust model parameters and settings
  • Guardrails: Configure safety measures for inputs and outputs
  • MCP Server Integration: Connect to MCP Servers for enhanced capabilities
  • Prompt Library: Save and load prompt templates with their configurations

Features

Configuring Model Parameters

Fine-tune your model’s behavior by adjusting the following parameters:
  • Temperature: Controls randomness in responses
  • Response Format: Choose between streaming or complete responses
  • Maximum Tokens: Sets the length limit for model responses
  • Stop Sequence: Defines where the model should stop generating
  • Request Logging: Enables or disables logging of requests
To access these settings, click the settings icon next to the model selection dropdown. Changes take effect immediately for your next interaction.
[Screenshot: Model configuration settings panel]
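These settings correspond to standard fields in OpenAI-compatible chat-completions requests (request logging is a gateway-side option with no standard request field, so it is omitted below). A minimal sketch of the same configuration expressed as a request payload; the model ID and values are illustrative placeholders, not TrueFoundry defaults:

```python
# Sketch: the Playground's parameter panel expressed as an OpenAI-compatible
# chat-completions payload. The model ID and all values are placeholders.
payload = {
    "model": "openai-main/gpt-4o",                     # hypothetical model ID
    "messages": [{"role": "user", "content": "Hi!"}],
    "temperature": 0.7,   # randomness: 0 = deterministic, higher = more varied
    "max_tokens": 512,    # length limit for the model's response
    "stop": ["###"],      # stop sequence: generation halts when this appears
    "stream": False,      # response format: complete (True for streaming)
}
```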

Selecting a Model

To select a model for your session:
  1. Open the model dropdown menu from the left sidebar
  2. Browse through available models or use the search function
  3. Select your desired model
Models are organized by type (chat, embedding, rerank, image). You can filter models by these categories or search for specific model names.
[Screenshot: Model selection interface]

Interacting with Models

Once you’ve selected a model, you can begin interacting with it:
  1. Type your prompt in the text input field
  2. Click the Run button to send your prompt to the model
  3. View the model’s response in the chat interface
For multi-turn conversations, simply continue typing in the input field and clicking Run. The Playground maintains the conversation context between turns, allowing for natural dialogue.
[Screenshot: Basic model interaction in the Playground]
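Behind the scenes, conversation context is typically maintained by resending the full message history with every request; the Playground does this for you. A minimal sketch of how the history accumulates (the message contents are made up):

```python
# Each turn appends to the message list; the entire history is sent with every
# request, which is how the model "remembers" earlier turns.
messages = [{"role": "user", "content": "What does an AI gateway do?"}]

# ...the assistant's reply is appended when it arrives...
messages.append({"role": "assistant",
                 "content": "It routes and governs requests to model providers."})

# ...and the next user turn is added on top of the full history.
messages.append({"role": "user", "content": "How does it apply guardrails?"})
```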

Request Latency Metrics

The Playground provides detailed performance metrics for each request, helping you understand where time is spent during model interactions:
  • Frontend to Gateway: Time for the request to travel from the browser to the AI Gateway (example: 127.9 ms)
  • Gateway Processing: Time spent on rate limiting, budget checks, load balancing, and guardrails (example: 2.9 ms)
  • Gateway to Model: Time for the request to reach the AI model provider (example: 468.2 ms)
  • Model Processing: Time taken by the model to generate a response (example: 467 ms)
This breakdown helps identify bottlenecks and optimize your AI implementation.
[Screenshot: Latency breakdown visualization]
For more detailed timing information, you can inspect the server-timing header in your browser’s developer tools.
[Screenshot: Server timing header in developer tools]
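Since server-timing is a standard HTTP header with comma-separated `name;dur=<milliseconds>` entries, it can also be read programmatically. A small sketch, assuming illustrative metric names (inspect your own responses for the names your deployment actually emits):

```python
# Parse a Server-Timing header value into {metric_name: duration_in_ms}.
def parse_server_timing(header: str) -> dict:
    timings = {}
    for entry in header.split(","):
        parts = [p.strip() for p in entry.split(";")]
        name = parts[0]
        for attr in parts[1:]:
            if attr.startswith("dur="):
                timings[name] = float(attr[4:])
    return timings

# Metric names below are made up for illustration.
sample = "gateway;dur=2.9, upstream;dur=468.2, model;dur=467"
print(parse_server_timing(sample))  # {'gateway': 2.9, 'upstream': 468.2, 'model': 467.0}
```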

Recent Chat History

Access your previous conversations easily:
  1. Click on the Recent button in the top bar
  2. Browse through your previous chat sessions
  3. Select any conversation to resume from where you left off
This feature maintains continuity between sessions and allows you to reference or build upon previous interactions with the model.
[Screenshot: Recent chat history interface]

Generating Code Snippets

The Playground makes it easy to integrate your configured model into your applications:
  1. Click the Code button located in the top bar
  2. Browse through the available code snippets for different libraries and frameworks
  3. Copy the generated code directly into your application
Available code snippets include: OpenAI SDK, Langchain, Langgraph, Google ADK, Stream API, REST API, Go-OpenAI, Rust-OpenAI, cURL, NodeJS, LlamaIndex, and Langchain4j.
The generated code includes your current model selection and parameter settings, making it ready to use in your application.
[Screenshot: Code button location in the Playground interface]
[Screenshot: Code snippet generation interface with language options]
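For a sense of what the generated snippets look like, here is the rough shape of a REST-style request built with Python's standard library. The gateway URL, model ID, and API key are placeholders (the Playground substitutes your real values); this sketch only constructs the request without sending it:

```python
import json
import urllib.request

# Placeholders: the Playground fills in your real gateway URL, key, and model.
request = urllib.request.Request(
    url="https://your-gateway-host/api/inference/openai/chat/completions",
    data=json.dumps({
        "model": "openai-main/gpt-4o",  # hypothetical model ID
        "messages": [{"role": "user", "content": "Hello!"}],
        "temperature": 0.7,
    }).encode("utf-8"),
    headers={"Authorization": "Bearer YOUR_API_KEY",
             "Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(request)  # uncomment to actually send
```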

Loading Saved Prompts

To use previously created prompts:
  1. Click the Your Prompts button in the top bar
  2. Browse through your saved prompts or prompts shared with you
  3. Select the desired prompt to load its complete configuration
  4. The prompt is now ready to use
[Screenshot: Interface for loading saved prompts]

Creating and Saving Prompts

When you’ve created a configuration you want to preserve:
  1. Set up your model, parameters, guardrails, and MCP servers as desired
  2. Click the Save Prompt button
  3. Choose the repository you want to save your prompt into
  4. Name your prompt and add a commit message
  5. Add optional tags to categorize your prompt
  6. Click Save to store your prompt
Saved prompts are versioned for tracking changes and can be used as templates for new projects. This feature is particularly useful for standardizing AI interactions across your organization.
[Screenshot: Interface for creating and saving new prompts]

MCP Servers

MCP (Model Context Protocol) servers allow your AI models to perform complex operations and interact with external systems.

Setting Up MCP Servers

To integrate MCP Servers with your Playground session:
  1. Click on MCP Server in the left bar of Playground
  2. Choose the MCP Servers and tools you want to use
  3. Connect with the MCP Server if required by entering credentials
  4. The MCP Server is now ready to use
Once connected, your model can perform operations like managing calendars, retrieving data from databases, or controlling external systems, all through natural language requests.
[Screenshot: Interface for adding MCP Servers to the Playground]

Using MCP Servers in Conversations

Once you’ve connected an MCP Server, you can use it in your conversations:
  1. Send a natural language request like "Clear my calendar for tomorrow"
  2. The model interprets the request and uses the MCP Server to access your calendar
  3. The operation is performed and confirmation is provided
[Screenshot: Example of calendar management using MCP Server integration]
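Conceptually, this works like tool dispatch: the model turns the natural language request into a structured call, the MCP Server executes it, and the result flows back into the conversation. The sketch below is purely illustrative (a stand-in registry, not the MCP wire protocol or TrueFoundry's implementation):

```python
# Hypothetical tool standing in for an MCP server capability.
def clear_calendar(date: str) -> str:
    return f"Cleared all events on {date}"

TOOLS = {"clear_calendar": clear_calendar}

# The model converts "Clear my calendar for tomorrow" into a structured call...
tool_call = {"name": "clear_calendar", "arguments": {"date": "2025-08-13"}}

# ...which is executed, and the result is returned for the model to confirm.
result = TOOLS[tool_call["name"]](**tool_call["arguments"])
print(result)  # Cleared all events on 2025-08-13
```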

Guardrails

Guardrails help control AI model behavior by filtering or modifying inputs and outputs.

Types and Modes of Guardrails

The Playground supports two types of guardrails:
  1. Input Guardrails: Applied to user prompts before they reach the model
  2. Output Guardrails: Applied to model responses before they’re shown to the user
Each guardrail operates in one of two modes:
  1. Validation Mode: Blocks requests or responses that fail to meet criteria
  2. Mutation Mode: Automatically modifies requests or responses to comply with requirements
Output guardrails only work for non-streaming requests. Input guardrails work on both streaming and non-streaming requests.
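To illustrate the two modes (a conceptual sketch only, not TrueFoundry's actual guardrail implementation), a validation-mode check rejects a failing request outright, while a mutation-mode check rewrites it to comply:

```python
import re

# Hypothetical banned-content pattern, for demonstration only.
BANNED = re.compile(r"\b(password|ssn)\b", re.IGNORECASE)

def validate(prompt: str) -> str:
    """Validation mode: block the request entirely if it fails the check."""
    if BANNED.search(prompt):
        raise ValueError("Input guardrail: request blocked")
    return prompt

def mutate(prompt: str) -> str:
    """Mutation mode: rewrite the request so it complies."""
    return BANNED.sub("[REDACTED]", prompt)

print(mutate("My ssn is 123-45-6789"))  # My [REDACTED] is 123-45-6789
```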

Setting Up Guardrails

To configure guardrails in the Playground:
  1. Select either Input or Output guardrails from the left bar in Playground
  2. Choose from the dropdown of available guardrails
  3. You can select multiple guardrails for each Input and Output
  4. Send a request to see the guardrails in action

Guardrail Examples

Input Validation Example

When an input guardrail detects prohibited content, it prevents the request from reaching the model and displays an error message:
[Screenshot: Example of input guardrail blocking inappropriate content]

Output Modification Example

Output guardrails can automatically transform model responses to meet specific requirements:
[Screenshot: Example of output guardrail modifying model response]