Learn how to use the TrueFoundry AI Gateway Playground to experiment with 1000+ AI models, configure model settings and guardrails, use MCP servers, and create reusable prompts.
To open the Playground, navigate to the AI Gateway section in the main menu and select Playground from the dropdown options.

Configuring Model Settings

The following settings are available:
Parameter | Description |
---|---|
Temperature | Controls randomness in responses |
Response Format | Choose between streaming or complete responses |
Maximum Tokens | Sets the length limit for model responses |
Stop Sequence | Defines where the model should stop generating |
Request Logging | Enables or disables logging of requests |
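These settings map onto standard OpenAI-style request parameters when you call the same models through the gateway from code. Below is a minimal sketch, assuming an OpenAI-compatible endpoint; the base URL, model ID, and environment variable name are placeholders, not values from this guide.

```python
# Minimal sketch: the base URL, model ID, and env var name are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["LLM_GATEWAY_API_KEY"],            # placeholder env var for your gateway token
    base_url="https://your-gateway.example.com/api/llm",  # placeholder OpenAI-compatible gateway endpoint
)

response = client.chat.completions.create(
    model="openai-main/gpt-4o",  # placeholder model ID as it appears in the model dropdown
    messages=[{"role": "user", "content": "Summarize the latency breakdown feature in one sentence."}],
    temperature=0.2,   # Temperature: controls randomness in responses
    max_tokens=256,    # Maximum Tokens: length limit for the response
    stop=["\n\n"],     # Stop Sequence: where the model should stop generating
    stream=False,      # Response Format: complete response rather than streaming
)
print(response.choices[0].message.content)
```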

Working with Models

Select a model from the model dropdown menu in the left sidebar. Models are grouped by type (chat, embedding, rerank, image). You can filter models based on these categories or search for specific model names.

Click the Run button to send your prompt to the model. To continue the conversation, type a follow-up message and click Run again. The Playground maintains the conversation context between turns, allowing for natural dialogue.
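When you call the gateway from code rather than the Playground, the same multi-turn behaviour comes from resending the earlier messages with each request. A minimal sketch, reusing the hypothetical client and placeholder model ID from the previous example:

```python
# Context is carried by resending prior turns; `client` and the model ID are
# the same placeholders defined in the earlier sketch.
history = [{"role": "user", "content": "What is a rerank model used for?"}]

first = client.chat.completions.create(model="openai-main/gpt-4o", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# The follow-up question only makes sense because the earlier turns are included.
history.append({"role": "user", "content": "How does that differ from an embedding model?"})
second = client.chat.completions.create(model="openai-main/gpt-4o", messages=history)
print(second.choices[0].message.content)
```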

Latency Breakdown

The Playground breaks down the latency of each request into the following stages:
Stage | Description | Example Time |
---|---|---|
Frontend to Gateway | Time for request to travel from browser to AI Gateway | 160 ms |
Gateway Processing | Time spent on rate limiting, budget checks, load balancing, and guardrails | 1.9 ms |
Gateway to Model | Time for request to reach the AI model provider | 347 ms |
Model Processing | Time taken by the model to generate a response | 375 ms |
You can inspect this breakdown in the `server-timing` header in your browser's developer tools.
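If the gateway also includes this header on API responses, you could read it from code as well. A sketch using the OpenAI SDK's raw-response interface, with the same placeholder client and model ID as above; the exact fields in the header depend on what the gateway returns:

```python
# Read the Server-Timing header alongside the parsed completion.
# `client` and the model ID are the same placeholders as in the earlier sketches.
raw = client.chat.completions.with_raw_response.create(
    model="openai-main/gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
)
print(raw.headers.get("server-timing"))  # raw Server-Timing header, if present
completion = raw.parse()                 # the usual parsed ChatCompletion object
print(completion.choices[0].message.content)
```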

Accessing Chat History

To revisit a previous conversation, click the Recent button in the top bar.

Code Integration
Click the Code button located in the top bar to generate integration code for your current configuration. Supported formats include OpenAI SDK, Langchain, Langgraph, Google ADK, Stream API, REST API, Go-OpenAI, Rust-OpenAI, cURL, NodeJS, llamaindex, and Langchain4j.
The generated code includes your current model selection and parameter settings, making it ready to use in your application.
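The exact snippet depends on the format you pick and on your current settings. As a rough illustration of the OpenAI SDK style with streaming enabled, here is a sketch that reuses the placeholder client and model ID from earlier; it is not the literal output of the Code button:

```python
# Streaming variant: tokens arrive incrementally instead of as one complete response.
# `client` and the model ID are the same placeholders as in the earlier sketches.
stream = client.chat.completions.create(
    model="openai-main/gpt-4o",
    messages=[{"role": "user", "content": "Write one sentence about AI gateways."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```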

Prompt Management

Click the Your Prompts button in the top bar to view the prompts you have saved. To save the current prompt, click the Save Prompt button, then click Save to store your prompt.

MCP Server Integration

Select MCP Server in the left bar of the Playground to connect MCP servers and use their tools in your conversation. For example, you could send a prompt like "Clear my calendar for tomorrow" and let the model act on it through the connected server's tools.

Configuring Guardrails

Select Input or Output guardrails from the left bar in the Playground to apply them to your prompts and model responses.