TrueFoundry’s LLM Playground is your space for experimenting, testing, and perfecting prompts before deploying them to production. Explore different models, test parameters, review completions, and fine-tune your prompt engineering strategy in a safe environment.

Creating Your First Prompt

Creating prompts in the Playground is straightforward. Follow these steps:
  1. Choose a Model from the dropdown menu
  2. Add System Message to define your model’s behavior and role
  3. Add User Message with your query or input
  4. Configure Parameters like creativity, response length, guardrails, or structured output
  5. Hit Run to test your prompt and see the response
  6. Save Your Prompt to capture all configurations as a reusable template

Saving Your Prompt

When you’re satisfied with the results, click the Save button. Choose:
  • ML Repo: Select where to store your prompt (ML Repos act as organized storage units)
  • Name: Give your prompt a clear, descriptive name
  • Commit Message: Add optional notes about this version

Working with Existing Prompts

Loading Saved Prompts

Instead of starting from scratch, you can load existing prompts that you have access to. This allows you to:
  • Build on previous work
  • Collaborate with team members
  • Continue from where you left off
When you load a prompt, all configurations are restored exactly as saved: model parameters, guardrails, structured output settings, variables, and routing configurations.

Understanding Prompt Versions

TrueFoundry automatically tracks every change you make to a prompt template. This versioning system lets you:
  1. Experiment freely - try different variations without losing what works
  2. Maintain complete history - see how your prompts evolved over time
  3. Collaborate safely - work with others without conflicts

Finding Your Prompts

Access all your prompts through the Prompt Registry, where you can view any prompt you have read or write permissions for. Click on any prompt to see all its versions and configuration details. You can load any version directly into the Playground for further editing.

Advanced Prompt Configuration

Enhance your prompts with powerful configuration options that make them more effective, safe, and intelligent.
Guardrails

Guardrails help ensure your model responses stay appropriate and on-topic. Simply select the guardrails you want from the Playground interface. Learn more: Guardrails and Security Overview
Routing Config

Routing Config lets you apply routing policies at the gateway layer, enabling load balancing, fallbacks, and retries across models. Learn more: Routing Configuration Overview
Structured Output

Structured Output returns responses in specific formats like JSON or custom schemas. You can:
  • Describe the format you want and let the agent create the schema
  • Paste your own schema for consistent, structured responses
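As a sketch of the second option, a pasted JSON schema for an OpenAI-compatible `response_format` parameter might look like this (the schema fields and name here are illustrative assumptions, not from TrueFoundry's docs):

```python
# Hypothetical schema for a support-ticket summary; adapt the fields to your use case.
ticket_schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "priority": {"type": "string", "enum": ["low", "medium", "high"]},
    },
    "required": ["summary", "priority"],
}

# With an OpenAI-compatible client, a schema is typically wrapped like this and
# passed as the response_format keyword of chat.completions.create:
request_kwargs = {
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "ticket_summary", "schema": ticket_schema},
    }
}
```

The model's reply is then constrained to a JSON object matching `ticket_schema`, so downstream code can parse it without defensive string handling.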
MCP Servers

MCP Servers extend your AI's capabilities beyond text generation, giving it access to external tools, databases, and services for more powerful workflows.
Input Variables

Create reusable prompts with variables like {customer_name} or {product_type} that change based on context. This makes your prompts:
  • Dynamic: Adapt to different scenarios
  • Scalable: Handle multiple use cases with one template
  • Maintainable: Update once, apply everywhere
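Rendering such a template client-side is plain string substitution. A minimal sketch (the template text and variable values are made up for illustration):

```python
# Hypothetical prompt template using the two example variables from above.
template = (
    "You are a support assistant for {product_type}. "
    "Greet the customer by name: {customer_name}."
)

# Variable values supplied at request time.
variables = {"customer_name": "Ada", "product_type": "TrueFoundry AI Gateway"}

# str.format resolves each {placeholder} from the matching keyword argument.
rendered = template.format(**variables)
print(rendered)
```

The same template can now serve every customer and product by swapping the `variables` dictionary, which is what makes one template cover multiple use cases.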

Your Complete Prompt Template

After adding configurations, your prompt becomes a comprehensive template with all your settings.

Using Prompts in Your Code

TrueFoundry automatically generates code snippets for each prompt version. Access these by visiting the prompt details page and switching to the Use in Code tab. You have two options for using your prompts:
Recommended: Use prompts via the TrueFoundry AI Gateway by passing the prompt version FQN in the request body. The Gateway handles rendering and execution automatically.
Alternative: Fetch the template and render it client-side before sending it to the model.
from openai import OpenAI

client = OpenAI(
    api_key="your-tfy-api-key",
    base_url="https://{controlPlaneURL}/api/llm"
)

stream = client.chat.completions.create(
    messages=[],  # messages are supplied by the prompt template referenced below
    model="openai-main/gpt-4o",
    stream=True,
    extra_headers={
        "X-TFY-METADATA": '{"your_custom_key":"your_custom_value"}',
        "X-TFY-LOGGING-CONFIG": '{"enabled": true}',
    },
    extra_body={
        "prompt_version_fqn": "chat_prompt:truefoundry/default/my-second-prompt:1"
    },
)

for chunk in stream:
    if (
        chunk.choices
        and len(chunk.choices) > 0
        and chunk.choices[0].delta.content is not None
    ):
        print(chunk.choices[0].delta.content, end="", flush=True)
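For the alternative path, fetching the template and rendering it client-side can be sketched as follows. The template strings below stand in for whatever you retrieve from the Prompt Registry (the fetch call itself is omitted, and the variable names are illustrative):

```python
# Stand-ins for templates fetched from the Prompt Registry (assumption).
system_template = "You are a concise assistant for {product_type}."
user_template = "Summarize the issue reported by {customer_name}."

# Variable values supplied at request time.
variables = {"product_type": "billing", "customer_name": "Ada"}

# Render the templates into a normal chat-completions messages list.
messages = [
    {"role": "system", "content": system_template.format(**variables)},
    {"role": "user", "content": user_template.format(**variables)},
]

# The rendered messages are then sent like any ordinary request, e.g.:
# client.chat.completions.create(model="openai-main/gpt-4o", messages=messages)
```

The trade-off is that your application now owns rendering and versioning logic, whereas the FQN approach above delegates both to the Gateway.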
Your prompt templates are now ready for production use with all the configurations, safety measures, and optimizations you’ve built in the Playground.