TrueFoundry’s LLM Playground is your go-to space for experimenting, testing, and perfecting prompts for your AI application. Here, you can explore different models, test various variables, review completions, and fine-tune your prompt engineering strategy—all before deploying to production. It’s the ideal environment to ensure your prompts are optimized and ready for real-world use.
Writing prompts in the Playground is simple. You can create a variable by writing {{variable_name}} as part of your messages. Once the variable is added, you can provide values, which will be reflected in the chat section as shown below:
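Conceptually, filling in a {{variable_name}} placeholder works like a simple template substitution. The sketch below is illustrative only (the function and variable names are made up for this example), not the Playground's internal implementation:

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{variable_name}} placeholder with its provided value."""
    def substitute(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"No value provided for variable '{name}'")
        return str(variables[name])
    # Match {{ name }} with optional whitespace inside the braces
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

template = "Summarize the following {{document_type}} in {{word_limit}} words."
print(render_prompt(template, {"document_type": "article", "word_limit": 50}))
# Summarize the following article in 50 words.
```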
Some models support function calling, enabling the AI to request specific information or perform actions. The Playground simplifies experimentation with these features.
Although the gateway does not directly execute calls to external tools, it allows you to describe the tool and simulate the call within the response. This simulation provides a detailed representation of the request and expected response, allowing developers to understand how the language model would interact with external systems.
To define functions that the model can call, simply click the “Add Tool” button and follow the steps as shown below:
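A tool definition supplies the model with a name, a description, and a JSON Schema for the arguments. The exact fields the "Add Tool" form collects may differ; the example below uses the common OpenAI-style shape, and the function name and parameters are purely illustrative:

```python
import json

# An OpenAI-style tool definition. The "get_weather" function and its
# parameters are hypothetical examples, not a TrueFoundry API.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

print(json.dumps(get_weather_tool, indent=2))
```

With a definition like this attached, the model can respond with a simulated call (the function name plus JSON arguments) instead of plain text, which is what the Playground renders for inspection.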
Once you are happy with the prompt, click the “Save Template” button on the screen. All prompt templates are stored in ML Repos, which act as storage units on TrueFoundry.
To update an existing template, follow the same steps.
Whenever you modify a prompt template, TrueFoundry automatically tracks the change and saves it as a new version, so you can review earlier versions and roll back if needed.
To load a version of an existing prompt template, visit the Playground and follow these steps:
TrueFoundry automatically generates a code snippet for using a specific version of your prompt template. Simply visit the prompt details page and switch to the “Use in Code” tab to copy the snippet.
You can fetch the latest version of a prompt by using the latest tag instead of a version number.
Use after careful consideration
Generally, we do not recommend fetching with the latest tag in production. Your prompt might change significantly between versions, which can affect your downstream use case. Pinning to exact versions ensures predictability and reproducibility.