Chat Completions
The gateway provides a universal API for all supported models via the standard OpenAI /chat/completions endpoint.
Universal OpenAI Compatible API
TrueFoundry AI Gateway allows you to use any chat-based LLM via the standard OpenAI /chat/completions endpoint. You can use the standard OpenAI client to send requests to the gateway. Here is a sample code snippet:
You will need to configure the following:
- base_url: The base URL of the TrueFoundry dashboard.
- api_key: The API key, which can be generated from Personal Access Tokens.
- model: The TrueFoundry model ID, in the format provider_account/model_name. You can find it in the TrueFoundry LLM playground UI.
Sending a System Prompt
You can include a system prompt to set the behavior and context for the model. Here’s how to do it:
The system message helps guide the model’s responses and can be used to set specific instructions, tone, or expertise areas.
Multimodal Inputs
Truefoundry AI Gateway supports various types of multimodal inputs, allowing you to work with different data formats.
Images
You can send images as part of your chat completion requests, either as a URL or as a base64-encoded image.
Send an image URL to the model:
Send a base64-encoded image to the model:
Audio
For audio inputs, you can send audio files in supported formats (MP3, WAV, etc.). Make sure the model supports audio input; otherwise the request will fail. Audio inputs in chat completions are currently supported for Google Gemini models.
Using an audio input URL:
Using a local audio file, base64 encoded:
Video
Video processing is natively supported for Google Gemini models. For other models, you can extract frames from the video and send them as images.
Here is how to send a video to the model:
Send a video URL to the model:
Send a base64-encoded video to the model (make sure the video size is within the provider's limits):
Parameters Supported
The chat completions API supports all OpenAI-compatible parameters.
Function and Tool Calling
You can define functions that the model can call during the conversation. Here’s how to implement function calling:
The model can then call these functions when appropriate, and you can handle the function calls in your application logic. This enables the model to perform specific actions or retrieve information from external sources.