This guide explains how to use the OpenAI client to interact with TrueFoundry’s responses endpoint for making inference requests.

Authentication

You’ll need a TrueFoundry API key to authenticate your requests. You can find authentication details here.

You’ll also need to set the x-tfy-provider-name header to the name of the provider integration you’re using.

For example, if you’re using an OpenAI provider integration named my-openai-provider, you’ll set the x-tfy-provider-name header to my-openai-provider.

from openai import OpenAI

client = OpenAI(
    base_url="<tfy-control-plane-url>/api/llm",
    api_key="your_truefoundry_api_key",
    default_headers={"x-tfy-provider-name": "my-openai-provider"}
)

Text Completion

To get a text completion response:

response = client.responses.create(
    model="my-openai-provider/gpt-3-5-turbo",
    input=[{"role": "user", "content": "Your prompt here"}]
)
response_id = response.id
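The response object also carries the generated text. Recent versions of the openai Python SDK expose it through the `output_text` convenience property; the helper below is a sketch (the name `extract_output_text` is our own) that prefers that property and falls back to walking `response.output` when it is missing:

```python
def extract_output_text(response) -> str:
    """Return the model's text output from a Responses API object.

    Prefers the SDK's `output_text` convenience property; falls back to
    concatenating `output_text` parts from `response.output`.
    """
    text = getattr(response, "output_text", None)
    if text:
        return text
    parts = []
    for item in getattr(response, "output", []) or []:
        for part in getattr(item, "content", []) or []:
            if getattr(part, "type", None) == "output_text":
                parts.append(part.text)
    return "".join(parts)

# Usage, after the create() call above:
# print(extract_output_text(response))
```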

Image Inputs

For tasks involving images:

response = client.responses.create(
    model="my-openai-provider/gpt-4o",
    input=[
        {
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "Your prompt here"
                },
                {
                    "type": "input_image",
                    "image_url": "your_image_url"
                }
            ]
        }
    ]
)
response_id = response.id
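If the image lives on disk rather than at a public URL, the `image_url` field also accepts a base64 data URL. A minimal sketch (the helper name `image_data_url` is our own):

```python
import base64


def image_data_url(path: str, mime: str = "image/png") -> str:
    """Encode a local image file as a data URL for the `image_url` field."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{b64}"

# Usage: pass the result as the value of "image_url" in the input above:
# {"type": "input_image", "image_url": image_data_url("photo.png")}
```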

Managing Responses

Retrieve Response

To fetch a response using its ID:

response = client.responses.retrieve(response_id=response_id)
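The retrieved object carries metadata alongside the model output. A small sketch that collects the common bookkeeping fields (field names assumed from the OpenAI SDK's Response model; `summarize_response` is our own helper name):

```python
def summarize_response(resp) -> dict:
    """Collect key metadata fields from a retrieved Responses API object."""
    return {
        "id": resp.id,
        "model": getattr(resp, "model", None),
        "status": getattr(resp, "status", None),  # e.g. "completed"
    }

# Usage, after the retrieve() call above:
# print(summarize_response(response))
```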

Delete Response

To delete a response:

delete_result = client.responses.delete(response_id=response_id)
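The delete call returns a small confirmation object rather than the response itself. A sketch of checking it (the `deleted` field name is assumed from the OpenAI SDK's deletion result; `confirm_deleted` is our own helper):

```python
def confirm_deleted(result) -> bool:
    """Return True if the delete call reported success.

    Assumes the OpenAI SDK's deletion result shape (`id`, `deleted`);
    adjust if your SDK version differs.
    """
    return bool(getattr(result, "deleted", False))

# Usage, after the delete() call above:
# if confirm_deleted(delete_result):
#     print("Response deleted")
```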