Get consistent JSON outputs from chat completions API.
The chat completions API supports structured response formats, enabling you to receive consistent, predictable outputs in JSON format. This is useful for parsing responses programmatically.
JSON Mode
JSON mode ensures the model's output is valid JSON without enforcing a specific structure:
```python
from openai import OpenAI

client = OpenAI(
    api_key="your_truefoundry_api_key",
    base_url="<truefoundry-base-url>/api/llm/api/inference/openai"
)

response = client.chat.completions.create(
    model="openai-main/gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
        {"role": "user", "content": "Extract information about the 2020 World Series winner"}
    ],
    response_format={"type": "json_object"}
)

print(response.choices[0].message.content)
```
Output:
```json
{
  "team": "Los Angeles Dodgers",
  "year": 2020,
  "opponent": "Tampa Bay Rays",
  "games_played": 6,
  "series_result": "4-2"
}
```
When using a JSON schema with strict mode set to true, every property defined in the schema must be listed in the required array. If any property is defined but not marked as required, the API returns a 400 Bad Request error.
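As a sketch, a hand-written schema for the World Series payload above would need every property repeated in the required array when strict is true (the schema name and fields here are illustrative):

```python
# Illustrative strict-mode schema: every key under "properties" must also
# appear in "required"; leaving one out yields a 400 Bad Request.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "world_series_winner",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "team": {"type": "string"},
                "year": {"type": "integer"},
                "opponent": {"type": "string"},
            },
            # All three properties, not a subset:
            "required": ["team", "year", "opponent"],
            "additionalProperties": False,
        },
    },
}
```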
Using Pydantic Models
Pydantic provides automatic validation, serialization, and type hints for structured data:
```python
from openai import OpenAI
from pydantic import BaseModel, Field
from typing import List

client = OpenAI(
    api_key="your_truefoundry_api_key",
    base_url="<truefoundry-base-url>/api/llm/api/inference/openai"
)

# Define Pydantic model
class UserInfo(BaseModel):
    name: str = Field(description="Full name of the user")
    age: int = Field(ge=0, description="Age in years")
    occupation: str = Field(description="Job title or profession")
    location: str = Field(description="City or location")
    skills: List[str] = Field(description="List of professional skills")

    class Config:
        extra = "forbid"  # Prevent additional fields

response = client.chat.completions.create(
    model="openai-main/gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Extract user information and respond according to the provided schema."
        },
        {
            "role": "user",
            "content": "Hi, I'm Mike Chen, a 32-year-old software architect from Seattle. I specialize in cloud computing, microservices, and Kubernetes."
        }
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "user_info",
            "schema": UserInfo.model_json_schema(),
            "strict": True
        }
    }
)

# Parse and validate with Pydantic
user_data = UserInfo.model_validate_json(response.choices[0].message.content)
```
When using OpenAI models with Pydantic models and strict mode enabled, the Pydantic model must not contain optional fields. Pydantic omits any field that has a default value from the schema's required array, so the generated JSON schema would not list every property as required and the request would be rejected.
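You can verify this locally before sending a request. A minimal sketch (the Profile model here is hypothetical) shows that a field with a default drops out of the required array:

```python
from typing import Optional

from pydantic import BaseModel


class Profile(BaseModel):
    name: str
    # Has a default, so Pydantic does NOT mark it as required in the schema
    nickname: Optional[str] = None


schema = Profile.model_json_schema()
print(schema["required"])  # ['name'] — 'nickname' is missing
```

Because 'nickname' is absent from required while present in properties, this schema would fail strict-mode validation.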
Using OpenAI's Beta Parse API
The beta parse client provides the most streamlined approach for Pydantic integration:
```python
from openai import OpenAI
from pydantic import BaseModel, Field
from typing import List, Optional

class UserInfo(BaseModel):
    name: str = Field(description="Full name of the user")
    age: int = Field(ge=0, description="Age in years")
    occupation: str = Field(description="Job title or profession")
    location: Optional[str] = Field(None, description="City or location")
    skills: List[str] = Field(default=[], description="List of professional skills")

client = OpenAI(
    api_key="your_truefoundry_api_key",
    base_url="<truefoundry-base-url>/api/llm/api/inference/openai"
)

completion = client.beta.chat.completions.parse(
    model="openai-main/gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Extract user information from the provided text."
        },
        {
            "role": "user",
            "content": "Hello, I'm Alex Rodriguez, a 29-year-old product manager from Austin. I have experience in agile methodologies, data analysis, and team leadership."
        }
    ],
    response_format=UserInfo,
)

user_result = completion.choices[0].message.parsed
This approach allows for optional fields in your Pydantic model and provides a cleaner API for structured responses.