This guide demonstrates how to use the OpenTelemetry SDK to instrument OpenAI API calls and send traces to TrueFoundry’s OtelCollector.

In this example, we’ll show how to instrument a Python application that makes calls to OpenAI’s API using OpenTelemetry’s context managers.

Step 1: Install Dependencies

First, you need to install the following packages:

pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http python-dotenv openai
Step 2: Set Up Environment Variables

To enable tracing, you’ll need to configure a few environment variables in your application.

Before proceeding, make sure you’ve created a tracing project and generated an API token. If you haven’t done this yet, follow the instructions in Getting Started.

# Tracing configs
OTEL_EXPORTER_OTLP_ENDPOINT=<<control-plane-url>>/api/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer%20<<api-key>>,tfy-tracing-project=<<tracing-project-fqn>>"
OPENAI_API_KEY=<<your-openai-api-key>>

Replace the placeholders above:

  • <<control-plane-url>>: Your actual TrueFoundry control plane URL
  • <<api-key>>: The API key associated with your tracing project
  • <<tracing-project-fqn>>: The fully qualified name of your tracing project
Step 3: Initialize OpenTelemetry and OpenAI Client

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Setup tracer provider
provider = TracerProvider()
trace.set_tracer_provider(provider)

# OTLP exporter (HTTP)
otlp_exporter = OTLPSpanExporter()

# Span processor using batch (recommended for production)
span_processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(span_processor)

# Get tracer
tracer = trace.get_tracer(__name__)
Step 4: Instrument Your OpenAI API Calls

This section shows how to instrument your OpenAI API calls using OpenTelemetry’s context managers. The example demonstrates how to trace a chat completion request with proper attributes.

Tracing-related code is highlighted in the code block below.

from openai import OpenAI

# Initialize OpenAI client
client = OpenAI()

def generate_ai_response(input_text):
    with tracer.start_as_current_span("OpenAI-Trace") as span:
        # Set up the chat messages
        input_prompt = [
            {"role": "user", "content": input_text}
        ]
        
        # Add relevant attributes to the span
        span.set_attribute("input.value", input_text)
        span.set_attribute("model.name", "gpt-4")
        span.set_attribute("temperature", 0.7)
        span.set_attribute("gen_ai.prompt.0.role", "user")
        span.set_attribute("gen_ai.prompt.0.content", input_text)

        # Make the API call
        response = client.chat.completions.create(
            messages=input_prompt,
            model="gpt-4",
            temperature=0.7,
        )

        # Add response attributes to the span
        output_content = response.choices[0].message.content
        span.set_attribute("output.value", output_content)
        span.set_attribute("gen_ai.completion.0.role", "assistant")
        span.set_attribute("gen_ai.completion.0.content", output_content)

        return output_content

# Example usage
if __name__ == "__main__":
    input_text = "Explain the concept of AI in 50 words"
    response = generate_ai_response(input_text)
    print(response)
Step 5: Run Your Application and View the Logged Trace

Run your application; once it finishes, the trace should appear in your TrueFoundry tracing project.