This guide demonstrates how to use the TrueFoundry OtelCollector along with the Traceloop SDK to instrument Agno agent code. In this example, the Agno agent is a research agent that surveys the latest trends to produce a detailed market-research report, for example "A comprehensive report on AI and machine learning."

1. Create Tracing Project, API Key and copy tracing code

Follow the instructions in Getting Started to create a tracing project, generate an API key, and copy the tracing code.

2. Install Dependencies

Install the required packages:

pip install agno==1.2.6 traceloop-sdk openai
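The application in the next step loads its credentials from environment variables via python-dotenv's `load_dotenv()`. As an illustration (the file name and placeholder values are assumptions, not part of the tracing code you copied), a `.env` file in the project root might look like this; `TFY_API_KEY` is read by the example code and `OPENAI_API_KEY` is required by the OpenAI model:

```shell
# .env — loaded by load_dotenv() at startup; do not commit this file
TFY_API_KEY=<your_truefoundry_api_key>
OPENAI_API_KEY=<your_openai_api_key>
```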
3. Add Tracing code to Agno agent application

For Agno agents, add the Traceloop.init() call at application startup. Also apply the @workflow, @task and @tool decorators to the workflow, task and tool functions respectively.

Agno Agent Code
from dotenv import load_dotenv
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.workflow import RunResponse
import random
import os

# importing traceloop sdk and decorators
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow, task, tool

load_dotenv()

# Add the traceloop init code to your application
TFY_API_KEY = os.environ.get("TFY_API_KEY")
Traceloop.init(
    api_endpoint="<enter_your_api_endpoint>",
    headers={
        "Authorization": f"Bearer {TFY_API_KEY}",
        "TFY-Tracing-Project": "<enter_your_tracing_project_fqn>",
    },
)

# traceloop tool decorator
@tool(name="get_random_topic")
def get_random_topic() -> str:
    """Get a random topic from a list of AI-related subjects for research."""
    print("Getting random topic")
    words = [
        "AI", "Machine Learning", "Data Science", "Deep Learning", "Computer Vision",
        "Natural Language Processing", "Robotics", "Blockchain", "Quantum Computing",
        "Gen AI", "LLMs", "RAG", "LLM Agents", "LLM Orchestration", "LLM Tool Calling",
        "LLM Memory", "LLM Planning", "LLM Reasoning", "LLM Chain of Thought",
        "LLM Self-Reflection", "LLM Self-Improvement", "LLM Self-Evaluation",
        "LLM Self-Correction", "LLM Self-Debugging", "LLM Self-Healing",
        "LLM Self-Repairing", "LLM Self-Optimizing", "LLM Self-Adjusting",
        "LLM Self-Adapting", "LLM Self-Learning", "LLM Self-Training",
        "LLM Self-Testing", "LLM Self-Verification", "LLM Self-Validation",
    ]
    return random.choice(words)

# Traceloop workflow decorator
@workflow(name="research_workflow")
def research(topic: str):
    research_agent = Agent(
        model=OpenAIChat(id="gpt-4o-mini"),
        description="Expert in market analysis with keen attention to detail",
        tool_choice="auto",
        tools=[get_random_topic]
    )

    # Traceloop task decorator
    @task(name="research_task")
    def research_task(topic: str):
        researcher_response: RunResponse = research_agent.run(topic)
        serializable_response = (
            researcher_response.dict()
            if hasattr(researcher_response, "dict")
            else str(researcher_response)
        )
        return serializable_response

    return research_task(topic)

if __name__ == "__main__":
    research(topic=(
        "Use the `get_random_topic` tool **exactly once** to select a trending topic in AI. "
        "Do not call the tool multiple times. Once a topic is selected, research the latest trends in that area."
    ))
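Conceptually, each Traceloop decorator wraps its function in a span that opens on entry and closes on exit, so the nesting of your function calls becomes the nesting of spans in the trace. The sketch below illustrates that wrapping pattern with a plain Python decorator as a hypothetical stand-in for the SDK (the `span` helper and `events` list are illustrative only, not Traceloop's actual implementation):

```python
import functools

# Records span start/end markers; in the real SDK these become OTel spans.
events = []

def span(name):
    """Stand-in for Traceloop's @workflow/@task/@tool decorators."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            events.append(f"start:{name}")   # span opens on entry
            try:
                return fn(*args, **kwargs)
            finally:
                events.append(f"end:{name}")  # span closes on exit
        return wrapper
    return deco

@span("research_workflow")
def research(topic):
    @span("research_task")
    def research_task(t):
        return f"report on {t}"
    return research_task(topic)

result = research("AI")
# events: ['start:research_workflow', 'start:research_task',
#          'end:research_task', 'end:research_workflow']
```

Because the task runs inside the workflow function, its span starts after and ends before the workflow span, which is exactly the parent-child nesting you will see in the trace view.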

4. Run your application and view logged trace

Run the application. Spans for the workflow, task and tool functions (along with the instrumented LLM calls) should appear in your tracing project on TrueFoundry.