Tracing in Agno

Tracing helps you understand what is happening under the hood when an agent run is executed. With TrueFoundry's tracing functionality you can see the path a run takes, the tool calls it makes, the context it uses, and the latency it incurs.

TrueFoundry enhances Agno with powerful observability features, offering real-time monitoring of AI agent workflows, task execution, and LLM performance. By integrating with TrueFoundry's tracing, metrics collection, and error detection capabilities, you gain deep insights into how Agno agents interact, complete tasks, and optimize workflows. Additionally, TrueFoundry provides end-to-end traceability for AI-driven systems, making your Agno implementations more transparent, reliable, and scalable.

Install dependencies:

First, install the required packages:

pip install agno==1.2.6 traceloop-sdk==0.38.12

Set up environment variables:

Add the following environment variables to enable tracing:

OPENAI_API_KEY=sk-proj-*
TRACELOOP_BASE_URL=<<control-plane-url>>/api/otel
TRACELOOP_HEADERS="Authorization=Bearer <<api-key>>"

You can generate an API key from here.
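
If you prefer not to keep these values in a .env file, you can also set them programmatically before initializing the Traceloop SDK. The snippet below is a minimal sketch; the placeholder URL and key are assumptions and must be replaced with your own control plane URL and TrueFoundry API key.

import os

# Placeholder values (assumptions) -- replace with your control plane URL and API key
os.environ["TRACELOOP_BASE_URL"] = "https://<control-plane-url>/api/otel"
os.environ["TRACELOOP_HEADERS"] = "Authorization=Bearer <api-key>"
os.environ["OPENAI_API_KEY"] = "sk-proj-..."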

Demo Agno Agent

The following demo is a research agent that investigates the latest trends and conducts detailed market research with keen attention to detail. For example, it can generate a comprehensive report on AI and machine learning.

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.workflow import RunResponse
from dotenv import load_dotenv

from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow, task

load_dotenv()

# Initialize the Traceloop SDK
Traceloop.init(app_name="agno")

# Trace the end-to-end research run as a workflow span
@workflow(name="research_workflow")
def research(topic: str):
    # Create the research agent backed by an OpenAI chat model
    research_agent = Agent(
        model=OpenAIChat(id="gpt-4o-mini"),
        description="Expert in market analysis with keen attention to detail"
    )

    # Trace the agent call as a task span nested inside the workflow
    @task(name="research_task")
    def research_task(topic: str):
        researcher_response: RunResponse = research_agent.run(topic)
        # Convert the response to a serializable format if necessary
        serializable_response = researcher_response.dict() if hasattr(researcher_response, "dict") else str(researcher_response)
        return serializable_response

    return research_task(topic)

if __name__ == "__main__":
    research(topic="Research the latest trends in AI and machine learning")
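
To try it out, save the script (for example as research_agent.py, a hypothetical filename) and run it from the environment where the variables above are set; the workflow and task spans are exported to TrueFoundry as the agent runs.

python research_agent.py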

View the Logged Trace in the UI