Langroid is a Python framework for building LLM-powered applications with a focus on Multi-Agent Programming. It provides intuitive, flexible, and powerful tools for creating sophisticated conversational AI systems and multi-agent workflows.
Multi-Agent Architecture: Build complex AI systems with multiple specialized agents that can collaborate and delegate tasks to each other in sophisticated workflows
Conversation Management: Advanced conversation handling with context management, memory persistence, and natural dialogue flow control
Tool Integration: Seamless integration with external tools and function calling capabilities, enabling agents to interact with APIs, databases, and system resources
Retrieval-Augmented Generation (RAG): Built-in support for document ingestion, vector search, and knowledge retrieval to enhance agent responses with relevant context
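Langroid handles document ingestion and vector search for you; purely to illustrate the retrieval step that RAG performs under the hood, here is a minimal, self-contained sketch (toy hand-made vectors standing in for a real embedding model, hypothetical helper names):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, doc_vecs, docs, k=2):
    # Rank documents by similarity to the query and keep the top k
    scored = sorted(
        zip(docs, doc_vecs),
        key=lambda dv: cosine(query_vec, dv[1]),
        reverse=True,
    )
    return [doc for doc, _ in scored[:k]]

# Toy 3-dimensional "embeddings" standing in for a real embedding model
docs = ["invoice policy", "vacation policy", "security policy"]
doc_vecs = [[1.0, 0.1, 0.0], [0.0, 1.0, 0.1], [0.1, 0.0, 1.0]]
query_vec = [0.9, 0.2, 0.1]  # closest to the "invoice policy" vector

# The retrieved chunks are prepended to the prompt as context
context = retrieve(query_vec, doc_vecs, docs, k=1)
print(context)  # ['invoice policy']
```

In a real Langroid RAG pipeline, the embedding, vector storage, and retrieval are handled by the framework and a vector database rather than in-memory lists.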
Connect Langroid to TrueFoundry’s unified LLM gateway:
```python
from langroid.language_models.openai_gpt import OpenAIGPTConfig
from langroid.agent.chat_agent import ChatAgent, ChatAgentConfig

TRUEFOUNDRY_PAT = "your-truefoundry-api-key"
TRUEFOUNDRY_BASE_URL = "your-truefoundry-base-url"

# Configure the TrueFoundry connection
config = OpenAIGPTConfig(
    chat_model="openai-main/gpt-4o",  # Similarly, you can call any model from any provider, e.g. Anthropic or Gemini
    api_key=TRUEFOUNDRY_PAT,
    api_base=TRUEFOUNDRY_BASE_URL,
)

# Create a chat agent with the configured model
agent_config = ChatAgentConfig(llm=config)
agent = ChatAgent(agent_config)

# Test the integration
response = agent.llm_response("Tell me a recipe with bread and eggs")
print(response.content)
```
The request is routed through your TrueFoundry gateway to the specified model provider. TrueFoundry automatically handles authentication, load balancing, and logging.
Build sophisticated multi-agent systems with TrueFoundry’s model access:
```python
from langroid.language_models.openai_gpt import OpenAIGPTConfig
from langroid.agent.chat_agent import ChatAgent, ChatAgentConfig

TRUEFOUNDRY_PAT = "your-truefoundry-api-key"  # Your TrueFoundry Personal Access Token
TRUEFOUNDRY_BASE_URL = "your-truefoundry-base-url"  # Your TrueFoundry unified endpoint

# Configure different agents with different models through TrueFoundry
researcher_config = OpenAIGPTConfig(
    chat_model="anthropic-main/claude-3-5-sonnet-20241022",
    api_key=TRUEFOUNDRY_PAT,
    api_base=TRUEFOUNDRY_BASE_URL,
)
writer_config = OpenAIGPTConfig(
    chat_model="openai-main/gpt-4o",
    api_key=TRUEFOUNDRY_PAT,
    api_base=TRUEFOUNDRY_BASE_URL,
)

# Create specialized agents
researcher = ChatAgent(ChatAgentConfig(llm=researcher_config))
writer = ChatAgent(ChatAgentConfig(llm=writer_config))

# Agents collaborate on a task
research_data = researcher.llm_response("Research the latest trends in AI for 2024")
final_report = writer.llm_response(
    f"Write a comprehensive summary based on: {research_data.content}"
)

print("Research:", research_data.content)
print("\nFinal Report:", final_report.content)
```
Monitor your Langroid agents through TrueFoundry's metrics tab. With TrueFoundry's AI gateway, you can monitor and analyze:
Performance Metrics: Track key latency metrics like Request Latency, Time to First Token (TTFT), and Inter-Token Latency (ITL) with P99, P90, and P50 percentiles
Cost and Token Usage: Gain visibility into your application’s costs with detailed breakdowns of input/output tokens and the associated expenses for each model
Usage Patterns: Understand how your application is being used with detailed analytics on user activity, model distribution, and team-based usage
Rate Limiting and Load Balancing: Configure rate limits, load balancing, and fallbacks for your models directly in the gateway
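Fallbacks are configured gateway-side in TrueFoundry, so your Langroid code needs no changes. Purely to illustrate the behavior the gateway takes over, here is a sketch of client-side fallback logic (hypothetical `call_model` stub, no real network calls):

```python
def call_with_fallback(prompt, models, call_model):
    # Try each model in order; return the first successful response
    last_err = None
    for model in models:
        try:
            return model, call_model(model, prompt)
        except RuntimeError as err:  # stand-in for a provider/network error
            last_err = err
    raise RuntimeError(f"all models failed: {last_err}")

# Hypothetical stub: the primary model is rate limited, the fallback answers
def fake_call_model(model, prompt):
    if model == "openai-main/gpt-4o":
        raise RuntimeError("rate limited")
    return f"{model} says: ok"

model, reply = call_with_fallback(
    "hello",
    ["openai-main/gpt-4o", "anthropic-main/claude-3-5-sonnet-20241022"],
    fake_call_model,
)
print(model, reply)
```

With the gateway handling this, a single `OpenAIGPTConfig` pointed at the TrueFoundry endpoint gets retries, fallbacks, and load balancing without any of this logic in the application.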