LLM Gateway

LLM Gateway provides a single API through which you can call any LLM provider, including OpenAI, Anthropic, Bedrock, self-hosted models, and open-source LLMs. The key advantages of this gateway are:

Unified API to call any LLM

LLM Gateway eliminates the need to learn and manage multiple APIs for different LLM providers. Developers can use a single API to access the functionalities of various LLMs, streamlining their development workflow.
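As a sketch of what "one API for every provider" means in practice, the snippet below builds an OpenAI-style chat payload where only the model identifier changes per provider. The gateway URL and model names (`openai-main/gpt-4`, `anthropic-main/claude-3-sonnet`) are hypothetical placeholders; check your deployment's Models list for the actual identifiers.

```python
# Hypothetical gateway endpoint -- substitute your deployment's URL.
GATEWAY_CHAT_URL = "https://<your-gateway-host>/api/llm/chat/completions"

def build_chat_request(model: str, messages: list, **params) -> dict:
    """Build an OpenAI-style chat payload; only `model` varies per provider."""
    return {"model": model, "messages": messages, **params}

messages = [{"role": "user", "content": "Hello!"}]

# The same payload shape works for every integrated provider; switching
# providers is just a change of the model string.
openai_req = build_chat_request("openai-main/gpt-4", messages, temperature=0.2)
claude_req = build_chat_request("anthropic-main/claude-3-sonnet", messages, temperature=0.2)
```

Because the request shape is identical across providers, benchmarking one model against another reduces to swapping a single string.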

Add multiple providers to the gateway

Call any provider using the unified API

Configure retries

Coming soon

Configure fallbacks

Coming soon

Metrics Collection by default

Configure Logging

Coming soon

Configure guardrails

Coming soon

Imagine you're working on a natural language processing (NLP) project that involves a chatbot capable of engaging in meaningful conversations with users. You want to experiment with OpenAI's LLMs, such as GPT-4 and text-davinci-003, to achieve this goal. You also want to tune the LLMs' hyperparameters and benchmark their metrics and logs.

To use OpenAI in LLM Gateway, you first have to integrate it. You can check whether this is already done by seeing if any OpenAI LLMs appear in the Models list:

If you don't see any, you can integrate OpenAI models into the LLM Gateway. To do this, however, you need to be a Tenant Admin. If you do not have administrator privileges, please ask your Tenant Admin to integrate OpenAI on your behalf by following this guide.

Truefoundry's LLM Gateway simplifies the process of using OpenAI's LLMs by providing a secure and seamless interface. It acts as an intermediary between your application and OpenAI's LLMs, handling all the complexities of authentication, authorization, and communication. This allows you to focus on developing your application's core functionality without getting bogged down in the technical details of using LLMs.
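To illustrate how the gateway mediates the call, here is a minimal stdlib-only sketch that sends one chat turn to a gateway endpoint. The URL, API key, and model ID are hypothetical placeholders, not the gateway's documented values; the gateway handles provider authentication, so your application only presents its own key.

```python
import json
import urllib.request

# Hypothetical values -- substitute your gateway URL, your TrueFoundry
# API key, and a model ID from your Models list.
GATEWAY_URL = "https://<your-gateway-host>/api/llm/chat/completions"
API_KEY = "<your-api-key>"

def chat(model: str, prompt: str) -> str:
    """Send one user message through the gateway and return the reply text."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Assumes an OpenAI-style response shape.
    return data["choices"][0]["message"]["content"]

# Example call (requires a live gateway):
# print(chat("openai-main/gpt-4", "Summarize LLM gateways in one line."))
```

Note that your OpenAI credentials never appear in application code; they live in the gateway's provider integration, which is what makes the Tenant Admin setup step above necessary.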

Metrics Dashboard for LLM Gateway

You can also track various performance metrics, enabling you to quantitatively assess each LLM's performance during your experimentation phase.