Adding Models to LLM Gateway
Understanding the Concepts of a Provider and a Model
What is a Provider?
A provider is a company or organization that offers access to various language models. These providers offer different models, each with unique capabilities, performance characteristics, and pricing structures. Examples of popular providers include OpenAI and Anthropic.
What is a Model?
A model is a specific instance of an AI language model offered by a provider. Each model has distinct features and is trained on different datasets, leading to variations in performance and suitability for different tasks. For instance, OpenAI's GPT-3 and GPT-4 are different models, each with its own set of capabilities.
Steps to Add Providers and Models
This section describes how to add models and set up the required access controls.
- Navigate to the Integrations Page: Start by accessing the Integrations page in your TrueFoundry dashboard.
- Choose a Provider: Select the provider you would like to integrate. Options might include OpenAI, Anthropic, and others.
- Add Models: After selecting a provider, you can add the specific models offered by that provider. A model is added by providing the `model_id` as defined by the original provider. For example, if you choose OpenAI, you might add models such as GPT-3.5 (`gpt-3.5-turbo`) or GPT-4 (`gpt-4`) by specifying the corresponding model id from the OpenAI docs. You can find the model ids for other providers in their respective docs.
- Configure Integration Settings: Each model may require specific configurations. Ensure you enter the necessary API keys, access tokens, or any other required credentials and settings.
- Save and Test the Integration: After adding the models, save the integration and test it to ensure everything is functioning correctly. You can typically do this by running sample queries or using the Playground within the TrueFoundry LLM Gateway.
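Beyond the Playground, you can also run a sample query against the gateway from code. The sketch below assumes an OpenAI-compatible chat-completions endpoint; the base URL and the model name are placeholders, so check your own gateway's details before using it.

```python
import json
import urllib.request

# Hypothetical base URL -- replace with your gateway's actual endpoint.
GATEWAY_BASE_URL = "https://your-org.truefoundry.cloud/api/llm"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for a sample query."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def run_sample_query(api_key: str, model: str, prompt: str) -> dict:
    """Send one sample query to verify the integration end to end."""
    req = urllib.request.Request(
        f"{GATEWAY_BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Inspect the request payload without sending it anywhere.
    print(json.dumps(build_chat_request("gpt-4o", "Say hello"), indent=2))
```

If the query returns a normal chat completion, the provider account, model id, and credentials are all wired up correctly.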
Adding Model using GitOps
You can use GitOps to configure TrueFoundry resources. You can create or edit resources using the `tfy apply` command from your CI/CD pipelines.
Below is the equivalent YAML spec for an LLM Gateway Model Provider Account that connects an OpenAI account with GPT-3.5, GPT-4, and GPT-4o added, with permission granted to everyone. (Note: `everyone` is a team that TrueFoundry creates automatically and that contains all the users in an org.)
```yaml
name: my-openai-account
type: provider-account/openai
auth_data:
  api_key: xxxyyyzzz
  type: api-key
integrations:
  - name: gpt-3-5-turbo
    type: integration/model/openai
    model_id: gpt-3.5-turbo
    model_types:
      - chat
    authorized_subjects:
      - team:everyone
  - name: gpt-4-turbo-2024-04-09
    type: integration/model/openai
    model_id: gpt-4-turbo-2024-04-09
    model_types:
      - chat
    authorized_subjects:
      - team:everyone
  - name: gpt-4o
    type: integration/model/openai
    model_id: gpt-4o
    model_types:
      - chat
    authorized_subjects:
      - team:everyone
```
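In a CI/CD pipeline, a spec like the one above can also be built programmatically before being written out for `tfy apply`. A minimal sketch follows; the helper functions are illustrative, not part of any TrueFoundry SDK, and JSON is used for output only to keep the sketch dependency-free (serialize to YAML with a tool such as PyYAML in a real pipeline).

```python
import json


def model_integration(name: str, model_id: str, teams: list[str]) -> dict:
    """One entry of the `integrations` list from the spec above."""
    return {
        "name": name,
        "type": "integration/model/openai",
        "model_id": model_id,
        "model_types": ["chat"],
        "authorized_subjects": [f"team:{t}" for t in teams],
    }


def openai_provider_account(name: str, api_key: str,
                            model_ids: list[str]) -> dict:
    """Build a provider-account spec mirroring the YAML shown above."""
    return {
        "name": name,
        "type": "provider-account/openai",
        "auth_data": {"api_key": api_key, "type": "api-key"},
        "integrations": [
            # Derive an integration name from the model id (dots -> dashes).
            model_integration(mid.replace(".", "-"), mid, ["everyone"])
            for mid in model_ids
        ],
    }


spec = openai_provider_account(
    "my-openai-account",
    "xxxyyyzzz",
    ["gpt-3.5-turbo", "gpt-4-turbo-2024-04-09", "gpt-4o"],
)
print(json.dumps(spec, indent=2))
```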
You can also generate the above YAML from the UI.
In the above YAML, everyone gets access to all the models. You can read more about Access Control in this doc.
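To scope access more narrowly, a model's `authorized_subjects` can list specific teams instead of `everyone`. For example (the team name `ml-team` below is hypothetical):

```yaml
- name: gpt-4o
  type: integration/model/openai
  model_id: gpt-4o
  model_types:
    - chat
  authorized_subjects:
    - team:ml-team
```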