AWS Bedrock
TrueFoundry offers a secure and efficient gateway to seamlessly integrate various Large Language Models (LLMs) into your applications, including models hosted on AWS Bedrock.
Adding Models
This section explains the steps to add AWS Bedrock models and configure the required access controls.
- From the TrueFoundry dashboard, navigate to Integrations > Add Provider Integration and select AWS.
- Complete the form with your AWS account details, including the authentication information (Access Key + Secret) or Assume Role ID.
- Select the Bedrock Model option and provide the model ID and other required details to add one or more model integrations.
Inference
After adding the models, you can perform inference using an OpenAI-compatible API via the TrueFoundry AI Gateway. For instance, you can use the OpenAI client library directly:
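A minimal sketch using the OpenAI Python SDK; the base URL, API key, and model name below are placeholders for your own gateway configuration:

```python
from openai import OpenAI

# Placeholders: substitute your gateway base URL, your TrueFoundry API key,
# and the model name you configured in the integration.
client = OpenAI(
    api_key="your-truefoundry-api-key",
    base_url="https://your-gateway.truefoundry.cloud/api/inference/openai",
)

response = client.chat.completions.create(
    model="bedrock-provider/claude-3-5-sonnet",  # hypothetical model name
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```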
Supported Models
A list of models supported by AWS Bedrock, along with their corresponding model IDs, can be found here: View Full List
The TrueFoundry AI Gateway supports all text and image models in Bedrock. We are also working on adding support for additional modalities, including speech, in the near future.
Extra Parameters
Internally, the TrueFoundry AI Gateway utilizes the Bedrock Converse API for chat completion.
To pass additional input fields or parameters that are specific to a model, such as top_k and frequency_penalty, include them as extra parameters in the request:
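A hedged sketch using the OpenAI SDK's extra_body option, which merges extra fields into the JSON request body; whether the gateway maps these onto the Converse API's additionalModelRequestFields is an assumption here, and the model name is a placeholder:

```python
from openai import OpenAI

client = OpenAI(api_key="...", base_url="...")  # gateway credentials as in the Inference section

response = client.chat.completions.create(
    model="bedrock-provider/claude-3-5-sonnet",  # hypothetical model name
    messages=[{"role": "user", "content": "Write a haiku about the sea."}],
    extra_body={"top_k": 40},  # model-specific field, forwarded in the request body
)
```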
Cross-Region Inference
To manage traffic during on-demand inference by utilizing compute across regions, you can use AWS Bedrock Cross-Region Inference. While setting the model ID in TrueFoundry, use the Inference Profile ID instead of the model ID.
You can find more information about cross-region inference here.
Use the Inference Profile ID as the model ID while adding the model to the TrueFoundry AI Gateway.
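For illustration (the IDs below are examples only): a cross-region inference profile ID is typically the foundation model ID prefixed with a geography such as us. or eu.:

```
# On-demand foundation model ID (single region):
anthropic.claude-3-5-sonnet-20240620-v1:0

# Cross-region inference profile ID (note the geography prefix):
us.anthropic.claude-3-5-sonnet-20240620-v1:0
```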
Authentication Methods
Using AWS Access Key and Secret
- Create an IAM user (or choose an existing IAM user) following these steps.
- Add the required permissions for this user. The following policy grants permission to invoke all models (see the sketch after this list).
- Create an access key for this user as per this doc.
- Use this access key and secret while adding the provider account to authenticate requests to the Bedrock model.
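A minimal policy sketch that allows invoking any Bedrock model, including streaming invocations; you may want to scope Resource down to specific model ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```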
Using Assumed Role
- You can also directly specify a role that can be assumed by the service account attached to the pods running the AI Gateway.
- Read more about how assumed roles work here.
Using Bedrock Guardrails
- Create a Guardrail in AWS. More information is available at https://aws.amazon.com/bedrock/guardrails
- Copy the Guardrail ID and the version number.
- While calling an AWS Bedrock model through the TrueFoundry AI Gateway, pass the following object along with the request:
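A sketch of the object, using Bedrock's Converse-style guardrail fields; the identifier and version values are placeholders, and the exact key the gateway expects is an assumption here:

```json
{
  "guardrailConfig": {
    "guardrailIdentifier": "your-guardrail-id",
    "guardrailVersion": "1"
  }
}
```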
This ensures that the response has guardrails enforced. Consider this input, where the guardrail is configured to censor PII such as names and email addresses:
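A hedged request sketch via the OpenAI SDK, passing the guardrail object through extra_body; the model name, credentials, and PII values are placeholders:

```python
from openai import OpenAI

client = OpenAI(api_key="...", base_url="...")  # gateway credentials as in the Inference section

response = client.chat.completions.create(
    model="bedrock-provider/claude-3-5-sonnet",  # hypothetical model name
    messages=[
        {"role": "user", "content": "Hi, I am John Smith and my email is john.smith@example.com."},
    ],
    extra_body={
        "guardrailConfig": {
            "guardrailIdentifier": "your-guardrail-id",
            "guardrailVersion": "1",
        }
    },
)
print(response.choices[0].message.content)
```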
Sample output:
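The output below is illustrative only; with PII masking enabled, Bedrock Guardrails replace detected entities with placeholders such as {NAME} and {EMAIL}:

```
Hello {NAME}! I have noted your email as {EMAIL}. How can I help you today?
```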
If you’re using a library like Langchain, you might have to pass the extra param in a parameter like extra_body, as required by the library. For example, refer to this Langchain OpenAI class doc.
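A sketch assuming the langchain-openai package; the base URL, API key, and model name are placeholders for your gateway configuration:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="bedrock-provider/claude-3-5-sonnet",  # hypothetical model name
    api_key="your-truefoundry-api-key",
    base_url="https://your-gateway.truefoundry.cloud/api/inference/openai",
    extra_body={
        "guardrailConfig": {
            "guardrailIdentifier": "your-guardrail-id",
            "guardrailVersion": "1",
        }
    },
)

print(llm.invoke("Hi, I am John Smith.").content)
```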