We often need to access cloud-managed services like blob storage, queues, and databases from our services. One of the most common use cases is accessing an S3 or GCS bucket from our services to read or write data. To enable this in Truefoundry, the process is the same as it would be in any Kubernetes cluster. The example below explains how to read and write data to blob storage; the concept remains roughly the same when connecting to other cloud services like SQS. The key steps are:

1. Add the cloud-specific code to access the service

  • AWS
  • GCP
  • Azure
from fastapi import FastAPI, UploadFile
from fastapi.responses import StreamingResponse
import boto3
import io

app = FastAPI()

AWS_REGION = "YOUR_AWS_REGION"
S3_BUCKET_NAME = "your-s3-bucket-name"

# Credentials are resolved from the environment (see the authentication step below).
s3 = boto3.client("s3", region_name=AWS_REGION)

@app.post("/upload/{object_name}")
async def upload_file(object_name: str, file: UploadFile):
    """Uploads a file to S3."""
    contents = await file.read()
    s3.upload_fileobj(io.BytesIO(contents), S3_BUCKET_NAME, object_name)
    return {"message": f"File uploaded to s3://{S3_BUCKET_NAME}/{object_name}"}


@app.get("/download/{object_name}")
async def download_file(object_name: str):
    """Downloads a file from S3."""
    obj = s3.get_object(Bucket=S3_BUCKET_NAME, Key=object_name)
    return StreamingResponse(
        obj["Body"].iter_chunks(chunk_size=4096),
        media_type=obj.get("ContentType", "application/octet-stream"),
        headers={"Content-Disposition": f"attachment;filename={object_name}"},
    )

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

2. Authenticate your service to access the cloud service

We need to provide the correct credentials to our code so that it can authenticate and connect to the cloud services. The exact approach depends on the cloud provider. Here’s how you can do it for the most common cloud providers:
  • AWS
  • GCP
  • Azure
There are two ways to authenticate your service to access AWS services:

1. Access Key and Secret Access Key

This involves setting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. AWS SDKs will automatically pick these up from the environment and authenticate with the corresponding AWS service. The access key and secret access key can be found in the AWS console and can be generated by your Infra team.
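As a sketch of this approach (the environment variable names are the standard AWS SDK ones; the helper itself is hypothetical), a service can fail fast at startup if the credentials were not injected:

```python
import os

# Standard environment variable names the AWS SDKs look for.
REQUIRED_VARS = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")

def missing_aws_env(env=None):
    """Return the names of required AWS credential variables that are unset."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# At startup, fail fast if credentials were not injected:
#     if missing_aws_env():
#         raise SystemExit(f"Missing AWS credentials: {missing_aws_env()}")
```

Failing at startup makes a missing secret show up as an obvious crash-loop rather than an intermittent 500 at request time.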
2. IAM Roles for Service Accounts (IRSA)

This approach involves creating IAM roles, associating them with Kubernetes service accounts, and configuring your deployments to use those service accounts. Here’s a detailed breakdown.

Key Concepts
  • Kubernetes Service Accounts (SA): These are identities for processes running inside a pod. They provide a way to authenticate your pods with other Kubernetes services and external resources.
  • IAM Roles: IAM roles are sets of permissions that define what actions an AWS entity (like a user, application, or service) can perform.
  • IAM Roles for Service Accounts (IRSA): This is the key technology that allows you to map a Kubernetes service account to an IAM role. It uses AWS’s OpenID Connect (OIDC) provider capability.
Using IRSA (IAM Roles for Service Accounts), you can securely grant Kubernetes deployments access to cloud services by mapping service accounts to IAM roles. This approach is recommended and well understood by Infrastructure / DevOps teams; please reach out to them to get the IAM role and service account created.
Step 1: Get cluster and account details

We will need the cluster name, AWS account ID, region, and the namespace (workspace in Truefoundry) in which the application will be deployed. Set the following variables:
export CLUSTER_NAME="your-cluster-name"
export ACCOUNT_ID="your-aws-account-id"
export AWS_REGION="your-aws-region"
export NAMESPACE="your-namespace" # the workspace name in Truefoundry
export SERVICE_ACCOUNT_NAME="your-service-account-name" # anything descriptive, e.g. s3-<bucket-name>-access-sa
Step 2: Get the Cluster's OIDC Provider URL

OIDC_ISSUER_URL=$(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | sed 's/https:\/\///')
The value of OIDC_ISSUER_URL will be the OIDC provider URL. It will be something like: oidc.eks.YOUR_REGION.amazonaws.com/id/YOUR_OIDC_ID
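The sed in the command above only strips the scheme; IAM expects the bare provider host and path. The same normalization in Python, shown with a made-up issuer URL:

```python
def oidc_provider(issuer_url: str) -> str:
    """Strip the https:// scheme; IAM references the bare provider host/path."""
    prefix = "https://"
    return issuer_url[len(prefix):] if issuer_url.startswith(prefix) else issuer_url

# Example with a made-up issuer URL:
print(oidc_provider("https://oidc.eks.us-east-1.amazonaws.com/id/ABCD1234"))
# -> oidc.eks.us-east-1.amazonaws.com/id/ABCD1234
```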
Step 3: Create an IAM Policy

Create an IAM policy with the required permissions. This example grants full access to S3. It’s strongly recommended to scope down the permissions to only what’s necessary for security best practices.
s3-access-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
Replace your-bucket-name with the actual name of your S3 bucket. You can also use wildcards to specify multiple buckets or prefixes within a bucket. Create the policy using the AWS CLI:
aws iam create-policy \
  --policy-name "${CLUSTER_NAME}-s3-policy" \
  --policy-document file://s3-access-policy.json
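If you scope the policy down as recommended, it can help to generate the document programmatically so the bucket and action lists stay consistent. A sketch (the action list and bucket name here are illustrative assumptions, not requirements):

```python
import json

def s3_policy(bucket, actions=("s3:GetObject", "s3:PutObject", "s3:ListBucket")):
    """Build a least-privilege S3 policy document for a single bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": list(actions),
                "Resource": [
                    f"arn:aws:s3:::{bucket}",    # bucket-level actions (ListBucket)
                    f"arn:aws:s3:::{bucket}/*",  # object-level actions (Get/Put)
                ],
            }
        ],
    }

# Print the JSON to save as s3-access-policy.json:
print(json.dumps(s3_policy("your-bucket-name"), indent=2))
```

Note that bucket-level actions like s3:ListBucket apply to the bucket ARN, while object-level actions apply to the /* ARN, which is why both resources are listed.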
Step 4: Create an IAM Role

Create the assume role policy file.
cat > assume-role-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_ISSUER_URL}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_ISSUER_URL}:aud": "sts.amazonaws.com",
          "${OIDC_ISSUER_URL}:sub": "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
}
EOF
Create an IAM role using this assume role policy:
IAM_ROLE_ARN=$(aws iam create-role --role-name "${CLUSTER_NAME}-s3-role" --assume-role-policy-document file://assume-role-policy.json --output text --query 'Role.Arn')
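A common failure mode with IRSA is a mismatched sub condition in the trust policy: the string must match the namespace and service account name exactly. A quick sanity check (the names below are hypothetical):

```python
def irsa_subject(namespace: str, service_account: str) -> str:
    """The exact value IRSA places in the token's `sub` claim."""
    return f"system:serviceaccount:{namespace}:{service_account}"

# Compare against the StringEquals condition in assume-role-policy.json:
print(irsa_subject("my-workspace", "s3-access-sa"))
# -> system:serviceaccount:my-workspace:s3-access-sa
```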
Step 5: Attach the IAM Policy to the Role

aws iam attach-role-policy --role-name "$CLUSTER_NAME-s3-role" --policy-arn="arn:aws:iam::${ACCOUNT_ID}:policy/${CLUSTER_NAME}-s3-policy"
Step 6: Create and Apply the Kubernetes Service Account

Create a Kubernetes service account in your desired namespace. You can apply this either via kubectl or using the Truefoundry UI. This is a simple YAML file (e.g., service-account.yaml):
service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${SERVICE_ACCOUNT_NAME}
  namespace: ${NAMESPACE}
  annotations:
    eks.amazonaws.com/role-arn: $IAM_ROLE_ARN
Apply using kubectl:
kubectl apply -f service-account.yaml
or create the service account via the Truefoundry UI.
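Note that kubectl does not expand shell variables inside a manifest, so the ${...} placeholders must be rendered first, e.g. with envsubst (envsubst < service-account.yaml | kubectl apply -f -) or a small stdlib script like this sketch (the substituted values are hypothetical):

```python
from string import Template

# Same manifest as above, with Template-style placeholders.
MANIFEST = Template("""\
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${SERVICE_ACCOUNT_NAME}
  namespace: ${NAMESPACE}
  annotations:
    eks.amazonaws.com/role-arn: ${IAM_ROLE_ARN}
""")

rendered = MANIFEST.substitute(
    SERVICE_ACCOUNT_NAME="s3-access-sa",  # hypothetical values
    NAMESPACE="my-workspace",
    IAM_ROLE_ARN="arn:aws:iam::123456789012:role/demo-s3-role",
)
print(rendered)
```

Template.substitute raises KeyError on a missing value, so an unrendered placeholder cannot slip through silently.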
Step 7: Verify the Service Account

Run a pod and test whether you can perform operations on the S3 bucket:
kubectl apply -f -<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: aws-cli-container
      image: amazon/aws-cli
      env:
        - name: AWS_S3_BUCKET_NAME
          value: "your-s3-bucket-name"
      command: ["/bin/bash"]
      args: ["-c", "sleep 3600"]
  serviceAccountName: ${SERVICE_ACCOUNT_NAME}
EOF
Exec into the pod and check whether you can list the bucket:
kubectl exec -it my-pod -- /bin/bash
bash-4.2# aws s3 ls s3://your-bucket-name/
If the role and service account are wired up correctly, the command lists the bucket contents instead of returning an access-denied error.

3. Select the service account for the service

Once you’ve configured the service account in Kubernetes following the steps above, select it for your service in the Truefoundry UI. This option becomes visible after switching on the advanced options in the service deployment form.