This page provides an architecture overview, the requirements, and the steps to set up a TrueFoundry compute plane cluster in AWS.
The architecture of a TrueFoundry compute plane is as follows:
Access Policies Overview
Policy | Description |
---|---|
ELBControllerPolicy | Role assumed by the load balancer controller to provision an ELB when a service of type LoadBalancer is created |
KarpenterPolicy and SQSPolicy | Role assumed by Karpenter to dynamically provision nodes and handle spot node termination |
EFSPolicy | Role assumed by the EFS CSI driver to provision and attach EFS volumes |
EBSPolicy | Role assumed by the EBS CSI driver to provision and attach EBS volumes |
RolePolicy with policies for ECR, S3, SSM, and EKS, granted via a trust relationship | Role assumed by TrueFoundry to allow access to ECR, S3, and SSM services. If you are using TrueFoundry's control plane, the role will be assumed by arn:aws:iam::416964291864:role/tfy-ctl-euwe1-production-truefoundry-deps; otherwise it will be assumed by your control plane's IAM role |
ClusterRole with policies AmazonEKSClusterPolicy, AmazonEKSVPCResourceControllerPolicy, and EncryptionPolicy | Role that provides Kubernetes permissions to manage the cluster lifecycle, networking, and encryption |
NodeRole with policies AmazonEC2ContainerRegistryReadOnlyPolicy, AmazonEKS_CNI_Policy, AmazonEKSWorkerNodePolicy, and AmazonSSMManagedInstanceCorePolicy | Role assumed by EKS nodes to work with AWS resources for ECR access, IP assignment, and cluster registration |
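As a minimal sketch of the trust relationship on the RolePolicy role (the data-source name is illustrative; the exact trust policy is generated by the Terraform code for your account), the TrueFoundry control plane role is allowed to assume it:

```hcl
# Sketch of the trust relationship for the RolePolicy role, assuming the
# TrueFoundry-hosted control plane. If you run your own control plane,
# replace the identifier with your control plane's IAM role ARN.
data "aws_iam_policy_document" "truefoundry_trust" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::416964291864:role/tfy-ctl-euwe1-production-truefoundry-deps"]
    }
  }
}
```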
The EncryptionPolicy grants permissions to create and manage the key used for cluster encryption.
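The exact statement list is produced by the TrueFoundry Terraform code; as a rough sketch (resource and statement names here are illustrative assumptions), the policy grants KMS permissions along these lines:

```hcl
# Hypothetical sketch of the encryption policy for the EKS cluster role.
# The actions below are the common KMS actions EKS secrets encryption needs;
# the real policy is generated for your account by the Terraform code.
data "aws_iam_policy_document" "encryption" {
  statement {
    sid = "AllowClusterKMSUsage"
    actions = [
      "kms:Encrypt",
      "kms:Decrypt",
      "kms:DescribeKey",
      "kms:ListGrants",
      "kms:CreateGrant",
    ]
    resources = [aws_kms_key.eks.arn] # assumption: a KMS key created for the cluster
  }
}
```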
The requirements to set up the compute plane depend on your scenario. Regarding the VPC and EKS cluster, you can decide between the following scenarios:
Whichever scenario you choose, the following requirements apply:

- The VPC CIDR must be /24 or larger, to ensure capacity for ~250 instances and 4096 pods.
- Outbound access must be allowed to the registries public.ecr.aws, quay.io, ghcr.io, tfy.jfrog.io, docker.io/natsio, nvcr.io, and registry.k8s.io, so that the Docker images for ArgoCD, NATS, the GPU operator, Argo Rollouts, Argo Workflows, Istio, Keda, etc. can be downloaded.
- A wildcard domain is recommended for exposing services. Services can instead be exposed under a path-based domain such as services.example.com/tfy/*; however, many frontend applications do not support this.

Your subnets must have the following tags for the TrueFoundry terraform code to work with them.
Resource Type | Required Tags | Description |
---|---|---|
Private Subnets | kubernetes.io/cluster/${clusterName}: "shared", subnet: "private", kubernetes.io/role/internal-elb: "1" | Tags required for EKS to properly manage internal load balancers and subnet identification |
Public Subnets | kubernetes.io/cluster/${clusterName}: "shared", subnet: "public", kubernetes.io/role/elb: "1" | Tags required for EKS to properly manage external load balancers and subnet identification |
EKS Node Security Group | karpenter.sh/discovery: "${clusterName}" | This tag is required for Karpenter to discover and manage node provisioning for the cluster |
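If you manage an existing VPC outside of the TrueFoundry Terraform code, the tags above can be applied with Terraform itself; a sketch assuming a `var.cluster_name` variable and an existing private subnet resource:

```hcl
# Sketch of applying the required tags to an existing private subnet.
# Resource and variable names are illustrative assumptions; repeat for
# each tag and subnet listed in the table above.
resource "aws_ec2_tag" "private_cluster" {
  resource_id = aws_subnet.private.id # assumption: your subnet resource
  key         = "kubernetes.io/cluster/${var.cluster_name}"
  value       = "shared"
}

resource "aws_ec2_tag" "private_internal_elb" {
  resource_id = aws_subnet.private.id
  key         = "kubernetes.io/role/internal-elb"
  value       = "1"
}
```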
TrueFoundry compute plane infrastructure is provisioned using Terraform. You can download the Terraform code for your exact account by filling in your account details and downloading a script that can be executed on your local machine.
Choose to create a new cluster or attach an existing cluster
Go to the platform section in the left panel and click on Clusters. Click on Create New Cluster or Attach Existing Cluster depending on your use case. Read the requirements and, if everything is satisfied, click on Continue.
Fill in the form to generate the Terraform code
A form will be presented with the details for the new cluster to be created. Fill in your cluster details and click Submit when done.
If you are creating a new cluster, the key fields to fill in are:

- Cluster Name - A name for your cluster.
- Region - The region where you want to create the cluster.
- Network Configuration - Choose between New VPC or Existing VPC depending on your use case.
- Authentication - How you are authenticated to AWS on your local machine. This is used to configure Terraform to authenticate with AWS.
- S3 Bucket for Terraform State - Terraform state will be stored in this bucket. It can be a preexisting bucket or a new bucket name; a new bucket will be created automatically by our script.
- Load Balancer Configuration - Configures the load balancer for your cluster. You can choose between a Public or Private load balancer; it defaults to Public. You can also add certificate ARNs and domain names for the load balancer, but these are optional.
- Platform Features - Decides which features, such as BlobStorage, ClusterIntegration, ParameterStore, DockerRegistry, and SecretsManager, will be enabled for your cluster. To read more on how these integrations are used in the platform, please refer to the platform features page.

If you are attaching an existing cluster, the key fields to fill in are:
- Region - The region where your cluster is already created.
- Cluster Configuration - Provide the details of the existing cluster on the form, such as the cluster name, the URL of the OIDC provider, and the other required ARNs.
- Cluster Addons - TrueFoundry needs to install addons like ArgoCD, Argo Workflows, Keda, Istio, etc. Please disable the addons that are already installed on your cluster so that the TrueFoundry installation does not override the existing configuration and affect your existing workloads.
- Network Configuration - Provide the details of the existing VPC and subnets where the cluster is already created.
- Authentication - How you are authenticated to AWS on your local machine. This is used to configure Terraform to authenticate with AWS.
- S3 Bucket for Terraform State - Terraform state will be stored in this bucket. It can be a preexisting bucket or a new bucket name; a new bucket will be created automatically by our script.
- Load Balancer Configuration - Configures the load balancer for your cluster. You can choose between a Public or Private load balancer; it defaults to Public. You can also add certificate ARNs and domain names for the load balancer, but these are optional.
- Platform Features - Decides which features, such as BlobStorage, ClusterIntegration, ParameterStore, DockerRegistry, and SecretsManager, will be enabled for your cluster. To read more on how these integrations are used in the platform, please refer to the platform features page.

Copy the curl command and execute it on your local machine
You will be presented with a curl command to download and execute the script. The script will take care of installing the prerequisites, downloading the Terraform code, and running it on your local machine to create the cluster. This will take around 40-50 minutes to complete.
Verify the cluster is showing as connected in the platform
Once the script is executed, the cluster will be shown as connected in the platform.
Create DNS Record
You can get the load balancer's IP address by going to the platform section in the bottom left panel under the Clusters section. Under the preferred cluster, you'll see the load balancer IP address under the Base Domain URL section.
Create a DNS record in Route 53 or your DNS provider with the following details:
Record Type | Record Name | Record value |
---|---|---|
CNAME | *.tfy.example.com | LOADBALANCER_IP_ADDRESS |
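If your zone is managed with Terraform, the record above could be created as follows. The zone and variable names are assumptions; note that in AWS the load balancer is usually addressed by its DNS name rather than an IP, which is why the record type is CNAME:

```hcl
# Sketch of the wildcard record from the table above, assuming a Route 53
# hosted zone for example.com. var.load_balancer_address holds the value
# shown under the Base Domain URL section in the platform.
resource "aws_route53_record" "tfy_wildcard" {
  zone_id = var.hosted_zone_id # assumption: your example.com hosted zone ID
  name    = "*.tfy.example.com"
  type    = "CNAME"
  ttl     = 300
  records = [var.load_balancer_address]
}
```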
Set up routing and TLS for deploying workloads to your cluster
Follow the instructions here to set up DNS and TLS for deploying workloads to your cluster.
Start deploying workloads to your cluster
You can start by going here.
There are primarily three ways to add TLS to the load balancer in AWS: