Requirements
Requirements for TrueFoundry installation on AWS
The following is the list of requirements to set up the compute plane in your AWS account.
AWS Infra Requirements
New VPC + New Cluster
These are the requirements for a fresh TrueFoundry installation. If you are reusing an existing network or cluster, refer to the sections further below in addition to this one.
Requirements | Description | Reason for Requirement |
---|---|---|
AWS Account | Billing must be enabled for the AWS account. | |
VPC | VPC CIDR should be at least /20 for the VPC. Minimum 2 availability zones, with a /24 CIDR for each private subnet. | This is needed to ensure around 250 instances and 4096 pods can be run in the Kubernetes cluster (a /20 provides 4096 IP addresses, and the VPC CNI assigns each pod an IP from the subnet). If we expect the scale to be higher, the subnet range should be increased. A NAT Gateway must be attached to the VPC and the route tables should allow outbound internet access for private subnets through this NAT gateway. |
Egress access for Docker registries | 1. public.ecr.aws 2. quay.io 3. ghcr.io 4. docker.io/truefoundrycloud 5. docker.io/natsio 6. nvcr.io 7. registry.k8s.io | This is to download Docker images for TrueFoundry, ArgoCD, NATS, the GPU operator, Argo Rollouts, Argo Workflows, Istio, and Keda. |
DNS with SSL/TLS | A set of endpoints (preferably wildcard) to point to the deployments being made, something like *.internal.example.com and *.external.example.com. An ACM certificate with the chosen domains as SANs is required in the same region. | When developers deploy their services, they will need to access the endpoints of those services to test them out or call them from other services. This is why we need DNS along with TLS on the compute plane. It is better to use a wildcard, since developers can then deploy services like service1.internal.example.com and service2.internal.example.com. |
ACM Certificate | We need a certificate for the domains listed above. The certificate ARN will be passed to the Istio Ingress config (see the certificate request sketch below this table). | If you have a certificate from some other source, that can also work by creating a secret with the certificate in the istio-system namespace. |
Cloud Quotas | GPU: If you are planning to use GPU machines, make sure you have quotas for G and VT Spot/On-Demand instances and for P Spot/On-Demand instance requests. Inferentia (optional): If you are planning to use Inferentia machines, make sure you have quota for Inferentia Spot/On-Demand machines. | This is to make sure that TrueFoundry can bring up the instances requested by developers. A request needs to be raised to AWS to increase the instance limits if the quotas are not in place. You can check and increase your quotas at AWS EC2 service quotas (see the verification sketch below this table). |
User / ServiceAccount to provision the cluster | 1. STS must be enabled for the user that is being used to create the cluster. 2. The user must have the list of permissions listed below. | See Enabling STS in a region |
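As a minimal sketch of the DNS/ACM rows above (the domains and region are placeholders, not values from this guide):

```bash
# Request a wildcard ACM certificate covering both internal and
# external endpoints; complete the DNS validation records it asks for.
aws acm request-certificate \
  --region us-east-1 \
  --domain-name "*.internal.example.com" \
  --subject-alternative-names "*.external.example.com" \
  --validation-method DNS
```

The command prints a CertificateArn; once DNS validation succeeds, that ARN is what gets passed to the Istio Ingress config.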
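And a sketch for verifying the quota and STS rows with the AWS CLI. The quota names are matched by substring here, since exact names vary by account and region, so adjust the filter as needed:

```bash
# Confirm STS works for the identity that will create the cluster.
aws sts get-caller-identity

# List EC2 quotas relevant to GPU instances; raise an increase
# request in the Service Quotas console if the values are too low.
aws service-quotas list-service-quotas \
  --service-code ec2 \
  --query "Quotas[?contains(QuotaName, 'G and VT') || contains(QuotaName, 'P instances')].[QuotaName, Value]" \
  --output table
```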
Existing network
Requirements | Description | Reason for Requirement |
---|---|---|
VPC | Minimum 2 private subnets in different availability zones with min CIDR /24. Tags should be present on the subnets and node security group as described below. NAT gateway for the private subnets. Minimum 1 public subnet with min CIDR /28 for a public load balancer if endpoints are to be exposed to the internet; auto-assign public IP addresses must be enabled on it. DNS support and DNS hostnames must be enabled for your VPC (see the verification sketch below this table). | This is needed to ensure around 250 instances and 4096 pods can be run in the Kubernetes cluster. If we expect the scale to be higher, the subnet range should be increased. A NAT Gateway must be attached to the VPC and the route tables should allow outbound internet access for private subnets through this NAT gateway. |
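A quick AWS CLI sketch for checking the DNS attributes on an existing VPC (the VPC ID is a placeholder):

```bash
# Both attributes must report "Value": true.
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames
```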
VPC Tags
Your subnets must have the following tags for the TrueFoundry Terraform code to work with them. You can skip this if you are creating a new network, in which case these tags will be created automatically. A sketch for applying the tags with the AWS CLI follows the table.
Resource Type | Required Tags |
---|---|
Private Subnets | "kubernetes.io/cluster/<cluster-name>": "shared", "subnet": "private", "kubernetes.io/role/internal-elb": "1" |
Public Subnets | "kubernetes.io/cluster/<cluster-name>": "shared", "subnet": "public", "kubernetes.io/role/elb": "1" |
EKS Node Security Group | "karpenter.sh/discovery": "<cluster-name>" This tag is required for Karpenter to discover and manage node provisioning for the cluster. |
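A minimal sketch for applying these tags with the AWS CLI; the subnet IDs, security group ID, and cluster name are placeholders:

```bash
CLUSTER_NAME="my-cluster"   # placeholder: your EKS cluster name

# Tag the private subnets.
aws ec2 create-tags \
  --resources subnet-0aaa1111bbb22222c \
  --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared" \
         "Key=subnet,Value=private" \
         "Key=kubernetes.io/role/internal-elb,Value=1"

# Tag the public subnets.
aws ec2 create-tags \
  --resources subnet-0ddd3333eee44444f \
  --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared" \
         "Key=subnet,Value=public" \
         "Key=kubernetes.io/role/elb,Value=1"

# Tag the EKS node security group so Karpenter can discover it.
aws ec2 create-tags \
  --resources sg-0123456789abcdef0 \
  --tags "Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}"
```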
Existing cluster
Requirements | Description | Reason for Requirement |
---|---|---|
Compute | CPU: All Standard (A, C, D, H, I, M, R, T, Z) Spot/On-Demand instances must have a minimum of 4 vCPUs and 8 GB RAM. At least 2 nodes should be available for system components. | |
EKS Version | EKS version 1.30 or higher | Required for compatibility with TrueFoundry components and the latest security features. Newer versions provide better performance and stability. |
Storage | EBS CSI Driver must be installed (Installation Guide for EBS CSI Driver). EFS CSI Driver if using shared storage (Installation Guide for EFS CSI Driver). See the sketch below this table. | Required for persistent volume provisioning and shared storage support. |
Load Balancer | AWS Load Balancer Controller version 2.12.0 or higher (Installation Guide for AWS Load Balancer Controller). Appropriate IAM role for the service account (IRSA). | Required for Ingress and Service type LoadBalancer support. |
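A hedged sketch of how these checks and installs can look with the AWS CLI and Helm; the cluster name, account ID, and IAM role ARN are placeholders, and the IRSA roles are assumed to already exist:

```bash
CLUSTER_NAME="my-cluster"   # placeholder: your EKS cluster name

# Check the control-plane version (must be 1.30 or higher).
aws eks describe-cluster --name "${CLUSTER_NAME}" --query cluster.version --output text

# Install the EBS CSI driver as an EKS managed add-on
# (assumes an IRSA role for the driver already exists).
aws eks create-addon \
  --cluster-name "${CLUSTER_NAME}" \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::111122223333:role/ebs-csi-driver-role

# Install the AWS Load Balancer Controller (>= 2.12.0) via Helm,
# reusing a pre-created service account bound to an IRSA role.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName="${CLUSTER_NAME}" \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```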