AWS

Provisioning Control Plane Infrastructure on AWS

Setting up the Truefoundry control plane on your own cloud involves creating the infrastructure to support the platform and then installing the platform itself.

Setting up Infrastructure

Requirements

These are the infrastructure components required to set up a production-grade Truefoundry control plane.

📘 If you already have the requirements below set up, skip directly to the Installation section.

| Requirement | Description | Reason for Requirement |
| --- | --- | --- |
| Kubernetes cluster | Any Kubernetes cluster will work here - you can also choose the compute-plane cluster itself to install the Truefoundry helm chart. | The Truefoundry helm chart will be installed here. |
| Postgres RDS | Postgres >= 13 | The database is used by the Truefoundry control plane to store all its metadata. |
| S3 bucket | Any S3 bucket reachable from the control plane. | Used by the control plane to store intermediate code while building Docker images. |
| Egress access for Truefoundry auth | Egress access to https://auth.truefoundry.com | Needed to verify the users logging into the Truefoundry platform for licensing purposes. |
| Egress access for Docker registries | public.ecr.aws, quay.io, ghcr.io, docker.io/truefoundrycloud, docker.io/natsio, nvcr.io, registry.k8s.io | To download the Docker images for Truefoundry, ArgoCD, NATS, Argo Rollouts, Argo Workflows, and Istio. |
| DNS with TLS/SSL | One endpoint pointing to the control-plane service (something like platform.example.com, where example.com is your domain), plus a TLS certificate for that domain. | The control-plane URL must be reachable from the compute plane so that the compute-plane cluster can connect to it, and developers will access the Truefoundry UI at this domain. |
| User/ServiceAccount to provision the infrastructure | An AWS user or role with the permissions listed in the Permissions Required section below. | This is the set of permissions needed to provision the infrastructure for the Truefoundry control plane. |
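Before starting, you can optionally sanity-check several of these requirements from a machine inside the control-plane network. This is a sketch, assuming psql and the AWS CLI are installed locally; the RDS endpoint, user, and bucket name are placeholders:

# Egress check: licensing endpoint and one of the required registries.
curl -s -o /dev/null -w 'auth.truefoundry.com -> HTTP %{http_code}\n' https://auth.truefoundry.com
curl -s -o /dev/null -w 'public.ecr.aws -> HTTP %{http_code}\n' https://public.ecr.aws

# Postgres version check (expects >= 13); host and user are placeholders.
psql "host=<rds-endpoint> user=<user> dbname=postgres" -c "SHOW server_version;"

# S3 reachability check; bucket name is a placeholder.
aws s3 ls s3://<bucket-name>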

Permissions Required

We will be using OCLI (Onboarding CLI) to create the infrastructure, with a locally configured AWS profile. Please make sure the user has the following permissions; the exported variables below parameterize the policy JSON.

export REGION=""       # e.g. us-east-1
export SHORT_REGION="" # short form of the region, e.g. usea1
export ACCOUNT_ID=""   # your AWS account ID, e.g. 123524493244
export NAME=""         # name for this installation; used in resource names like tfy-$SHORT_REGION-$NAME
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:DeleteInstanceProfile",
                "rds:AddTagsToResource",
                "iam:GetInstanceProfile",
                "iam:RemoveRoleFromInstanceProfile",
                "rds:DeleteTenantDatabase",
                "iam:AddRoleToInstanceProfile",
                "rds:CreateDBInstance",
                "rds:DescribeDBInstances",
                "rds:RemoveTagsFromResource",
                "rds:CreateTenantDatabase",
                "iam:TagInstanceProfile",
                "rds:DeleteDBInstance"
            ],
            "Resource": [
                "arn:aws:iam::$ACCOUNT_ID:instance-profile/*",
                "arn:aws:rds:$REGION:$ACCOUNT_ID:db:tfy-$SHORT_REGION-$NAME-*"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "rds:AddTagsToResource",
                "rds:DeleteDBSubnetGroup",
                "rds:DescribeDBSubnetGroups",
                "iam:DeleteOpenIDConnectProvider",
                "iam:GetOpenIDConnectProvider",
                "rds:CreateDBSubnetGroup",
                "rds:ListTagsForResource",
                "rds:RemoveTagsFromResource",
                "iam:TagOpenIDConnectProvider",
                "iam:CreateOpenIDConnectProvider",
                "rds:CreateDBInstance",
                "rds:DeleteDBInstance"
            ],
            "Resource": [
                "arn:aws:rds:$REGION:$ACCOUNT_ID:subgrp:tfy-$SHORT_REGION-$NAME-*",
                "arn:aws:iam::$ACCOUNT_ID:oidc-provider/*"
            ]
        },
        {
            "Sid": "VisualEditor9",
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBInstances"
            ],
            "Resource": [
                "arn:aws:rds:$REGION:$ACCOUNT_ID:db:*"
            ]
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "iam:CreatePolicy",
                "iam:GetPolicyVersion",
                "iam:GetPolicy",
                "iam:ListPolicyVersions",
                "iam:DeletePolicy",
                "iam:TagPolicy"
            ],
            "Resource": [
                "arn:aws:iam::$ACCOUNT_ID:policy/tfy-*",
                "arn:aws:iam::$ACCOUNT_ID:policy/truefoundry-*",
                "arn:aws:iam::$ACCOUNT_ID:policy/AmazonEKS_Karpenter_Controller_Policy*",
                "arn:aws:iam::$ACCOUNT_ID:policy/AmazonEKS_CNI_Policy*",
                "arn:aws:iam::$ACCOUNT_ID:policy/AmazonEKS_AWS_Load_Balancer_Controller*",
                "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess"
            ]
        },
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": [
                "iam:ListPolicies",
                "elasticfilesystem:*",
                "iam:GetRole",
                "s3:ListAllMyBuckets",
                "kms:*",
                "ec2:*",
                "s3:ListBucket",
                "route53:AssociateVPCWithHostedZone",
                "sts:GetCallerIdentity",
                "eks:*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor4",
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": "arn:aws:dynamodb:$REGION:$ACCOUNT_ID:table/$NAME-$REGION-tfy-ocli-table"
        },
        {
            "Sid": "VisualEditor5",
            "Effect": "Allow",
            "Action": "iam:*",
            "Resource": [
                "arn:aws:iam::$ACCOUNT_ID:role/tfy-*",
                "arn:aws:iam::$ACCOUNT_ID:role/initial-*"
            ]
        },
        {
            "Sid": "VisualEditor6",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::tfy-$SHORT_REGION-$NAME-*/*",
                "arn:aws:s3:::$NAME-$REGION-tfy-ocli-bucket/*",
                "arn:aws:s3:::tfy-$SHORT_REGION-$NAME*",
                "arn:aws:s3:::$NAME-$REGION-tfy-ocli-bucket",
                "arn:aws:s3:::tfy-$SHORT_REGION-$NAME-truefoundry*",
                "arn:aws:s3:::tfy-$SHORT_REGION-$NAME-truefoundry*/*"
            ]
        },
        {
            "Sid": "VisualEditor7",
            "Effect": "Allow",
            "Action": "events:*",
            "Resource": "arn:aws:events:$REGION:$ACCOUNT_ID:rule/tfy-$SHORT_REGION-$NAME*"
        },
        {
            "Sid": "VisualEditor8",
            "Effect": "Allow",
            "Action": "sqs:*",
            "Resource": "arn:aws:sqs:$REGION:$ACCOUNT_ID:tfy-$SHORT_REGION-$NAME-karpenter"
        }
    ]
}
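One possible way to wire the exports and the policy together is to render the variables and create the policy with the AWS CLI. This is a sketch, assuming the JSON above is saved as tfy-ocli-policy.json.tpl, that envsubst (from gettext) is available, and that the policy name and IAM user name are placeholders of your choosing:

# Substitute $REGION, $SHORT_REGION, $ACCOUNT_ID, and $NAME into the policy.
envsubst < tfy-ocli-policy.json.tpl > tfy-ocli-policy.json

# Create the policy and attach it to the IAM user that will run OCLI.
aws iam create-policy \
  --policy-name tfy-ocli-provisioner \
  --policy-document file://tfy-ocli-policy.json
aws iam attach-user-policy \
  --user-name <your-iam-user> \
  --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/tfy-ocli-provisioner"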

Run installation using OCLI

Prerequisites

  1. Install git if not already present.
  2. Install the AWS CLI (version 2.x) and create an AWS profile locally with the permissions specified above for the AWS account where you want to create the new cluster.

Installing OCLI

  1. Download the binary for your OS and architecture using the matching command below (or use the auto-detect sketch after this list).
    # macOS (Apple Silicon)
    curl -H 'Cache-Control: max-age=0' -s "https://releases.ocli.truefoundry.tech/binaries/ocli_$(curl -H 'Cache-Control: max-age=0' -s https://releases.ocli.truefoundry.tech/stable.txt)_darwin_arm64" -o ocli
    
    # macOS (Intel)
    curl -H 'Cache-Control: max-age=0' -s "https://releases.ocli.truefoundry.tech/binaries/ocli_$(curl -H 'Cache-Control: max-age=0' -s https://releases.ocli.truefoundry.tech/stable.txt)_darwin_amd64" -o ocli
    
    # Linux (ARM64)
    curl -H 'Cache-Control: max-age=0' -s "https://releases.ocli.truefoundry.tech/binaries/ocli_$(curl -H 'Cache-Control: max-age=0' -s https://releases.ocli.truefoundry.tech/stable.txt)_linux_arm64" -o ocli
    
    # Linux (x86_64)
    curl -H 'Cache-Control: max-age=0' -s "https://releases.ocli.truefoundry.tech/binaries/ocli_$(curl -H 'Cache-Control: max-age=0' -s https://releases.ocli.truefoundry.tech/stable.txt)_linux_amd64" -o ocli
    
  2. Make the binary executable and move it to a directory on your $PATH
    sudo chmod +x ./ocli
    sudo mv ocli /usr/local/bin
    
  3. Confirm by running the command
    ocli --version
    
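If you prefer not to pick the build by hand, the following sketch derives the OS and architecture from uname and downloads the matching binary from the same release URLs as above:

OS="$(uname -s | tr '[:upper:]' '[:lower:]')"   # darwin or linux
ARCH="$(uname -m)"
case "$ARCH" in
  x86_64) ARCH="amd64" ;;
  arm64|aarch64) ARCH="arm64" ;;
esac
VERSION="$(curl -H 'Cache-Control: max-age=0' -s https://releases.ocli.truefoundry.tech/stable.txt)"
curl -H 'Cache-Control: max-age=0' -s "https://releases.ocli.truefoundry.tech/binaries/ocli_${VERSION}_${OS}_${ARCH}" -o ocli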

Configuring Input Config file

  1. To create a new cluster, you will need your AWS Account ID, Region, and an AWS Profile
  2. Run the following command to fill in the inputs interactively
    ocli infra-init
    
  3. For networking, there are two possible configurations:
    1. New VPC (Recommended) - This creates a new VPC for your new cluster.
    2. Existing VPC - You can enter your existing VPC and subnet IDs.
  4. Once all the inputs are filled in, a config file named tfy-config.yaml will be generated in your current directory
  5. Modify the file to enable control-plane installation by setting aws.tfy_control_plane.enabled: true. Below are two samples - one using an existing VPC and one creating a new VPC (a one-line yq alternative follows the samples):
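# Sample 1: existing VPC (aws.network.existing: true)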
aws:
  account:
    id: "xxxxxxxxxxxxxxxxx"
  cluster:
    name: "coolml"
    public_access:
      cidrs:
        - 0.0.0.0/0
      enabled: true
    version: "1.28"
  iam_role:
    assume_role_arns:
      - arn:aws:iam::416964291864:role/tfy-ctl-euwe1-production-truefoundry-deps
    ecr:
      enabled: true
    enabled: true
    role_enable_override: false
    role_override_name: ""
    s3:
      bucket_enable_override: false
      bucket_override_name: ""
      enabled: true
    ssm:
      enabled: true
  network:
    existing: true
    private_subnets_cidrs: []
    private_subnets_ids:
      - subnet-xxxxxxxxxxxxxxxxx
      - subnet-xxxxxxxxxxxxxxxxx
      - subnet-xxxxxxxxxxxxxxxxx
    public_subnets_cidrs: []
    public_subnets_ids:
      - subnet-xxxxxxxxxxxxxxxxx
      - subnet-xxxxxxxxxxxxxxxxx
      - subnet-xxxxxxxxxxxxxxxxx
    vpc_cidr: ""
    vpc_id: vpc-xxxxxxxxxxxxxxxxx
  profile:
    name: administrator-xxxxxxxxxxxxxxxxx
  region:
    availability_zones:
      - us-east-1a
      - us-east-1b
      - us-east-1c
    name: us-east-1
  tags: {}
  tfy_control_plane:
    enabled: true
azure: null
binaries:
  terraform:
    binary_path: null
  terragrunt:
    binary_path: null
gcp: null
provider: aws

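# Sample 2: new VPC (aws.network.existing: false)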
aws:
  account:
    id: "xxxxxxxxxxxxxxxxx"
  cluster:
    name: coolml
    public_access:
      cidrs:
      - 0.0.0.0/0
      enabled: true
    version: "1.28"
  iam_role:
    assume_role_arns:
    - arn:aws:iam::416964291864:role/tfy-ctl-euwe1-production-truefoundry-deps
    ecr:
      enabled: true
    enabled: true
    role_enable_override: false
    role_override_name: ""
    s3:
      bucket_enable_override: false
      bucket_override_name: ""
      enabled: true
    ssm:
      enabled: true
  network:
    existing: false
    private_subnets_cidrs:
    - 10.222.0.0/20
    - 10.222.16.0/20
    - 10.222.32.0/20
    private_subnets_ids: []
    public_subnets_cidrs:
    - 10.222.176.0/20
    - 10.222.192.0/20
    - 10.222.208.0/20
    public_subnets_ids: []
    vpc_cidr: 10.222.0.0/16
    vpc_id: ""
  profile:
    name: administrator-xxxxxxxxxxxxxxxxx
  region:
    availability_zones:
    - us-east-2a
    - us-east-2b
    - us-east-2c
    name: us-east-2
  tags: {}
  tfy_control_plane:
    enabled: true
azure: null
binaries:
  terraform:
    binary_path: null
  terragrunt:
    binary_path: null
gcp: null
provider: aws
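If you are scripting the setup, you can flip the control-plane flag without opening an editor. A one-line sketch, assuming the Go-based yq (v4) is installed; it is not otherwise required by OCLI:

yq -i '.aws.tfy_control_plane.enabled = true' tfy-config.yaml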

Create the cluster

Run the following command to create the EKS cluster and IAM roles needed to provide access to various infrastructure components as per the inputs configured above.

ocli infra-create --file tfy-config.yaml

This command may take around 30-45 minutes to complete.
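Once it completes, you can point kubectl at the new cluster and confirm the nodes are up. A short sketch, assuming the cluster name coolml and region us-east-1 from the first sample config above:

aws eks update-kubeconfig --name coolml --region us-east-1
kubectl get nodes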

Installation

Installation steps

  1. Install ArgoCD -
    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/core-install.yaml
    
  2. Add the truefoundry helm repo
    helm repo add truefoundry https://truefoundry.github.io/infra-charts/
    helm repo update
    
  3. We will create a values.yaml for the helm chart installation -
    1. Download the values.yaml from the helm chart repo -
      curl https://raw.githubusercontent.com/truefoundry/infra-charts/main/charts/tfy-k8s-aws-eks-inframold/values-cp.yaml > values.yaml
      
    2. Fill in the tenant_name, cluster_name, truefoundry_image_pull_config_json, and tfy_api_key in the downloaded file. You can get these from the Truefoundry team.
    3. You can also fill in the database details for your Postgres instance here.
  4. Apply the helm chart with the values.yaml (a quick sanity check follows this list)
    helm install -n argocd inframold truefoundry/tfy-k8s-aws-eks-inframold --version 0.0.14 -f values.yaml
    
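Before testing, you can sanity-check that ArgoCD is up and has synced the Truefoundry components. A minimal check (the Application CRD is installed by the ArgoCD manifest from step 1):

kubectl get pods -n argocd
kubectl get applications -n argocd
kubectl get pods -n truefoundry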

Test the installation

  1. Port forward the frontend application to access the Truefoundry dashboard -
    kubectl port-forward svc/truefoundry-truefoundry-frontend-app -n truefoundry 5000
    
  2. Access the Truefoundry dashboard from a browser by opening http://localhost:5000. You can log in with the username and password provided by the Truefoundry team.
  3. Now you are ready to connect a compute-plane cluster to the Truefoundry platform and start deploying. Go here for the directions. You can also onboard this control-plane cluster itself as a compute plane.