Karpenter installation on an AWS cluster
The following steps prepare an AWS account and EKS cluster for a Karpenter installation -
- Create and bootstrap the IAM node role that Karpenter-provisioned nodes will use
$ CLUSTER_NAME=<cluster_name>
$ echo '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}' > node-trust-policy.json
$ aws iam create-role --role-name karpenter-node-role-${CLUSTER_NAME} \
--assume-role-policy-document file://node-trust-policy.json
$ aws iam attach-role-policy --role-name karpenter-node-role-${CLUSTER_NAME} \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
$ aws iam attach-role-policy --role-name karpenter-node-role-${CLUSTER_NAME} \
--policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
$ aws iam attach-role-policy --role-name karpenter-node-role-${CLUSTER_NAME} \
--policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
$ aws iam attach-role-policy --role-name karpenter-node-role-${CLUSTER_NAME} \
--policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
$ aws iam create-instance-profile \
--instance-profile-name karpenter-instance-profile-${CLUSTER_NAME}
$ aws iam add-role-to-instance-profile \
--instance-profile-name karpenter-instance-profile-${CLUSTER_NAME} \
--role-name karpenter-node-role-${CLUSTER_NAME}
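As an optional sanity check, confirm that the node role is attached to the instance profile -
$ aws iam get-instance-profile \
    --instance-profile-name karpenter-instance-profile-${CLUSTER_NAME} \
    --query "InstanceProfile.Roles[].RoleName" --output text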
- Create the IAM role that the Karpenter controller's service account will assume (via IRSA)
$ CLUSTER_ENDPOINT="$(aws eks describe-cluster \
--name ${CLUSTER_NAME} --query "cluster.endpoint" \
--output text)"
$ OIDC_ENDPOINT="$(aws eks describe-cluster --name ${CLUSTER_NAME} \
--query "cluster.identity.oidc.issuer" --output text)"
$ AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' \
--output text)
$ echo "{
\"Version\": \"2012-10-17\",
\"Statement\": [
{
\"Effect\": \"Allow\",
\"Principal\": {
\"Federated\": \"arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT#*//}\"
},
\"Action\": \"sts:AssumeRoleWithWebIdentity\",
\"Condition\": {
\"StringEquals\": {
\"${OIDC_ENDPOINT#*//}:aud\": \"sts.amazonaws.com\",
\"${OIDC_ENDPOINT#*//}:sub\": \"system:serviceaccount:karpenter:karpenter\"
}
}
}
]
}" > controller-trust-policy.json
$ aws iam create-role --role-name karpenter-controller-role-${CLUSTER_NAME} \
--assume-role-policy-document file://controller-trust-policy.json
$ echo '{
    "Statement": [
        {
            "Action": [
                "ssm:GetParameter",
                "iam:PassRole",
                "ec2:DescribeImages",
                "ec2:RunInstances",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeLaunchTemplates",
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceTypes",
                "ec2:DescribeInstanceTypeOfferings",
                "ec2:DescribeAvailabilityZones",
                "ec2:DeleteLaunchTemplate",
                "ec2:CreateTags",
                "ec2:CreateLaunchTemplate",
                "ec2:CreateFleet",
                "ec2:DescribeSpotPriceHistory",
                "pricing:GetProducts"
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "Karpenter"
        },
        {
            "Action": "ec2:TerminateInstances",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/Name": "*karpenter*"
                }
            },
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "ConditionalEC2Termination"
        }
    ],
    "Version": "2012-10-17"
}' > controller-policy.json
$ aws iam put-role-policy --role-name karpenter-controller-role-${CLUSTER_NAME} \
--policy-name karpenter-controller-policy-${CLUSTER_NAME} \
--policy-document file://controller-policy.json
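Optionally, confirm that the controller role and its inline policy exist; the ARN printed here is the one the Karpenter service account gets annotated with during installation -
$ aws iam get-role --role-name karpenter-controller-role-${CLUSTER_NAME} \
    --query "Role.Arn" --output text
$ aws iam list-role-policies --role-name karpenter-controller-role-${CLUSTER_NAME}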
- We need to tag all the subnets where Karpenter should create nodes -
# This lists all the subnet IDs attached to the cluster. Choose the subnets that Karpenter should create nodes in.
$ aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.resourcesVpcConfig.subnetIds"
# Execute the following two commands for each chosen subnet (or use the loop sketched below)
$ aws ec2 create-tags --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared" --resources <subnet_id>
$ aws ec2 create-tags --tags "Key=subnet,Value=private" --resources <subnet_id>
- We also need to tag the cluster security group that Karpenter nodes will use, so the controller can discover it -
$ SECURITY_GROUP_ID=$(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" --output text)
$ aws ec2 create-tags --tags "Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}" --resources ${SECURITY_GROUP_ID}
- Update the aws-auth ConfigMap so that nodes using the Karpenter node role can join the cluster -
$ kubectl edit configmap aws-auth -n kube-system
- Add this section under mapRoles, substituting the real account ID and cluster name (kubectl edit will not expand shell variables) -
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::${AWS_ACCOUNT_ID}:role/karpenter-node-role-${CLUSTER_NAME}
  username: system:node:{{EC2PrivateDNSName}}
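If you prefer not to edit the ConfigMap by hand and eksctl is available, the same mapping can be added non-interactively; this is an equivalent alternative to the manual edit above -
$ eksctl create iamidentitymapping --cluster ${CLUSTER_NAME} \
    --arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/karpenter-node-role-${CLUSTER_NAME} \
    --username 'system:node:{{EC2PrivateDNSName}}' \
    --group system:bootstrappers --group system:nodes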
- We might need to enable Spot instance creation for the account. Execute the following for it; if the command fails with an error saying the service-linked role already exists, Spot instances were already enabled -
$ aws iam create-service-linked-role --aws-service-name spot.amazonaws.com
This prepares the infrastructure for Karpenter. We can install Karpenter itself now.
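Karpenter is typically installed with its Helm chart. The sketch below assumes a pre-v0.32 chart release (those versions use the settings.aws.* values, the instance profile, and the aws-auth wiring configured above); the expected values change between versions, so consult the chart documentation for the release you pin with <karpenter_version> -
$ helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
    --version <karpenter_version> --namespace karpenter --create-namespace \
    --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::${AWS_ACCOUNT_ID}:role/karpenter-controller-role-${CLUSTER_NAME} \
    --set settings.aws.clusterName=${CLUSTER_NAME} \
    --set settings.aws.clusterEndpoint=${CLUSTER_ENDPOINT} \
    --set settings.aws.defaultInstanceProfile=karpenter-instance-profile-${CLUSTER_NAME} \
    --wait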