EFS in AWS EKS
Setting up EFS support in your EKS cluster
This section describes how to set up EFS support in your EKS cluster. EFS persistent volumes are available across availability zones and support ReadWriteMany (RWX) access modes.
Pre-requisites
- Authenticate to the AWS CLI.
- Set these values before running the commands below:

```bash
export CLUSTER_NAME=""
export AWS_REGION=""
export ACCOUNT_ID=""
```
- Get the cluster OIDC issuer URL. If you don't have `sed` installed, run the command without the pipe and `sed`, then remove the `https://` prefix from the issuer URL manually.

```bash
OIDC_ISSUER_URL=$(aws eks describe-cluster --name "${CLUSTER_NAME}" --query "cluster.identity.oidc.issuer" --output text | sed 's/https:\/\///')
```
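If `sed` is unavailable, bash parameter expansion can strip the scheme instead. A small sketch (the issuer URL below is a placeholder; in practice it comes from the `aws eks describe-cluster` command above without the pipe):

```shell
# Placeholder issuer URL purely for illustration.
RAW_ISSUER="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890"

# ${var#pattern} removes the shortest matching prefix, here "https://".
OIDC_ISSUER_URL="${RAW_ISSUER#https://}"
echo "${OIDC_ISSUER_URL}"
```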
- Get the VPC ID in which your cluster is running:

```bash
# get the VPC ID
VPC_ID=$(aws eks describe-cluster \
  --name "${CLUSTER_NAME}" \
  --query "cluster.resourcesVpcConfig.vpcId" \
  --region "${AWS_REGION}" \
  --output text)

# get the CIDR range
VPC_CIDR_RANGE=$(aws ec2 describe-vpcs \
  --vpc-ids "${VPC_ID}" \
  --query "Vpcs[].CidrBlock" \
  --output text \
  --region "${AWS_REGION}")
```
Setting up IAM policy and roles
- Create the IAM policy for the EFS CSI driver.
- Download the example IAM policy:

```bash
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/docs/iam-policy-example.json
```

- Create the policy:

```bash
aws iam create-policy \
  --policy-name "EFS_CSI_DRIVER_POLICY-${CLUSTER_NAME}" \
  --policy-document file://iam-policy-example.json
```
- We will create a role with the above policy attached.
- Create the trust-policy file by running the command below. This allows the EFS CSI driver pods to use their service account to talk to EFS.

```bash
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_ISSUER_URL}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_ISSUER_URL}:sub": "system:serviceaccount:aws-efs-csi-driver:efs-csi-controller-sa"
        }
      }
    }
  ]
}
EOF
```
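Before creating the role, you can sanity-check that the generated file is well-formed JSON. A quick self-contained sketch with placeholder values (your real `ACCOUNT_ID` and `OIDC_ISSUER_URL` come from the earlier steps):

```shell
# Placeholder values purely for illustration.
ACCOUNT_ID="123456789012"
OIDC_ISSUER_URL="oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE"

cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_ISSUER_URL}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_ISSUER_URL}:sub": "system:serviceaccount:aws-efs-csi-driver:efs-csi-controller-sa"
        }
      }
    }
  ]
}
EOF

# python3 -m json.tool fails on malformed JSON, so this doubles as validation.
python3 -m json.tool trust-policy.json > /dev/null && echo "trust-policy.json is valid JSON"
```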
- Create the role with the above trust relationship:

```bash
EFS_ROLE_ARN=$(aws iam create-role \
  --role-name "EFS_CSI_DRIVER_ROLE-${CLUSTER_NAME}" \
  --assume-role-policy-document file://trust-policy.json \
  --query 'Role.Arn' --output text)
```
- Attach the policy that we created in Step 1 to the role:

```bash
aws iam attach-role-policy \
  --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/EFS_CSI_DRIVER_POLICY-${CLUSTER_NAME}" \
  --role-name "EFS_CSI_DRIVER_ROLE-${CLUSTER_NAME}"
```
Creating AWS EFS
- Create a security group that allows access on port 2049 from the VPC.
- Create the security group:

```bash
SECURITY_GROUP_ID=$(aws ec2 create-security-group \
  --group-name TfyEfsSecurityGroup \
  --description "Truefoundry EFS security group" \
  --vpc-id "${VPC_ID}" \
  --region "${AWS_REGION}" \
  --output text)
```
- Authorise the security group so that nodes in the VPC can connect to EFS on port 2049. You can make this rule more restrictive by using the CIDRs of the subnets where your nodes run instead of the VPC CIDR.

```bash
aws ec2 authorize-security-group-ingress \
  --group-id "${SECURITY_GROUP_ID}" \
  --protocol tcp \
  --port 2049 \
  --region "${AWS_REGION}" \
  --cidr "${VPC_CIDR_RANGE}"
```
- Create the file system:

```bash
FILE_SYSTEM_ID=$(aws efs create-file-system \
  --region "${AWS_REGION}" \
  --performance-mode generalPurpose \
  --encrypted \
  --throughput-mode elastic \
  --tags Key=Name,Value="${CLUSTER_NAME}-efs" Key=Created-By,Value=Truefoundry Key=cluster-name,Value="${CLUSTER_NAME}" \
  --query 'FileSystemId' \
  --output text)
```
- Create mount targets for the subnets in which your nodes run.
- List all the subnets in your VPC with the command below. Capture the private subnets where your nodes run in the for-loop of the next step; creating a mount target in each of them lets all your nodes mount EFS.

```bash
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=${VPC_ID}" \
  --query 'Subnets[*].{SubnetId: SubnetId,AvailabilityZone: AvailabilityZone,CidrBlock: CidrBlock}' \
  --output table
```
- Run the for-loop to create the mount targets:

```bash
for subnet_id in {<LIST OF SUBNETS COMMA SEPARATED>}
do
  aws efs create-mount-target \
    --file-system-id "${FILE_SYSTEM_ID}" \
    --subnet-id $subnet_id \
    --security-groups "${SECURITY_GROUP_ID}" \
    --region "${AWS_REGION}"
done

################# EXAMPLE ########################
# for subnet_id in {"subnet-09842fe267b586972","subnet-030f97cf00b7a9459"}
# do
#   aws efs create-mount-target \
#     --file-system-id "${FILE_SYSTEM_ID}" \
#     --subnet-id $subnet_id \
#     --security-groups "${SECURITY_GROUP_ID}" \
#     --region "${AWS_REGION}"
# done
##################################################
```
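If you prefer to keep the subnet IDs in a single comma-separated variable, the loop can also be written with a parameter substitution that turns commas into spaces. A sketch with hypothetical subnet IDs (the `aws` call is commented out so the loop itself can be tried safely):

```shell
# Hypothetical subnet IDs; replace with the SubnetId values from the table above.
SUBNET_IDS="subnet-09842fe267b586972,subnet-030f97cf00b7a9459"

# ${SUBNET_IDS//,/ } replaces every comma with a space, yielding the
# whitespace-separated words that `for` iterates over.
for subnet_id in ${SUBNET_IDS//,/ }
do
  echo "creating mount target in ${subnet_id}"
  # aws efs create-mount-target \
  #   --file-system-id "${FILE_SYSTEM_ID}" \
  #   --subnet-id "${subnet_id}" \
  #   --security-groups "${SECURITY_GROUP_ID}" \
  #   --region "${AWS_REGION}"
done
```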
Installing AWS EFS CSI Driver
- Once the file system is created, we need to install the EFS CSI driver in our Kubernetes cluster so that it can create EFS volumes on our behalf.
- From the Integrations tab, click Manage Applications from the three dots on your cluster card.
- Install Aws Efs Csi Driver. This will first prompt for workspace creation and then ask you to confirm the application installation.
- Fill in your region and the Role ARN that was created while setting up IAM policy and roles:

```yaml
image:
  repository: 602401143452.dkr.ecr.$AWS_REGION.amazonaws.com/eks/aws-efs-csi-driver
controller:
  serviceAccount:
    create: true
    annotations:
      eks.amazonaws.com/role-arn: "${EFS_ROLE_ARN}"
```
- Once the application is installed, a storage class needs to be created for persistent volumes to use.
Installing a storage class and testing EFS volumes
- Create a storage class with the `FILE_SYSTEM_ID` that we created:

```bash
kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: "${FILE_SYSTEM_ID}"
  directoryPerms: "700"
  gidRangeStart: "1000" # optional
  gidRangeEnd: "2000" # optional
  basePath: "/truefoundry"
EOF
```
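A PersistentVolumeClaim that dynamically provisions a volume from this storage class might look like the following sketch (the claim name and requested size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim # illustrative name
spec:
  accessModes:
    - ReadWriteMany # EFS supports RWX across availability zones
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi # EFS is elastic; the field is required but not enforced
```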
- Check that the storage class named `efs-sc` has been created:

```bash
kubectl get sc
```

```
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
efs-sc          efs.csi.aws.com         Delete          Immediate              false                  20s
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  99d
```
- Deploy a sample manifest that creates a PVC and a pod using that PVC. If the pod comes up successfully, EFS is working fine.

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/pod.yaml
```