Creating Statically Provisioned Volumes

This section covers how to create statically provisioned volumes. First, you need to create a PersistentVolume object in the cluster. Once this object is created, you can use it to create a "Volume" on TrueFoundry.

In this section, we will cover how to mount the following as a volume:

  • GCS Bucket
  • S3 bucket
  • Existing EFS
  • Any general volume on Kubernetes

Mount a GCS bucket as a volume

To mount a GCS bucket as a volume on TrueFoundry, follow the steps below. You can refer to this document for more details:

Create a GCS bucket

Create a GCS bucket (for example with the gcloud CLI, as sketched after this list) and ensure the following:

  • The bucket should be single-region (multi-region will work, but it will be slower and more expensive)
  • The bucket region should be the same as that of your Kubernetes cluster
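
For reference, a minimal sketch of creating such a bucket with the gcloud CLI; the placeholder values below are yours to substitute:

# Create a single-region bucket in the same region as your cluster
gcloud storage buckets create gs://<your-bucket-name> \
    --location=<your-cluster-region> \
    --project=<your-project-name>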

Create a Service Account and Grant Relevant Permissions

You need to run the following script, which does the following:

  • Enables the GCS Fuse CSI driver on the cluster
  • Creates a GCP service account for accessing your bucket
  • Grants that service account the storage.objectAdmin role on the bucket
  • Binds the Kubernetes service account in the target namespace to the GCP service account via Workload Identity
#!/bin/bash

# Fill in these values before running the script
PROJECT="<Enter your Project Name>"
CLUSTER_NAME="<Enter your Cluster Name>"
REGION="<Region of your Cluster>"
BUCKET_NAME="<Name of GCS Bucket>"
TARGET_NAMESPACE="<Namespace where you want to mount your GCS bucket>"

# Derived names for the Kubernetes and GCP service accounts
K8S_SA_NAME="$BUCKET_NAME-bucket-sa"
GCP_SA_NAME="$BUCKET_NAME-bucket-gcp-sa"
GCP_SA_ID="$GCP_SA_NAME@$PROJECT.iam.gserviceaccount.com"

# Enable the GCS Fuse CSI driver addon on the cluster
gcloud container clusters update "$CLUSTER_NAME" \
    --update-addons GcsFuseCsiDriver=ENABLED \
    --region="$REGION" \
    --project="$PROJECT"

# Create a GCP service account and grant it access to the bucket
gcloud iam service-accounts create "$GCP_SA_NAME" --project="$PROJECT"
gcloud storage buckets add-iam-policy-binding "gs://$BUCKET_NAME" \
    --member "serviceAccount:$GCP_SA_ID" \
    --role "roles/storage.objectAdmin" \
    --project "$PROJECT"

# Allow the Kubernetes service account in the target namespace to impersonate
# the GCP service account via Workload Identity
gcloud iam service-accounts add-iam-policy-binding "$GCP_SA_ID" \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:$PROJECT.svc.id.goog[$TARGET_NAMESPACE/$K8S_SA_NAME]" \
    --project "$PROJECT"
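
After the script completes, you can optionally verify that the addon and the IAM bindings are in place; a minimal sketch using the same variables as the script:

# Check that the GcsFuseCsiDriver addon shows up as enabled on the cluster
gcloud container clusters describe $CLUSTER_NAME \
    --region=$REGION \
    --project=$PROJECT \
    --format="yaml(addonsConfig)"

# Check that the GCP service account has been granted access to the bucket
gcloud storage buckets get-iam-policy gs://$BUCKET_NAME --project=$PROJECT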

Create a Service Account in the Workspace from the TrueFoundry UI

We now need to create a service account on TrueFoundry in the workspace whose name matches TARGET_NAMESPACE, and the service account must be named GCP_SA_NAME.

  1. Go to Workspaces -> Choose your workspace, click on the three dots on the right, and click on Edit.
  2. Open Advanced Options from the bottom left of the form and fill in the Serviceaccount section.

📘

Important

The service account name and workspace should be exactly the same as in the previous step.
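
Once the workspace is saved, you can check from kubectl which service accounts exist in the target namespace and inspect the one created for the bucket (including any Workload Identity annotation such as iam.gke.io/gcp-service-account, if your setup relies on it); a minimal sketch with placeholder names:

# List service accounts in the namespace, then inspect the one used for the bucket
kubectl get serviceaccounts -n <your-target-namespace>
kubectl get serviceaccount <your-service-account-name> -n <your-target-namespace> -o yaml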

Create a PersistentVolume object as shown below:

Create a PersistentVolume object with the following spec (by doing a kubectl apply):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <Enter a unique PV name here>
spec:
  capacity:
    storage: 30Gi
  csi:
    driver: gcsfuse.csi.storage.gke.io
    volumeHandle: <Enter your Bucket Name here>
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: np-fuse-test # put any value here
  mountOptions:
    - implicit-dirs
  volumeMode: Filesystem
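
Assuming the spec above is saved to a file (gcs-pv.yaml is just an example name), apply it and confirm the PV is registered:

kubectl apply -f gcs-pv.yaml
kubectl get pv <the PV name you entered above>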

Create a Volume on TrueFoundry:

Please follow this section to create a volume on TrueFoundry.

Mount an S3 bucket as a Volume

To mount an S3 bucket as a volume on TrueFoundry, follow these steps:

Setting up IAM Policies and Relevant Roles

Please follow this AWS document to set up Mountpoint for Amazon S3 in an EKS cluster.

This will guide you through the following (a hedged CLI sketch of these steps is shown after this list):

  • Creating an IAM policy that grants Mountpoint permission to access the S3 bucket
  • Creating an IAM role for the driver
  • Installing the Mountpoint for Amazon S3 CSI driver and attaching the role created above
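
For reference, a hedged sketch of those steps using the AWS CLI and eksctl; the policy, role, and file names below are illustrative placeholders, and the AWS document above remains the authoritative reference:

# Replace the <...> placeholders with your own values before running.
# 1. Create an IAM policy that lets Mountpoint access the bucket
cat > s3-mountpoint-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MountpointFullBucketAccess",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<your-bucket-name>"]
    },
    {
      "Sid": "MountpointFullObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::<your-bucket-name>/*"]
    }
  ]
}
EOF
aws iam create-policy \
    --policy-name S3CSIDriverPolicy \
    --policy-document file://s3-mountpoint-policy.json

# 2. Create an IAM role for the driver's service account (IRSA) and attach the policy
eksctl create iamserviceaccount \
    --name s3-csi-driver-sa \
    --namespace kube-system \
    --cluster <your-cluster-name> \
    --region <your-aws-region> \
    --attach-policy-arn arn:aws:iam::<account-id>:policy/S3CSIDriverPolicy \
    --role-name S3CSIDriverRole \
    --role-only \
    --approve

# 3. Install the Mountpoint for Amazon S3 CSI driver as an EKS add-on, attaching the role
aws eks create-addon \
    --cluster-name <your-cluster-name> \
    --addon-name aws-mountpoint-s3-csi-driver \
    --service-account-role-arn arn:aws:iam::<account-id>:role/S3CSIDriverRole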

Creating a Persistent Volume on the Kubernetes Cluster

Create a PV with the following spec (by doing a kubectl apply):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <Enter persistent volume name here>
spec:
  capacity:
    storage: 100Gi
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume # must be unique
    volumeAttributes:
      bucketName: <Enter s3 bucket name here>
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: s3-test # put any value here
  mountOptions:
    - allow-delete
    - region <enter aws region here>
    - allow-other
    - uid=1000
  volumeMode: Filesystem
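
After applying the spec (s3-pv.yaml is just an example filename), you can confirm that the driver is running and that the PV is registered; the DaemonSet name below assumes the default installation of the S3 CSI driver in kube-system:

kubectl get daemonset s3-csi-node -n kube-system
kubectl apply -f s3-pv.yaml
kubectl get pv <the PV name you entered above>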
 

Create a Volume on TrueFoundry:

Please follow this section to create a volume on TrueFoundry.

Mount an Existing EFS as a Volume

To mount an existing EFS file system as a volume on TrueFoundry, follow these steps:

Install EFS CSI driver on your cluster

  • To install the EFS CSI driver on your cluster, go to the TrueFoundry UI -> Clusters -> Installed Applications -> Manage.

  • From the Volumes section, locate AWS EFS CSI Driver and click on Install. You can verify that the driver is running as sketched below.
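
A quick, hedged check that the driver pods are running (this assumes the driver is installed into kube-system with its standard labels; adjust if your installation differs):

kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-efs-csi-driver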

Create an Access Point for your EFS

  • Locate your EFS file system in the AWS console and open it. Ensure that the EFS and the Kubernetes cluster are in the same VPC. Click on "Create access point".

  • Enter details like the name and Root directory path (make sure you fill in the Root directory creation permissions section; UID 1000 and GID 1000 work if you want to attach the volume to a notebook).

  • Click on Create. You can then look up the access point ID for the PersistentVolume spec as sketched below.
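
You will need the file system ID and the access point ID for the PersistentVolume spec in the next step; a minimal sketch of looking them up with the AWS CLI:

# List file systems and their access points to find the IDs used in volumeHandle below
aws efs describe-file-systems --region <your-aws-region>
aws efs describe-access-points --file-system-id <file_system_id> --region <your-aws-region>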

Create a PersistentVolume on the cluster

Create a PV with the following spec (by doing a kubectl apply):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <Enter your PV name here>
spec:
  capacity:
    storage: 5Gi # this number doesn't matter for EFS, any number will work
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <file_system_id>::<access_point_id> # e.g. fs-036e93cbb1fabcdef::fsap-0923ac354cqwerty
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  volumeMode: Filesystem
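
Assuming the spec is saved as efs-pv.yaml (an example filename), apply it and confirm the PV is registered:

kubectl apply -f efs-pv.yaml
kubectl get pv <the PV name you entered above>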

Create a Volume on TrueFoundry:

Please follow this section to create a volume on TrueFoundry.