Understanding Azure Node Pools

Node pools in Azure AKS enable the creation and management of distinct worker node groups within a single cluster. They offer the flexibility to allocate resources based on workload requirements and cost considerations. By using different node pools, organizations can optimize resource allocation, scale efficiently, and cut costs by running non-critical workloads or development environments on spot instances. Node pools also enable efficient application packing and isolation, ensuring performance guarantees and fault tolerance, and they make it easier to use GPU resources by dedicating on-demand node pools to high-performance production workloads and spot node pools to cost-effective development workloads.

This document explains how to make better use of node pools in AKS to manage your resources.

System node pool

If you have followed the steps in Creating the AKS cluster, a system node pool has already been created. It runs on on-demand nodes and hosts the applications required to power the platform. These include:

  • Argo CD
  • Argo Rollouts
  • Istio
  • tfy-agent

It is advisable to use at least 2 nodes with 2 vCPU and 8 GB RAM each to successfully install all the necessary applications.
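Once the cluster is up, you can verify the system node pool's size and counts with the Azure CLI. A quick check, assuming your resource group and cluster name are exported as in the commands later in this document:

```shell
# List the cluster's node pools with their mode, VM size, and node count.
# RESOURCE_GROUP and CLUSTER_NAME are placeholders for your own values.
az aks nodepool list \
    --resource-group $RESOURCE_GROUP \
    --cluster-name $CLUSTER_NAME \
    --query "[].{name:name, mode:mode, vmSize:vmSize, count:count}" \
    --output table
```

The pool whose mode is System is the system node pool.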

πŸ“˜

One on-demand node pool is always required

By default, the primary node pool of an AKS cluster (the system node pool) must be on-demand. It is not possible to create the initial node pool as a spot pool.

User-based node pools

User-based node pools are used to run your own applications. These pools can be of type on-demand or spot.

Spot node pool

Spot node pools host user workloads that can tolerate interruptions. Because spot instances are drawn from Azure's spare capacity, they can bring significant savings on your cloud bill; certain applications, dev workloads, and unimportant job runs are therefore good candidates to move onto spot instances.

Creating a spot CPU node pool

A spot CPU node pool should be used where applications can tolerate significant interruptions. By default, TrueFoundry tolerates interruptions for the following platform applications:

  • Prometheus
  • Loki
  • cert-manager
  • argo-workflows

You can select an instance size from this page to get the right size/price ratio for your workloads. With the commands below you can create a spot node pool. First, set the required variables:

# resource group and cluster name of your AKS cluster
export RESOURCE_GROUP=""
export CLUSTER_NAME=""

# enter the instance size from the page linked above
export INSTANCE_SIZE=""

Command to create a spot CPU pool

az aks nodepool add \
    --resource-group $RESOURCE_GROUP \
    --cluster-name $CLUSTER_NAME \
    --name spotnodepool \
    --priority Spot \
    --eviction-policy Delete \
    --spot-max-price -1 \
    --enable-cluster-autoscaler \
    --enable-encryption-at-host \
    --node-vm-size $INSTANCE_SIZE \
    --min-count 2 \
    --node-count 2 \
    --max-count 10 \
    --no-wait

This command spins up a spot node pool with a minimum of 2 nodes that can autoscale up to 10.

Spot node pools are created by default with the taint kubernetes.azure.com/scalesetpriority=spot:NoSchedule, which means every pod that you want scheduled on these spot instances must tolerate it. The toleration looks like this:

tolerations:
  - key: kubernetes.azure.com/scalesetpriority
    operator: Equal
    value: spot
    effect: NoSchedule

However, the toleration alone does not guarantee that these pods will always land on spot instances. You must also select the spot node pool while deploying your services to force them onto spot pools only. Check Adding node pools in the platform to know more.
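Once the pool's nodes join the cluster, you can verify the taint is in place with kubectl. A quick check, assuming the pool name spotnodepool from the command above:

```shell
# Show the taints on nodes belonging to the spot pool; AKS labels each
# node with agentpool=<pool name>.
kubectl get nodes -l agentpool=spotnodepool \
    -o custom-columns='NODE:.metadata.name,TAINTS:.spec.taints[*].key'
```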

Creating a spot GPU node pool

Creating a spot GPU node pool is similar to creating a CPU spot node pool, except for two things:

  • Select the right instance size for the GPU workload and make sure you have the required quota for that GPU instance in your region.
  • Add the taint nvidia.com/gpu=Present:NoSchedule. This taint prevents non-GPU workloads from being scheduled on GPU machines.
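Before creating the pool, it can help to confirm the regional quota for the GPU VM family you picked. A sketch using `az vm list-usage` (the region and the `NC` filter here are illustrative; adjust them to your instance size and region):

```shell
# Show current usage vs. limit for GPU (NC-family) vCPUs in a region.
az vm list-usage \
    --location southcentralus \
    --output table | grep -i "NC"
```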

Execute the command below to create the node pool in your AKS cluster. Make sure to set the variables correctly:

az aks nodepool add \
    --resource-group $RESOURCE_GROUP \
    --cluster-name $CLUSTER_NAME \
    --name gpuspotpool \
    --priority Spot \
    --eviction-policy Delete \
    --spot-max-price -1 \
    --enable-cluster-autoscaler \
    --enable-encryption-at-host \
    --node-vm-size $INSTANCE_SIZE \
    --node-taints nvidia.com/gpu=Present:NoSchedule \
    --min-count 1 \
    --node-count 1 \
    --max-count 2 \
    --mode user \
    --tags team=datascience owner=truefoundry

On-demand or Regular node pools

On-demand or regular node pools are used to deploy applications that require a dedicated machine and must run at the required time. On-demand nodes are generally more expensive than their spot counterparts but come with an availability SLA. In practice, on-demand nodes rarely suffer downtime or interruptions. However, keep in mind that nodes are ephemeral in nature and can still go down under excessive resource utilization.

Creating an on-demand CPU node pool

Run the command below after selecting the right instance size:

az aks nodepool add \
    --resource-group $RESOURCE_GROUP \
    --cluster-name $CLUSTER_NAME \
    --name odnodepool \
    --enable-cluster-autoscaler \
    --enable-encryption-at-host \
    --node-vm-size $INSTANCE_SIZE \
    --min-count 2 \
    --node-count 2 \
    --node-osdisk-size 100 \
    --max-count 10 \
    --no-wait

You can set the node counts according to your needs; it is advisable to keep autoscaling enabled for the pool.
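If the bounds need to change later, an existing pool's autoscaler settings can be updated in place. A sketch, assuming the pool name odnodepool from the command above (the new counts are illustrative):

```shell
# Raise the autoscaling bounds on an existing node pool.
az aks nodepool update \
    --resource-group $RESOURCE_GROUP \
    --cluster-name $CLUSTER_NAME \
    --name odnodepool \
    --update-cluster-autoscaler \
    --min-count 3 \
    --max-count 15
```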

Creating an on-demand GPU node pool

Creating an on-demand GPU node pool is similar to creating a CPU on-demand node pool, except for two things:

  • Select the right instance size for the GPU workload and make sure you have the required quota for that GPU instance in your region.
  • Add the taint nvidia.com/gpu=Present:NoSchedule. This taint prevents non-GPU workloads from being scheduled on GPU machines.

Execute the command below to create the node pool in your AKS cluster. Make sure to set the variables correctly:

az aks nodepool add \
    --resource-group $RESOURCE_GROUP \
    --cluster-name $CLUSTER_NAME \
    --name odgpupool \
    --enable-cluster-autoscaler \
    --enable-encryption-at-host \
    --node-vm-size $INSTANCE_SIZE \
    --node-taints nvidia.com/gpu=Present:NoSchedule \
    --min-count 1 \
    --node-count 1 \
    --max-count 2 \
    --node-osdisk-size 100 \
    --mode user \
    --tags team=datascience owner=truefoundry

Adding node pools in the platform

To add the cluster's node pools to the platform:

  • Go to the Integrations tab and find your cluster.
  • Click the Edit option on your cluster card and add the node pool names.
  • Make sure the node pool names are entered correctly.
Adding Nodepools

  • Now these node pools can be used in service deployments.
  • All services, models and notebook deployments tolerate the spot and nvidia.com/gpu taints by default when needed.

Understanding cost implications of spot and on-demand (regular) node pools

A huge difference can be observed when analysing cloud costs for spot versus on-demand nodes: spot nodes are far cheaper, but this comes with uncertainty about node uptime, so there is a trade-off. Below is a sample set of nodes running for a month to compare costs for spot and on-demand machines.

All prices are for the South Central US region.

| Priority   | Instance type    | Compute            | GPU   | Cost (per month) |
| ---------- | ---------------- | ------------------ | ----- | ---------------- |
| Spot       | Standard_D2s_v5  | 2 vCPU / 8 GB RAM  | False | $10.19           |
| On-demand  | Standard_D2s_v5  | 2 vCPU / 8 GB RAM  | False | $83.95           |
| Spot       | Standard_NC6     | 6 vCPU / 56 GB RAM | True  | $78.84           |
| On-demand  | Standard_NC6     | 6 vCPU / 56 GB RAM | True  | $788.40          |
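From these sample figures the relative savings follow directly: the spot Standard_D2s_v5 is roughly 88% cheaper than on-demand, and the spot Standard_NC6 roughly 90% cheaper.

```shell
# Percentage savings of spot over on-demand, from the sample prices above.
awk 'BEGIN { printf "Standard_D2s_v5: %.0f%% cheaper on spot\n", (1 - 10.19/83.95) * 100 }'
awk 'BEGIN { printf "Standard_NC6: %.0f%% cheaper on spot\n", (1 - 78.84/788.40) * 100 }'
```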