gopaddle supports three different types of node pools:

(a) Managed - Uses EKS AMIs and the standard managed node pool support provided by AWS.

(b) Custom - Supports a custom AMI. For example, you can choose an Ubuntu custom AMI from here based on the region and cluster version that you choose to provision.

(c) Storidge - Supports a custom AMI. Installs and configures Storidge CIO as the underlying storage provisioner on the nodes within the node pool.

Common Attributes:

  • AWS Key - All the EC2 key pairs (private keys) available within the selected region are listed. If no keys are available, create a new key pair in the AWS console before adding the node pool. To create a key pair, follow the AWS documentation on Amazon EC2 key pairs, or see the sketch after this list.

  • Disk Size - Minimum disk size, in GB, for each node in the node pool.

  • Min Nodes - Minimum number of nodes in the node pool.

  • Max Nodes - Maximum number of nodes in the node pool.

  • Desired Size - The desired number of nodes to be created when the node pool is created. The desired size must be within the min and max node counts.

  • Node Type (GPU) - Choose GPU for compute-intensive workloads like Deep Learning.

  • AMI Type - Available types are AL2_x86_64, AL2_x86_64_GPU and AL2_ARM_64. The AL2_x86_64_GPU type is available only when the GPU node type is selected.

  • Availability Zone - One or more availability zones where the nodes within the node pool need to be provisioned.
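
If you prefer to create the key pair programmatically rather than through the AWS console, the following is a minimal sketch using boto3. The key name gp-nodepool-key and the region are placeholders; use the region in which your cluster runs.

    # Minimal sketch: create an EC2 key pair for the node pool using boto3.
    # The key name and region below are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # create_key_pair returns the private key material exactly once -- store it safely.
    response = ec2.create_key_pair(KeyName="gp-nodepool-key")

    with open("gp-nodepool-key.pem", "w") as key_file:
        key_file.write(response["KeyMaterial"])

The new key pair then appears in the AWS Key list for that region when you add the node pool.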

Managed Node Pool

Fill in the common attributes required to create a node pool and add the node pool to the cluster.

Create a new node pool role and enter the node pool role ARN before adding the node pool. Check the Role ARN reference here for more information.

Once the node pool is provisioned, AWS sets the label "eks.amazonaws.com/nodegroup:<nodepoolname>" on the nodes to group them within the node pool and identify their readiness. This label can be used as a nodeSelector when deploying workloads to this node pool, as in the sketch below. When an autoscaling event occurs, new nodes are labeled with the node pool name as well.
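
For example, the following is a minimal sketch, using the official Kubernetes Python client, of a Deployment pinned to a managed node pool through this label. The deployment name, image, namespace and node pool name (my-nodepool) are placeholders.

    # Minimal sketch: pin a Deployment to a managed node pool by using the
    # "eks.amazonaws.com/nodegroup" label as a nodeSelector.
    # The deployment name, image and node pool name are placeholders.
    from kubernetes import client, config

    config.load_kube_config()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    node_selector={"eks.amazonaws.com/nodegroup": "my-nodepool"},
                    containers=[client.V1Container(name="web", image="nginx:1.25")],
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)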

Custom Node Pool

When the Custom node pool type is chosen, provide the custom AMI to be used while creating the node pool.

Create a new node pool role and enter the node pool role ARN before adding the node pool. Check the Role ARN reference here for more information. gopaddle uses an AWS CloudFormation template (CFT) to provision the custom node pool. You can check the Stack Log section for the CFT logs for further debugging.

Once the node pool is provisioned, gopaddle sets 2 labels, "nodePoolName:<nodepoolname>" and "nodePoolType:customNodePool", to group the nodes within the node pool and identify their readiness. These labels can be used as nodeSelectors when deploying workloads to this node pool, and also to check which nodes have joined the pool, as in the sketch below. When an autoscaling event occurs, new nodes are labeled with the node pool name as well.
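
The following is a minimal sketch, using the Kubernetes Python client, that lists the nodes carrying these labels and reports their Ready condition. The node pool name my-custom-pool is a placeholder.

    # Minimal sketch: list the nodes that carry the gopaddle labels for a
    # custom node pool. The node pool name "my-custom-pool" is a placeholder.
    from kubernetes import client, config

    config.load_kube_config()

    nodes = client.CoreV1Api().list_node(
        label_selector="nodePoolName=my-custom-pool,nodePoolType=customNodePool"
    )

    for node in nodes.items:
        # Report each node's Ready condition along with its name.
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
        )
        print(node.metadata.name, "Ready:", ready)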

Storidge Node Pool

Creating a Storidge node pool is supported only for EKS master version 1.8 and above. When the Storidge node pool type is chosen, provide the custom AMI to be used while creating the node pool. The desired node pool size should be set to a minimum of 3 nodes, and the disk size should be a minimum of 100 GB on each node.

Create a new node pool role and enter the node pool role ARN before adding the node pool. Check the Role ARN reference here for more information. gopaddle uses an AWS CloudFormation template (CFT) and AWS Systems Manager (SSM) to provision the custom node pool. You can check the Stack Log section for the CFT and SSM logs for further debugging.

Once the node pool is provisioned, gopaddle installs and configures the CIO driver on each of the nodes in the node pool and sets 2 labels, "nodePoolName:<nodepoolname>" and "nodePoolType:storidgeNodePool", to group the nodes within the node pool and identify their readiness. When an autoscaling event occurs, new nodes are labeled with the node pool name as well. By using these labels as nodeSelectors when deploying workloads to this node pool, as in the sketch below, stateful services no longer need to be confined to a single node. Stateful services can be scheduled on any node within this node pool, and the Storidge CIO driver replicates and moves the data to these nodes based on where the service is scheduled.
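
For example, the following is a minimal sketch that constrains an existing StatefulSet to the Storidge node pool by patching its pod template with these labels as a nodeSelector. The StatefulSet name, namespace and node pool name are placeholders.

    # Minimal sketch: constrain an existing StatefulSet to a Storidge node pool
    # by patching its pod template with the gopaddle labels as a nodeSelector.
    # The StatefulSet name, namespace and node pool name are placeholders.
    from kubernetes import client, config

    config.load_kube_config()

    patch = {
        "spec": {
            "template": {
                "spec": {
                    "nodeSelector": {
                        "nodePoolName": "my-storidge-pool",
                        "nodePoolType": "storidgeNodePool",
                    }
                }
            }
        }
    }

    client.AppsV1Api().patch_namespaced_stateful_set(
        name="postgres", namespace="default", body=patch
    )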
