`eksctl` creates nodepools that are mostly immutable, except for the autoscaling properties: the minimum and maximum number of nodes. In certain cases, it might be helpful to 'scale up' a nodegroup in a cluster before an event, to test cloud provider quotas or to make user server startup faster.
Open the appropriate `.jsonnet` file for the cluster in question (located in the `eksctl` folder). Depending on how you intend to scale the nodepools, there are two approaches you may take.
Scale all nodepools. To scale all nodepools, locate the `minSize` property of the `nb` node group and change the value to what you want. An example can be found here: 2i2c-org/infrastructure
Scale a specific nodepool. If you only wish to scale a specific nodepool, you can add the `minSize` property to the local `notebookNodes` variable next to the `instanceType` that you wish to scale. An example can be found here: 2i2c-org/infrastructure
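For illustration, a minimal sketch of what the second approach might look like. This is a hypothetical excerpt, not copied from a real cluster file: the actual `notebookNodes` structure varies per cluster, and the instance types below are made up.

```jsonnet
// Hypothetical excerpt; real cluster .jsonnet files in the eksctl
// folder differ in structure and instance types.
local notebookNodes = [
    // Left untouched: this nodepool scales down to its default minimum
    { instanceType: "r5.xlarge" },
    // Pre-warmed for an event: keep at least 2 nodes running
    { instanceType: "r5.4xlarge", minSize: 2 },
];
```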
It is currently unclear whether lowering the `minSize` property just allows the autoscaler to reclaim nodes, or whether it actively destroys nodes at the time of application! If it actively destroys nodes, it is also unclear whether it does so regardless of user pods running on those nodes. Until this can be determined, please perform scale-downs only when there are no users on the nodes.
Render the `.jsonnet` file into a YAML file with:

    jsonnet $CLUSTER_NAME.jsonnet > $CLUSTER_NAME.eksctl.yaml
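Before applying the rendered file, it can be worth checking that the generated `minSize` values match your edit. A minimal sketch, run here against a stand-in YAML file since the real rendered output depends on the cluster (the nodegroup names below are hypothetical):

```shell
# Stand-in for the real rendered $CLUSTER_NAME.eksctl.yaml (hypothetical names)
cat > /tmp/sample.eksctl.yaml <<'EOF'
nodeGroups:
  - name: nb-r5-xlarge
    minSize: 0
    maxSize: 100
  - name: nb-r5-4xlarge
    minSize: 2
    maxSize: 100
EOF

# Print each nodegroup name alongside its minSize to confirm the edit took effect
grep -E 'name:|minSize:' /tmp/sample.eksctl.yaml
```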
Run `eksctl` to scale the cluster:

    eksctl scale nodegroup --config-file=$CLUSTER_NAME.eksctl.yaml
`eksctl` might print warning messages such as `retryable error (Throttling: Rate exceeded)`. This is just a warning, and shouldn't have any actual effect on the scaling operation other than causing a delay.
Validate that the appropriate new nodes are coming up by authenticating to the cluster and running `kubectl get node`.
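In practice, freshly created nodes report `NotReady` for a short while before flipping to `Ready`. A sketch of summarizing the status column, run here against sample `kubectl get node` output rather than a live cluster (the node names are made up):

```shell
# Sample output standing in for a live `kubectl get node` call
cat > /tmp/nodes.txt <<'EOF'
NAME                          STATUS     ROLES    AGE   VERSION
ip-192-168-1-10.ec2.internal  Ready      <none>   40d   v1.27.4-eks
ip-192-168-2-11.ec2.internal  Ready      <none>   3m    v1.27.4-eks
ip-192-168-3-12.ec2.internal  NotReady   <none>   45s   v1.27.4-eks
EOF

# Skip the header row and count nodes per STATUS; the scale-up is done when
# the NotReady count drops to zero and the Ready total matches your minSize
awk 'NR > 1 { count[$2]++ } END { for (s in count) print s, count[s] }' /tmp/nodes.txt
```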
Commit the change and make a PR, noting that you have already completed the scaling operation. The timing here is flexible, as the scaling operation might need to happen at a different point in each case; the goal is to make sure that the `minSize` parameter in the GitHub repository matches reality.