eksctl creates nodegroups that are mostly immutable, except for the autoscaling properties: the minimum and maximum number of nodes. In certain cases, it might be helpful to ‘scale up’ a nodegroup in a cluster before an event, to test cloud provider quotas, or to make user server startup faster.
Open the appropriate `.jsonnet` file for the cluster in question (located in the `eksctl` folder), and set the `minSize` of the appropriate `notebookNodes` entry to the number of nodes you want.
It is currently unclear whether lowering the `minSize` property just allows the autoscaler to reclaim nodes, or whether it actively destroys nodes at the time of application! If it actively destroys nodes, it is also unclear whether it does so regardless of any user pods running on those nodes! Until this can be determined, please perform scale-downs only when there are no users on the nodes.
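As a sketch, the relevant part of a cluster's `.jsonnet` file might look roughly like this; the exact structure of `notebookNodes` varies per cluster, and every field here other than `minSize` and `maxSize` is illustrative:

```jsonnet
// Hypothetical fragment of a cluster's .jsonnet file.
// Only minSize and maxSize are the autoscaling properties
// discussed on this page; the other fields are illustrative.
{
  notebookNodes: [
    {
      instanceType: "m5.xlarge",  // illustrative
      minSize: 4,                 // raised ahead of the event
      maxSize: 100,
    },
  ],
}
```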
Render the `.jsonnet` file into a YAML file with:

```bash
jsonnet $CLUSTER_NAME.jsonnet > $CLUSTER_NAME.eksctl.yaml
```
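As a quick sanity check (a sketch, with a simplified stand-in for the rendered file), you can grep the generated YAML to confirm that the node counts you set in the `.jsonnet` file came through:

```shell
# Stand-in for a rendered $CLUSTER_NAME.eksctl.yaml, so the check below
# can be shown without running jsonnet; the real file is much larger.
cat > example.eksctl.yaml <<'EOF'
nodeGroups:
  - name: nb-user
    minSize: 4
    maxSize: 100
EOF

# Confirm the autoscaling bounds match what you set in the .jsonnet file.
grep -E 'minSize|maxSize' example.eksctl.yaml
```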
Use `eksctl` to scale the cluster:

```bash
eksctl scale nodegroup --config-file=$CLUSTER_NAME.eksctl.yaml
```
`eksctl` might print warning messages such as `retryable error (Throttling: Rate exceeded)`. This is just a warning, and shouldn't have any actual effect on the scaling operation other than causing a delay.
Validate that the appropriate new nodes are coming up by authenticating to the cluster and running `kubectl get node`.
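For larger scale-ups, counting the `Ready` nodes can be easier than eyeballing the list. A sketch, using captured sample output in place of a live cluster (against a real cluster, pipe `kubectl get node --no-headers` into the same `awk`):

```shell
# Sample `kubectl get node` output; stands in for a live cluster here.
nodes='ip-192-168-12-34   Ready      <none>   5m    v1.25.6
ip-192-168-56-78   Ready      <none>   5m    v1.25.6
ip-192-168-90-12   NotReady   <none>   20s   v1.25.6'

# Count nodes whose STATUS column is exactly "Ready".
echo "$nodes" | awk '$2 == "Ready"' | wc -l
```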
Commit the change and make a PR, noting that you have already completed the scaling operation. This is flexible, as the scaling operation might need to be timed differently in each case. The goal is to make sure that the `minSize` parameter in the GitHub repository matches reality.