Troubleshooting Cluster (or) Node Failures
Troubleshooting node events when a node is not in Ready status
Written by Vinothini Raju

Scenario: Node not in Ready status

πŸ”” Event: FreeDiskSpaceFailed

πŸ” Reason: Not enough disk space in the node

By default, Kubernetes garbage collection (GC) is triggered when disk usage on a node crosses the HighThresholdPercent value (90% by default). The ImageGCManager then deletes unused images, starting with the least recently used, until disk usage drops to the LowThresholdPercent value (80% by default). In some cases GC is not triggered, and in such scenarios the FreeDiskSpaceFailed event occurs.
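To see whether a node is already past the GC threshold, you can compare its image-filesystem usage against the default 90% value. A minimal sketch, assuming the images live under /var (adjust the path to wherever your container runtime stores images):

```shell
# Compare a disk-usage percentage against the kubelet's default
# HighThresholdPercent (90) and report whether image GC should trigger.
check_gc_threshold() {
  if [ "$1" -ge 90 ]; then
    echo "above HighThresholdPercent: image GC should trigger"
  else
    echo "below HighThresholdPercent"
  fi
}

# Current usage of the filesystem holding container images
# (/var is an assumption; adjust for your runtime's image path).
usage=$(df --output=pcent /var | tail -1 | tr -dc '0-9')
check_gc_threshold "$usage"
```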

πŸ’‘ Solution

Free up some space or resize the volume. Look for unused Docker images and remove them. For instance, you can run Spotify's docker-gc container to manually clean up unused images on the node:

docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc:ro spotify/docker-gc
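Note that spotify/docker-gc only helps on Docker-based nodes; on containerd-based nodes, crictl rmi --prune serves the same purpose. A small sketch that picks the cleanup command for a given runtime (a dry run: it only prints the command rather than executing it, and the runtime names are illustrative):

```shell
# Print the image-cleanup command for a given container runtime.
# (Dry run: prints the command instead of executing it.)
cleanup_cmd() {
  case "$1" in
    docker)     echo "docker image prune -a -f" ;;
    containerd) echo "sudo crictl rmi --prune" ;;
    *)          echo "unknown runtime: $1" >&2; return 1 ;;
  esac
}

cleanup_cmd containerd
```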


πŸ”” Event: ImageGCFailed

πŸ” Reason: If the disk space threshold hits default 90%, then ImageGCManager does the cleanup automatically. Sometimes ImageGCFailed error appears in the node events if the garbage collection fails.

πŸ’‘ Solution: Same as for FreeDiskSpaceFailed


πŸ”” Event: InvalidDiskCapacity

πŸ” Reason 1: invalid capacity 0 on image filesystem & the node is in 'NotReady' status

This occurs when kubelet does not recognize the disk availability.

πŸ’‘ Solution 1: Restart containerd and kubelet daemons on the node.

systemctl restart containerd
systemctl restart kubelet

(or on microk8s)

sudo systemctl restart snap.microk8s.daemon-kubelet
sudo systemctl status snap.microk8s.daemon-kubelet
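After restarting the daemons, confirm the node returns to Ready. A sketch that maps the Ready condition's status string to a readable label; the jsonpath query is standard kubectl, the node name my-node is a placeholder, and the kubectl call is guarded so the helper can be reused on its own:

```shell
# Map the Ready condition's status ("True"/"False"/"Unknown") to a label.
node_ready() {
  [ "$1" = "True" ] && echo "Ready" || echo "NotReady"
}

# Query the live condition for a node (requires cluster access;
# replace my-node with the actual node name).
if command -v kubectl >/dev/null 2>&1; then
  status=$(kubectl get node my-node \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')
  node_ready "$status"
fi
```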

πŸ” Reason 2: cgroups not enabled on the node(edge ARM)

πŸ’‘ Solution 2: Enable cgroups and reboot the node. The kernel reads only the first line of cmdline.txt, so the parameters must be appended to the existing line rather than added below it (note that sudo echo ... >> does not work either, because the output redirection runs without root privileges):

sudo sed -i '1 s/$/ cgroup_enable=memory cgroup_memory=1/' /boot/firmware/cmdline.txt
sudo reboot
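After the reboot, you can verify that the memory cgroup controller is actually enabled. A small helper that reads /proc/cgroups, where column 4 is the controller's enabled flag (the optional file-path argument exists only to make the function easy to test):

```shell
# Report whether the memory cgroup controller is enabled, based on a
# /proc/cgroups-style file (column 4 is the "enabled" flag).
memory_cgroup_enabled() {
  awk '$1 == "memory" { print (($4 == 1) ? "enabled" : "disabled") }' \
    "${1:-/proc/cgroups}"
}

# Check the live node (guarded so the snippet is safe off-node).
[ -r /proc/cgroups ] && memory_cgroup_enabled || true
```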


πŸ”” Event: ContainerGCFailed

πŸ” Reason: Node is overloaded (not always reflected as disk or memory pressure). Not enough resources are allocated to Docker and it fails to respond in time.

πŸ’‘ Solution 1

  1. Set limits for pods to prevent overloading the node

  2. Cordon and evict the pods

  3. Reboot the server
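Step 1 above can be expressed as resource requests and limits in the pod spec. A minimal illustration (the names, image, and values are placeholders; tune them to your workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod        # placeholder name
spec:
  containers:
    - name: app            # placeholder name
      image: nginx:stable
      resources:
        requests:
          cpu: "100m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```

For steps 2 and 3, kubectl cordon <node> followed by kubectl drain <node> --ignore-daemonsets marks the node unschedulable and evicts its pods before the reboot.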

πŸ” Reason 2: Evictions thresholds are too close to the node's physical memory limits

πŸ’‘ Solution 2: Leave some buffer while setting eviction thresholds
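The buffer can be set explicitly via the kubelet configuration's hard-eviction thresholds. A fragment for illustration (the values are placeholders; size them against the node's actual memory):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "500Mi"   # leave headroom below physical memory
  nodefs.available: "10%"
```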

For more troubleshooting tips, see https://kubernetes.feisky.xyz/v/en/index/cluster
