Resolve worker nodes' failure to join the Kubernetes cluster

Asanka Vithanage
2 min read · Apr 30, 2021

Recently, I created a Kubernetes 1.18 cluster using the Amazon EKS service, following the steps listed in my article: https://gvasanka.medium.com/how-to-create-a-kubernetes-cluster-using-amazon-eks-da0911ea62e2

Though the control plane and worker nodes were created successfully, I noticed the worker nodes hadn't joined the cluster.

The kubectl get nodes and kubectl get pods commands both returned empty results.
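For reference, this is roughly what that looked like; the exact wording of kubectl's empty-result message varies slightly between versions.

kubectl get nodes
No resources found

kubectl get pods
No resources found in default namespace.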

The EKS cluster overview page also didn't list any node details.

So I logged in to a worker node machine to figure out what was going on. First, I validated whether the kubelet service was running, and it was in a running state.
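On the Amazon EKS-optimized AMI the kubelet runs as a systemd unit, so a status check along these lines is enough to confirm it is up:

sudo systemctl status kubelet
# look for "Active: active (running)" in the output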

Then I checked the kubelet logs using the command journalctl -u kubelet -f

The logs contained the following important entries:

kubelet[5323]: I0430 07:40:43.272559    5323 csi_plugin.go:945] Failed to contact API server when waiting for CSINode publishing: Unauthorized
kubelet[5323]: E0430 07:40:43.282461    5323 kubelet.go:2275] node "ip-100-138-136-157.eu-west-2.compute.internal" not found
kubelet[5323]: E0430 07:40:43.646953    5323 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Unauthorized
kubelet[5323]: E0430 07:40:43.683373    5323 kubelet.go:2275] node "ip-100-138-136-157.eu-west-2.compute.internal" not found
kubelet[5323]: E0430 07:40:44.184455    5323 kubelet.go:2275] node "ip…
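These repeated Unauthorized responses mean the API server is rejecting the kubelet's credentials, rather than the node having a networking problem. On EKS, worker nodes authenticate through the aws-auth ConfigMap in the kube-system namespace, which has to map the nodes' IAM role to the system:bootstrappers and system:nodes groups, so a reasonable check from a machine with kubectl access is to inspect that mapping (the role ARN below is only a placeholder):

kubectl describe configmap aws-auth -n kube-system

# A healthy mapRoles entry for the worker nodes looks roughly like this;
# substitute the actual instance role ARN of your node group.
mapRoles: |
  - rolearn: arn:aws:iam::111122223333:role/my-eks-worker-node-role
    username: system:node:{{EC2PrivateDNSName}}
    groups:
      - system:bootstrappers
      - system:nodes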

Asanka Vithanage

Software Quality Assurance Professional, Problem Solver, SOA Tester, Automation Engineer, CI/CD Practitioner, DevOps enthusiast