How to solve AWS EFS mount failures on Pods with the error "MountVolume.SetUp failed for volume"

Asanka Vithanage
2 min read · Oct 5, 2021

Context:

We run several applications such as Jenkins, Grafana, and InfluxDB on an Amazon EKS Kubernetes cluster.

These applications use Amazon EFS as persistent storage; for example, Jenkins build data is stored on an EFS file system. An EFS provisioner service runs in the cluster to maintain the connection between EFS and the Kubernetes Pods. This setup had been working fine for us for several months without issues.
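
To make the setup concrete, a claim for this kind of storage looks roughly like the sketch below. The storage class name, claim name, and size are illustrative assumptions, not our exact manifests.

# Illustrative only: a PersistentVolumeClaim served by the EFS provisioner.
# The class name "aws-efs", the claim name, and the size are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
spec:
  accessModes:
    - ReadWriteMany          # EFS allows many pods to mount the same share
  storageClassName: aws-efs  # storage class backed by the efs-provisioner
  resources:
    requests:
      storage: 10Gi          # nominal request; EFS itself grows elastically
EOF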

Issue:

Recently, we upgraded our Kubernetes worker nodes to a new EKS AMI. After the upgrade, we noticed that the pods using EFS mounts were failing.

To troubleshoot, I started looking into the pod logs and pod events, and noticed something like the following in the failing pods' events.

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/efs-provisioner-111111111-x3vlm to ip-10-111-11-11.ec2.internal
Warning FailedMount 2m23s kubelet, 10-111-11-11.ec2.internal MountVolume.SetUp failed for volume "pv-volume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for…
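
For reference, these are the standard kubectl commands for pulling this kind of information; the namespace and pod name below are placeholders taken from the event output above.

# Placeholders: substitute your own namespace and pod name.
kubectl get pods -n default | grep efs-provisioner
kubectl describe pod efs-provisioner-111111111-x3vlm -n default   # events show the FailedMount warning
kubectl logs efs-provisioner-111111111-x3vlm -n default           # provisioner logs, if the container started
kubectl get events -n default --sort-by=.lastTimestamp            # recent events across the namespace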
