At various points in an Amazon EKS cluster's lifecycle, direct access to a worker node may be required. This can be done using `kubectl debug` to open an interactive shell on the target node.
```bash
# start debug container
kubectl debug nodes/<nodename> --profile=sysadmin -it --image=<image>
```
Without a privileged profile, `kubectl debug node` behaves as follows:

- The root filesystem of the node is mounted at `/host`.
- The container runs in the host IPC, Network, and PID namespaces, but the pod isn't privileged, so reading some process information may fail, and `chroot /host` may fail.
- If you need a privileged pod, create it manually or use the `--profile=sysadmin` flag.
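If you'd rather create the privileged pod manually, a minimal sketch might look like the following (the pod name, image, and `nodeName` are placeholders; the pod pins itself to the target node, joins the host namespaces, and mounts the node's root filesystem at `/host`):

```bash
# minimal privileged debug pod, applied via a heredoc
# (pod name, image, and node name are placeholders)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: manual-node-debugger
spec:
  nodeName: node01          # schedule directly onto the target node
  hostPID: true             # see the node's processes
  hostNetwork: true         # use the node's network namespace
  hostIPC: true             # share the node's IPC namespace
  containers:
  - name: debugger
    image: alpine           # any small image works
    command: ["sleep", "3600"]
    securityContext:
      privileged: true      # full access, so chroot /host works
    volumeMounts:
    - name: host-root
      mountPath: /host
  volumes:
  - name: host-root
    hostPath:
      path: /
EOF
```

You can then attach to it with `kubectl exec -it manual-node-debugger -- sh`.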
The sysadmin profile typically sets up the pod with:

- `privileged: true`
- `hostPID: true` (for process visibility)
- `hostNetwork: true` (for node networking)
- the host filesystem mounted at `/host`
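You can confirm these settings on the debug pod itself; the pod name below is a placeholder for whatever `kubectl debug` generated:

```bash
# print the pod's host namespaces and privilege level
kubectl get pod node-debugger-node01-abcde \
  -o jsonpath='{.spec.hostPID} {.spec.hostNetwork} {.spec.containers[0].securityContext.privileged}'
```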
To help ensure no vulnerabilities are introduced into the cluster, an easy approach is to use something like Docker Hardened Images (since DHI repositories require authentication, I've mirrored Alpine in my own repository), e.g. `kubectl debug nodes/node01 --profile=sysadmin -it --image=dejanualex/alpine:3.23`
Next, to "switch" from the container filesystem to the node's, run `chroot /host`. From there you are effectively in the node's userland and can use the node's binaries.
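For example, on a typical EKS AMI (this sketch assumes systemd, journalctl, and crictl are present on the node, which is the case for the standard Amazon Linux AMIs):

```bash
# switch into the node's root filesystem
chroot /host

# node-level tooling is now available
systemctl status kubelet                      # check the kubelet service
journalctl -u kubelet --since "10 min ago"    # recent kubelet logs
crictl ps                                     # list running containers via containerd
```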
`kubectl debug` creates a debug pod with a name derived from the node name, so remember to delete the pod once you're done debugging.
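A quick cleanup sketch; `kubectl debug` names node debug pods `node-debugger-<nodename>-<suffix>`, and the exact suffix will differ in your cluster:

```bash
# find the debug pod(s) created by kubectl debug
kubectl get pods -o name | grep node-debugger

# delete it once you're done
kubectl delete pod node-debugger-node01-abcde
```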


