## `ebtables` or a similar executable not found during installation

If you see the following warnings while running `kubeadm init`:
```
[preflight] WARNING: ebtables not found in system path
[preflight] WARNING: ethtool not found in system path
```
Then you may be missing `ebtables`, `ethtool`, or a similar executable on your Linux machine. You can install them with the following commands:

- For Ubuntu/Debian users, run `apt install ebtables ethtool`.
- For CentOS/Fedora users, run `yum install ebtables ethtool`.
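If you want to confirm the tools are now visible on your `PATH` before re-running `kubeadm init`, a quick check:

```bash
# Verify that the executables kubeadm probes for are discoverable
command -v ebtables
command -v ethtool
```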
## kubeadm blocks waiting for control plane during installation

If you notice that `kubeadm init` hangs after printing out the following line:

```
[apiclient] Created API client, waiting for the control plane to become ready
```
This may be caused by a number of problems. The most common are:

- The default cgroup driver configuration for the kubelet differs from the one used by Docker. Check the system log file (e.g. `/var/log/messages`) or examine the output from `journalctl -u kubelet`. If you see something like the following:

  ```
  error: failed to run Kubelet: failed to create kubelet:
  misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
  ```

  There are two common ways to fix the cgroup driver problem:

  1. Reconfigure or reinstall Docker so that its cgroup driver matches the kubelet’s.
  2. Change the kubelet config to match the Docker cgroup driver, as described for CentOS later on this page.

- Control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`, as shown in the sketch below.
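A hedged example of that container-level check (the container IDs and names will differ on your machine):

```bash
# List control plane containers, then inspect one that keeps restarting
docker ps -a | grep kube
docker logs <container-id>   # <container-id> is a placeholder taken from the output above
```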
## Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state

Right after `kubeadm init` there should not be any such Pods. If there are Pods in such a state right after `kubeadm init`, please open an issue in the kubeadm repo. `kube-dns` should be in the `Pending` state until you have deployed the network solution.
However, if you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state after deploying the network solution and nothing happens to `kube-dns`, it’s very likely that the Pod Network solution that you installed is somehow broken. You might have to grant it more RBAC privileges or use a newer version. Please file an issue in the Pod Network provider’s issue tracker and get the issue triaged there.
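Before filing, it can help to capture the add-on’s own state and logs (a generic sketch; the Pod names and namespace depend on the provider):

```bash
# Most network add-ons run in kube-system; find their Pods and pull logs
kubectl -n kube-system get pods
kubectl -n kube-system logs <network-addon-pod-name>   # placeholder name
```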
## `kube-dns` is stuck in the `Pending` state

This is expected and part of the design. kubeadm is network provider-agnostic, so the admin should install the pod network solution of choice. You have to install a Pod Network before `kube-dns` can be fully deployed. Hence the `Pending` state before the network is set up.
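You can watch the transition; until an add-on is installed, the DNS Pod stays `Pending` while the other control plane Pods should be `Running`:

```bash
# kube-dns remains Pending until a pod network add-on is deployed
kubectl get pods --all-namespaces
```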
## `HostPort` services do not work

The `HostPort` and `HostIP` functionality is available depending on your Pod Network provider. Please contact the author of the Pod Network solution to find out whether `HostPort` and `HostIP` functionality are available.
Verified HostPort CNI providers:
For more information, read the CNI portmap documentation.
If your network provider does not support the portmap CNI plugin, you may need to use the NodePort feature of services or use `HostNetwork=true`.
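As an illustration of the NodePort alternative (a sketch with a hypothetical deployment name `my-nginx`):

```bash
# Expose the deployment on a port on every node instead of relying on HostPort
kubectl expose deployment my-nginx --type=NodePort --port=80
```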
## Pods are not accessible via their Service IP

- Many network add-ons do not yet enable hairpin mode, which allows pods to access themselves via their Service IP if they don’t know about their podIP. This is an issue related to CNI. Please contact the network add-on providers to get timely information about whether they support hairpin mode.

- If you are using VirtualBox (directly or via Vagrant), you will need to ensure that `hostname -i` returns a routable IP address (i.e. one on the second network interface, not the first one). By default, it doesn’t do this and the kubelet ends up using the first non-loopback network interface, which is usually NATed. Workaround: modify `/etc/hosts`; take a look at the ubuntu-vagrantfile Vagrantfile for how this can be achieved, or see the sketch below.
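A sketch of the `/etc/hosts` workaround, assuming the VM’s second (host-only) interface has the address `192.168.33.10` (an example value; use your own):

```bash
# Pin the node's hostname to the routable second-NIC address
echo "192.168.33.10 $(hostname)" | sudo tee -a /etc/hosts
hostname -i   # should now print 192.168.33.10
```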
## TLS certificate errors

The following error indicates a possible certificate mismatch.
```
# kubectl get po
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
```
Verify that the `$HOME/.kube/config` file contains a valid certificate, and regenerate a certificate if necessary. Another workaround is to overwrite the default `kubeconfig` for the “admin” user:
```bash
mv $HOME/.kube $HOME/.kube.bak
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
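To inspect the certificate itself, one approach (a sketch; it assumes the client certificate is embedded inline as base64 under `client-certificate-data`, which is how kubeadm writes `admin.conf`):

```bash
# Decode the embedded client certificate and print its issuer and validity window
grep client-certificate-data $HOME/.kube/config | awk '{print $2}' \
  | base64 -d | openssl x509 -noout -subject -issuer -dates
```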
## Errors on CentOS when setting up masters

If you are using CentOS and encounter difficulty while setting up the master node, verify that your Docker cgroup driver matches the kubelet config:
```bash
docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```
If the Docker cgroup driver and the kubelet config don’t match, change the kubelet config to match the Docker cgroup driver. The flag you need to change is `--cgroup-driver`. If it’s already set, you can update it like so:

```bash
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```
Otherwise, you will need to open the systemd file and add the flag to an existing environment line. Then restart the kubelet:

```bash
systemctl daemon-reload
systemctl restart kubelet
```
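After the restart, you can sanity-check that the flag took effect by looking at the running kubelet’s command line (assuming the flag is passed as an argument, as in the drop-in above):

```bash
# The kubelet's arguments should now show the matching cgroup driver
ps -ef | grep '[k]ubelet' | grep -o 'cgroup-driver=[a-z]*'
```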
The `kubectl describe pod` or `kubectl logs` commands can help you diagnose errors. For example:

```bash
kubectl -n ${NAMESPACE} describe pod ${POD_NAME}
kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME}
```
The following error might indicate that something was wrong in the pod network:

```
Error from server (NotFound): the server could not find the requested resource
```
## Default NIC when using flannel as the pod network in Vagrant

If you’re using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel.

Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.

This may lead to problems with flannel. By default, flannel selects the first interface on a host, which leads to all hosts thinking they have the same public IP address. To prevent this issue, pass the `--iface eth1` flag to flannel so that the second interface is chosen.
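One way to apply the flag, assuming flannel was deployed as a DaemonSet named `kube-flannel-ds` in `kube-system` (the name varies across flannel releases):

```bash
# Edit the DaemonSet and append --iface=eth1 to the flanneld container's args, e.g.:
#   args:
#   - --ip-masq
#   - --kube-subnet-mgr
#   - --iface=eth1
kubectl -n kube-system edit daemonset kube-flannel-ds
```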