Alvin Lucillo

kubectl unable to connect

4 min read

There are many reasons why kubectl can’t connect to the API server; we’ll explore one of them today. A common symptom is an error message ending in “did you specify the right host or port?”. The first thing to do is check whether the kube-system control plane components are all up and running, but you can’t check that with kubectl itself. Use either crictl or docker instead, depending on the container runtime.

cluster2-controlplane ~  k get all
The connection to the server cluster2-controlplane:6443 was refused - did you specify the right host or port?
The connection to the server cluster2-controlplane:6443 was refused - did you specify the right host or port?
The connection to the server cluster2-controlplane:6443 was refused - did you specify the right host or port?
The connection to the server cluster2-controlplane:6443 was refused - did you specify the right host or port?
The connection to the server cluster2-controlplane:6443 was refused - did you specify the right host or port?
The connection to the server cluster2-controlplane:6443 was refused - did you specify the right host or port?
The connection to the server cluster2-controlplane:6443 was refused - did you specify the right host or port?
The connection to the server cluster2-controlplane:6443 was refused - did you specify the right host or port?
The connection to the server cluster2-controlplane:6443 was refused - did you specify the right host or port?
The connection to the server cluster2-controlplane:6443 was refused - did you specify the right host or port?
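Before inspecting containers, you can confirm at the TCP level that nothing is listening on the host and port from the error message. A minimal sketch (the port_open helper and the bash /dev/tcp trick are my additions, not from the original steps):

```shell
# port_open HOST PORT: succeeds only if a TCP connection can be opened.
# Uses bash's built-in /dev/tcp pseudo-device, so no extra tools are needed.
port_open() {
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

port_open cluster2-controlplane 6443 || echo "nothing listening on 6443"
```

A connection refused here, with the node itself reachable, points at the kube-apiserver process rather than at kubeconfig or DNS.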

In the crictl ps output below, we can see that the kube-apiserver container isn’t running for some reason. The crictl ps -a output also shows kube-scheduler and kube-controller-manager in Exited state.

cluster2-controlplane ~ crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
9456aeed5acb3       6331715a2ae96       26 minutes ago      Running             calico-kube-controllers   0                   76a2953ceeb78       calico-kube-controllers-5745477d4d-srmzq   kube-system
417cc9bed413d       ead0a4a53df89       26 minutes ago      Running             coredns                   0                   2e831f18839f5       coredns-7484cd47db-vs9th                   kube-system
56d3831054b82       ead0a4a53df89       26 minutes ago      Running             coredns                   0                   5843ceb2704cd       coredns-7484cd47db-7zkvc                   kube-system
24201b2ed0c3f       c9fe3bce8a6d8       26 minutes ago      Running             kube-flannel              0                   f027b4a01122b       canal-f9b44                                kube-system
96d8428d95398       feb26d4585d68       26 minutes ago      Running             calico-node               0                   f027b4a01122b       canal-f9b44                                kube-system
a8e440d48b786       040f9f8aac8cd       26 minutes ago      Running             kube-proxy                0                   5d26ec0b194e8       kube-proxy-8gn5b                           kube-system
31f87034b463b       a9e7e6b294baf       26 minutes ago      Running             etcd                      0                   2981519db4082       etcd-cluster2-controlplane                 kube-system

cluster2-controlplane ~  crictl ps -a
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                             NAMESPACE
9456aeed5acb3       6331715a2ae96       27 minutes ago      Running             calico-kube-controllers   0                   76a2953ceeb78       calico-kube-controllers-5745477d4d-srmzq        kube-system
417cc9bed413d       ead0a4a53df89       27 minutes ago      Running             coredns                   0                   2e831f18839f5       coredns-7484cd47db-vs9th                        kube-system
56d3831054b82       ead0a4a53df89       27 minutes ago      Running             coredns                   0                   5843ceb2704cd       coredns-7484cd47db-7zkvc                        kube-system
24201b2ed0c3f       c9fe3bce8a6d8       27 minutes ago      Running             kube-flannel              0                   f027b4a01122b       canal-f9b44                                     kube-system
96d8428d95398       feb26d4585d68       27 minutes ago      Running             calico-node               0                   f027b4a01122b       canal-f9b44                                     kube-system
db6b93745bb19       7dd6ea186aba0       27 minutes ago      Exited              install-cni               0                   f027b4a01122b       canal-f9b44                                     kube-system
a8e440d48b786       040f9f8aac8cd       27 minutes ago      Running             kube-proxy                0                   5d26ec0b194e8       kube-proxy-8gn5b                                kube-system
e0827a9a77899       a389e107f4ff1       27 minutes ago      Exited              kube-scheduler            0                   4f88f38271501       kube-scheduler-cluster2-controlplane            kube-system
31f87034b463b       a9e7e6b294baf       27 minutes ago      Running             etcd                      0                   2981519db4082       etcd-cluster2-controlplane                      kube-system
d3fdd69d04a5d       8cab3d2a8bd0f       27 minutes ago      Exited              kube-controller-manager   0                   f8c7e69edc5c1       kube-controller-manager-cluster2-controlplane   kube-system

Check if the manifest for the kube-apiserver exists: cat /etc/kubernetes/manifests/kube-apiserver.yaml
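All four control-plane static pod manifests should live in that same directory. A quick check, sketched as a function (the check_manifests name is mine; the path is the kubeadm default):

```shell
# Print each expected static pod manifest and whether it exists on disk.
check_manifests() {
  local dir=${1:-/etc/kubernetes/manifests}
  local m
  for m in kube-apiserver kube-controller-manager kube-scheduler etcd; do
    if [ -f "$dir/$m.yaml" ]; then
      echo "$m: present"
    else
      echo "$m: MISSING"
    fi
  done
}

check_manifests
```

If a manifest is present but its container is down, the problem is with kubelet (which runs static pods), not with the manifest itself.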

Check the status of kubelet, the component responsible for keeping the containers running.

cluster2-controlplane ~  systemctl status kubelet
Unit kubelet.service could not be found.

The output above shows that kubelet isn’t installed: its systemd unit doesn’t exist. We need to reinstall it.

First, look up the kube-apiserver version in its manifest; here it’s v1.32.0.

cluster2-controlplane ~  cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  ...
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.81.149
    ...
    image: registry.k8s.io/kube-apiserver:v1.32.0
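Instead of eyeballing the manifest, the version tag can be extracted with grep and sed. A sketch assuming the standard registry.k8s.io image naming (the apiserver_version function is mine):

```shell
# Print only the vX.Y.Z tag from the kube-apiserver image line.
apiserver_version() {
  grep -m1 'image:.*kube-apiserver:' "$1" |
    sed 's/.*kube-apiserver:\(v[0-9][0-9.]*\).*/\1/'
}

apiserver_version /etc/kubernetes/manifests/kube-apiserver.yaml  # v1.32.0 on this node
```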

Next, find the exact package version, including the Debian revision, with apt-cache madison. To match the cluster’s v1.32.0, let’s choose 1.32.0-1.1.

sudo apt update
sudo apt-cache madison kubeadm
Get:2 https://download.docker.com/linux/ubuntu jammy InRelease [48.5 kB]
...
Reading state information... Done
84 packages can be upgraded. Run 'apt list --upgradable' to see them.
   kubeadm | 1.32.10-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages
   kubeadm | 1.32.9-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages
   kubeadm | 1.32.8-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages
   kubeadm | 1.32.7-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages
   kubeadm | 1.32.6-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages
   kubeadm | 1.32.5-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages
   kubeadm | 1.32.4-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages
   kubeadm | 1.32.3-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages
   kubeadm | 1.32.2-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages
   kubeadm | 1.32.1-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages
   kubeadm | 1.32.0-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages
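The apt pin is just the manifest tag without the leading “v”, plus the Debian revision from the madison output. A trivial sketch of that mapping (the -1.1 revision is taken from the listing above):

```shell
# Map the image tag (v1.32.0) to the apt package pin (1.32.0-1.1).
api_ver=v1.32.0
pkg_ver="${api_ver#v}-1.1"   # strip the leading "v", append the revision
echo "$pkg_ver"              # → 1.32.0-1.1
```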

Let’s install kubelet and kubectl pinned to that version:

sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.32.0-1.1' kubectl='1.32.0-1.1' && \
sudo apt-mark hold kubelet kubectl
kubelet was already not on hold.
Canceled hold on kubectl.
Hit:2 https://download.docker.com/linux/ubuntu jammy InRelease
...
kubelet set on hold.
kubectl set on hold.

Reload the systemd daemon and restart kubelet:

sudo systemctl daemon-reload
sudo systemctl restart kubelet
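The API server can take a little while to come back after kubelet restarts, since kubelet has to re-create the static pods. A small polling helper (the retry_until name and attempt count are mine; bump the count as needed):

```shell
# retry_until N CMD...: run CMD every 2 seconds, up to N attempts.
retry_until() {
  local tries=$1 i
  shift
  for i in $(seq "$tries"); do
    "$@" && return 0
    sleep 2
  done
  return 1
}

retry_until 5 kubectl get --raw /healthz >/dev/null 2>&1 && echo "API server is back"
```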

After those steps, kubectl can reach the API server again, and the kubelet service is up and running:

sudo systemctl status kubelet
 kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Mon 2025-11-17 12:26:20 UTC; 12s ago
       Docs: https://kubernetes.io/docs/
   Main PID: 24642 (kubelet)
      Tasks: 32 (limit: 154668)
     Memory: 31.0M

Source: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/#upgrading-control-plane-nodes