I am having a problem trying to initialize my Kubernetes cluster with kubeadm init:
m0:~ # kubeadm init --kubernetes-version v1.29.1 --v=5
I0221 19:38:43.018447 2557 initconfiguration.go:122] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I0221 19:38:43.019458 2557 interface.go:432] Looking for default routes with IPv4 addresses
I0221 19:38:43.019500 2557 interface.go:437] Default route transits interface "enp1s0"
I0221 19:38:43.020545 2557 interface.go:209] Interface enp1s0 is up
I0221 19:38:43.020710 2557 interface.go:257] Interface "enp1s0" has 4 addresses :[192.168.0.40/24 <censored>].
I0221 19:38:43.020775 2557 interface.go:224] Checking addr 192.168.0.40/24.
I0221 19:38:43.020792 2557 interface.go:231] IP found 192.168.0.40
I0221 19:38:43.020865 2557 interface.go:263] Found valid IPv4 address 192.168.0.40 for interface "enp1s0".
I0221 19:38:43.020881 2557 interface.go:443] Found active IP 192.168.0.40
I0221 19:38:43.020938 2557 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
[init] Using Kubernetes version: v1.29.1
[preflight] Running pre-flight checks
I0221 19:38:43.039220 2557 checks.go:563] validating Kubernetes and kubeadm version
I0221 19:38:43.039321 2557 checks.go:168] validating if the firewall is enabled and active
I0221 19:38:43.067748 2557 checks.go:203] validating availability of port 6443
I0221 19:38:43.069656 2557 checks.go:203] validating availability of port 10259
I0221 19:38:43.069826 2557 checks.go:203] validating availability of port 10257
I0221 19:38:43.069917 2557 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0221 19:38:43.069951 2557 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0221 19:38:43.069966 2557 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0221 19:38:43.069980 2557 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0221 19:38:43.069997 2557 checks.go:430] validating if the connectivity type is via proxy or direct
I0221 19:38:43.070192 2557 checks.go:469] validating http connectivity to first IP address in the CIDR
I0221 19:38:43.070226 2557 checks.go:469] validating http connectivity to first IP address in the CIDR
I0221 19:38:43.070309 2557 checks.go:104] validating the container runtime
I0221 19:38:43.141926 2557 checks.go:639] validating whether swap is enabled or not
I0221 19:38:43.142227 2557 checks.go:370] validating the presence of executable crictl
I0221 19:38:43.142381 2557 checks.go:370] validating the presence of executable conntrack
I0221 19:38:43.142673 2557 checks.go:370] validating the presence of executable ip
I0221 19:38:43.142750 2557 checks.go:370] validating the presence of executable iptables
I0221 19:38:43.142920 2557 checks.go:370] validating the presence of executable mount
I0221 19:38:43.143243 2557 checks.go:370] validating the presence of executable nsenter
I0221 19:38:43.143405 2557 checks.go:370] validating the presence of executable ebtables
I0221 19:38:43.143502 2557 checks.go:370] validating the presence of executable ethtool
I0221 19:38:43.143560 2557 checks.go:370] validating the presence of executable socat
I0221 19:38:43.143717 2557 checks.go:370] validating the presence of executable tc
I0221 19:38:43.143787 2557 checks.go:370] validating the presence of executable touch
I0221 19:38:43.143913 2557 checks.go:516] running all checks
I0221 19:38:43.171507 2557 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0221 19:38:43.175974 2557 checks.go:605] validating kubelet version
I0221 19:38:43.533244 2557 checks.go:130] validating if the "kubelet" service is enabled and active
I0221 19:38:43.566515 2557 checks.go:203] validating availability of port 10250
I0221 19:38:43.567306 2557 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0221 19:38:43.567670 2557 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0221 19:38:43.567754 2557 checks.go:203] validating availability of port 2379
I0221 19:38:43.568046 2557 checks.go:203] validating availability of port 2380
I0221 19:38:43.568464 2557 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0221 19:38:43.568962 2557 checks.go:828] using image pull policy: IfNotPresent
I0221 19:38:43.600013 2557 checks.go:854] pulling: registry.opensuse.org/kubic/kube-apiserver:v1.29.1
I0221 19:39:17.876318 2557 checks.go:854] pulling: registry.opensuse.org/kubic/kube-controller-manager:v1.29.1
I0221 19:39:52.975900 2557 checks.go:854] pulling: registry.opensuse.org/kubic/kube-scheduler:v1.29.1
I0221 19:40:22.978915 2557 checks.go:854] pulling: registry.opensuse.org/kubic/kube-proxy:v1.29.1
I0221 19:40:51.554536 2557 checks.go:854] pulling: registry.opensuse.org/kubic/coredns:v1.11.1
W0221 19:41:20.193466 2557 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.opensuse.org/kubic/pause:3.9" as the CRI sandbox image.
I0221 19:41:20.225594 2557 checks.go:854] pulling: registry.opensuse.org/kubic/pause:3.9
I0221 19:41:33.693945 2557 checks.go:854] pulling: registry.opensuse.org/kubic/etcd:3.5.10-0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0221 19:42:05.471892 2557 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0221 19:42:05.714235 2557 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m0.k8b.intranet.domain] and IPs [10.96.0.1 192.168.0.40]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0221 19:42:06.528754 2557 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0221 19:42:07.202650 2557 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0221 19:42:08.162011 2557 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0221 19:42:09.294361 2557 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m0.k8b.intranet.domain] and IPs [192.168.0.40 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m0.k8b.intranet.domain] and IPs [192.168.0.40 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0221 19:42:10.980973 2557 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0221 19:42:11.569973 2557 kubeconfig.go:112] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0221 19:42:12.008538 2557 kubeconfig.go:112] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I0221 19:42:13.221761 2557 kubeconfig.go:112] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0221 19:42:13.977142 2557 kubeconfig.go:112] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0221 19:42:14.335375 2557 kubeconfig.go:112] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0221 19:42:15.187392 2557 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0221 19:42:15.187531 2557 manifests.go:102] [control-plane] getting StaticPodSpecs
I0221 19:42:15.188295 2557 certs.go:519] validating certificate period for CA certificate
I0221 19:42:15.188484 2557 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0221 19:42:15.188566 2557 manifests.go:128] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0221 19:42:15.188591 2557 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0221 19:42:15.188614 2557 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0221 19:42:15.190624 2557 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0221 19:42:15.190770 2557 manifests.go:102] [control-plane] getting StaticPodSpecs
I0221 19:42:15.191335 2557 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0221 19:42:15.191412 2557 manifests.go:128] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0221 19:42:15.191438 2557 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0221 19:42:15.191462 2557 manifests.go:128] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0221 19:42:15.191485 2557 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0221 19:42:15.191509 2557 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0221 19:42:15.193500 2557 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0221 19:42:15.193831 2557 manifests.go:102] [control-plane] getting StaticPodSpecs
I0221 19:42:15.194388 2557 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0221 19:42:15.195703 2557 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0221 19:42:15.195802 2557 kubelet.go:68] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0221 19:42:17.638548 2557 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
        cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:109
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:124
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/lib64/go/1.21/src/runtime/proc.go:267
runtime.goexit
        /usr/lib64/go/1.21/src/runtime/asm_amd64.s:1650
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:124
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/lib64/go/1.21/src/runtime/proc.go:267
runtime.goexit
        /usr/lib64/go/1.21/src/runtime/asm_amd64.s:1650
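One warning in the preflight output catches my eye: containerd reports "registry.k8s.io/pause:3.8" as its sandbox image, while kubeadm wants "registry.opensuse.org/kubic/pause:3.9". I don't know whether that is related to the timeout, but as far as I understand it aligning them is a single setting in /etc/containerd/config.toml. An untested sketch (the section name assumes the containerd 1.6/1.7 CRI config layout, and that the packaged config already contains a sandbox_image line; otherwise generate one with 'containerd config default' first):

# point the CRI sandbox image at the one kubeadm expects, i.e. end up with:
#   [plugins."io.containerd.grpc.v1.cri"]
#     sandbox_image = "registry.opensuse.org/kubic/pause:3.9"
sed -i 's#^\(\s*sandbox_image = \).*#\1"registry.opensuse.org/kubic/pause:3.9"#' /etc/containerd/config.toml
systemctl restart containerd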
My combustion script:
#!/bin/bash
cp vconsole.conf /etc/vconsole.conf && chmod 644 /etc/vconsole.conf
rm /etc/localtime && ln -sf /usr/share/zoneinfo/Etc/UTC /etc/localtime
echo root:root | chpasswd
cp sshd_config /etc/ssh
mkdir -p /root/.ssh
echo <censored> >> /root/.ssh/authorized_keys
# combustion: network
exec > >(exec tee -a /dev/tty0) 2>&1
echo test
zypper --non-interactive refresh
zypper --non-interactive install --no-recommends containerd conntrack-tools socat ethtool kubernetes1.29-kubelet kubernetes1.29-kubeadm
systemctl enable containerd
systemctl enable kubelet
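One thing I notice now: the script enables containerd but never writes an /etc/containerd/config.toml, while kubeadm set KubeletConfiguration.cgroupDriver to "systemd" (see the first lines of the log above). If the packaged containerd default still drives runc with cgroupfs, that mismatch is a known cause of exactly this wait-control-plane timeout. If that turns out to be the problem, I imagine the addition to the combustion script would look something like this (untested sketch; the SystemdCgroup key is from containerd 1.6/1.7's default config):

# generate containerd's default config and switch runc to the systemd
# cgroup driver, matching the kubelet's cgroupDriver=systemd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml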
From the kubelet logs it looks like the kubelet is trying to register against the API server on this same node before the control plane has even been initialized: openSUSE Paste
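In case someone wants output from them, these are the checks I plan to run next (the endpoint and port are taken from the init output above):

# is anything answering on the advertised API endpoint?
curl -k https://192.168.0.40:6443/healthz
# did any control-plane containers start, and are they crash-looping?
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a
# the kubelet's own view of the failure
journalctl -u kubelet --no-pager -n 100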