Kubernetes: The cluster-info ConfigMap does not yet contain a JWS signature for token ID "462w80", will try again

I have two openSUSE Leap Micro 5.5 VMs (image: https://download.opensuse.org/distribution/leap-micro/5.5/appliances/openSUSE-Leap-Micro.x86_64-Default-qcow.qcow2), both set up with the following combustion script:

cp vconsole.conf /etc/vconsole.conf && chmod 644 /etc/vconsole.conf
rm /etc/localtime && ln -sf /usr/share/zoneinfo/Etc/UTC /etc/localtime
echo root:root | chpasswd

cp sshd_config /etc/ssh
mkdir -p /root/.ssh
echo <censored> >> /root/.ssh/authorized_keys
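
Side note on how that script reaches the VM: as far as I understand, combustion picks it up from a filesystem labeled "combustion" that contains it as combustion/script. A minimal sketch of preparing such a config drive (the device /dev/vdb and the file name combustion.sh are placeholders, not my actual setup):

mkfs.ext4 -L combustion /dev/vdb          # the "combustion" label is what first boot searches for
mount /dev/vdb /mnt
mkdir -p /mnt/combustion
cp combustion.sh /mnt/combustion/script   # the file has to be named "script"
chmod +x /mnt/combustion/script
umount /mnt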

The command history on the first guest, vm0.intranet.domain:

systemctl enable containerd.service 
CNI_PLUGINS_VERSION="v1.3.0"
ARCH="amd64"
DEST="/opt/cni/bin"
sudo mkdir -p "$DEST"
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_PLUGINS_VERSION}/cni-plugins-linux-${ARCH}-${CNI_PLUGINS_VERSION}.tgz" | sudo tar -C "$DEST" -xz
DOWNLOAD_DIR="/usr/local/bin"
sudo mkdir -p "$DOWNLOAD_DIR"
CRICTL_VERSION="v1.28.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
ARCH="amd64"
cd $DOWNLOAD_DIR
sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
sudo chmod +x {kubeadm,kubelet}
RELEASE_VERSION="v0.16.2"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubelet/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl enable kubelet
zypper install conntrack-tools
transactional-update pkg in -y cnotainerd
transactional-update pkg in -y containerd
history 
transactional-update --continue shell
reboot
DOWNLOAD_DIR="/usr/local/bin"
sudo mkdir -p "$DOWNLOAD_DIR"
CRICTL_VERSION="v1.28.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
ARCH="amd64"
cd $DOWNLOAD_DIR
sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
sudo chmod +x {kubeadm,kubelet}
RELEASE_VERSION="v0.16.2"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubelet/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
cd
systemctl enable --now kubelet
history 
vim /etc/hostname 
hostname
vim /etc/hostname 
hostname
cat /sys/class/dmi/id/product_uuid
lsblk 
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
echo $?
sudo modprobe br_netfilter
echo $?
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system
sudo tee /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF

sudo sysctl --system
toolbox 
ip a
kubeadm init
cat /etc/kubernetes/admin.conf
toolbox
kubeadm token create --print-join-command
kubeadm token list
history 
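
To check whether the control plane has actually published a JWS signature for a token, the cluster-info ConfigMap in the kube-public namespace can be inspected on vm0. A rough diagnostic sketch (it assumes kubectl is installed, which is not part of the history above):

export KUBECONFIG=/etc/kubernetes/admin.conf
# every usable bootstrap token should appear as a jws-kubeconfig-<token-id> key in the data section
kubectl -n kube-public get configmap cluster-info -o yaml
# tokens the control plane currently knows about, with their TTLs
kubeadm token list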

And the command history on the second machine, vm1.intranet.domain:

systemctl enable containerd.service 
CNI_PLUGINS_VERSION="v1.3.0"
ARCH="amd64"
DEST="/opt/cni/bin"
sudo mkdir -p "$DEST"
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_PLUGINS_VERSION}/cni-plugins-linux-${ARCH}-${CNI_PLUGINS_VERSION}.tgz" | sudo tar -C "$DEST" -xz
DOWNLOAD_DIR="/usr/local/bin"
sudo mkdir -p "$DOWNLOAD_DIR"
CRICTL_VERSION="v1.28.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
ARCH="amd64"
cd $DOWNLOAD_DIR
sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
sudo chmod +x {kubeadm,kubelet}
RELEASE_VERSION="v0.16.2"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubelet/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl enable kubelet
zypper install conntrack-tools
transactional-update pkg in -y cnotainerd
transactional-update pkg in -y containerd
transactional-update --continue shell
reboot
DOWNLOAD_DIR="/usr/local/bin"
sudo mkdir -p "$DOWNLOAD_DIR"
CRICTL_VERSION="v1.28.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
ARCH="amd64"
cd $DOWNLOAD_DIR
sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
sudo chmod +x {kubeadm,kubelet}
RELEASE_VERSION="v0.16.2"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubelet/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
cd
systemctl enable --now kubelet
vim /etc/hostname 
hostname
cat /sys/class/dmi/id/product_uuid
lsblk 
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
echo $?
sudo modprobe br_netfilter
echo $?
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system
sudo tee /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF

sudo sysctl --system
kubeadm join 192.168.0.40:6443 --token fvdecu.its17d6ti0kw48fi         --discovery-token-ca-cert-hash sha256:42b4f7038ccd3564a889b1355471af4de6b88fc817e97a1f178a8f20a1808919
kubeadm join 192.168.0.40:6443 --token yau33m.z0i8pu6poi533z9x --discovery-token-ca-cert-hash sha256:42b4f7038ccd3564a889b1355471af4de6b88fc817e97a1f178a8f20a1808919 
kubeadm join 192.168.0.40:6443 --token 462w80.ijkdd5cvhrtxr4yh --discovery-token-ca-cert-hash sha256:42b4f7038ccd3564a889b1355471af4de6b88fc817e97a1f178a8f20a1808919
kubeadm join 192.168.0.40:6443 --token 462w80.ijkdd5cvhrtxr4yh --discovery-token-ca-cert-hash sha256:42b4f7038ccd3564a889b1355471af4de6b88fc817e97a1f178a8f20a1808919 --v=5
history 

I am getting the following error:

kubeadm join 192.168.0.40:6443 --token 462w80.ijkdd5cvhrtxr4yh --discovery-token-ca-cert-hash sha256:42b4f7038ccd3564a889b1355471af4de6b88fc817e97a1f178a8f20a1808919 --v=5
I0207 23:37:17.527675    3630 join.go:413] [preflight] found NodeName empty; using OS hostname as NodeName
I0207 23:37:17.530090    3630 initconfiguration.go:122] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
[preflight] Running pre-flight checks
I0207 23:37:17.531390    3630 preflight.go:93] [preflight] Running general checks
I0207 23:37:17.548369    3630 checks.go:280] validating the existence of file /etc/kubernetes/kubelet.conf
I0207 23:37:17.548475    3630 checks.go:280] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0207 23:37:17.548511    3630 checks.go:104] validating the container runtime
I0207 23:37:17.663600    3630 checks.go:639] validating whether swap is enabled or not
I0207 23:37:17.663811    3630 checks.go:370] validating the presence of executable crictl
I0207 23:37:17.663934    3630 checks.go:370] validating the presence of executable conntrack
I0207 23:37:17.664199    3630 checks.go:370] validating the presence of executable ip
I0207 23:37:17.664332    3630 checks.go:370] validating the presence of executable iptables
I0207 23:37:17.664424    3630 checks.go:370] validating the presence of executable mount
I0207 23:37:17.664686    3630 checks.go:370] validating the presence of executable nsenter
I0207 23:37:17.666394    3630 checks.go:370] validating the presence of executable ebtables
I0207 23:37:17.666528    3630 checks.go:370] validating the presence of executable ethtool
        [WARNING FileExisting-ethtool]: ethtool not found in system path
I0207 23:37:17.666687    3630 checks.go:370] validating the presence of executable socat
        [WARNING FileExisting-socat]: socat not found in system path
I0207 23:37:17.666763    3630 checks.go:370] validating the presence of executable tc
I0207 23:37:17.666794    3630 checks.go:370] validating the presence of executable touch
I0207 23:37:17.666844    3630 checks.go:516] running all checks
I0207 23:37:17.736808    3630 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0207 23:37:17.745328    3630 checks.go:605] validating kubelet version
I0207 23:37:18.092605    3630 checks.go:130] validating if the "kubelet" service is enabled and active
I0207 23:37:18.549335    3630 checks.go:203] validating availability of port 10250
I0207 23:37:18.550801    3630 checks.go:280] validating the existence of file /etc/kubernetes/pki/ca.crt
I0207 23:37:18.551669    3630 checks.go:430] validating if the connectivity type is via proxy or direct
I0207 23:37:18.551920    3630 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0207 23:37:18.552418    3630 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0207 23:37:18.552582    3630 join.go:532] [preflight] Discovering cluster-info
I0207 23:37:18.552719    3630 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "192.168.0.40:6443"
I0207 23:37:18.850824    3630 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "462w80", will try again
I0207 23:37:23.907908    3630 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "462w80", will try again
I0207 23:37:29.414180    3630 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "462w80", will try again
I0207 23:37:35.419239    3630 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "462w80", will try again
[... the same "will try again" message repeats roughly every six seconds until the discovery attempt gives up ...]
I0207 23:42:14.179151    3630 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "462w80", will try again
could not find a JWS signature in the cluster-info ConfigMap for token ID "462w80"
k8s.io/kubernetes/cmd/kubeadm/app/discovery/token.getClusterInfo.func1
        cmd/kubeadm/app/discovery/token/token.go:224
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
        vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:226
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
        vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:227
k8s.io/apimachinery/pkg/util/wait.JitterUntil
        vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:204
k8s.io/kubernetes/cmd/kubeadm/app/discovery/token.getClusterInfo
        cmd/kubeadm/app/discovery/token/token.go:214
k8s.io/kubernetes/cmd/kubeadm/app/discovery/token.retrieveValidatedConfigInfo
        cmd/kubeadm/app/discovery/token/token.go:81
k8s.io/kubernetes/cmd/kubeadm/app/discovery/token.RetrieveValidatedConfigInfo
        cmd/kubeadm/app/discovery/token/token.go:53
k8s.io/kubernetes/cmd/kubeadm/app/discovery.DiscoverValidatedKubeConfig
        cmd/kubeadm/app/discovery/discovery.go:83
k8s.io/kubernetes/cmd/kubeadm/app/discovery.For
        cmd/kubeadm/app/discovery/discovery.go:43
k8s.io/kubernetes/cmd/kubeadm/app/cmd.(*joinData).TLSBootstrapCfg
        cmd/kubeadm/app/cmd/join.go:533
k8s.io/kubernetes/cmd/kubeadm/app/cmd.(*joinData).InitCfg
        cmd/kubeadm/app/cmd/join.go:543
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runPreflight
        cmd/kubeadm/app/cmd/phases/join/preflight.go:98
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
        cmd/kubeadm/app/cmd/join.go:180
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:267
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1650
couldn't validate the identity of the API Server
k8s.io/kubernetes/cmd/kubeadm/app/discovery.For
        cmd/kubeadm/app/discovery/discovery.go:45
k8s.io/kubernetes/cmd/kubeadm/app/cmd.(*joinData).TLSBootstrapCfg
        cmd/kubeadm/app/cmd/join.go:533
k8s.io/kubernetes/cmd/kubeadm/app/cmd.(*joinData).InitCfg
        cmd/kubeadm/app/cmd/join.go:543
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runPreflight
        cmd/kubeadm/app/cmd/phases/join/preflight.go:98
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
        cmd/kubeadm/app/cmd/join.go:180
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:267
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1650
error execution phase preflight
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
        cmd/kubeadm/app/cmd/join.go:180
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:267
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1650

I just discovered that this happens because:

kube-system 37s Warning FailedMount pod/kube-controller-manager-m0.k8b.raymwm.mwm MountVolume.SetUp failed for volume flexvolume-dir : mkdir /usr/libexec/kubernetes: read-only file system

This means the kube-controller-manager pod never starts, so its bootstrapsigner controller never signs the cluster-info ConfigMap, which is exactly why the join keeps waiting for a JWS signature. So Leap Micro's immutable /usr is interfering with the Kubernetes installation.
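
A possible workaround, sketched under the assumption that moving the flexvolume plugin directory to a writable location is enough; the path /var/lib/kubelet/volumeplugins is my own pick, not something from the setup above:

cat <<EOF > kubeadm-config.yaml
# keep the flexvolume plugin dir off the read-only /usr/libexec
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    flex-volume-plugin-dir: /var/lib/kubelet/volumeplugins
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
volumePluginDir: /var/lib/kubelet/volumeplugins
EOF
kubeadm init --config kubeadm-config.yaml

On a control plane that has already been initialized, adjusting the flexvolume-dir hostPath (and the --flex-volume-plugin-dir flag, if present) in /etc/kubernetes/manifests/kube-controller-manager.yaml should have the same effect, since that is the volume the FailedMount event complains about.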
