Kubic - External IP <none> or <pending> in Kubernetes

Hi all,

Sorry if this post is not for this topic.

Recently I installed openSUSE Kubic with the Single-node Kubernetes cluster option, following this wiki (link), on my home network with DHCP.

# cat /usr/lib/os-release
NAME="openSUSE MicroOS"
VERSION="20200604"
ID="opensuse-microos"
ID_LIKE="suse opensuse opensuse-tumbleweed"
VERSION_ID="20200604"
# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2"}

I can deploy containers correctly; however, I can’t expose my application with an external IP.

Following the Kubernetes and Kubic tutorials, I deployed Hello-kubic (link). It runs correctly, but the problem appears when I create a service to route traffic so the application can be reached externally.

I have tried both approaches:

With type=“LoadBalancer” (this expose method comes by default in the applied YAML):

NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello-kubic   LoadBalancer   10.111.94.94   <pending>     80:30307/TCP   14s
kubernetes    ClusterIP      10.96.0.1      <none>        443/TCP        40h
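For context: <pending> means no load-balancer implementation has claimed the service yet; on a bare-metal cluster nothing fulfills type=LoadBalancer until something like MetalLB is running. One way to inspect the service:

```shell
# Show the service's full status and events; without a working
# load-balancer controller the Ingress/external IP stays empty.
kubectl describe svc hello-kubic
```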

With type=“NodePort”, created with the command:


# kubectl expose deployment/hello-kubic --type="NodePort" --port 80

NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello-kubic   NodePort    10.101.38.52   <none>        80:31514/TCP   11s
kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP        40h

So I can’t get an external IP, which makes it impossible for me to access the application from outside the node with curl.
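For what it’s worth, a NodePort service is normally still reachable on the node’s own LAN address at the allocated high port, even with EXTERNAL-IP <none>. The node IP below is a placeholder for this home network; the port is the nodePort from the listing above:

```shell
# NodePort services listen on every node's own IP at the high port.
# 192.168.1.10 is a placeholder for the node's actual LAN address.
curl http://192.168.1.10:31514/
```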

Note: for the LoadBalancer option this issue appears in different forums, like this one (link).
Therefore I installed the MetalLB package from the SUSE repositories and applied it with the Layer 2 configuration.


# kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-6c76dd6474-l8h8z   1/1     Running   0          16h
speaker-l7vrl                 1/1     Running   0          16h
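In case it helps anyone debugging the same setup: the MetalLB logs usually say why no address was assigned. The label selectors below assume the standard MetalLB manifests:

```shell
# The controller logs report ConfigMap parse errors;
# the speaker logs show address announcements (or their absence).
kubectl logs -n metallb-system -l component=controller --tail=20
kubectl logs -n metallb-system -l component=speaker --tail=20
```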

Configuration for MetalLB:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.50/70


But I still have this issue. Some posts propose adapting the iptables rules, but I think on this version of SUSE neither iptables nor SuSEfirewall2 is installed.


# zypper search firewall

S | Name                       | Summary                                                             | Type
--+----------------------------+---------------------------------------------------------------------+--------
  | firewall-applet            | Firewall panel applet                                               | package
  | firewall-config            | Firewall configuration application                                  | package
  | firewall-macros            | FirewallD RPM macros                                                | package
  | firewalld                  | A firewall daemon with D-Bus interface providing a dynamic firewall | package
  | firewalld-lang             | Translations for package firewalld                                  | package
  | firewalld-rpcbind-helper   | Tool for static port assignment of NFSv3, ypserv, ypbind services   | package
  | python3-firewall           | Python3 bindings for FirewallD                                      | package
  | susefirewall2-to-firewalld | Basic SuSEfirewall2 to FirewallD migration script                   | package
  | yast2-firewall             | YaST2 - Firewall Configuration                                      | package
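None of those packages show the installed flag, but to rule the firewall out one can check directly (assumes systemd; note that the KUBE-* chains kube-proxy creates in iptables are expected and are not a host firewall):

```shell
# Is firewalld running at all?
systemctl is-active firewalld
# Any filter rules present? kube-proxy's KUBE-* chains are normal.
iptables -L -n | head -n 20
```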

If any of you have some information or advice, I would really appreciate it.

Thank you very much.

Hi all,

Here I put the solution I found for the LoadBalancer service, and my mistake, in case someone else has the same issue. However, I will keep trying to find the cause for the “NodePort” option.
The error was due to a wrong parameter in the ConfigMap file for MetalLB.


apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.50/70 => the correct value is 192.168.1.50-192.168.1.70
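The original value fails because 192.168.1.50/70 is not a valid IPv4 CIDR: an IPv4 prefix length can be at most 32, so MetalLB cannot parse the pool. A quick sanity check in plain shell:

```shell
# Extract the prefix length from the pool entry and check the
# IPv4 limit of /32.
cidr="192.168.1.50/70"
prefix="${cidr#*/}"
if [ "$prefix" -gt 32 ]; then
  echo "invalid IPv4 prefix length: /$prefix"
fi
```

MetalLB also accepts explicit ranges like 192.168.1.50-192.168.1.70, which is easier to get right for a pool that doesn’t align to a CIDR boundary.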

If you have already deployed MetalLB, you can edit the ConfigMap with:



kubectl edit configmap config -n metallb-system
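An alternative to editing in place is to re-apply the corrected ConfigMap, with the pool written as a range:

```shell
# Re-apply the MetalLB ConfigMap with the corrected address pool.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.50-192.168.1.70
EOF
```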

I deployed hello-kubic again and voilà!
Now I get an external IP.


                       
 kubectl get svc
 NAME          TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
 hello-kubic   LoadBalancer   10.97.119.87   192.168.1.50   80:31473/TCP   6s
 kubernetes    ClusterIP      10.96.0.1      <none>         443/TCP        5d3h
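With the external IP assigned, the app should now answer from any machine on the same LAN (address taken from the listing above):

```shell
# MetalLB announces 192.168.1.50 via Layer 2 ARP on the local network.
curl http://192.168.1.50/
```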
  

Regards