After updates of 2019-11-09, an NFS mount fails

openSUSE Leap 15.1

I updated our systems, clients and server, today, 2019-11-09. Now the clients cannot mount an NFS volume from the server.

$ sudo mount /u
mount.nfs: Network is unreachable

Looking at the NFS Server configuration in YaST, this oddity is displayed:

Firewall not configurable
  Some firewalld services are not available:
   - nfs-kernel-server (Not available)
  These services must be defined to configure the firewall.

So far, this is the only peculiarity I can find, and I do not understand what the message means.
nfs-kernel-server, “Support Utilities for Kernel nfsd,” is installed.
NFS is open in the firewall. I can connect to port 2049 at the server.
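For example, a quick probe with netcat against the server's IPv4 address is one way to verify this (the address is my server's; substitute as appropriate):

$ nc -zv 192.168.69.246 2049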

The Services Manager shows this:

nfs-server │ On Boot │ Active (Exited) │ NFS server and services

which seems to imply the NFS server is not active? (Can it be Active and Exited simultaneously?) Although I suspect nfs-server is not the same thing as nfs-kernel-server.

I have removed/added the exported volumes on the server.
I have removed/added the NFS clients on the client systems.

I note that NFS mounts from a (Linux-based) NAS still work as expected on these clients. The implication is that this is a server issue.

Is the firewall active?

sudo systemctl status firewalld

If active, check

firewall-cmd --list-services

If necessary, do

sudo firewall-cmd --add-service=nfs --permanent
sudo firewall-cmd --reload 
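As an aside, the Active (Exited) state you saw is not necessarily a problem: as far as I know, nfsserver.service is only a thin wrapper that starts the kernel's nfsd threads and then exits, while systemd keeps it marked active. One way to check how the unit is defined:

systemctl show nfsserver -p Type -p RemainAfterExit

If it reports RemainAfterExit=yes, then "active (exited)" is the normal healthy state.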

These may be helpful as well…
https://forums.opensuse.org/showthread.php/531849-nfs-kernel-service-error-in-exporting-NFS-directory
https://www.hiroom2.com/2018/06/12/opensuse-15-nfs-kernel-server-en/

I missed this comment…

Check

sudo systemctl status nfsserver

$ sudo systemctl status nfsserver
● nfsserver.service - Alias for NFS server
   Loaded: loaded (/usr/lib/systemd/system/nfsserver.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sat 2019-11-09 23:38:12 MST; 14h ago
 Main PID: 23708 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/nfsserver.service

Nov 09 23:38:12 sma-server3 systemd[1]: Starting Alias for NFS server...
Nov 09 23:38:12 sma-server3 systemd[1]: Started Alias for NFS server.

That looks ok…and the firewall?

$ sudo systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-11-09 21:36:17 MST; 16h ago
     Docs: man:firewalld(1)
 Main PID: 1122 (firewalld)
    Tasks: 2 (limit: 4915)
   CGroup: /system.slice/firewalld.service
           └─1122 /usr/bin/python3 -Es /usr/sbin/firewalld --nofork --nopid

Nov 09 21:36:16 sma-server3 systemd[1]: Starting firewalld - dynamic firewall daemon...
Nov 09 21:36:17 sma-server3 systemd[1]: Started firewalld - dynamic firewall daemon.

$ sudo firewall-cmd --list-services
ssh dhcpv6-client dhcp dhcpv6 dns http https imap imaps ipp ipp-client mdns nfs ntp pop3s samba samba-client smtp smtps smtp-submission squid mysql apache2 apache2-ssl apcupsd

$ sudo firewall-cmd --add-service=nfs --permanent
Warning: ALREADY_ENABLED: nfs
success

$ sudo firewall-cmd --reload
success

After all of that, at the client system:

$ sudo mount /u
mount.nfs: Network is unreachable

It is not the firewall. I stopped the firewall; the mount still failed.

Hmm. Here’s a hint. It appears to be using IPv6, and failing. 2002:c0a8:45f6::c0a8:45f6 is an IPv6 address for the server, derived from 192.168.69.246.

$ mount -v /u
mount.nfs: timeout set for Sun Nov 10 15:01:34 2019
mount.nfs: trying text-based options 'vers=4.2,addr=2002:c0a8:45f6::c0a8:45f6,clientaddr=::'
mount.nfs: mount(2): Network is unreachable
mount.nfs: Network is unreachable

$ ping -6 2002:c0a8:45f6::c0a8:45f6
connect: Network is unreachable
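(For reference, 2002::/16 is the 6to4 prefix, and the groups that follow are simply the IPv4 octets in hex; printf shows the mapping:

$ printf '2002:%02x%02x:%02x%02x::\n' 192 168 69 246
2002:c0a8:45f6::

so the address really does encode 192.168.69.246.)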

The firewall also looks as expected (although there was no need to try to open the nfs port again; it was already enabled).

Can you share your client IP configuration?

ip a
ip r

Can you reach the server by IP address? (For example ping the server.)
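For instance, something like this, substituting your server's actual IPv4 address:

ping -c 4 <server-address>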

Client system:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 30:85:a9:ad:05:31 brd ff:ff:ff:ff:ff:ff
    inet 192.168.69.115/24 brd 192.168.69.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::3285:a9ff:fead:531/64 scope link 
       valid_lft forever preferred_lft forever
3: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff

$ ip r
default via 192.168.69.1 dev eth0 proto dhcp 
192.168.69.0/24 dev eth0 proto kernel scope link src 192.168.69.115 

Server system:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:24:8c:9a:f4:f4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.69.246/24 brd 192.168.69.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::224:8cff:fe9a:f4f4/64 scope link 
       valid_lft forever preferred_lft forever

$ ip r
default via 192.168.69.1 dev eth0 
192.168.69.0/24 dev eth0 proto kernel scope link src 192.168.69.246 

$ ping 192.168.69.246
PING 192.168.69.246 (192.168.69.246) 56(84) bytes of data.
64 bytes from 192.168.69.246: icmp_seq=1 ttl=64 time=0.557 ms
64 bytes from 192.168.69.246: icmp_seq=2 ttl=64 time=0.219 ms
64 bytes from 192.168.69.246: icmp_seq=3 ttl=64 time=0.530 ms
64 bytes from 192.168.69.246: icmp_seq=4 ttl=64 time=0.574 ms
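Note that both machines have only a link-local fe80::/64 address and no IPv6 route beyond the local link, so any attempt to reach a global IPv6 address such as 2002:c0a8:45f6::c0a8:45f6 has to fail with "Network is unreachable". You can confirm there is no IPv6 default route with (empty output means none):

ip -6 route show default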

Sorry, I missed this earlier. How are the mounts defined on the client?

…and at the server end

exportfs

or

cat /etc/exports

In /etc/fstab at the client:

sma-server3.sma.com:/data01/t-drv          /u                 nfs    defaults             0  0

At the server:

$ exportfs
/data01/w-drv     <world>
/data01/t-drv     <world>

$ cat /etc/exports
/data01/w-drv    *(rw,root_squash,sync,no_subtree_check)
/data01/t-drv    *(rw,root_squash,sync,no_subtree_check)

A name resolution issue, then? Perhaps a DNS issue in your network?

Why is NFS using IPv6?
I even tried adding “addr=192.168.69.246” to the mount options; it was completely ignored.

(rant) I do not understand IPv6. I get that it is an important improvement. I just cannot get a grip on the simplest thing: how do I assign/generate/invoke/whatever a proper IPv6 address for a given host? I have read a number of articles and books on the subject. They all say this: magic happens. Magic happens magically, magic happens by DHCPv6, magic happens by EUI-64 from the MAC address (I almost understand this one), magic happens manually. (/rant)
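On the rant: the “manual” flavor at least is just a plain static assignment. Purely as an illustration, with a made-up ULA prefix:

sudo ip -6 addr add fd00:1234::246/64 dev eth0

The DHCPv6 and SLAAC (EUI-64) flavors are driven by the router or DHCP server rather than by the host itself, which is probably why they read as magic.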

I’m still trying to get a handle on this: is it just that sma-server3.sma.com isn’t resolving to an IPv4 address, or is it something specific to NFS?

getent hosts sma-server3.sma.com
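It may also be worth checking IPv4-only resolution, and trying a mount by address to take names out of the picture entirely:

getent ahostsv4 sma-server3.sma.com
sudo mount -t nfs 192.168.69.246:/data01/t-drv /u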
$ getent hosts sma-server3.sma.com
2002:c0a8:45f6::c0a8:45f6 sma-server3.sma.com

I do not recall how I acquired 2002:c0a8:45f6::c0a8:45f6 as the IPv6 address for the server, although c0a8:45f6 is 192.168.69.246 in hex. It also seems to conflict with fe80::224:8cff:fe9a:f4f4/64, the link-local address on the server’s eth0.

I found what the problem was. Yay! It was the “getent” result that encouraged me to look at /etc/hosts.

I had added

2002:c0a8:45f6::c0a8:45f6 sma-server3.sma.com

to the /etc/hosts file. After removing that line, the volume mounted as expected.
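For the record, if I ever want a static entry for the server again, a plain IPv4 line should be safe:

192.168.69.246   sma-server3.sma.com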

Thank you for your help with this.

That explains it! Glad it had a simple explanation. Happy to have been of help. 🙂