Unable to make NFS connection from laptop.

I have a new problem making an NFS connection between my laptop, which runs Tumbleweed and connects over WiFi, and my workstation, which runs Leap 15.3 and connects over the LAN. If needed I can give details of our network, but essentially all devices are on the same subnet.

I believe the problem is with the firewall setup on the workstation and I am including some basic tests here in the hope that somebody will spot my error.

From my laptop, with both machines' firewalls enabled, I can ping the workstation:

alastair@IBMW530:~> ping 192.168.169.134 
PING 192.168.169.134 (192.168.169.134) 56(84) bytes of data. 
64 bytes from 192.168.169.134: icmp_seq=1 ttl=64 time=8.10 ms 
64 bytes from 192.168.169.134: icmp_seq=2 ttl=64 time=3.23 ms 
64 bytes from 192.168.169.134: icmp_seq=3 ttl=64 time=3.25 ms 
64 bytes from 192.168.169.134: icmp_seq=4 ttl=64 time=3.25 ms 
64 bytes from 192.168.169.134: icmp_seq=5 ttl=64 time=3.34 ms 
64 bytes from 192.168.169.134: icmp_seq=6 ttl=64 time=5.11 ms 
^C 
--- 192.168.169.134 ping statistics --- 
6 packets transmitted, 6 received, 0% packet loss, time 5007ms 
rtt min/avg/max/mdev = 3.229/4.380/8.099/1.794 ms 
alastair@IBMW530:~> 

However, I am unable to detect the NFS port through the firewall with nmap:

alastair@IBMW530:~> nmap -sV -p 2049 192.168.169.134 
Starting Nmap 7.92 ( https://nmap.org ) at 2022-06-20 18:47 BST 
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn 
Nmap done: 1 IP address (0 hosts up) scanned in 0.27 seconds 
alastair@IBMW530:~> 
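As nmap itself suggests, the scan could be repeated with host discovery disabled, since the workstation's firewall is evidently dropping the ping probes (a follow-up one could try; it is not shown in the original post):

nmap -Pn -sV -p 2049 192.168.169.134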

My laptop firewall configuration is below:

alastair@IBMW530:~> sudo firewall-cmd --list-all-zones                                                          
[sudo] password for root:
block 
  target: %%REJECT%% 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services:  
  ports:  
  protocols:  
  forward: yes 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

dmz 
  target: default 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services:  
  ports:  
  protocols:  
  forward: yes 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

docker (active) 
  target: ACCEPT 
  icmp-block-inversion: no 
  interfaces: docker0 
  sources:  
  services:  
  ports:  
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

drop 
  target: DROP 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services:  
  ports:  
  protocols:  
  forward: yes 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

external 
  target: default 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services:  
  ports: 1900/udp 9790/tcp 9791/tcp 2049/tcp 
  protocols:  
  forward: no 
  masquerade: yes 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

home 
  target: default 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services: http samba ssh 
  ports: 1900/udp 9790/tcp 9791/tcp 2049/tcp 
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

internal 
  target: default 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services: http mdns samba-client ssh 
  ports: 1900/udp 9790/tcp 9791/tcp 2049/tcp 
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

nm-shared 
  target: ACCEPT 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services:  
  ports: 1900/udp 9790/tcp 9791/tcp 2049/tcp 
  protocols: icmp ipv6-icmp 
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  
        rule priority="32767" reject 

public 
  target: default 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services: pcns 
  ports: 1900/udp 9790/tcp 9791/tcp 1714-1764/tcp 1714-1764/udp 
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

trusted 
  target: ACCEPT 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services:  
  ports: 1900/udp 9790/tcp 9791/tcp 2049/tcp 
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

work (active) 
  target: default 
  icmp-block-inversion: no 
  interfaces: enp0s25 wlp3s0 
  sources:  
  services: ftp https nfs ssh 
  ports: 1900/udp 9790/tcp 9791/tcp 21/tcp 22/tcp 6547/tcp 3052/tcp 3052/udp 6547/udp 2049/tcp 
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

alastair@IBMW530:~> 
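As an aside, the zone dump above can be narrowed to just what matters here: firewall-cmd can list only the active zones and their settings (a shorter alternative, offered for reference):

sudo firewall-cmd --get-active-zones
sudo firewall-cmd --zone=work --list-all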

Working through an SSH connection to my workstation, I have the following results with the firewall enabled.

I can ping the laptop:-

alastair@ibmserv2:~> ping 192.168.169.223 
PING 192.168.169.223 (192.168.169.223) 56(84) bytes of data. 
64 bytes from 192.168.169.223: icmp_seq=1 ttl=64 time=16.0 ms 
64 bytes from 192.168.169.223: icmp_seq=2 ttl=64 time=7.32 ms 
64 bytes from 192.168.169.223: icmp_seq=3 ttl=64 time=4.26 ms 
64 bytes from 192.168.169.223: icmp_seq=4 ttl=64 time=4.87 ms 
64 bytes from 192.168.169.223: icmp_seq=5 ttl=64 time=3.77 ms 
^C 
--- 192.168.169.223 ping statistics --- 
5 packets transmitted, 5 received, 0% packet loss, time 4006ms 
rtt min/avg/max/mdev = 3.770/7.253/16.029/4.555 ms 
alastair@ibmserv2:~> 



nmap can reach the laptop and tells me the port is closed:

alastair@ibmserv2:~> nmap -sV -p 2049 192.168.169.223 
Starting Nmap 7.70 ( https://nmap.org ) at 2022-06-20 19:00 BST 
Nmap scan report for 192.168.169.223 
Host is up (0.0041s latency). 

PORT     STATE  SERVICE VERSION 
2049/tcp closed nfs 

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ . 
Nmap done: 1 IP address (1 host up) scanned in 0.51 seconds 
alastair@ibmserv2:~> 
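Worth noting: "closed", as opposed to "filtered", means the probe passed the laptop's firewall but nothing is listening on port 2049 there, which is expected on a machine acting only as an NFS client. This can be confirmed locally on the laptop with ss; a suggestion, not from the thread:

ss -tln | grep 2049

An empty result simply means there is no listener on that port.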


and the firewall details are:-

alastair@ibmserv2:~> sudo firewall-cmd --list-all-zones 
[sudo] password for root:  
block 
  target: %%REJECT%% 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services:  
  ports:  
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

dmz 
  target: default 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services: ssh 
  ports:  
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

docker 
  target: ACCEPT 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services:  
  ports:  
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

drop 
  target: DROP 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services:  
  ports:  
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

external 
  target: default 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services: ssh 
  ports:  
  protocols:  
  forward: no 
  masquerade: yes 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

home 
  target: default 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services: dhcpv6-client mdns samba-client ssh 
  ports:  
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

internal 
  target: default 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services: dhcpv6-client mdns samba-client ssh 
  ports:  
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

libvirt 
  target: ACCEPT 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services: dhcp dhcpv6 dns ssh tftp 
  ports:  
  protocols: icmp ipv6-icmp 
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  
        rule priority="32767" reject 

public 
  target: default 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services: dhcpv6-client ssh 
  ports:  
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

trusted 
  target: ACCEPT 
  icmp-block-inversion: no 
  interfaces:  
  sources:  
  services:  
  ports:  
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

work (active) 
  target: default 
  icmp-block-inversion: no 
  interfaces: br0 docker0 eth0 eth1 
  sources:  
  services: mdns nfs slp ssh 
  ports: 2049/tcp 
  protocols:  
  forward: no 
  masquerade: no 
  forward-ports:  
  source-ports:  
  icmp-blocks:  
  rich rules:  

alastair@ibmserv2:~> 

I have been using Yast to implement both the firewall configuration and NFS on both machines, and I have not yet spotted the problem, because the nmap scan from the workstation tells me that the laptop port is closed. If I turn off the workstation firewall and rebuild the NFS server and the NFS client, then I can get a connection. When I then run nmap on the server I still see the port is closed, but this may be my ignorance again.

Please could somebody tell me where I am going wrong.
Budge.

I see a lot about firewalls, but nothing about NFS. Not even an explanation of why you think "I have a new problem making an NFS connection". What do you do, and what do you get?

What do you export (on the server):

cat /etc/exports

What do you mount (on the client):

grep nfs /etc/fstab

Hi Henk, thanks for getting back to me.
From the server I have:-

alastair@ibmserv2:~> cat /etc/exports 
/home/alastair/Mastermedia/multimedia   *(rw,root_squash,sync,no_subtree_check) 
alastair@ibmserv2:~> 
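For reference, the export table the server is actually applying can be checked with exportfs (a generic suggestion, not from the thread):

sudo exportfs -v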

and from the client I have:-

alastair@IBMW530:~> grep nfs /etc/fstab 
192.168.169.134:/multimedia                /home/alastair/NFS_Multimedia_NFS  nfs    nfsvers=4.2                   0 0
alastair@IBMW530:~> 

Does this seem right?

Hi Budge,

Shouldn’t there be a space after the first “nfs”, like “nfs nfsvers=4.2”? Otherwise, are you sure client and server are using the same NFS version? What if you just skip the “nfsvers=4.2” part?
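For comparison, a conventionally spaced fstab entry built from the paths in the post above would look like this (a sketch, not a confirmed fix):

192.168.169.134:/multimedia  /home/alastair/NFS_Multimedia_NFS  nfs  nfsvers=4.2  0  0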

Why? What tests did you perform to come to this conclusion?


My laptop firewall configuration is below:

and the firewall details are:-

Without knowing which interfaces are actually used for communication between the two systems, the firewall configuration cannot be evaluated. You always need to show at least

ip address show
ip route show
ip -6 route show

This could be an issue, but all the work had been done using Yast rather than the CLI.

My reason for forcing v4.2 is that the share is only visible if that version is forced. When using Yast to set up the client, if I do not use the v4.2 option I cannot create the connection. I shall have a look at this again now you have drawn my attention to it. Thanks.
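For reference, the NFS version a client actually negotiated for a mounted share can be checked with nfsstat (a generic suggestion, run on the client while the share is mounted):

nfsstat -m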

Hi arvidjaar, I have described above what I have tried so far, and those results are what led me to my conclusion. In short, the NFS connection can be made successfully if I stop the firewall on the server, but not if both firewalls are running.

I appreciate that this may only be an indicator and not the cause, which is why I have posted here.

The interface info is partly available in the firewall configuration above, but here is the additional info from the server.

alastair@ibmserv2:~> ip route show 
default via 192.168.169.129 dev eth0  
192.168.169.128/25 dev eth0 proto kernel scope link src 192.168.169.134  
alastair@ibmserv2:~>  
alastair@ibmserv2:~> ip -6 route show 
::1 dev lo proto kernel metric 256 pref medium 
fe80::/64 dev eth0 proto kernel metric 256 pref medium 
fe80::/64 dev br0 proto kernel metric 256 pref medium 
alastair@ibmserv2:~> 


I confess I have IPv6 turned off in most situations, as many of my devices are not IPv6 capable and I know absolutely nothing about IPv6.

I shall have to post separately for the laptop, as it is not with me and is turned off; I shall send the laptop info shortly.

Hope this helps.

Further to my last post, here are the details from the laptop:-

alastair@IBMW530:~> ip route show 
default via 192.168.169.129 dev wlp3s0 proto dhcp src 192.168.169.223 metric 600  
192.168.169.128/25 dev wlp3s0 proto static scope link metric 600  
192.168.169.128/25 dev wlp3s0 proto kernel scope link src 192.168.169.223 metric 600  
alastair@IBMW530:~> ip -6 route show 
fe80::/64 dev wlp3s0 proto kernel metric 1024 pref medium 
alastair@IBMW530:~> 

Please let me know if you need more info.

Is there anything else I can try, and is anybody still looking into this, please?
Budge.

Configuring NFS with yast2 nfs_server is straightforward. That’s all I needed to do here. Details of firewall configuration: https://unix.stackexchange.com/questions/243756/nfs-servers-and-firewalld
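For reference, the approach described at that link amounts to opening the NFS-related firewalld services in the relevant zone; mountd and rpc-bind are only needed for NFSv3. A sketch, using the "work" zone name from the server output above:

sudo firewall-cmd --permanent --zone=work --add-service=nfs
sudo firewall-cmd --permanent --zone=work --add-service=mountd
sudo firewall-cmd --permanent --zone=work --add-service=rpc-bind
sudo firewall-cmd --reload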

I had been setting up the NFS server using the nfsv4.2 option when creating the share with Yast. I have been looking again at this and thought I would remove the selection of the NFSv4 option when setting up the server.

I also note, however, that no NFS share is available in the client if I try to choose a host. I get the following:

No NFS server has been found on your network.
This could be caused by a running firewall,
which probably blocks the network scanning.

Up to now I have always used the IP address for building the connection, and I never had a problem once I forced nfsv4.2, because that is handled differently. But if I am using “highest available” I would have expected to be able to see the host. When Yast scans for hosts, what protocol is it using to find them?
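For reference, whether a server is advertising NFS exports can be probed by hand with rpcinfo and showmount; these rely on rpcbind and mountd, which an NFSv4-only server may not expose, and whether Yast's host scan uses the same mechanism is an assumption:

/usr/sbin/rpcinfo -p 192.168.169.134
/usr/sbin/showmount -e 192.168.169.134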

The strange thing is that if the NFS server is set up on the workstation and I then try to create an NFS client connection on the same machine and enter the IP address, no NFS exported directory is shown for that address.

Any thoughts?

I have all these running:

erlangen:~ # systemctl status nfs-*
● nfs-server.service - NFS server and services
     Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/nfs-server.service.d
             └─options.conf
             /run/systemd/generator/nfs-server.service.d
             └─order-with-mounts.conf
     Active: active (exited) since Tue 2022-06-21 21:45:54 CEST; 1min 16s ago
    Process: 8212 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
    Process: 8213 ExecStart=/usr/sbin/rpc.nfsd $NFSD_OPTIONS (code=exited, status=0/SUCCESS)
   Main PID: 8213 (code=exited, status=0/SUCCESS)
        CPU: 2ms

Jun 21 21:45:54 erlangen systemd[1]: Starting NFS server and services...
Jun 21 21:45:54 erlangen systemd[1]: Finished NFS server and services.

● nfs-mountd.service - NFS Mount Daemon
     Loaded: loaded (/usr/lib/systemd/system/nfs-mountd.service; static)
    Drop-In: /usr/lib/systemd/system/nfs-mountd.service.d
             └─options.conf
     Active: active (running) since Tue 2022-06-21 21:45:54 CEST; 1min 16s ago
    Process: 8207 ExecStart=/usr/sbin/rpc.mountd $MOUNTD_OPTIONS (code=exited, status=0/SUCCESS)
   Main PID: 8211 (rpc.mountd)
      Tasks: 1 (limit: 4915)
        CPU: 10ms
     CGroup: /system.slice/nfs-mountd.service
             └─8211 /usr/sbin/rpc.mountd

Jun 21 21:45:54 erlangen systemd[1]: Starting NFS Mount Daemon...
Jun 21 21:45:54 erlangen rpc.mountd[8211]: Version 2.6.1 starting
Jun 21 21:45:54 erlangen systemd[1]: Started NFS Mount Daemon.

● nfs-idmapd.service - NFSv4 ID-name mapping service
     Loaded: loaded (/usr/lib/systemd/system/nfs-idmapd.service; static)
     Active: active (running) since Tue 2022-06-21 21:45:54 CEST; 1min 16s ago
    Process: 8206 ExecStart=/usr/sbin/rpc.idmapd (code=exited, status=0/SUCCESS)
   Main PID: 8208 (rpc.idmapd)
      Tasks: 1 (limit: 4915)
        CPU: 2ms
     CGroup: /system.slice/nfs-idmapd.service
             └─8208 /usr/sbin/rpc.idmapd

Jun 21 21:45:54 erlangen systemd[1]: Starting NFSv4 ID-name mapping service...
Jun 21 21:45:54 erlangen rpc.idmapd[8208]: Setting log level to 0
Jun 21 21:45:54 erlangen systemd[1]: Started NFSv4 ID-name mapping service.
erlangen:~ #

Like kasi042, I wonder about the lack of white space between the fstype specification field and the options field.

And you did not answer my first and thus foremost question: what did you do, and why do you think you are “unable to make NFS connection”?

At least show

mount /home/alastair/NFS_Multimedia_NFS

Hi Henk,
I have explained what I did and answered the question as far as I am able. The absence of the gap is an error created when I copied and pasted the line; running grep nfs /etc/fstab again on the laptop to check showed the line was correct, with a gap. (I am on the workstation at present and the laptop is off, so I cannot show it directly.)

Playing around with Yast on the workstation I think I have found one possible problem, as usual probably through my ignorance, but I shall explain what I have found.

When I set up the NFS server initially I enabled the NFSv4 option. I then selected the directory to share, choosing the directory I usually work on, which is:

/home/alastair/Mastermedia/multimedia

What I had forgotten is that Mastermedia is soft linked from the actual directory, which is on a different RAID array on my system. The actual files are on a RAID array mounted on /multimedia and, through historical changes, actually mounted on /multimedia/multimedia, symlinked to /home/alastair/Mastermedia.

lrwxrwxrwx   1 alastair users     11 Sep 18  2021 Mastermedia -> /multimedia
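A quick way to see where such a path finally resolves is readlink (a suggestion, using the path from the post above):

readlink -f /home/alastair/Mastermedia/multimedia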

I suspect that this is the underlying cause of the problem. I shall make the necessary changes and report findings.

Meanwhile I tried setting up an NFS share without enabling NFSv4. This seemed to ignore the symlink and go directly to the source directory. No time to play with this more, as I want to stay with NFSv4.


At last I have it, at least to the extent that my NFS is working and I am working on the laptop.

In summary, the problem had been caused by my selecting the symlinked directory from my working home directory tree. Once I selected the real mounted directory, NFS worked as it should.
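Concretely, the fix amounts to exporting the real mount point rather than the symlinked path; a sketch of /etc/exports built from the directories named earlier in the thread:

/multimedia/multimedia   *(rw,root_squash,sync,no_subtree_check)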

The problem of Yast not showing the NFSv4 remote directory for selection until force nfsv4.2 has been selected remains; once nfsv4.2 is selected, the remote directory becomes available for selection. I have still not been able to browse for remote directories and always have to use the IP address when setting up a client.

Initial experiments indicate that if I do not enable NFSv4 on the server, the system looks past the symlink, which masks my error, and works as expected.

My problem has nothing to do with firewalls, so apologies to arvidjaar and henk. I have no idea why stopping the firewall on the server made the connection possible, although only once rebuilt. I have no idea why what worked on another workstation client a while ago did not work on the laptop, or why my NFS server had been running without a problem until now, but it is now set up correctly.

Many thanks to all once more.

Thanks for the comprehensive feedback. When I used a symlink in /home I experienced some nasty side effects. I replaced it with a bind mount, which works perfectly.
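A bind mount of that kind can be made permanent with an /etc/fstab line along these lines (paths borrowed from the symlink shown earlier, purely as an illustration):

/multimedia  /home/alastair/Mastermedia  none  bind  0  0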

As a footnote to this issue, I today checked the setup of the NFS server and client using Yast and find that the problem which caused me such difficulty has gone.

When I use the Yast tool now to serve a directory, even if I select the directory shown in my home tree, which is actually softlinked from another drive, the actual directory served is the correct one. In other words, Yast goes correctly to the source and ignores the softlink. This is helpful because, although Dolphin identifies a softlinked directory by using italics, when browsing for a directory to serve the fact that it is actually a softlink is not apparent. This is why I had so much difficulty earlier.
I hope this helps.

My very, very personal idea:

When working as superuser, one does not use extra hiding layers like Dolphin. One uses the shell, and in this case the ls tool shows very clearly when something is a symbolic link.

Hi Henk,
No argument from me, and I agree, but I had been using Yast, not Dolphin, and just selected the directory as usual from within Yast.

Of course I should be using the CLI to set up the server and client, but I am still not comfortable with that, and Yast is supposed to work for simple folk like me.
Thanks for all your help and patience earlier.
Regards,
Budge

OK Budge. Nice that you have it running now.