mount.nfs: Protocol not supported

I had about 2000 updates pending. I ran `zypper dup`, and now it seems my NFS client can't reach the NFS server. I get this error for all my shares:
**An error occurred while accessing ‘Home’, the system responded: mount.nfs: Protocol not supported.**
I’m a bit worried.
I have a star on each NFS mount point.
I have tried to delete it and add it again, but I get the following errors (screenshot attached).
I have seen people on the mailing list and elsewhere with similar problems. I thought I would roll back to a snapshot, but for some reason snapper has only the PRE snapshot; the linked POST snapshot was never created, so it won't let me roll back.

As I mentioned in the first thread, Snapper refuses to revert to the PRE snapshot of the distribution upgrade.

You have to merge the changes from /etc/nsswitch.conf.rpmnew into /etc/nsswitch.conf.
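A minimal sketch of that merge, demonstrated on copies in a scratch directory so nothing under /etc is touched (the file contents here are illustrative; apply the same diff-and-merge to the real files):

```shell
# Work on copies in a temp dir; on a real system the files are
# /etc/nsswitch.conf and /etc/nsswitch.conf.rpmnew.
tmp=$(mktemp -d)
printf 'services: files\nprotocols: files\n' > "$tmp/nsswitch.conf"
printf 'services: files usrfiles\nprotocols: files usrfiles\n' > "$tmp/nsswitch.conf.rpmnew"

# Review what the packaged default would change:
diff -u "$tmp/nsswitch.conf" "$tmp/nsswitch.conf.rpmnew" || true

# If you never customized nsswitch.conf yourself, taking the new
# version wholesale is the simplest merge:
cp "$tmp/nsswitch.conf.rpmnew" "$tmp/nsswitch.conf"
grep usrfiles "$tmp/nsswitch.conf"
```

If you did customize the file, merge the "usrfiles" entries by hand instead of copying over your changes.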

Yes, that worked, but it is really annoying that the user must perform such manual tasks to get back to a normal state. What exactly happened here?

They have been moving files (such as “services” and “protocols”) to “/usr/etc” instead of “/etc”. The updated “nsswitch.conf” indicates that with the “usrfiles” entries. But they could not update “nsswitch.conf” itself, because that file is defined as belonging to the local system administrator. So the new version was provided alongside it, and the command “rpmconfigcheck” listed it as a file that needs to be checked for updating.
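For illustration, the relevant difference in the defaults looks roughly like this (exact entries may vary between releases):

```
# Old default: look up services/protocols in /etc only
services:  files
protocols: files

# New default: fall back to the copies shipped under /usr/etc
services:  files usrfiles
protocols: files usrfiles
```

Without the “usrfiles” fallback, lookups such as the NFS protocol entry fail once the packaged copies have moved out of /etc, which is why the mount reported “Protocol not supported”.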

If you are using a rolling distribution such as Tumbleweed, then you need to keep an eye on such things. I recommend running “rpmconfigcheck” occasionally.
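On openSUSE, “rpmconfigcheck” lists leftover .rpmnew/.rpmsave/.rpmorig files. A rough equivalent with `find` is shown below, demonstrated on a scratch directory so it runs anywhere (on a real system, point it at /etc):

```shell
# Scratch directory standing in for /etc in this demonstration.
dir=$(mktemp -d)
touch "$dir/nsswitch.conf" "$dir/nsswitch.conf.rpmnew"

# List unmerged package configuration files, as rpmconfigcheck does:
find "$dir" \( -name '*.rpmnew' -o -name '*.rpmsave' -o -name '*.rpmorig' \) -print
```

Each file it reports is a configuration file the package manager could not update in place, waiting for you to review and merge.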

As has been explained many times in discussions of this problem, this file was never consciously touched by any user who had the problem. If it was ever modified, the system itself did so without the user being aware of the modification. So this is no excuse for breaking users' systems.

If “they” (whoever “they” refers to) were not able to adjust this file automatically, then “they” should not have made this change until automatic adjustment was solved. As was pointed out, it could also have been implemented differently, without requiring manual file modification in the first place.

I actually agree with that.

I notice that for some configuration updates, the old configuration file is renamed to filename.rpmsave and the new one is put in place. In my opinion, that would have been a better strategy for this change.

Exactly my point. I find it extremely peculiar to fiddle around with something as basic as /etc/nsswitch.conf while not providing a mechanism to adapt affected systems automatically. It took me several hours to figure that out, and I had at least two (maybe three) serious failures: a client suddenly refusing to mount an NFS share, and a Perl script that had “forgotten” what the protocol tcp was. I assume my failing Squeezebox server also fell victim to this.

I must be missing something here. I’ve read up on this issue, made the merge changes to /etc/nsswitch.conf from /etc/nsswitch.conf.rpmnew, and rebooted my system to no avail. My NFS server is still not working as it should. Is there more to it than just the /etc/nsswitch.conf file?

How should we know? You never described what “not working” means, nor provided any information that would let us start guessing. Start a new thread, describe your problem, and provide logs from your system. The more details, the better.

My apologies for offending you. I hope my horrible question didn’t cause your day to be ruined.