I have always used YaST to set up an NFS client in the past, but I have just changed to Leap 15.3 on a newer system and cannot find the configuration tool in YaST.
What has happened to it, why, and how do I get this working, please?
Solved it myself after a bit more reading, but I am annoyed that what works out of the box on TW did not work on Leap 15.3 without a trial-and-error approach.
You have to install the package “yast2-nfs-client” – it seems there are no dependencies to pull it in automatically at installation time …
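For anyone else who hits this, something like the following should be enough; zypper resolves the YaST dependencies itself:
sudo zypper install yast2-nfs-client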
BTW, on the NFS clients, I prefer to use on-demand mounting of NFS exports – see “On-demand mounting with autofs” in the openSUSE Leap 15.5 Reference documentation.
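For reference, a minimal autofs setup would be along these lines (the mount point, map file name and server address here are only placeholders):
# /etc/auto.master
/mnt/nfs /etc/auto.nfs --timeout=60
# /etc/auto.nfs
multimedia -fstype=nfs,rw 192.168.1.10:/Multimedia
followed by enabling the service with systemctl enable --now autofs.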
I would also advise using automounting, but I use systemd for that. It is even easier to configure than the good old automounter.
See https://wiki.archlinux.org/index.php/fstab#Automount_with_systemd
In fact, all I had to do was add these options to the entry in /etc/fstab:
x-systemd.automount,x-systemd.mount-timeout=10,x-systemd.idle-timeout=5min
My personal entry then reads:
boven.henm.xs4all.nl:/home/wij /home/wij nfs noauto,nofail,x-systemd.automount,x-systemd.mount-timeout=10,_netdev,x-systemd.idle-timeout=5min 0 0
The noauto and nofail options are there so that the export is not mounted at boot (not needed, because it is mounted on demand) and so that a failing mount does not cause extra problems. The _netdev is superfluous.
You can of course vary the value of x-systemd.idle-timeout; after that many minutes of not being in use, the file system will be unmounted.
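After editing /etc/fstab you may need to make systemd regenerate its units before the automount becomes active. The unit name is derived from the mount point, so for /home/wij it should be home-wij.automount:
sudo systemctl daemon-reload
systemctl status home-wij.automount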
I have NFS access now and can read files on the remote NAS server, but I can no longer right-click to create a new directory on the remote system.
I assume it is a permissions issue, but it is too difficult for me. How can I change this connection so I can write to the remote file system?
How is the mount set up? Can you show us the pertinent details?
I’d say you just have to make sure to add the option “rw” to the NFS mount. I have only ever used YaST for NFS mounts. Did you follow dcurtisfra’s advice? This is the (quite simple) fstab entry style I have for all my mounts created by YaST, now on 15.3:
server:/folder /home/user/targetfolder nfs rw,async 0 0
I don’t know why, but these mounts now stay mounted even when I close the lid of my laptop to suspend and open it again, which still did not work on 15.2 about half a year ago.
(Just note that “async” is not a default setting. I am actually not sure whether it is advisable, but it has worked for me for many years. If you want to use it for the mount, you need to use it as an export option on the server too. Otherwise just skip it or use “sync”.)
You may also want to make sure you do not run into any compatibility issue with the NFS version. Here in YaST the chart shows “NFS version” - “any”. The YaST module has a setting for this. I have to admit I don’t know where that is reflected in fstab; it is probably established by simply not adding a version option.
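If you do want to pin the version instead of leaving it at “any”, I believe the usual way is the nfsvers mount option in fstab, e.g. (entry style as above):
server:/folder /home/user/targetfolder nfs rw,nfsvers=3 0 0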
Hi Deano,
Many thanks for the reply.
I set up my NFS using YaST. The server is an older QNAP NAS and doesn’t have NFSv4, but it works with versions 2 and 3 as far as I understand. In the YaST tool I just selected “Any (highest available)”.
Here is the line in my /etc/fstab:-
192.168.xxx.xxx:/Multimedia /home/alastair/mastermedia nfs rw,async 0 0
I added the rw,async instead of defaults but no success yet.
I have a suspicion that I don’t have the group or ownerships correct, but I have no idea what used to work that no longer does.
All help gratefully received.
I would say:
Why rw if that is already the default?
Why async? Did you read what it is intended to do, and do you have a reason to want it?
And, as so often, please show what does not work and, e.g., what the ownership and permissions are. Like:
ls -ld /home/alastair/mastermedia
ls -l /home/alastair/mastermedia
BTW, there is no reason to obfuscate the IP address in your fstab entry. The 192.168.0.0/16 range is a private address range (it cannot be connected to from outside the LAN) and many thousands of people use it because most Internet providers have it as standard in their routers.
Hi Henk,
I used async because I thought it might improve performance but I do not need it and have now removed.
I had no idea rw was the default and put in rw in the hope that might solve my problem. Now you have prompted me I can remove rw too.
OK here is what I have for ownership and permissions:-
alastair@HP-Z640-1:~> ls -ld /home/alastair/mastermedia
drwxrwxrwt+ 10 root 101 4096 Sep 28 2020 /home/alastair/mastermedia
alastair@HP-Z640-1:~> ls -l /home/alastair/mastermedia
total 40
drwxrwx---+ 51 1002 users 4096 Sep 23 2020 Music
drwxrwx---+ 245 root users 12288 Nov 23 2020 Photos
drwxr-xr-x+ 242 1002 users 12288 Sep 28 2020 Videos
alastair@HP-Z640-1:~>
That looks a bit strange.
Question 1: why the t-bit on the directory?
Then, that same directory has group 101, which is apparently not known to your system.
Then, the owners of the three directories inside are a bit mixed. Two are owned by UID 1002, which is apparently not known to your system. One is owned by root, which is not something that should never happen, but it is strange when one sees the other two. Aren’t all three there basically for the same purpose (albeit with different contents)?
It seems that these exports have a history of usage by another system on which other users/groups were configured. Only you can know what happened earlier.
BTW, as you can see, your problems are purely a matter of permissions (users/groups and permission bits) and have nothing to do with the file systems themselves being mounted read-only. That is a big red herring.
Hi Henk,
Many thanks for your observations, which seem to reflect the long and tortuous route to the present situation. I have no idea what a t-bit is or why it is there. An artifact from the NAS setup, I assume.
All the data is from the NAS which has been in use for years and has been subject to very many system updates from Qnap. However I have not knowingly changed the configuration for years.
I recall that many years ago I did have some ownership issues resulting from earlier connections and from getting rsync to work. Most of the data was uploaded to the NAS using rsync from my main machine, which is still in use following recent hardware upgrades. I have now relocated my residence, so I have been setting up what I need to access the main machine remotely, hence NFS.
The Photos directory has only been brought into use in the last couple of years, the other two directories have been used from the start.
I would very much appreciate your help in sorting out my NAS system so that it is more correct and will hopefully work as intended in future. Where should I start?
Regards,
Budge.
Hmmm …
- Looking through the openSUSE documentation, some things such as the move from OpenLDAP to 389 Directory Server are documented – NIS versus LDAP was documented with Leap 42.2 but no longer – and now we have the autofs versus systemd issue.
A general question – does the systemd automounter support LDAP and/or NIS reliably?
- If not, then autofs will still be needed for networks which use LDAP/Kerberos or NIS for user authentication.
- If it does, then is there any reason for not moving over to the systemd solution?
From my point of view, it seems to be much easier (than autofs) to configure – everything is in one place, “fstab”, rather than being spread around the autofs files and directories under /etc …
So we have to start with basic Unix/Linux lessons?
From man chmod:
For directories, it prevents unprivileged users from removing or renaming a file in the directory unless they own the file or the directory; this is called the restricted deletion flag for the directory, and is commonly found on world-writable directories like /tmp.
So at some time a system manager in the past thought that everybody should be able to put things in that directory (and thus be the owner of what they create), but that only the owner should be able to delete them. But why group 102 and not, e.g., group root? And what group is/was 102?
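You can see the same flag on any normal system, and set or clear it with chmod (the directory name below is just a placeholder):
ls -ld /tmp          # shows drwxrwxrwt, the trailing t is the restricted deletion flag
chmod +t somedir     # sets the flag
chmod -t somedir     # clears it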
I do not know anything about the NAS system you use, but can you access it, preferably as system manager? Then you should be able to find out what group 102 is and what user 1002 is.
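On the Linux side (and on the NAS too, if you can get a shell on it), getent is the easiest way to check whether those numeric IDs are known at all; it prints nothing when they are not:
getent group 102
getent passwd 1002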
Bottom line.
As system manager you must have file ownership (user and group) and permission bits at your fingertips. Otherwise it is rather difficult even to discuss this with you. Unix/Linux security is based very much on this, and not understanding it will bring you surprises again and again.
And in an NFS environment it is crucial to manage users/groups centrally, so that each user involved has the same UID (and preferably the same user name) on all systems. The same goes for groups. That “managing centrally” may mean that you have a piece of paper where you list them all so you can configure them the same on each system. But you can also use a centralized system like NIS or LDAP to do this. It depends on the number of systems you manage.
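As an illustration only (the names and numbers here are made up): on each manually managed system you would create the user and group with explicit, matching numeric IDs, e.g.:
sudo groupadd -g 1002 mediagrp
sudo useradd -u 1002 -g mediagrp mediauser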
This discussion has left me struggling. I have neither read about nor had a need to learn about NIS, LDAP, PAM and Kerberos, and it seems the concept of authentication is well above my pay grade, so I am afraid I cannot answer the questions raised.
I would be happy to move forward with the rapidly changing technology but will need more help on what to do.
Meanwhile all I want is to be able to save files from my present machine to my NAS so I can use them as intended.
All further advice appreciated.
I do not think that systemd.automount has any direct relation with NIS and/or LDAP. The same for old automounter.
Of course you should have a method to be sure that users/groups are centrally managed, else you will run into conflicting ownership, etc. But whether you do that by carefully configuring each system, or by using products like NIS and LDAP, is of no concern to systemd.automount.
My fstab entry as shown earlier works for an NFS client system which, like the NFS server system, is managed “manually” with the same users/groups. In fact, two users and one group. Very easy to do, and I feel no urge to start using NIS for that number.
Hi Henk,
I will check out the group 102. This has not come from me by design but, as I explained, there have been very many changes. Neither do I know where 1002 came from, as I have only just made the NFS connection… I can certainly dismantle and rebuild what I have here.
I like the idea of a centrally managed system, and this is what I should have, along with possibly a local DNS rather than using windoze tools or the hosts file. I also probably need a cheat sheet on all the groups and users, as my network has grown exponentially.
Will meanwhile investigate the 102 group if you can tell me where you saw the 102?
“Old” automounter explicitly supports storing automount maps in NIS or LDAP.
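For example, if I remember the auto.master(5) syntax correctly, a master map entry can point straight at a NIS or LDAP source (the map names here are just the conventional ones):
/home yp:auto.home
/home ldap:ou=auto.home,dc=example,dc=com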
Oh yes, that is true. I do not think systemd does that (it would be a bit against their philosophy of putting everything in systemd ;)).
But you can of course read the man pages. I only have that one entry, and I have also offered it for use in a few cases (with NFS mounting problems) on the forums. In all cases the members reported success.