For your system it is an NFS file system. Your system has no idea what file system type it is on the server.
Like all mounted file systems it forms part of the one and only directory tree that starts at /. You chose to have it at /home/Zhegedongxi_NAS; I do not know why. /home normally contains the home directories of the users defined on the system. I am not saying you should not do this (I have several directories in /home that are not home directories of users), but you should have a reason.
That brings us to a related point: what is on that file system? If it holds files that are more system oriented (like a database, or files to be used by the Apache web server), or files to be used by several users, or files used by only one particular user (e.g. his music/films), then each of those calls for a different place to mount it (or mount it in e.g. /mnt and then make symbolic links).
This is Linux and thus Unix, so all the basics about file ownership by UID and GID and the permission bits apply. You already tried to do something about this by making rogier:users the owner of the mount point (this will enable rogier to go there, but why not mount it somewhere inside /home/rogier then?). But we still do not know who owns the directories/files on the NAS. On that system you should have the same UIDs and GIDs as on the client system (no need to give them the same user name and group name, but that helps in understanding), and the files should be owned by those.
A lot to study in the above.
And to do something practical:
ls -l /home/Zhegedongxi_NAS
(which is, by the way, a terrible name to type every time.)
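To check whether the ownership matches, compare the numeric IDs on both sides (the mount point is the one from this thread; the user name is an example):

```shell
id rogier                      # rogier's UID and GIDs on the client
ls -ln /home/Zhegedongxi_NAS   # -n shows numeric UID/GID instead of names
```

If the numbers printed by `ls -ln` differ from rogier's UID/GID, the accounts on the NAS and the client are not aligned.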
The reason for having the NAS is a major crash of the hard drive where I store all my documents and photos. Luckily, my documents were also backed up in a cloud service, as I have to be able to access them at work as well. A month's worth of photos is all lost. I normally ran a backup every month onto a USB hard drive, as the files are huge. This time there was a batch of photos of an airshow that took a considerable amount of time to edit, and many were rated for printing (normally I rarely print them). All of these are lost, including the raw files. This made me decide to have something more readily available to back up files to “on the fly”, preferably something that my wife could also access from her Windows laptop, so her files are stored safely as well. So all we store are documents (text documents, spreadsheets, project files and presentations) and photos – no system files like databases etc. The NAS is shared; the laptops are not. We both have our own laptop, hers on Windows, mine on Linux with me as the only user (and root, of course).
I chose to mount it in /home, to have it somewhere where it felt logical to me as it should contain the same directory structure and files as in my own home folder /home/rogier. That actually was the only line of thought here.
I have now added “user” to the options in fstab, and now it nicely mounts at boot, accessible to me. It seems all is resolved now, but I will still study item number 4, as I am not sure whether the UIDs and GIDs are all set correctly for my wife and myself on the NAS. We at least can access it, and there are no secret files on the NAS either. But for the sake of correctness, I will try to get that corrected. At least the main hurdle has been taken!
I can imagine that typing such names is a bit cumbersome. The name came from a running gag that we had on a project we did in China a couple of years ago. All our equipment has names like that:
Zhegedongxi_NAS (that thing)
Neigadongxi_MESH (what’s that thing)
Shenmedongxi_Printer (which thing)
It is for backup and you want a mirror of the home directory of rogier.
My idea then would be to look into rsync. You could create an rsync command that mirrors your /home/rogier on /home/… This will take a bit of time the first time you run it, because it has to copy everything. But later runs will only remove from the mirror what was removed in rogier’s home, and copy new and changed files – often only seconds. Then run that on a regular basis, e.g. using crontab (or its systemd equivalent). Every hour would be fine, I think.
Why wouldn’t it be a good idea? I have been using it for several years already and it works great: easy to set up, and no pain when you try to reach the NAS while it happens to be offline.
I have tried the systemd variant but that doesn’t come close to autofs regarding how it works. For me it will always be autofs.
When I write “automount” I mean the “autofs” service – the thing which performs “on-demand” mounts and then, automatically unmounts when the File System is no longer being used (time delay of usually 5 minutes – possibly causes a delay when shutting down.) …
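For reference, a minimal autofs setup of that kind might look like this (the file names, mount point, NAS host name and export path are assumptions, not taken from this thread):

```
# /etc/auto.master.d/nas.autofs — master map entry:
# mount on demand under /mnt/nas, unmount after 300 s of inactivity
/mnt/nas  /etc/auto.nas  --timeout=300

# /etc/auto.nas — the map itself:
backup  -fstype=nfs,rw  nas.example.lan:/share/backup
```

With this in place, accessing /mnt/nas/backup triggers the mount, and autofs unmounts it again after the timeout.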
No. It needs to be “nfs” – “Network File System” – not a local File System such as Btrfs …
NFS “exports” a File System via the Network – the NFS packets are either TCP/IP packets or UDP/IP packets – older NFS implementations used to use UDP, current implementations tend to use TCP by default.
The underlying File System on the NFS Server is transparent to the NFS Client.
Possibly, because the “auto” in ‘fstab’ means: automatically mount when “mount -a” is called – the systemd variant ain’t “mount on demand” – but be aware that Autofs mounts can cause a client to hang if the network service isn’t available and the Autofs and NFS timeouts haven’t been set up with reasonable values …
In the discussion about systemd.automount nobody suggested that it should be combined with the option auto (by default or not). Quite the contrary. In the link provided in post #19 above, one of the first examples shown is:
noauto,x-systemd.automount
which is of course the logical combination. When you want mounted on need, you do not want mounted on boot (or other mount-a).
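Put together, such an fstab line could look like this (the server name, export path and idle timeout are assumptions; the mount point is the one from this thread):

```
# /etc/fstab — mount on first access, unmount after 5 min of inactivity:
nas.example.lan:/share/backup  /home/Zhegedongxi_NAS  nfs  noauto,x-systemd.automount,x-systemd.idle-timeout=5min  0  0
```

The x-systemd.idle-timeout option gives behaviour comparable to the autofs timeout: the file system is unmounted again when it has not been used for the given time.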
What @JanMussche suggested is to use the well-known Unix automount feature (started by Sun Microsystems around 1988) that uses the auto.master file (and can use many configuration files starting from there).
There is no relation between this and the (no)auto option for mounting.
There is also no relation with what many desktop systems do when a removable mass-storage device (often via USB) is connected (yes, I know many people also call this automounting).
The expression “automount” is of course just a plain English term that can be used for any mounting where the onlooker assumes it is automated. Thus the bewildering number of usages.
Now what @JanMussche suggests is not bad IMHO, but I found it a bit premature to suggest an automounter when the OP was not even able to mount at all. That had to be solved first. After that, automounting is a good suggestion, especially because we now know more about the intended usage in this case.
What I also did was suggest looking at the new way to do this type of automounting using systemd.automount. @deano_ferrari not only supported my suggestion, but pointed to documentation (quite good IMHO; it starts by saying that it is better combined with noauto ;)).
@JanMussche is apparently not very enthusiastic about the new systemd.automount compared with the “old” way.
I asked him to explain a bit more about the Pros and Cons. That would be an interesting thing to read.
I find the sentence: “is not the objective facts we all want to see” strange, to say the least. How can you be so sure you can speak for everyone? Or do you just mean: is not the objective facts I want to see?
When I used the systemd variant and added a line to my fstab, it somehow didn’t work the way I wanted it to – the way it works with autofs (automount). I can’t remember what it was (it’s too long ago) that drove me away from it, but something didn’t work as smoothly as I was used to, so I set up autofs again. Simple, fast, and it just does what it is supposed to do: mount a drive on request and let it go after a certain time of not being used.
Well, I guess that there are at least more people than just me who are interested in whether someone has tested these two for pros and cons. This is a technical forum where people like to see technical facts. And you made at least me curious.
The NAS box doesn’t seem to support NFS via IPv6 properly – possibly a RPC library issue …
Via NFS, an empty file on the NAS box could be created with “touch” but, attempting to edit the file with “vim” resulted in “fsync” errors and no change in the file’s contents – also, “vim” couldn’t create the ‘.«filename».swp’ swap file.
The solution is:
In DHCP always assign the same IPv4 address to the NAS box.
In all the NFS clients, point the NFS mount to the IPv4 address of the NAS box rather than the DNS hostname.
Set up the NFS export filter on the NAS box to use an IPv4 address range restriction – in my case ‘192.168.178.0/24’ …
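If the NAS exposed a standard /etc/exports, the steps above would roughly correspond to the following (the .10 host address is an assumption; the /24 range is the one from this post):

```
# on the NAS (server side), /etc/exports:
/share/backup  192.168.178.0/24(rw,sync,no_subtree_check)

# on each client, /etc/fstab — fixed IPv4 address instead of the DNS name:
192.168.178.10:/share/backup  /home/Zhegedongxi_NAS  nfs  defaults  0  0
```

On an actual QNAP box the export filter is set via the web interface rather than by editing /etc/exports directly.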
I noticed this because the Leap 15.1 systems seem to use IPv4 for NFS regardless – the DNS name of the Leap 15.1 NFS server always seems to resolve to the “A” record, the IPv4 address …
I’ve been looking at the QNAP “rsync” possibilities:
It seems that they have a proprietary “rsync” to synchronise between multiple QNAP boxes.
Yes, there is a possibility to set it up for another (non-QNAP) server, with a system-wide user and password, which would be visible in any “rsync” scripts running on the Linux hosts – not a good idea for small sites, and there are also file ownership issues on the NAS box.
Now that NFS is behaving again, I therefore prefer scripts containing the following code to dump user directories onto the NAS box (file owner and protections are preserved):
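A minimal sketch of such a script (the mount point and the user list are assumptions; `-a` is what preserves owner, permissions and timestamps):

```shell
#!/bin/sh
# Sketch only — adapt the NAS mount point and user list to your site.
NAS=/home/Zhegedongxi_NAS
for user in rogier; do
    # -a preserves owner, group, mode and times; -H keeps hard links;
    # --delete mirrors deletions on the NAS as well
    rsync -aH --delete "/home/$user/" "$NAS/$user/"
done
```

Because the target is an NFS mount, no rsync daemon or password is needed on the NAS side.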