Can someone please explain the difference between mount and link?
Example: I mount a device to /mnt/sub, then I mount it from there to another folder:
mount /mnt/sub /home/sub
I could also create a link instead:
ln -s /mnt/sub /home/sub
Same effect and functionality.
Which one is better?
And another question: for backups, mounted directories are fully included, so I would have the same data twice if I back up /mnt and /home in this example. Will the same happen if it’s a link?
From the link(2) man page:
DESCRIPTION
link() creates a new link (also known as a hard link) to an existing file.
If newpath exists, it will not be overwritten.
This new name may be used exactly as the old one for any operation; both names refer to the same file (and so have the same permissions and ownership) and it is impossible to tell which name was the “original”.
Self has learnt far more than expected by using terminal
It depends? They both generally do the same thing.
If you use the mount command manually, the mount only exists until you reboot or unmount it.
Symlinks are permanent unless you delete them.
If you mount permanently in /etc/fstab, systemd will make sure they are mounted before they are needed by something. This will only be an issue in some pretty specific cases.
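For example, the double mount from the opening post could be made permanent with a bind entry in /etc/fstab (a minimal sketch using the paths from above):
/mnt/sub  /home/sub  none  bind  0 0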
In most cases, it is just two ways of achieving the same goal.
In both cases, it depends on how you are doing your backups. There are usually ways to exclude both mounts and links.
I usually use Clonezilla for full backups, and I used the YaST Backup in openSUSE before it was removed. Currently, I just copy the most important folder structures from time to time, but it takes forever if that includes mounted subfolders.
I mount all my devices in /mnt (in fstab) and distribute them from there to their actual destinations (currently also in fstab), but I wanted to clean up a little and replace all those mounts with links. I don’t really like the idea of having everything directly mounted; with links it’s clearly arranged and not so spread out.
Until I find new backup software, I have to do it that way, which is OK, but I want to avoid duplicates in the data.
But the reason why I asked that question is that I noticed a problem with aMule. When the sub-folders incoming and temp are mounted (from a different partition), everything works fine. But it seems it doesn’t like links: it can read from them but not write, for some reason (yes, I checked chmod and it should be OK). I don’t know if I will run into similar problems with other applications; that’s why I ask if there are any limits with links.
If you use rsync to do the copy, there are options to exclude/include links. If you are using mounts, you can also limit it so it doesn’t leave the filesystem it is currently in.
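For example (the paths are placeholders):
# -a copies symlinks as symlinks instead of following them, so linked data is not duplicated
rsync -a /home/ /backup/home/
# -x (--one-file-system) keeps rsync from crossing into other mounted filesystems
rsync -ax / /backup/root/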
There really shouldn’t be many limitations. However, applications can tell the difference between a link and a normal file/directory if they try to; it just depends on how the application works.
I am not sure what the issue is with aMule specifically; I am not familiar with it.
I don’t want to exclude either of them. I don’t know all the links on the system, so I don’t touch that if I don’t have to. On the other hand, mounted directories are a problem for manual backups, which I also try to avoid, but I have never had any problems with links. We are talking about several hundred GB that are spread over the entire file system, which itself is less than 10 GB. So, I am not sure which solution would be better in my case.
OK, I didn’t know that applications can tell the difference, but that matches my observation and makes sense now.
Then I have no choice but to check every application where I have sourced out directories.
The better solution is to neither link nor double-mount.
If a directory is not mounted, you cannot back up that data, right?
Are you still using Clonezilla for your backups? That is, I think, a nice program for “system partitions”, but not so much for “home partitions”. By “home partitions” I mean all your personal data. Clonezilla will “clone” the whole partition, which is not needed; a backup should only make sure that the files on your system that you care about are the same as on your backup. If you do semi-frequent backups, it should only copy the files/dirs/etc. that have changed relative to the last backup, which should not be too much.
My backup strategy is not to back up “system partitions”; if things crash, I will reinstall. My “home” is backed up using rsync, a program specifically made to get two sides in sync. Try running it with --dry-run and review what it would do before running it for real.
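A minimal sketch (the destination path here is just an example):
rsync -av --dry-run /home/ /backup/home/
With --dry-run, rsync only lists what it would transfer without changing anything.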
“Then I have no choice but to check every application where I have sourced out directories.”
I do not care where applications save their data; that is not personal data. The only exception I have is Thunderbird, but even there not all data in that directory is of interest.
OK, but there is no way around it; see the explanation later.
That is also correct, but I cannot unmount on a running system just for a backup.
I use Clonezilla for full backups before major changes on the main system or when I mess with the RAID.
OK, but it’s a server (for home use only) and a little more complex than just a PC with an ordinary /home partition. Maybe I should have mentioned that earlier, but I didn’t expect the discussion to go this far.
Backstory: I set up that server probably 10 years ago with openSUSE 13.1.
The structure I planned was:
RAID 1 (system):
  /boot, ext4
  /, btrfs
  LVM with /home, /srv, /tmp, /var, swap, … , all XFS
RAID 6 (data):
  User files, database and who knows what else, each on its own partition within another LVM.
I did that for two reasons: to make backups easier and to limit capacity. If a service runs out of space, it can’t block other services.
So, I mounted /dev/raid/lv/www to /srv/www, for example; that was no problem for a long time and worked with the backup tool that was included in openSUSE 13.1 up to ?, but after upgrading to Leap 15.x it was removed from the package.
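Such a mount in /etc/fstab might look like this (the XFS type is an assumption based on the layout above):
/dev/raid/lv/www  /srv/www  xfs  defaults  0 0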
My strategy before was: I made a backup of /www (daily/weekly) and of / once a month (I have btrfs and snapshots anyway), and as far as I remember, all mounted partitions were excluded.
And here is where the problem begins. Until I find new backup software, I will just copy by hand for the next while. But it’s annoying if I want to avoid double backups.
That’s why I was interested in links and thought I could mount everything external in /mnt and then link it to the destination point where it was mounted before. With the first application I tested (aMule, because I don’t really use it and data loss when testing wouldn’t be critical), I ran into problems. That’s how it came to my opening question.
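For reference, a minimal sketch of that scheme with the /srv/www example from above (assuming /mnt/www exists as a mount point):
mount /dev/raid/lv/www /mnt/www
# /srv/www must not already exist, otherwise the link is created inside it
ln -s /mnt/www /srv/www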
Shouldn’t mount throw an error when trying to ‘mount’ a directory and not a filesystem? Like, complain that the referenced directory is not a block device?
With a 950 Pro 512 GB being too small to hold all of /home, I split /home between the 950 Pro and another 850 Evo 500 GB. They are joined into one /home using a bind mount. This works hassle-free. A soft link worked too, but caused trouble.
Backup of /home is performed with rsync. The bind mount is fully transparent, i.e. rsync -a /home ... works the same way as if /home were a single folder.
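Such a split could look like this in /etc/fstab (device and paths are hypothetical):
/dev/disk/by-label/evo850  /mnt/evo        ext4  defaults  0 2
/mnt/evo/username          /home/username  none  bind      0 0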
For backups self regularly uses tar to save folders with recursive contents.
tar --preserve-permissions -zcvf Archive/Docs20230722.tar.gz ~/Documents
tar --preserve-permissions -zcvf Archive/Public20230722.tar.gz ~/Public
tar --preserve-permissions -zcvf Archive/bin20230722.tar.gz ~/bin
tar --preserve-permissions -zcvf Archive/Pictures20230722.tar.gz ~/Pictures
Then I copy them across to other locations.
Tar on enough occasions recovered what otherwise for self could have been traumatic…
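For restoring, the matching extract call would be something like this (the target directory is just an example and must exist):
tar --preserve-permissions -zxvf Archive/Docs20230722.tar.gz -C /path/to/restore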