When I first installed tumbleweed I followed the proposed partition scheme: BTRFS for the system and XFS for the data.
My current setup is the following:
I’m wondering if it’d be possible to just format my /home to BTRFS and move back the data from a backup.
The reason I want to switch my home to BTRFS is the possibility of creating snapshots and backing them up with restic on a different volume.
Hi, thanks for responding.
I have done several backups of my /home using both restic and rsync locally on a separate drive, and remotely on NAS.
The reason I want to use BTRFS snapshots is that rsync does not guarantee backup consistency, and it also has problems with hard links.
I think backing up snapshots taken at a specific point in time is best, because if a file changes while I am backing it up, the backup is no longer consistent.
I use this strategy with my NAS and so far it has always worked: I take a snapshot of a partition, and then I back up the snapshot.
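The snapshot-then-backup strategy described above could be sketched like this on a Btrfs system (the snapshot directory and restic repository path are assumptions, not taken from the thread; the commands need root and a Btrfs filesystem):

```shell
# Take a read-only snapshot of /home at a single point in time
SNAP=/home/.snapshots/home-$(date +%F)
btrfs subvolume snapshot -r /home "$SNAP"

# Back up the frozen snapshot, not the live data
restic -r /mnt/external/restic-repo backup "$SNAP"

# Remove the snapshot once the backup is done
btrfs subvolume delete "$SNAP"
```

Because the snapshot is read-only and atomic, files changing in /home during the restic run cannot make the backup inconsistent.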
I do not use Btrfs and do not dare to advise on its use. I only commented on how to “change” a file system from one type to another. This cannot be done while keeping the data on it. Files must be backed up and can then be copied back onto the new file system (which may be in the same place and of the same size, or in a different place and/or of a different size).
Strange, but since you have never reported this here for help, I have no comment.
Be aware that Btrfs snapshots are no replacement for back-ups.
Been using rsync for more years than I can count (since the beginning, I guess).
Highly reliable.
We would NEVER consider switching our separate XFS dedicated /home to BTRFS.
It’s worth repeating… rsync is very reliable (which is why many Linux independent backup systems use it).
I know. The only reason I was thinking about this solution–converting /home to BTRFS–is that by being able to take a precise snapshot, it allows me to back up the home at that precise moment. If for example during a copy with rsync or a backup with restic I modify a file (even unintentionally, e.g. a firefox cache file) the backup is inconsistent.
To give you an idea of my NAS backup strategy I give you this example.
I have a /docker folder that contains all my self-hosted services. To back up this folder, I create a snapshot every day (because I configured it that way from the NAS, but I could do one every hour/day/week etc) and I created another script that takes the last snapshot of the /docker folder and backs it up with restic to an external disk connected to the NAS.
With restic I can mount the snapshot as if it were any volume, take a specific file or restore the entire volume and be sure it is a snapshot of a specific instant of that folder.
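Browsing and restoring from a restic repository works roughly like this (the repository path and mount point are illustrative; `restic mount` needs FUSE and the commands prompt for the repository password):

```shell
# List the snapshots stored in the repository
restic -r /mnt/external/restic-repo snapshots

# Mount the repository so snapshots can be browsed like any filesystem
restic -r /mnt/external/restic-repo mount /mnt/restic

# Or restore the most recent snapshot wholesale
restic -r /mnt/external/restic-repo restore latest --target /restore
```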
A BTRFS snapshot is not necessarily a full copy of the files on the (sub)volume. It may just record the changes. You can probably force a full copy with a manual snapshot, but I don’t know. However,
why would backing up a copy be better than backing up the original set? You’d run into exactly the same issues.
I’d say this is true for any kind of backup. But this can be overcome with a) discipline and b) conflict solvers offered by tools such as unison. rsync may have similar options.
About backing up with BTRFS you may want to check out the send/receive function after about 30 min in the video.
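For reference, Btrfs send/receive works on read-only snapshots and can ship only the deltas after the first full transfer (paths here are placeholders; both sides must be Btrfs, and the commands need root):

```shell
# Initial full transfer: send a read-only snapshot to another Btrfs filesystem
btrfs subvolume snapshot -r /home /home/.snapshots/base
btrfs send /home/.snapshots/base | btrfs receive /mnt/backup

# Later, incremental: send only the changes relative to the previous snapshot
btrfs subvolume snapshot -r /home /home/.snapshots/today
btrfs send -p /home/.snapshots/base /home/.snapshots/today | btrfs receive /mnt/backup
```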
Are you logged in as a user when you do the rsync?
We always log out from all user accounts. Then we log in as root user at Console 1, and drop to init 3 level (text mode only, multi-user), then run rsync on /home.
The errors you show seem indicative that your user account is logged in while running rsync.
Can you explain better what you mean by ‘drop to init 3 level’?
I currently use Tumbleweed and KDE. I log out, and then press CTRL+ALT+F1 to open the console. Then I log in as root, and if I launch rsync I get errors similar to the ones in my previous post, in particular about symlinks. It is not clear to me how this approach avoids the symlink errors.
Perhaps I did not explain myself well.
The errors I get are those related to symlinks; see my previous post.
These are symlink related errors, not input output errors. I make regular backups to NAS and other local disks and I am 100% sure they are not disk-related errors.
Also, as @aggie suggested, I tried logging out of KDE from my standard user and pressed Ctrl+Alt+F1 to log in to tty1 as root.
In this case too, using rsync I get the same symlink-related errors/warnings.
My doubt at this point is that the command[1] I am using is incorrect, and I am wondering: what is the correct way to back up the /home so that it can be restored 1-1 in a data recovery and/or migration situation?
Everyone has different requirements and different hardware.
We use a Samsung T7 Shield external SSD to store our backups for the laptop and desktop machines. (and our Samsung phones) - we use a USB-C to USB-C cable to connect to the machines.
For our requirements, we do this
# mount --source /dev/sdb3 --target /mnt/target
(in this case, sdb3 is an XFS partition / filesystem on the T7 external drive)
[ ... ] our rsync :
# rsync -a --info=progress2 --exclude="lost+found" /home/ /mnt/target/laptop
Obviously (well obviously to us), we have a “/mnt/target/desktop” for the backup of the desktop machine … and we have an NTFS partition / filesystem to backup the Win10 laptop … and we have a dedicated partition / filesystem to back up our Samsung s21 phones. (all on the T7 drive - we used the openSUSE machine to re-configure / format the external T7 drive for the different filesystems).
And yes, we drop to init 3 runlevel. So, CTRL-ALT-F1, log in as root user, then execute the command “init 3”, which is “multi-user with no graphical desktop running”. If you’re unfamiliar with the different runlevels of Linux (Unix), I suggest doing some research to understand them.
Thanks, after your post I did some research and in the end made a simple script that switches to init 3 with this:
echo "Switching to multi-user.target (runlevel 3)..."
systemctl isolate multi-user.target
I didn’t know about init 3 – thanks.
Then I back up my home both to a spare local HDD and to my NAS:
rsync -a --info=progress2 --force --delete --exclude-from ~/.exclude_patterns /home/fabio/ /volumes/backup/home/fabio/
rsync -a --info=progress2 --force --delete --exclude-from ~/.exclude_patterns /home/fabio/ /mnt/magnum/homes/fabio/
I’m excluding the .cache folder and some other files from the backup, and then I shut down my workstation.
It works better than running rsync as the user, and another advantage is that I do not have issues with permissions. The only issues I still have are errors with some symlinks, but they are related to some Flatpaks I use to play games on Steam and Lutris, so I do not care too much. Overall it works better, thanks. I’ll also do the restic backups this way from init 3.
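For reference, an `--exclude-from` file is just one pattern per line, with `#` for comments. A minimal example (these patterns are illustrative, not the poster’s actual file):

```
# rsync exclude patterns, one per line
.cache/
.local/share/Trash/
*.tmp
```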
Technically, your command to switch to Runlevel 3 is the proper way (i.e., using systemctl isolate multi-user.target) versus doing an “init 3”.
I’m surprised no Admins in here have corrected us … whenever we recommend “run init 3”, they usually correct us and say that “use systemctl …” is the proper way.
The idea to drop the Runlevel is to eliminate any background GUI processes that may run while doing the backup.
Heck, you could even drop to Runlevel 1 and do the backup, to eliminate any other rogue processes that may want to run.
Our desktop machine has two separate NVME drives, each with its own separate instance of openSUSE.
So, we boot into instance A and backup the /home of instance B. And then boot into instance B and backup the /home of instance A. That ensures there is zero activity occurring with the “inactive” /home.
On the laptops, there is only a single install of openSUSE. So, as mentioned, we drop down to the lower Runlevel and do the backup logged in a root user, to backup the /home of users NOT logged in.
And yea, excluding the ~/.cache folder is fine, since it is just that: temporary caching so apps can run a bit faster with the info.
I have the feeling that you are going in the wrong direction. Instead of changing file system type and messing with system targets (ex runlevels), why don’t you check the proper way to use rsync to synchronise the data that you need?
You first must have a way to rsync your data from the XFS /home partition to your external backup partition – and back again. Beware of symlinks and hard links; be creative if you must, and find solutions for the sym/hard link problems. Test rsyncing back and forth thoroughly.
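The symlink and hard-link behaviour is easy to verify on a small test tree before trusting a full /home round trip. The demo below uses illustrative paths under /tmp; `-a` keeps symlinks as symlinks, and `-H` additionally reproduces hard links on the destination:

```shell
# Build a tiny source tree containing a symlink and a hard link
rm -rf /tmp/rsync-demo
mkdir -p /tmp/rsync-demo/src
echo "data" > /tmp/rsync-demo/src/file.txt
ln /tmp/rsync-demo/src/file.txt /tmp/rsync-demo/src/hardlink.txt   # hard link, same inode
ln -s file.txt /tmp/rsync-demo/src/symlink.txt                     # relative symlink

# -a preserves symlinks; -H additionally preserves hard links
rsync -aH /tmp/rsync-demo/src/ /tmp/rsync-demo/dst/
```

After the copy, `dst/hardlink.txt` shares an inode with `dst/file.txt`, and `dst/symlink.txt` is still a symlink pointing at `file.txt`. Note that `-a` alone does not preserve hard links, which may explain some of the warnings seen earlier in the thread.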
If the rsyncing works, you can make your move from the XFS /home partition to a BtrFS @home subvolume. Don’t forget to deal with /etc/fstab: make the necessary backups of that file and the necessary changes to it.
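Once the @home subvolume exists, the /etc/fstab entry looks roughly like this (the UUID and options are placeholders, not taken from the thread — use the UUID of your own Btrfs filesystem from `blkid`):

```
UUID=<uuid-of-btrfs-filesystem>  /home  btrfs  subvol=/@home  0  0
```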
If you have an old spare SSD (64 - 128 GB should be enough) you can even practise! – And you should, because if you only have the / (root) BtrFS with /home (and others) as BtrFS @subvolumes, you need to be able to maneuver all things BtrFS anyway.
And let’s not forget, there’s another option. If you really have a backup of your XFS /home partition, you could just as well flush your original SSD (no more partitions there) and then install Tumbleweed afresh, deselecting [ ] the extra /home partition.