Backing up a laptop with btrfs: what is OK, and are there any simple steps?

Please understand this is not criticism or flaming; I’m a little lost here!

Although I am a long-time unix/linux/*nix user, I had never touched the btrfs filesystem until I recently installed Tumbleweed on my MacBook.

I have been searching for information on (preferably simple) backup/restore procedures for the system as a whole, think total disk failure. Personal files in /home are no problem; as I understand it, traditional procedures work as well as ever there.

There is no shortage of information about this or that backup program in internet forums, but they seem to mostly focus on /home .
There are also lots of warnings about old-school procedures like Clonezilla or dd for disk cloning.

Most documents describe the use of snapshots for system recovery from bad updates, the accidental “rm -rf”, etc.
There are also descriptions of some (in my opinion) rather convoluted ways of saving snapshots elsewhere, which, if I have understood correctly, do not save boot records, partitions or other filesystems.

My google-fu may be lacking, but I have not found a direct, simple how-to describing how one backs up the system as a whole and stores it elsewhere, on a NAS or another server.

The official documentation does not tell the story either; or I missed it, or I didn’t understand it.

If I may, I’d like to make a copy (image) of the disk as a whole (boot record, partitions, all filesystems) to my NAS or an external disk, and then create file-based backups of changes to the installed system thereafter.

Then, when disaster strikes, I can restore the cloned image to a new disk and bring the installation up to date from the latest file backups.

Maybe this is an old technique and a “no-go” with btrfs?

Any and all pointers to docs or whatever I may have missed are welcome.

Thanks in advance!

My way won’t help much … I only back up my (XFS) /home (I use rsync) … if something catastrophic happens to the system (btrfs), I’ll simply install afresh, then restore /home
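Roughly like this; the destination is just an example path for a mounted NAS share or external disk:

```shell
# Mirror /home to a mounted backup location (destination path is an example).
# -a: preserve permissions, ownership and times; -H: keep hard links;
# -x: do not cross filesystem boundaries;
# --delete: drop files from the copy that were removed from the source.
rsync -aHx --delete /home/ /mnt/backup/home/
```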

These may (or may not) be of assistance:

A thread from the forums:

A ThinkBook runs daily backups:

thinkbook:~ # journalctl -b -u btrbk.service 
Mar 19 08:12:50 thinkbook systemd[1]: Starting btrbk backup of /home...
Mar 19 08:13:50 thinkbook check-erlangen[3015]: sleep
Mar 19 08:14:50 thinkbook btrbk[4043]: --------------------------------------------------------------------------------
Mar 19 08:14:50 thinkbook btrbk[4043]: Backup Summary (btrbk command line client, version 0.32.6)
Mar 19 08:14:50 thinkbook btrbk[4043]:     Date:   Tue Mar 19 08:13:50 2024
Mar 19 08:14:50 thinkbook btrbk[4043]:     Config: /etc/btrbk/btrbk.conf
Mar 19 08:14:50 thinkbook btrbk[4043]: Legend:
Mar 19 08:14:50 thinkbook btrbk[4043]:     ===  up-to-date subvolume (source snapshot)
Mar 19 08:14:50 thinkbook btrbk[4043]:     +++  created subvolume (source snapshot)
Mar 19 08:14:50 thinkbook btrbk[4043]:     ---  deleted subvolume
Mar 19 08:14:50 thinkbook btrbk[4043]:     ***  received subvolume (non-incremental)
Mar 19 08:14:50 thinkbook btrbk[4043]:     >>>  received subvolume (incremental)
Mar 19 08:14:50 thinkbook btrbk[4043]: --------------------------------------------------------------------------------
Mar 19 08:14:50 thinkbook btrbk[4043]: /home
Mar 19 08:14:50 thinkbook btrbk[4043]: +++ /Btrbk/btrbk_snapshots/home.20240319T0813
Mar 19 08:14:50 thinkbook btrbk[4043]: --- /Btrbk/btrbk_snapshots/home.20240317T0532
Mar 19 08:14:50 thinkbook btrbk[4043]: >>> erlangen.fritz.box:/Backup/btrbk_snapshots/thinkbook/home.20240319T0813
Mar 19 08:14:50 thinkbook btrbk[4043]: /
Mar 19 08:14:50 thinkbook btrbk[4043]: +++ /Btrbk/btrbk_snapshots/ROOT.20240319T0813
Mar 19 08:14:50 thinkbook btrbk[4043]: --- /Btrbk/btrbk_snapshots/ROOT.20240317T0532
Mar 19 08:14:50 thinkbook btrbk[4043]: >>> erlangen.fritz.box:/Backup/btrbk_snapshots/thinkbook/ROOT.20240319T0813
Mar 19 08:14:50 thinkbook systemd[1]: btrbk.service: Deactivated successfully.
Mar 19 08:14:50 thinkbook systemd[1]: Finished btrbk backup of /home.
Mar 19 08:14:50 thinkbook systemd[1]: btrbk.service: Consumed 4.968s CPU time.
thinkbook:~ # 

Backups are stored on infamous host erlangen. More: btrbk - Summary

Asking about backup/restore policies you will always get a lot of different and often very personal advice. This is mine.

I do not care about the type of filesystem involved at all. I back up important data (system configuration files and user files) on a file basis, basically using rsync in a way that lets me reach back several backup generations (there is always a user who mangled a file two weeks ago but only finds out about it now).

I repeat that this is of course independent of whether there is Btrfs on the system or not. So this may or may not answer your question.


Thank you for your input so far.

@karlmistelberger
Great find, btrbk looks simple enough.

@aggie, @hcvv
Yes, a strategy to consider: “list the user-installed packages, back up what can’t be downloaded again”, and reinstall the rest.
A few hours of work and we’re good to go.
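For the record, a sketch of how such a package list could be captured; the file name is just an example:

```shell
# Save the names of all installed packages so they can be replayed later.
rpm -qa --qf '%{NAME}\n' | sort > installed-packages.txt

# After a fresh install, feed the list back to zypper, for example:
# zypper install $(cat installed-packages.txt)
```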

My long-time strategy of disk imaging plus file-based backups is not faster; it’s just what I am used to…

I have considered reinstalling with ext4 or xfs, but I really like btrfs bootable snapshots. Awesome!

The heart of the matter might be: is it OK to use Clonezilla or dd to image a btrfs system and restore it to a new disk?

I do not see a link with what I said. And it certainly does not take “a few hours” to set up an rsync-based backup strategy. There are many backup tools based on rsync to choose from.

I am also not sure what you mean by “user-installed packages”. When a user installs software for him/herself, it goes in his/her home directory. Thus when everything in /home is subject to backup, it is there in the same way as the user’s mails, documents, images, videos, whatever.

My advice is always to first decide what disaster(s) you are planning for. That will largely determine what to back up, with what method, and where to store it. And of course how to recover from the different disasters (and yes, do not only write down the theory, but test that recovery once you have everything implemented!)

Someone here had issues with Clonezilla :x:
dd should always work :white_check_mark:

But both of those methods require the filesystem to be taken offline before each backup.
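For completeness, the dd variant would look something like this, run from a live/rescue system with the disk unmounted; the device name, user and host are examples:

```shell
# Image the whole disk (boot record, partition table, all filesystems)
# and stream it compressed to a NAS over ssh.
dd if=/dev/sda bs=1M status=progress | gzip | \
  ssh user@nas.local 'cat > backups/laptop-disk.img.gz'

# Restore onto a replacement disk of the same size or larger:
# ssh user@nas.local 'cat backups/laptop-disk.img.gz' | gunzip | dd of=/dev/sda bs=1M
```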

A better option, as @karlmistelberger has said, is using btrfs send/receive to back up snapshots incrementally to another btrfs target (a backup drive formatted with btrfs). Blazing fast :rocket:
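In sketch form (paths are examples, both sides must be btrfs, and the commands need root):

```shell
# Initial run: create a read-only snapshot and send it in full.
btrfs subvolume snapshot -r /home /snapshots/home.1
btrfs send /snapshots/home.1 | btrfs receive /mnt/backupdrive/

# Subsequent runs send only the difference against the parent snapshot (-p).
btrfs subvolume snapshot -r /home /snapshots/home.2
btrfs send -p /snapshots/home.1 /snapshots/home.2 | btrfs receive /mnt/backupdrive/
```

Tools like btrbk automate exactly this rotation so you don’t have to track parent snapshots by hand.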

If you have a healthy dose of fear (of btrfs), I suggest additionally backing up to a more traditional ext4 target as well, using rsync or borg/borgmatic.

Here’s my borgmatic config for backing up over ssh to a remote server, you should be able to use it as-is with the default btrfs subvolume layout:

# /etc/borgmatic/config.yaml
source_directories:
    - /rootfs.latest
    - /home.latest
    - /opt.latest
    - /root.latest
    - /srv.latest
    - /usr-local.latest
    - /var.latest

repositories:
    - ssh://<user>@<remote-server-hostname.tld>/./suse-pc

encryption_passphrase: "<your-borg-repo-password>"

retention:
    keep_daily: 7
    keep_weekly: 4

hooks:
    before_backup:
        - btrfs subvolume snapshot -r / /rootfs.latest
        - btrfs subvolume snapshot -r /home /home.latest
        - btrfs subvolume snapshot -r /opt /opt.latest
        - btrfs subvolume snapshot -r /root /root.latest
        - btrfs subvolume snapshot -r /srv /srv.latest
        - btrfs subvolume snapshot -r /usr/local /usr-local.latest
        - btrfs subvolume snapshot -r /var /var.latest

    after_backup:
        - btrfs subvolume delete /rootfs.latest
        - btrfs subvolume delete /home.latest
        - btrfs subvolume delete /opt.latest
        - btrfs subvolume delete /root.latest
        - btrfs subvolume delete /srv.latest
        - btrfs subvolume delete /usr-local.latest
        - btrfs subvolume delete /var.latest

    on_error:
        - btrfs subvolume delete /rootfs.latest
        - btrfs subvolume delete /home.latest
        - btrfs subvolume delete /opt.latest
        - btrfs subvolume delete /root.latest
        - btrfs subvolume delete /srv.latest
        - btrfs subvolume delete /usr-local.latest
        - btrfs subvolume delete /var.latest

Just to clarify.
I’m using BTRFS for all partitions (/ …) , except for my /home, which is XFS.

@btr whatever you decide to do, pull the storage device from the laptop, replace it with a new device, and test… backups are no good without a real test…

FWIW, I only use rsync for my data and config files over ssh to another system.

How many more partitions do you have besides those for / and /home?

From original post:

I do not see a link with what I said. And it certainly does not take “a few hours” to set up an rsync-based backup strategy.

But restoring a failed system (new disk, new boot records, new partitions) and then restoring files will.

@btr again, that all depends on your backup strategy; you can back up the disk’s GPT info, partition info, EFI files if you want to, etc…

Last time for me (2021), fresh install and setup was around 45 minutes. I just get data as needed off the backup.


Yep. It’s robust and easy to configure too, e.g.:

6700k:~ # cat /etc/btrbk/btrbk.conf
snapshot_preserve        1d 1w 1m 1y
target_preserve          1d 1w 1m 1y

snapshot_dir               /Btrbk/btrbk_snapshots
target                     ssh://erlangen.fritz.box/Backup/btrbk_snapshots/6700k
subvolume                  /home
6700k:~ # 

During half a year of testing I created thousands of backups, some 300 terabytes altogether, without a single glitch.


Clonezilla also has a dd mode, IIRC


I think the earlier problem stemmed from using the partclone option with btrfs:
https://clonezilla.org/clonezilla-live/doc/01_Save_disk_image/advanced/09-advanced-param.php

Last time, restoring the whole system from a local btrfs snapshot backup took me around 3 hours.

Backing up and restoring from btrfs snapshots ensures the restored system is in a consistent state, as btrfs snapshots are atomic.

The Clonezilla creators do write “supports btrfs” among a zillion other filesystems, but nothing about partclone or the other utilities used by Clonezilla.

Think I’ll try a Clonezilla image/restore run on this system, as it is in flux right now anyway.

Related question: how do I know if btrfs is in good shape? Is the following enough:

btrfs device stats /

and possibly

btrfs scrub start -Bq /

Does anyone know if there are definitive documents on this? I may have missed them…
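Put together, what I have in mind is something like this (as root, with / being the btrfs mount):

```shell
btrfs device stats /      # cumulative read/write/corruption error counters
btrfs scrub start -Bq /   # -B: run in the foreground, -q: print only errors
btrfs scrub status /      # summary of the most recent scrub
```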

Scrub, followed by a btrfs check from a rescue ISO while the filesystem is unmounted; scrub won’t find errors in btrfs internals the way check does.

What about

btrfs check --check-data-csum  --force /dev/sda2

I’m fishing for ways to check for errors while still “online”; I understand fixing while running is bad :slight_smile:

That’s essentially what scrub does.

From man 8 btrfs-check:

       --check-data-csum
              verify checksums of data blocks

              This expects that the filesystem is otherwise OK, and is basically an offline scrub that does not repair data from spare copies.

From man 8 btrfs-scrub:

       Scrub is a pass over all filesystem data and metadata and verifying the checksums. If a valid copy is available (replicated block group profiles) then the damaged
       one is repaired. All copies of the replicated profiles are validated.

       NOTE:
          Scrub  is  not a filesystem checker (fsck) and does not verify nor repair structural damage in the filesystem. It really only checks checksums of data and tree
          blocks, it doesn't ensure the content of tree blocks is valid and consistent. There's some validation performed when metadata blocks are read from  disk  (Tree
          checker) but it's not extensive and cannot substitute full btrfs-check(8) run.

These are read-only tests. Typically users want to write to the file system. They want to check the atomic behaviour of their drives.

https://btrfs.readthedocs.io/en/latest/Hardware.html

A full balance without filtering reallocates all chunks and is indeed a great sanity check: Infamous Host erlangen - #4 by karlmistelberger
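For example (recent btrfs-progs require an explicit flag for an unfiltered balance); note that this rewrites every chunk and can take hours:

```shell
# Reallocate all data and metadata chunks; heavy I/O, best run while idle.
btrfs balance start --full-balance /
# Check progress from another shell if the balance was started in the background:
btrfs balance status /
```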