New TW install and snapper questions.

I noticed that snapper is configured to only snapshot / by default… then I found the article that backed this up.

I’m using TW as a desktop and am concerned about this, since I’d think the following subvolumes would be important to snapshot (excluding /tmp). I installed with everything on /. Here is the output of btrfs subvolume list.

btrfs subvolume list /

ID 256 gen 32 top level 5 path @
ID 257 gen 15789 top level 256 path @/var
ID 258 gen 15408 top level 256 path @/usr/local
ID 259 gen 15766 top level 256 path @/tmp
ID 260 gen 4505 top level 256 path @/srv
ID 261 gen 15562 top level 256 path @/root
ID 262 gen 12627 top level 256 path @/opt
ID 263 gen 15789 top level 256 path @/home
ID 264 gen 15547 top level 256 path @/boot/grub2/x86_64-efi
ID 265 gen 28 top level 256 path @/boot/grub2/i386-pc
ID 266 gen 15588 top level 256 path @/.snapshots
ID 267 gen 15734 top level 266 path @/.snapshots/1/snapshot
ID 274 gen 64 top level 266 path @/.snapshots/2/snapshot
ID 275 gen 271 top level 266 path @/.snapshots/3/snapshot
ID 279 gen 1112 top level 266 path @/.snapshots/6/snapshot
ID 280 gen 4502 top level 266 path @/.snapshots/7/snapshot
ID 281 gen 4546 top level 266 path @/.snapshots/8/snapshot
ID 282 gen 4547 top level 266 path @/.snapshots/9/snapshot
ID 283 gen 4551 top level 266 path @/.snapshots/10/snapshot
ID 284 gen 4553 top level 266 path @/.snapshots/11/snapshot
ID 285 gen 4563 top level 266 path @/.snapshots/12/snapshot
ID 287 gen 4567 top level 266 path @/.snapshots/13/snapshot
ID 288 gen 15321 top level 266 path @/.snapshots/14/snapshot
ID 289 gen 15394 top level 266 path @/.snapshots/15/snapshot
ID 290 gen 15546 top level 266 path @/.snapshots/16/snapshot

Am I correct in understanding that the following will not be snapped? I can see how /tmp would probably be a waste of time, but the others… /var, /usr/local, /root, /opt, /home etc. I’d assume that most packages install into /usr, but I’d also assume that third-party repos or packages might use /usr/local, or put something in /root or /opt. I’m curious, for the folks running openSUSE in general, what’s considered a good policy on what to snapshot and what not? If I wanted to set up subvolumes to automatically get snapped when YaST or zypper does its thing, how would I go about making that automatic like the default /? Would it be a config file in /etc/snapper for each subvolume, or can one config file be used?

ID 257 gen 15789 top level 256 path @/var
ID 258 gen 15408 top level 256 path @/usr/local
ID 259 gen 15766 top level 256 path @/tmp
ID 260 gen 4505 top level 256 path @/srv
ID 261 gen 15562 top level 256 path @/root
ID 262 gen 12627 top level 256 path @/opt
ID 263 gen 15789 top level 256 path @/home
ID 264 gen 15547 top level 256 path @/boot/grub2/x86_64-efi
ID 265 gen 28 top level 256 path @/boot/grub2/i386-pc

The article states this about the /home dir, for example, so I’d assume this would be the same for every subvol?

During installation YaST does not setup a snapper config for /home. We can do so manually:

snapper -c home create-config /home

This creates the config file in /etc/snapper/configs for the /home subvol… correct? And does it assign the group IDs and everything else needed for it to be an automatic snapshot like /, when YaST runs system updates?

Also, what if I’d like to have it be manual instead, is there a way to do that? Thanks guys, openSUSE is new to me so I’m trying to learn. But I don’t want something to happen and then be out of luck for stuff in those subvolumes. I’d like to have a desktop that can be recovered fully. I plan on enabling the Packman repo and the Google Chrome one.
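For the manual part, something along these lines is what I have in mind — untested, and the wrapper only prints the snapper commands rather than running them:

```shell
# Hypothetical wrapper: prints the snapper command instead of running it
# (remove the wrapper and run the commands as root to do it for real).
snap() { echo snapper -c home "$@"; }

# a pre/post pair around a manual change; 42 stands in for whatever
# number the pre snapshot would actually have been given:
snap create --type pre --print-number --description "before cleanup"
# ... make the change ...
snap create --type post --pre-number 42

# or just a single point-in-time snapshot:
snap create --description "manual checkpoint"
```

And if a config should stay fully manual, I gather setting TIMELINE_CREATE="no" in its file under /etc/snapper/configs turns off the automatic hourly snapshots for it.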



Another thing… the /boot/grub2 directory, is that not important? An update or upgrade might foobar grub, and then how do you roll back via the grub loader?

I’ll note that I don’t use “btrfs” except in occasional testing.

I think your assessment is about right.

As for “/boot/grub2” – yes, that needs to be in a snapshot. If you roll back to an earlier time when you had an earlier kernel, then you do want “grub.cfg” to also be rolled back so that it is set to boot that earlier kernel. The subdirectories (“/boot/grub2/x86_64-efi” or “/boot/grub2/i386-pc”) are not in a snapshot, because they contain boot code that has to match what is in your EFI partition, your MBR, or other boot sector.
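And if “grub.cfg” ever does get out of step after a rollback, it can be regenerated; a sketch (the openSUSE path for grub.cfg, printed here as a dry run rather than executed):

```shell
# Print the regeneration command; run it as root (without the wrapper)
# only when you actually need to rebuild grub.cfg.
regen() { echo "$@"; }
regen grub2-mkconfig -o /boot/grub2/grub.cfg
```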

There is a specific grub configuration down in /.snapshots that reflects the snapshots available via the grub entry.

The use of snapper is really focused on the operating system; any user data, databases etc. should be excluded from snapshots, better kept on xfs or similar, and backed up as a separate routine…

I get what you’re saying, and I could agree with you for, say, the home directory or the mysql directory, but /opt, /var, /root… /usr/local, for example /usr/local/sbin etc.? All those can have important files that could be needed for a running system, even root with its .ssh keys etc. But I totally get what you’re saying; for a desktop I just wanted a quick way to roll back if the system goes foobar on an update, especially since I’m trying out Tumbleweed. I guess anything added to the snapper config like the wiki says is a manual option? I basically wanted everything other than /home and /tmp to be auto backed up with snapper when the system runs YaST or zypper, like it does for /. I’ll read more about it… Thanks!

Have a read here as to why:

I would just be careful if you decide to use home, especially pulling in an ISO image or two; if you don’t have a big / then it may get allocated/full without you realizing, but in saying that, careful configuration should take care of it. I use btrfs but don’t use snapshots on Tumbleweed; everything of importance is on xfs, with a separate backup routine for my important data.

sda           8:0    0 232.9G  0 disk 
├─sda1        8:1    0   260M  0 part /boot/efi
├─sda2        8:2    0   768M  0 part /boot
├─sda3        8:3    0   230G  0 part /stuff
└─sda4        8:4    0   1.9G  0 part [SWAP]
nvme0n1     259:0    0 232.9G  0 disk 
├─nvme0n1p1 259:1    0    40G  0 part /
└─nvme0n1p2 259:2    0 192.9G  0 part /data

I’d exclude /home, as that’s where I’d keep anything like ISOs or music etc., and I’d have that backed up on a remote storage solution; I’d exclude /tmp too. So basically /home and /tmp would be out of the snapshots. Again, this is just a desktop, but I’d hate for something to be in /opt, /var, /usr/local and something happen where a simple boot to grub and choosing the last snapshot wouldn’t fix it. Space looks to be plenty to me… so far. I was just surprised to see /var and /usr/local omitted, and somewhat for /opt too, but the first two really surprised me since important system files often get installed there. Thanks for the link.

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.8G     0  7.8G   0% /dev
tmpfs           7.8G  405M  7.4G   6% /dev/shm
tmpfs           7.8G  1.8M  7.8G   1% /run
tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda3       460G  6.9G  452G   2% /
/dev/sda3       460G  6.9G  452G   2% /.snapshots
/dev/sda3       460G  6.9G  452G   2% /boot/grub2/i386-pc
/dev/sda3       460G  6.9G  452G   2% /boot/grub2/x86_64-efi
/dev/sda3       460G  6.9G  452G   2% /home
/dev/sda3       460G  6.9G  452G   2% /opt
/dev/sda3       460G  6.9G  452G   2% /root
/dev/sda3       460G  6.9G  452G   2% /srv
/dev/sda3       460G  6.9G  452G   2% /tmp
/dev/sda3       460G  6.9G  452G   2% /usr/local
/dev/sda3       460G  6.9G  452G   2% /var
/dev/sda1       500M  5.3M  495M   2% /boot/efi
tmpfs           1.6G   40K  1.6G   1% /run/user/1000

Since you do have lots of space, consider moving the listed directories to separate partitions; honestly, 40-60GB for the operating system and snapshots would be sufficient. It is, after all, just to get back to a point in time…

There is also tumbleweed-cli for rolling back a release: GitHub - boombatower/tumbleweed-cli: Command line interface for interacting with Tumbleweed snapshots. That may be of interest.

The default configuration (root) deals with vendor-specific changes only. You can easily add configurations which handle site-specific changes, e.g. /srv/www/htdocs or /opt and many more. Details at:

Note: You need multiple configurations because you never want site-specific changes undone whenever you undo a system snapshot.
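For example (untested here, and the config names are just my choice), creating and using extra configurations looks roughly like this; the wrapper prints each command instead of executing it, so drop it and run as root to do it for real:

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "$@"; }

run snapper -c htdocs create-config /srv/www/htdocs  # site-specific config
run snapper -c opt create-config /opt                # another one for /opt
run snapper list-configs                             # show all existing configs
# revert only the site data, independent of any system rollback
# (41..42 is a hypothetical snapshot range):
run snapper -c htdocs undochange 41..42
```

As far as I understand it, create-config also registers the new name in SNAPPER_CONFIGS in /etc/sysconfig/snapper and writes the config file under /etc/snapper/configs, so each subvolume gets its own config file.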

Since it’s a desktop, what’s the real benefit of having a bunch of separate partitions? I’ve done it before and it seems like just more work… for a server I understand.

Thanks for the feedback.

Take a look at the explanations for the new vs old subvolume layout: SDB:BTRFS - openSUSE Wiki

AFAICT boot-up doesn’t depend on any of those subvolumes. Maybe /var; I’m not sure, but I haven’t seen any report about it. That said, you have a few options:

  • Create snapper configurations for each subvolume (that can be hourly if you prefer), so you can easily revert to a previous snapshot, although manually after/before/independently of a rollback to rootfs.
  • Remove subvolumes. If you prefer a subdirectory to roll back alongside rootfs, you can copy its content to a new directory under the root subvolume, delete the subvolume, then move the new directory into place.
  • Mix and match both options above, and even create new subvolumes.
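For the first option, a sketch (the config name and limits are examples of mine, and the wrapper only prints the commands; run them as root without it):

```shell
# Dry run: print the commands for a per-subvolume config with an
# hourly timeline; values below are example retention limits.
cfg() { echo "$@"; }
cfg snapper -c var create-config /var              # one config per subvolume
cfg snapper -c var set-config TIMELINE_CREATE=yes  # enable timeline snapshots
cfg snapper -c var set-config TIMELINE_LIMIT_HOURLY=24 TIMELINE_LIMIT_DAILY=7
```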

The default subvolume structure was refined over the years, and should work for most people. So take your time to study what’s in them, and you might find you’re overthinking. With this knowledge then perform the structural changes you see fit.
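The second option could look roughly like this — printed as a plan only, since the device name, subvolume path, and fstab edit all depend on your exact setup, and I’d do it from a rescue/live system:

```shell
# Plan only: print each step of folding /opt back into the root subvolume.
# /dev/sdX2 is a placeholder for the actual btrfs device.
step() { echo "$@"; }
step mount -o subvol=@ /dev/sdX2 /mnt         # mount the @ subvolume
step cp -a --reflink=auto /opt /mnt/opt.new   # cheap reflink copy of the content
step umount /opt                              # stop using the old subvolume
step btrfs subvolume delete /mnt/opt          # remove the old subvolume
step mv /mnt/opt.new /mnt/opt                 # plain directory takes its place
step "# then remove the /opt line from /etc/fstab and umount /mnt"
```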

Not quite: /etc is not on its own subvolume, so its contents are reverted with a rollback of rootfs. Including /etc/fstab, /etc/systemd, etc.

I think SUSE is planning or working on solving this issue by moving vendor configuration to /usr/etc and site configuration to /etc. systemd has already performed this move: /usr/lib/systemd vs. /etc/systemd. As Clear Linux puts it: “Stateless feature – designed so that the user is able to quickly and easily manage their custom configuration vs. system configuration.”

I recall when systemd was unmounting subvolumes during boot up, /var not being available caused AppArmor to fail.

More fun is lurking:

More and more Linux distributors have a distribution using atomic updates to update the system (for SUSE/openSUSE it's transactional-update). They all have the problem of updating the files in /etc, but everybody came up with a different solution which solved their own use case but is not generically usable.

Additionally, there are the "Factory Reset" and "systemd.volatile=" features from systemd, which no distribution has really fully implemented today. A unified handling of /etc for atomic updates could also help to convince upstream developers to add support to their applications for these cases.