System booting from old snapper snapshot by default

When I run snapper list, there is a star next to a rather old snapshot, meaning that the system boots into that snapshot by default. I don’t really understand how this can be the case, since I have performed a few zypper dup operations over time, upgraded kernels, and the programs on my system are up to date.

The problem is that, looking at the output of top, my laptop spends a lot of time and power running a kworker/u64:0+btrfs-qgroup-rescan operation in the background a few times a day, which drains my battery, and I suspect it could have something to do with my snapshots. This kworker operation also runs when I run snapper list, which then takes a few minutes to return its output.
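For reference, this is how I catch the worker and query the rescan state directly (btrfs quota rescan -s reports whether a qgroup rescan is currently running, assuming the root filesystem is mounted at /):

```shell
# Catch the background worker in a single batch-mode top sample.
top -b -n 1 | grep -i 'btrfs-qgroup'

# Ask btrfs itself whether a qgroup rescan is in progress on /.
sudo btrfs quota rescan -s /
```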

This is the output of snapper list:

$ sudo snapper list
[sudo] password for root: 
   # │ Type   │ Pre # │ Date                         │ User │ Used Space │ Cleanup │ Description           │ Userdata
─────┼────────┼───────┼──────────────────────────────┼──────┼────────────┼─────────┼───────────────────────┼──────────────
  0  │ single │       │                              │ root │            │         │ current               │
439* │ single │       │ Do 04 Apr 2024 15:25:17 CEST │ root │  26.52 MiB │         │ writable copy of #436 │
462  │ pre    │       │ Fr 12 Apr 2024 15:51:45 CEST │ root │   1.70 GiB │ number  │ zypp(zypper)          │ important=yes
463  │ post   │   462 │ Fr 12 Apr 2024 15:53:05 CEST │ root │ 262.15 MiB │ number  │                       │ important=yes
488  │ pre    │       │ Sa 08 Jun 2024 15:03:23 CEST │ root │  70.05 MiB │ number  │ zypp(zypper)          │ important=yes
489  │ post   │   488 │ Sa 08 Jun 2024 15:04:36 CEST │ root │  86.23 MiB │ number  │                       │ important=yes
511  │ pre    │       │ Sa 06 Jul 2024 14:02:11 CEST │ root │ 252.24 MiB │ number  │ zypp(zypper)          │ important=yes
512  │ post   │   511 │ Sa 06 Jul 2024 14:45:08 CEST │ root │ 215.28 MiB │ number  │                       │ important=yes
513  │ pre    │       │ So 07 Jul 2024 12:49:15 CEST │ root │   1.15 MiB │ number  │ yast bootloader       │
514  │ post   │   513 │ So 07 Jul 2024 12:49:55 CEST │ root │   1.17 MiB │ number  │                       │
515  │ pre    │       │ Mo 08 Jul 2024 14:21:47 CEST │ root │   1.16 MiB │ number  │ yast snapper          │
516  │ post   │   515 │ Mo 08 Jul 2024 14:22:56 CEST │ root │   1.03 MiB │ number  │                       │

I would like to get rid of the kworker task. In my mind it would make most sense to default to a newer snapshot and delete the old ones, as I suspect that keeping all the snapshots around causes the long kworker task. How can I default to the newer snapshot?

I hope you can help me out. I am on kernel 6.9.7-1-default, openSUSE release 20240704 (this is while booted into snapshot 439, which is supposedly from April 2024?).

The btrfs snapshots are a little confusing. You would usually think of a snapshot as a static point in time, but (as I understand it) when you boot into a read-write snapshot, you continue using and updating that snapshot over time, so it actually represents the current state of your system even though it appears to be old. I think what you’re seeing is perfectly normal and not the cause of your other issues.

That qgroup scan you are seeing is related to quotas. Did you intend to enable quotas on your system? Maybe you can disable them if you don’t actually need them.


Ah okay, the mental model I had in mind was that the current system builds on the latest snapshot and keeps diffs relative to it, but your explanation also makes sense. So the current state is a modification of an early snapshot, new snapshots are created from this modification, and my base snapshot only changes when I roll back… I am a bit confused how this can be storage efficient, since without rolling back often, the system has to keep around a state that differs greatly from the current one.
Anyway, I do not intend to use quotas on my system. I do have two local users, but this is my personal device that only I use. I will try to find a way to disable quotas and hope that remedies the qgroup scan issue. Thank you for the suggestion!

I looked up the quota issue and found a thread with a similar issue:

https://bugzilla.opensuse.org/show_bug.cgi?id=1017461

It seems like snapper needs quotas to clean up snapshots that take up too much space, but that is not an issue in my case, so I disabled both the quota-based clean-up and the btrfs quotas themselves:

$ sudo snapper set-config QGROUP=
$ sudo btrfs quota disable /

I still have to test out the effects in the coming days but I hope this resolves this issue.
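For anyone following along, this is how I plan to verify that both changes took effect (assuming the default snapper config for the root filesystem):

```shell
# QGROUP should now be empty in snapper's root config.
sudo snapper get-config | grep QGROUP

# With quotas disabled, listing qgroups should fail with an error
# saying that quotas are not enabled.
sudo btrfs qgroup show /
```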


There is no issue. By default, you boot into snapshot 1. At some point, you performed a rollback to snapshot 436. Once you did that, a writable copy (snapshot 439) became the new root filesystem. Here’s my output of snapper list:

# | Type   | Pre # | Date                     | User | Used Space | Cleanup | Description           | Userdata
---+--------+-------+--------------------------+------+------------+---------+-----------------------+-------------
0  | single |       |                          | root |            |         | current               |
1* | single |       | Wed Jan  3 13:46:51 2024 | root |  60.34 MiB |         | first root filesystem |
2  | pre    |       | Wed Jul 10 21:40:23 2024 | root | 212.08 MiB | number  | zypp(zypper)          | important=no
3  | post   |     2 | Wed Jul 10 21:40:49 2024 | root | 864.00 KiB | number  |                       | important=no
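You can also see this from the filesystem side: the starred snapshot is simply the btrfs default subvolume that the bootloader boots. A quick check (the exact path depends on your layout; on openSUSE it is usually @/.snapshots/<N>/snapshot):

```shell
# Print the default subvolume for /; after a rollback this points at the
# writable copy, e.g. the subvolume for snapshot 439 in your case.
sudo btrfs subvolume get-default /
```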

That is exactly right; same situation here:

    # | Type   | Pre # | Date                     | User | Used Space | Cleanup | Description            | Userdata     
------+--------+-------+--------------------------+------+------------+---------+------------------------+--------------
   0  | single |       |                          | root |            |         | current                |              
1887* | single |       | Sat Jan  1 22:16:19 2022 | root |  34.55 MiB |         | writable copy of #1879 |              
2817  | pre    |       | Thu May 30 22:36:50 2024 | root |   1.14 GiB | number  | zypp(zypper)           | important=yes
2818  | post   |  2817 | Thu May 30 22:43:28 2024 | root |   2.52 MiB | number  |                        | important=yes
2819  | pre    |       | Fri May 31 15:49:03 2024 | root | 352.00 KiB | number  | zypp(zypper)           | important=yes
2820  | post   |  2819 | Fri May 31 15:50:23 2024 | root | 253.07 MiB | number  |                        | important=yes
2823  | pre    |       | Wed Jun  5 22:34:13 2024 | root | 393.28 MiB | number  | zypp(zypper)           | important=yes
2824  | post   |  2823 | Wed Jun  5 22:43:45 2024 | root | 516.10 MiB | number  |                        | important=yes
2829  | pre    |       | Thu Jun 27 22:05:13 2024 | root | 584.16 MiB | number  | zypp(zypper)           | important=yes
2830  | post   |  2829 | Thu Jun 27 22:21:45 2024 | root |   2.91 MiB | number  |                        | important=yes
2831  | pre    |       | Thu Jun 27 22:51:44 2024 | root |   1.28 MiB | number  | zypp(zypper)           | important=no 
2832  | post   |  2831 | Thu Jun 27 22:54:55 2024 | root |   1.94 MiB | number  |                        | important=no 
2833  | pre    |       | Sun Jun 30 14:51:59 2024 | root |   6.20 MiB | number  | zypp(zypper)           | important=yes
2834  | post   |  2833 | Sun Jun 30 14:53:30 2024 | root |  13.58 MiB | number  |                        | important=yes
2835  | pre    |       | Mon Jul  8 22:39:27 2024 | root |   5.88 MiB | number  | zypp(zypper)           | important=no 
2836  | post   |  2835 | Mon Jul  8 22:39:37 2024 | root |   3.12 MiB | number  |                        | important=no 
