Snapper mounts an old snapshot by default

Hello,

I noticed today that an old snapshot is mounted by default, despite
subsequent updates with snapper. It is snapshot 108, which is marked
with a * in the output of snapper ls:

   # | Type   | Pre # | Date                            | User | Used Space | Cleanup | Description  | Userdata
-----+--------+-------+---------------------------------+------+------------+---------+--------------+--------------
   0 | single |       |                                 | root |            |         | current      |
108* | post   | 107   | Thu 07 Feb 2019 12:12:36 PM CET | root | 298.77 MiB | number  |              | important=yes
 124 | post   | 123   | Tue 26 Feb 2019 04:24:19 PM CET | root | 275.57 MiB | number  |              | important=no
 125 | pre    |       | Mon 04 Mar 2019 02:47:10 PM CET | root |  40.83 MiB | number  | zypp(zypper) | important=yes
 126 | post   | 125   | Mon 04 Mar 2019 02:59:39 PM CET | root | 145.39 MiB | number  |              | important=yes

Obviously, I cannot remove this snapshot:
$ snapper -c root delete --sync 108
Cannot delete snapshot 108 since it is the currently mounted snapshot

I thought of manually rebooting into a later snapshot in order to erase the old one, but that does not seem safe.

Is there a way to set a later snapshot as the default? Or would a rollback to a later snapshot work?

Thanks!

S.

The way BTRFS snapshots work, old snapshots are not removed when your system does a rollback (which is what appears to have happened here), so snapshots “more recently updated” than your current one should still exist (unless they were removed manually or by scheduled maintenance).

So, use snapper to inspect your snapshots and, if you wish, “roll back” to a “more recent” image.
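A sketch of what that looks like (126 here is simply the newest snapshot in your listing, substitute whatever snapper list shows on your system):

# create a new read-write default subvolume based on snapshot 126, then reboot into it
sudo snapper rollback 126
sudo systemctl reboot

# after the reboot, the new snapshot should carry the * in the listing
sudo snapper list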

TSU

I am fairly certain that the * indicates the root snapshot that all subsequent snapshots are derived from. You can’t delete it because of this. If you ever do a rollback, a new root snapshot will be created and the old one can be deleted.

My advice, don’t do a rollback unless you absolutely have to, and let snapper look after deleting snapshots. If disk space is an issue then you can change the cleanup settings.
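For example, the number-cleanup limits live in /etc/snapper/configs/root and can be tightened along these lines (the values here are only an illustration):

# keep fewer "number" snapshots around ("min-max" ranges)
sudo snapper -c root set-config NUMBER_LIMIT=2-4 NUMBER_LIMIT_IMPORTANT=2-3

# and apply the new limits straight away
sudo snapper -c root cleanup number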

Wouldn’t that be the point of rolling back to a snapshot with missing updates and changes, to establish a new “default” root?
If that proves not to be what is actually wanted, it’s always possible to roll back again to before that rollback (or try another image, but I’ve found that if you don’t return to a particular image it can get confusing which is which).

Just musing…
TSU

If that were the case then it would fall under “absolutely have to”, but rollbacks, in my experience, are not foolproof.

Eh, I use rollbacks almost daily to undo experiments related to support attempts, and I haven’t noticed that.

If a software package, docker for example, puts its data in a folder that gets snapshotted, rolling back can really screw things up. Typically you don’t discover things like this until they happen.

When you’re talking about application data, particularly long-running transactions that might be stored in a large file like a database file, then you have to deal with atomicity. Since BTRFS snapshots are not “application aware” in these situations, you shouldn’t count on snapshots to maintain the integrity of the data file. Or, you should carefully ensure your application isn’t performing a write operation when you take your snapshot. If you are caught in this kind of situation, then depending on the application there <may> be ways to recover (e.g. replaying database log files), but I wouldn’t count on these kinds of extreme “I sure hope this works” procedures.
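As a crude illustration of “not writing during the snapshot”: if the data really does live in a snapshotted subvolume, you can pause the service around the snapshot (only a sketch; mariadb is just an example service name):

# quiesce the application so no writes are in flight, snapshot, then restart
sudo systemctl stop mariadb
sudo snapper -c root create --description "while mariadb is stopped"
sudo systemctl start mariadb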

This has little or nothing to do with docker; it has everything to do with understanding atomic transactions.

I can’t think of any other relevant situation, and ordinarily, or with proper planning, this should not be an issue (if you know what you’re doing).

IMO,
TSU

My experience here is that only what has changed gets rolled back. And it should. Could you provide a more specific example for me to test what you describe? It would definitely be worth a bug report.

It’s been over a year since I encountered the docker problem. Docker created its own btrfs subvolume inside of /var/lib/docker, something like
.snapshots/1/snapshot/var/lib/docker/btrfs/subvolumes/d9f43fcb4e08e58d9828…
Getting rid of the docker snapshots after a rollback was quite involved. To avoid it I mounted /var/lib/docker on another partition.
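In practice that just means an /etc/fstab entry along these lines (the UUID and ext4 here are placeholders for whatever spare partition is used):

# keep docker's storage off the snapshotted root filesystem
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var/lib/docker  ext4  defaults  0  2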

A lot of rollback issues were resolved with the new btrfs layout for installs after January 2018. See https://en.opensuse.org/SDB:BTRFS

Thanks for your answer.

I rolled back to a later snapshot, and could then delete the old
default snapshot as well as other ones with snapper (my system survived it).

However, I noted that almost no disk space was freed up in this process,
and indeed /.snapshots still contains all the previous snapshots, while
snapper ls says they should be gone. btrfs subvolume list confirms this.

Is there any way to physically remove old snapshots from /.snapshots
in this situation?
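Would something like this be the right way to do it (NNN being one of the leftover numbers, and obviously not the currently mounted snapshot)?

# see what is actually still there
sudo btrfs subvolume list /

# then, for each leftover that snapper no longer knows about
sudo btrfs subvolume delete /.snapshots/NNN/snapshot
sudo rm -r /.snapshots/NNN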

I fear I will have to do a fresh reinstall. I have a 30 GB SSD, and while
Tumbleweed should take only 12 GB, it’s almost full because of old snapshots.

S.

Please see the section “Checking Free Space” in the link I provided above. A snapshot does not contain a complete physical copy of your system. Most important TW updates can be measured in MB, not GB, so with the number of snapshots you posted, space should not be an issue.
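From memory, the space checks on that page boil down to something like this (the 50 in the balance filter is only an example threshold):

# how much is allocated vs. actually used
sudo btrfs filesystem usage /

# compact partially used data chunks
sudo btrfs balance start -dusage=50 /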

Thanks, I have done that, and the balance did not help.
I managed to fully get rid of all snapshots except the current one:


$ sudo btrfs subvolume list -s /
ID 544 gen 297404 cgen 296835 top level 275 otime 2019-03-13 18:40:21 path .snapshots/128/snapshot

However, it seems some data from the old snapshots is still lurking here, which does not make sense:


$ sudo btrfs qgroup show -p /
qgroupid         rfer         excl parent  
--------         ----         ---- ------  
0/257         1.72GiB      1.72GiB ---     
0/258        16.00KiB     16.00KiB ---     
0/259       435.64MiB    435.64MiB ---     
0/260        16.00KiB     16.00KiB ---     
0/261        48.23MiB     48.23MiB ---     
0/262        88.32MiB     88.32MiB ---     
0/263        16.00KiB     16.00KiB ---     
0/264         2.36MiB      2.36MiB ---     
0/275        16.00KiB     16.00KiB ---     
0/465           0.00B        0.00B ---     
0/519           0.00B        0.00B ---     
0/521           0.00B        0.00B ---     
0/525           0.00B        0.00B ---     
0/526           0.00B        0.00B ---     
0/535           0.00B        0.00B ---     
0/540           0.00B        0.00B ---     
0/544        11.18GiB      6.55GiB ---    

Also, the extra “excl” 6.55GiB in the current snapshot looks suspicious. At least
snapper is consistent with this:


$ sudo snapper list
   # | Type   | Pre # | Date                            | User | Used Space | Cleanup | Description | Userdata
-----+--------+-------+---------------------------------+------+------------+---------+-------------+---------
  0  | single |       |                                 | root |            |         | current     |         
128* | single |       | Wed 13 Mar 2019 06:40:21 PM CET | root |   6.55 GiB |         |             |         


Why is there 6.55 GiB of extra used space when I have only one snapshot on my system?
Is there any way to clean up this mess?

S.

So, /home is in there, and the minimum of 40GB for btrfs is also not met.
Use YaST’s snapshot tool to remove older snapshots.
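You could also try refreshing the quota accounting, in case the numbers snapper and qgroup report are simply stale (a guess, it may not change anything):

# re-scan quota accounting and wait for it to finish
sudo btrfs quota rescan -w /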

/home is on a different (much larger) HD.
I’ve read about the 40 GiB requirement, but the snapper manual says 20 GiB is OK,
which was the case until recently, as long as I kept only a few (typically 4 at most)
older snapshots.

Thanks for pointing out the YaST snapshot tool, which I did not know about, but it only
shows the current snapshot (like snapper) and does not allow me to erase something
that should not exist :(

S.

Your first post indicates your root snapshot was 108. This means that at some point you did a rollback and the earlier snapshots were either automatically cleaned up or deleted by you. Did you ever encounter any problems?

You can use the YaST2 Partitioner tool to see if there are any orphan snapshots. Run the tool, click on the Btrfs entry and press the edit button. Do you see any snapshots that are not identified by the snapper tool? You can run both tools at the same time, but the Partitioner displays the snapshots in alphabetical order instead of ID order. Some time ago, when a rollback and docker did not behave, I had to use the Partitioner to delete the orphan snapshots.

If you find something and are considering deleting it, make sure you have any data and settings you wish to keep backed up just in case.

Under no circumstances delete any snapshot with /boot in the path, and really anything that does not contain .snapshots in the path. If you have any orphans, I would look for @/.snapshots/#/snapshot where # is a number between 2 and 108.
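Before deleting anything, it is also worth double-checking which subvolume is currently set as the default, something like:

# this is the subvolume that gets mounted as / by default - do not delete it
sudo btrfs subvolume get-default /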

Out of curiosity, what makes you think you are out of space? What do these now show:

sudo btrfs fi show /
sudo btrfs fi df /

Thanks for trying to help!

Both estimates indicate about 20GiB of disk usage:


# sudo btrfs fi df /
Data, single: total=24.00GiB, used=19.40GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=1.00GiB, used=569.30MiB
GlobalReserve, single: total=58.25MiB, used=0.00B

However, I know that my Tumbleweed can be squeezed down
to 13 GiB when all snapshots are cleared. This lower estimate
is confirmed by du -s / (excluding /home), which is perhaps not
very accurate, but certainly not off by 7 GiB.
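(Roughly the invocation I mean; -x keeps du on the root filesystem so the separately mounted /home is skipped, though on btrfs it may also skip nested subvolumes, so treat it as a rough number:)

# summarize usage of / without crossing into other mounted filesystems
sudo du -shx /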

Many people have reported that btrfs holds on to much more data
than naively expected:
https://forums.opensuse.org/showthread.php/531964-Has-Btrfs-quot-wasted-quot-some-space

However, currently I have a single snapshot left, and nothing else
to delete, and I can’t see a way to clear up this unwanted 7 GiB.

It’s annoying because disk usage climbs to almost 100% when doing
a big update.

S.

Here’s another way of seeing the problem:


# sudo btrfs qgroup show -p /
qgroupid         rfer         excl parent
--------         ----         ---- ------ 
0/580        11.17GiB      7.31GiB ---

Again this extra 7 GiB shows up in “excl”, and that does not
make sense as there is only a single snapshot.

I have the feeling that this 7GiB corresponds to all the extra
packages that were installed in the past months since the last
total cleanup of my system (I’ve had the same problem before,
but managed to get rid of it by deleting all snapshots, going down
to 13GiB for /).

I’m convinced that my system is keeping duplicates of both old
and newly updated packages, which is fine as long as the snapshots
are kept, but not once they are deleted.

Could it be that this is a general feature of btrfs/snapper, that mostly
people with a tight SSD have noticed?
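(In case it helps anyone checking the same thing, this appears to be one way to see how much of the data under / is exclusive rather than shared:)

# summarized shared vs. exclusive data for the whole subvolume
sudo btrfs filesystem du -s /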

S.

Something else you might check is /var/log for any log files that have grown too large.

I agree, /var/log/journal in particular was taking up an unwanted 1 GiB, and I
removed it yesterday, but it’s not the core problem. There is really 7 GiB of
hidden data on my machine that I cannot see or erase… I tried balancing
and defragmenting, but that did not help much.
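(Side note: instead of deleting /var/log/journal outright, capping it also works, e.g.:)

# trim the existing journal down to a size limit
sudo journalctl --vacuum-size=100M

# or make the limit permanent via /etc/systemd/journald.conf
# SystemMaxUse=100M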