Snapshot Space Management

I just upgraded from Leap 15.1 to Leap 15.2 and then opened a disk utility to find out how much space the root filesystem is using. The hard disk has these partitions: EFI, swap, root, and home. Root is 43GB and had about 3.8GB free. After determining that the bulk of the space was used by snapshots, I deleted about half of the 10 or so snapshots. Now there is 10GB free. Here is what I have now:


    # | Type   | Pre # | Date                            | User | Cleanup | Description   | Userdata     
------+--------+-------+---------------------------------+------+---------+---------------+--------------
   0  | single |       |                                 | root |         | current       |              
2311* | single |       | Fri 18 Dec 2020 02:10:58 AM EST | root |         |               |              
2613  | pre    |       | Sat 13 Mar 2021 04:30:41 PM EST | root | number  | before update | important=yes
2614  | post   |  2613 | Sat 13 Mar 2021 07:38:28 PM EST | root | number  | after update  | important=yes
2615  | pre    |       | Sat 13 Mar 2021 07:41:27 PM EST | root | number  | zypp(zypper)  | important=yes
2616  | post   |  2615 | Sat 13 Mar 2021 07:42:17 PM EST | root | number  |               | important=yes

I can’t get the space-used column to show even when specifying it with the --columns option.
It seems these snapshots are still taking up more space than they should. Is there something I should do to make them take up less space without deleting any more snapshots or should I make the root partition larger?

Hi
First off, have you configured how many snapshots are kept (edit /etc/snapper/configs/root)? Second, have you run (or let the system run) the btrfs cleanup routines?

To see when things are due to run:


systemctl list-timers

You can always kick off the maintenance services manually, as listed in the ‘Activates’ column, with systemctl start <service>.
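
For example, assuming the usual openSUSE service names that show up in the ‘Activates’ column (adjust to whatever your own timer list shows):

# run the snapper cleanup now instead of waiting for its timer
systemctl start snapper-cleanup.service
# rebalance btrfs chunks so empty or nearly-empty chunks are returned to unallocated space
systemctl start btrfs-balance.service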

I didn’t set how many snapshots are kept. I assume it’s configured to a default value. An excerpt from the referenced configuration file is:


# run daily number cleanup
NUMBER_CLEANUP="yes"

# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="10"
NUMBER_LIMIT_IMPORTANT="10"
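
For what it’s worth, the effective values can also be read back with snapper itself rather than opening the file (a sketch; get-config should be available in Leap 15.2’s snapper):

# print the current settings of the root config, including NUMBER_LIMIT
snapper -c root get-config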

I’m not sure which service runs the btrfs cleanup routines. The output from systemctl list-timers is:


NEXT                         LEFT                LAST                         PASSED             UNIT                         ACTIVATES
Mon 2021-03-15 22:00:00 EDT  11min left          Mon 2021-03-15 21:00:43 EDT  47min ago          snapper-timeline.timer       snapper-timeline.service
Tue 2021-03-16 00:00:00 EDT  2h 11min left       Mon 2021-03-15 00:00:19 EDT  21h ago            logrotate.timer              logrotate.service
Tue 2021-03-16 00:00:00 EDT  2h 11min left       Mon 2021-03-15 00:00:19 EDT  21h ago            mandb.timer                  mandb.service
Tue 2021-03-16 00:00:00 EDT  2h 11min left       Mon 2021-03-15 00:00:19 EDT  21h ago            unbound-anchor.timer         unbound-anchor.service
Tue 2021-03-16 01:03:30 EDT  3h 14min left       Mon 2021-03-15 01:30:43 EDT  20h ago            backup-sysconfig.timer       backup-sysconfig.service
Tue 2021-03-16 01:05:00 EDT  3h 16min left       Mon 2021-03-15 01:05:43 EDT  20h ago            mdcheck_continue.timer       mdcheck_continue.service
Tue 2021-03-16 01:25:49 EDT  3h 37min left       Mon 2021-03-15 01:36:43 EDT  20h ago            check-battery.timer          check-battery.service
Tue 2021-03-16 01:54:50 EDT  4h 6min left        Mon 2021-03-15 01:46:43 EDT  20h ago            backup-rpmdb.timer           backup-rpmdb.service
Tue 2021-03-16 02:00:00 EDT  4h 11min left       Mon 2021-03-15 02:00:43 EDT  19h ago            mdmonitor-oneshot.timer      mdmonitor-oneshot.service
Tue 2021-03-16 20:50:43 EDT  23h left            Mon 2021-03-15 20:50:43 EDT  57min ago          snapper-cleanup.timer        snapper-cleanup.service
Tue 2021-03-16 20:56:43 EDT  23h left            Mon 2021-03-15 20:56:43 EDT  51min ago          systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Sun 2021-03-21 01:00:00 EDT  5 days left         Sun 2021-03-14 01:00:43 EST  1 day 19h ago      mdcheck_start.timer          mdcheck_start.service
Mon 2021-03-22 00:00:00 EDT  6 days left         Mon 2021-03-15 00:00:19 EDT  21h ago            btrfs-balance.timer          btrfs-balance.service
Mon 2021-03-22 00:00:00 EDT  6 days left         Mon 2021-03-15 00:00:19 EDT  21h ago            btrfs-trim.timer             btrfs-trim.service
Mon 2021-03-22 00:00:00 EDT  6 days left         Mon 2021-03-15 00:00:19 EDT  21h ago            fstrim.timer                 fstrim.service
Thu 2021-04-01 00:00:00 EDT  2 weeks 2 days left Mon 2021-03-01 00:00:44 EST  2 weeks 0 days ago btrfs-scrub.timer            btrfs-scrub.service

16 timers listed.

Thanks.

Hi
I would reduce the number to 5 or 6…
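
One way to do that, as a sketch (the limit values here are only illustrative; set-config writes the same keys shown in your config excerpt):

# lower the number-cleanup limits in /etc/snapper/configs/root via snapper
snapper -c root set-config NUMBER_LIMIT=6 NUMBER_LIMIT_IMPORTANT=4
# then run the number cleanup right away instead of waiting for the timer
snapper -c root cleanup number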

Timers ran 21 hrs ago, so that’s good :wink:

What about the btrfs usage status?


btrfs fi usage /

Is the filesystem full? Post the output.

I can’t get the space-used column to show even when specifying it with the --columns option.

This requires that quota be enabled on the btrfs filesystem. It was not enabled by default in the past. To get more or less the same information you could do:

btrfs quota enable /
btrfs quota rescan -w /
btrfs qgroup show /

although this will miss summary information.
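
Once quota is enabled and the rescan has finished, the per-snapshot column you were after should also populate; roughly (a sketch, assuming this snapper version accepts used-space as a column name for list):

snapper list --columns number,date,cleanup,used-space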

It seems these snapshots are still taking up more space than they should.

I am not sure what “should” means here. Snapshots take up exactly the space they take up; there is no target for how much space they “should” consume. If “should” means “you were expecting them to take up less”, then note that after a full distribution upgrade total space consumption is expected to at least double: you need space for (at least one) snapshot of the old version as well as space for the new version, and the upgrade replaced almost everything in the old version.

It is also possible to have “orphaned” snapshots. Post the output of:

btrfs subvolume list /
btrfs subvolume get-default /
btrfs fi usage -T /
snapper list

An interesting question is this: how much space will be freed when I remove a specific snapshot?
For example, I removed several snapshots, but I still have almost no free space on my separate /boot:

snapper -v -c boot cleanup timeline
Deleting snapshot from boot:
919
Deleting snapshot from boot:
979
Deleting snapshot from boot:
980
Deleting snapshot from boot:
981
Deleting snapshot from boot:
982
Deleting snapshot from boot:
984
Deleting snapshot from boot:
985
Deleting snapshot from boot:
986
Deleting snapshot from boot:
987
Deleting snapshot from boot:
988
Deleting snapshot from boot:
989
Deleting snapshot from boot:
990
# df /boot
Filesystem           1K-blocks   Used Available Use% Mounted on
/dev/mapper/sys-boot    679936 576460     14516  98% /boot

This is what’s left:

# snapper -c boot list
    # | Type   | Pre # | Date                             | User | Cleanup  | Description          | Userdata
------+--------+-------+----------------------------------+------+----------+----------------------+---------
   0  | single |       |                                  | root |          | current              |         
   1  | single |       | Sun 23 May 2021 05:24:17 PM CEST | root |          | very first snaposhot |         
   2  | single |       | Sun 23 May 2021 06:00:25 PM CEST | root | timeline | timeline             |         
 678  | single |       | Tue 02 Nov 2021 08:20:19 AM CET  | root | timeline | timeline             |         
 782  | single |       | Wed 01 Dec 2021 07:45:18 AM CET  | root | timeline | timeline             |         
 910  | single |       | Mon 03 Jan 2022 07:40:31 AM CET  | root | timeline | timeline             |         
 921  | single |       | Wed 05 Jan 2022 08:23:23 AM CET  | root | timeline | timeline             |         
 927  | single |       | Mon 10 Jan 2022 07:58:44 AM CET  | root | timeline | timeline             |         
 936  | single |       | Tue 11 Jan 2022 07:50:16 AM CET  | root | timeline | timeline             |         
 944  | single |       | Wed 12 Jan 2022 09:57:24 AM CET  | root | timeline | timeline             |         
 950  | single |       | Thu 13 Jan 2022 10:18:28 AM CET  | root | timeline | timeline             |         
 957  | single |       | Fri 14 Jan 2022 07:35:25 AM CET  | root | timeline | timeline             |         
 962  | single |       | Mon 17 Jan 2022 08:49:38 AM CET  | root | timeline | timeline             |         
 972  | single |       | Tue 18 Jan 2022 07:49:20 AM CET  | root | timeline | timeline             |         
 983  | single |       | Wed 19 Jan 2022 08:07:24 AM CET  | root | timeline | timeline             |         
 991  | single |       | Wed 19 Jan 2022 04:00:10 PM CET  | root | timeline | timeline             |         
 992  | single |       | Thu 20 Jan 2022 08:45:32 AM CET  | root | timeline | timeline             |         
 993  | single |       | Thu 20 Jan 2022 09:00:10 AM CET  | root | timeline | timeline             |         
 994  | single |       | Thu 20 Jan 2022 10:00:10 AM CET  | root | timeline | timeline             |         
 995  | single |       | Thu 20 Jan 2022 11:00:10 AM CET  | root | timeline | timeline             |         
 996  | single |       | Thu 20 Jan 2022 12:00:10 PM CET  | root | timeline | timeline             |         
 997  | single |       | Thu 20 Jan 2022 01:00:02 PM CET  | root | timeline | timeline             |         
 998  | single |       | Thu 20 Jan 2022 02:00:10 PM CET  | root | timeline | timeline             |         
 999  | single |       | Thu 20 Jan 2022 03:00:10 PM CET  | root | timeline | timeline             |         
1000  | single |       | Thu 20 Jan 2022 04:00:10 PM CET  | root | timeline | timeline             |         

I answered this in my last post in this thread immediately before yours. Have you tried to read this thread before posting? Do you have specific questions about what I said?
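
To spell out how that earlier answer applies to a separate /boot filesystem (a sketch, reusing the quota/qgroup commands from above):

btrfs quota enable /boot
btrfs quota rescan -w /boot
# the exclusive ("excl") value of a snapshot's qgroup is roughly the space that
# deleting that one snapshot would free, since it is not shared with anything else
btrfs qgroup show /boot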