One small snapshot takes a lot of disk space in btrfs

I have deleted all but one of the snapshots, yet the disk remains rather full.
The last snapshot is about 8.4 GB. The partition usage is 17 GB (instead of about 8.4 GB).
How do I free up the remaining (17 - 8.4 =) roughly 9 GB?

Help is appreciated.

I rolled back three months ago. I am under the impression that the previous data is still present somehow, but I cannot delete it.
# btrfs balance start /
Did not solve this.

The output of some relevant commands:

# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        25G   17G  7.4G  70% /
/dev/sda2        25G   17G  7.4G  70% /.snapshots
# du -h -s /.snapshots/*
8.4G	/.snapshots/487
4.0K	/.snapshots/grub-snapshot.cfg
# btrfs qgroup show -p /
Qgroupid    Referenced    Exclusive Parent     Path 
--------    ----------    --------- ------     ---- 
0/5           16.00KiB     16.00KiB -          <toplevel>
0/256          8.86GiB      7.27GiB -          @
0/257          1.27GiB      1.27GiB -          @/var
0/258          8.64MiB      8.64MiB -          @/usr/local
0/259         16.00KiB     16.00KiB -          @/srv
0/260         35.66MiB     35.64MiB -          @/root
0/261         16.00KiB     16.00KiB -          @/opt
0/262          4.11MiB      4.11MiB -          @/boot/grub2/x86_64-efi
0/263         16.00KiB     16.00KiB -          @/boot/grub2/i386-pc
0/269         16.00KiB     16.00KiB -          @/.snapshots
0/605            0.00B        0.00B -          <stale>
0/606            0.00B        0.00B -          <stale>
0/785            0.00B        0.00B -          <stale>
0/786            0.00B        0.00B -          <stale>
0/787            0.00B        0.00B -          <stale>
0/788            0.00B        0.00B -          <stale>
0/796          8.17GiB      6.54GiB -          @/.snapshots/487/snapshot
# snapper list
   # | Type   | Pre # | Date                     | User | Used Space | Cleanup | Description           | Userdata
-----+--------+-------+--------------------------+------+------------+---------+-----------------------+---------
  0  | single |       |                          | root |            |         | current               |         
487* | single |       | Tue Mar 14 22:06:24 2023 | root |   6.58 GiB |         | writable copy of #482 |

# btrfs filesystem usage -h /
Overall:
    Device size:		  24.50GiB
    Device allocated:		  18.25GiB
    Device unallocated:		   6.25GiB
    Device missing:		     0.00B
    Device slack:		     0.00B
    Used:			  16.79GiB
    Free (estimated):		   7.40GiB	(min: 7.40GiB)
    Free (statfs, df):		   7.40GiB
    Data ratio:			      1.00
    Metadata ratio:		      1.00
    Global reserve:		  46.38MiB	(used: 0.00B)
    Multiple profiles:		        no

Data,single: Size:17.47GiB, Used:16.32GiB (93.43%)
   /dev/sda2	  17.47GiB

Metadata,single: Size:768.00MiB, Used:481.95MiB (62.75%)
   /dev/sda2	 768.00MiB

System,single: Size:32.00MiB, Used:16.00KiB (0.05%)
   /dev/sda2	  32.00MiB

Unallocated:
   /dev/sda2	   6.25GiB
# snapper delete 487
Cannot delete snapshot 487 since it is the currently mounted snapshot.
# btrfs check --force /dev/sda2
Opening filesystem to check...
WARNING: filesystem mounted, continuing because of --force
Checking filesystem on /dev/sda2
UUID: 7c0eef22-6f76-4c30-a517-f2bd24ae08f1
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups
found 18029256704 bytes used, no error found
total csum bytes: 16176876
total tree bytes: 505397248
total fs tree bytes: 458244096
total extent tree bytes: 27344896
btree space waste bytes: 135665009
file data blocks allocated: 30389108736
 referenced 19139227648
# btrfs subvolume get-default /
ID 796 gen 164293 top level 269 path @/.snapshots/487/snapshot

Subvolume @ consumes over 7GiB. You cannot delete this subvolume because it contains other subvolumes. You will need to mount it and carefully delete everything except the nested subvolumes.
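
For example (a sketch only, assuming the device is /dev/sda2 as in your df output and /mnt is free):

# mount -o subvol=@ /dev/sda2 /mnt
# btrfs subvolume list -o /mnt    # these nested subvolumes must NOT be deleted
# ls /mnt                         # anything else here is the stale data
# umount /mnt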

You may want to clean up qgroup information (remove stale qgroups).

btrfs qgroup destroy 0/605 /
...
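
Or, as a sketch, loop over all the stale IDs shown in your qgroup output:

for q in 0/605 0/606 0/785 0/786 0/787 0/788; do btrfs qgroup destroy "$q" /; done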

When did you update your system last time?

Out of curiosity - why did you expect it to help in the first place?

Thanks.

Subvolume @ consumes over 7GiB. You cannot delete this subvolume because it contains other subvolumes. You will need to mount it and carefully delete everything except the nested subvolumes.

You may want to clean up qgroup information (remove stale qgroups).

I thought @ was the same as the currently mounted system. I will try what you suggest and also look at the qgroup information.

When did you update your system last time?

This was done yesterday with:
zypper --pkg-cache-dir /home/cache dist-upgrade -l --no-allow-vendor-change

Should this command delete the stale qgroups?

Out of curiosity - why did you expect it to help in the first place?

To be honest, I did not. This command might reduce the metadata; in my case that is not the problem, because the metadata is already not too big. A lot of the advice I found on the internet (Google) points to using this command, so I tried it anyway, since I could not find a solution. I did not want to get this advice again, which is why I included it. I had also removed all the other snapshots, so it was clear that no other subvolumes were using up the space. Maybe a bit overdone, but I was out of other options. The btrfs check command is also a bit overdone, because there were no errors. I just tried everything.

Considering the (zypper) config is where you want it, why all that fluff?

All you need is

zypper dup

zypper dup
is equal to
zypper dist-upgrade

I just download everything upfront to /home/cache, so if the internet connection is lost, I am not stuck.
I also have packages from Packman. A vendor change can still happen, but with --no-allow-vendor-change it will not go unnoticed. I like to automatically agree with the licenses, hence the -l.

So I do the same already.

This step is unneeded, as zypper downloads all rpms into /var/cache/zypp/packages (for non-root users $XDG_CACHE_HOME/zypp/packages) prior to upgrading/updating. So if you lose the internet connection whilst downloading, all downloaded packages stay in this directory until you do a zypper dup.
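
You can verify this yourself, and if you want zypper to keep the rpms even after a successful dup, there is a per-repository switch (repo-oss is just an example alias):

# ls /var/cache/zypp/packages/
# zypper mr --keep-packages repo-oss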

See man zypper.

And please inspect the contents of /etc/zypp/zypp.conf, as it is already set up with:

solver.dupAllowVendorChange = false
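
For instance, you can check it quickly with:

grep -i vendorchange /etc/zypp/zypp.conf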

So you are doing twice what is already implemented by default. As myswtest already pointed out, you don’t need more than:

zypper dup

or if you want

zypper dup -l

Thanks. I was unaware of the config file.

My /var is on the same partition as my root /. I sometimes think that I have enough disk space for an upgrade, but then the partition fills up with the downloaded .rpm files. Then the space is gone, the upgrade gets stuck, and I have to manually remove stuff, which is annoying. I therefore decided long ago to use a separate partition for caching the .rpm files during a system update. This is /home, which is on a separate partition. I have used the command line to do this; it just recalls what I used previously, so I don’t have to type it again. I have never looked at all the options in the config file. The command I use sets a lot of options that are already the default and are indeed not needed.

And I think the (now?) default options for automatically removing old snapshots are the thing I have to look into. My other computer with a newer installation does a better job at keeping the btrfs partition with the snapshots small. Some ten years ago I was not happy with the automatic removal and adjusted it to something that is just not good.
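
I will probably start by tightening the cleanup limits in the snapper config; something like this (the values are only an example):

# snapper -c root set-config NUMBER_CLEANUP=yes NUMBER_LIMIT=2-10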

I have just removed the /usr directory from subvolid=256 (the @ volume). This helped to get some space back. I first checked that the directory was not the same as the live /usr by checking the dates and by creating a file in the subvolume, which did not appear in /usr.

# mount -o subvolid=256 /dev/sda2 tmp-mount/
# rm -rf tmp-mount/usr/*

This gave me back 8.8 GB!
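
(For reference, a less clumsy check than creating test files is to compare subvolume IDs; btrfs inspect-internal rootid prints the ID of the subvolume containing a path, and the expected IDs below come from my qgroup output:)

# btrfs inspect-internal rootid /usr            # should print 796, the mounted root snapshot
# btrfs inspect-internal rootid tmp-mount/usr   # should print 256, the old @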

I’m confused, apparently. When you do a zypper dup, the packages are downloaded, dependencies verified, then packages are installed.

Then, zypper cleans up after the install - no need to hunt down .rpms and delete them.

Hmmm. All okay after a reboot ?

The zypper dup downloads the packages onto the same partition as the btrfs root. If this was already quite full, the downloads fill it up. Then I had problems with the installation: I got complaints about a full btrfs and the option to abort or ignore (ignore never helps in this situation).

Hmmm. All okay after a reboot ?

No, /usr/local is now also gone. I tested some directories in /usr, but not all. There was /usr/local in the output of btrfs qgroup show -p /, but I missed it. I did make a snapshot with snapper create beforehand, but this does not help me now: booting into this other snapshot gives the same result. I thought snapshots were supposed to help out in these cases, so I thought I was safe deleting /usr. I guess I just have to reinstall. This also removes the 8 GB which I wanted (but not the way I intended).

Do you know another solution besides a reinstallation?
I have backups of all the relevant config files, and /home is on a different partition and OK, but it still is a bit of a hassle.

Any btrfs subvolume can be explicitly mounted. @ is just another subvolume; it does not have any inherent special properties.
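
For example, with the device from the first post:

# mount -o subvol=@/var /dev/sda2 /mnt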

Reducing the space allocated for metadata is a bad idea (unless you are 100% sure you know it was a bug or some other misbehavior). As one user with a lot of btrfs experience put it, “if btrfs needed this amount of space for metadata once, chances are very high it will need it again”. Compacting metadata may allocate all available space to data, leading to out-of-space errors. Any advice you see on the internet to balance metadata is outdated, has not been relevant for years, and was based on the behavior of early btrfs versions.

It would not help in this case. The @ cannot be removed, manually or automatically.

@ is configured as the root filesystem during installation if you either installed using a really old openSUSE version (years ago) or snapshots were disabled during installation. Today, if snapshots are enabled (the default), the installer puts the filesystem root on the first snapshot (@/.snapshots/1/snapshot), which can easily be deleted if needed.

The problem you observed was the exact reason SUSE changed the default subvolume layout to be more snapshot-friendly.

Well, I told you “carefully” and “except the nested subvolumes”. The information in your first post quite clearly shows that @/usr/local is a subvolume.

btrfs snapshots are per subvolume, and the default configuration only covers the root subvolume. No other subvolumes are included. You can configure snapper to create snapshots of any subvolume, including @/usr/local.
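
For example (a sketch; the config name usrlocal is arbitrary):

# snapper -c usrlocal create-config /usr/local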

No package installs anything in /usr/local, so I do not see how “other installation” can help. If you put anything there manually, just do it again (after re-creating this subvolume).
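
Something along these lines should re-create it (a sketch, assuming /dev/sda2 and a free /mnt; the UUID is the one from your btrfs check output):

# mount -o subvol=@ /dev/sda2 /mnt
# btrfs subvolume create /mnt/usr/local
# umount /mnt

and then restore the fstab line, e.g.:

UUID=7c0eef22-6f76-4c30-a517-f2bd24ae08f1  /usr/local  btrfs  subvol=@/usr/local  0 0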

You indeed need help.

Partition size does matter. You may try this:

erlangen:~ # fdisk -l /dev/nvme1n1
Disk /dev/nvme1n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 970 EVO Plus 2TB            
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F5B232D0-7A67-461D-8E7D-B86A5B4C6C10

Device           Start        End    Sectors  Size Type
/dev/nvme1n1p1    2048    1050623    1048576  512M EFI System
/dev/nvme1n1p2 1050624 3907028991 3905978368  1.8T Linux filesystem
erlangen:~ # 

The above partitioning is Tumbleweed’s default on new disks. It ended THREE DECADES OF HASSLE with partitioning on my machines, going back to January 1991.

erlangen:~ # btrfs filesystem usage -T /
Overall:
    Device size:                   1.82TiB
    Device allocated:            567.07GiB
    Device unallocated:            1.26TiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                        546.66GiB
    Free (estimated):              1.28TiB      (min: 665.10GiB)
    Free (statfs, df):             1.28TiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

                  Data      Metadata System                            
Id Path           single    DUP      DUP      Unallocated Total   Slack
-- -------------- --------- -------- -------- ----------- ------- -----
 1 /dev/nvme1n1p2 559.01GiB  8.00GiB 64.00MiB     1.26TiB 1.82TiB     -
-- -------------- --------- -------- -------- ----------- ------- -----
   Total          559.01GiB  4.00GiB 32.00MiB     1.26TiB 1.82TiB 0.00B
   Used           541.63GiB  2.52GiB 80.00KiB                          
erlangen:~ # 
erlangen:~ # btrfs subvolume list -t /
ID      gen     top level       path
--      ---     ---------       ----
256     663890  5               @
257     680051  256             @/var
258     679901  256             @/usr/local
259     676216  256             @/srv
260     679939  256             @/root
261     678998  256             @/opt
262     680064  256             @/home
263     679493  256             @/boot/grub2/x86_64-efi
264     610717  256             @/boot/grub2/i386-pc
265     679558  256             @/.snapshots
2779    680062  265             @/.snapshots/2137/snapshot
2946    666396  265             @/.snapshots/2295/snapshot
2947    666420  265             @/.snapshots/2296/snapshot
..
2990    679557  265             @/.snapshots/2335/snapshot
erlangen:~ #

Thanks, I just removed the line with /usr/local from /etc/fstab and the machine is running fine again (now it does not go into emergency mode because of a missing mount). I don’t use /usr/local, or at least I don’t think I do.

@karlmistelberger
This is indeed a solution. You just use one big Linux partition (and a swap somewhere, I think). I found out that the more barriers (partitions) you create, the bigger the chance that you run into a (size) limit. For me this was especially true when SSDs were expensive and I bought a small one for just the Linux system, keeping my /home on a HDD. Those days are past, but I am still doing the things I learned back then.

There are a lot of suggestions on the internet to keep things on different partitions. Are there no drawbacks to your approach?

Both were the case with me. Snapshots were disabled when I installed openSUSE 4 years ago; I enabled snapshots manually. This is where the files in the @ directory came from.

I must say snapshots on btrfs are hard to grasp for me. That when I mounted @ there were also other subvolumes mounted in there (like /usr/local) is strange to me as a user. I know this functionality makes rollback work, but it makes things hard to work with as a user. You indeed need to be careful because of this. Using a diff does not help, since between a snapshot and the live system there are always matches and differences. I started writing files in directories to see if they appeared in other mounts. A rather clumsy method, but I could not think of another/better one.

In this case you were not working as a user, you were working as a system administrator, and that raises the bar.

Maybe someone needs to file a bug (or feature request) to eliminate this artificial difference in subvolume layout. You are not the first one to configure snapshots post-installation, and the current installer behavior makes it more complicated than necessary.

Machines maintained by me have no swap:

 erlangen:~ # free -h
               total        used        free      shared  buff/cache   available
Mem:            30Gi       6.6Gi        11Gi       195Mi        13Gi        24Gi
Swap:             0B          0B          0B
erlangen:~ # 

I started with a 60 GB SSD in 2014 (Personal Computer Maßanfertigung: Ich bin beeindruckt | Karl Mistelberger) and upgraded to 512 GB in 2016 (25 Jahre eigener PC | Karl Mistelberger). Adapting to rolling hardware keeps the level of annoyances low. Infamous host erlangen now has:

erlangen:~ # inxi -Dy222
Drives:    Local Storage: total: 5.46 TiB used: 1.67 TiB (30.7%)
           ID-1: /dev/nvme0n1 vendor: Samsung model: SSD 970 EVO Plus 2TB size: 1.82 TiB
           ID-2: /dev/nvme1n1 vendor: Samsung model: SSD 970 EVO Plus 2TB size: 1.82 TiB
           ID-3: /dev/sda vendor: Crucial model: CT2000BX500SSD1 size: 1.82 TiB
erlangen:~ # 
erlangen:~ # btrfs filesystem show 
Label: 'System'  uuid: 0e58bbe5-eff7-4884-bb5d-a0aac3d8a344
        Total devices 1 FS bytes used 544.25GiB
        devid    1 size 1.82TiB used 567.07GiB path /dev/nvme1n1p2

Label: 'Data'  uuid: 0e9f8bb1-2e36-4a6e-aab8-50a12a269d37
        Total devices 1 FS bytes used 1.14TiB
        devid    1 size 1.82TiB used 1.14TiB path /dev/sda1

Label: 'Backup'  uuid: 972278a5-61f5-41c1-aa5d-5fc8f458fe26
        Total devices 1 FS bytes used 41.11GiB
        devid    1 size 1.82TiB used 45.07GiB path /dev/nvme0n1p2

erlangen:~ # 

In my experience there are no drawbacks, but prerequisites exist:

  1. Users need to understand the concept of btrfs
  2. Solid hardware required
  3. Chill down before jumping to premature conclusions

Yea, a strategy to use in a sterile environment.

In the real world, that is never the usual case; many other factors are in play. Don’t have a 2 TB drive for anything else? It’s an easy demo. :+1:

To @sep8: You may safely ignore strong opinions unsupported by factual information. Infamous host erlangen is a long-term project which serves as a reference for several clones and many siblings, in the real world of course.

I have already read your advice :wink:

Great! Thanks for the feedback. :smiley: