I’m using Btrfs as my root file system on Tumbleweed, and I’m trying to work out whether it has lost, or “wasted”, some space. The partition is 25GB and I’m using ~12GB, but df and Btrfs say that 24GB is used, despite there being only a few snapshots totalling less than 1GB of exclusive space!
I’ve even restored from a snapshot recently so that I shouldn’t have too many “local” changes on the disk, in case that was the problem, but it doesn’t seem to have helped.
I previously had problems because something was wrong with my Snapper/Btrfs config, but I seem to have fixed that by restoring a snapshot and manually tidying up subvolume 5.
$ df -h /
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/main-root   25G   24G  978M  97% /
$ sudo btrfs fi usage /
Overall:
    Device size:                  25.00GiB
    Device allocated:             24.97GiB
    Device unallocated:           33.00MiB
    Device missing:                  0.00B
    Used:                         23.64GiB
    Free (estimated):            978.80MiB  (min: 978.80MiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:               71.73MiB  (used: 0.00B)

Data,single: Size:23.69GiB, Used:22.76GiB
   /dev/mapper/main-root      23.69GiB

Metadata,single: Size:1.25GiB, Used:893.80MiB
   /dev/mapper/main-root       1.25GiB

System,single: Size:32.00MiB, Used:16.00KiB
   /dev/mapper/main-root      32.00MiB

Unallocated:
   /dev/mapper/main-root      33.00MiB
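One explanation I think I can already rule out is plain chunk-allocation slack: the gap between the data chunks’ Size and Used above is small, so a balance presumably couldn’t free much (numbers taken from the Data,single line above):

```shell
# Allocated-but-unused space inside the data chunks ("slack"),
# i.e. Data,single Size (23.69GiB) minus Used (22.76GiB).
slack=$(awk 'BEGIN { printf "%.2f", 23.69 - 22.76 }')
echo "data chunk slack: ${slack} GiB"   # well under 1GiB
```

So the mystery is the 22.76GiB of data that Btrfs considers genuinely used, not the allocation overhead.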
$ sudo btrfs qgroup show --sort=excl /
qgroupid         rfer         excl
--------         ----         ----
0/5          16.00KiB     16.00KiB
0/257        16.00KiB     16.00KiB
0/259        16.00KiB     16.00KiB
0/260        16.00KiB     16.00KiB
0/1696       10.95GiB     16.00KiB
0/1697       10.95GiB     16.00KiB
0/1708       10.98GiB    736.00KiB
0/1709       10.98GiB      1.05MiB
0/1684       11.04GiB      1.54MiB
0/261         3.43MiB      3.43MiB
0/272         4.61MiB      4.61MiB
0/1685       10.97GiB      5.28MiB
0/1711       10.98GiB      6.52MiB
0/1694       10.95GiB     14.38MiB
0/1687       10.95GiB     14.89MiB
0/1710       10.98GiB     28.62MiB
0/1678       10.73GiB     46.80MiB
0/1698       11.03GiB     57.37MiB
0/1675       10.98GiB     68.23MiB
0/258       184.82MiB    184.82MiB
0/1683       11.59GiB    232.03MiB
0/1641       10.65GiB    332.07MiB
1/0          23.28GiB     12.39GiB
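The “less than 1GB” figure I quoted comes from summing the excl column of the 0/* qgroups. In case anyone wants to check my arithmetic, this is roughly how I totalled it (a quick helper assuming the three-column layout above; pipe the qgroup output into it):

```shell
# Sum the "excl" column of `btrfs qgroup show` output, in MiB.
# Usage: sudo btrfs qgroup show --sort=excl / | sum_excl
sum_excl() {
  awk '$1 ~ /^0\// {             # only 0/* subvolume qgroups (skips header and 1/0)
         v = $3 + 0              # numeric part of the excl field
         if ($3 ~ /GiB/) v *= 1024
         else if ($3 ~ /KiB/) v /= 1024
         total += v              # MiB values pass through unchanged
       }
       END { printf "%.2f MiB\n", total }'
}
```

For the table above that comes to roughly 1000MiB, i.e. about 1GiB of exclusive snapshot data.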
$ sudo snapper list
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+------+-------+------------------------------+------+----------+--------------------+--------------
single | 0 | | | root | | current |
single | 1258 | | Mon 18 Jun 2018 19:00:00 BST | root | timeline | timeline |
single | 1291 | | Sun 24 Jun 2018 14:26:40 BST | root | | |
pre | 1294 | | Sun 24 Jun 2018 15:59:55 BST | root | number | zypp(zypper) | important=yes
post | 1299 | 1294 | Sun 24 Jun 2018 19:06:29 BST | root | number | | important=yes
pre | 1300 | | Sun 24 Jun 2018 19:27:14 BST | root | number | zypp(ruby.ruby2.5) | important=no
post | 1301 | 1300 | Sun 24 Jun 2018 19:27:20 BST | root | number | | important=no
single | 1303 | | Mon 25 Jun 2018 19:00:00 BST | root | timeline | timeline |
single | 1310 | | Fri 29 Jun 2018 20:00:30 BST | root | timeline | timeline |
pre | 1312 | | Sat 30 Jun 2018 08:56:24 BST | root | number | zypp(zypper) | important=yes
single | 1313 | | Sat 30 Jun 2018 09:00:00 BST | root | timeline | timeline |
post | 1314 | 1312 | Sat 30 Jun 2018 09:09:08 BST | root | number | | important=yes
single | 1324 | | Sat 30 Jun 2018 19:00:33 BST | root | timeline | timeline |
single | 1325 | | Sat 30 Jun 2018 20:00:33 BST | root | timeline | timeline |
single | 1326 | | Sun 01 Jul 2018 19:00:18 BST | root | timeline | timeline |
single | 1327 | | Mon 02 Jul 2018 20:00:49 BST | root | timeline | timeline |
$ mount | grep subvol
/dev/mapper/main-root on / type btrfs (rw,relatime,ssd,space_cache,subvolid=1675,subvol=/@/.snapshots/1291/snapshot)
/dev/mapper/main-root on /.snapshots type btrfs (rw,relatime,ssd,space_cache,subvolid=272,subvol=/@/.snapshots)
/dev/mapper/main-root on /srv type btrfs (rw,relatime,ssd,space_cache,subvolid=259,subvol=/@/srv)
/dev/mapper/main-root on /boot/grub2/i386-pc type btrfs (rw,relatime,ssd,space_cache,subvolid=260,subvol=/@/boot/grub2/i386-pc)
/dev/mapper/main-root on /opt type btrfs (rw,relatime,ssd,space_cache,subvolid=258,subvol=/@/opt)
/dev/mapper/main-root on /boot/grub2/x86_64-efi type btrfs (rw,relatime,ssd,space_cache,subvolid=261,subvol=/@/boot/grub2/x86_64-efi)
$ sudo du -hxs /
12G /
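To spell out the arithmetic that’s confusing me (numbers taken from the outputs above, with the snapshots’ exclusive total generously rounded up to 1GiB):

```shell
# How much data usage is unaccounted for, in GiB.
gap=$(awk 'BEGIN {
    data_used = 22.76   # Data,single "Used" from btrfs fi usage
    root_du   = 12      # du -hxs /
    snap_excl = 1       # summed excl column, rounded up
    printf "%.2f", data_used - root_du - snap_excl
}')
echo "unaccounted for: ${gap} GiB"
```

That leaves nearly 10GiB of “used” data I can’t attribute to anything.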
So, how can I have a 12GB root, barely anything in the other subvolumes, and a total of about 1GB of exclusive usage from snapshots, yet have 23GB of the 24GB data allocation used, and 12GB of exclusive usage in qgroup 1/0? Am I misunderstanding something, or has Btrfs managed to “lose” some disk space that it thinks is used when it isn’t?
Thanks.