Huge snapshot, one of only two - can I make this the default to free up disk space?

Hi all.

Forgive me in advance for the lengthy description that follows; I just want to make sure I’m being thorough…

I have a Tumbleweed install that is perhaps 2.5 years old. Before I created it, I had never run Tumbleweed for more than a few months without doing a nuke and pave, so when I set this one up I used the suggested 40 GB root partition size, which has come back to haunt me several times in the form of perilously low disk space on root. At some point maybe five months ago, I ran a standard zypper dup and my machine wouldn’t boot afterward. I wasted considerable time trying to fix the problem before doing the sensible thing and booting from the previous snapshot. I thought I had taken steps to make that snapshot my new default, but now I am not sure.

In the intervening time I have continued to do distribution upgrades, sometimes several per week, and despite religiously deleting snapshots I now find myself with a root partition with only 16 GB free. About a month ago zypper reported ~3600 packages to update, requiring ~5 GB of files. The last time this happened I had about 18 GB free and thought I would be fine, but just barely squeaked by, deleting everything I could as my free space dropped to near zero. Now that I have only 16 GB free, I am afraid to run the dup, as I don’t think I will have sufficient space. I have only two snapshots left, 0 and 473; the latter is now up to 18 GB and has an asterisk next to it in the snapshot list:

localhost:~ # snapper list
   # | Type   | Pre # | Date                     | User | Used Space | Cleanup | Description | Userdata
-----+--------+-------+--------------------------+------+------------+---------+-------------+---------
  0  | single |       |                          | root |            |         | current     |         
473* | single |       | Sun Jun 16 12:11:22 2019 | root |  18.05 GiB |         |             |         
localhost:~ # 


So I have a couple of questions. First, does the fact that snapshot #473 has grown to 18 GB, and has an asterisk next to it, indicate that it is now the snapshot I’m booting from, and that it has grown to 18 GB because that is how much it differs from #0?

Second, is there some way I can make #473 the new #0, or otherwise the default, so that I can reclaim that 18 GB of disk space and run the distribution upgrade, which is now weeks overdue?

Any suggestions you can offer about how to “fix” this situation, so that I can get that disk space back and run zypper without breaking my system, would be much appreciated. Thank you very much for reading this far!

Bill Donnelly

#0 does not exist; it is a virtual entry pointing to your current root, which is #473. You do not have any other snapshots unless some subvolume is orphaned. Show the output of

btrfs file usage /
btrfs subvolume list /
btrfs qgroup show /
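One quick way to confirm which subvolume the system actually mounts as root is `btrfs subvolume get-default` (a sketch, assuming a btrfs root and root privileges; the guard and fallback just make it safe to run anywhere):

```shell
# Ask btrfs which subvolume is mounted by default. On an openSUSE
# system booted from a snapper snapshot this typically reports a
# path like @/.snapshots/473/snapshot.
if command -v btrfs >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    btrfs subvolume get-default / || echo "/ is not a btrfs filesystem here"
else
    echo "skipped: needs root on a btrfs filesystem"
fi
```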

Hello arvidjaar, and thanks so much for your quick reply. As requested:

localhost:~ # btrfs file usage /
Overall:
    Device size:                  40.00GiB
    Device allocated:             24.78GiB
    Device unallocated:           15.22GiB
    Device missing:                  0.00B
    Used:                         23.36GiB
    Free (estimated):             16.29GiB      (min: 16.29GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:               56.91MiB      (used: 0.00B)

Data,single: Size:24.00GiB, Used:22.94GiB
   /dev/sdc2      24.00GiB

Metadata,single: Size:768.00MiB, Used:435.33MiB
   /dev/sdc2     768.00MiB

System,single: Size:32.00MiB, Used:16.00KiB
   /dev/sdc2      32.00MiB

Unallocated:
   /dev/sdc2      15.22GiB


And…

localhost:~ # btrfs subvolume list /
ID 257 gen 495959 top level 5 path @
ID 258 gen 515855 top level 257 path @/.snapshots
ID 260 gen 502859 top level 257 path @/boot/grub2/i386-pc
ID 261 gen 495959 top level 257 path @/boot/grub2/x86_64-efi
ID 262 gen 515938 top level 257 path @/opt
ID 263 gen 495959 top level 257 path @/srv
ID 264 gen 515954 top level 257 path @/tmp
ID 265 gen 511093 top level 257 path @/usr/local
ID 266 gen 515771 top level 257 path @/var/cache
ID 267 gen 495959 top level 257 path @/var/crash
ID 268 gen 513925 top level 257 path @/var/lib/libvirt/images
ID 269 gen 495959 top level 257 path @/var/lib/machines
ID 270 gen 495959 top level 257 path @/var/lib/mailman
ID 271 gen 495959 top level 257 path @/var/lib/mariadb
ID 272 gen 495959 top level 257 path @/var/lib/mysql
ID 273 gen 495959 top level 257 path @/var/lib/named
ID 274 gen 495959 top level 257 path @/var/lib/pgsql
ID 275 gen 515965 top level 257 path @/var/log
ID 276 gen 495959 top level 257 path @/var/opt
ID 277 gen 515966 top level 257 path @/var/spool
ID 278 gen 515966 top level 257 path @/var/tmp
ID 1567 gen 515966 top level 258 path @/.snapshots/473/snapshot
localhost:~ # 

And…

localhost:~ # btrfs qgroup show /
qgroupid         rfer         excl 
--------         ----         ---- 
0/5          16.00KiB     16.00KiB 
0/257        16.00KiB     16.00KiB 
0/258        16.00KiB     16.00KiB 
0/260         2.36MiB      2.36MiB 
0/261        16.00KiB     16.00KiB 
0/262       720.94MiB    720.94MiB 
0/263        16.00KiB     16.00KiB 
0/264         3.69GiB      3.69GiB 
0/265       768.00KiB    768.00KiB 
0/266       233.26MiB    233.26MiB 
0/267        16.00KiB     16.00KiB 
0/268        16.00KiB     16.00KiB 
0/269        16.00KiB     16.00KiB 
0/270        16.00KiB     16.00KiB 
0/271        16.00KiB     16.00KiB 
0/272        16.00KiB     16.00KiB 
0/273        16.00KiB     16.00KiB 
0/274        16.00KiB     16.00KiB 
0/275       470.73MiB    470.73MiB 
0/276        16.00KiB     16.00KiB 
0/277        87.89MiB     87.89MiB 
0/278        86.34MiB     86.34MiB 
0/852           0.00B        0.00B 
0/1567       18.05GiB     18.05GiB 
1/0             0.00B        0.00B 
255/269      16.00KiB     16.00KiB 
localhost:~ # 

Thank you again for your reply, and any advice you have.

Bill

Unless you want to keep them, you could try purging old kernels.

systemctl start purge-kernels

https://en.opensuse.org/SDB:Disk_space
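For context on what purge-kernels keeps: on openSUSE the set of retained kernels is governed by `multiversion.kernels` in `/etc/zypp/zypp.conf`. A sketch of the stock setting (check your own file before changing anything):

```
# /etc/zypp/zypp.conf (excerpt)
# Allow multiple kernel versions to be installed in parallel...
multiversion = provides:multiversion(kernel)
# ...but have purge-kernels keep only the latest, the previous,
# and the currently running kernel.
multiversion.kernels = latest,latest-1,running
```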

If you’ve tried a bunch of desktop environments and software you don’t use anymore, you might want to make it a goal to do a fresh install, which will also let you change the partition size. The new default size is 80 GB, probably to prevent exactly this recurring headache.

You do not have any snapshots. Your root filesystem itself consumes 18 GiB, and you need to find out what is taking up this space (if you think it is too much). There may be GUI tools for this; I usually start with “du -sh /*” and drill down. Of course, you need to exclude /proc, /sys, and /dev, which are virtual, as well as /run and any other RAM-based filesystems.
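A concrete sketch of that drill-down, shown on a throwaway directory so it is safe to run anywhere. On the real system you would point `du` at `/`, where `-x` keeps it on the root filesystem and away from /proc, /sys, and other mounts:

```shell
# Build a tiny demo tree, then list first-level directory sizes,
# smallest first -- the same pattern as "du -xh -d1 / | sort -h".
demo=$(mktemp -d)
mkdir -p "$demo/big" "$demo/small"
head -c 1048576 /dev/zero > "$demo/big/file"    # 1 MiB
head -c 1024    /dev/zero > "$demo/small/file"  # 1 KiB
du -xh -d1 "$demo" | sort -h                    # "big" sorts after "small"
rm -rf "$demo"
```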

Hello ravas. Thank you very much for your reply. I have been waiting for a less busy time to reinstall Tumbleweed on a new SSD, but I think I will just go ahead and do it now. I’ll probably go with a 120 GB root, just to be on the safe side!

Thanks again for your help,

Bill

Hello arvidjaar. Thanks very much for your suggestion. I will probably reinstall on a new drive and go with a 120 GB root. Once the new install is working, I will do as you suggested and see if I can make enough space to update this install. Your post brought to mind a vague recollection that I may have installed MATLAB on root a while ago; maybe that’s what’s eating the space.

Thank you again,

Bill