Latest update ran out of disk space; now the system is locked up

I tried installing the latest update, and when it got to around 3200 of 3500 packages installed it ran out of disk space. When I came back to check on its progress, it said it needed xxx space. The cursor was still active, but the clock wasn’t running and everything was frozen. I tried rebooting, but with either of the 2 snapshots it always freezes at the ‘light bulb’ with a working cursor and nothing else.
So … what to do?

Thanks

Hi
Press the Esc key, or temporarily disable plymouth at boot: press the e key in the boot menu and edit the linux (or linuxefi) line, adding the option below, to see what is happening.


plymouth.enable=0
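The edited line might look something like this; the kernel path, root UUID, and existing options here are illustrative placeholders that will differ on your system, with plymouth.enable=0 simply appended at the end:

linux /boot/vmlinuz root=UUID=&lt;your-root-uuid&gt; ro quiet splash plymouth.enable=0

Then press F10 (or Ctrl-x) to boot with the edited line; the change lasts for that boot only.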

I think it’s simply so far out of disk space that it won’t boot. That would explain why the previous snapshot does the same thing. If I could get to a command line somehow, I could delete one of the snapshots, or the huge rpm database this last update created, to free up some space.

Dave

Boot only to multi-user (add a 3 to the end of the line containing linux / linuxefi, after hitting ‘e’ in the boot menu).
Next, as root, do

snapper list

then


snapper delete NO_FROM-NO_TO
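For example, if the interrupted update showed up in the listing as a pre/post pair numbered 42 and 43 (hypothetical numbers — substitute whatever your own listing shows), the delete would be:

snapper delete 42-43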

Hi
You need to boot from a rescue system, chroot into the installed system, and run the cleanup tools for snapper etc…
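A rough sketch of that procedure from a live/rescue USB, assuming the btrfs root is on /dev/sdb3 (as the df output later in the thread shows); adjust the device and mounts to your own layout:

mount /dev/sdb3 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
mount -a    # may be needed to bring in /.snapshots and the other subvolumes from fstab
snapper list
snapper delete NO_FROM-NO_TO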

I couldn’t find the plymouth.enable or the efi lines, but putting “single” after “ro quiet splash” got me to the login. From there I used snapper to delete the “pre” part of the long update, as it never wrote the “post” part since the install never finished. I also did the rpm rebuild. There were only 24 bytes (!) left, and now there is 700 MB of space, which still isn’t enough for the large update. It also boots normally now.
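For anyone following along, those two recovery steps look roughly like this; the snapshot number here is hypothetical and should be whatever snapper list reports for the orphaned pre entry:

snapper list
snapper delete 42    # the orphaned "pre" snapshot of the failed update
rpm --rebuilddb      # compacts a bloated rpm database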

Any suggestions on stuff that is safe to delete?

Thanks.

Root is a 40 GB partition and it seems way fuller than it should be.

Here is what it looks like with df:

Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 8164792 0 8164792 0% /dev
tmpfs 8178784 59676 8119108 1% /dev/shm
tmpfs 8178784 1764 8177020 1% /run
tmpfs 8178784 0 8178784 0% /sys/fs/cgroup
/dev/sdb3 41946112 40232668 713268 99% /
/dev/sdb3 41946112 40232668 713268 99% /opt
/dev/sdb3 41946112 40232668 713268 99% /var/crash
/dev/sdb3 41946112 40232668 713268 99% /var/lib/libvirt/images
/dev/sdb3 41946112 40232668 713268 99% /var/lib/mailman
/dev/sdb3 41946112 40232668 713268 99% /var/lib/machines
/dev/sdb3 41946112 40232668 713268 99% /.snapshots
/dev/sdb3 41946112 40232668 713268 99% /var/log
/dev/sdb3 41946112 40232668 713268 99% /var/spool
/dev/sdb3 41946112 40232668 713268 99% /var/lib/mariadb
/dev/sdb3 41946112 40232668 713268 99% /tmp
/dev/sdb3 41946112 40232668 713268 99% /var/lib/named
/dev/sdb3 41946112 40232668 713268 99% /var/tmp
/dev/sdb3 41946112 40232668 713268 99% /usr/local
/dev/sdb3 41946112 40232668 713268 99% /var/lib/pgsql
/dev/sdb3 41946112 40232668 713268 99% /srv
/dev/sdb3 41946112 40232668 713268 99% /var/opt
/dev/sdb3 41946112 40232668 713268 99% /var/lib/mysql
/dev/sdb4 73126008 32410472 40715536 45% /home
/dev/sda5 19196752 4508496 14688256 24% /home/dave/MAIL
tmpfs 1635756 20 1635736 1% /run/user/1000

Dolphin thinks it’s way bigger:

128.1 TiB (140,797,137,817,160 bytes)
1132184 files, 81142 sub-folders

Any ideas?

On btrfs the ‘normal’ df is not reliable, and Dolphin’s huge total most likely comes from it walking into /.snapshots and counting each snapshot’s copy of the shared data. Please use

btrfs fi df /

That shows:

btrfs fi df /
Data, single: total=38.22GiB, used=37.53GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=1.75GiB, used=775.61MiB
GlobalReserve, single: total=81.08MiB, used=0.00B

Hi
Have you read: SDB:BTRFS - openSUSE Wiki

This is also worth a peruse: SDB:Disable btrfsmaintenance - openSUSE Wiki
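For reference, the routine cleanup those pages describe boils down to something like this — a sketch, run as root; the balance pass can take a while, and the usage threshold is just a common starting point:

snapper cleanup number              # prune old snapshots per the configured policy
btrfs balance start -dusage=50 /    # compact data chunks that are half-empty or less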