Root partition full, can't boot anymore

One problem with openSUSE is that it doesn't give any warning about low disk space.
I wanted to install the latest updates, and in the middle of installing them it stopped with a warning about no more disk space.

So now I have to shrink /home so I can increase the space on the root partition.
I thought this could be done with the installation media, but no. If I shrink /home, I can't move /home to the end of the disk so that I can enlarge /.

What am I doing wrong?

IMHO you should not start with reducing/moving/enlarging. You should start by finding out why your / partition is full.
Otherwise you are not curing your problem, but only acting on symptoms.

The first thing to do is always to show the computer facts you base your story on. Start with

df -h

which will show how large the file systems are and how full.

And about the “how large”: when you have 40 GB for a Btrfs, or 20 GB for a non-Btrfs file system, it is very strange that there isn't enough room for an update.

A second possibility is that the snapshots on your system are not managed correctly.

How do I run this command when the system is not booting anymore? It gets to the Tumbleweed logo and then it hangs.

I thought you booted into a rescue system using the installation media?

I also assume that you can at least answer a few of the questions (suggested above) yourself. Like

  • is the / file system Btrfs or not;
  • what size did you give it at installation.

My laptop has a 110 GB SSD. I created the root partition as 40 GB Btrfs. I also have 2 GB of swap and the rest is /home.
Now / is 100% full and because of this it refuses to boot.

I am trying now with the rescue console from the installation media.
The problem is that I need to mount / from /dev/sda, but I guess it is not recommended to mount it as root.
I need advice!

(UPDATE)
I managed to mount / and /home in separate folders under /mnt so I can access the partitions.
So what can I do to free space on /?

RS

I repeat:

Either it is full by itself, or there are too many Btrfs snapshots being kept.
Do

df -h

to see if it is really 100% (or almost) full (but that would be strange with 40 GB)…

For listing Btrfs snapshots, etc., I am not the right person, because I do not use it. I hope someone else will tune in.
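As a pointer for others following along (a sketch only, since snapshot setups vary): on openSUSE, Btrfs snapshots are normally managed by snapper, and listing them looks roughly like this when snapper is installed:

```shell
# Sketch: list snapper snapshots for the root configuration.
# 'snapper' is openSUSE's snapshot manager; listing usually needs root.
if command -v snapper >/dev/null 2>&1; then
    snapper -c root list
else
    echo "snapper not installed here"
fi
```

The NUMBER column of that listing is what later delete commands refer to.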

I managed to get it up again. I deleted the /tmp folder and freed 1.7 GB.
Now my system looks like this:


ronsim@localhost:~> df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        2.0G     0  2.0G   0% /dev
tmpfs           2.0G   42M  1.9G   3% /dev/shm
tmpfs           2.0G  9.8M  2.0G   1% /run
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda3        41G   38G  1.7G  96% /
/dev/sda3        41G   38G  1.7G  96% /var/lib/machines
/dev/sda3        41G   38G  1.7G  96% /var/cache
/dev/sda3        41G   38G  1.7G  96% /var/lib/mysql
/dev/sda3        41G   38G  1.7G  96% /opt
/dev/sda3        41G   38G  1.7G  96% /var/opt
/dev/sda3        41G   38G  1.7G  96% /usr/local
/dev/sda3        41G   38G  1.7G  96% /var/lib/named
/dev/sda3        41G   38G  1.7G  96% /.snapshots
/dev/sda3        41G   38G  1.7G  96% /var/lib/mailman
/dev/sda3        41G   38G  1.7G  96% /var/lib/pgsql
/dev/sda3        41G   38G  1.7G  96% /boot/grub2/x86_64-efi
/dev/sda3        41G   38G  1.7G  96% /var/tmp
/dev/sda3        41G   38G  1.7G  96% /var/lib/mariadb
/dev/sda3        41G   38G  1.7G  96% /srv
/dev/sda3        41G   38G  1.7G  96% /var/crash
/dev/sda3        41G   38G  1.7G  96% /tmp
/dev/sda3        41G   38G  1.7G  96% /var/log
/dev/sda3        41G   38G  1.7G  96% /boot/grub2/i386-pc
/dev/sda3        41G   38G  1.7G  96% /var/spool
/dev/sda3        41G   38G  1.7G  96% /var/lib/libvirt/images
/dev/sda4        69G   23G   43G  35% /home
tmpfs           394M   12K  394M   1% /run/user/1000

I also adjusted the space used by Snapper, but I still need to clear the files used by Snapper.
I will find a way to clear them.

RS

Congratulations, this is a good start. Now you can work from the system itself.

BTW, I hope you did not “delete” /tmp. I hope you only removed what is inside. You need /tmp. Please check that it is there!

I think 96% (38 GB) is still outrageous. What follows is an iterative process. Go to / and check the size of what is in there:

cd /
du -sh *

When you see something that is suspicious because of its size, go there and repeat. Let us assume that /etc is more than 30 GB (which is very unlikely, but I need an example):

cd etc
du -sh *

Repeat until you say AHA!

When in doubt, ask first. Especially be careful about deleting before you know what you are doing.

There are some directories that will be outrageously huge. These are virtual and represent kernel space; leave them alone!
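The iterative walk described above can be condensed into one line; this is just a convenience sketch, nothing openSUSE-specific. The -x flag keeps du on one file system, so /proc, /sys and separately mounted partitions like /home are skipped:

```shell
# Show the largest top-level directories on the root file system,
# biggest last. -x stops du from crossing file-system boundaries.
du -xsh /* 2>/dev/null | sort -h | tail -n 5
```

One caveat: on Btrfs, -x also stops at subvolume boundaries, so space held by snapshots will not appear here, which is exactly why du and df can disagree.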

Something doesn't add up.
At /:


ronsim@localhost:/> sudo du -sh *
1.2M    bin
93M     boot
4.0K    dev
25M     etc
23G     home
1005M   lib
11M     lib64
0       mnt
209M    opt
0       pbl.F5OwtP
du: cannot open 'proc/19885/task/19885/fd/4': No such file or directory
du: cannot open 'proc/19885/task/19885/fdinfo/4': No such file or directory
du: cannot open 'proc/19885/fd/4': No such file or directory
du: cannot open 'proc/19885/fdinfo/4': No such file or directory
0       proc
16M     root
du: cannot open 'run/user/1000/doc': Permission denied
du: cannot open 'run/user/1000/gvfs': Permission denied
34M     run
11M     sbin
0       selinux
0       srv
0       sys
152K    tmp
9.6G    usr
767M    var

As you can see, only /usr shows up with 9.6 GB.
But where is the rest?

Then I thought maybe the Btrfs was out of balance, so I ran:


ronsim@localhost:/> sudo btrfs fi show
Label: none  uuid: f708ed27-2b8c-480a-8d32-fdfe1519a963
        Total devices 1 FS bytes used 37.73GiB
        devid    1 size 40.00GiB used 39.77GiB path /dev/sda3


Maybe the Btrfs needs to be balanced?
So I did:


ronsim@localhost:/> sudo btrfs balance start / -dusage=5
Done, had to relocate 0 out of 50 chunks

And it didn't help at all!
Now I'm out of ideas!

RS

Tumbleweed now defaults to an 80 GB root partition. ;-]

I used to have this problem, but found out it was mostly caused by too many snapshots and old kernels.
See https://en.opensuse.org/SDB:Disk_space

If you’re using an SSD you’ll want to run trim afterwards. I use the btrfsmaintenance scripts manually.
https://en.opensuse.org/SDB:Disable_btrfsmaintenance
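For the trim step, a minimal sketch (fstrim comes from util-linux; it needs root and an SSD that supports discard, so the fallbacks below just report why it was skipped). Old kernels can be removed on openSUSE with zypper purge-kernels:

```shell
# Sketch: trim unused blocks on the root file system after cleanup.
# Needs root and TRIM support; otherwise it only reports why it skipped.
if command -v fstrim >/dev/null 2>&1; then
    fstrim -v / 2>/dev/null || echo "fstrim skipped (needs root / TRIM support)"
else
    echo "fstrim not available"
fi
# Old kernels (openSUSE keeps several by default) can be cleaned with:
#   zypper purge-kernels
```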

I have Tumbleweed running Btrfs. Provide more information by running the following commands:

erlangen:~ # btrfs subvolume list /mnt 
ID 257 gen 31 top level 5 path @
ID 258 gen 413 top level 257 path @/var
ID 259 gen 379 top level 257 path @/usr/local
ID 260 gen 406 top level 257 path @/tmp
ID 261 gen 68 top level 257 path @/srv
ID 262 gen 406 top level 257 path @/root
ID 263 gen 37 top level 257 path @/opt
ID 264 gen 125 top level 257 path @/boot/grub2/x86_64-efi
ID 265 gen 27 top level 257 path @/boot/grub2/i386-pc
ID 266 gen 401 top level 257 path @/.snapshots
ID 267 gen 420 top level 266 path @/.snapshots/1/snapshot
ID 275 gen 127 top level 266 path @/.snapshots/2/snapshot
ID 289 gen 248 top level 266 path @/.snapshots/16/snapshot
ID 290 gen 256 top level 266 path @/.snapshots/17/snapshot
ID 309 gen 359 top level 266 path @/.snapshots/35/snapshot
ID 311 gen 386 top level 266 path @/.snapshots/36/snapshot
erlangen:~ # 

erlangen:~ # btrfs filesystem df /mnt 
Data, single: total=8.01GiB, used=6.88GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=520.00MiB, used=391.16MiB
GlobalReserve, single: total=23.48MiB, used=0.00B
erlangen:~ # 
erlangen:~ # btrfs device usage /mnt
/dev/sdb5, ID: 1
   Device size:            39.45GiB
   Device slack:            3.50KiB
   Data,single:             8.01GiB
   Metadata,single:       520.00MiB
   System,single:          32.00MiB
   Unallocated:            30.91GiB

erlangen:~ # 

Yes, the usage shown by df is rather normal, thus: Btrfs. And I see others have already come along with Btrfs help :).

I think I solved it now!
I deleted all snapshots except the current one and the first one created after the fresh installation. Those two snapshots can't be deleted.
I managed to get 15 GB back. Now I can run the latest zypper dup without running out of space :slight_smile:
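For anyone hitting this later, snapshot deletion is typically done with snapper, given the NUMBER column from `snapper list`. The range below is an example, not taken from this system; the command is only printed here to avoid deleting anything by accident:

```shell
# Sketch: how a snapper snapshot deletion would look; numbers are EXAMPLES.
# Printed rather than executed, so nothing is removed by running this.
cmd="snapper -c root delete 2-35"
echo "would run: $cmd"   # run it for real only after checking 'snapper list'
```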

Conclusion: a 110 GB SSD is maybe too small :slight_smile:

Hurray, you got it.

I am not sure. 40 GB for a Btrfs / should be enough. Apparently your snapshot management does not function as it should. I hope others will check that with you.