Root partition is full

opensuse v13.1
linux 3.11.10-25-desktop x86_64

The server has been around a while; the root partition is only 20 GB. Today the Squid proxy quit because it could no longer write to the volume.

$ df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/sdd2                              20G   19G  234M  99% /
devtmpfs                              1.8G   40K  1.8G   1% /dev
tmpfs                                 1.9G     0  1.9G   0% /dev/shm
tmpfs                                 1.9G  6.2M  1.9G   1% /run
tmpfs                                 1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs                                 1.9G  6.2M  1.9G   1% /var/run
tmpfs                                 1.9G  6.2M  1.9G   1% /var/lock
/dev/sdb1                             230G   53G  166G  25% /data01
/dev/sdd3                             130G  1.2G  122G   1% /home

The root is showing 234M free because I deleted some extraneous files. Before that the free space was listed at 0.

I have gone through all of the root directories with du -sh in search of the one that has consumed the free space. I could not find one. Adding up all of the used space reported by du gives 16 GB.

What is using the remaining 4 GB? How do I find what has filled the volume?

A quick guess: 5% would be reserved blocks (ext3/4 reserves these so root can still log in even when the file system is full), possibly combined with some application or service holding an open file handle that still reserves space (lsof +aL1 would show which applications have unlinked files open).
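To check both guesses, something like the following should work (a sketch; /dev/sdd2 is taken from the df output above, and both commands need root):

```shell
# Reserved blocks: ext3/4 keeps ~5% back for root by default.
# On a 20 GB partition that is roughly 1 GB.
tune2fs -l /dev/sdd2 | grep -i 'reserved block count'

# Deleted-but-still-open files: link count 0, space not yet freed.
# Restarting the owning process releases the space.
lsof +aL1 /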

Have you rebooted the server since you ran out of space?

Have you checked /tmp? Or do you remove everything from there on every boot?

Otherwise you can start from the top (/) and run

du -sh *

Then try to estimate which one has grown out of proportion. Go there and repeat:

cd culprit
du -sh *

Repeat. Thus, with trial and intelligent guessing, you might find where all your space has gone.
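The iteration above can also be collapsed into a single pass (assumes GNU du and sort; -x keeps du on the root filesystem, so the /data01 and /home mounts are not counted):

```shell
# Largest top-level consumers on / only, sorted smallest to largest
du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 15
```

Whichever directory dominates the tail of that list is the one worth descending into.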

Yes. Since the swap disk was nearly full, I thought restarting might help. It made no difference.

Please explain. What did you “see” that brought you to this conclusion?

how many kernels do you have installed?

if unsure have a look in /lib/modules/,
how many sub-directories are populated?
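For example (a sketch; uname -r shows the running kernel, whose directory must be kept):

```shell
# List the kernel module directories and how much space each uses
du -sh /lib/modules/*/ 2>/dev/null | sort -h

# The running kernel -- its directory must stay
uname -r
```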

Looking in /lib/modules/ there are 18 kernel directories, using 1.3 GB of space. None of the 2.6.x or 3.7.x directories are populated; all of the 3.11.x ones are. I only have two kernels listed in the boot loader.

Is it safe to remove all of the older kernels? (rm -fr <whatever>). Or is there a more proper system command to do this?

It is of course best to de-install things using YaST > Software > Software Management or zypper, not to use explosives. You can always revert to violence with rm when all else fails :wink:

There is a graphical tool called filelight. That might help if you’re graphically inclined.

Which is why I asked about a more proper method. However, YaST only lists the most current kernel. Is there another system tool to remove the kernel modules, or just blindly delete them?

it looks as though old kernels are there,

note the installed kernel versions listed in YaST; the associated sub-directories in /lib/modules/ should not be touched,
it should be safe to delete the contents of the others

also delete the associated kernel parts in /boot/, if any,

where 3.11.0-1.xxxxxx is the version of the unwanted kernel/s (NOT shown in yast)

NB. if a mistake is made a full re-install will be necessary
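As a sketch of what the manual removal would look like, with 3.11.6-4-desktop as a purely hypothetical unwanted version (the /boot file names follow the usual openSUSE layout; substitute your own version string and double-check it against uname -r first):

```shell
# Hypothetical unwanted kernel version, NOT listed in YaST
OLD=3.11.6-4-desktop

# Never remove the running kernel -- verify first
uname -r

rm -r "/lib/modules/$OLD"
rm -f "/boot/vmlinuz-$OLD" "/boot/initrd-$OLD" \
      "/boot/System.map-$OLD" "/boot/config-$OLD"
```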

Or you could just remove kernel from multiversion = in /etc/zypp/zypp.conf and run **zypper ve**, which will then promptly remove all but the latest kernel.
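Concretely, that would look something like this (a sketch; the sed edit comments the multiversion line out rather than deleting it, so it can be restored later, and zypper ve is short for zypper verify):

```shell
# Disable multiversion kernels in /etc/zypp/zypp.conf
# (the line usually reads: multiversion = provides:multiversion(kernel))
sed -i 's/^multiversion =/# multiversion =/' /etc/zypp/zypp.conf

# Verify dependencies; zypper then offers to drop all but the newest kernel
zypper verify
```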

Did you use the Versions tab?

Or follow Miuku as by above.

That is of course all assuming you installed those kernels using YaST/zypper (maybe even Apper). If you moved them in manually, you have to delete them manually.