30 gigs and out of space for Tumbleweed desktop

I got the warning messages about 300 megs available on /.

What I started to do was immediately look in /var/log, where I was able to free up about 600 megs of old logs.
What I discovered within /var/log/journal were large logs dating back to the birth of my Tumbleweed installation.
I rm -rf’d those logs and went snooping for more.

I have temporarily solved my problem. Still, 30 gigs for a desktop setup with no server or database is too much.
For what it’s worth, I am using the standard allocation (/boot, /, /home, swap), with / set to 30 gigs and formatted xfs.
I also need to prune my system to purge old kernels that are not pinned. I had almost 8 such kernels.

So, I am in purge mode. Do you have any recommendations on how to manage Tumbleweed without taking the drastic action of a fresh reinstall?
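On the kernel cleanup specifically, one common approach (shown here as an example; check your own /etc/zypp/zypp.conf before changing it) is to limit how many kernels zypper keeps:

```
## /etc/zypp/zypp.conf
## keep only the running kernel, the newest one, and the one before it
multiversion.kernels = latest,latest-1,running
```

After that, `zypper purge-kernels --dry-run` previews which old kernels would be removed, and running it without `--dry-run` deletes them.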

If you are using “btrfs”, then 30G is too small.

I do not use btrfs as I multiboot systems.
I do have /boot on a 1 gig partition (adequately sized)
I do have / on a 30gig partition
I do have /home on a separate partition (adequately sized)
I do have swap on a separate partition (adequately sized)

When I first installed Tumbleweed, 22 gigs was all I needed. I am trying to discover what is consuming the extra 7 gigs of disk space that raised that to 29 gigs.
If I am unable to discover the cause, a fresh installation will do that for me.

Interesting. I have a Tumbleweed system installed in 2014. And the root file system is still only using 17G (of the 40G available). I guess it depends on how you are using it.

I don’t have “/var/log/journal” (I did install “rsyslog”). And “/tmp” is cleared out at boot.

Well, okay, it is not my main machine.

Your observation and problem are not unusual, and they are why I recommend at least a 100GB root partition for most users (despite the installer’s cutoff at 50GB, whether or not the root partition is separate by default from the home partition or volume).

It helps to understand when a snapshot is created although it’s not the whole picture.

A snapshot is created every time you shut down, and again when you boot up. This means that if you shut down and reboot at least once every 24 hours, and maybe more during the day, that alone is a significant contribution to the number of snapshots.

A pair of snapshots is also created every time you install software, and that includes updates and TW upgrades.
If you update regularly, even every day, that’s another two snapshots (before and after) created each time. And this is besides application installations and removals.

It’s easy to create so many snapshots that you overwhelm the default maintenance algorithm, which periodically decides which snapshots to keep and which to purge (and doesn’t run that often).

For a long time it’s been on my “ToDo” list to modify the Snapper maintenance algorithm to periodically read the remaining free space and, once a threshold is reached, remove snapshots more aggressively, but I’ve never found enough time to work on it, however easy it should be to do.

If someone actually did that and contributed it, I’m sure your name would live on as a major contributor whom later users will appreciate.

Until something like that can happen automatically, people will need to do it manually… Depending on your activity and maintenance habits, you may want to remove snapshots every 3-6 weeks or so…
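The threshold idea could even start life as a small cron script. A rough sketch (the 85% threshold is an arbitrary example, and it simply triggers snapper’s existing “number” cleanup rather than anything smarter):

```shell
#!/bin/sh
# Sketch: prune snapshots when the root filesystem gets too full.
# THRESHOLD is an example value; adjust to taste.
THRESHOLD=85

# Current usage of / as a bare percentage, e.g. "42".
usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')

# Only act when over the threshold and snapper is actually installed.
if [ "$usage" -gt "$THRESHOLD" ] && command -v snapper >/dev/null 2>&1; then
    snapper cleanup number
fi
```

Dropped into /etc/cron.hourly (or run from a systemd timer), this approximates the “read free space, purge when a threshold is reached” behaviour, though a real implementation would want to pick which snapshots to drop rather than relying on the number limits alone.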


These are absolutely unrelated. Lots of others here multiboot and use btrfs. The two work fine together.

@Tsu2 Your post re. snapper makes no sense. The OP indicates clearly that they don’t use btrfs.

@OP A quick and dirty way to find out is to use

cd /
du -h --max-depth=1 | grep G | sort -g

Check the output, enter a suspicious directory, and repeat.
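That loop can also be automated. A sketch (`descend` is a made-up helper name, not a standard command): it follows the largest subdirectory down from a starting point, printing the size at each step and finally the deepest directory on the heaviest path.

```shell
#!/bin/sh
# Sketch: automate "run du, enter the biggest directory, repeat".
# `descend` is a hypothetical helper, not a standard tool.
descend() {
    dir=${1:-/}
    while :; do
        # Depth-1 du sorted largest first: line 1 is $dir itself,
        # line 2 is its biggest immediate subdirectory.
        next=$(du -xkd1 "$dir" 2>/dev/null | sort -rn | sed -n '2p' | cut -f2-)
        [ -z "$next" ] && break          # no subdirectories left
        du -xsh "$next" 2>/dev/null      # show this step's size
        dir=$next
    done
    echo "$dir"
}

# Example: descend /    (follow the heaviest path down from the root)
```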

You’re absolutely right.


A more canonical way:

erlangen:~ # du -xhd1 -t1M /
18M     /etc
73M     /boot
8.4G    /usr
298M    /lib
1.2M    /bin
11M     /lib64
11M     /sbin
8.8G    /
erlangen:~ # 

The above does not display subvolume /@/var

erlangen:~ # du -xhd1 -t1M /var/
214M    /var/log
122M    /var/lib
817M    /var/cache
5.8M    /var/tmp
215M    /var/adm
1.4G    /var/
erlangen:~ # 

Are there any recommendations on what to do when this is not done at boot? (Other than watching occupied/free space and deleting files manually when needed.)
Tumbleweed 32-bit and Leap 15.2.

The default setup is to not clean “/tmp”.

In a pinch, you can manually remove files that you know you don’t need. Or you can set up cleaning on boot, and then reboot. Try

man tmpfiles.d

This one I use to clean every 7 days; I have the tmp.conf in /etc/tmpfiles.d.
I am still using ext4.

I used to believe /tmp was cleared at boot, too ;).

This is actually now controlled by a systemd policy, and the default seems to be to ignore it. You have to copy the policy from /usr/lib/tmpfiles.d/tmp.conf to /etc/tmpfiles.d, and then edit it. But you can do things like aging. I came to discover this only recently, when I found my /tmp was multiple gigs in size, with a huge number of files, some going back over a year…

# See tmpfiles.d(5) for details

# Clear tmp directories separately, to make them easier to override
# SUSE policy: we don't clean those directories
q /tmp 1777 root root -
q /var/tmp 1777 root root -

# (I added: kills things older than 10 days in /tmp, 30 days in /var/tmp...)
d /tmp 1777 root root 10d
d /var/tmp 1777 root root 30d

# (and found and added this..) Exclude namespace mountpoints created with PrivateTmp=yes
X /tmp/systemd-private-*
X /var/tmp/systemd-private-*

I see that I was unclear. On my system, “/tmp” is cleared at boot because I was careful to set it up that way.

Yes, the default is to not clear it.

I have defaults for everything. Delete unused snapshots:

erlangen:~ # btrfs filesystem usage -T /
    Device size:                  59.45GiB
    Device allocated:             24.03GiB
    Device unallocated:           35.42GiB
    Device missing:                  0.00B
    Used:                         14.27GiB
    Free (estimated):             43.80GiB      (min: 43.80GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:               43.89MiB      (used: 0.00B)

             Data     Metadata  System              
Id Path      single   single    single   Unallocated
-- --------- -------- --------- -------- -----------
 1 /dev/sdb5 22.00GiB   2.00GiB 32.00MiB    35.42GiB
-- --------- -------- --------- -------- -----------
   Total     22.00GiB   2.00GiB 32.00MiB    35.42GiB
   Used      13.62GiB 661.73MiB 16.00KiB
erlangen:~ # snapper list
    # | Type   | Pre # | Date                     | User | Used Space | Cleanup | Description           | Userdata     
   0  | single |       |                          | root |            |         | current               |              
1004* | single |       | Wed Apr 22 09:59:11 2020 | root |  74.72 MiB |         | writable copy of #756 |              
1079  | pre    |       | Wed Apr 29 20:13:35 2020 | root | 773.36 MiB | number  | zypp(zypper)          | important=yes
1080  | post   |  1079 | Wed Apr 29 20:16:05 2020 | root | 137.42 MiB | number  |                       | important=yes
1083  | pre    |       | Thu Apr 30 00:51:34 2020 | root | 132.41 MiB | number  | zypp(zypper)          | important=yes
1084  | post   |  1083 | Thu Apr 30 00:53:23 2020 | root |  63.52 MiB | number  |                       | important=yes
1087  | pre    |       | Thu Apr 30 18:16:22 2020 | root | 418.77 MiB | number  | zypp(zypper)          | important=no 
1088  | post   |  1087 | Thu Apr 30 18:16:25 2020 | root | 864.00 KiB | number  |                       | important=no 
1089  | pre    |       | Fri May  1 05:41:42 2020 | root |   4.02 MiB | number  | zypp(zypper)          | important=yes
1090  | post   |  1089 | Fri May  1 05:42:54 2020 | root |   1.59 MiB | number  |                       | important=yes
1091  | pre    |       | Fri May  1 05:44:47 2020 | root |   1.84 MiB | number  | zypp(zypper)          | important=yes
1092  | post   |  1091 | Fri May  1 05:44:54 2020 | root |   3.19 MiB | number  |                       | important=yes
1095  | pre    |       | Sat May  2 05:13:35 2020 | root |  11.38 MiB | number  | zypp(zypper)          | important=yes
1096  | post   |  1095 | Sat May  2 05:18:09 2020 | root |  25.58 MiB | number  |                       | important=yes
erlangen:~ # 

Details of usage:

erlangen:~ # du -xhd1 -t100k /
18M     /etc
73M     /boot
8.5G    /usr
299M    /lib
1.2M    /bin
11M     /lib64
11M     /sbin
8.9G    /
erlangen:~ # du -xhd0 -t100k /var/
1.5G    /var/
erlangen:~ #