I got warning messages that only about 300 MB were available on /.
What I started to do was immediately look in /var/log, where I was able to free up about 600 MB of old logs.
What I discovered within /var/log/journal were large logs dating back to the birth of my Tumbleweed installation.
I rm -rf’d those logs and went snooping for more.
I have temporarily solved my problem. Still, 30 GB for a desktop setup with no server or database is too much.
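For the journal specifically, rather than rm -rf'ing /var/log/journal by hand, journald can vacuum its own logs and be capped permanently via a drop-in config. A sketch (the 100M cap and the retention age are example values, not recommendations):

```shell
# One-off cleanup: shrink the persistent journal by size or by age
sudo journalctl --vacuum-size=100M     # keep at most ~100 MB of logs
sudo journalctl --vacuum-time=4weeks   # or: drop entries older than 4 weeks

# Permanent cap via a journald drop-in (example values):
sudo mkdir -p /etc/systemd/journald.conf.d
printf '[Journal]\nSystemMaxUse=100M\n' |
    sudo tee /etc/systemd/journald.conf.d/50-size.conf
sudo systemctl restart systemd-journald
```

With a SystemMaxUse cap in place, journald trims old entries itself, so the journal can never again grow back to "birth of the installation" size.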
For what it's worth, I am using the standard allocation (/boot, /, /home, swap), with / set to 30 GB and formatted xfs.
I also need to prune my system to purge old kernel versions that are not pinned. I had almost 8 such kernels installed.
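On openSUSE, purging stale multiversion kernels is a one-liner; which versions are kept is controlled by the multiversion setting in /etc/zypp/zypp.conf. A sketch:

```shell
# See which kernel packages are currently installed
rpm -qa 'kernel-default*' | sort

# Remove every kernel not covered by the keep policy in /etc/zypp/zypp.conf
sudo zypper purge-kernels

# The keep policy itself lives in /etc/zypp/zypp.conf, e.g.:
#   multiversion.kernels = latest,latest-1,running
```

With `latest,latest-1,running`, each update leaves at most the newest kernel, its predecessor, and whatever is currently booted, which keeps /usr/lib/modules and /boot from accumulating.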
So, I am in purge mode. Do you have any recommendations on how to manage Tumbleweed without taking the drastic action of a fresh reinstall?
I do not use btrfs as I multiboot systems.
I do have /boot on a 1 GB partition (adequately sized)
I do have / on a 30 GB partition
I do have /home on a separate partition (adequately sized)
I do have swap on a separate partition (adequately sized)
When I first installed TWD, 22 GB was what I needed. I am trying to discover what is consuming the extra 7 GB of disk space that raised that level to 29 GB.
If I am unable to discover the cause, a fresh installation will do that for me.
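Before reaching for a reinstall, du restricted to the root filesystem usually pinpoints the consumer quickly:

```shell
# Per-directory disk usage on the root filesystem only, largest last.
# -x keeps du from crossing into /home, /proc, and other mounts;
# 2>/dev/null hides permission-denied noise when run unprivileged
# (run it as root to see everything).
du -xh --max-depth=1 / 2>/dev/null | sort -h
```

Repeating the command one level deeper on the biggest entry (e.g. `du -xh --max-depth=1 /var`) narrows it down in a step or two; common offenders are /var/log, /var/cache (zypper's package cache, clearable with `zypper clean --all`), /var/tmp, and /usr/lib/modules when old kernels pile up.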
Your observation and problem are not unusual, and this is why I recommend at least a 100 GB root partition for most users (despite an installation cutoff at 50 GB, whether or not the root partition is separate from the home partition or volume by default).
It helps to understand when a snapshot is created, although that's not the whole picture.
A snapshot is created every time you shut down, and again when you boot up. This means that if you shut down and reboot at least once every 24 hours, and maybe more during the day, that alone is a significant contribution to the snapshot count.
Snapshots are also created every time you install software, and this includes updates and TW upgrades.
If you update regularly, even every day, that's another two snapshots (before and after) created each time. And this is besides application installations and removals.
It’s easy to create so many snapshots that you overwhelm the default maintenance algorithm, which periodically decides which snapshots to keep and which to purge (and which doesn’t run that often).
For a long time, it’s been on my “ToDo” list to modify the Snapper maintenance algorithm to periodically read the remaining unused space and, when a threshold is reached, remove snapshots more aggressively, but I’ve never found enough time to work on it, however easy it should be to do.
If someone actually did that and contributed it, I’m sure your name would live on as a major contributor that later users will appreciate.
Until something like that can happen automatically, people will need to do it manually… Depending on your activity and maintenance habits, you may want to remove snapshots manually every 3-6 weeks or so…
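For anyone on btrfs, the manual pruning amounts to a couple of snapper commands, and the limits the automatic cleanup uses live in /etc/snapper/configs/root. A sketch (the snapshot numbers and limits below are examples, not recommendations):

```shell
# List snapshots for the root config
sudo snapper -c root list

# Delete a single snapshot, or a whole range, by number
sudo snapper -c root delete 42
sudo snapper -c root delete 30-60

# To make the automatic cleanup keep fewer snapshots, tighten the
# limits in /etc/snapper/configs/root, e.g.:
#   NUMBER_LIMIT="2-6"
#   NUMBER_LIMIT_IMPORTANT="2-4"
# then run the number-based cleanup immediately:
sudo snapper -c root cleanup number
```

Lowering NUMBER_LIMIT is the closest built-in equivalent to the "more aggressive" behavior described above, short of actually patching the algorithm.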
I used to believe /tmp was cleared at boot, too ;).
This is actually now controlled by a systemd policy, and the default seems to be not to clean it. You have to copy the policy from /usr/lib/tmpfiles.d/tmp.conf to /etc/tmpfiles.d and then edit it; that lets you do things like aging. I came to discover this only recently, when I found my /tmp was multiple gigabytes in size, with a huge number of files, some going back over a year…
# See tmpfiles.d(5) for details
# Clear tmp directories separately, to make them easier to override
# SUSE policy: we don't clean those directories
q /tmp 1777 root root -
q /var/tmp 1777 root root -
# (I added: removes entries older than 10 days from /tmp, 30 days from /var/tmp...)
d /tmp 1777 root root 10d
d /var/tmp 1777 root root 30d
# (and found and added this..) Exclude namespace mountpoints created with PrivateTmp=yes
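To put an override like the one above into effect without waiting for the timer, assuming the edited copy sits at /etc/tmpfiles.d/tmp.conf:

```shell
# Files in /etc/tmpfiles.d shadow same-named ones in /usr/lib/tmpfiles.d
sudo cp /usr/lib/tmpfiles.d/tmp.conf /etc/tmpfiles.d/tmp.conf
# ...edit /etc/tmpfiles.d/tmp.conf as shown above, then clean immediately:
sudo systemd-tmpfiles --clean

# The periodic cleanup is driven by a systemd timer:
systemctl status systemd-tmpfiles-clean.timer
```

After that, the 10d/30d ages are enforced on every timer run, not just at boot.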