Suddenly I can’t boot. The drive is 100% full; it was at 29% yesterday.
localhost:/home/ion # mount /dev/sdc3 /mnt/in
localhost:/home/ion #
localhost:/home/ion # df /mnt/in/home/ion
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sdc3       66510264 63903280         0 100% /mnt/in
localhost:/home/ion #
Normally I would use du to find where it all went but there are thousands of files. Is there a way to determine the largest consumers of disk space? Thanks in advance.
What I usually do from the command line is run du -h --max-depth=1 | sort -h to look at the current directory and one level down. That gives a good overview of where the space is; then I start drilling into the directory that uses the most, and iterate down until I find where the space has gone.
The -h produces human-readable values, and passing -h to sort as well makes it compare those values by actual size, so the smaller directories end up at the top and the larger ones at the bottom. With a plain sort, human-readable values like “80G” are compared as text, so larger entries can appear higher in the list; for example, “1G” sorts alphabetically before “1K” even though a gigabyte is far bigger.
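Against the mounted home directory from the first post it would look something like this (the directory names and sizes here are made up, just to show the shape of the output; the total for the directory itself ends up at the bottom):

localhost:/home/ion # du -h --max-depth=1 /mnt/in/home/ion | sort -h
4.0K    /mnt/in/home/ion/bin
156M    /mnt/in/home/ion/.config
2.1G    /mnt/in/home/ion/Documents
48G     /mnt/in/home/ion/.local
61G     /mnt/in/home/ion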
I had the same symptom a few years back, though maybe not the same cause, I don’t know. I had Tumbleweed installed, and a few weeks later the root disk was 100% full, which prevented the computer from booting and running. That was the first, and only, time I used btrfs. Since then I use ext4 again and have had no problems with it.
I may have done something wrong, because I don’t understand btrfs, but it was a normal install, as you would do any TW install. Maybe the root partition needs to be larger with btrfs than when it is formatted as ext4; that could well be, but I didn’t, and still don’t, know.
For (1), that’s because, by default, when you choose BTRFS, snapshots are created as time goes on.
Each and every snapshot requires space, so that partition will consume more space over time.
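If snapshots turn out to be what is eating the space, you can list them and delete the ones you no longer need with snapper (the snapshot numbers below are only placeholders, check your own list first):

sudo snapper list
sudo snapper delete 42-57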
That was what I thought as well, but it happened in a matter of weeks. This would then mean I did not make the partition large enough.
I just saw @dart364 writing about a bad COW config. Well, as I wrote, I just installed TW and that’s it. I configured nothing related to btrfs, and that might be where I went wrong. No idea.
The most important question was brought up by hendersj: which filesystem is used. With btrfs, the plain df command is the wrong tool to use.
df on btrfs doesn’t account for snapshots, metadata and copy-on-write, so it can report 100% when that isn’t the real picture. So this might be a red herring, and there are other possible reasons why the system no longer boots.
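To get meaningful numbers on btrfs, use the filesystem’s own tools instead, for example (run against / or wherever the btrfs volume is mounted):

sudo btrfs filesystem usage /
sudo btrfs filesystem df /

The first shows how much space is allocated to data, metadata and system chunks and how much of that is actually used; the second gives a per-chunk-type summary.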
(If you have determined your filesystem and used the correct tools for usage calculation:)
It is not unusual for a process or app to go haywire and spam the journal, filling up all the space in a short time. That means: also check the file sizes under /var/log/
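To check the journal specifically, and trim it if needed (the 200M limit below is just an example value, pick whatever you want to keep):

sudo journalctl --disk-usage
sudo du -sh /var/log
sudo journalctl --vacuum-size=200M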
No problem, that was actually my mistake … I had the same thing happen back in (201?) and reinstalled … I just don’t use snapper anymore and try not to do something dumb.
Similarly, btrfs filesystem du will give more accurate disk usage statistics on btrfs, but it doesn’t accept the same options as the regular du command (for example, there is no --max-depth=1).
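It does have a -s/--summarize switch, though, so you can still get per-directory totals; something along these lines (the paths are just an example):

sudo btrfs filesystem du -s /home/*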
Besides, the first thing I would have done is run “sudo zypper clean” and then, after deleting snapshots, enable compression, especially if you have an SSD!
Before enabling it, run “sudo compsize -x /” to get a “before” figure you can compare against later. To enable it, edit your “/etc/fstab” file and add the “compress=zstd:15” mount option, as in this line:
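Something along these lines; the UUID and the other options are placeholders, keep whatever your fstab already has for / and only add compress=zstd:15 to the option list:

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  defaults,compress=zstd:15  0  0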
I have maximum compression enabled for the files on “/”, which in my case is the system partition.
After adding “compress=zstd:15” and rebooting, compression will be applied automatically to every new file that is written. To compress the files that already exist, run “sudo btrfs filesystem defragment -rvf -czstd /”
and when it finishes, run “sudo compsize -x /” again and you’ll see the space saved by compression.
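The compsize output looks roughly like this (the numbers are purely illustrative); compare the Disk Usage and Uncompressed columns in the TOTAL row to see the saving:

Processed 412300 files, 389211 regular extents (401502 refs), 21044 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       62%          25G           40G          41G
none       100%          15G           15G          15G
zstd        40%          10G           25G          26G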