I have an openSUSE MATE installation.
My filesystem layout is:
sidro@home:~> df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 21M 2.0G 2% /dev/shm
tmpfs 2.0G 2.1M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/sda1 41G 15G 25G 38% /
/dev/sda1 41G 15G 25G 38% /.snapshots
/dev/sda1 41G 15G 25G 38% /var/tmp
/dev/sda1 41G 15G 25G 38% /usr/local
/dev/sda1 41G 15G 25G 38% /tmp
/dev/sda1 41G 15G 25G 38% /var/spool
/dev/sda1 41G 15G 25G 38% /var/opt
/dev/sda1 41G 15G 25G 38% /srv
/dev/sda1 41G 15G 25G 38% /opt
/dev/sda1 41G 15G 25G 38% /boot/grub2/x86_64-efi
/dev/sda1 41G 15G 25G 38% /var/log
/dev/sda1 41G 15G 25G 38% /var/crash
/dev/sda1 41G 15G 25G 38% /var/lib/pgsql
/dev/sda1 41G 15G 25G 38% /var/lib/named
/dev/sda1 41G 15G 25G 38% /boot/grub2/i386-pc
/dev/sda1 41G 15G 25G 38% /var/lib/mysql
/dev/sda1 41G 15G 25G 38% /var/lib/mariadb
/dev/sda1 41G 15G 25G 38% /var/lib/mailman
/dev/sda1 41G 15G 25G 38% /var/lib/libvirt/images
/dev/sda6 418G 261G 136G 66% /home
My root partition is not even 50% full, yet the Btrfs filesystem sends me a "no space left on device" message.
df is not accurate when dealing with btrfs filesystems. Please try:
btrfs filesystem df /
I had not heard that df is inaccurate.
Note that plain df reports numbers of 1K (1024-byte) blocks. You can convert that number to an approximate "Megabyte" or "Gigabyte" figure by dividing by a thousand or a million, i.e. base-10 units.
Note that "btrfs filesystem df" reports mebibytes (denoted MiB), not megabytes (denoted MB).
Mebibytes are calculated with a base-2 multiplier, not base 10, so it returns a different number.
For the OP’s purposes, it probably doesn’t matter exactly how much free space is available when calculated in mebibytes versus megabytes, only that "sufficient" free space exists. So either "df" or "btrfs filesystem df" is likely fine for this purpose.
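To make the unit difference concrete, here is a small Python sketch. The 40 GiB partition size is an illustrative figure, not taken from the OP's output:

```python
# Base-10 vs base-2 size units, as discussed above.

def to_gb(nbytes: int) -> float:
    """Gigabytes: base-10 units, as 'df -H' would report."""
    return nbytes / 10**9

def to_gib(nbytes: int) -> float:
    """Gibibytes: base-2 units, as 'df -h' and 'btrfs filesystem df' report."""
    return nbytes / 2**30

# Plain df counts 1K (1024-byte) blocks; example: a ~40 GiB partition.
blocks_1k = 41_943_040
nbytes = blocks_1k * 1024

print(f"{to_gb(nbytes):.2f} GB")    # -> 42.95 GB  (base 10)
print(f"{to_gib(nbytes):.2f} GiB")  # -> 40.00 GiB (base 2)
```

The same partition reads roughly 7% "larger" in base-10 GB than in base-2 GiB, which is exactly why two tools can disagree while both being right.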
Have a look at this: https://www.suse.com/documentation/sles11/stor_admin/data/trbl_btrfs_volfull.html
Always try to use snapper for this, either from the command line or through YaST.
Remember that snapshots are subvolumes:
X-79-PRO:~ # btrfs subvolume list /.snapshots/
ID 257 gen 2917 top level 5 path .snapshots
ID 258 gen 2956 top level 257 path 1/snapshot
ID 259 gen 157 top level 5 path boot/grub2/i386-pc
ID 260 gen 157 top level 5 path boot/grub2/x86_64-efi
ID 261 gen 2609 top level 5 path opt
ID 262 gen 2609 top level 5 path srv
ID 263 gen 2955 top level 5 path tmp
ID 264 gen 2340 top level 5 path usr/local
ID 265 gen 157 top level 5 path var/crash
ID 266 gen 157 top level 5 path var/lib/libvirt/images
ID 267 gen 157 top level 5 path var/lib/mailman
ID 268 gen 157 top level 5 path var/lib/mariadb
ID 269 gen 157 top level 5 path var/lib/mysql
ID 270 gen 157 top level 5 path var/lib/named
ID 271 gen 157 top level 5 path var/lib/pgsql
ID 272 gen 2956 top level 5 path var/log
ID 273 gen 157 top level 5 path var/opt
ID 274 gen 2956 top level 5 path var/spool
ID 275 gen 2955 top level 5 path var/tmp
ID 282 gen 157 top level 257 path 2/snapshot
ID 283 gen 2611 top level 258 path 1/snapshot/var/lib/machines
ID 305 gen 213 top level 257 path 15/snapshot
ID 306 gen 215 top level 257 path 16/snapshot
ID 309 gen 220 top level 257 path 19/snapshot
ID 310 gen 222 top level 257 path 20/snapshot
ID 311 gen 224 top level 257 path 21/snapshot
ID 312 gen 226 top level 257 path 22/snapshot
ID 313 gen 228 top level 257 path 23/snapshot
ID 314 gen 231 top level 257 path 24/snapshot
ID 315 gen 2332 top level 257 path 25/snapshot
ID 316 gen 2340 top level 257 path 26/snapshot
ID 317 gen 2341 top level 257 path 27/snapshot
ID 318 gen 2343 top level 257 path 28/snapshot
ID 319 gen 2346 top level 257 path 29/snapshot
ID 320 gen 2362 top level 257 path 30/snapshot
ID 321 gen 2363 top level 257 path 31/snapshot
ID 322 gen 2365 top level 257 path 32/snapshot
ID 323 gen 2564 top level 257 path 33/snapshot
ID 324 gen 2567 top level 257 path 34/snapshot
ID 326 gen 2859 top level 257 path 35/snapshot
ID 327 gen 2865 top level 257 path 36/snapshot
ID 328 gen 2867 top level 257 path 37/snapshot
ID 329 gen 2869 top level 257 path 38/snapshot
ID 330 gen 2871 top level 257 path 39/snapshot
ID 331 gen 2872 top level 257 path 40/snapshot
And some of them are important for the system, so it is best to use snapper, which applies its own cleanup algorithm to them.
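For example, listing and removing a range of old snapshots with snapper could look like this (the snapshot numbers are illustrative, picked from the subvolume list above):

```
X-79-PRO:~ # snapper list
X-79-PRO:~ # snapper delete 15-24
X-79-PRO:~ # snapper cleanup number
```

"snapper cleanup number" prunes snapshots according to the number limits in the snapper configuration, which is safer than deleting subvolumes by hand with "btrfs subvolume delete".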
It is also important to configure snapper and the journal so they don't consume too much hard disk space: /etc/snapper/configs/root for snapper, and /etc/systemd/journald.conf for the journal.
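A sketch of the relevant cleanup knobs in /etc/snapper/configs/root (the limit values here are illustrative, not recommendations):

```
# Prune number-cleanup snapshots automatically.
NUMBER_CLEANUP="yes"
NUMBER_LIMIT="10"
NUMBER_LIMIT_IMPORTANT="10"

# Prune timeline snapshots automatically.
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
```

Lower limits mean fewer snapshots kept, and therefore less space consumed by old copies of changed files.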
X-79-PRO:~ # journalctl --disk-usage
Archived and active journals take up 24.0M on disk.
And this is /etc/systemd/journald.conf:
# This file is part of systemd.
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
# See journald.conf(5) for details.
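To cap the journal's disk usage, an illustrative setting in /etc/systemd/journald.conf (the option exists but is commented out by default; the 50M value is just an example):

```
[Journal]
SystemMaxUse=50M
```

With that in place, journald rotates and deletes archived journal files so the persistent journal never grows past the limit.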
P.S. Sorry, my English is a little bad and I'm using a translator. Thanks!
See, for instance, the SLED Admin guide here: https://www.suse.com/documentation/sled-12/book_sle_admin/data/sec_snapper_faqs.html
Curious, I took a closer look at this…
First, the SUSE documentation you referenced wasn’t useful to me; it merely states the claim, assuming no further explanation is needed. You have to take SUSE at its word.
So I Googled, and most hits referenced the Btrfs ArchWiki article.
After reading what is normally a very authoritative reference, I’ve come to the early conclusion that what it describes is only a half-truth. If you total up all three "used" figures, the result is <nearly> identical to what plain df reports, within a few tens of megabytes on a 50GB partition. That’s a minuscule difference. One could argue that any difference, however tiny, proves one of the two is incorrect, but IMO the method of calculation could account for it, and there is no <practical> difference unless byte-level exactness is critical to you.
In other words, IMO the people who originally documented Btrfs probably overlooked what I’ve described and drew the wrong conclusion. They compared only "df -h" to "btrfs filesystem df /" when they should have compared against plain "df." It’s like the time a Mars trajectory was miscalculated by mixing miles and kilometers…
Perhaps the wild card in the calculation is how large the Btrfs metadata might be… Metadata varies completely depending on what it has to describe, and every situation will be different. Could one scenario have 10 or more times the metadata of another? Who knows? The Arch Wiki article suggests that ordinary "df" doesn’t count Btrfs metadata, but at least on my systems that doesn’t seem to be true (it could only be verified with extensive testing; for now, on the systems I’ve looked at, if the metadata weren’t included there would be a much bigger difference between "df" and "btrfs filesystem df").
This is one of those rare occasions when my very minimal testing suggests rather strongly (though I could be wrong) that the documentation is wrong and my analysis is correct.
BTW - the new “experimental” command that works on LEAP (tested) is cool…
btrfs filesystem usage /
Could this be it?
X79-PRO:~ # /usr/sbin/btrfs fi df /
Data, single: total=6.01GiB, used=5.39GiB
System, DUP: total=64.00MiB, used=16.00KiB
Metadata, DUP: total=640.00MiB, used=360.31MiB
GlobalReserve, single: total=128.00MiB, used=0.00B
X79-PRO:~ # btrfs fi show /dev/sda3
Label: none uuid: 1dac6b13-6fa3-4e0a-b71e-5c48141c5d16
Total devices 1 FS bytes used 5.74GiB
devid 1 size 40.00GiB used 7.38GiB path /dev/sda3
Data, single used + Metadata, DUP used = Total devices FS bytes used
5.39GiB + 360.31MiB = 5.75GiB
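A quick Python check of that arithmetic, using the figures pasted above. Note that although metadata is stored DUP (two copies on disk), "FS bytes used" counts each byte of filesystem data once, so only a single copy of the metadata is added:

```python
# Verify: Data used + Metadata used (single copy) ~= FS bytes used.
GIB = 2**30
MIB = 2**20

data_used = 5.39 * GIB        # Data, single: used=5.39GiB
metadata_used = 360.31 * MIB  # Metadata, DUP: used=360.31MiB (one copy)

total_gib = (data_used + metadata_used) / GIB
print(f"{total_gib:.2f} GiB")  # -> 5.74 GiB
```

That lands within rounding of the 5.74GiB that "btrfs fi show" reports as "FS bytes used", which supports the sum above.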