BTRFS disk full: how to fix it, and is that really the solution?

Sorry to ask for help in a first post, but…
I filled up my SSD with an upgrade, I am certain of that: it failed for lack of space.
Since I was about to buy another disk anyway, I did, and installed the same openSUSE Leap 15.5 on it, thinking that I would just copy my files onto the new disk.
Well, no!
df -h reports 100% full, yet shows less used space than available!
And an empty home directory when trying to see it from the new, working setup.
Various non-destructive btrfs checks launched using the recovery boot on the install USB are OK.
The /home is available.
I emptied /tmp and rm'd a large file: still 100% full.
This looks like what is described here:

Before attempting the balance operation described there, do you have other ideas?
And how do you prepare the USB stick? Just mkfs -t btrfs /dev/sdXX?
Any good btrfs docs for semi-beginners?
Or better, a simple fix (no, not rm)!

Show full output of

btrfs filesystem usage /
snapper list
btrfs qgroup show /

Well, first: this is text from a system where I can copy-paste. I mounted the troublesome disk (/dev/sda2) on /ancienhome. The snapper log refers to the new install, not the snapshots in question, if I understand correctly, so I took a picture of the screen of the rescue system.
To top it all, both disks are 1 TB Samsung SSDs; the good one is an M.2 and the troubled one a SATA 870.
So I replaced / by /ancienhome below.

localhost:/ # btrfs filesystem usage /ancienhome
    Device size:                 929.01GiB
    Device allocated:            929.01GiB
    Device unallocated:            1.00MiB
    Device missing:                  0.00B
    Used:                        858.11GiB
    Free (estimated):             70.37GiB      (min: 70.37GiB)
    Free (statfs, df):            70.37GiB
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:925.97GiB, Used:855.59GiB (92.40%)
   /dev/sda2     925.97GiB

Metadata,single: Size:3.01GiB, Used:2.51GiB (83.40%)
   /dev/sda2       3.01GiB

System,single: Size:32.00MiB, Used:128.00KiB (0.39%)
   /dev/sda2      32.00MiB

Unallocated:
   /dev/sda2       1.00MiB

localhost:/ancienhome # snapper list
 # | Type   | Pre # | Date                     | User | Used Space | Cleanup | Description           | Userdata
0  | single |       |                          | root |            |         | current               |
1* | single |       | Mon Feb 19 10:32:36 2024 | root | 661.22 MiB |         | first root filesystem |
2  | single |       | Mon Feb 19 12:23:46 2024 | root | 271.72 MiB | number  | after installation    | important=yes
8  | pre    |       | Mon Feb 19 13:19:39 2024 | root |   2.89 MiB | number  | zypp(ruby.ruby2.5)    | important=yes
9  | post   |     8 | Mon Feb 19 13:30:42 2024 | root | 277.36 MiB | number  |                       | important=yes

localhost:/ancienhome # btrfs qgroup show /
qgroupid         rfer         excl
--------         ----         ----
0/5          16.00KiB     16.00KiB
0/256        16.00KiB     16.00KiB
0/257       707.32MiB    707.32MiB
0/258        16.00KiB     16.00KiB
0/259         1.68MiB      1.68MiB
0/260        16.00KiB     16.00KiB
0/261        10.56MiB     10.56MiB
0/262       346.88MiB    346.88MiB
0/263       205.04GiB    205.04GiB
0/264        16.00KiB     16.00KiB
0/265         2.63MiB      2.63MiB
0/266         1.97MiB      1.97MiB
0/267        14.08GiB    661.22MiB
0/282        12.25GiB    271.72MiB
0/289        12.48GiB      2.89MiB
0/292        14.02GiB    277.36MiB
1/0          14.32GiB    908.15MiB
localhost:/ancienhome # btrfs qgroup show /ancienhome
qgroupid         rfer         excl
--------         ----         ----
0/5          16.00KiB     16.00KiB
0/256        16.00KiB     16.00KiB
0/257         5.69GiB      5.69GiB
0/258        16.00KiB     16.00KiB
0/259        16.00KiB     16.00KiB
0/260        16.00KiB     16.00KiB
0/261        11.92MiB     11.92MiB
0/262       363.47MiB    363.47MiB
0/263       800.65GiB    800.65GiB
0/264         4.02MiB      4.02MiB
0/265        16.00KiB     16.00KiB
0/266        68.00KiB     68.00KiB
0/267        45.07GiB    155.84MiB
0/275        16.00KiB     16.00KiB
0/1622       21.44GiB      1.38GiB
0/1623       21.41GiB    574.92MiB
0/1675       21.58GiB    151.66MiB
0/1676       21.79GiB    162.38MiB
0/1910       24.22GiB    526.48MiB
0/1916       45.07GiB      1.55MiB
0/1917       45.07GiB     32.00KiB
0/1918       45.07GiB     64.00KiB

Picture of the screen booted on the recovery system:

The Data figures match /ancienhome.
The snapper list is different.
Hope it is useful.
This time, on the new working install on the M.2, df -h showed some room, but still nothing in /home.
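For what it's worth, the key figure in the usage output above is "Device unallocated: 1.00MiB": essentially every byte of the device is already allocated to chunks, so new writes can fail with ENOSPC even though df still shows ~70 GiB free inside the data chunks. A small sketch that pulls that figure out of such output (the helper name is mine, not a btrfs tool):

```shell
# Extract the "Device unallocated" value from `btrfs filesystem usage`
# output; a near-zero value here explains ENOSPC despite free space in df.
unallocated() {
  awk -F: '/Device unallocated/ { gsub(/[[:space:]]/, "", $2); print $2; exit }'
}

# Demo on the figures pasted earlier in this thread:
sample='    Device size:                 929.01GiB
    Device allocated:            929.01GiB
    Device unallocated:            1.00MiB'
echo "$sample" | unallocated   # prints 1.00MiB
```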

[Admin note: Edited to make output more readable with code tags]

Run a full balance to move around all blocks for maximum optimization, followed by a quick one to free up completely unused (but allocated) blocks:

# max optimization, will run in background. takes some time
sudo btrfs balance start --full-balance --bg /ancienhome
# watch the balance progress
watch sudo btrfs balance status -v /ancienhome

# free up completely unused blocks, much faster compared to above
sudo btrfs balance start -dusage=0 -musage=0 --bg /ancienhome
watch sudo btrfs balance status -v /ancienhome

Just ran this myself and reclaimed about 30 GiB :wink:
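As a sanity check before balancing: the amount a balance can hand back to the unallocated pool is roughly Size minus Used on the Data line. Using the figures posted above (this is plain shell arithmetic, not a btrfs command):

```shell
# Estimate space a balance could return to the unallocated pool,
# from the "Data,single: Size:..., Used:..." line pasted earlier.
line='Data,single: Size:925.97GiB, Used:855.59GiB (92.40%)'
size=$(echo "$line" | sed -E 's/.*Size:([0-9.]+)GiB.*/\1/')
used=$(echo "$line" | sed -E 's/.*Used:([0-9.]+)GiB.*/\1/')
echo "$size $used" | awk '{ printf "%.2f GiB reclaimable\n", $1 - $2 }'
# prints: 70.38 GiB reclaimable
```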

One question first please, because I don’t know if I’ve understood correctly or not … do you have the user folder /home in the same partition as the system?

I will, but not right away :slight_smile:
Though for some reason the M.2 disk is not recognized, I mounted another disk (the spinning kind) on the system and can now copy everything onto it.
I am copying now…
THEN I will try running the balance.


Default at install

It's strange. openSUSE defaults to one partition for the system (BTRFS) and one for the data (ext4 or XFS). Besides, separate partitions are the best way to avoid unnecessary risk if one partition is lost, and to control the available space much better.

My recommendation would of course be to have separate partitions for system and data (/home). And since you are using an SSD, to optimise space and reduce its wear you should ideally use a BTRFS feature: on-the-fly compression. You won't notice anything except that you will have more space available and your SSD will suffer less (less data is written).

Here you can see that in my /etc/fstab I have the "compress" parameter set to a high compression level for some of my system BTRFS subvolumes.

UUID=0ad79315-b215-4f5c-b526-cf0187e60298  /var                    btrfs  subvol=/@/var,compress=zstd:12                 0  0
UUID=0ad79315-b215-4f5c-b526-cf0187e60298  /usr/local              btrfs  subvol=/@/usr/local,compress=zstd:12           0  0
UUID=0ad79315-b215-4f5c-b526-cf0187e60298  /tmp                    btrfs  subvol=/@/tmp,compress=zstd:12                 0  0
UUID=0ad79315-b215-4f5c-b526-cf0187e60298  /srv                    btrfs  subvol=/@/srv,compress=zstd:12                 0  0
UUID=0ad79315-b215-4f5c-b526-cf0187e60298  /root                   btrfs  subvol=/@/root,compress=zstd:12                0  0
UUID=0ad79315-b215-4f5c-b526-cf0187e60298  /opt                    btrfs  subvol=/@/opt,compress=zstd:12                 0  0

Open your /etc/fstab file with administrator privileges and enable compression on those BTRFS subvolumes that actually hold compressible files, such as /var, /usr/local, /opt and /srv. In your case, you can do the same with the /home BTRFS subvolume. Do NOT compress subvolumes such as /efi or /grub2.
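As a sketch, a compressed /home entry would follow the same pattern as the lines above (the UUID and subvolume path here are placeholders; copy them from the existing /home line in your own fstab):

```
UUID=<uuid-of-your-filesystem>  /home  btrfs  subvol=/@/home,compress=zstd:12  0  0
```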

Once you have saved fstab with the compress parameters added, you can reboot, or remount each subvolume with its new options, for example:

mount -o remount,compress=zstd:12 /var

Since you just enabled compression, only files written from now on will be compressed, but you can recompress the existing files in the subvolumes you assigned compression to with the command:

btrfs filesystem defragment -rvf -czstd /var /usr/local /opt /srv /home

(include all subvolumes where you enabled compression).

And finally, you can see how much disk space you are saving after enabling compression:

sudo zypper in compsize  # Installing compsize command
sudo compsize -x /

@rafaellinuxuser Thanks, I've bookmarked it for future use; the defrag and (re)compression is new to me.
I used to use zstd compression on my previous distro, but seeing as openSUSE does not use it by default, I left it that way.

I’m sure there are quite a few horror stories behind these words of wisdom, but it seems new installs of TW should be okay with enabling it? I get this from my grub2 modules:

pavin@suse-pc:~> ll /boot/grub2/x86_64-efi/ | grep -E 'btrfs|zstd'
-rw-r--r-- 1 root root  47664 Feb 18 03:04 btrfs.mod
-rw-r--r-- 1 root root  80832 Feb 18 03:04 zstd.mod

You have enough space. Show the full output of dmesg immediately after boot (upload it somewhere and post a link here).

Yes, I have "/" compressed in Tumbleweed and it works, but since you are using Leap, in my experience I would rather not recommend something that could fail on you. In fact, since I compressed "/", I can no longer recover my system from hibernation, so I have to use suspend instead. I'm sure compression is involved somehow, although it shouldn't be related, because I have a swap partition; I still haven't figured out how to make compression compatible with hibernate :wink: Anyway, the advantages of a compressed filesystem are enough for me despite this issue. I have not had a space problem on the system partition since I enabled compression. I use a lot of programs from the repositories, but except for cases like LibreOffice they generally don't take up too much space.

What I do keep under close control are applications that download and install a lot of files, generally in user subfolders, such as Steam or Flatpak. While Steam lets you manage where files are stored from its own GUI, Flatpak is the opposite: you have to search the Internet for how to keep each installation from devouring your hard disk, and then go to the CLI to achieve it. That's why I will always recommend AppImage over Flatpak. It's 2024, and Flatpak's user-friendliness for this kind of configuration is still pitiful.


Very good commands.
If that's not enough to make some space, the next thing I would do would be to delete ALL snapshots with:

# tail -n +3 skips the CSV header line and snapshot 0; snapper will
# refuse to delete the snapshot currently in use, which is expected.
for i in $(sudo snapper --config root --csvout list --columns number | tail -n +3); do sudo snapper -c root delete $i; done

I did a very inelegant thing after successfully copying my data: I moved the home directory to another disk in emergency mode.
I was then able to boot normally and use the volume without problems.
Then I launched a btrfs balance, which freed even more space.
Thanks for your help.
I'm a little disappointed with btrfs as a desktop file system, because all of that seemed like a lot of effort to solve a problem that should not have happened in the first place. There is one big positive: no data lost.
Still, I have learned some useful things about btrfs. This thing is completely different from NTFS, way more sophisticated.
What should be done so that it never happens again?
My answers:
-Be aware that df -h does not give an accurate value of free space on btrfs. Learn its specifics instead of trusting it blindly. I assumed that if they changed the default it would be easier to use, and ignored the details…
-A periodic balance is a must-have in crontab. It should even be part of the default setup. What period and values for a desktop?
-More than with other file systems, filling it up to the max is not a good idea.


There should already be monthly (1st of every month) btrfs balance/defrag/scrub/trim systemd timers:

What’s the output of:

systemctl list-timers | grep btrfs

I insist that BTRFS should be used to store SYSTEM files, not user data files.

I think you forgot something in the list, and it is very important; I always did it too, even on Windows before moving to Linux: separate system files and data files into different partitions. It is essential if you use the machine for work or have important information. My data is always on an ext4 or XFS partition, since I make backups and therefore do not need the BTRFS snapshots.

There are 3 monthly jobs. I have been playing with virtual machines for a week; that explains it…

Yes, but that is the default install setup…

Make sure btrfs-balance.timer will run on March 1:

systemctl status btrfs-balance.timer

As far as I understand, yes:
In btrfs-balance.timer there is OnCalendar=monthly

I would like to add:
Is there a real downside to running balance often on an SSD?
The ideal would be a trigger based on the volume of data written.
But wait, I saw a count of blocks written in some SSD monitoring app while looking at the system, so this information is available somewhere…
The idea would be to trigger a balance after a certain amount of data has been written.

Or simpler: frequently compare the sizes reported by df and btrfs filesystem usage, and once the difference exceeds some threshold to be defined, run a balance.
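That "check and trigger" idea can be sketched in a few lines of shell. The function name and the 80% threshold below are arbitrary choices of mine, and since the real condition (allocated vs. used from btrfs filesystem usage) needs root, this sketch only looks at the df percentage:

```shell
# Sketch: signal when a filesystem's df usage crosses a threshold,
# as a cheap cron-able hint that a balance may be worth running.
should_balance() {  # $1 = mount point, $2 = threshold in percent
  pct=$(df --output=pcent "$1" | tail -n 1 | tr -dc '0-9')
  [ "$pct" -ge "$2" ]
}

if should_balance / 80; then
  echo "/ is above 80% in df: consider a btrfs balance"
else
  echo "/ usage looks OK"
fi
```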

That shouldn't be necessary; have a look at /usr/share/btrfsmaintenance/
The timer doesn't perform a full rebalance, it just reclaims completely unused yet allocated blocks.

This is something you might want to monitor and set up an alert for, if your future disk utilization is expected to cross 80%.