Why? root filesystem turned read-only during zypper dup

Quick update without technical details.

The real problem for me was having only 1 MiB of unallocated space (which contradicts what df and Conky show, so I NEVER KNEW space was low for BTRFS).

So I read the article “recover from out of space BTRFS problem”, followed the instructions, then ran Balance.

After a few minutes I saw the dreaded “BTRFS filesystem error” on the screen, so I stopped. After rebooting I got the same “BTRFS filesystem” errors (basically corruption), then a console login prompt.

So I booted to the redundant TW install on the 2nd drive, backed up /home on the primary, and ran a few “BTRFS recovery commands”. I then booted back to the primary drive and it came up fine, to the KDE login screen. So at least it recovered.

Then I removed some unneeded software and old kernel images, then ran Balance again. Now I’m up to 8 GiB of unallocated space.

I had to run zypper dup for updates … after that, 6 GiB of unallocated space. So that’s where I am now.

With only 6 GiB of unallocated space, I think my only option now is to back up /home, then reinstall TW with a larger / (root) partition.

I am now reluctant to use BTRFS because of the unallocated-space deception: df shows plenty of space on /, and so does Conky. I will probably use XFS or EXT4 … more established and reliable, and not deceptive toward traditional “drive space” utilities the way BTRFS is.

Without technical details we can’t help. You may have run destructive commands, and there’s no evidence any recovery was needed in the first place. Apparently you misunderstood what Unallocated means. Your previous post clearly mentions:

    Free (estimated):             10.51GiB      (min: 10.51GiB)

This is the number you should watch. It’s about 33% free; Metadata is 45% used and Data is 60% used. Plenty of space to grow, even with only 1 MiB of unallocated space remaining.

If you have the possibility, increase this partition size to at least 40 to 50 GiB. If not, watch out when Free (estimated) goes below 4 GiB. Consider enabling compression on this filesystem; it can free up quite a bit of space over time.
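Compression is a mount option; a purely illustrative fstab entry might look like the following (the UUID and subvolume are placeholders, and zstd level 1 is just one common choice):

```text
# /etc/fstab — hypothetical entry; compress=zstd:1 compresses newly written
# files with zstd at level 1. Existing data is only compressed when it is
# rewritten, e.g. with `btrfs filesystem defragment -r -czstd <path>`.
UUID=xxxx-xxxx-xxxx  /  btrfs  compress=zstd:1,subvol=/@  0  0
```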

OK but –

  • Btrfs does need regular housekeeping.
  • You have to be aware that general Linux user-space tools such as “df” display inaccurate information for partitions which use the Btrfs file-system.
  • You have to be aware that Btrfs snapshots have to be controlled by Snapper and that the associated housekeeping has to be enabled:
 > systemctl list-unit-files | grep -iE 'UNIT FILE|snap'
UNIT FILE                                                                 STATE           VENDOR PRESET
snapper-boot.service                                                      static          -
snapper-cleanup.service                                                   static          -
snapper-timeline.service                                                  static          -
snapperd.service                                                          static          -
snapper-boot.timer                                                        disabled        disabled
snapper-cleanup.timer                                                     enabled         enabled
snapper-timeline.timer                                                    enabled         enabled
474 unit files listed.
 >

The documentation is here: <https://doc.opensuse.org/documentation/leap/reference/html/book-reference/cha-snapper.html>.


To “Btrfs” or not – that is the question …

  • Personally, I can see the need for Btrfs to enable a fast system recovery if a patch or update turns sour.
    The Use Cases AFAICS are:
    – Critical Server systems.
    – Managed commercial Desktop systems.
    – Non-geek personal Desktop systems «possibly more than 80 % of the private desktop systems».
    With the caveat that the snapshot housekeeping is reliable and monitored.

Can you expand on this?

 # LANG=C df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        80G   16G   62G  21% /
 # 
 # LANG=C btrfs filesystem df /
Data, single: total=23.01GiB, used=14.49GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=2.00GiB, used=421.20MiB
GlobalReserve, single: total=48.56MiB, used=0.00B
 #
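For what it’s worth, the two outputs above can be reconciled by hand; a rough sketch of the arithmetic (numbers copied from the outputs, with DUP chunk space counted twice since it occupies two copies on disk):

```shell
# Numbers copied from the two outputs above, in GiB.
size=80
data_total=23.01; data_used=14.49   # Data, single
meta_total=2.00                     # Metadata, DUP
sys_total=0.032                     # System, DUP (32 MiB)

# Unallocated = device size minus all allocated chunk space (DUP twice)
unalloc=$(awk -v s="$size" -v d="$data_total" -v m="$meta_total" -v y="$sys_total" \
    'BEGIN { print s - d - 2*m - 2*y }')

# df's Avail is roughly unallocated space plus free room inside Data chunks
awk -v u="$unalloc" -v dt="$data_total" -v du="$data_used" \
    'BEGIN { printf "%.1f GiB\n", u + dt - du }'   # prints 61.4 GiB, close to df's 62G
```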

All clear now?

I hope I didn’t run anything destructive. Like I wrote, I ran the “balance” and that resulted in a corrupted BTRFS filesystem (I booted to my redundant TW install and fixed it).

The most recent articles I’ve read state: “do not pay attention to df or other traditional tools, because they do not take the BTRFS unallocated space into consideration”. They go on to say that BTRFS unallocated space is important to watch, because it indicates an approaching “out of disk space” scenario (which I experienced yesterday).

Anyway, not sure what to offer for technical feedback, but here’s the current usage on primary TW:

:~ # btrfs filesystem usage -T /
Overall:
    Device size:                  30.00GiB
    Device allocated:             23.03GiB
    Device unallocated:            6.97GiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                         14.33GiB
    Free (estimated):             14.58GiB      (min: 14.58GiB)
    Free (statfs, df):            14.58GiB
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:               44.98MiB      (used: 0.00B)
    Multiple profiles:                  no

                  Data     Metadata  System                             
Id Path           single   single    single   Unallocated Total    Slack
-- -------------- -------- --------- -------- ----------- -------- -----
 1 /dev/nvme1n1p3 21.49GiB   1.51GiB 32.00MiB     6.97GiB 30.00GiB     -
-- -------------- -------- --------- -------- ----------- -------- -----
   Total          21.49GiB   1.51GiB 32.00MiB     6.97GiB 30.00GiB 0.00B
   Used           13.88GiB 461.86MiB 16.00KiB                           
:~ # 

… And here’s df output:

:~> df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M     0  4.0M   0% /dev
tmpfs            32G   90M   32G   1% /dev/shm
tmpfs            13G  2.1M   13G   1% /run
/dev/nvme1n1p3   30G   15G   15G  50% /
[ ... ]

My redundant TW install has a 40 GiB / BTRFS partition and shows 8.5 GiB of unallocated space, so that extra 10 GiB of space on / doesn’t seem to matter much (this primary TW install has a 30 GiB / root).

Thanks @dcurtisfra … yeah, I posted the “maintenance” services list earlier, but here it is again.
defrag.timer and trim.timer are disabled (those are all defaults on my system).

:~> systemctl list-unit-files | grep -iE 'UNIT FILE|btrfs'
UNIT FILE                                                                 STATE           PRESET
btrfsmaintenance-refresh.path                                             enabled         enabled
btrfs-balance.service                                                     static          -
btrfs-defrag.service                                                      static          -
btrfs-scrub.service                                                       static          -
btrfs-trim.service                                                        static          -
btrfsmaintenance-refresh.service                                          static          -
btrfs-balance.timer                                                       enabled         enabled
btrfs-defrag.timer                                                        disabled        enabled
btrfs-scrub.timer                                                         enabled         enabled
btrfs-trim.timer                                                          disabled        enabled
478 unit files listed.

All clear now. These commands don’t compare directly, as btrfs filesystem df presents the information per block-group type.

Comparing Free (statfs, df) vs Available:

$ sudo btrfs fi us -k /
Overall:
    Device size:                   471094272.00KiB
    Device allocated:              265338880.00KiB
    Device unallocated:            205755392.00KiB
    Device missing:                        0.00KiB
    Device slack:                          0.00KiB
    Used:                          211236480.00KiB
    Free (estimated):              258264352.00KiB      (min: 258264352.00KiB)
    Free (statfs, df):             258263328.00KiB
    Data ratio:                               1.00
    Metadata ratio:                           1.00
    Global reserve:                   490304.00KiB      (used: 0.00KiB)
    Multiple profiles:                          no

Data,single: Size:255860736.00KiB, Used:203351776.00KiB (79.48%)
   /dev/nvme0n1p2       255860736.00KiB

Metadata,single: Size:9445376.00KiB, Used:7884672.00KiB (83.48%)
   /dev/nvme0n1p2       9445376.00KiB

System,single: Size:32768.00KiB, Used:48.00KiB (0.15%)
   /dev/nvme0n1p2       32768.00KiB

Unallocated:
   /dev/nvme0n1p2       205755392.00KiB
$ df -k /
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/nvme0n1p2 471094272 211726792 258263336  46% /
$ node -pe '258263328-258263336'
-8

To compare Used space, take the total GlobalReserve into account, even though its free portion is unavailable for regular operations.
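A quick sanity check of that claim with the numbers from this post (all values in KiB):

```shell
# All values copied from the two outputs above, in KiB.
df_used=211726792      # Used, from df -k
btrfs_used=211236480   # Used, from btrfs fi us -k
reserve=490304         # Global reserve, from btrfs fi us -k

# df counts the global reserve as used space; once it is subtracted, only
# the same ~8 KiB rounding seen in the Free comparison remains.
echo $(( df_used - btrfs_used - reserve ))   # prints 8
```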

Does this warrant an inaccurate remark?

I’ve seen cases where balance put the filesystem into a read-only state, but that’s not corruption. The solution in those cases was to boot with a special flag to cancel the balance operation.

For daily operations I’d pay more attention to the Free/Available numbers. For maintenance (e.g. balance), Unallocated matters. Let’s see what the man page has to say:

ENOSPC
The way balance operates, it usually needs to temporarily create a new block group and move the old data there, before the old block group can be removed. For that it
needs the work space, otherwise it fails for ENOSPC reasons. This is not the same ENOSPC as if the free space is exhausted. This refers to the space on the level of block
groups, which are bigger parts of the filesystem that contain many file extents.

When you had only 1 MiB unallocated, it was unwise to run btrfs balance start -dusage=6 / (no workspace was available). On average your blocks were 66% full, and -dusage=6 only acts on blocks that are at most 6% full.
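The filter semantics can be sketched as a toy rule (the helper name and percentages are illustrative; 66% is the average fill taken from this post):

```shell
# -dusage=N relocates only data chunks whose usage is at most N percent;
# a chunk that is 66% full is therefore untouched by -dusage=6.
matches_filter() {
    local chunk_pct=$1 filter=$2
    if [ "$chunk_pct" -le "$filter" ]; then echo relocated; else echo skipped; fi
}

matches_filter 66 6    # prints "skipped"
matches_filter 66 75   # prints "relocated" — a higher filter would compact it
```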

Some time later you ended up with unallocated space, possibly from btrfs releasing blocks during regular operations (removing snapshots, maybe).

On top of my previous recommendations, I’d say that:

  • fstrim or btrfs-trim is safe (I suppose it’s disabled because it applies to solid-state drives and yours isn’t one)
  • snapper timers are safe and recommended
  • scrub is fine but futile since there’s no redundancy on your FS
  • balance is not safe to run on a schedule
  • I’d never run defrag as it can increase data usage
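As a side note, the “watch the numbers” advice is easy to script; a minimal sketch (the helper name and the 4 GiB threshold are my own choices) that parses the output of `btrfs filesystem usage -b`:

```shell
# Print a warning when "Device unallocated" (raw bytes, via the -b flag)
# drops below a byte threshold; reads the usage report on stdin.
check_unallocated() {
    awk -v limit="$1" '/Device unallocated:/ {
        if ($3 + 0 < limit) print "WARNING: low unallocated space"
        else                print "OK"
    }'
}

# Intended usage (run as root on a real system):
#   btrfs filesystem usage -b / | check_unallocated $((4 * 1024 * 1024 * 1024))
```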

Thanks @awerlang for your quick response and detailed information!
I will save all of this in a text file for future reference.

For whatever it’s worth, before I ran the “balance” I added a 20 GiB partition to the BTRFS root filesystem (on a SATA drive installed in that computer, which I use for backups; that 20 GiB was empty). I then ran “filesystem usage” and BTRFS reported that I now had 20 GiB of unallocated space … so then I ran the “balance”, and we know the rest of the story :slight_smile:
… I used this command to add the extra drive space:

: # btrfs device add -f /dev/sda  /

Based on the instructions here (the section “2) Add some space”).
(link provided by @karlmistelberger in this thread - thanks!)

Host 6700k is a living one. It gets its daily `zypper dist-upgrade`:

6700k:~ # journalctl -q -u dup -g Consumed
Apr 03 05:36:40 6700k systemd[1]: dup.service: Consumed 7.190s CPU time.
Apr 04 04:15:36 6700k systemd[1]: dup.service: Consumed 2min 28.130s CPU time.
Apr 05 04:03:55 6700k systemd[1]: dup.service: Consumed 1min 14.024s CPU time.
Apr 19 17:59:44 6700k systemd[1]: dup.service: Consumed 6min 48.961s CPU time.
Apr 20 06:34:34 6700k systemd[1]: dup.service: Consumed 2min 3.257s CPU time.
Apr 21 03:30:37 6700k systemd[1]: dup.service: Consumed 7.179s CPU time.
Apr 22 04:17:53 6700k systemd[1]: dup.service: Consumed 42.003s CPU time.
Apr 23 05:18:17 6700k systemd[1]: dup.service: Consumed 1.511s CPU time.
Apr 23 15:54:10 6700k systemd[1]: dup.service: Consumed 27.790s CPU time.
Apr 24 04:13:10 6700k systemd[1]: dup.service: Consumed 2min 101ms CPU time.
Apr 24 17:57:42 6700k systemd[1]: dup.service: Consumed 15.333s CPU time.
Apr 25 05:12:27 6700k systemd[1]: dup.service: Consumed 1.324s CPU time.
Apr 25 19:40:11 6700k systemd[1]: dup.service: Consumed 2min 48.069s CPU time.
Apr 26 07:22:52 6700k systemd[1]: dup.service: Consumed 9.662s CPU time.
Apr 27 05:03:24 6700k systemd[1]: dup.service: Consumed 1min 37.710s CPU time.
Apr 27 19:08:37 6700k systemd[1]: dup.service: Consumed 1min 4.784s CPU time.
Apr 28 03:00:14 6700k systemd[1]: dup.service: Consumed 1.594s CPU time.
Apr 28 19:43:21 6700k systemd[1]: dup.service: Consumed 26.732s CPU time.
Apr 29 05:46:16 6700k systemd[1]: dup.service: Consumed 1.263s CPU time.
Apr 29 19:11:15 6700k systemd[1]: dup.service: Consumed 2min 1.935s CPU time.
Apr 30 05:04:25 6700k systemd[1]: dup.service: Consumed 6.338s CPU time.
Apr 30 18:22:31 6700k systemd[1]: dup.service: Consumed 1min 23.463s CPU time.
May 01 04:34:56 6700k systemd[1]: dup.service: Consumed 7.272s CPU time.
May 01 14:27:33 6700k systemd[1]: dup.service: Consumed 41.135s CPU time.
May 02 04:14:38 6700k systemd[1]: dup.service: Consumed 11.760s CPU time.
May 03 14:50:21 6700k systemd[1]: dup.service: Consumed 14.318s CPU time.
May 04 03:18:27 6700k systemd[1]: dup.service: Consumed 4.759s CPU time.
May 04 19:16:43 6700k systemd[1]: dup.service: Consumed 2min 20.239s CPU time.
May 05 00:00:04 6700k systemd[1]: dup.service: Consumed 1.478s CPU time.
May 06 06:18:52 6700k systemd[1]: dup.service: Consumed 19.450s CPU time.
May 06 20:19:15 6700k systemd[1]: dup.service: Consumed 1.105s CPU time.
May 07 06:03:35 6700k systemd[1]: dup.service: Consumed 1min 36.704s CPU time.
May 08 03:27:10 6700k systemd[1]: dup.service: Consumed 15.531s CPU time.
May 09 03:45:57 6700k systemd[1]: dup.service: Consumed 11.499s CPU time.
May 10 06:45:10 6700k systemd[1]: dup.service: Consumed 53.704s CPU time.
May 11 05:12:33 6700k systemd[1]: dup.service: Consumed 35.022s CPU time.
May 23 20:55:52 6700k systemd[1]: dup.service: Consumed 7.311s CPU time.
May 23 21:33:16 6700k systemd[1]: dup.service: Consumed 6min 56.470s CPU time.
May 24 10:45:09 6700k systemd[1]: dup.service: Consumed 1.037s CPU time.
May 25 03:20:03 6700k systemd[1]: dup.service: Consumed 4.265s CPU time.
May 26 00:00:41 6700k systemd[1]: dup.service: Consumed 15.400s CPU time.
May 26 20:23:32 6700k systemd[1]: dup.service: Consumed 2min 12.047s CPU time.
May 27 04:32:29 6700k systemd[1]: dup.service: Consumed 15.798s CPU time.
May 28 04:04:20 6700k systemd[1]: dup.service: Consumed 1min 27.543s CPU time.
May 28 20:33:39 6700k systemd[1]: dup.service: Consumed 10.932s CPU time.
May 29 22:53:50 6700k systemd[1]: dup.service: Consumed 1.489s CPU time.
May 30 05:12:05 6700k systemd[1]: dup.service: Consumed 1.792s CPU time.
May 31 04:55:44 6700k systemd[1]: dup.service: Consumed 3min 4.779s CPU time.
Jun 01 05:51:42 6700k systemd[1]: dup.service: Consumed 1min 42.457s CPU time.
Jun 02 06:01:30 6700k systemd[1]: dup.service: Consumed 15.471s CPU time.
6700k:~ # 

Snapper maintains the snapshots:

6700k:~ # journalctl -q -u snapper* -g Consumed --no-pager 
Apr 03 05:37:40 6700k systemd[1]: snapperd.service: Consumed 6.557s CPU time.
Apr 03 05:39:40 6700k systemd[1]: snapperd.service: Consumed 12.531s CPU time.
Apr 04 04:14:35 6700k systemd[1]: snapperd.service: Consumed 1.862s CPU time.
Apr 04 04:16:35 6700k systemd[1]: snapperd.service: Consumed 5.039s CPU time.
Apr 05 04:04:54 6700k systemd[1]: snapperd.service: Consumed 6.477s CPU time.
Apr 05 04:07:16 6700k systemd[1]: snapperd.service: Consumed 4.421s CPU time.
Apr 05 04:16:54 6700k systemd[1]: snapperd.service: Consumed 4.568s CPU time.
Apr 19 17:55:05 6700k systemd[1]: snapperd.service: Consumed 1.631s CPU time.
Apr 19 18:00:45 6700k systemd[1]: snapperd.service: Consumed 13.053s CPU time.
Apr 19 18:02:47 6700k systemd[1]: snapperd.service: Consumed 3.821s CPU time.
Apr 19 18:12:31 6700k systemd[1]: snapperd.service: Consumed 8.501s CPU time.
Apr 20 06:34:43 6700k systemd[1]: snapperd.service: Consumed 3.590s CPU time.
Apr 20 06:55:31 6700k systemd[1]: snapperd.service: Consumed 1.530s CPU time.
Apr 20 18:55:02 6700k systemd[1]: snapperd.service: Consumed 3.381s CPU time.
Apr 20 18:56:33 6700k systemd[1]: snapperd.service: Consumed 3.313s CPU time.
Apr 21 03:31:37 6700k systemd[1]: snapperd.service: Consumed 3.085s CPU time.
Apr 21 03:44:44 6700k systemd[1]: snapperd.service: Consumed 10.609s CPU time.
Apr 22 04:17:18 6700k systemd[1]: snapperd.service: Consumed 1.771s CPU time.
Apr 22 04:18:53 6700k systemd[1]: snapperd.service: Consumed 4.526s CPU time.
Apr 22 04:26:42 6700k systemd[1]: snapperd.service: Consumed 3.410s CPU time.
Apr 23 08:25:43 6700k systemd[1]: snapperd.service: Consumed 4.902s CPU time.
Apr 23 11:38:51 6700k systemd[1]: snapperd.service: Consumed 2.862s CPU time.
Apr 23 15:54:02 6700k systemd[1]: snapperd.service: Consumed 1.973s CPU time.
Apr 23 15:54:23 6700k systemd[1]: snapperd.service: Consumed 3.643s CPU time.
Apr 23 16:09:16 6700k systemd[1]: snapperd.service: Consumed 4.464s CPU time.
Apr 24 04:14:10 6700k systemd[1]: snapperd.service: Consumed 6.410s CPU time.
Apr 24 04:19:42 6700k systemd[1]: snapperd.service: Consumed 8.049s CPU time.
Apr 24 17:58:11 6700k systemd[1]: snapperd.service: Consumed 3.544s CPU time.
Apr 24 18:01:10 6700k systemd[1]: snapperd.service: Consumed 2.905s CPU time.
Apr 24 18:09:54 6700k systemd[1]: snapperd.service: Consumed 4.017s CPU time.
Apr 25 19:41:10 6700k systemd[1]: snapperd.service: Consumed 9.120s CPU time.
Apr 26 07:23:52 6700k systemd[1]: snapperd.service: Consumed 1.839s CPU time.
Apr 27 05:04:24 6700k systemd[1]: snapperd.service: Consumed 8.692s CPU time.
Apr 28 03:01:09 6700k systemd[1]: snapperd.service: Consumed 5.378s CPU time.
Apr 28 03:08:53 6700k systemd[1]: snapperd.service: Consumed 11.065s CPU time.
Apr 29 05:47:12 6700k systemd[1]: snapperd.service: Consumed 3.439s CPU time.
Apr 29 19:12:15 6700k systemd[1]: snapperd.service: Consumed 5.054s CPU time.
Apr 30 05:05:24 6700k systemd[1]: snapperd.service: Consumed 4.751s CPU time.
Apr 30 18:21:32 6700k systemd[1]: snapperd.service: Consumed 2.062s CPU time.
Apr 30 18:23:31 6700k systemd[1]: snapperd.service: Consumed 6.021s CPU time.
May 01 04:35:56 6700k systemd[1]: snapperd.service: Consumed 4.216s CPU time.
May 01 13:31:13 6700k systemd[1]: snapperd.service: Consumed 17.513s CPU time.
May 01 14:28:33 6700k systemd[1]: snapperd.service: Consumed 5.359s CPU time.
May 01 15:27:37 6700k systemd[1]: snapperd.service: Consumed 3.730s CPU time.
May 02 04:15:37 6700k systemd[1]: snapperd.service: Consumed 3.805s CPU time.
May 03 14:51:21 6700k systemd[1]: snapperd.service: Consumed 3.972s CPU time.
May 04 03:19:27 6700k systemd[1]: snapperd.service: Consumed 1.394s CPU time.
May 04 19:17:42 6700k systemd[1]: snapperd.service: Consumed 7.533s CPU time.
May 04 20:04:29 6700k systemd[1]: snapperd.service: Consumed 3.605s CPU time.
May 04 20:14:13 6700k systemd[1]: snapperd.service: Consumed 11.518s CPU time.
May 06 06:20:04 6700k systemd[1]: snapperd.service: Consumed 3.058s CPU time.
May 06 12:35:29 6700k systemd[1]: snapperd.service: Consumed 2.972s CPU time.
May 07 06:04:34 6700k systemd[1]: snapperd.service: Consumed 5.647s CPU time.
May 07 09:53:01 6700k systemd[1]: snapperd.service: Consumed 2.293s CPU time.
May 08 03:28:10 6700k systemd[1]: snapperd.service: Consumed 4.080s CPU time.
May 09 03:46:57 6700k systemd[1]: snapperd.service: Consumed 2.628s CPU time.
May 10 06:46:10 6700k systemd[1]: snapperd.service: Consumed 6.178s CPU time.
May 11 05:13:33 6700k systemd[1]: snapperd.service: Consumed 5.279s CPU time.
May 23 21:06:45 6700k systemd[1]: snapperd.service: Consumed 9.728s CPU time.
May 23 21:34:15 6700k systemd[1]: snapperd.service: Consumed 15.915s CPU time.
May 23 21:42:16 6700k systemd[1]: snapperd.service: Consumed 4.650s CPU time.
May 23 21:44:17 6700k systemd[1]: snapperd.service: Consumed 3.708s CPU time.
May 23 21:54:05 6700k systemd[1]: snapperd.service: Consumed 13.770s CPU time.
May 25 03:21:02 6700k systemd[1]: snapperd.service: Consumed 4.575s CPU time.
May 25 03:58:29 6700k systemd[1]: snapperd.service: Consumed 4.899s CPU time.
May 25 12:21:43 6700k systemd[1]: snapperd.service: Consumed 5.127s CPU time.
May 25 12:28:39 6700k systemd[1]: snapperd.service: Consumed 2.453s CPU time.
May 26 00:01:41 6700k systemd[1]: snapperd.service: Consumed 4.936s CPU time.
May 26 20:24:31 6700k systemd[1]: snapperd.service: Consumed 7.579s CPU time.
May 27 04:33:28 6700k systemd[1]: snapperd.service: Consumed 3.564s CPU time.
May 27 05:21:44 6700k systemd[1]: snapperd.service: Consumed 17.476s CPU time.
May 27 05:23:04 6700k systemd[1]: snapperd.service: Consumed 2.787s CPU time.
May 27 05:24:28 6700k systemd[1]: snapperd.service: Consumed 3.028s CPU time.
May 27 05:31:04 6700k systemd[1]: snapperd.service: Consumed 20.336s CPU time.
May 27 05:42:20 6700k systemd[1]: snapperd.service: Consumed 48.754s CPU time.
May 27 05:44:38 6700k systemd[1]: snapperd.service: Consumed 42.766s CPU time.
May 28 04:05:19 6700k systemd[1]: snapperd.service: Consumed 10.382s CPU time.
May 28 04:39:51 6700k systemd[1]: snapperd.service: Consumed 5.581s CPU time.
May 28 04:55:11 6700k systemd[1]: snapperd.service: Consumed 23.910s CPU time.
May 29 04:23:14 6700k systemd[1]: snapperd.service: Consumed 5.072s CPU time.
May 31 04:56:44 6700k systemd[1]: snapperd.service: Consumed 8.413s CPU time.
May 31 04:57:57 6700k systemd[1]: snapperd.service: Consumed 8.838s CPU time.
Jun 01 05:52:42 6700k systemd[1]: snapperd.service: Consumed 6.454s CPU time.
Jun 01 11:38:50 6700k systemd[1]: snapperd.service: Consumed 4.931s CPU time.
Jun 01 11:48:34 6700k systemd[1]: snapperd.service: Consumed 8.713s CPU time.
Jun 02 06:02:30 6700k systemd[1]: snapperd.service: Consumed 5.067s CPU time.
6700k:~ # 

Given the above maintenance routine, a very relevant amount of disk space gets allocated and released during the lifetime of the system.

The btrfs maintenance toolbox works fully automatically on reasonably sized filesystems such as 6700k’s partition:

6700k:~ # btrfs filesystem usage -T /
Overall:
    Device size:                  59.57GiB
    Device allocated:             34.05GiB
    Device unallocated:           25.52GiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                         29.50GiB
    Free (estimated):             29.31GiB      (min: 29.31GiB)
    Free (statfs, df):            29.31GiB
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:               82.14MiB      (used: 0.00B)
    Multiple profiles:                  no

             Data     Metadata System                             
Id Path      single   single   single   Unallocated Total    Slack
-- --------- -------- -------- -------- ----------- -------- -----
 1 /dev/sda8 32.01GiB  2.01GiB 32.00MiB    25.52GiB 59.57GiB     -
-- --------- -------- -------- -------- ----------- -------- -----
   Total     32.01GiB  2.01GiB 32.00MiB    25.52GiB 59.57GiB 0.00B
   Used      28.22GiB  1.28GiB 16.00KiB                           
6700k:~ # 

When I joined this discussion btrfs maintenance was fully automatic and there was no need for manual action.

Data Total was 34.01 GiB and Used was 25.72 GiB; thus allocated but unused space was 8.29 GiB in fully automatic mode, using the toolbox’s default settings. With some 11.76 GiB of unallocated space, no manual intervention is needed on host 6700k.

On the system of @aggie the situation is different. The automatic mode fails due to clueless partitioning during install, and the manual intervention by @aggie was clueless too: running btrfs balance start -dusage=6 / won’t change anything, as higher values of dusage are needed.

As a courtesy to the readers of this thread I demonstrated the release of allocated but unused space by running btrfs balance start -dusage=99 /.

Current values on host 6700k are:

6700k:~ # btrfs balance start -dusage=99 /
Done, had to relocate 12 out of 37 chunks
6700k:~ # btrfs filesystem usage -T /
Overall:
    Device size:                  59.57GiB
    Device allocated:             32.05GiB
    Device unallocated:           27.52GiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                         29.50GiB
    Free (estimated):             29.31GiB      (min: 29.31GiB)
    Free (statfs, df):            29.31GiB
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:               82.23MiB      (used: 0.00B)
    Multiple profiles:                  no

             Data     Metadata System                             
Id Path      single   single   single   Unallocated Total    Slack
-- --------- -------- -------- -------- ----------- -------- -----
 1 /dev/sda8 30.01GiB  2.01GiB 32.00MiB    27.52GiB 59.57GiB     -
-- --------- -------- -------- -------- ----------- -------- -----
   Total     30.01GiB  2.01GiB 32.00MiB    27.52GiB 59.57GiB 0.00B
   Used      28.22GiB  1.28GiB 16.00KiB                           
6700k:~ # 

Due to the manual intervention, allocated but unused space is down to 1.79 GiB, from 8.29 GiB in automatic mode. The manual intervention freed some 6.50 GiB.

Infamous host erlangen has:

erlangen:~ # btrfs filesystem usage -T /
Overall:
    Device size:                   1.77TiB
    Device allocated:            543.07GiB
    Device unallocated:            1.24TiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                        524.92GiB
    Free (estimated):              1.25TiB      (min: 650.37GiB)
    Free (statfs, df):             1.25TiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

                  Data      Metadata System                            
Id Path           single    DUP      DUP      Unallocated Total   Slack
-- -------------- --------- -------- -------- ----------- ------- -----
 1 /dev/nvme0n1p2 535.01GiB  8.00GiB 64.00MiB     1.24TiB 1.77TiB     -
-- -------------- --------- -------- -------- ----------- ------- -----
   Total          535.01GiB  4.00GiB 32.00MiB     1.24TiB 1.77TiB 0.00B
   Used           519.94GiB  2.49GiB 80.00KiB                          
erlangen:~ # 

Note: /home is a subvolume of the above

erlangen:~ # findmnt /home
TARGET SOURCE                  FSTYPE OPTIONS
/home  /dev/nvme0n1p2[/@/home] btrfs  rw,relatime,ssd,discard=async,space_cache=v2,subvolid=262,subvol=/@/home
erlangen:~ # 

Conclusions

  1. btrfs maintenance is fully automatic on properly configured systems

  2. Sloppy installation results in frequent trouble with disk space

  3. A single partition occupied by btrfs works best

This happened to me when I changed the btrfs options in my fstab. I then rolled back to an untouched fstab snapshot to get rid of this annoying issue.

Can you check/compare snapshots of some of your system configs, especially fstab?

Thanks !

I can say that I’ve edited fstab only once, and that was to add a partition of a SATA drive that was in another computer. I moved that SATA drive to this computer and repurposed it to be used only as a backup device (to back up /home), never for an active partition (like / or /home, etc.). The two main drives are NVMe.

Other than that one new SATA entry, no other fstab entries were touched.

Is your read-only issue still present? If yes, did you check your snapshots for changes?


No issue now. (fingers crossed)
