BTRFS: qgroup marked inconsistent, qgroup info item update error -2

For the past month or so, I’ve been getting the following pair of log entries about once an hour on a couple of my Raspberry Pi 4Bs running Tumbleweed (currently snapshot 20251022).

Nov 07 23:12:58 rpi4b.walkerstreet.info kernel: BTRFS warning (device sda2): qgroup marked inconsistent, qgroup info item update error -2
Nov 07 23:12:58 rpi4b.walkerstreet.info kernel: BTRFS info (device sda2): qgroup scan completed (inconsistency flag cleared)

Is this something I should be worried about? After doing some web searching (https://www.spinics.net/lists/linux-btrfs/msg147019.html), it seems that there are conditions that can cause frequent inconsistencies that are fixed with a rescan (as indicated by the second log entry of the pair). As far as I can tell, I’m not seeing any ill effects.

-2 == ENOENT (no such file or directory). Show:

btrfs qgroup show -c /
btrfs subvolume list /

Use whatever mount point is appropriate.

Thanks. Here you go…

> sudo btrfs qgroup show -c /
Qgroupid    Referenced    Exclusive Child                                 Path 
--------    ----------    --------- -----                                 ---- 
0/5           16.00KiB     16.00KiB -                                     <toplevel>
0/256         16.00KiB     16.00KiB -                                     @
0/257          7.61GiB      7.61GiB -                                     @/var
0/258         16.00KiB     16.00KiB -                                     @/usr/local
0/259         16.00KiB     16.00KiB -                                     @/srv
0/260         88.34MiB     88.34MiB -                                     @/root
0/261          4.82GiB      4.71GiB -                                     @/opt
0/262          5.96MiB      5.96MiB -                                     @/boot/grub2/arm64-efi
0/263        224.00KiB    224.00KiB -                                     @/.snapshots
0/840          9.48GiB     11.58MiB -                                     @/.snapshots/555/snapshot
0/885          9.41GiB    217.51MiB -                                     @/.snapshots/600/snapshot
0/886          9.39GiB    496.00KiB -                                     @/.snapshots/601/snapshot
0/887          9.39GiB      1.53MiB -                                     @/.snapshots/602/snapshot
0/888          9.68GiB      2.59MiB -                                     @/.snapshots/603/snapshot
0/889          9.68GiB    608.00KiB -                                     @/.snapshots/604/snapshot
0/890          9.48GiB    768.00KiB -                                     @/.snapshots/605/snapshot
1/0           13.40GiB      3.93GiB 0/885,0/886,0/887,0/888,0/889,0/890   <0 member qgroups>
> sudo btrfs subvolume list /
ID 256 gen 2504609 top level 5 path @
ID 257 gen 2562219 top level 256 path @/var
ID 258 gen 2562040 top level 256 path @/usr/local
ID 259 gen 2562044 top level 256 path @/srv
ID 260 gen 2562044 top level 256 path @/root
ID 261 gen 2562219 top level 256 path @/opt
ID 262 gen 2536414 top level 256 path @/boot/grub2/arm64-efi
ID 263 gen 2561278 top level 256 path @/.snapshots
ID 840 gen 2562216 top level 263 path @/.snapshots/555/snapshot
ID 885 gen 2535322 top level 263 path @/.snapshots/600/snapshot
ID 886 gen 2535328 top level 263 path @/.snapshots/601/snapshot
ID 887 gen 2535338 top level 263 path @/.snapshots/602/snapshot
ID 888 gen 2536290 top level 263 path @/.snapshots/603/snapshot
ID 889 gen 2536305 top level 263 path @/.snapshots/604/snapshot
ID 890 gen 2536404 top level 263 path @/.snapshots/605/snapshot

That looks consistent (at the time this information was captured). Can you check what jobs are running when you get the `qgroup info item update error -2` warning? Some btrfs maintenance job, snapshot creation, snapper cleanup, or similar?

It seems to happen during a “Daily Cleanup of Snapper Snapshots” (which runs hourly). Here are logs for one of those hourly cleanups:

Nov 07 01:12:39 rpi4b.walkerstreet.info systemd[1]: Started Daily Cleanup of Snapper Snapshots.
Nov 07 01:12:39 rpi4b.walkerstreet.info systemd[1]: Starting DBus interface for snapper...
Nov 07 01:12:39 rpi4b.walkerstreet.info systemd-timesyncd[627]: Contacted time server 50.117.3.95:123 (2.opensuse.pool.ntp.org).
Nov 07 01:12:39 rpi4b.walkerstreet.info systemd[1]: Started DBus interface for snapper.
Nov 07 01:12:39 rpi4b.walkerstreet.info systemd-helper[320190]: Running cleanup for 'root'.
Nov 07 01:12:39 rpi4b.walkerstreet.info systemd-helper[320190]: Running number cleanup for 'root'.
Nov 07 01:12:51 rpi4b.walkerstreet.info kernel: BTRFS warning (device sda2): qgroup marked inconsistent, qgroup info item update error -2
Nov 07 01:12:51 rpi4b.walkerstreet.info kernel: BTRFS info (device sda2): qgroup scan completed (inconsistency flag cleared)
Nov 07 01:12:51 rpi4b.walkerstreet.info systemd-helper[320190]: Running timeline cleanup for 'root'.
Nov 07 01:12:51 rpi4b.walkerstreet.info systemd-helper[320190]: Running empty-pre-post cleanup for 'root'.
Nov 07 01:12:51 rpi4b.walkerstreet.info systemd-helper[320190]: Running 'btrfs qgroup clear-stale /.snapshots'.
Nov 07 01:12:51 rpi4b.walkerstreet.info systemd[1]: snapper-cleanup.service: Deactivated successfully.
Nov 07 01:13:52 rpi4b.walkerstreet.info systemd[1]: snapperd.service: Deactivated successfully.

I would suggest asking on the btrfs mailing list (linux-btrfs). It sounds like something developers may be interested in.

@dhwalker

Has anything changed on your system, or have you found a solution, since your last post about this issue?

It seems I’m facing the same issue on one of my systems today. It just started two hours ago, and the warning appears during the hourly cleanups, just like it does on your system.

I’d add that the system on which the issue started contains a standard hard disk, not an SSD or NVMe drive. I don’t know if this matters, though.

Thank you.

I’m glad I’m not the only one… Nothing new, but I did send a note to linux-btrfs@vger.kernel.org; haven’t gotten a reply.

One thing I do that probably isn’t very common: when one of Tumbleweed’s periodic BTRFS maintenance jobs (occasionally/rarely) hits an ENOSPC error, I run the following script:

sudo journalctl --vacuum-time=1years
sudo btrfs balance start -dusage=0 /
sudo btrfs balance start -dusage=5 /
sudo btrfs balance start -dusage=10 /
sudo btrfs balance start -dusage=20 /
sudo btrfs balance start -dusage=30 /
sudo btrfs balance start -dusage=50 /
sudo btrfs balance start -dusage=80 /
sudo btrfs balance start -dusage=100 /
sudo btrfs balance start -musage=0 /
sudo btrfs balance start -musage=5 /
sudo btrfs balance start -musage=10 /
sudo btrfs balance start -musage=20 /
sudo btrfs balance start -musage=30 /
sudo btrfs balance start -musage=50 /
sudo btrfs balance start -musage=80 /
sudo btrfs balance start -musage=100 /

I’m wondering if the balances aggravate something.
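For reference, the same sequence of passes can be written as a loop (a sketch with the same filters and ordering as the script above; the `balance_passes` function name is my own, and the whole thing needs root and a reasonably quiet filesystem):

```shell
#!/bin/sh
# Same balance passes as the script above, expressed as a loop:
# data chunks first, then metadata, at increasing usage thresholds.
# -dusage=0 / -musage=0 only reclaims completely empty chunks, so the
# early passes are cheap and the later ones progressively more expensive.
balance_passes() {
    mnt="$1"
    for u in 0 5 10 20 30 50 80 100; do
        btrfs balance start -dusage="$u" "$mnt" || return 1
    done
    for u in 0 5 10 20 30 50 80 100; do
        btrfs balance start -musage="$u" "$mnt" || return 1
    done
}
# Usage (as root): balance_passes /
```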

I did once try a repair with `btrfs check --repair /dev/sda2`, thinking something may have corrupted that SSD, and I stopped seeing the warnings for a week or two. Unfortunately, the errors came back, and not long after, they started on another machine (around the time I ran the script above), so if there was corruption, it’s not directly hardware related.

I hope this can get resolved soon. Right now, I’m crossing my fingers in hopes the warning is just a warning without serious consequence, but I may very well be overly optimistic.

If anyone has ideas of how to gather useful information for debugging, let me know.
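One low-effort option (a sketch; the helper name is mine, and the unit name is assumed from the logs above): capture the qgroup table before and after a cleanup and diff the two, so a qgroup the kernel failed to look up (ENOENT) would show up as a removed line.

```shell
#!/bin/sh
# Diff two captures of `btrfs qgroup show` output taken before and after
# a snapper cleanup. The helper only diffs the two files; the capture
# commands in the usage note need root.
diff_qgroup_state() {
    # $1, $2: files holding the before/after qgroup table
    diff -u "$1" "$2"
}
# Usage (as root):
#   btrfs qgroup show -pcre / > /tmp/qgroup.before
#   systemctl start snapper-cleanup.service
#   btrfs qgroup show -pcre / > /tmp/qgroup.after
#   diff_qgroup_state /tmp/qgroup.before /tmp/qgroup.after
```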

@dhwalker

Booting from a USB installation drive into rescue mode and running `btrfs check --repair /dev/drive_name` seems to have solved the issue. It corrected 5 errors. I really wonder what caused those errors all of a sudden. :thinking:

I also did a `btrfs balance start /` after the system started in normal mode again, and I see no warning so far. Let’s hope it’s fixed. :slightly_smiling_face:

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.