Quote Originally Posted by pattiM View Post
This may not be the right forum... I installed a pair of identical Samsung 850 SSDs on a laptop, then installed 15.2 with Btrfs across both devices, with default "raid" (mirror metadata, stripe data; snapshots enabled). A more or less default install. (This is a valid configuration for a pair of SSDs to provide FS error correction.)
It may be a valid configuration, but I doubt it is a prudent one.
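With raid1 metadata but striped (raid0) data, only metadata errors can actually be repaired from a second copy. A minimal sketch of how to check the active profiles and, if redundancy for data is really wanted, convert them (the mount point / is only an example; do a conversion only with a current backup):
Code:
# Show the current data/metadata profiles (e.g. "Data, RAID0" / "Metadata, RAID1")
sudo btrfs filesystem df /

# Optionally convert the data profile to raid1 so data errors can be repaired
# from the second copy as well; this rewrites all data chunks and takes a while.
sudo btrfs balance start -dconvert=raid1 /

# Verify the result
sudo btrfs filesystem usage /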

fstab
Code:
UUID=2222222f-4861-8888-9509-81578864445f   /                       btrfs  compress=zlib                 0  0
UUID=2222222f-4861-8888-9509-81578864445f   /.snapshots             btrfs  subvol=/@/.snapshots          0  0
UUID=2222222g-69d8-4a50-ba80-81578864445f  swap                    swap   defaults                      0  0
UUID=2222222f-4861-8888-9509-81578864445f   /var                    btrfs  subvol=/@/var                 0  0
UUID=2222222f-4861-8888-9509-81578864445f   /usr/local              btrfs  subvol=/@/usr/local           0  0
UUID=2222222f-4861-8888-9509-81578864445f   /tmp                    btrfs  subvol=/@/tmp                 0  0
UUID=2222222f-4861-8888-9509-81578864445f   /srv                    btrfs  subvol=/@/srv                 0  0
UUID=2222222f-4861-8888-9509-81578864445f   /root                   btrfs  subvol=/@/root                0  0
UUID=2222222f-4861-8888-9509-81578864445f   /opt                    btrfs  subvol=/@/opt                 0  0
UUID=2222222f-4861-8888-9509-81578864445f   /home                   btrfs  subvol=/@/home                0  0
UUID=2222222f-4861-8888-9509-81578864445f   /boot/grub2/x86_64-efi  btrfs  subvol=/@/boot/grub2/x86_64-efi  0  0
UUID=2222222f-4861-8888-9509-81578864445f   /boot/grub2/i386-pc     btrfs  subvol=/@/boot/grub2/i386-pc  0  0
UUID=E4DD-244E                             /boot/efi               vfat   defaults                      0  2
Compression can consume a lot of resources, which is why I don't use it.
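If compression is not wanted, the compress=zlib option on the root line can simply be dropped. A rough sketch, assuming a kernel that accepts compress=no as documented in btrfs(5); note that files already written stay compressed until they are rewritten:
Code:
# Remount the running system without compression
sudo mount -o remount,compress=no /

# Make it permanent by removing "compress=zlib" from the root line in /etc/fstab,
# then verify the fstab entry and the active mount options
grep btrfs /etc/fstab
findmnt -no OPTIONS /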

What I've noticed is that about once per day (usually on boot) KDE freezes (partially or mostly), repeatedly, sometimes for a minute or two at a time. GKrellM keeps running (as does the mouse pointer) and shows a single core at 100% utilization (as does htop), which jumps occasionally between the cores (i7-4700MQ, 16GB). This behavior lasts 5 minutes or so, then everything goes back to normal.
My systems have daily btrfs maintenance enabled, and I have never observed freezes caused by the maintenance tasks.
Code:
Apr 23 00:00:00 erlangen systemd[1]: Started Discard unused blocks on a mounted filesystem. 
Apr 23 00:00:00 erlangen systemd[1]: Started Scrub btrfs filesystem, verify block checksums. 
Apr 23 00:00:00 erlangen systemd[1]: Started Balance block groups on a btrfs filesystem. 
Apr 23 00:00:00 erlangen systemd[1]: Started Defragment file data on a mounted filesystem. 
Apr 23 00:00:00 erlangen btrfs-scrub.sh[20003]: Running scrub on / 
Apr 23 00:00:00 erlangen btrfs-trim.sh[20006]: Running fstrim on / 
Apr 23 00:00:00 erlangen btrfs-balance.sh[20011]: Before balance of / 
Apr 23 00:00:00 erlangen btrfs-balance.sh[20011]: Data, single: total=30.01GiB, used=24.75GiB 
Apr 23 00:00:00 erlangen btrfs-balance.sh[20011]: System, single: total=32.00MiB, used=16.00KiB 
Apr 23 00:00:00 erlangen btrfs-balance.sh[20011]: Metadata, single: total=3.00GiB, used=1.15GiB 
Apr 23 00:00:00 erlangen btrfs-balance.sh[20011]: GlobalReserve, single: total=75.17MiB, used=0.00B 
Apr 23 00:00:00 erlangen btrfs-balance.sh[20011]: Filesystem     Size  Used Avail Use% Mounted on 
Apr 23 00:00:00 erlangen btrfs-balance.sh[20011]: /dev/nvme0n1p3   56G     28G   26G   53% / 
Apr 23 00:00:35 erlangen systemd[1]: btrfs-defrag.service: Succeeded. 
Apr 23 00:00:35 erlangen systemd[1]: btrfs-defrag.service: Consumed 9.143s CPU time.
Apr 23 00:00:59 erlangen btrfs-trim.sh[20006]: /: 25.7 GiB (27635392512 bytes) trimmed 
Apr 23 00:00:59 erlangen btrfs-trim.sh[20006]: flock: getting lock took 0.000003 seconds 
Apr 23 00:00:59 erlangen btrfs-trim.sh[20006]: flock: executing fstrim 
Apr 23 00:00:59 erlangen systemd[1]: btrfs-trim.service: Succeeded. 
Apr 23 00:00:59 erlangen systemd[1]: btrfs-trim.service: Consumed 1.779s CPU time.
Apr 23 00:01:13 erlangen btrfs-scrub.sh[20003]: Scrub device /dev/nvme0n1p3 (id 1) done 
Apr 23 00:01:13 erlangen btrfs-scrub.sh[20003]: Scrub started:    Fri Apr 23 00:00:59 2021 
Apr 23 00:01:13 erlangen btrfs-scrub.sh[20003]: Status:           finished 
Apr 23 00:01:13 erlangen btrfs-scrub.sh[20003]: Duration:         0:00:14 
Apr 23 00:01:13 erlangen btrfs-scrub.sh[20003]: Total to scrub:   33.04GiB 
Apr 23 00:01:13 erlangen btrfs-scrub.sh[20003]: Rate:             1.85GiB/s 
Apr 23 00:01:13 erlangen btrfs-scrub.sh[20003]: Error summary:    no errors found 
Apr 23 00:01:13 erlangen btrfs-scrub.sh[20003]: flock: getting lock took 58.431279 seconds 
Apr 23 00:01:13 erlangen btrfs-scrub.sh[20003]: flock: executing btrfs 
Apr 23 00:01:13 erlangen systemd[1]: btrfs-scrub.service: Succeeded. 
Apr 23 00:01:13 erlangen systemd[1]: btrfs-scrub.service: Consumed 4.990s CPU time.
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: Done, had to relocate 0 out of 35 chunks 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: flock: getting lock took 72.750227 seconds 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: flock: executing btrfs 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: Dumping filters: flags 0x1, state 0x0, force is off 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]:   DATA (flags 0x2): balancing, usage=5 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: Done, had to relocate 0 out of 35 chunks 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: flock: getting lock took 0.000002 seconds 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: flock: executing btrfs 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: Dumping filters: flags 0x1, state 0x0, force is off 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]:   DATA (flags 0x2): balancing, usage=10 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: Done, had to relocate 0 out of 35 chunks 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: flock: getting lock took 0.000006 seconds 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: flock: executing btrfs 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: Done, had to relocate 0 out of 35 chunks 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: flock: getting lock took 0.000006 seconds 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: flock: executing btrfs 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: Dumping filters: flags 0x6, state 0x0, force is off 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]:   METADATA (flags 0x2): balancing, usage=3 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]:   SYSTEM (flags 0x2): balancing, usage=3 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: Done, had to relocate 1 out of 35 chunks 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: flock: getting lock took 0.000006 seconds 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: flock: executing btrfs 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: After balance of / 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: Data, single: total=30.01GiB, used=24.69GiB 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: System, single: total=32.00MiB, used=16.00KiB 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: Metadata, single: total=3.00GiB, used=1.22GiB 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: GlobalReserve, single: total=73.25MiB, used=0.00B 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: Filesystem     Size  Used Avail Use% Mounted on 
Apr 23 00:01:13 erlangen btrfs-balance.sh[20011]: /dev/nvme0n1p3   56G     28G   26G   53% / 
Apr 23 00:01:13 erlangen systemd[1]: btrfs-balance.service: Succeeded.
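Whether the freezes coincide with these jobs is easy to check on the affected machine. A small sketch, assuming the openSUSE btrfsmaintenance package is installed (it provides the btrfs-scrub/balance/trim/defrag units seen above and keeps its schedule in /etc/sysconfig/btrfsmaintenance):
Code:
# When did the maintenance jobs last run, and when are they due next?
systemctl list-timers 'btrfs-*'

# What did they do during the current boot?
journalctl -b -u btrfs-scrub.service -u btrfs-balance.service \
           -u btrfs-trim.service -u btrfs-defrag.service

# Schedule and options are configured here on openSUSE
cat /etc/sysconfig/btrfsmaintenance
If a scrub or balance shows up at the same time as a freeze, adjusting the period settings in that file so the jobs run away from boot and login time would be the obvious first tweak.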