Questioning Btrfs COW Default on Non-NVMe Systems (Live/Installer/Immutable)

I wanted to open a discussion and get some thoughts on the default use of Btrfs with its Copy-on-Write (COW) behavior on systems where NVMe storage is either absent or highly unlikely. My particular concern focuses on scenarios such as:

  1. USB Live Boot Environments: When running a live openSUSE environment from a USB drive, storage performance is already a bottleneck. Adding Btrfs COW overhead on top of a relatively slow USB 2.0 or 3.0 connection can make the experience noticeably sluggish.

  2. Installers: Similarly, during the installation process itself, disk operations can feel significantly slower on traditional HDDs or even slower SATA SSDs compared to what one might expect, arguably due to the COW nature of Btrfs. This can contribute to a less responsive installation experience.

  3. Immutable Installs (e.g., MicroOS/Aeon/Kalpa): While I understand the immense benefits of Btrfs and COW for snapshotting and rollbacks in immutable systems, I’m curious if the performance implications on non-NVMe hardware have been thoroughly evaluated for the typical user. For users with older hardware or those installing on slower drives, is the current default truly optimal for their experience, especially when disk I/O is a more frequent operation (e.g., during updates, container operations, or even just general system use)?

My primary question is whether the benefits of Btrfs COW (like snapshots and checksums) always outweigh the performance penalties on slower storage mediums, especially when considering the initial user experience with live boots and installations, or the daily usability of immutable systems on less performant hardware.

Could there be a case for:

  • A more intelligent default detection? Perhaps the installer could detect the storage type and, if NVMe isn’t present, offer a different default (e.g., ext4, Btrfs with nodatacow for the @ subvolume, or different mount options); a rough sketch of what such detection might look like follows after this list.
  • Clearer options during installation? While users can manually change the filesystem, perhaps highlighting the performance implications of Btrfs COW on slower drives during the installation process could be beneficial.
  • Optimizations for Btrfs on slower media? Are there specific Btrfs mount options or configurations that could be recommended or even defaulted to when non-NVMe storage is detected, to mitigate some of the performance overheads without sacrificing critical features entirely?
  • Revisiting the default for Live/Installer environments? Given that these are temporary environments, is the full Btrfs COW overhead truly necessary, or could a more lightweight filesystem be used by default to enhance responsiveness?
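
To make the first point concrete, here is a very rough sketch of what such detection might look like (a hypothetical shell snippet, not an actual installer patch; the device name and mount options are only examples, and nodatacow also disables checksumming and compression for affected files):

# 1 = rotational (HDD), 0 = SSD/NVMe
if [ "$(cat /sys/block/sda/queue/rotational)" = "1" ]; then
    # e.g. propose ext4 here, or mount the @ subvolume without COW
    mount -o subvol=@,noatime,nodatacow /dev/sda2 /mnt
fi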

I genuinely appreciate the power and features Btrfs brings to openSUSE. This is more about ensuring the best possible out-of-the-box experience for the widest range of hardware, particularly for those not fortunate enough to have NVMe drives.

Looking forward to your thoughts and insights!

Lance

I smell AI, and it stinks really hard of wrong assumptions …

btrfs isn’t all COW. Any directory is easily switched to NOCOW:

karl@erlangen:~> lsattr .local/share/baloo/
---------------C------ .local/share/baloo/index
---------------C------ .local/share/baloo/index-lock
karl@erlangen:~> 
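
Setting the attribute is a one-liner with chattr; a minimal sketch (the directory name is just an example, and only files created after the flag is set become NOCOW, losing checksums in the process):

chattr +C ~/vm-images    # new files created here will be NOCOW (no checksums either)
lsattr -d ~/vm-images    # the C attribute should now be listed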

Grandpa of ■■■■■■■■■■■■■■■■■■■■■■ is responsive as can be:

i4130:~ # inxi -SMCDm
System:
  Host: i4130 Kernel: 6.15.7-1-default arch: x86_64 bits: 64
  Console: pty pts/0 Distro: openSUSE Tumbleweed 20250723
Machine:
  Type: Desktop Mobo: BIOSTAR model: H81MHV3 v: 5.0 serial: N/A UEFI: American Megatrends v: 4.6.5
    date: 03/16/2021
Memory:
  System RAM: total: 8 GiB available: 7.68 GiB used: 744.5 MiB (9.5%) igpu: 32 MiB
  Array-1: capacity: 32 GiB slots: 4 modules: 2 EC: None
  Device-1: ChannelA-DIMM0 type: DDR3 size: 4 GiB speed: 1600 MT/s
  Device-2: ChannelA-DIMM1 type: no module installed
  Device-3: ChannelB-DIMM0 type: DDR3 size: 4 GiB speed: 1600 MT/s
  Device-4: ChannelB-DIMM1 type: no module installed
CPU:
  Info: dual core model: Intel Core i3-4130 bits: 64 type: MT MCP cache: L2: 512 KiB
  Speed (MHz): avg: 800 min/max: 800/3400 cores: 1: 800 2: 800 3: 800 4: 800
Drives:
  Local Storage: total: 465.76 GiB used: 14.76 GiB (3.2%)
  ID-1: /dev/sda vendor: Samsung model: SSD 850 EVO 500GB size: 465.76 GiB
i4130:~ # 

Drive read speeds:

i4130:~/benchmark-850-evo # hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   22362 MB in  2.00 seconds = 11193.44 MB/sec
 Timing buffered disk reads: 1548 MB in  3.00 seconds = 515.77 MB/sec
i4130:~/benchmark-850-evo # 

fio btrfs write speeds:

4k-4g-1j-8G:  WRITE: bw=379MiB/s (398MB/s), 379MiB/s-379MiB/s (398MB/s-398MB/s), io=23.1GiB (24.8GB), run=62303-62303msec
64k-256m-16j-8G:  WRITE: bw=491MiB/s (515MB/s), 25.8MiB/s-34.3MiB/s (27.0MB/s-35.9MB/s), io=30.1GiB (32.3GB), run=60169-62758msec
1m-16g-1j-8G:  WRITE: bw=493MiB/s (516MB/s), 493MiB/s-493MiB/s (516MB/s-516MB/s), io=30.0GiB (32.2GB), run=62337-62337msec
i4130:~/benchmark-850-evo # 

btrfs overhead is low, even at the small 4k block size. The more relevant 64k/16-job run shows virtually no overhead at all.
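
For anyone who wants to run a comparable test, an fio invocation along these lines produces a similar single-job 4k write run (illustrative only; the file name and runtime here are assumptions, not necessarily the exact job files used above):

fio --name=4k-4g-1j --filename=testfile --rw=write --bs=4k \
    --size=4g --numjobs=1 --runtime=60 --time_based --group_reporting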

Could you please stop with this self-promotion? The “■■■■■■■■■■■■■■■■■■■■■■” is becoming really annoying. It has absolutely no value for others.

1 Like

fio ext4 write speeds:

4k-4g-1j-8G:  WRITE: bw=210MiB/s (220MB/s), 210MiB/s-210MiB/s (220MB/s-220MB/s), io=12.6GiB (13.6GB), run=61665-61665msec
64k-256m-16j-8G:  WRITE: bw=487MiB/s (510MB/s), 27.9MiB/s-32.8MiB/s (29.3MB/s-34.4MB/s), io=29.7GiB (31.8GB), run=62165-62397msec
1m-16g-1j-8G:  WRITE: bw=492MiB/s (516MB/s), 492MiB/s-492MiB/s (516MB/s-516MB/s), io=29.9GiB (32.1GB), run=62136-62136msec

At larger block sizes, ext4 overhead is the same as btrfs. At the 4k block size, however, ext4 overhead is huge.

But now everyone knows about ■■■, it’s almost a staple here :innocent:

I don’t find the speed difference between NVMe and SATA SSDs all that noticeable in day-to-day use. I have both in my system. As for mechanical drives, they are so slow regardless of filesystem that I use them exclusively when capacity is needed and bandwidth really doesn’t matter. I don’t install games (or any other application) on them; I put spreadsheets, music, films etc. on them. SSDs are cheap enough now that you can have both speed and capacity if you need it.

1 Like

So much easier to manage too! :tada:

Beyond a certain size, say 2-4 TB, HDDs still make some cost/performance sense depending on the workload. 6x6 TB HDDs in a ZFS array makes a lot of sense for backups, streaming, etc. :shamrock:
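
For a concrete starting point, such a pool can be created in one command (disk names below are placeholders; raidz2 is just one reasonable layout for six drives, tolerating two failures):

zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 /dev/disk/by-id/ata-disk3 \
    /dev/disk/by-id/ata-disk4 /dev/disk/by-id/ata-disk5 /dev/disk/by-id/ata-disk6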

For the casual reader, I have added the following note in the prologue:

“Some remarks on science, pseudoscience, and learning how to not fool yourself. Caltech’s 1974 commencement address: Cargo Cult Science by RICHARD P. FEYNMAN”

1 Like

Well yes; there’s no sense putting your uncompressed blu-ray collection and job applications on an SSD, but it’s pretty normal these days for a videogame to be upwards of 100GB and you don’t really want that on a HDD. This is why I have 5TB of SSD storage and 14TB of HDD storage lolololo

1 Like