btrfs compress a directory

Vanilla KDE/LEAP
This seems like it should be simple and straightforward, but I’m totally failing at accomplishing it. Online searches don’t turn up anything useful except the docs, which are surprisingly cryptic on this issue even after repeated re-readings - maybe the words are just echoing in my head now? https://btrfs.wiki.kernel.org/index.php/Compression

I have a 3.8TB btrfs volume holding my entire system (mounted uncompressed; a single btrfs volume across two drives, per the defaults during install), set up at LEAP install time with standard defaults (snapshots enabled was ticked).
I just want to compress my data directory.
I ran chattr -R +c Data_drive (I may not have needed to do this recursively) on the directory and verified that all the files/dirs have the “c” attribute (per the docs), example:

> lsattr 
--------c---------- ./00__Run-Notes.txt 
--------c---------- ./1JAN2035.rsfE120_NINToR_Miche_08g_6d2c_2b.nc 
--------c---------- ./CONST_hack-add.f 
--------c---------- ./E120_NINToR_Miche_01.PRT 
--------c---------- ./E120_NINToR_Miche.R 
--------c---------- ./GEOM_B_stock2020_modB3a.f 
--------c---------- ./GHG_RCP85_stock.txt 
--------c---------- ./acc 
--------c---------- ./daily 
--------c---------- ./E120_NINToR_Miche.PRT 
--------c---------- ./E120_NINToR_Miche_02.PRT

But every time I run the following (per the docs), the free space on my btrfs filesystem decreases (per KDE Dolphin → Properties):

sudo btrfs filesystem defrag -rv -czstd /OSS/Data_drive/

What am I doing wrong? Are lots of snapshots or metadata being written each time I run that command, using up space?

I tried listing the sizes on my filesystem using Dolphin → Properties, and the three main large directories I know of are /home and my two data directories, /OSS/Data_drive and /OSS/Data_Part. These are reported as containing 448GB, 214GB, and 133GB, respectively, for a total of ~800GB. But my btrfs filesystem size is 3.8TB! So there is a LOT of space gone somewhere, and I don’t know how to find it and free it up. (k4dirstat - I know it’s an old tool, but it has always seemed to work fine until now - doesn’t report correctly when run on “/”, even as kdesu.)

Help? Thanks.

…I forgot to say in the last sentence that Dolphin now reports only 100GB of space available on / - so that’s roughly 2.5TB missing somewhere?
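(For later readers: df totals can’t untangle shared extents, so on btrfs the filesystem’s own tools are the way to find missing space. A sketch - the btrfs commands are shown as comments because they need root and a btrfs mount; the helper below is my own convenience, not part of btrfs-progs, for turning the human-readable sizes those tools print into bytes:)

```shell
# Run as root on the filesystem in question:
#   btrfs filesystem usage /    # allocated vs. unallocated, per device
#   btrfs filesystem df /       # Data / Metadata / System breakdown
#   btrfs qgroup show /         # per-subvolume usage (needs quotas enabled)
#
# Helper: convert sizes like "1.55GiB" or "280.00KiB" into bytes,
# so the "excl" column can be summed up.
to_bytes() {
  awk -v s="$1" 'BEGIN {
    n = s + 0                       # numeric prefix, e.g. 1.55 from "1.55GiB"
    if      (s ~ /KiB/) n *= 1024
    else if (s ~ /MiB/) n *= 1024 ^ 2
    else if (s ~ /GiB/) n *= 1024 ^ 3
    else if (s ~ /TiB/) n *= 1024 ^ 4
    printf "%.0f\n", n
  }'
}

to_bytes 16.00KiB   # prints 16384
```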

(base) patti@linux-lhkc:~> df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 4.0M     0  4.0M   0% /dev
tmpfs                    7.5G   21M  7.5G   1% /dev/shm
tmpfs                    3.0G  340M  2.7G  12% /run
tmpfs                    4.0M     0  4.0M   0% /sys/fs/cgroup
/dev/mapper/system-root  3.7T  3.6T   99G  98% /
/dev/mapper/system-root  3.7T  3.6T   99G  98% /.snapshots
/dev/mapper/system-root  3.7T  3.6T   99G  98% /boot/grub2/i386-pc
/dev/mapper/system-root  3.7T  3.6T   99G  98% /boot/grub2/x86_64-efi
/dev/mapper/system-root  3.7T  3.6T   99G  98% /opt
/dev/mapper/system-root  3.7T  3.6T   99G  98% /root
/dev/mapper/system-root  3.7T  3.6T   99G  98% /srv
/dev/mapper/system-root  3.7T  3.6T   99G  98% /tmp
/dev/mapper/system-root  3.7T  3.6T   99G  98% /usr/local
/dev/sda1                511M  332K  511M   1% /boot/efi
/dev/mapper/system-root  3.7T  3.6T   99G  98% /home
/dev/mapper/system-root  3.7T  3.6T   99G  98% /var
tmpfs                    1.5G   68K  1.5G   1% /run/user/1000
(base) patti@linux-lhkc:~>

**linux-lhkc:/home/patti #** btrfs qgroup show / 
WARNING: qgroup data inconsistent, rescan recommended 
qgroupid         rfer         excl  
--------         ----         ----  
0/5          16.00KiB     16.00KiB  
0/256        16.00KiB     16.00KiB  
0/257         1.55GiB      1.55GiB  
0/258        16.00KiB     16.00KiB  
0/259         1.15GiB      1.15GiB  
0/260        16.00KiB     16.00KiB  
0/261       175.10MiB    175.10MiB  
0/262       408.31MiB    408.31MiB  
0/263       450.79GiB    450.79GiB  
0/264         3.79MiB      3.79MiB  
0/265        16.00KiB     16.00KiB  
0/266         7.56MiB      7.56MiB  
0/267       226.68GiB     93.38GiB  
0/3701        2.55TiB    132.74MiB  
0/3702        2.55TiB    121.84MiB  
0/4288        1.16TiB     18.14MiB  
0/4289        1.16TiB     57.34MiB  
0/4290        1.16TiB      7.23MiB  
0/4291        1.16TiB     32.38MiB  
0/4367      226.54GiB    280.00KiB  
0/4368      226.54GiB    528.00KiB  
1/0           2.89TiB      2.86TiB  
**linux-lhkc:/home/patti #**

Well, as long as I’m providing (hopefully useful) info for someone to help… here’s this… (I’m still clueless, btw)

**linux-lhkc:/home/patti #** btrfs filesystem du -s /OSS 
     Total   Exclusive  Set shared  Filename 
 347.75GiB   214.37GiB   115.66GiB  /OSS 
**linux-lhkc:/home/patti #** btrfs filesystem du -s / 
     Total   Exclusive  Set shared  Filename 
  13.07TiB   666.96GiB     2.98TiB  / 
**linux-lhkc:/home/patti #** btrfs filesystem du -s /home 
     Total   Exclusive  Set shared  Filename 
 448.75GiB   448.75GiB     4.49MiB  /home 
**linux-lhkc:/home/patti #**

I keep replying to my own thread… that’s pretty stupid as it biases against replies - but I think I may have found my missing space:
(snapshots must be removed with snapper, right?)

(base) patti@linux-lhkc:~> sudo btrfs subvolume show /  
[sudo] password for root:  
@/.snapshots/1/snapshot 
        Name:                   snapshot 
        UUID:                   272535a4-47ac-ec44-82bd-e4046d6c52c0 
        Parent UUID:            - 
        Received UUID:          - 
        Creation time:          2022-01-24 09:59:00 -0700 
        Subvolume ID:           267 
        Generation:             182984 
        Gen at creation:        31 
        Parent ID:              266 
        Top level ID:           266 
        Flags:                  - 
        Snapshot(s): 
                                @/.snapshots/111/snapshot 
                                @/.snapshots/112/snapshot 
                                @/.snapshots/145/snapshot 
                                @/.snapshots/146/snapshot 
                                @/.snapshots/147/snapshot 
                                @/.snapshots/148/snapshot 
                                @/.snapshots/149/snapshot 
                                @/.snapshots/150/snapshot



OK. Then.

**linux-lhkc:/home/patti #** snapper ls 
   # | Type   | Pre # | Date                            | User | Used Space | Cleanup | Description           | Userdata      
-----+--------+-------+---------------------------------+------+------------+---------+-----------------------+-------------- 
  0  | single |       |                                 | root |            |         | current               |               
  1* | single |       | Mon 24 Jan 2022 09:59:00 AM MST | root |   1.31 GiB |         | first root filesystem |               
111  | pre    |       | Mon 21 Feb 2022 12:21:09 PM MST | root | 132.74 MiB | number  | zypp(packagekitd)     | important=yes 
112  | post   |   111 | Mon 21 Feb 2022 12:23:23 PM MST | root | 121.84 MiB | number  |                       | important=yes 
145  | pre    |       | Wed 09 Mar 2022 10:15:46 AM MST | root |  18.14 MiB | number  | zypp(packagekitd)     | important=yes 
146  | post   |   145 | Wed 09 Mar 2022 10:19:39 AM MST | root |  57.34 MiB | number  |                       | important=yes 
147  | pre    |       | Wed 09 Mar 2022 08:07:29 PM MST | root |   7.23 MiB | number  | zypp(packagekitd)     | important=no  
148  | post   |   147 | Wed 09 Mar 2022 08:12:35 PM MST | root |  32.38 MiB | number  |                       | important=no  
149  | pre    |       | Fri 11 Mar 2022 07:23:28 AM MST | root | 280.00 KiB | number  | zypp(zypper)          | important=no  
150  | post   |   149 | Fri 11 Mar 2022 07:23:34 AM MST | root | 528.00 KiB | number  |                       | important=no  
151  | pre    |       | Fri 11 Mar 2022 11:22:17 AM MST | root |  16.00 KiB | number  | yast sw_single        |               
152  | pre    |       | Fri 11 Mar 2022 11:22:17 AM MST | root |  16.00 KiB | number  | yast sw_single        |               
153  | pre    |       | Fri 11 Mar 2022 11:22:17 AM MST | root |  16.00 KiB | number  | yast printer          |               
154  | pre    |       | Fri 11 Mar 2022 11:22:18 AM MST | root |  16.00 KiB | number  | yast sw_single        |               
155  | post   |   152 | Fri 11 Mar 2022 11:26:30 AM MST | root |  16.00 KiB | number  |                       |               
156  | post   |   153 | Fri 11 Mar 2022 11:26:44 AM MST | root |  16.00 KiB | number  |                       |               
157  | post   |   151 | Fri 11 Mar 2022 11:26:44 AM MST | root |  16.00 KiB | number  |                       |               
158  | post   |   154 | Fri 11 Mar 2022 11:26:44 AM MST | root |  16.00 KiB | number  |                       |               
**linux-lhkc:/home/patti #** snapper delete 111 
**linux-lhkc:/home/patti #** snapper ls 
   # | Type   | Pre # | Date                            | User | Used Space | Cleanup | Description           | Userdata      
-----+--------+-------+---------------------------------+------+------------+---------+-----------------------+-------------- 
  0  | single |       |                                 | root |            |         | current               |               
  1* | single |       | Mon 24 Jan 2022 09:59:00 AM MST | root |   1.31 GiB |         | first root filesystem |               
112  | post   |   111 | Mon 21 Feb 2022 12:23:23 PM MST | root |   1.73 TiB | number  |                       | important=yes 
145  | pre    |       | Wed 09 Mar 2022 10:15:46 AM MST | root |  18.14 MiB | number  | zypp(packagekitd)     | important=yes 
146  | post   |   145 | Wed 09 Mar 2022 10:19:39 AM MST | root |  57.34 MiB | number  |                       | important=yes 
147  | pre    |       | Wed 09 Mar 2022 08:07:29 PM MST | root |   7.23 MiB | number  | zypp(packagekitd)     | important=no  
148  | post   |   147 | Wed 09 Mar 2022 08:12:35 PM MST | root |  32.38 MiB | number  |                       | important=no  
149  | pre    |       | Fri 11 Mar 2022 07:23:28 AM MST | root | 280.00 KiB | number  | zypp(zypper)          | important=no  
150  | post   |   149 | Fri 11 Mar 2022 07:23:34 AM MST | root | 528.00 KiB | number  |                       | important=no  
151  | pre    |       | Fri 11 Mar 2022 11:22:17 AM MST | root |  16.00 KiB | number  | yast sw_single        |               
152  | pre    |       | Fri 11 Mar 2022 11:22:17 AM MST | root |  16.00 KiB | number  | yast sw_single        |               
153  | pre    |       | Fri 11 Mar 2022 11:22:17 AM MST | root |  16.00 KiB | number  | yast printer          |               
154  | pre    |       | Fri 11 Mar 2022 11:22:18 AM MST | root |  16.00 KiB | number  | yast sw_single        |               
155  | post   |   152 | Fri 11 Mar 2022 11:26:30 AM MST | root |  16.00 KiB | number  |                       |               
156  | post   |   153 | Fri 11 Mar 2022 11:26:44 AM MST | root |  16.00 KiB | number  |                       |               
157  | post   |   151 | Fri 11 Mar 2022 11:26:44 AM MST | root |  16.00 KiB | number  |                       |               
158  | post   |   154 | Fri 11 Mar 2022 11:26:44 AM MST | root |  16.00 KiB | number  |                       |               
**linux-lhkc:/home/patti #** snapper delete 112
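(For the record, snapper accepts a range, so this needn’t be done one number at a time, and btrfs frees the space asynchronously afterwards. A sketch - the range and the wrapper function are mine, with numbers taken from the listing above; run as root:)

```shell
# Sketch: batch snapshot removal.
purge_snapshots() {
  snapper delete "$1"      # e.g. "145-158" removes the whole range
  btrfs subvolume sync /   # wait until btrfs has actually cleaned them up
}

# purge_snapshots 145-158   # run as root; uncomment to use
```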

Sooo… I guess the words in the btrfs docs are just bouncing around in my head. Each of the directories for these snapshots, in Dolphin:

/.snapshots/145
1.5 TiB (1,653,117,905,848)
1,350,149 files, 109,205 sub-folders

  • But each of those folders’ size calculations reaches beyond the folders themselves to data/metadata shared elsewhere on the disks - there isn’t enough space in my FS for each of those directories to really be ~1TB. The docs do mention this, but it took a long time to find the right “slot” for that knowledge peg to fit into my brain. So Dolphin is fooled (of course, we’re warned about this sort of thing too). Anyway, I guess the warnings are: 1.) be aware you’ll likely be confused (unless you’re a wizard); 2.) get to know snapper right away if you have snapshots enabled on btrfs; and 3.) you’ll automatically lose FS space to those snapshots, so you have to find a way to track that loss (snapper doesn’t tell you).
**linux-lhkc:/.snapshots #** ll 
total 4 
drwxr-xr-x 1 root root   32 Jan 24 09:59 **1**
drwxr-xr-x 1 root root   66 Mar  9 10:19 **145**
drwxr-xr-x 1 root root   98 Mar  9 10:19 **146**
drwxr-xr-x 1 root root   66 Mar  9 20:12 **147**
drwxr-xr-x 1 root root   98 Mar  9 20:12 **148**
drwxr-xr-x 1 root root   66 Mar 11 07:23 **149**
drwxr-xr-x 1 root root   98 Mar 11 07:23 **150**
drwxr-xr-x 1 root root   66 Mar 11 11:22 **151**
drwxr-xr-x 1 root root   66 Mar 11 11:22 **152**
drwxr-xr-x 1 root root   66 Mar 11 11:22 **153**
drwxr-xr-x 1 root root   66 Mar 11 11:22 **154**
drwxr-xr-x 1 root root   98 Mar 11 11:26 **155**
drwxr-xr-x 1 root root   98 Mar 11 11:26 **156**
drwxr-xr-x 1 root root   98 Mar 11 11:26 **157**
drwxr-xr-x 1 root root   98 Mar 11 11:26 **158**
-rw-r----- 1 root root 1752 Mar 11 12:06 grub-snapshot.cfg 
**linux-lhkc:/.snapshots #**

So, now, to wrap up this thread - is this the correct way to compress a subdirectory on a btrfs filesystem? Was the -R argument needed?

Also - from … https://en.opensuse.org/openSUSE:Snapper_Tutorial

**linux-lhkc:/home/patti #** yast2-snapper 
If 'yast2-snapper' is not a typo you can use command-not-found to lookup the package that contains it, like this: 
    cnf yast2-snapper 
**linux-lhkc:/home/patti #** cnf yast2-snapper 
yast2-snapper: searching ...  
Warning: incomplete repos found but could not refresh - try to refresh manually, e.g. with 'zypper refresh'. 
 yast2-snapper: command not found                               
**linux-lhkc:/home/patti #** 



OK - I got all my disk space back - according to https://en.opensuse.org/openSUSE:Snapper_Tutorial - “The default behavior when Snapper is configured to run on root, meaning /, is to exclude every Btrfs subvolume.”

That statement about excluding subvolumes is confusing. I didn’t create a separate /home partition (e.g., XFS) - so is snapper snapshotting everything on my 3.7TB drive? (my /home is just another subvolume)


**linux-lhkc:/home/patti #** snapper list 
 # | Type   | Pre # | Date                            | User | Used Space | Cleanup | Description           | Userdata 
---+--------+-------+---------------------------------+------+------------+---------+-----------------------+--------- 
0  | single |       |                                 | root |            |         | current               |          
1* | single |       | Mon 24 Jan 2022 09:59:00 AM MST | root | 229.66 GiB |         | first root filesystem |          
**linux-lhkc:/home/patti #**

**linux-lhkc:/home/patti #** btrfs subvolume list / 
ID 256 gen 33 top level 5 path @ 
ID 257 gen 183359 top level 256 path @/var 
ID 258 gen 182725 top level 256 path @/usr/local 
ID 259 gen 183358 top level 256 path @/tmp 
ID 260 gen 182725 top level 256 path @/srv 
ID 261 gen 183358 top level 256 path @/root 
ID 262 gen 183230 top level 256 path @/opt 
ID 263 gen 183359 top level 256 path @/home 
ID 264 gen 182724 top level 256 path @/boot/grub2/x86_64-efi 
ID 265 gen 182724 top level 256 path @/boot/grub2/i386-pc 
ID 266 gen 183328 top level 256 path @/.snapshots 
ID 267 gen 183352 top level 266 path @/.snapshots/1/snapshot 
ID 4383 gen 183333 top level 263 path @/home/.snapshots 
**linux-lhkc:/home/patti #**

(base) patti@linux-lhkc:~> df -h 
Filesystem               Size  Used Avail Use% Mounted on 
devtmpfs                 4.0M     0  4.0M   0% /dev 
tmpfs                    7.5G   32M  7.5G   1% /dev/shm 
tmpfs                    3.0G  316M  2.7G  11% /run 
tmpfs                    4.0M     0  4.0M   0% /sys/fs/cgroup 
/dev/mapper/system-root  3.7T  687G  3.0T  19% / 
/dev/mapper/system-root  3.7T  687G  3.0T  19% /.snapshots 
/dev/mapper/system-root  3.7T  687G  3.0T  19% /boot/grub2/i386-pc 
/dev/mapper/system-root  3.7T  687G  3.0T  19% /boot/grub2/x86_64-efi 
/dev/mapper/system-root  3.7T  687G  3.0T  19% /opt 
/dev/mapper/system-root  3.7T  687G  3.0T  19% /root 
/dev/mapper/system-root  3.7T  687G  3.0T  19% /srv 
/dev/mapper/system-root  3.7T  687G  3.0T  19% /tmp 
/dev/mapper/system-root  3.7T  687G  3.0T  19% /usr/local 
/dev/sda1                511M  332K  511M   1% /boot/efi 
/dev/mapper/system-root  3.7T  687G  3.0T  19% /home 
/dev/mapper/system-root  3.7T  687G  3.0T  19% /var 
tmpfs                    1.5G   68K  1.5G   1% /run/user/1000 
(base) patti@linux-lhkc:~>

That statement about excluding subvolumes is confusing. I didn’t create a separate /home partition (e.g., XFS) - so is snapper snapshotting everything on my 3.7TB drive? (my /home is just another subvolume)

This shows what is included/excluded for snapshots (assuming you haven’t modified the defaults):
https://en.opensuse.org/SDB:BTRFS

As for compression, I specify it for the subvolume in the fstab file:

UUID=009ebef9-275e-4d9f-918b-80a4856c7ff8  /media/data  btrfs  noatime,compress=zstd  0  0
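A variant of the same entry, in case stronger compression is wanted - the compress-force and zstd-level syntax is taken from the btrfs(5) mount options and needs a reasonably recent kernel (zstd levels since 5.1), so treat it as a sketch to check against your system. Note that a compress mount option applies to the whole mounted filesystem, not to a single subvolume:

```
UUID=009ebef9-275e-4d9f-918b-80a4856c7ff8  /media/data  btrfs  noatime,compress-force=zstd:3  0  0
```

The difference: compress tries each file and gives up on data it judges incompressible; compress-force always compresses.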

More info/methods can be found at:
https://btrfs.wiki.kernel.org/index.php/Compression

btrfs filesystem defragment rewrites file contents, thus consuming additional space; the old contents remain locked in snapshots and continue to consume space. For that to happen “every time”, you would need to create a snapshot between every two consecutive invocations. I suspect that “every time” is an emotional exaggeration.

By definition snapshots preserve previous file content. How do you expect to store/keep it without consuming space?

so you have to find a way to track the loss of space (snapper doesn’t tell you).

Of course it does. “Snapshot size” is exactly the amount of additional space that will become available after this snapshot is deleted. This is as close to “how much **extra** space this snapshot takes” as you can get. Additionally, the “excl” column for qgroup 1/0 shows how much space all snapshots consume in total.

As usual there is no single true way.

Was the -R argument needed?

It depends on what you need.

Setting the “compress” attribute on a regular file tells btrfs to attempt compression for future writes, if possible; btrfs may decide to skip it. Setting the “compress” attribute does not affect existing data (it remains uncompressed).

Setting the “compress” attribute on a directory makes all **new** files created in that directory inherit the “compress” attribute. It changes nothing for files that already exist in the directory.

Running “btrfs filesystem defragment -c” will compress the existing data in the affected files, independently of the “compress” attribute. It will not set the “compress” attribute itself, so future writes into these files will not be (attempted to be) compressed.
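Putting those two mechanisms together: for a directory that already has data in it, both steps are needed. A sketch - the path is the thread’s /OSS/Data_drive, and the wrapper function is mine (run as root):

```shell
# Sketch: the +c attribute covers future writes only; defragment -c
# rewrites existing data compressed (and unshares it from any snapshots).
compress_tree() {
  dir=$1
  chattr -R +c "$dir"                           # new files inherit +c
  btrfs filesystem defragment -r -czstd "$dir"  # compress existing data
}

# compress_tree /OSS/Data_drive   # run as root; uncomment to use
```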

How is /home relevant here at all? You never described where the data you want to compress actually resides, but from your commands we can guess it is in the /OSS directory, which is not in /home.

so is snapper snapshotting everything on my 3.7TB drive? (my /home is just another subvolume)

Default snapper configuration snapshots root subvolume only. It excludes all other subvolumes like /var, /home etc.

btrfs can be puzzling. Some discussion (in German): https://forums.opensuse.org/showthread.php/562626-Snapper-Funktionsweise

Thank you for the reply! The business of building an actually robust filesystem has been in the works for decades, and it finally seems to have been achieved - although I am not sure how btrfs performs under hardware failures.
For your second comment below - yes, I guess that’s probably best, BUT, per the docs: Compression - btrfs Wiki
**“Can I set compression per-subvolume?”**
“Currently no, this is planned. You can simulate this by enabling compression on the subvolume directory and the files/directories will inherit the compression flag.”
So this is partially why btrfs is so confusing?
But I would have to change a directory tree to a subvolume in order to use this.
Maybe, if that actually works, it’s easier and more transparent to use?

I fail to see how it can be interpreted the way you did. All that the quoted statement says is that there is no subvolume property to enable compression, so you simply set the compression flag on the subvolume’s top-level directory - which you can do for any directory, whether it is a subvolume or not.

Simple. “Can I set compression per subvolume?” == “Can I compress subvolumes.” It’s a classic “instructions unclear” from the perspective in which I read it.

Ah, well, here I admit my own stupidity, of course. :rotfl:

But, whatever.

The compression measurement tool compsize, referenced in that link:
https://github.com/kilobyte/compsize
is available in the main repository for Tumbleweed and may be available in Leap.


dos@DOS1:~> sudo zypper info compsize 
Loading repository data... 
Reading installed packages... 


Information for package compsize: 
--------------------------------- 
Repository     : Main Repository (OSS) 
Name           : compsize 
Version        : 1.5-1.8 
Arch           : x86_64 
Vendor         : openSUSE 
Installed Size : 25.8 KiB 
Installed      : Yes 
Status         : up-to-date 
Source package : compsize-1.5-1.8.src 
Upstream URL   : https://github.com/kilobyte/compsize 
Summary        : Utility for measuring compression ratio of files on btrfs 
Description    :  
    compsize takes a list of files (given as arguments) on a btrfs 
    filesystem and measures used compression types and effective 
    compression ratio, producing a report.
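A usage sketch, with the thread’s data directory as the path. The compsize invocation is commented because it needs root and a btrfs filesystem; the column layout (Type, Perc, Disk Usage, Uncompressed, Referenced) is assumed from the compsize README, and the helper is my own, not part of compsize:

```shell
# Run as root on a btrfs filesystem:
#   compsize -x /OSS/Data_drive     # -x: don't cross filesystem boundaries
#
# Helper: pull the overall compression percentage out of saved
# compsize output (TOTAL row, second column).
total_perc() {
  awk '$1 == "TOTAL" { gsub(/%/, "", $2); print $2; exit }'
}

printf 'TOTAL 47%% 1.0G 2.2G 2.2G\n' | total_perc   # prints 47
```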

Maybe it is a language problem, but while I do understand every word, I do not understand what you are saying. Sorry.

It is available.

“Simple. ‘Can I set compression per subvolume?’ == ‘Can I compress subvolumes.’”

“Maybe it is a language problem, but while I do understand every word I do not understand what you are saying. Sorry.”

Commenting now only on the statement above: “I do not understand what you are saying.”
I guess you have to track the discussion. The left-hand side of the equality above, “Can I set compression per subvolume?”, is a direct quote from the Wiki:
Compression - btrfs Wiki
…and the answer is “no”

The right-hand side is what you said you do - compress a subvolume (by mounting it with the compression flag in fstab).

I understood (obviously incorrectly) from the wording of the Wiki that the two statements in the equation above were equal (hence the “==” sign), but they obviously are not, as your experience proves. Thus, the Wiki is referring to something different from what you do in fstab (I also assume the Wiki content is kept up-to-date). My misunderstanding of that subtlety is what I was referring to with the “instructions unclear” meme.

So, maybe some btrfs wizard here can explain exactly what that bit of the Wiki actually intends to say?
Compression - btrfs Wiki

I did not say anything about fstab. Where did you find it?