
Thread: Has Btrfs "wasted" some space?

  1. #11
    Join Date
    Sep 2012
    Posts
    5,203

    Default Re: Has Btrfs "wasted" some space?

    Quote Originally Posted by IBBoard View Post
    making the "excl" numbers initially unintuitive.
    "Exclusive" means "how much space is not shared with subvolumes outside of this qgroup", while "Refer" means total space referred to by this qgroup. So Exclusive number for 1/0 qgroup shows how much space your snapshots consume in total, which actually answers "where has my space gone", while "exclusive" for each individual subvolume can be interpreted as "how much space I gain after deleting this snapshot". Due to shared nature deleting snapshot may suddenly make a lot of space in adjoining snapshots exclusive.

  2. #12

    Default Re: Has Btrfs "wasted" some space?

    Quote Originally Posted by arvidjaar View Post
    "Exclusive" means "how much space is not shared with subvolumes outside of this qgroup", while "Refer" means total space referred to by this qgroup. So Exclusive number for 1/0 qgroup shows how much space your snapshots consume in total, which actually answers "where has my space gone", while "exclusive" for each individual subvolume can be interpreted as "how much space I gain after deleting this snapshot".
    I knew those definitions and thought that I understood it, but it was this that I wasn't thinking about properly:
    Quote Originally Posted by arvidjaar View Post
    Due to the shared nature of extents, deleting a snapshot may suddenly make a lot of space in adjoining snapshots exclusive.
    Because Btrfs snapshots are Copy on Write, I was thinking that "excl" was effectively a difference from the previous snapshot, and so I thought I should be able to add all of the 0/x "excl" values to a base value and find out how much disk space I was using.

    As you say, though, snapshots can overlap (and often will - a "post" will often be very similar to the following "pre" from the next update, and the same for timeline snapshots) and so you can't just add the numbers up that way.
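    To make that concrete: summing the "excl" column of a qgroup listing undercounts badly once snapshots share data. The numbers in this sketch are sample values in the style of the sessions shown later in the thread; the awk one-liner just totals the excl column of the 0/x rows.

    ```shell
    # Sum the "excl" column (3rd field) of the 0/x rows of a sample listing.
    # KiB-sized entries are just metadata noise, so treat them as ~0.
    sum=$(awk '$1 ~ /^0\// {
        v = $3
        if (v ~ /KiB/) v = 0
        else sub(/MiB/, "", v)
        s += v
    } END { printf "%.2f", s }' <<'EOF'
    0/5         196.71MiB    196.71MiB ---
    0/258       196.71MiB     16.00KiB ---
    0/259       196.71MiB     16.00KiB ---
    EOF
    )
    echo "excl total: ${sum} MiB"
    ```

    The total comes out to roughly 197 MiB, while df on such a filesystem would report about double that in use: the ~197 MiB of old data held only by the two snapshots is shared between them, so it appears in nobody's excl.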

    I had thought that it would be odd if Btrfs had "lost" some space somewhere, but it wasn't until people talked through the diagnosis that it made sense.

    Thanks for explaining things.

  3. #13
    Join Date
    Sep 2012
    Posts
    5,203

    Default Re: Has Btrfs "wasted" some space?

    Quote Originally Posted by IBBoard View Post
    I was thinking that "excl" was effectively a difference to the previous one
    This is true only as long as there is a single snapshot (and there is no other data sharing). And even then you need a clear definition of what "difference" means exactly.

    As soon as space is captured in two or more snapshots, it will always be accounted as shared. Here is a trivial example:
    Code:
    leap15:/home/bor # mkfs -t btrfs -f /dev/sdb1
    btrfs-progs v4.15
    See http://btrfs.wiki.kernel.org for more information.
    
    
    Performing full device TRIM /dev/sdb1 (500.00GiB) ...
    Label:              (null)
    UUID:               b1935d4b-8c09-40da-bb06-c56cca1025c5
    Node size:          16384
    Sector size:        4096
    Filesystem size:    500.00GiB
    Block group profiles:
      Data:             single            8.00MiB
      Metadata:         DUP               1.00GiB
      System:           DUP               8.00MiB
    SSD detected:       no
    Incompat features:  extref, skinny-metadata
    Number of devices:  1
    Devices:
       ID        SIZE  PATH
        1   500.00GiB  /dev/sdb1
    
    
    leap15:/home/bor # mount /dev/sdb1 /mnt
    leap15:/home/bor # dd if=/dev/urandom of=/mnt/bigfile bs=1K count=201400
    201400+0 records in
    201400+0 records out
    206233600 bytes (206 MB, 197 MiB) copied, 1.8202 s, 113 MB/s
    leap15:/home/bor # fsync /mnt
    leap15:/home/bor # btrfs quota enable /mnt
    leap15:/home/bor # btrfs quota rescan -w /mnt
    quota rescan started
    leap15:/home/bor # btrfs qgroup show -p /mnt
    qgroupid         rfer         excl parent  
    --------         ----         ---- ------  
    0/5         196.71MiB    196.71MiB ---     
    leap15:/home/bor # btrfs su sn -r /mnt /mnt/snap1
    leap15:/home/bor # btrfs su sn -r /mnt /mnt/snap2
    leap15:/home/bor # btrfs qgroup show -p /mnt
    qgroupid         rfer         excl parent  
    --------         ----         ---- ------  
    0/5         196.71MiB     16.00KiB ---     
    0/258       196.71MiB     16.00KiB ---     
    0/259       196.71MiB     16.00KiB ---
    So at this point we have two snapshots with zero exclusive space. So far this is correct: we have not changed anything on the active filesystem since the snapshots were created. Let's make large-scale changes on the active filesystem.
    Code:
    leap15:/home/bor # dd if=/dev/urandom of=/mnt/bigfile bs=1K count=201400
    201400+0 records in
    201400+0 records out
    206233600 bytes (206 MB, 197 MiB) copied, 1.80381 s, 114 MB/s
    leap15:/home/bor # fsync /mnt
    leap15:/home/bor # btrfs qgroup show --sync -p /mnt
    qgroupid         rfer         excl parent  
    --------         ----         ---- ------  
    0/5         196.71MiB    196.71MiB ---     
    0/258       196.71MiB     16.00KiB ---     
    0/259       196.71MiB     16.00KiB ---
    Oops. Both snapshots still show zero exclusive space. There is no way to use these metrics to determine how much space is consumed in total:
    Code:
    leap15:/home/bor # df -h /mnt
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdb1       500G  412M  498G   1% /mnt
    Let's create some more snapshots and do some more changes.
    Code:
    leap15:/home/bor # btrfs su sn -r /mnt /mnt/snap3
    Create a readonly snapshot of '/mnt' in '/mnt/snap3'
    leap15:/home/bor # btrfs su sn -r /mnt /mnt/snap4
    Create a readonly snapshot of '/mnt' in '/mnt/snap4'
    leap15:/home/bor # rm /mnt/bigfile
    leap15:/home/bor # btrfs qgroup show --sync -p /mnt
    qgroupid         rfer         excl parent  
    --------         ----         ---- ------  
    0/5          16.00KiB     16.00KiB ---     
    0/258       196.71MiB     16.00KiB ---     
    0/259       196.71MiB     16.00KiB ---     
    0/260       196.71MiB     16.00KiB ---     
    0/261       196.71MiB     16.00KiB ---     
    leap15:/home/bor # df -h /mnt
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdb1       500G  412M  498G   1% /mnt
    Still the same situation. All snapshots look entirely identical, and there is zero data in the active filesystem, yet total space consumption did not change.

    What is possible is computing a running total across consecutive snapshots and using the deltas as an indication of the change rate. Like this:
    Code:
    leap15:/home/bor # btrfs qgroup create 1/258 /mnt
    leap15:/home/bor # btrfs qgroup create 1/259 /mnt
    leap15:/home/bor # btrfs qgroup create 1/260 /mnt
    leap15:/home/bor # btrfs qgroup create 1/261 /mnt
    leap15:/home/bor # btrfs qgroup assign 0/258 1/258 /mnt
    leap15:/home/bor # btrfs qgroup assign 0/258 1/259 /mnt
    leap15:/home/bor # btrfs qgroup assign 0/259 1/259 /mnt
    leap15:/home/bor # btrfs qgroup assign 0/258 1/260 /mnt
    leap15:/home/bor # btrfs qgroup assign 0/259 1/260 /mnt
    leap15:/home/bor # btrfs qgroup assign 0/260 1/260 /mnt
    leap15:/home/bor # btrfs qgroup assign 0/258 1/261 /mnt
    leap15:/home/bor # btrfs qgroup assign 0/259 1/261 /mnt
    leap15:/home/bor # btrfs qgroup assign 0/260 1/261 /mnt
    leap15:/home/bor # btrfs qgroup assign --rescan 0/261 1/261 /mnt
    leap15:/home/bor # btrfs quota rescan -s /mnt 
    no rescan operation in progress
    leap15:/home/bor # btrfs qgroup show --sync -p /mnt
    qgroupid         rfer         excl parent                  
    --------         ----         ---- ------                  
    0/5          16.00KiB     16.00KiB ---                     
    0/258       196.71MiB     16.00KiB ---                     
    0/259       196.71MiB     16.00KiB ---                     
    0/260       196.71MiB     16.00KiB ---                     
    0/261       196.71MiB     16.00KiB ---                     
    1/258       196.71MiB     16.00KiB 0/258                   
    1/259       196.73MiB    196.73MiB 0/258,0/259             
    1/260       393.45MiB    196.75MiB 0/258,0/259,0/260       
    1/261       393.46MiB    393.46MiB 0/258,0/259,0/260,0/261 
    leap15:/home/bor #
    Now we can actually see something. To estimate how much data was changed or deleted since a snapshot, look at the delta between the excl of its cumulative qgroup and the previous one. To estimate how much data was added since a snapshot, look at the delta between the rfer of the next snapshot's cumulative qgroup and this one's. So we can see that between 0/258 and 0/259 a lot of data was changed, because the amount of new data is approximately the same as the amount of deleted data. We also see that after 0/261 much data was deleted or changed; to find out which, we can create one more snapshot and compute the delta. Skipping the commands:
    Code:
    leap15:/home/bor # btrfs qgroup show --sync -p /mnt
    qgroupid         rfer         excl parent                        
    --------         ----         ---- ------                        
    0/5          16.00KiB     16.00KiB ---                           
    0/258       196.71MiB     16.00KiB ---                           
    0/259       196.71MiB     16.00KiB ---                           
    0/260       196.71MiB     16.00KiB ---                           
    0/261       196.71MiB     16.00KiB ---                           
    0/262        16.00KiB     16.00KiB ---                           
    1/258       196.71MiB     16.00KiB 0/258                         
    1/259       196.73MiB    196.73MiB 0/258,0/259                   
    1/260       393.45MiB    196.75MiB 0/258,0/259,0/260             
    1/261       393.46MiB    393.46MiB 0/258,0/259,0/260,0/261       
    1/262       393.48MiB    393.48MiB 0/258,0/259,0/260,0/261,0/262 
    leap15:/home/bor #
    So there is no new data between 0/261 and 0/262, which implies that much was deleted but nothing was overwritten or added.
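    The delta arithmetic can be spelled out explicitly. The rfer/excl values below (in MiB, with the 16 KiB entries rounded to 0.02 MiB) are transcribed from the 1/x rows of the listing above; the rfer delta between consecutive cumulative groups estimates data added, the excl delta data changed or deleted:

    ```python
    # rfer/excl (MiB) of the cumulative qgroups from the listing above;
    # each 1/N contains snapshots 0/258 up to 0/N.
    cumulative = {
        "1/258": (196.71, 0.02),   # 16KiB excl, rounded
        "1/259": (196.73, 196.73),
        "1/260": (393.45, 196.75),
        "1/261": (393.46, 393.46),
        "1/262": (393.48, 393.48),
    }
    names = list(cumulative)
    for prev, cur in zip(names, names[1:]):
        added = cumulative[cur][0] - cumulative[prev][0]    # rfer delta
        churned = cumulative[cur][1] - cumulative[prev][1]  # excl delta
        print(f"{prev} -> {cur}: +{added:.2f} MiB added, "
              f"{churned:.2f} MiB changed/deleted")
    ```

    The ~197 MiB jump in rfer between 1/259 and 1/260 is the overwritten bigfile, and the near-zero rfer delta between 1/261 and 1/262 is the "nothing added, only deleted" case discussed above.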

    Of course this is not precise. Removing one file and adding another cannot be distinguished from overwriting the same file using these metrics. But that does not matter much, because for the purpose of snapshot space consumption both cases are the same: changing a file can be considered as removing the old one (implicitly preserving it in the snapshot) and adding a new one with the same name.

    Unfortunately, as can be seen, management is really cumbersome. It would be good if more people played with it to get a better understanding of the requirements. Then we could enhance snapper to manage and display this information. But more real-life confirmation is needed.
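    At least the repetitive create/assign bookkeeping could be scripted. Here is a minimal sketch that generates the cumulative commands used above; it is shown as a dry run that only prints the commands, since btrfs qgroup needs root and a mounted Btrfs filesystem (drop the leading "echo" to execute for real).

    ```shell
    # Dry run: generate the qgroup create/assign commands for cumulative
    # groups 1/258..1/261, each containing all snapshots up to its own ID.
    MNT=/mnt
    prev=""
    for id in 258 259 260 261; do
        echo btrfs qgroup create "1/$id" "$MNT"
        for member in $prev $id; do
            echo btrfs qgroup assign "0/$member" "1/$id" "$MNT"
        done
        prev="$prev $id"
    done
    ```

    This prints 4 create and 10 assign commands, matching the session above; extending it to new snapshot IDs is just a matter of growing the list.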

  4. #14

    Default Re: Has Btrfs "wasted" some space?

    Thanks for all of those worked examples. It looks like some of it is possible to calculate, but the counters available from Btrfs don't make it easy.

    Given that Snapper doesn't list space usage by snapshots and you've got to do your own matching of results across three commands (snapper, btrfs qgroup and btrfs subvolume), it would initially be nice if it could at least print the values and give clear explanations. I'm not sure how easy it is to word, though: "Total space" and "Space freed up after deletion (but could be more if you delete another snapshot first)" doesn't work too well!

    I might put a feature request in for Snapper for an alternate cleanup that would indirectly make this clearer: deleting "Post" snapshots (but not Pre) once there is a subsequent snapshot. The way I look at snapshots, you need the Pre in case your update breaks something, and you need the Timeline in case you edit a config and break it, but all that Post does is capture the state after an upgrade (which subsequent timeline snapshots will do anyway) and overlap with the next Pre snapshot (so you don't obviously see the disk usage).

    The only drawback would be if you upgrade, screw up a config, take a timeline snapshot, delete the Post, and then find you have to roll back to the Pre and do the upgrade again. But that's a corner case, and it'd be why you don't make it a default-enabled setting.
