Cleaning up /, leftover snapshots?

Hello,

This is kind of a continuation of the thread: https://forums.opensuse.org/showthread.php/544078-Breaking-(zypper-dup)-distro-upgrade-into-multiple-phases-due-to-space-constraint-of-partition

I have a TW installation with a 56 GB / partition. As far as I am concerned, snapshots are disabled and deleted, and this is my only system with btrfs.

sudo snapper list-configs
Config | Subvolume
-------+----------
root   | /        


sudo snapper --config root list
 # | Type   | Pre # | Date                             | User | Used Space | Cleanup | Description           | Userdata     
---+--------+-------+----------------------------------+------+------------+---------+-----------------------+-------------
0  | single |       |                                  | root |            |         | current               |              
1* | single |       | Wed 29 Jul 2020 11:40:38 PM CEST | root |  16.92 GiB |         | first root filesystem |              
5  | pre    |       | Mon 07 Sep 2020 10:13:16 PM CEST | root |   1.01 MiB | number  | zypp(zypper)          | important=no
6  | post   |     5 | Mon 07 Sep 2020 10:13:17 PM CEST | root | 992.00 KiB | number  |                       | important=no


Lately I have realized that this partition is almost full (53/56 GB), while a nearly identical ext4 partition with LEAP 15.2 is only half full (36/55 GB). They have almost all the same packages, and I don’t think there is anything drastically more installed on the TW system than on 15.2. Both / partitions are on SSDs.

I feel like I am missing something here. I think I didn’t do some form of maintenance to discard and clean up the clutter on the TW installation. Any suggestions on making sure all of the snapshots are cleaned up, and on some form of maintenance to reclaim the wasted disk space?

Show usage of your filesystem:

**3400G:~ #** btrfs filesystem usage -T / 
Overall: 
    Device size:                  40.00GiB 
    Device allocated:             22.29GiB 
    Device unallocated:           17.71GiB 
    Device missing:                  0.00B 
    Used:                         14.27GiB 
    Free (estimated):             25.04GiB      (min: 25.04GiB) 
    Data ratio:                       1.00 
    Metadata ratio:                   1.00 
    Global reserve:               43.83MiB      (used: 0.00B) 
    Multiple profiles:                  no 

             Data     Metadata  System               
Id Path      single   single    single   Unallocated 
-- --------- -------- --------- -------- ----------- 
 1 /dev/sda3 21.01GiB   1.25GiB 32.00MiB    17.71GiB 
-- --------- -------- --------- -------- ----------- 
   Total     21.01GiB   1.25GiB 32.00MiB    17.71GiB 
   Used      13.67GiB 607.42MiB 16.00KiB             
**3400G:~ #**

BTW: I got lots of snapshots:

**3400G:~ #** snapper list 
    # | Type   | Pre # | Date                     | User | Cleanup | Description           | Userdata      
------+--------+-------+--------------------------+------+---------+-----------------------+-------------- 
   0  | single |       |                          | root |         | current               |               
 649* | single |       | Sat Oct 10 06:45:34 2020 | root |         | writable copy of #646 |               
1214  | pre    |       | Fri Nov 27 12:39:10 2020 | root | number  | zypp(zypper)          | important=yes 
1215  | post   |  1214 | Fri Nov 27 12:42:55 2020 | root | number  |                       | important=yes 
1216  | pre    |       | Fri Nov 27 12:43:24 2020 | root | number  | zypp(zypper)          | important=yes 
1217  | post   |  1216 | Fri Nov 27 12:43:46 2020 | root | number  |                       | important=yes 
1246  | pre    |       | Tue Dec  1 06:48:20 2020 | root | number  | zypp(zypper)          | important=yes 
1247  | post   |  1246 | Tue Dec  1 06:54:30 2020 | root | number  |                       | important=yes 
1248  | pre    |       | Tue Dec  1 10:20:43 2020 | root | number  | zypp(zypper)          | important=no  
1249  | post   |  1248 | Tue Dec  1 10:20:47 2020 | root | number  |                       | important=no  
1250  | pre    |       | Tue Dec  1 13:25:21 2020 | root | number  | yast bootloader       |               
1251  | post   |  1250 | Tue Dec  1 13:26:32 2020 | root | number  |                       |               
1252  | pre    |       | Tue Dec  1 17:09:57 2020 | root | number  | zypp(zypper)          | important=no  
1253  | post   |  1252 | Tue Dec  1 17:10:00 2020 | root | number  |                       | important=no  
1254  | pre    |       | Wed Dec  2 08:02:48 2020 | root | number  | zypp(zypper)          | important=yes 
1255  | post   |  1254 | Wed Dec  2 08:05:35 2020 | root | number  |                       | important=yes 
1256  | pre    |       | Wed Dec  2 08:05:59 2020 | root | number  | zypp(zypper)          | important=yes 
1257  | post   |  1256 | Wed Dec  2 08:06:22 2020 | root | number  |                       | important=yes 
**3400G:~ #**

In addition to “btrfs filesystem usage”, show also the output of “btrfs sub li /”, “btrfs qgroup show /”, “btrfs sub get-default /” and “grep ' / ' /proc/self/mountinfo”. The quotes in the last command are intentional.

**#** btrfs filesystem usage -T /
Overall:
    Device size:                  55.00GiB
    Device allocated:             55.00GiB
    Device unallocated:            1.00MiB
    Device missing:                  0.00B
    Used:                         53.60GiB
    Free (estimated):            723.94MiB      (min: 723.94MiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:              128.34MiB      (used: 0.00B)
    Multiple profiles:                  no

                  Data     Metadata System               
Id Path           single   single   single   Unallocated
-- -------------- -------- -------- -------- -----------
 1 /dev/nvme0n1p7 53.00GiB  2.00GiB  4.00MiB     1.00MiB
-- -------------- -------- -------- -------- -----------
   Total          53.00GiB  2.00GiB  4.00MiB     1.00MiB
   Used           52.29GiB  1.31GiB 16.00KiB  

snapper list
 # | Type   | Pre # | Date                             | User | Used Space | Cleanup | Description           | Userdata     
---+--------+-------+----------------------------------+------+------------+---------+-----------------------+-------------
0  | single |       |                                  | root |            |         | current               |              
1* | single |       | Wed 29 Jul 2020 11:40:38 PM CEST | root |  16.95 GiB |         | first root filesystem |              
5  | pre    |       | Mon 07 Sep 2020 10:13:16 PM CEST | root |   1.01 MiB | number  | zypp(zypper)          | important=no
6  | post   |     5 | Mon 07 Sep 2020 10:13:17 PM CEST | root | 992.00 KiB | number  |                       | important=no


btrfs sub li /
ID 256 gen 45439 top level 5 path @
ID 257 gen 45485 top level 256 path @/var
ID 258 gen 45451 top level 256 path @/usr/local
ID 259 gen 45485 top level 256 path @/tmp
ID 260 gen 43568 top level 256 path @/srv
ID 261 gen 45452 top level 256 path @/root
ID 262 gen 45433 top level 256 path @/opt
ID 263 gen 45433 top level 256 path @/boot/grub2/x86_64-efi
ID 264 gen 10157 top level 256 path @/boot/grub2/i386-pc
ID 265 gen 44949 top level 256 path @/.snapshots
ID 266 gen 45451 top level 265 path @/.snapshots/1/snapshot
ID 490 gen 45439 top level 265 path @/.snapshots/5/snapshot
ID 491 gen 45439 top level 265 path @/.snapshots/6/snapshot

**#** btrfs qgroup show /
qgroupid         rfer         excl  
--------         ----         ----  
0/5          16.00KiB     16.00KiB  
0/256        16.00KiB     16.00KiB  
0/257         2.85GiB      2.85GiB  
0/258         7.36GiB      7.36GiB  
0/259         2.88MiB      2.88MiB  
0/260        16.00KiB     16.00KiB  
0/261        14.63MiB     14.63MiB  
0/262         2.29GiB      2.29GiB  
0/263         3.79MiB      3.79MiB  
0/264        16.00KiB     16.00KiB  
0/265        20.00KiB     20.00KiB  
0/266        25.71GiB     16.95GiB  
0/490        23.99GiB      1.01MiB  
0/491        23.99GiB    992.00KiB  
1/0          23.99GiB     15.22GiB

**#** btrfs sub get-default /
ID 266 gen 45451 top level 265 path @/.snapshots/1/snapshot
**#** grep '/' /proc/self/mountinfo
23 102 0:21 / /sys rw,nosuid,nodev,noexec,relatime shared:2 - sysfs sysfs rw
24 102 0:22 / /proc rw,nosuid,nodev,noexec,relatime shared:25 - proc proc rw
25 102 0:5 / /dev rw,nosuid,noexec shared:21 - devtmpfs devtmpfs rw,size=16299440k,nr_inodes=4074860,mode=755,inode64
26 23 0:7 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:3 - securityfs securityfs rw
27 25 0:23 / /dev/shm rw,nosuid,nodev shared:22 - tmpfs tmpfs rw,inode64
28 25 0:24 / /dev/pts rw,nosuid,noexec,relatime shared:23 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
29 102 0:25 / /run rw,nosuid,nodev shared:24 - tmpfs tmpfs rw,size=6524948k,nr_inodes=819200,mode=755,inode64
30 23 0:26 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:4 - tmpfs tmpfs ro,size=4096k,nr_inodes=1024,mode=755,inode64
31 30 0:27 / /sys/fs/cgroup/unified rw,nosuid,nodev,noexec,relatime shared:5 - cgroup2 cgroup2 rw,nsdelegate
32 30 0:28 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:6 - cgroup cgroup rw,xattr,name=systemd
33 23 0:29 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:18 - pstore pstore rw
34 23 0:30 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime shared:19 - efivarfs efivarfs rw
35 23 0:31 / /sys/fs/bpf rw,nosuid,nodev,noexec,relatime shared:20 - bpf none rw,mode=700
36 30 0:32 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:7 - cgroup cgroup rw,cpu,cpuacct
37 30 0:33 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:8 - cgroup cgroup rw,freezer
38 30 0:34 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:9 - cgroup cgroup rw,hugetlb
39 30 0:35 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:10 - cgroup cgroup rw,blkio
40 30 0:36 / /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:11 - cgroup cgroup rw,rdma
41 30 0:37 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:12 - cgroup cgroup rw,memory
42 30 0:38 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:13 - cgroup cgroup rw,net_cls,net_prio
43 30 0:39 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:14 - cgroup cgroup rw,devices
44 30 0:40 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:15 - cgroup cgroup rw,perf_event
45 30 0:41 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,cpuset
46 30 0:42 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:17 - cgroup cgroup rw,pids
102 1 0:45 /@/.snapshots/1/snapshot / rw,noatime,nodiratime shared:1 - btrfs /dev/nvme0n1p7 rw,ssd,discard,space_cache,subvolid=266,subvol=/@/.snapshots/1/snapshot
47 24 0:50 / /proc/sys/fs/binfmt_misc rw,relatime shared:26 - autofs systemd-1 rw,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=19884
48 23 0:6 / /sys/kernel/debug rw,nosuid,nodev,noexec,relatime shared:27 - debugfs debugfs rw
49 25 0:51 / /dev/hugepages rw,relatime shared:28 - hugetlbfs hugetlbfs rw,pagesize=2M
50 25 0:20 / /dev/mqueue rw,nosuid,nodev,noexec,relatime shared:29 - mqueue mqueue rw
51 23 0:11 / /sys/kernel/tracing rw,nosuid,nodev,noexec,relatime shared:30 - tracefs tracefs rw
53 102 0:45 /@/.snapshots /.snapshots rw,relatime shared:31 - btrfs /dev/nvme0n1p7 rw,ssd,discard,space_cache,subvolid=265,subvol=/@/.snapshots
55 102 0:45 /@/boot/grub2/x86_64-efi /boot/grub2/x86_64-efi rw,relatime shared:32 - btrfs /dev/nvme0n1p7 rw,ssd,discard,space_cache,subvolid=263,subvol=/@/boot/grub2/x86_64-efi
56 102 0:45 /@/boot/grub2/i386-pc /boot/grub2/i386-pc rw,relatime shared:33 - btrfs /dev/nvme0n1p7 rw,ssd,discard,space_cache,subvolid=264,subvol=/@/boot/grub2/i386-pc
54 102 0:45 /@/opt /opt rw,relatime shared:34 - btrfs /dev/nvme0n1p7 rw,ssd,discard,space_cache,subvolid=262,subvol=/@/opt
57 102 0:45 /@/srv /srv rw,relatime shared:35 - btrfs /dev/nvme0n1p7 rw,ssd,discard,space_cache,subvolid=260,subvol=/@/srv
58 102 0:45 /@/root /root rw,relatime shared:36 - btrfs /dev/nvme0n1p7 rw,ssd,discard,space_cache,subvolid=261,subvol=/@/root
59 102 0:45 /@/tmp /tmp rw,relatime shared:37 - btrfs /dev/nvme0n1p7 rw,ssd,discard,space_cache,subvolid=259,subvol=/@/tmp
61 102 0:45 /@/usr/local /usr/local rw,relatime shared:38 - btrfs /dev/nvme0n1p7 rw,ssd,discard,space_cache,subvolid=258,subvol=/@/usr/local
52 102 0:45 /@/var /var rw,relatime shared:39 - btrfs /dev/nvme0n1p7 rw,ssd,discard,space_cache,subvolid=257,subvol=/@/var

Continued output of grep '/' /proc/self/mountinfo from above:

145 102 259:1 / /boot/efi rw,relatime shared:81 - vfat /dev/nvme0n1p1 rw,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro
148 23 0:60 / /sys/fs/fuse/connections rw,nosuid,nodev,noexec,relatime shared:83 - fusectl fusectl rw
151 102 8:3 / /home rw,relatime shared:85 - ext4 /dev/sda3 rw,data=ordered
154 102 8:2 / /mnt/Shared_Data rw,nosuid,nodev,relatime shared:87 - fuseblk /dev/sda2 rw,user_id=0,group_id=0,allow_other,blksize=4096
751 48 0:11 / /sys/kernel/debug/tracing rw,nosuid,nodev,noexec,relatime shared:419 - tracefs tracefs rw
874 29 0:63 / /run/user/1000 rw,nosuid,nodev,relatime shared:490 - tmpfs tmpfs rw,size=3262472k,nr_inodes=815618,mode=700,uid=1000,gid=100,inode64
892 874 0:64 / /run/user/1000/gvfs rw,nosuid,nodev,relatime shared:500 - fuse.gvfsd-fuse gvfsd-fuse rw,user_id=1000,group_id=100
583 874 0:61 / /run/user/1000/doc rw,nosuid,nodev,relatime shared:301 - fuse /dev/fuse rw,user_id=1000,group_id=100

Well, you have two snapshots (5 and 6) that consume a not insignificant amount of space (it is hard to tell exactly how much due to shared usage, but it is somewhere between 15 and 23 GiB). You said you deleted all snapshots, but obviously not all of them are gone. Delete these to free up space.
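For example, something like this (snapshot numbers taken from your snapper output above; double-check them before running):

snapper --config root delete 5-6
btrfs subvolume sync /          # subvolume deletion is asynchronous; this waits until the space is actually returned
btrfs filesystem usage -T /     # verify that used space went down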

The command I wrote had spaces around the / (' / ') to catch just a single line:

102 1 0:45 /@/.snapshots/1/snapshot / rw,noatime,nodiratime shared:1 - btrfs /dev/nvme0n1p7 rw,ssd,discard,space_cache,subvolid=266,subvol=/@/.snapshots/1/snapshot

Okay, I am a bit confused, because snapper list (sudo snapper --config root list) told me they are very small (<1 MB).

I will delete them.

Also, regarding the single ' / ': my mistake.

Yes, copy/paste is a hell of a job >:(.

I deleted the two snapshots

snapper --config root delete 5-6

and I got ~15 GB back. Is there anything else you can see that I can trim off?

Also,

**#** grep ' / ' /proc/self/mountinfo  
23 102 0:21 / /sys rw,nosuid,nodev,noexec,relatime shared:2 - sysfs sysfs rw
24 102 0:22 / /proc rw,nosuid,nodev,noexec,relatime shared:25 - proc proc rw
25 102 0:5 / /dev rw,nosuid,noexec shared:21 - devtmpfs devtmpfs rw,size=16299440k,nr_inodes=4074860,mode=755,inode64
26 23 0:7 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:3 - securityfs securityfs rw
27 25 0:23 / /dev/shm rw,nosuid,nodev shared:22 - tmpfs tmpfs rw,inode64
28 25 0:24 / /dev/pts rw,nosuid,noexec,relatime shared:23 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
29 102 0:25 / /run rw,nosuid,nodev shared:24 - tmpfs tmpfs rw,size=6524948k,nr_inodes=819200,mode=755,inode64
30 23 0:26 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:4 - tmpfs tmpfs ro,size=4096k,nr_inodes=1024,mode=755,inode64
31 30 0:27 / /sys/fs/cgroup/unified rw,nosuid,nodev,noexec,relatime shared:5 - cgroup2 cgroup2 rw,nsdelegate
32 30 0:28 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:6 - cgroup cgroup rw,xattr,name=systemd
33 23 0:29 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:18 - pstore pstore rw
34 23 0:30 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime shared:19 - efivarfs efivarfs rw
35 23 0:31 / /sys/fs/bpf rw,nosuid,nodev,noexec,relatime shared:20 - bpf none rw,mode=700
36 30 0:32 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:7 - cgroup cgroup rw,perf_event
37 30 0:33 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:8 - cgroup cgroup rw,cpuset
38 30 0:34 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:9 - cgroup cgroup rw,hugetlb
39 30 0:35 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:10 - cgroup cgroup rw,blkio
40 30 0:36 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:11 - cgroup cgroup rw,cpu,cpuacct
41 30 0:37 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:12 - cgroup cgroup rw,net_cls,net_prio
42 30 0:38 / /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:13 - cgroup cgroup rw,rdma
43 30 0:39 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:14 - cgroup cgroup rw,memory
44 30 0:40 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:15 - cgroup cgroup rw,freezer
45 30 0:41 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,devices
46 30 0:42 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:17 - cgroup cgroup rw,pids
102 1 0:45 /@/.snapshots/1/snapshot / rw,noatime,nodiratime shared:1 - btrfs /dev/nvme0n1p7 rw,ssd,discard,space_cache,subvolid=266,subvol=/@/.snapshots/1/snapshot
47 24 0:50 / /proc/sys/fs/binfmt_misc rw,relatime shared:26 - autofs systemd-1 rw,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=18007
48 23 0:6 / /sys/kernel/debug rw,nosuid,nodev,noexec,relatime shared:27 - debugfs debugfs rw
49 25 0:51 / /dev/hugepages rw,relatime shared:28 - hugetlbfs hugetlbfs rw,pagesize=2M
50 25 0:20 / /dev/mqueue rw,nosuid,nodev,noexec,relatime shared:29 - mqueue mqueue rw
51 23 0:11 / /sys/kernel/tracing rw,nosuid,nodev,noexec,relatime shared:30 - tracefs tracefs rw
145 102 259:1 / /boot/efi rw,relatime shared:81 - vfat /dev/nvme0n1p1 rw,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro
148 23 0:60 / /sys/fs/fuse/connections rw,nosuid,nodev,noexec,relatime shared:83 - fusectl fusectl rw
151 102 8:3 / /home rw,relatime shared:85 - ext4 /dev/sda3 rw,data=ordered
154 102 8:2 / /mnt/Shared_Data rw,nosuid,nodev,relatime shared:87 - fuseblk /dev/sda2 rw,user_id=0,group_id=0,allow_other,blksize=4096
857 29 0:63 / /run/user/1000 rw,nosuid,nodev,relatime shared:481 - tmpfs tmpfs rw,size=3262472k,nr_inodes=815618,mode=700,uid=1000,gid=100,inode64
875 857 0:64 / /run/user/1000/gvfs rw,nosuid,nodev,relatime shared:491 - fuse.gvfsd-fuse gvfsd-fuse rw,user_id=1000,group_id=100
545 857 0:61 / /run/user/1000/doc rw,nosuid,nodev,relatime shared:301 - fuse /dev/fuse rw,user_id=1000,group_id=100

Yes, Henk… I have some sort of phobia against copy-and-paste, apparently.

Whatever you do, when done, run ‘btrfs filesystem usage -T /’ again. You need unallocated space. You can’t write to btrfs if there is none, despite lots of free space being displayed. :wink:

    Device size:                  40.00GiB
    Device allocated:             22.29GiB
    **Device unallocated:           17.71GiB**
    Device missing:                  0.00B
    Used:                         14.27GiB
    **Free (estimated):             25.04GiB**
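If the filesystem ever ends up fully allocated again, a filtered balance can usually turn mostly-empty chunks back into unallocated space. A sketch (the 50% usage threshold is just an example value; adjust as appropriate):

btrfs balance start -dusage=50 /   # rewrite and compact data chunks that are at most 50% used
btrfs filesystem usage -T /        # check that ‘Device unallocated’ grew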

We are just joking now, but I have seen cases where people borked their system by deviating from the suggested command. And it is the one who hits the Return key who is responsible, not the one suggesting something. I encourage everybody:

  • to check what is there (either by copy/paste or by careful typing), character by character, especially the white space;
  • to try to understand what it should do, consult man pages, etc.;

before executing it.

After all, even if it comes from people you trust on forums you trust, it is still “on the Internet”, and people can make errors themselves when advising.

For the sake of the archives: that’s wrong in general. Btrfs will reuse free space inside already-allocated chunks for new writes where possible.
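That chunk-internal free space is visible directly, e.g.:

btrfs filesystem df /   # per type, ‘total’ = space allocated to chunks, ‘used’ = what is occupied inside them; the difference is still writable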

You are always dealing with your own special case, and sometimes the stuff hits the fan:

“I run BTRFS on my root filesystem (on Linux), mostly for the quick snapshot and restore functionality. Yesterday I ran into a common problem: my drive was suddenly full. I went from 4GB of free space on my system drive to 0 in an instant, causing all sorts of chaos on my system.”

For my special case, I really want to prevent btrfs from using the unallocated space, because this SSD is pretty messy; it holds seven partitions, laid out like this:

1. W10 EFI
2. W10 C:
3. W10 recovery
4. (unallocated space)
5. Linux EFI
6. LEAP 15.1
7. LEAP 15.2
8. TW

The unallocated space is pretty much there as a mental barrier between W10 and openSUSE, and also for over-provisioning.

  • Next time please use something like
fdisk -l

when you want to show your partitioning.

  • The “unallocated space” there has nothing to do with the unallocated space within the btrfs (or any other) file system mentioned above. Btrfs, like all file systems, cannot reach outside the partition (or any other container) it is in.
  • I hope the mental barrier is yours; for the computer it is just disk blocks that are apparently never read or written. :wink:

Snapper shows the value of “exclusive” space consumed by a snapshot (which corresponds to the “excl” column in the btrfs qgroup show output). The most precise meaning of this value is how much space will become available when you delete this snapshot. The problem is, this value is correct only as long as no snapshot was additionally created or deleted, and it must (and will) be recalculated with every snapper operation.

As an example: start with a root filesystem consuming 10 GB. Create two snapshots, one after the other. The “size” of each snapshot will be 0, because all data in both snapshots is shared between these snapshots and the root filesystem.

Now upgrade your system. Let’s assume this upgrade changes every file, so the 10 GB in your root was replaced by new content. If you look at the snapshot sizes in snapper (or the “excl” column in the qgroup output) you will still see 0, because every snapshot shares 100% of its data with the other snapshot, so no snapshot has anything consumed exclusively. Deleting either of the two snapshots won’t make a single byte available.

Now delete one snapshot. Wait a bit to allow the quota scan to complete. If you look now, the “size” of the remaining snapshot will be 10 GB: because this snapshot no longer shares anything with the root filesystem (remember, we replaced all data there), all data in the snapshot belongs exclusively to it, and if you delete it, you will gain 10 GB.
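Rather than waiting, the rescan can also be forced and waited for, assuming quota is enabled on the filesystem:

btrfs quota rescan -w /   # starts a quota rescan and blocks until it finishes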

For this reason snapper also creates a summary qgroup named 1/0. This group includes all snapshots, and the exclusive data in this group is data referenced by at least one of these snapshots but not by the root itself. In other words, this shows how much space will become available when you delete all snapshots.
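A small sketch to reproduce this behaviour, assuming a scratch btrfs filesystem mounted at /mnt/test (the path and sizes are placeholders, scaled down from the 10 GB example):

btrfs quota enable /mnt/test
btrfs subvolume create /mnt/test/root
dd if=/dev/urandom of=/mnt/test/root/data bs=1M count=100    # stand-in for the root filesystem contents
btrfs subvolume snapshot /mnt/test/root /mnt/test/snap1
btrfs subvolume snapshot /mnt/test/root /mnt/test/snap2
btrfs qgroup show /mnt/test      # excl of both snapshots is tiny: everything is shared with root
dd if=/dev/urandom of=/mnt/test/root/data bs=1M count=100    # the “upgrade”: replace all data
btrfs quota rescan -w /mnt/test
btrfs qgroup show /mnt/test      # excl still tiny for each: the two snapshots share the old data with each other
btrfs subvolume delete /mnt/test/snap2
btrfs quota rescan -w /mnt/test
btrfs qgroup show /mnt/test      # excl of snap1 is now ~100 MiB: the old data belongs to it alone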

Okay, that makes sense, but how can we determine the actual size of a specific snapshot? Assume I only have the current running system and one snapshot.

Also, Henk, I didn’t post fdisk -l because it is a bit messy; the partitions aren’t ordered properly, etc. And yes, the mental barrier is only mine, to make sure I am not installing Linux on the Windows side, since I intend to replace LEAP 15.1 and 15.2 eventually.

I confused unallocated space on the disk with unallocated space within btrfs… I am still very new to btrfs and experimenting with it. So far I am still convinced that for my personal use ext4 is more convenient, but I am trying to figure out btrfs at the same time.

A little off topic: is it practical to have a small / partition, say 25 GB, and have snapshots enabled? It’s just so much space used up.

Define “snapshot size”.