I don’t use Btrfs myself, but I believe it uses more space than other file-systems anyway. I’m using around 12 GB for the root partition with ext4; home, swap, etc. are on another drive.
snapper list
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+----------------------------------+------+---------+-----------------------+---------
single | 0 | | | root | | current |
single | 1 | | Fri 21 Apr 2017 04:24:19 AM CEST | root | | first root filesystem |
Do you think I can delete all the other folders, and why doesn’t that happen when I delete them from YaST?
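For what it’s worth: the numbered directories under /.snapshots are Btrfs subvolumes managed by snapper, so removing them as ordinary folders (or from a file manager) won’t free the space and leaves snapper’s bookkeeping inconsistent. A minimal sketch of the usual route, with hypothetical snapshot numbers (check your own “snapper list” output first; snapshot 0 is the running system and can’t be deleted):

snapper list
# delete a single snapshot by number
snapper delete 5
# or a whole range in one go
snapper delete 2-10
# confirm the matching subvolumes are gone
btrfs subvolume list / | grep snapshots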
For reference, some information from a larger Btrfs partition on Leap 42.2 (Btrfs quota is disabled . . . ):
# LANG=C btrfs fi usage /
Overall:
Device size: 80.00GiB
Device allocated: 16.07GiB
Device unallocated: 63.93GiB
Device missing: 0.00B
Used: 14.63GiB
Free (estimated): 64.57GiB (min: 32.60GiB)
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 40.33MiB (used: 0.00B)
Data,single: Size:14.01GiB, Used:13.37GiB
/dev/sda3 14.01GiB
Metadata,DUP: Size:1.00GiB, Used:647.56MiB
/dev/sda3 2.00GiB
System,DUP: Size:32.00MiB, Used:16.00KiB
/dev/sda3 64.00MiB
Unallocated:
/dev/sda3 63.93GiB
#
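Reading the output above: the DUP profiles keep two copies of every block, so the Metadata,DUP size of 1.00 GiB occupies 2.00 GiB of raw disk, and the System,DUP 32 MiB occupies 64 MiB. That is exactly where the “Device allocated” figure comes from: 14.01 GiB (data) + 2 × 1.00 GiB (metadata) + 2 × 32 MiB (system) ≈ 16.07 GiB.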
# LANG=C snapper list
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+-----+-------+--------------------------+------+---------+-----------------------+--------------
single | 0 | | | root | | current |
single | 1 | | Tue Mar 21 17:44:22 2017 | root | | first root filesystem |
pre | 231 | | Tue May 9 17:54:29 2017 | root | number | zypp(packagekitd) | important=yes
post | 232 | 231 | Tue May 9 18:02:51 2017 | root | number | | important=yes
pre | 281 | | Mon May 29 14:31:49 2017 | root | number | zypp(packagekitd) | important=yes
post | 283 | 281 | Mon May 29 14:36:45 2017 | root | number | | important=yes
pre | 315 | | Thu Jun 8 16:24:45 2017 | root | number | zypp(packagekitd) | important=yes
post | 318 | 315 | Thu Jun 8 16:31:59 2017 | root | number | | important=yes
pre | 319 | | Thu Jun 8 16:32:23 2017 | root | number | yast sw_single |
pre | 320 | | Thu Jun 8 16:33:00 2017 | root | number | zypp(y2base) | important=no
post | 321 | 320 | Thu Jun 8 16:33:02 2017 | root | number | | important=no
pre | 322 | | Thu Jun 8 16:39:21 2017 | root | number | zypp(y2base) | important=no
post | 323 | 322 | Thu Jun 8 16:39:23 2017 | root | number | | important=no
post | 324 | 319 | Thu Jun 8 16:42:24 2017 | root | number | |
pre | 325 | | Fri Jun 9 18:19:14 2017 | root | number | zypp(packagekitd) | important=no
post | 326 | 325 | Fri Jun 9 18:19:23 2017 | root | number | | important=no
pre | 327 | | Mon Jun 12 11:24:03 2017 | root | number | zypp(packagekitd) | important=no
post | 328 | 327 | Mon Jun 12 11:24:11 2017 | root | number | | important=no
pre | 329 | | Mon Jun 12 15:33:10 2017 | root | number | zypp(packagekitd) | important=no
post | 330 | 329 | Mon Jun 12 15:34:36 2017 | root | number | | important=no
#
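The “number” entries in the Cleanup column mean those snapshots are pruned by snapper’s number-based cleanup algorithm, whose retention limits live in the per-config file. An illustrative snippet from /etc/snapper/configs/root (the values shown are examples, not necessarily what this machine uses):

# prune snapshots by count; keep 2-10 normal and 4-10 "important" pairs
NUMBER_CLEANUP="yes"
NUMBER_LIMIT="2-10"
NUMBER_LIMIT_IMPORTANT="4-10"

After lowering the limits, “snapper cleanup number” applies them immediately.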
# LANG=C df -h /.snapshots/
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 80G 15G 65G 19% /.snapshots
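Note that df on a Btrfs subvolume reports figures for the whole file-system, not for /.snapshots alone. With a reasonably recent btrfs-progs, something like the following shows what the snapshots themselves occupy (the “Exclusive” column is roughly what deleting them would free):

btrfs filesystem du -s /.snapshots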
# btrfs inspect-internal tree-stats /dev/sda3
Calculating size of root tree
Total size: 64.00KiB
Inline data: 0.00B
Total seeks: 3
Forward seeks: 2
Backward seeks: 1
Avg seek len: 698.67KiB
Total clusters: 1
Avg cluster size: 0.00B
Min cluster size: 0.00B
Max cluster size: 16.00KiB
Total disk spread: 816.00KiB
Total read time: 0 s 3 us
Levels: 2
Calculating size of extent tree
Total size: 24.05MiB
Inline data: 0.00B
Total seeks: 1033
Forward seeks: 588
Backward seeks: 445
Avg seek len: 10.11GiB
Seek histogram
16384 - 4751360: 153 ###
4980736 - 31211520: 153 ###
31391744 - 70287360: 153 ###
70713344 - 248610816: 153 ###
248954880 - 383205376: 153 ###
384778240 - 73268674560: 153 ###
73279242240 - 73804677120: 109 ##
Total clusters: 258
Avg cluster size: 47.01KiB
Min cluster size: 32.00KiB
Max cluster size: 256.00KiB
Total disk spread: 68.77GiB
Total read time: 0 s 524 us
Levels: 3
Calculating size of csum tree
Total size: 16.28MiB
Inline data: 0.00B
Total seeks: 931
Forward seeks: 534
Backward seeks: 397
Avg seek len: 8.36GiB
Seek histogram
16384 - 114688: 145 ###
147456 - 8863744: 138 ###
9224192 - 64143360: 138 ###
65634304 - 161071104: 138 ###
161693696 - 273317888: 138 ###
277921792 - 72992849920: 138 ###
72998027264 - 73699868672: 79 #
Total clusters: 60
Avg cluster size: 44.80KiB
Min cluster size: 32.00KiB
Max cluster size: 176.00KiB
Total disk spread: 68.70GiB
Total read time: 0 s 1790 us
Levels: 3
Calculating size of fs tree
Total size: 16.00KiB
Inline data: 0.00B
Total seeks: 0
Forward seeks: 0
Backward seeks: 0
Avg seek len: 0.00B
Total clusters: 1
Avg cluster size: 0.00B
Min cluster size: 0.00B
Max cluster size: 16.00KiB
Total disk spread: 0.00B
Total read time: 0 s 0 us
Levels: 1
#
IMHO, Btrfs is a modern, resource-hungry file-system which needs quite a bit of disk space to perform correctly.
IMHO, if the available disk space is less than 100 GB, consider using a more “traditional” file-system such as ext4.
Given the parentage of XFS, I suspect that using XFS for “smaller” home partitions may also provoke instability . . .
Hi
I have found 40 GB to be workable as long as you tweak the configs and ensure the snapper cleanup and btrfs maintenance cron jobs actually run (especially on Tumbleweed). It all depends on your end use: if you want to retain more snapshots for rollback etc., then you do need to increase the size as required… AFAIK the real minimum is 30 GB…?
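A rough sketch of what I mean by checking those jobs; the timer names and paths are the usual openSUSE ones, so verify them on your own install:

# systemd-timer based setups: is snapper's cleanup scheduled?
systemctl list-timers | grep snapper
# older cron-based setups: the daily cleanup script
ls -l /etc/cron.daily/suse.de-snapper
# apply the limits from /etc/snapper/configs/root by hand
snapper cleanup number
# the btrfsmaintenance package drives the periodic balance/scrub jobs
grep PERIOD /etc/sysconfig/btrfsmaintenance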