Running out of root space (HDD)

Hi, I’m running out of root space; the main drive is only 20 GB.

I’m trying to make some room on the root partition, but I can’t figure out what is using so much space. I have already deleted all snapshots.

  • The home and data folders are on different partitions.
  • I have removed old kernel versions via YaST.
  • The system is openSUSE Leap 42.2.
  • The disk uses Btrfs, created automatically with the standard installation’s subvolume settings.
btrfs fi usage /
Overall:
    Device size:                  20.18GiB
    Device allocated:             20.18GiB
    Device unallocated:            1.00MiB
    Device missing:                  0.00B
    Used:                         19.13GiB
    Free (estimated):            836.08MiB      (min: 836.08MiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:               68.05MiB      (used: 0.00B)


Data,single: Size:19.15GiB, Used:18.33GiB
   /dev/sda6      19.15GiB


Metadata,single: Size:1.00GiB, Used:818.77MiB
   /dev/sda6       1.00GiB


System,single: Size:32.00MiB, Used:16.00KiB
   /dev/sda6      32.00MiB


Unallocated:
   /dev/sda6       1.00MiB
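As an aside, the output above shows Data chunks sized at 19.15 GiB with only 18.33 GiB used, and essentially no unallocated space left. A balance can sometimes return half-empty chunks to the unallocated pool; a minimal sketch, assuming enough free space remains for balance to operate:

Code:
# reclaim completely empty data chunks first (cheap and usually safe)
btrfs balance start -dusage=0 /
# then compact data chunks that are less than half full
btrfs balance start -dusage=50 /

With only 1 MiB unallocated, the second command may well fail with ENOSPC until some snapshots have been deleted first.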

This is the root directory listing with per-folder space usage (in KiB). However, the sum of these folders is about 9 GB, not 19…

Code:
0       Desktop
0       mnt
0       proc
0       selinux
0       sys
212     dev
2488    run
3540    tmp
5280    bin
10432   sbin
18876   lib64
24444   etc
26700   root
32380   srv
39436   opt
99904   boot
457160  lib
1267952 var
7475760 usr
---home (this folder is 46 GB but is on another partition)
---windows (two different partitions, 800 GB)


Where has all the rest of the space gone? Is there any way to save my system, or do you really need a 40 GB partition to run openSUSE Leap?
Please help!
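One thing worth noting: du -sk * skips hidden entries, so a directory like /.snapshots never appears in a listing like the one above. A quick sketch for hunting the missing space, assuming the standard openSUSE Snapper layout:

Code:
# hidden directories such as /.snapshots are excluded by the * glob
du -sk /.snapshots
# list every Btrfs subvolume; snapshots live under .snapshots/<number>/snapshot
btrfs subvolume list /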

A few posts down from yours

https://forums.opensuse.org/showthread.php/521072-Running-out-of-space-in-root-partition

I don’t use it, but I believe Btrfs uses more space than other filesystems anyway. I’m using around 12 GB for root with ext4, with home, swap, etc. on another drive.

John

Maybe it is Snapper; its snapshots do not show up with normal commands. Read here about how to manage them:

https://en.opensuse.org/Portal:Snapper

I thought it defaulted to off on less than 20 GB, but you are close to that. The recommended minimum space for Btrfs with Snapper enabled is 40 GB.
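For reference, a minimal sketch of clearing out old snapshots with snapper; the range here is hypothetical, so adjust it to your own snapper list output:

Code:
snapper list
# delete everything except 0 (current) and 1 (first root filesystem)
snapper delete 2-317

Note that Btrfs releases the space asynchronously, so df may take a while to reflect the deletions; newer snapper versions have a --sync option to wait for it.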

I found out I have a 66 GB /.snapshots folder on the root partition.
How is that possible if /dev/sda6 is only 20 GB?

df -P /.snapshots/ | tail -1 | cut -d' ' -f 1                
/dev/sda6 

du -sk .snapshots/
66964964        .snapshots/

 

In this folder there are many directories for snapshots I have already deleted:

du -sk * | sort -n
0       346
0       347
0       348
4       grub-snapshot.cfg
5827920 19
5827960 23
5827964 22
8721180 1
8914592 186
8914596 185
9139500 206
13791316        317


snapper list   
Type   | # | Pre # | Date                             | User | Cleanup | Description           | Userdata
-------+---+-------+----------------------------------+------+---------+-----------------------+---------
single | 0 |       |                                  | root |         | current               |          
single | 1 |       | Fri 21 Apr 2017 04:24:19 AM CEST | root |         | first root filesystem |         

Do you think I can delete all the other folders? And why doesn’t YaST remove them when I delete the snapshots there?
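Each of those numbered directories normally contains a subvolume called snapshot plus an info.xml file. If snapper no longer lists a snapshot but its directory still holds data, the subvolume may not have been cleaned up, and it can be checked and removed by hand. A hedged sketch, using number 19 from the listing above; leave 0 and the subvolume you are booted from alone:

Code:
# is the deleted snapshot still present as a subvolume?
btrfs subvolume list / | grep '.snapshots/19'
# if so, drop the subvolume, then the leftover metadata directory
btrfs subvolume delete /.snapshots/19/snapshot
rm -r /.snapshots/19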

Snapshots are not regular files; they reference only the changed blocks, so they look bigger than they really are.

Though it’s not exactly the same, think of zip files.

Remove all snapshots except 0, then turn snapshots off; they will only cause pain on a 20 GB partition.
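To see what the snapshots really cost, as opposed to what du double-counts, recent btrfs-progs can break usage down into shared and exclusive blocks; a sketch, assuming your version ships the subcommand:

Code:
# Total vs. Exclusive vs. Set shared, per snapshot
btrfs filesystem du -s /.snapshots/*/snapshot

Only the Exclusive column comes back when a single snapshot is deleted; shared blocks are freed only once every snapshot referencing them is gone.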

That’s probably a good idea, but snapshot 1 cannot be removed. Any idea how to remove it?

>snapper list       
Type   | # | Pre # | Date                     | User | Cleanup | Description           | Userdata
-------+---+-------+--------------------------+------+---------+-----------------------+---------
single | 0 |       |                          | root |         | current               |          
single | 1 |       | Fri Apr 21 04:24:19 2017 | root |         | first root filesystem |    

>snapper delete 1
Deleting snapshot failed.
    

If I remove the Snapper packages, will the space used by the snapshots still be kept?
How can I free up that space?
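Removing the packages alone frees nothing; the snapshot subvolumes stay on disk until they are deleted. As for snapper refusing to delete snapshot 1: on an openSUSE install with rollback support the running root is itself a snapshot, and if / is still /.snapshots/1/snapshot it cannot be deleted while mounted. A hedged way to check:

Code:
# which subvolume does the filesystem boot into by default?
btrfs subvolume get-default /
# which subvolume is mounted as / right now?
btrfs subvolume show /

If / really is snapshot 1, reclaim space by deleting the other snapshot subvolumes instead and leave that one in place.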

For comparison, here is reference information from a larger Btrfs partition on Leap 42.2 (Btrfs quota is disabled . . . )


 # LANG=C btrfs fi usage /
Overall:
    Device size:                  80.00GiB
    Device allocated:             16.07GiB
    Device unallocated:           63.93GiB
    Device missing:                  0.00B
    Used:                         14.63GiB
    Free (estimated):             64.57GiB      (min: 32.60GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:               40.33MiB      (used: 0.00B)

Data,single: Size:14.01GiB, Used:13.37GiB
   /dev/sda3      14.01GiB

Metadata,DUP: Size:1.00GiB, Used:647.56MiB
   /dev/sda3       2.00GiB

System,DUP: Size:32.00MiB, Used:16.00KiB
   /dev/sda3      64.00MiB

Unallocated:
   /dev/sda3      63.93GiB
 # 
 # LANG=C snapper list
Type   | #   | Pre # | Date                     | User | Cleanup | Description           | Userdata     
-------+-----+-------+--------------------------+------+---------+-----------------------+--------------
single | 0   |       |                          | root |         | current               |              
single | 1   |       | Tue Mar 21 17:44:22 2017 | root |         | first root filesystem |              
pre    | 231 |       | Tue May  9 17:54:29 2017 | root | number  | zypp(packagekitd)     | important=yes
post   | 232 | 231   | Tue May  9 18:02:51 2017 | root | number  |                       | important=yes
pre    | 281 |       | Mon May 29 14:31:49 2017 | root | number  | zypp(packagekitd)     | important=yes
post   | 283 | 281   | Mon May 29 14:36:45 2017 | root | number  |                       | important=yes
pre    | 315 |       | Thu Jun  8 16:24:45 2017 | root | number  | zypp(packagekitd)     | important=yes
post   | 318 | 315   | Thu Jun  8 16:31:59 2017 | root | number  |                       | important=yes
pre    | 319 |       | Thu Jun  8 16:32:23 2017 | root | number  | yast sw_single        |              
pre    | 320 |       | Thu Jun  8 16:33:00 2017 | root | number  | zypp(y2base)          | important=no 
post   | 321 | 320   | Thu Jun  8 16:33:02 2017 | root | number  |                       | important=no 
pre    | 322 |       | Thu Jun  8 16:39:21 2017 | root | number  | zypp(y2base)          | important=no 
post   | 323 | 322   | Thu Jun  8 16:39:23 2017 | root | number  |                       | important=no 
post   | 324 | 319   | Thu Jun  8 16:42:24 2017 | root | number  |                       |              
pre    | 325 |       | Fri Jun  9 18:19:14 2017 | root | number  | zypp(packagekitd)     | important=no 
post   | 326 | 325   | Fri Jun  9 18:19:23 2017 | root | number  |                       | important=no 
pre    | 327 |       | Mon Jun 12 11:24:03 2017 | root | number  | zypp(packagekitd)     | important=no 
post   | 328 | 327   | Mon Jun 12 11:24:11 2017 | root | number  |                       | important=no 
pre    | 329 |       | Mon Jun 12 15:33:10 2017 | root | number  | zypp(packagekitd)     | important=no 
post   | 330 | 329   | Mon Jun 12 15:34:36 2017 | root | number  |                       | important=no 
 # 
 # LANG=C df -h /.snapshots/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        80G   15G   65G  19% /.snapshots
 # btrfs inspect-internal tree-stats /dev/sda3
Calculating size of root tree
        Total size: 64.00KiB
                Inline data: 0.00B
        Total seeks: 3
                Forward seeks: 2
                Backward seeks: 1
                Avg seek len: 698.67KiB
        Total clusters: 1
                Avg cluster size: 0.00B
                Min cluster size: 0.00B
                Max cluster size: 16.00KiB
        Total disk spread: 816.00KiB
        Total read time: 0 s 3 us
        Levels: 2
Calculating size of extent tree
        Total size: 24.05MiB
                Inline data: 0.00B
        Total seeks: 1033
                Forward seeks: 588
                Backward seeks: 445
                Avg seek len: 10.11GiB
        Seek histogram
                      16384 -     4751360:         153 ###
                    4980736 -    31211520:         153 ###
                   31391744 -    70287360:         153 ###
                   70713344 -   248610816:         153 ###
                  248954880 -   383205376:         153 ###
                  384778240 - 73268674560:         153 ###
                73279242240 - 73804677120:         109 ##
        Total clusters: 258
                Avg cluster size: 47.01KiB
                Min cluster size: 32.00KiB
                Max cluster size: 256.00KiB
        Total disk spread: 68.77GiB
        Total read time: 0 s 524 us
        Levels: 3
Calculating size of csum tree
        Total size: 16.28MiB
                Inline data: 0.00B
        Total seeks: 931
                Forward seeks: 534
                Backward seeks: 397
                Avg seek len: 8.36GiB
        Seek histogram
                      16384 -      114688:         145 ###
                     147456 -     8863744:         138 ###
                    9224192 -    64143360:         138 ###
                   65634304 -   161071104:         138 ###
                  161693696 -   273317888:         138 ###
                  277921792 - 72992849920:         138 ###
                72998027264 - 73699868672:          79 #
        Total clusters: 60
                Avg cluster size: 44.80KiB
                Min cluster size: 32.00KiB
                Max cluster size: 176.00KiB
        Total disk spread: 68.70GiB
        Total read time: 0 s 1790 us
        Levels: 3
Calculating size of fs tree
        Total size: 16.00KiB
                Inline data: 0.00B
        Total seeks: 0
                Forward seeks: 0
                Backward seeks: 0
                Avg seek len: 0.00B
        Total clusters: 1
                Avg cluster size: 0.00B
                Min cluster size: 0.00B
                Max cluster size: 16.00KiB
        Total disk spread: 0.00B
        Total read time: 0 s 0 us
        Levels: 1
 # 

IMHO, Btrfs is a modern, resource-hungry filesystem which needs quite a bit of disk space to perform correctly.
IMHO, if the available disk space is less than 100 GB, consider using a more “traditional” filesystem such as ext4.

  • Given the parentage of XFS, I suspect that using XFS for “smaller” Home partitions may also provoke instability . . .

Hi,
I have found 40 GB to be a workable size as long as you tweak the configs and make sure the snapper cleanup and Btrfs maintenance cron jobs actually run (especially on Tumbleweed). It all depends on your end use: if you want to retain more snapshots for rollback etc., then you need to increase the size as required… AFAIK the real minimum is 30 GB…?
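A sketch of the kind of config tweaks meant above, in /etc/snapper/configs/root; the values are illustrative, not recommendations from this thread:

Code:
# keep far fewer snapshots on a small root partition
NUMBER_LIMIT="5"
NUMBER_LIMIT_IMPORTANT="5"
# no hourly timeline snapshots
TIMELINE_CREATE="no"
# drop pre/post pairs that contain no changes
EMPTY_PRE_POST_CLEANUP="yes"

On Leap 42.2 the cleanup itself runs from a daily cron job (/etc/cron.daily/suse.de-snapper, if I remember correctly), so cron must be active for these limits to be enforced.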