Root partition is full: How can I free space, and install new software?

When installing openSUSE alongside Windows (as a dual-boot system), I went with the default size for the root partition. This turned out to be a major mistake, and the root partition is now full.

I have 3.1 GiB of 40.0 GiB left, so the partition is 92% full. Yesterday I had to delete some files from the root partition because openSUSE could not boot anymore. Isn’t 40.0 GiB of disk space for the root partition already way too small in the first place? I’m using quite a lot of third-party software, plus R/Python/LaTeX packages for scientific computing, etc., and I have not even installed all the software I wanted to.

I’m now wondering what I should do to fix this problem, and have some questions.

  1. Is there a way to change the default directory for new installations in YaST, so that programs don’t end up being installed to the root partition in the first place?
  2. I’ve moved some files from /opt to my home directory, and then created symbolic links to these files, and so far these programs are working fine. Can I just move /usr/share/ to the home directory as well, and then create symbolic links to the files? Would YaST then install programs that would usually end up in /usr/share automatically into e.g. /home/share?
  3. Is there a way to boost the size of the root partition, without having to backup the data, format the partitions, and reinstall everything from scratch?
  4. If I end up having to format and reinstall, what would be a reasonable size for the root partition, given that I’m doing some scientific computing, need a variety of software tools that are not part of the default installation, and am mainly using YaST for software management (with its defaults)?

I have looked at several websites and forums so far (some of the information there was probably dated), but could not really find a solution to my problem. For example, I found the suggestion to reduce the number of snapshots in Snapper, but these snapshots are useful when having to restore a system that has been screwed up, so I’m not sure this is a good idea in the first place.

Any help or further ideas about what I could do would be appreciated.

Hi and welcome to the Forum :slight_smile:
Yes, look at reviewing the snapshot configuration file /etc/snapper/configs/root.

Then you need to make sure the snapper and btrfs maintenance tools are installed and running; to check, manually run:


/etc/cron.daily/suse.de-snapper
/etc/cron.weekly/btrfs-balance

If the system isn’t on all the time, then I suggest you either run these manually, or configure the cron job times (via the YaST /etc/sysconfig editor) so they run when the computer is on but not in heavy use.

40 GiB should be plenty. It sounds like the snapper cleanup routines are not running.

https://en.opensuse.org/Portal:Snapper
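If the cron scripts don’t free much, it can also help to look at what Snapper is actually keeping. A minimal check, assuming a default openSUSE setup with the snapper tools installed (the snapper commands need root, so they are shown as comments here):

```shell
# Show existing snapshots with their numbers and cleanup algorithm:
#   snapper list
# Run the number-based cleanup by hand:
#   snapper cleanup number
# Then see how full the root filesystem is now:
df -hP / | awk 'NR==2 {print $5}'   # prints the Use% column
```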

Hi Malcolm,

Thanks for your swift reply. I have run these commands, but they only cleaned up 0.1 GiB.


[root@linux-dmme patrick]# /etc/cron.daily/suse.de-snapper
[root@linux-dmme patrick]# /etc/cron.weekly/btrfs-balance
Before balance of /
Data, single: total=34.24GiB, used=32.33GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=2.12GiB, used=1.76GiB
GlobalReserve, single: total=110.39MiB, used=0.00B
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda7        43G   39G  3.7G  92% /
Done, had to relocate 0 out of 57 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=1
Done, had to relocate 0 out of 57 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=5
Done, had to relocate 0 out of 57 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=10
Done, had to relocate 0 out of 57 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=20
Done, had to relocate 1 out of 57 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=30
Done, had to relocate 0 out of 56 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=40
Done, had to relocate 0 out of 56 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=50
Done, had to relocate 0 out of 56 chunks
Done, had to relocate 0 out of 56 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=1
  SYSTEM (flags 0x2): balancing, usage=1
Done, had to relocate 1 out of 56 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=5
  SYSTEM (flags 0x2): balancing, usage=5
Done, had to relocate 1 out of 56 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=10
  SYSTEM (flags 0x2): balancing, usage=10
Done, had to relocate 1 out of 56 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=20
  SYSTEM (flags 0x2): balancing, usage=20
Done, had to relocate 1 out of 56 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=30
  SYSTEM (flags 0x2): balancing, usage=30
Done, had to relocate 1 out of 56 chunks
After balance of /
Data, single: total=33.24GiB, used=32.34GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=2.12GiB, used=1.76GiB
GlobalReserve, single: total=110.42MiB, used=0.00B
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda7        43G   39G  3.6G  92% /

I’m not completely sure what I need to change in /etc/snapper/configs/root:

[patrick@linux-dmme ~]$ cat /etc/snapper/configs/root

# subvolume to snapshot
SUBVOLUME="/"

# filesystem type
FSTYPE="btrfs"


# btrfs qgroup for space aware cleanup algorithms
QGROUP="1/0"


# fraction of the filesystems space the snapshots may use
SPACE_LIMIT="0.5"


# users and groups allowed to work with config
ALLOW_USERS=""
ALLOW_GROUPS=""

# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# directory
SYNC_ACL="no"


# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="yes"


# run daily number cleanup
NUMBER_CLEANUP="yes"

# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="2-10"
NUMBER_LIMIT_IMPORTANT="4-10"
                                                                                                                                             
                                                                                                                                             
# create hourly snapshots
TIMELINE_CREATE="no"

# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"

# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"


# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"

# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"

Cheers, Patrick.

Hi
I would modify this one;

# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="2-10"
NUMBER_LIMIT_IMPORTANT="4-10"

I would set to;


NUMBER_LIMIT="2-3"
NUMBER_LIMIT_IMPORTANT="3-4"

Then re-run the cron jobs and see if that makes a difference; after that you can tune those two settings up or down… your call, but that’s what I run and I have had no problems.
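To see the effect immediately rather than waiting for the cron job, old snapshots can also be trimmed by hand. A sketch, assuming root access and that Snapper manages the root config; the snapshot numbers 5-12 are purely illustrative:

```shell
# List snapshots; the first column is the snapshot number:
#   snapper list
# Delete an illustrative range of old snapshots by number:
#   snapper delete 5-12
# Verify the freed space afterwards:
df -h /
```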

I have changed the settings in /etc/snapper/configs/root according to your suggestions, and this freed up some additional disk space; I’m now at 7.0 GiB left (so 84% used). I only have 8 snapshots in Snapper, and 3 have been deleted. This solves the problem for now, but I’m still a bit unsure how long it will take until the 7.0 GiB fills up again. Could I safely reduce the number of snapshots even further?

I also looked at the disk usage on the root partition, and du strangely listed ./.snapshots as 166G. How is this possible if the overall size of the partition is only 40 GiB?

14 GiB are used by /usr, where YaST installs the software. Is there a way to change this directory in YaST to /home?

du -h --max-depth=1 
25M     ./etc
166G    ./.snapshots
153M    ./boot
16K     ./opt
1.3M    ./srv
434M    ./tmp
14G     ./usr
1.7G    ./var
...

Hi
It’s metadata, so not valid… use this instead (this is my 42.2 system):


btrfs fi usage /

Overall:
    Device size:          40.00GiB
    Device allocated:          20.78GiB
    Device unallocated:          19.22GiB
    Device missing:             0.00B
    Used:              15.59GiB
    Free (estimated):          24.07GiB    (min: 24.07GiB)
    Data ratio:                  1.00
    Metadata ratio:              1.00
    Global reserve:          44.39MiB    (used: 0.00B)

Wonder if it’s quota related, have a read of https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs-quota
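One way to read the 166G figure: du walks into every snapshot under /.snapshots and counts the same shared, copy-on-write extents once per snapshot, so the total is inflated far beyond what the disk holds. A back-of-the-envelope sketch with made-up numbers (8 snapshots, ~20 GiB apparently referenced by each, are illustrative only):

```shell
# Illustrative only: 8 snapshots each referencing ~20 GiB of mostly shared
# data add up, from du's point of view, to far more than the disk holds:
snapshots=8
gib_per_snapshot=20
echo "$(( snapshots * gib_per_snapshot )) GiB"   # du-style total: "160 GiB"
```

The extents exist only once on disk, which is why btrfs-aware tools such as btrfs fi usage are the ones to trust here.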

Btrfs actually lists only 6.52 GiB as free.

btrfs fi usage /
Overall:
    Device size:                  40.00GiB
    Device allocated:             36.04GiB
    Device unallocated:            3.96GiB
    Device missing:                  0.00B
    Used:                         32.41GiB
    Free (estimated):              6.52GiB      (min: 4.54GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              101.05MiB      (used: 0.00B)

Data,single: Size:32.23GiB, Used:29.67GiB
   /dev/sda7      32.23GiB

Metadata,DUP: Size:1.88GiB, Used:1.37GiB
   /dev/sda7       3.75GiB

System,DUP: Size:32.00MiB, Used:16.00KiB
   /dev/sda7      64.00MiB

Unallocated:
   /dev/sda7       3.96GiB

When I then try to use btrfs du, it only lists some of the files in /.snapshots before giving me the following error message. I get the same error when using the -s option.

[root@linux-dmme /]# btrfs filesystem du --human-readable  //.snapshot
(...)
     0.00B       0.00B           -  //.snapshots/1/snapshot/etc/resolv.conf
  18.80MiB   328.00KiB           -  //.snapshots/1/snapshot/etc
ERROR: Failed to lookup root id - Inappropriate ioctl for device
ERROR: cannot check space of '//.snapshots': Operation not permitted

Hi
AFAIK you need to use an actual directory path, e.g. /opt, /home, etc., but it won’t work for .snapshots (maybe it would if the system were booted from live/rescue media with it unmounted).

More reading :wink: https://btrfs.wiki.kernel.org/index.php/FAQ and http://marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-Full-Problems.html

From memory it could be tied into the Metadata,DUP value…

On 01/11/2017 at 17:06, patrick87 wrote:
>
> Btrfs actually lists only 6.52GiB as free.
>

What’s the problem? That’s GiB of free space, not MB.

Anyway, you can free space on a btrfs file system in several ways: remove snapshots, or balance.

None of these methods should be used without extreme care.

jdd

Are you using any large databases? In most cases SQL databases live in root unless you put them elsewhere. Also check for excessively large logs.
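A quick way to check those suspects, assuming the standard locations (databases commonly live under /var/lib, logs under /var/log):

```shell
# Summarise the usual space hogs on the root filesystem:
du -sh /var/log /var/lib 2>/dev/null
# systemd journal size, if journald is in use:
journalctl --disk-usage 2>/dev/null || true
```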

Growing a btrfs filesystem is trivial (there is nothing easier): you can expand the partition, or just add another partition to the ‘pool’ seamlessly. There is something unusual about the space you’re using. If you have files with heavy writing (e.g. a database), those directories need to be set to ‘no cow’. If you genuinely have lots of apps using exceptional space, just expand the pool.
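For reference, a sketch of the two ways to grow the pool. Both commands need root, the device name /dev/sdb1 is an example only, and for the resize variant the underlying partition must be enlarged first (e.g. with parted or the YaST partitioner):

```shell
# 1) After enlarging the underlying partition, grow the filesystem to fill it:
#   btrfs filesystem resize max /
# 2) Or add another partition to the pool and rebalance:
#   btrfs device add /dev/sdb1 /
#   btrfs balance start /
# Check the result (falls back to df where btrfs tools are unavailable):
btrfs filesystem usage / 2>/dev/null || df -h /
```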

On 02/11/2017 at 08:56, ndc33 wrote:
>
> growing a btrfs FS is trivial (there is nothing easier). you can expand
> the partition or just add (seamlessly) another partition to the ‘pool’,
> There is something unusual with the space your using. If you have files
> with heavy writing e.g. database these directories need to be
> implemented as ‘no cow’. If you genuinely have lots of apps using
> exceptional space just expand the pool.
>
>
The problem is shrinking the xfs filesystem, as usually the whole disk is already filled with /, swap and /home.

jdd

Ahh yes, xfs. I tried it on my first install and wiped it clean soon after; if I remember rightly it is very limited for resizing. Still, it’s pretty trivial to copy the files out of home, replace it with ext4, resize the btrfs partition, copy the files back, and update fstab?

Yep, xfs does not shrink.