Shrinking /home in order to grow root partition (btrfs, encrypted drive)

I’m on the brink of running out of space on my root partition and will have to steal some space from /home.

I’ve done this in the past on a different distribution, and if I’m not mistaken I should be able to do it without booting a live distro (if not, how do I deal with LUKS?) as long as I unmount my /home partition (growing should be fine even on a mounted partition). However, I’m not sure how to either log in as root alone or unmount the /home partition after booting normally and changing the run level. I tried appending “init=/bin/bash” to the kernel line in GRUB, without success.

df -h (parts removed)

Filesystem                      Size  Used Avail Use% Mounted on
devtmpfs                        3.9G     0  3.9G   0% /dev
/dev/mapper/system-root          30G   28G  1.7G  95% /
/dev/sda2                       408M  100M  320M  24% /boot
/dev/sda1                       156M  4.7M  152M   3% /boot/efi
/dev/mapper/system-home         185G   85G  101G  46% /home

fdisk -l (parts removed)

Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors

Disklabel type: gpt

Device       Start       End   Sectors  Size Type
/dev/sda1     2048    321535    319488  156M EFI System
/dev/sda2   321536   1157119    835584  408M Linux filesystem
/dev/sda3  1157120 468860927 467703808  223G Linux LVM


Disk /dev/mapper/cr_ata-KINGSTON_RBU-SC100S37240GE_50026B7252088406-part3: 223 GiB, 239462252544 bytes, 467699712 sectors

Disk /dev/mapper/system-swap: 8 GiB, 8589934592 bytes, 16777216 sectors

Disk /dev/mapper/system-root: 30 GiB, 32212254720 bytes, 62914560 sectors

Disk /dev/mapper/system-home: 185 GiB, 198654820352 bytes, 387997696 sectors

btrfs filesystem show

Label: none  uuid: 001ceb05-5f5e-4369-8bc8-171eeda1bb2a
        Total devices 1 FS bytes used 26.92GiB
        devid    1 size 30.00GiB used 30.00GiB path /dev/mapper/system-root

Label: none  uuid: 7cbf403a-5ae6-42b6-900e-1c162b47fd02
        Total devices 1 FS bytes used 83.94MiB
        devid    1 size 408.00MiB used 252.00MiB path /dev/sda2

lvs && vgs && pvs

  LV   VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home system -wi-ao---- 185.01g                                                    
  root system -wi-ao----  30.00g                                                    
  swap system -wi-ao----   8.00g                                                    

  VG     #PV #LV #SN Attr   VSize   VFree
  system   1   3   0 wz--n- 223.02g 4.00m

  PV                                                                   VG     Fmt  Attr PSize   PFree
  /dev/mapper/cr_ata-KINGSTON_RBU-SC100S37240GE_50026B7252088406-part3 system lvm2 a--  223.02g 4.00m

Basically I want to add 20GB from home to root.
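
For what it’s worth, the rough procedure I remember from last time goes something like this (untested here; it assumes /home’s filesystem can actually be shrunk, which ext4 can and XFS cannot, and since LVM lives inside the opened LUKS container the LUKS side shouldn’t need touching):

# sketch only; boot with systemd.unit=rescue.target instead of init=/bin/bash
umount /home
# shrink the filesystem and the LV together by 20G (fsadm handles ext4)
lvreduce --resizefs -L -20G system/home
# hand the freed extents to root
lvextend -L +20G system/root
# btrfs grows online, so / can stay mounted
btrfs filesystem resize max /
mount /home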

Hi
Is the btrfs / space actually used, or just allocated? Have the maintenance clean-up routines run for btrfs? You’re not running snapshots?


snapper list
btrfs fi usage /
systemctl list-timers

To be perfectly honest, I just went with the default options when I first installed the distro, and the only thing I’ve hated so far is dealing with btrfs. Ever since first running out of space a long time ago (and quickly finding out about btrfs through Google), my routine has more or less been to delete all snapshots after an update and then run a full balance, which lately has been failing.

It does look like I’m genuinely running out of space, though. I had to remove Chromium and LibreOffice after the last couple of updates.
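
For reference, my post-update routine is more or less this (the snapshot numbers vary, of course):

# after an update: delete everything but the current/first root filesystem
snapper delete 2-42        # example range; the numbers come from snapper list
btrfs balance start --full-balance /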

snapper list

Type   | # | Pre # | Date                     | User | Cleanup | Description           | Userdata
-------+---+-------+--------------------------+------+---------+-----------------------+---------
single | 0 |       |                          | root |         | current               |         
single | 1 |       | Fri Nov  6 01:17:37 2015 | root |         | first root filesystem |         

btrfs fi usage /

Overall:
    Device size:                  30.00GiB
    Device allocated:             30.00GiB
    Device unallocated:            1.00MiB
    Device missing:                  0.00B
    Used:                         27.70GiB
    Free (estimated):              1.68GiB      (min: 1.68GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:               87.27MiB      (used: 0.00B)

Data,single: Size:27.69GiB, Used:26.00GiB
   /dev/mapper/system-root        27.69GiB

Metadata,DUP: Size:1.12GiB, Used:867.59MiB
   /dev/mapper/system-root         2.25GiB

System,DUP: Size:32.00MiB, Used:16.00KiB
   /dev/mapper/system-root        64.00MiB

Unallocated:
   /dev/mapper/system-root         1.00MiB

systemctl list-timers

NEXT                         LEFT                LAST                         PASSED             UNIT                  
Fri 2018-02-09 15:00:00 CET  20min left          Fri 2018-02-09 14:00:22 CET  39min ago          snapper-timeline.timer
Sat 2018-02-10 00:00:00 CET  9h left             Fri 2018-02-09 10:09:41 CET  4h 29min ago       logrotate.timer       
Sat 2018-02-10 13:21:01 CET  22h left            Fri 2018-02-09 13:21:01 CET  1h 18min ago       snapper-cleanup.timer 
Sat 2018-02-10 13:26:34 CET  22h left            Fri 2018-02-09 13:26:34 CET  1h 13min ago       systemd-tmpfiles-clean
Mon 2018-02-12 00:00:00 CET  2 days left         Mon 2018-02-05 00:00:57 CET  4 days ago         btrfs-balance.timer   
Thu 2018-03-01 00:00:00 CET  2 weeks 5 days left Thu 2018-02-01 08:48:44 CET  1 weeks 1 days ago btrfs-scrub.timer     

6 timers listed.
Pass --all to see loaded but inactive timers, too.

Hi
Did you also modify the snapper config /etc/snapper/configs/root?

Logs and/or coredumps taking up space?


coredumpctl list
du -sh /var/lib/systemd/coredump/
du -sh /var/log
(run to clean out logs, but leave last 2 days)
journalctl --vacuum-time=2d

Manually run the balance;


/usr/share/btrfsmaintenance/btrfs-balance.sh

I did modify the snapper config, but it’s been a while so I can’t remember what exactly. The intent was at least to save space.

The first ‘du’ command reported 0. The journal vacuum freed 112MB.

/usr/share/btrfsmaintenance/btrfs-balance.sh

Before balance of /
Data, single: total=27.69GiB, used=25.87GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=1.12GiB, used=867.28MiB
GlobalReserve, single: total=87.14MiB, used=0.00B
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-root   33G   30G  2.0G  94% /

After balance of /
Data, single: total=27.31GiB, used=25.87GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=1.12GiB, used=866.69MiB
GlobalReserve, single: total=86.55MiB, used=0.00B
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-root   33G   30G  2.0G  94% /

Hi
I would have a look around /var and /usr to see where space is possibly being consumed. Perhaps you have lots of RPMs cached… for example;


du -sh /usr
du -sh /var
du -sh /var/cache

See how that goes first; if you can get a few more GB back, I think you will be fine with what you have.

du -sh /usr && du -sh /var && du -sh /var/cache

13G     /usr
1.3G    /var
278M    /var/cache

Hi
OK, so drill down into /usr (FYI, my system uses 6G here) to see what is lurking… Also check /tmp. You’re not running a web server or such? If so, check /srv.

/tmp is 2GB. In /usr, bin is around 600MB, lib64 and share are each 4GB (lib64 is 2.7GB excluding subfolders), and src is 1.5GB.
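
(For the record, I got these numbers with something like the following:)

# -x keeps du on one filesystem, so other subvolumes are not counted
du -xh --max-depth=1 /usr | sort -h
du -xh --max-depth=1 /tmp | sort -h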

Hi
OK, well /tmp is just what it says, temp stuff… it looks like it’s never been cleaned. Or are you logging something there?

I also don’t see a systemd-tmpfiles-clean.service in your timer list. If you run it manually, does /tmp usage go down?


systemctl start systemd-tmpfiles-clean.service
systemctl status systemd-tmpfiles-clean.service

So do you build the proprietary nvidia driver manually (that uses about 300M per build), or some other driver?

You have a lot in /src, old kernels?


ls -la /boot/initrd*
zypper se -si kernel-default

Has the purge kernels service run?


systemctl status purge-kernels.service

I clean my room, not my /tmp folder!

Uh, yeah, I don’t think I’ve ever cleaned it. systemd-tmpfiles-clean was disabled. At any rate, it’s running now.

My laptop has Intel only.

Also, purge-kernels shows a failed condition at the moment, but it should be working, seeing as I only have two kernels.
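
(If I understand the default right, which kernels are kept is controlled by multiversion.kernels in /etc/zypp/zypp.conf, so two is expected:)

grep multiversion /etc/zypp/zypp.conf
rpm -qa 'kernel-default*'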

Hi
So what is consuming space down in /src, or did you mean /srv?

In /usr/src there’s linux-4.15.0-1 and linux-4.15.1-1.
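
(Out of curiosity I’d check which packages own those trees; if the older one belongs to an old kernel-source/kernel-devel it could go together with its package. The zypper line is just a guess at the package name:)

rpm -qf /usr/src/linux-4.15.0-1 /usr/src/linux-4.15.1-1
# hypothetical: remove the older source package if nothing needs it
# zypper rm kernel-source-4.15.0-1.1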

If you’re trying to remove unnecessary RPMs, you can simply clear your rpm cache:

zypper clean --all

TSU

Hi
In post #9 you indicated src had 1.5GB? Is this /usr/src? Or is this /srv? Please confirm which directory has 1.5GB.

How does your disk space look now?


btrfs fi usage /

Did you investigate if you have any unlisted snapshots?

btrfs subvolume list / 

Alright, I’ve cleaned all repos.

It’s /usr/src. The two subfolders I mentioned together pretty much make up 1.5GB.

btrfs fi usage /

Overall:
    Device size:                  30.00GiB
    Device allocated:             29.56GiB
    Device unallocated:          448.00MiB
    Device missing:                  0.00B
    Used:                         27.94GiB
    Free (estimated):              1.64GiB      (min: 1.42GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:               86.72MiB      (used: 0.00B)

Data,single: Size:27.25GiB, Used:26.04GiB
   /dev/mapper/system-root        27.25GiB

Metadata,DUP: Size:1.12GiB, Used:968.69MiB
   /dev/mapper/system-root         2.25GiB

System,DUP: Size:32.00MiB, Used:16.00KiB
   /dev/mapper/system-root        64.00MiB

Unallocated:
   /dev/mapper/system-root       448.00MiB

Here’s the output of ‘btrfs subvolume list /’, I’m not sure what it means.

ID 257 gen 1212287 top level 5 path .snapshots
ID 258 gen 1212504 top level 257 path .snapshots/1/snapshot
ID 259 gen 1212237 top level 5 path opt
ID 260 gen 1212237 top level 5 path srv
ID 261 gen 1212504 top level 5 path tmp
ID 262 gen 1212271 top level 5 path usr/local
ID 263 gen 1211583 top level 5 path var/crash
ID 264 gen 1211583 top level 5 path var/lib/libvirt/images
ID 265 gen 1211583 top level 5 path var/lib/mailman
ID 266 gen 1211583 top level 5 path var/lib/mariadb
ID 267 gen 1211583 top level 5 path var/lib/named
ID 268 gen 1211583 top level 5 path var/lib/pgsql
ID 269 gen 1212504 top level 5 path var/log
ID 270 gen 1211583 top level 5 path var/opt
ID 271 gen 1212503 top level 5 path var/spool
ID 272 gen 1212504 top level 5 path var/tmp
ID 1666 gen 1211583 top level 5 path var/lib/machines
ID 5053 gen 1209295 top level 257 path .snapshots/5/snapshot
ID 10228 gen 1212366 top level 257 path .snapshots/2/snapshot

Worth noting that /tmp is still 2GB. Here’s the output of ‘service systemd-tmpfiles-clean status’:

● systemd-tmpfiles-clean.service - Cleanup of Temporary Directories
   Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-clean.service; static; vendor preset: disabled)
   Active: inactive (dead) since Mon 2018-02-12 02:01:13 CET; 2min 18s ago
     Docs: man:tmpfiles.d(5)
           man:systemd-tmpfiles(8)
  Process: 8660 ExecStart=/usr/bin/systemd-tmpfiles --clean (code=exited, status=0/SUCCESS)
 Main PID: 8660 (code=exited, status=0/SUCCESS)

Feb 12 02:01:13 Hostname systemd[1]: Starting Cleanup of Temporary Directories...
Feb 12 02:01:13 Hostname systemd-tmpfiles[8660]: [/usr/lib/tmpfiles.d/tmp.conf:13] Duplicate line for path "/var/tmp", 
Feb 12 02:01:13 Hostname systemd-tmpfiles[8660]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", 
Feb 12 02:01:13 Hostname systemd-tmpfiles[8660]: [/usr/lib/tmpfiles.d/var.conf:19] Duplicate line for path "/var/cache"
Feb 12 02:01:13 Hostname systemd-tmpfiles[8660]: [/usr/lib/tmpfiles.d/var.conf:21] Duplicate line for path "/var/lib", 
Feb 12 02:01:13 Hostname systemd-tmpfiles[8660]: [/usr/lib/tmpfiles.d/var.conf:23] Duplicate line for path "/var/spool"
Feb 12 02:01:13 Hostname systemd[1]: Started Cleanup of Temporary Directories.


Notice the unlisted snapshots 2 and 5.

The two methods I have heard work are:

I’ve never tried either.
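
Untested, but checking each snapshot’s size and deleting a stray one directly should look roughly like this (it needs a btrfs-progs new enough for ‘fi du’, and the rm line is only for snapper’s leftover metadata directory):

# per-snapshot space usage
btrfs filesystem du -s /.snapshots/*/snapshot
# drop a stray snapshot that snapper no longer tracks
btrfs subvolume delete /.snapshots/5/snapshot
rm -r /.snapshots/5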


Hi
You need to look down in /tmp to see what’s old and can be manually deleted… the same with /usr/src: is there a /usr/src/packages directory? If so, did you rebuild some rpms as root (not a good idea)? These can be built as your user and installed by root from the user build location…



My word, ravas. Snapshot 2 was 0 bytes, so I ignored it, but 5 was a whopping 15GB! A full balance works now as well. I guess I won’t need to grow the root partition at all for the foreseeable future.

I guess that’s it. Thanks for all the help, everyone! I think I learned a thing or two in the process.

A 15GB snapshot, though. What was that about? I haven’t rebooted yet, so maybe disaster awaits, lol. I have a backup anyway.

EDIT: Reboot went fine besides some stability issues with the latest update.