Root partition full on encrypted HDD, no easy way to resize/delete unwanted data?

Hello all,

My root partition is full and now the OS fails to boot. Looking at other threads, people seemed to have this problem due to media programs (e.g. Plex) plus all the snapshots that have accumulated.

I still have 42.2 on a USB stick, but when I boot into it and select the Rescue System option it only accepts root as user (no password) and I am unable to ‘find’ the hard disk. I cannot access the hard disk by running e.g. a live version of Ubuntu MATE either, as my disk is encrypted (LVM, done directly when installing Leap). I do not have GParted on a USB stick :frowning:

Any command line magic that would help fix this?

Many thanks in advance

Run fdisk -l to see your drives and partitions. You must mount the partitions, but that can be tricky when you have set up encryption.

But first you must see what is where.

I get the following:

/dev/loop0: 50.3 MiB
/dev/loop1: 9.4 MiB
/dev/loop2: 41.7 MiB
/dev/loop3: 34.4 MiB
/dev/loop4: 4.1 MiB
/dev/sda: 223.6 GiB (my hard disk)

for the HDD more specifically:
/dev/sda1: Size: 399M, Type: BIOS boot
/dev/sda2: Size: 223.2G, Type: EFI System

and then the USB stick:
/dev/sdb: 15 GiB

I’m not using “btrfs”, so I can’t help you with the snapshot cleanup.

To access the system, you need something like:

cryptsetup luksOpen /dev/sda2 cr_lvm
vgchange -a y

I’m guessing that “sda2” is the encrypted LVM.

After that, look in “/dev/mapper” to find the device nodes for the various volumes.

Then you will need some sort of “btrfs” tools on your rescue system to do the cleanup.
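Putting those steps together, here is a minimal sketch of the unlock-and-mount sequence, wrapped in a function so the device path is checked first. The names are the ones assumed in this thread (sda2 for the encrypted partition, cr_lvm for the mapping, system-root for the root volume); adjust them to your layout.

```shell
# Sketch: unlock the LUKS container, activate LVM, mount the root volume.
# Device and volume names are assumptions from this thread.
unlock_and_mount() {
    dev="$1"                               # e.g. /dev/sda2
    if [ ! -e "$dev" ]; then
        echo "device $dev not found"
        return 1
    fi
    cryptsetup luksOpen "$dev" cr_lvm      # prompts for the LUKS passphrase
    vgchange -a y                          # activate the LVM volume group
    mount /dev/mapper/system-root /mnt     # mount the root volume
}

# e.g.: unlock_and_mount /dev/sda2
```

The guard also makes the function refuse to run on a machine where the device node does not exist.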

Many thanks, this worked, I’m in :slight_smile:

Now for /dev/mapper, I get those five:

control cr_lvm system-home system-root system-swap

If there are any btrfs tools that would be useful for a cleanup of system-root, that would be fantastic.

The system-root is the main one that you need.

You can use:

mount /dev/mapper/system-root  /mnt

But then you may need to mount the various subvolumes (listed in “/mnt/etc/fstab”)
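To see which subvolumes that fstab expects, a small helper can filter the btrfs entries (a sketch; the function name is made up, and the field positions follow the standard fstab layout):

```shell
# Print the mount points of btrfs entries in an fstab file, skipping
# comment lines and the root entry itself (already mounted at /mnt).
list_btrfs_subvols() {
    awk '$1 !~ /^#/ && $3 == "btrfs" && $2 != "/" { print $2 }' "$1"
}

# e.g.: list_btrfs_subvols /mnt/etc/fstab
```

Each mount point it prints would then be mounted under /mnt, using the subvol= option from the matching fstab line.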

I have just booted the 42.2 install media to rescue mode. It does have some “btrfs” tools in “/sbin”. This includes the “btrfs” command. But I don’t know how to use them. Perhaps someone else can jump in.

Showing output of “df -h” after mounting root would be a good start.

Here is the output of “df -h” after mounting:

Filesystem               Size  Used Avail Use% Mounted on
/dev/loop0                51M   51M     0 100% /parts/mp_0000
/dev/loop1               9.5M  9.5M     0 100% /parts/mp_0001
devtmpfs                 7.8G     0  7.8G   0% /dev
/dev/loop2                42M   42M     0 100% /mounts/mp_0000
/dev/loop3                35M   35M     0 100% /mounts/mp_0001
/dev/loop4               4.2M  4.2M     0 100% /mounts/mp_0002
tmpfs                    7.9G     0  7.9G   0% /dev/shm
tmpfs                    7.9G  8.9M  7.9G   1% /run
tmpfs                    7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs                    7.9G     0  7.9G   0% /tmp
tmpfs                    1.6G     0  1.6G   0% /run/user/0
/dev/mapper/system-root   40G   39G  1.3G  97% /mnt

Well, it is far from full, at least not so full as to cause a boot failure. So maybe you could explain what happens when you boot and why you think it is related to a full partition.

[to add to ideas: an effective strategy from a bootable system]
If one cannot do a balance directly, then the add method:
btrfs device add <can even use a tiny USB partition>, balance, delete snapshots, balance, remove the USB from the pool -> system fixed. (You’ll find the complete recipe online.)

I have used this method where I could boot, and it’s simple; if it can be done from an external live USB, then OK. It’s very clean (a better strategy than hacking at the FS). You will have to do your own research/experiments! (From a live USB I assume one would boot into the system after (1) add, (2) balance.)

Forgot to say: btrfs-progs (sometimes called btrfs-tools, I think) can be installed on the live USB; it may not be in the Ubuntu repos (or check the name). PS: I think Tumbleweed, or certainly Argon/Krypton, have live versions.

Before using “btrfs device add” you should try the straight balance described on the btrfs wiki; I think they suggest a balance with zero. Again, I have no idea if any of this can be done from a live USB, and you have the added complexity of encryption. Good luck.

About a week ago I got a warning message saying that less than 1.5G are now free on the root partition, but I did not think much of it.
Then, a few days ago, I first had a longer than usual boot process before I got to the DE. After another reboot the browser also failed to start, and after the next reboot it was just a long loading screen and finally a message saying that there is not enough space and the boot process is being aborted.

Now I have just a permanent loading screen (ie the three dots underneath the Leap logo, with the dots lighting up green in sequence). I just tried this out for 30 minutes and the permanent loading screen is all that appears.

If it helps to diagnose the issue better, I have two USB devices connected, a Logitech wheel and a printer, both initialise properly during the boot process (ie the wheel spins/calibrates as always on start-up and the printer also does the usual initial move of the print head).

Just plugged in a new USB stick (unused, so FAT32), recognised as /dev/sdc1


btrfs device add /dev/sdc1 /mnt (?)


btrfs balance start ?whatdoiputhere?

I am unfamiliar with the balance command and the btrfs wiki does not cover my use case :frowning:

What would the exact commands be?

I do not know how to do this from a rescue disk; I have done it twice from the running system and cannot recall the exact steps. btrfs should have booted (even read-only), but your setup is complex. I dug up a few articles on Google that have the steps + other advice.

I thought I found the solution in the kossboss article you found, for a case where the filesystem is full of metadata but not 100% full (as the df -h output shows: 97%).

So I tried:

btrfs balance start -v -dusage=0 /dev/mapper/system-root

and I get:

ERROR: not a btrfs filesystem: /dev/mapper/system-root

Why? I took the default setting, btrfs for root and xfs for the home partition…


btrfs fi show /dev/mapper/system-root 

shows a usage of 36.64 GiB of 40.00 GiB.

I get the same result for

btrfs fi show /mnt

so I tried

btrfs balance start -dlimit=3 /mnt

and it recognises it as btrfs but I am getting the error:

ERROR: error during balancing ‘/mnt’: No space left on device.

What I would need now is some command for balancing that would work in my use case…

Use snapper commands to remove some snapshots

I had already seen that in another thread but for

snapper list


snapper -c root list

I get bash: snapper: command not found.

So I was thinking of maybe being able to delete some snapshots manually somewhere with “rm”, but there is nothing in /boot, and in /lib/modules there is only 1 kernel listed.

Are the old snapshots hidden files? Am I looking in the wrong place? I’m still doing everything from the USB stick in rescue system mode; wouldn’t snapper be available there?

No, you can not use normal delete techniques. You will destroy the file system. DON’T DO THAT :open_mouth:

Snapper is there somewhere, but probably not in the paths defined on a rescue disk, so a full path will have to be used.

Someone that knows where will have to jump in. I don’t use it myself.
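One possible way around the missing snapper binary, sketched here as a guess: chroot into the mounted root and run the installed system’s own snapper. The helper below is hypothetical; it assumes system-root is already mounted at the given path and bind-mounts the pseudo-filesystems first.

```shell
# Run the installed system's snapper via chroot (hypothetical helper;
# assumes the root volume is already mounted at "$root").
snapper_in_chroot() {
    root="$1"; shift
    if [ ! -x "$root/usr/bin/snapper" ]; then
        echo "no snapper binary under $root"
        return 1
    fi
    for d in dev proc sys; do mount --bind "/$d" "$root/$d"; done
    chroot "$root" /usr/bin/snapper "$@"    # e.g. "list" or "delete 42-57"
    status=$?
    for d in dev proc sys; do umount "$root/$d"; done
    return $status
}

# e.g.: snapper_in_chroot /mnt list
```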

Use this to get a better idea of space usage:
sudo btrfs filesystem usage -h /
Look at metadata and unallocated space.
Try a zero balance with:

sudo btrfs balance start -dusage=0 /

sudo btrfs balance start -musage=0 /
If that works, keep increasing the value:

sudo btrfs balance start -dusage=5 /

sudo btrfs balance start -musage=5 /

If none of that works, (I think) it MAY be possible to delete snapshots directly with the btrfs command. But you’re playing Russian roulette: you may damage things, and in any case it may not even free any space (deleting often doesn’t free space on btrfs).

(I think) the best way would be the strategy I gave you (and shown in the article) -> add space, balance.
(Check and adapt to your situation.)
sudo btrfs device add /dev/sdXY /
sudo btrfs balance start -dusage=10 /
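The full add / balance / clean up / remove cycle suggested earlier in the thread could be sketched like this (device names are placeholders, and on the rescue system the filesystem is mounted at /mnt rather than /):

```shell
# Temporarily add a spare device so balance has room to work, then take it
# back out. Sketch only; adapt device and mount point to your system.
grow_balance_shrink() {
    extra="$1"; mnt="$2"
    if [ ! -e "$extra" ] || [ ! -d "$mnt" ]; then
        echo "usage: grow_balance_shrink /dev/sdXY /mnt"
        return 1
    fi
    btrfs device add "$extra" "$mnt"        # enlarge the pool with the USB stick
    btrfs balance start -dusage=10 "$mnt"   # compact nearly-empty data chunks
    # ...delete snapshots / free space here, then balance again...
    btrfs balance start -dusage=10 "$mnt"
    btrfs device remove "$extra" "$mnt"     # migrate data back, shrink the pool
}
```

Do not unplug the stick before “btrfs device remove” has finished, or any data that was balanced onto it is lost.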