Storage Issue Questions

Hi, a couple of questions about storage which I hope I might be able to get some help/guidance on.
Firstly, I've been debating for a while about deleting an ext4 partition (you'll like this) which contains a now-defunct MX Linux install, the idea being to free up the 340 GB allocated to it and make it available to, or expand, a 660 GB LVM2 PV which contains my daily Tumbleweed OS. There's also a small 268 MB FAT EFI system partition which I assume contains GRUB etc., as I have my BIOS set to boot from this SSD, and I have a separate Windows SSD which I can then (rarely) choose from GRUB.

Anyway, sorry if that is too much background. Can you advise what command/output I might need to run to show this setup for advice? Secondly, is there a safe and easy way to do this without risking borking my Tumbleweed system?

The reason for asking now is that over the last couple of days the GNOME Disk Usage Analyser has popped up a couple of times suggesting my root is running out of space, although in Files it seems to have around 117 GB available, so I'm a bit confused about that, and maybe a suitable command from above might shed some light on it? On a cursory inspection there seem to be lots of old kernel files in there, so I was wondering: if this is an issue, is there a safe and easy way to purge some of them?
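
In case it helps, I did find a few commands that are supposed to show this kind of layout without changing anything; if these are the right sort of thing, I'm happy to post their output:

lsblk -f                        # disks, partitions, filesystems and mount points as a tree
sudo fdisk -l                   # partition tables of all disks
sudo pvs; sudo vgs; sudo lvs    # LVM physical volumes, volume groups and logical volumes
df -h /                         # free space on root as df/statfs sees it
sudo btrfs filesystem usage /   # btrfs's own view of allocated versus free space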

Thanks for reading, and sorry about the long-winded questions. Any help or suggestions gratefully received.

The disk usage warning just triggered again, and here are a couple of screenshots.


Most likely anything older than 6.15.2 is a remnant of "foreign" modules (NVIDIA?) built for kernels that are no longer installed on the system.
Those have to be purged manually from time to time, since the purge-kernels service that removes older kernels is unable to get rid of modules built locally on the system.
Please check whether you really need any of those; if not, you can safely delete those directories (but please make sure to keep 6.15.4 and 6.15.3, which you still need for the kernels that are still installed).
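
To compare what is actually installed with what is lying around in /usr/lib/modules, something like this (all read-only) should do:

rpm -qa 'kernel*' | sort   # kernel packages actually installed
ls /usr/lib/modules/       # module directories present, including any leftovers
uname -r                   # the kernel currently running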


Thanks for your reply. Would this be a good way to try that?
zypper help purge-kernels
purge-kernels [OPTIONS]

Autoremoves installed kernels according to list of kernels to keep from
/etc/zypp/zypp.conf:multiversion.kernels which can be given as <version>,
latest(-N), running, oldest(+N).

Command options:

--details               Show the detailed installation summary. Default:
                        false

-D, --dry-run           Don't change anything, just report what would be
                        done. A meaningful file conflict check can only be
                        performed if used together with '--download-only'.
                        Default: false

No, purge-kernels already did its job (you only have the two latest kernels installed) but left behind a few directories where additional modules were installed.
Check what is inside those directories and see if you still need what is there.
If you don’t need those modules anymore you can delete those directories with any file manager of your choice or via:

sudo rm -r /usr/lib/modules/6.14.6-2*

and adjusting the version numbers for all the directories you want to delete.
Please be sure NOT to delete the 6.15.4 and 6.15.3 directories that store the modules for the currently installed kernels.
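
A harmless check first might be to see how big each leftover directory is, and to preview what a wildcard would match before actually removing anything (the echo line only prints the expanded command):

sudo du -sh /usr/lib/modules/*          # size of each module directory
echo rm -r /usr/lib/modules/6.14.6-2*   # preview what the wildcard expands to before running it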

What file system do you have? If you have btrfs, it could be a snapshot problem. As you already wrote, "sudo zypper purge-kernels" works; I use it after every new kernel update.
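
If it does turn out to be snapshots, something like this should show what is there (snapper ships by default with btrfs on openSUSE; the number in the delete example is only a placeholder):

sudo snapper list             # snapshots with their numbers and dates
sudo btrfs subvolume list /   # all subvolumes, including the .snapshots ones
sudo snapper delete 42        # example only: remove snapshot number 42 from the list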

The purge-kernels command said there was nothing to do. GParted suggests I have an LVM file system and also suggests it is all used up, even though Files says I have 117 GB free, so I'm not sure what is going on. I don't remember making the root/OS bit 24 GB or whatever; I seem to remember I went with whatever was recommended by the installer, although I thought I was using btrfs for backups etc. Is there some other command I can run to show useful information, as per my original post?

What is in sda3 that fills the file system completely? Do you know, or is it something unexpected?

That is the Tumbleweed OS, which is the reason for the post. The whole thing is a 1 TB SSD with GRUB on sda1, a redundant MX Linux install on sda2 (ext4), and sda3, which is the partition for openSUSE Tumbleweed and is LVM with, I believe, a btrfs file system.

You were right in that the files listed were related to old NVIDIA drivers, I think. I tried what you suggested in the terminal, but it didn't seem to do anything; perhaps I need to make some changes to the numbers in there? I also tried manually deleting them in Files but couldn't; presumably a permissions thing?

So I still have the same situation, with my btrfs partition saying the filesystem root is running out of space, which only seems to be about 25 GB for some reason.

I know old kernels are saved for booting into if there are problems with updates. Is there some way of checking and managing these, in case this might be causing the issue?
Or is there a safe and easy way to expand the root filesystem with existing free space on the partition, and/or by deleting another ext4 partition that is on the same disk and utilizing that in some way?
It is the second disk shown below, and sda2 and sda3, that I am referring to; I'm hoping not to have to do a fresh reinstall, which I was hoping Tumbleweed would help me avoid.

Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 980 1TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes
Disklabel type: gpt
Disk identifier: 78D8D233-AEE5-4B00-82EA-F1B3F2278943

Device              Start        End    Sectors   Size Type
/dev/nvme0n1p1       2048     206847     204800   100M EFI System
/dev/nvme0n1p2     206848     239615      32768    16M Microsoft reserved
/dev/nvme0n1p3     239616 1952128623 1951889008 930.7G Microsoft basic data
/dev/nvme0n1p4 1952129024 1953519615    1390592   679M Windows recovery environm

Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WDS100T2B0A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C63FB575-BADF-4F57-974D-FCD23C70AD04

Device         Start        End    Sectors   Size Type
/dev/sda1       2048     526335     524288   256M EFI System
/dev/sda2     526336  664545279  664018944 316.6G Linux filesystem
/dev/sda3  664545280 1953525134 1288979855 614.6G Linux LVM
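
For what it's worth, from what I've read (a sketch only, please correct me if this is wrong), if vgs shows free space in the volume group, growing a btrfs root on LVM is a two-step job, roughly as below; the +50G figure is just an example, and I'd back up first. Using the space from the old sda2 looks harder, since it sits before sda3 on the disk, so GParted would have to move the start of the LVM partition.

sudo vgs                                 # check the VFree column for free space in the volume group
sudo lvextend -L +50G /dev/system/root   # grow the logical volume (example size; VG "system", LV "root")
sudo btrfs filesystem resize max /       # grow the btrfs filesystem to fill the enlarged LV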

Yes, you need to adapt the numbers. The easiest way is to run the following commands:

sudo rm -r /usr/lib/modules/6.9.*
sudo rm -r /usr/lib/modules/6.10.*
sudo rm -r /usr/lib/modules/6.11.*
sudo rm -r /usr/lib/modules/6.12.*
sudo rm -r /usr/lib/modules/6.13.*
sudo rm -r /usr/lib/modules/6.14.*

This will give you at least 5 GB of free space.

Also check the size of /var/log/journal
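
For example, something like this shows the journal size and, if needed, trims it (the 200M cap is only an example figure):

journalctl --disk-usage              # how much space the journal files take
sudo journalctl --vacuum-size=200M   # trim archived journals down to roughly 200 MB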

25 GB for / is quite small.

OK, thanks, I'll try that. Yes, I know 25 GB is small; I'm not sure how I ended up with that when I had 660 GB of disk space available. I can't remember if that was offered by the installer or if I stupidly chose it for some reason.

One small correction: adding -f avoids any prompt that could stop the removal partway, and it silently skips patterns that match nothing. These will work:

sudo rm -rf /usr/lib/modules/6.9.*
sudo rm -rf /usr/lib/modules/6.10.*
sudo rm -rf /usr/lib/modules/6.11.*
sudo rm -rf /usr/lib/modules/6.12.*
sudo rm -rf /usr/lib/modules/6.13.*
sudo rm -rf /usr/lib/modules/6.14.*

Thank you, that seems to have worked now and freed up a bit of space. So I should be OK for now, but I guess I might need to see if I can extend or expand that root partition at some point?

Having said that, I had a quick chat with GPT (I know he's a liar and not to be trusted), but interestingly he did suggest that the GNOME Disk Usage Analyser can't cope with btrfs systems.

Why GNOME Disk Usage Shows "Only 25 GiB" Free

GNOME Disk Usage Analyzer (baobab) typically gets its space info from statfs() (same as df) — which, for Btrfs, does not show the true total free space, especially when the filesystem:

  • Uses dynamic allocation (like Btrfs does),
  • Has unallocated space still available in the block device.

Btrfs manages its space in pools:

  • Even if the full device is 583 GB,
  • Only ~447 GB is currently allocated to Btrfs data/metadata pools,
  • The rest (~135 GB) is available but not allocated yet — and baobab doesn’t see that.

So baobab reports based on what’s currently allocated, not what Btrfs could still use.

It suggested using sudo btrfs filesystem usage / instead, which I did, and it seems I'm fine.
It also suggested that openSUSE offers a graphical tool for btrfs called btrfs-assistant, but I couldn't find that in YaST, so maybe it made that up?

So, bottom line: if you are on GNOME, you can probably ignore warnings from the Disk Usage Analyser.
Output from sudo btrfs filesystem usage is below, if that's of any interest.
btrfs filesystem usage /
Overall:
    Device size:                 583.50GiB
    Device allocated:            447.56GiB
    Device unallocated:          135.94GiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                        426.26GiB
    Free (estimated):            155.40GiB      (min: 87.43GiB)
    Free (statfs, df):           155.40GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:441.50GiB, Used:422.04GiB (95.59%)
   /dev/mapper/system-root   441.50GiB

Metadata,DUP: Size:3.00GiB, Used:2.11GiB (70.37%)
   /dev/mapper/system-root     6.00GiB

System,DUP: Size:32.00MiB, Used:64.00KiB (0.20%)
   /dev/mapper/system-root    64.00MiB

Unallocated:
   /dev/mapper/system-root   135.94GiB

Install ncdu, and then from the terminal run sudo ncdu / so you can see the directories and their contents clearly.
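
If it helps, the package should be in the standard openSUSE repos, and the -x flag keeps the scan on the root filesystem so other mounts don't muddy the picture:

sudo zypper install ncdu   # package name in the openSUSE repos
sudo ncdu -x /             # scan / but stay on this one filesystem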

Thanks, I'll give that a go out of interest. FWIW, I found the btrfs assistant in the end by searching, as it comes under a slightly different name. If that's of any interest to anyone else, it can be found here:
btrfs-assistant
Although the RPM apparently lacked some signatures.
