How to safely grow boot partition?

Let me just start this off by saying I have never made a forum post for Linux (I have been a user for about 8 months, all on Tumbleweed), so I will likely not include all the needed information at first, and for that I am sorry. Also, I have been troubleshooting this myself for several months and, for pretty much everything else, I have never found a forum post to be necessary, but my computer is totally screwed without this fix and I need help.

I created my boot partition too small (35 GB) and also used up all my other partition space by allocating it to home instead of letting it hang free. I have done a lot of research over the past couple of months on different ways to resize the partition. I read that using resize2fs is outdated and dangerous and that it is better to use lvresize, but I am unsure whether it will work on my filesystem type, and there may be other problems because the guide was not written for openSUSE.

As you can see, these are my partitions. In a perfect world I would trim 100 GB off of home2 (/dev/nvme0n1p3) and grow my boot partition (/dev/nvme0n1p6) using the newfound 100 GB of space. Currently, after the latest Python upgrade, I need to update a lot of packages; my zenity is totally broken, as is almost everything else that needs updating, but I do not have enough space. I also have very, very slow internet (200 kbps max), so installing things takes forever, which is why I am so hesitant to just wipe my computer and reinstall everything, and why it is really important that I can resize my partitions without losing my data.

I feel like I am forgetting to mention something, but I think this describes my issue well enough for whoever is reading to understand where I am coming from. I am happy to provide any command output or anything else you need to troubleshoot, and I am extremely grateful in advance to everyone. Thank you.

Welcome to the forum!

My partitions:

> lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
nvme1n1       259:0    0   1.9T  0 disk  
├─nvme1n1p1   259:1    0     1G  0 part  /boot/efi
├─nvme1n1p2   259:2    0     1G  0 part  /boot
├─nvme1n1p3   259:3    0   100G  0 part  /

And /boot is only 14% full:

> sudo du -h /boot
4.0M    /boot/grub2/locale
1.5M    /boot/grub2/fonts
84K     /boot/grub2/themes/openSUSE
88K     /boot/grub2/themes
3.8M    /boot/grub2/x86_64-efi
9.4M    /boot/grub2
16K     /boot/lost+found
1.9M    /boot/efi/EFI/boot
4.0M    /boot/efi/EFI/opensuse
4.0M    /boot/efi/EFI/tumbleweed-main
52M     /boot/efi/EFI/HP/DEVFW
52M     /boot/efi/EFI/HP
61M     /boot/efi/EFI
12K     /boot/efi/System Volume Information
8.0K    /boot/efi/$RECYCLE.BIN
61M     /boot/efi
186M    /boot

So if your 35 GB boot partition is too small, I think the problem is more with the data on it or with the btrfs filesystem.

Can you post the output of “du -h /boot” for your computer?

It is not the boot partition that is full. According to the screenshot, 35 GB are allocated to / (which includes /boot). The / partition is too small…

Show output of

lsblk -f
fdisk -l

as preformatted text, not as pictures.

There is not even a boot partition!


There is a /boot/efi of 260 MB.

There are also three /home filesystems, two of them ext4 and one on the btrfs… That makes me think the snapshots are not well configured, hence the space problem.

And there is also a swap of only 2 GB.

This storage configuration is a mess. With the YaST installer’s proposal, there would have been no problems.

Chanci needs to do a clean reinstallation, like Marel’s, to get off to a good start:
with a single separate /home partition in XFS, and a swap with a size at least equal to RAM.

If the question is how to use the space:

  1. Reduce the size of /home2 (it should probably be mounted as /home, not /home2, because by default there is a home mounted in the root).
  2. Move the root partition up to the end of /home2, then expand it to the desired size.

This is certainly not a standard set-up. And before we all try to guess what is what and what it is used for, and give suggestions based on that guessing, it is better to wait for the OP to post what @arvidjaar asked for, together with an explanation from the OP of what he thinks all those filesystems are used for.

When I discover (or am notified) that my / (or any other partition) is running low on disk space, I have this very strange habit. The habit is to discover “what” is filling up that space.

If using BTRFS for the / partition, the very first thing I’d do is check the “/.snapshots” sub-dir and how much space is being consumed. If a person is not very familiar with BTRFS and is using snapshots, I’ll bet they’ve got waaaaaay too many snapshots accumulated.

At one time, / on one of my machines was running low on space, so I cleaned up the snapshots and easily (and quickly) recovered over 10 GB of space.

So what next? These are simple examples, without going into any detail, offered here as suggestions.

Run df, which will give you a quick view of filesystem usage:

# cd /
# df -h

Then I would do a quick file-usage check with the command “du”; so, if we’re concerned about snapshot usage:

# du -sh /.snapshots/

You can also ask snapper for a list, which provides useful output for analysis:

# snapper list
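
If the list shows stale snapshots, they can be removed by number or by range. A hedged sketch (the snapshot numbers here are made up; check them against your own snapper list first):

```shell
# Delete snapshots 15 through 40 (hypothetical numbers; run as root)
snapper delete 15-40
```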

I would also check on old log files that are archived, and clean those out (remove them) if not required.

There are also graphical tools that allow you to see where all the space is being used up.

This information is completely useless. If you really want to pursue this, at least use btrfs filesystem du, which properly shows exclusive and shared space; together these show the real space consumption of your current root subvolume and historical snapshots (the Total column shows the same bogus value as du and can be ignored). You will need to do the same for the current root to estimate how much space is taken by snapshots.
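
A sketch of that check, assuming the default openSUSE snapper layout under /.snapshots (run as root):

```shell
# Summarize per-snapshot usage; "Exclusive" is what deleting that snapshot would free
btrfs filesystem du -s /.snapshots/*/snapshot

# And the same for the current root, for comparison
btrfs filesystem du -s /
```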

Assuming the default setup, the quick way to see the same information is

btrfs qgroup show /

To add to my previous post, @Chanci , you might find this useful

https://en.opensuse.org/SDB:Cleanup_system

dontuwantmebaby@localhost:~> lsblk -f
NAME FSTYPE FSVER LABEL  UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
nvme0n1
│                                                                            
├─nvme0n1p1
│    vfat   FAT32 SYSTEM F008-C70E                             196.6M    23% /boot/efi
├─nvme0n1p2
│    ext4   1.0          9e3cec31-43da-49d9-aa4e-2421728e5a52     22G     0% /home3
├─nvme0n1p3
│    ext4   1.0          7b8d1355-c3d6-4d62-ae0e-90541eb90a1a  398.8G    49% /home2
├─nvme0n1p6
│    btrfs               b07d04d9-8e07-4d13-b3e3-396569f0feae    3.7G    87% /var
│                                                                            /usr/local
│                                                                            /root
│                                                                            /srv
│                                                                            /opt
│                                                                            /home
│                                                                            /boot/grub2/x86_64-efi
│                                                                            /boot/grub2/i386-pc
│                                                                            /.snapshots
│                                                                            /
└─nvme0n1p7
     swap   1            026a3693-1e75-476b-8921-8319b03f3d56                [SWAP]

dontuwantmebaby@localhost:~> fdisk -l
Absolute path to 'fdisk' is '/usr/sbin/fdisk', so running it may require superuser privileges (eg. root).
dontuwantmebaby@localhost:~> sudo fdisk -l
[sudo] password for root: 
Disk /dev/nvme0n1: 953.87 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: GIGABYTE AG450E1024-SI                  
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CF82DEB3-69E1-4033-8328-3B700F4FE5CB

Device              Start        End    Sectors   Size Type
/dev/nvme0n1p1       2048     534527     532480   260M EFI System
/dev/nvme0n1p2 1950531584 2000408575   49876992  23.8G Linux filesystem
/dev/nvme0n1p3     534528 1872936959 1872402432 892.8G Linux filesystem
/dev/nvme0n1p6 1872936960 1946337279   73400320    35G Linux filesystem
/dev/nvme0n1p7 1946337280 1950531583    4194304     2G Linux swap

Partition table entries are not in disk order.

Here is the requested information that @arvidjaar asked for.

I should also mention that when I set up this computer I was having a lot of issues with my home being on that smaller partition, so my solution at the time was to change the default home. home2 is my home directory and is mounted separately, even though it says it is on the smaller drive.

I wish it were the snapshots, but I regularly prune every snapshot I can to save space. I have less than 2 GB of snapshots when viewing snapper list, and this is more than I usually have. I also prune old kernels; I am really just out of space.

What I will probably end up doing is moving all my files onto an external HDD and wiping the computer to set it up again, as I obviously messed my computer up big time from the jump. Has anyone ever used the lvresize command? It is already on my computer, but I saw it in an Arch forum where someone was giving a tutorial on how to resize using resize2fs. This is really my Hail Mary, because as I mentioned my internet is so slow that even doing all this will cost me a few days, and I would prefer to wait until I have better internet to completely redo the computer. If I could just get some extra space on that 35 GB partition, as messed up as it is, it would totally save my behind. Thanks for all the replies, by the way; a lot of people are eager to help and I really appreciate it.

How is it relevant? Did you ever ask yourself what “lv” in lvresize stands for?

… or “e2fs” here? But at least it is applicable to your home filesystems.

Which does not tell us how much space your snapshots consume at all. It only means you have 2 GB of non-shared data in all snapshots. But if two snapshots share 20 GB of data, you won’t see those 20 GB in the snapper output.

Anyway. It is trivial to resize a btrfs filesystem (just do an Internet search for “resize btrfs”). What is challenging is resizing the underlying device on which such a filesystem is located. Using volume managers like LVM certainly makes this task easier.
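
The filesystem part really is a single online command. A minimal sketch, assuming the filesystem is mounted at / and you are root:

```shell
# Grow the mounted btrfs filesystem to fill its underlying device (online)
btrfs filesystem resize max /

# Shrinking works the same way, e.g. by 10 GiB
btrfs filesystem resize -10G /
```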

In your case you have a nearly 900 GB, half-empty partition nvme0n1p3 with an ext4 filesystem on top. You could certainly

  1. Shrink /home2
  2. Shrink nvme0n1p3
  3. Create new partition in the now unused space
  4. Move your root filesystem from the current nvme0n1p6 to the new partition. This can be done online using btrfs device replace.
  5. After btrfs device replace finishes, grow your root filesystem to fill the new partition (btrfs filesystem resize).
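
The steps above might be sketched as follows. The shrink targets and the new partition number (p5) are assumptions, not values to copy blindly; steps 1–3 require /home2 to be unmounted, and a verified backup is strongly advised before any shrink:

```shell
# 1. Shrink the ext4 filesystem on /home2 first (ext4 must be unmounted to shrink)
umount /home2
e2fsck -f /dev/nvme0n1p3
resize2fs /dev/nvme0n1p3 790G

# 2.-3. Shrink the partition to match (filesystem must fit inside it!),
#       then create a new partition in the freed space, up to the start of p6
parted /dev/nvme0n1 resizepart 3 850GiB
parted /dev/nvme0n1 mkpart root btrfs 850GiB 893GiB   # becomes e.g. nvme0n1p5

# 4. Move the root filesystem online to the new partition
btrfs replace start /dev/nvme0n1p6 /dev/nvme0n1p5 /
btrfs replace status /

# 5. Once the replace has finished, grow root to fill the new partition
btrfs filesystem resize max /
```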

This leaves you with the unused partition nvme0n1p6. You can use it for something else, or delete it, add its space to the newly created partition (which will be adjacent), and grow root to the new device size again.

Oh, stop, it is even simpler. After nvme0n1p6 you have the swap and the unused partition nvme0n1p2 with /home3. Just delete them, add this space to the nvme0n1p6 partition (which gives you about 25 GB extra), and then resize the root filesystem. This can be done completely online and is safe (well, as safe as possible anyway). Whether it will be enough is up to you.
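
For reference, the reclaimable space can be checked against the sector counts in the fdisk output posted above (a quick back-of-the-envelope calculation):

```shell
# Sector counts taken from the "sudo fdisk -l" output in this thread (512-byte sectors)
p2_sectors=49876992    # /dev/nvme0n1p2 (/home3, 23.8G)
p7_sectors=4194304     # /dev/nvme0n1p7 (swap, 2G)
extra_gib=$(( (p2_sectors + p7_sectors) * 512 / 1024 / 1024 / 1024 ))
echo "about ${extra_gib} GiB reclaimable"   # about 25 GiB
```

And since both partitions sit directly after nvme0n1p6 on disk, that space is contiguous with it.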

That is the safest option anyway.

Always good to make a backup. I use two encrypted portable 1 TB USB drives, one of which is kept at a remote location.

The good thing about your setup is that you have a separate /home; that makes a new installation (reusing the data on this /home partition) easier, as your setup is already partly done.

You could add another NVMe drive; just 256 GB would be enough. Use a PCIe adapter card if your mobo does not have any more free M.2 slots.


Then reinstall, make this new drive your root (“/”) and keep using /home2 as your /home.

Never partition a drive unless you are forced to. When upgrading storage in 2021 I assigned all available space to a single btrfs filesystem:

erlangen:~ # fdl /dev/nvme0n1
Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 990 EVO 2TB                 
Disklabel type: gpt
Disk identifier: 9CC8F3A2-0DC2-4237-8673-D76DCF8AF37F

Device          Size Type
/dev/nvme0n1p1  100M EFI System
/dev/nvme0n1p2  1.8T Linux filesystem
erlangen:~ # 

This approach increases flexibility and minimizes maintenance costs. Hence I install all new Tumbleweed systems the same way and have converted existing ones to a single-partition setup.