Pondering adopting a Distrobox approach on an old PC with SSD/HD and many partitions

In a separate thread the concept of using Distrobox was raised to me, and naturally I was curious.

Pointed out to me was this link: Distrobox - openSUSE Wiki
and
also this link: https://distrobox.it/

At this stage, this is not a detailed help request per se (not yet); rather, this thread is intended as a chat, to explore the technical approach and considerations with Distrobox, although it may have technical aspects.

In my case?

In my case I am pondering the utility (and pros/cons) of installing LEAP-16.0 on my old PC (that has an SSD and an HD), then installing Distrobox in LEAP-16.0, and then installing different openSUSE versions in specific locations.

The SSD on the old desktop has many 25 GB partitions (and the HD many 25 GB and larger partitions), so my thought is to format each SSD partition (say as EXT4) and then mount each partition (intended for container(s)) under /home/oldcpu, i.e. something like:

$HOME/.local/share/container/LEAPnew <<< for a new LEAP version
$HOME/.local/share/container/SLOWROLL <<< for a new SLOWROLL version
$HOME/.local/share/container/TUMBLEWEED <<< for a new Tumbleweed version

or, better perhaps, instead mount the different partitions (intended for containers) in a place under / (system), like:

/mnt/SSD/container/LEAPnew <<< for a new LEAP version
/mnt/SSD/container/SLOWROLL <<< for a new SLOWROLL version
/mnt/SSD/container/TUMBLEWEED <<< for a new Tumbleweed version
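For what it's worth, mounting dedicated partitions at such locations could be done with /etc/fstab entries along these lines (a sketch only - the device names here are hypothetical placeholders, not taken from the actual disk layout):

```
# /etc/fstab - hypothetical entries; replace /dev/sdaN with the real partitions
/dev/sda5  /mnt/SSD/container/LEAPnew     ext4  defaults  0 2
/dev/sda6  /mnt/SSD/container/SLOWROLL    ext4  defaults  0 2
/dev/sda7  /mnt/SSD/container/TUMBLEWEED  ext4  defaults  0 2
```

(Using the partitions' UUIDs instead of /dev/sdaN device names would be more robust across reboots.)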

Ultimately, I want the / (system) for each container version of LEAP to run on the SSD (for improved speed).

And then install Distrobox and somehow (?) configure it to place the containers in the noted locations.

If this is possible, it might reduce the amount of re-partitioning I have to do in this desktop and even free up more HD space (due to shared /home ).

Also, am I correct that /efi/boot would not need to be any larger for Distrobox, given the same kernel version would be shared by each Linux version?

I might post my rather convoluted partitioning (on my old desktop) later, as I decide what degree of re-partitioning is necessary, if any. I suspect re-partitioning will be needed.

Clearly, I need to read more about this. I have not decided to go the Distrobox route; it's all very new to me, and I am about to go globe trotting soon (travel around the world a bit), so I won't have much time to study this then.

The idea of sharing /home/oldcpu between the LEAP/Slowroll/Tumbleweed versions is interesting, but I fear a shared .local in /home/oldcpu could break things, given different KDE Plasma desktop versions. I also read the kernel would be shared between the LEAP/Slowroll/Tumbleweed versions, and that too gives me pause.

But I may have all of that wrong.

Funnily enough, I have been toying with the same idea on an old PC I have too. As I don't use it that often, and it is a bit of a pain with all the updates for Tumbleweed, I was thinking of switching it over to the Aeon version, which is mentioned along the way in the Distrobox link you shared. So rather than start a new thread, I thought I'd put some questions on here too, as they are similar to yours in a way.

My questions relate to the Aeon installation that I was having a go at installing on a fresh SSD I have just got, to replace the Tumbleweed on an old HDD. Having gone through most of the installation setup, I chose to import my data/setup, but the installer then offered to replace all the partitions etc. on my old HDD, and I couldn't select the new SSD, which I'm guessing is probably because it is unformatted at present. I was also unsure if the minimal install was the GNOME-type setup, as I could only see Docker and KDE options beyond that, plus network etc.

So my questions are:

  1. Do I need to format the SSD before trying to install Aeon? I was hoping/expecting the installer to see it and format it appropriately.
  2. I guess I need to go down the expert install option for this, and then select the SSD along the way through that?
  3. Does that offer, or is it possible, to bring across user data from the HDD install that way? It would be nice but not essential; it might be a pain setting up a few things again.
  4. Could Docker help with this post-install? I'm not too familiar with Docker, TBH.

@oldcpu the intention of using distrobox was only for applications that are not packaged for Leap 16 by openSUSE, or available as a flatpak version. This way one can keep a pristine install of Leap 16 with default repositories for maintenance, then use whatever in a distrobox or flatpak at a user level without breaking the underlying host…


It is called “Aeon”, and that is a completely different topic.

Tried installing Aeon - pretty useless: it wiped out my working Tumbleweed, and I ended up at a localhost login screen where no amount of user names or passwords got recognised, not even tik with no password. This happened with both the MicroOS and KDE options - not sure if it is because I set up the wired network with DHCP? Pretty unimpressed for what's supposed to be a simple system. I never managed to get it to access my unformatted SSD, even though that was recognised by the BIOS.

One could have known that by reading the installation instructions…

If the installation fails, users are expected to create a bug report with the tik log (as this distro is still under development, as noted at the homepage).

@jjis I have multiple systems running Aeon, no issues. It's not designed for multiboot - single install only! TPM 2.0 ver 1.38+ is needed if you don't want to run in fallback mode.

Yes, I knew that, as I hadn't done much with that particular TW. I just expected it "to work" as advertised. Not sure how I would produce a bug report if it doesn't work, although I was able to log in to root at one point, so maybe I could run some commands from there?

Distrobox containerizes the parts of the operating system that are necessary to do things like package management - but not the entire system (containerization != virtualization), so there are things that are unnecessary, like /efi/boot.

It uses docker, so disk usage management is done the way you manage space with docker - by default, the containers are stored in /var/lib/docker/containers, but you can tell the docker daemon to use a different path by modifying the /etc/docker/daemon.json configuration file.
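As a sketch, relocating Docker's storage to an SSD mount could look like this in /etc/docker/daemon.json (the path here is hypothetical; after editing, the daemon needs a restart, e.g. `systemctl restart docker`, and existing images are not migrated automatically):

```json
{
  "data-root": "/mnt/SSD/container"
}
```

If distrobox is using Podman as its backend instead, the rough rootless equivalent is the `graphroot` setting under the `[storage]` section of `~/.config/containers/storage.conf`.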

I’d suggest trying it out and taking a look at it before building a bunch of containers, and see how the disk space is set up. You’ll find that you don’t run the traditional installer at all, so no partitioning is needed (which is because it’s all done in containers). And with containerized workloads (including distrobox images), the host kernel is what’s used (containers don’t run their own kernel).

Your user home directory and other shared configuration is made available inside the distrobox container (I haven’t looked at how, but knowing docker, it’s likely using bind mounts).

Thanks, I guess that rules this machine out as it is pretty old, but I guess it could do fallback mode if I set it in the BIOS. The disks it insisted on installing to had other redundant distros on them, so maybe it didn't like that, although nothing else apart from MicroOS came up in the boot loader before I got the localhost screen - so I don't know. I never got the option to install onto the unformatted virgin SSD - maybe I'd have to format it to EXT4 or something first. Guess maybe I'll give Slowroll a go instead of just going back to Tumbleweed.

I just spun up leap in distrobox as an example. Here’s a few things:

$ uname -a
Linux TheEarth 6.16.3-1-default #1 SMP PREEMPT_DYNAMIC Tue Aug 26 05:31:27 UTC 2025 (b954ff4) x86_64 x86_64 x86_64 GNU/Linux

This is a TW kernel; this is expected, because containers use the host kernel, as I noted in my previous reply.

$ docker inspect leap
[...]
        "HostConfig": {
            "Binds": [
                "/dev:/dev:rslave",
                "/dev/null:/dev/ptmx",
                "/run/user/1000:/run/user/1000:rslave",
                "/tmp:/tmp:rslave",
                "/usr/bin/distrobox-init:/usr/bin/entrypoint:ro",
                "/usr/bin/distrobox-export:/usr/bin/distrobox-export:ro",
                "/:/run/host/:rslave",
                "/sys:/sys:rslave",
                "/etc/hostname:/etc/hostname:ro",
                "/usr/bin/distrobox-host-exec:/usr/bin/distrobox-host-exec:ro",
                "/etc/hosts:/etc/hosts:ro",
                "/etc/resolv.conf:/etc/resolv.conf:ro",
                "/home/<username>:/home/<username>:rslave"
            ],
[...]

These are the parts of the filesystem that are bind mounted into the container. You’ll see, for example, that my home directory is bind mounted into the expected place so it appears to be the same as on the host (and has the same contents). The rslave parameter is used to ensure that mounts in the host that are within the mounted filesystem are available in the container (if this wasn’t specified and you had something mounted to /mount/user/tmp in the host, that wouldn’t be visible in the container). At least that’s my understanding.

$ zypper lr -d
#  | Alias                       | Name                                         | Enabled | GPG Check | Refresh | Keep | Priority | Type   | URI                                                                     | Service
---+-----------------------------+----------------------------------------------+---------+-----------+---------+------+----------+--------+-------------------------------------------------------------------------+--------
 1 | repo-backports-debug-update | Update repository with updates for openSUS-> | No      | ----      | ----    | -    |   99     | N/A    | http://download.opensuse.org/update/leap/15.6/backports_debug/          | 
 2 | repo-backports-update       | Update repository of openSUSE Backports      | Yes     | (r ) Yes  | Yes     | -    |   99     | rpm-md | http://download.opensuse.org/update/leap/15.6/backports/                | 
 3 | repo-debug                  | Debug Repository                             | No      | ----      | ----    | -    |   99     | N/A    | http://download.opensuse.org/debug/distribution/leap/15.6/repo/oss/     | 
 4 | repo-debug-non-oss          | Debug Repository (Non-OSS)                   | No      | ----      | ----    | -    |   99     | N/A    | http://download.opensuse.org/debug/distribution/leap/15.6/repo/non-oss/ | 
 5 | repo-debug-update           | Update Repository (Debug)                    | No      | ----      | ----    | -    |   99     | N/A    | http://download.opensuse.org/debug/update/leap/15.6/oss/                | 
 6 | repo-debug-update-non-oss   | Update Repository (Debug, Non-OSS)           | No      | ----      | ----    | -    |   99     | N/A    | http://download.opensuse.org/debug/update/leap/15.6/non-oss/            | 
 7 | repo-non-oss                | Non-OSS Repository                           | Yes     | (r ) Yes  | Yes     | -    |   99     | rpm-md | http://download.opensuse.org/distribution/leap/15.6/repo/non-oss/       | 
 8 | repo-openh264               | Open H.264 Codec (openSUSE Leap)             | Yes     | (r ) Yes  | Yes     | -    |   99     | rpm-md | http://codecs.opensuse.org/openh264/openSUSE_Leap/                      | 
 9 | repo-oss                    | Main Repository                              | Yes     | (r ) Yes  | Yes     | -    |   99     | rpm-md | http://download.opensuse.org/distribution/leap/15.6/repo/oss/           | 
10 | repo-sle-debug-update       | Update repository with debuginfo for updat-> | No      | ----      | ----    | -    |   99     | N/A    | http://download.opensuse.org/debug/update/leap/15.6/sle/                | 
11 | repo-sle-update             | Update repository with updates from SUSE L-> | Yes     | (r ) Yes  | Yes     | -    |   99     | rpm-md | http://download.opensuse.org/update/leap/15.6/sle/                      | 
12 | repo-source                 | Source Repository                            | No      | ----      | ----    | -    |   99     | N/A    | http://download.opensuse.org/source/distribution/leap/15.6/repo/oss/    | 
13 | repo-update                 | Main Update Repository                       | Yes     | (r ) Yes  | Yes     | -    |   99     | rpm-md | http://download.opensuse.org/update/leap/15.6/oss/                      | 
14 | repo-update-non-oss         | Update Repository (Non-Oss)                  | Yes     | (r ) Yes  | Yes     | -    |   99     | rpm-md | http://download.opensuse.org/update/leap/15.6/non-oss/                  | 

While the host is TW, in the distrobox container, the repos are Leap repos, as expected.

lsmod isn’t installed, but looking at /sys/module, you can see that the installed modules match the host (because /sys is mounted from the host - which makes sense, since the kernel is the host’s kernel).
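For example, a quick way to see the loaded modules without lsmod (run inside the container or on the host - the listing is the same, since /sys comes from the host) is to read /sys/module directly:

```shell
# /sys is bind mounted from the host, so inside the container this
# lists the host kernel's modules
ls /sys/module | head -n 5
# count how many module directories are present
ls /sys/module | wc -l
```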

That sounds interesting, but I don’t know enough to understand it (yet).

If this meant two separate LEAP-16.0 installs, one in a container and one outside the container (i.e. the host?? system (I don’t know the terminology)), then I could see that.

However that implies to me extra effort to maintain two LEAP-16.0 installs, so I think two LEAP-16.0 installs is not what you meant.

Looks to me that my learning curve will be a bit steep.

The important thing when learning containers is generally not to think of them like a virtualization solution. They are typically used for microservices - for example, I have containers that serve a single Apache instance, and a setup that uses two containers - one that’s a web service, and one that’s a database.

Distrobox is a bit different, because it’s the libraries (and environment) from a full distribution and can be used to emulate a nearly full environment on your host - but the principles behind it are still container principles (because that’s the tech it uses): it’s not virtualization in any way, shape, or form. There’s a fair amount of isolation, but it’s not the same as running something in KVM, QEMU, VirtualBox, or VMware (which also don’t provide full isolation, as evidenced by various CPU exploits that allow a VM’s host to be accessed).

Having an understanding of containerization through Docker or Podman will really help you understand how Distrobox works.

https://labs.play-with-docker.com/ is a good tool for learning Docker basics (using a remote instance of Docker) - in particular, the link to https://training.play-with-docker.com/ which gives you some hands-on exercises to do to help you start understanding containerization in general.

@oldcpu It was more pointed towards running a Tumbleweed container and then installing a Tumbleweed application.

Likewise, you could test/experiment in a Leap 16.0 container before deploying onto the host, to figure out what works and what doesn’t, without touching the host Leap 16.0 install and potentially breaking it. Or you could have a chess engine running and using an NVIDIA container (separate from distrobox), or, if you have a newer Intel GPU, a oneAPI container… It’s all about being able to run things as your user and not touching the underlying operating system…
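A throwaway test container along those lines might look like this (a sketch only - the image reference and package name are illustrative assumptions, not something confirmed in this thread; check distrobox's compatibility table for the exact image to use):

```shell
# create a disposable Leap container (image name is an assumption)
distrobox create --name leap-test --image opensuse/leap:latest

# enter it and experiment; packages installed here stay in the container
distrobox enter leap-test -- sudo zypper install gnuchess

# if anything breaks, just delete the container - the host is untouched
distrobox rm leap-test
```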


I don’t understand this.

Let’s hypothetically consider LEAP-16.0 in both the host (core) and also a duplicate of its libraries/binaries inside the container.

Are you saying that flatpaks must run in the container, or in the host (core)? And zypper apps run in the container, or in the host (core)?

I do not find this all that straightforward, … likely due to zero experience.

@oldcpu isolation, so if something doesn’t work or breaks, just delete the container and move on, create a new container etc. No impact on the host and it’s operation.

OK … so flatpak apps and zypper app installs are all done inside the container, and only system apps are installed outside in the core (host)? Do I have that correct? Or are flatpaks installed on the host (core)?

@oldcpu Yes, they all reside on the host in their own space. I always install flatpaks as my user to isolate them from the system, then use Flatseal as required for permissions/access. For distrobox you can customize (assemble) your own image/files for your requirements and the application you’re wanting to run, so you don’t really need zypper.