openSUSE Docker/Podman; MicroOS, Leap, or Tumbleweed?

Richard Brown posted his promised blog on why not to use Leap and to go Tumbleweed-only (well, really, MicroOS)…

For me, the great strength of Tumbleweed as a “general purpose” distro is also its greatest weakness: constantly rebuilding packages. Watching zypper dup download and then individually update 1000+ packages on a weekly basis or more is a bit much for me, even for a single desktop. On a general-purpose sprawling server, that feels like a nightmare just waiting to happen.
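For anyone wanting to gauge how big a given zypper dup will actually be before committing to it, zypper can resolve and report without changing anything; a minimal sketch:

```shell
# Preview what a distribution upgrade would do: resolves dependencies
# and reports package count and download size without installing anything.
zypper dup --dry-run

# Alternatively, fetch everything into the package cache first, then
# apply the update as a separate (and much quicker) step.
zypper dup --download-only
zypper dup
```

The two-step download-then-apply flow also shortens the window in which the system is mid-update, which matters most on the sprawling-server case described above.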

On the other hand, I do fully agree that rolling makes a lot of sense for MicroOS and single-function containers, where a whole pre-made system image snapshot is what gets delivered, both as the product and as the update. I don’t have to watch 1000+ packages try to update individually, running their scripts, and hope nothing goes wrong. I just re-image each application service, once, when I want to update. Even the great anxiety of rolling distros, of what happens if I don’t update for several weeks or months, is gone, since the system is always installed as a clean snapshot image rather than trying to rebuild itself from individual packages, each running scripts, any one of which may assume that an update you missed had already been installed.
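That “re-image once” update flow can be sketched with podman; the image name, service name, and volume here are illustrative placeholders, not anything from the post:

```shell
# Hypothetical single-function service updated by replacing the whole
# image rather than by in-place package updates. Names are illustrative.
podman pull registry.example.com/myapp:latest   # fetch the new snapshot image

podman stop myapp                               # drop the old instance
podman rm myapp

podman run -d --name myapp \
    -v myapp-data:/var/lib/myapp \              # state lives in a volume, outside the image
    registry.example.com/myapp:latest
```

Because the application state lives in the named volume, the container itself is disposable: the update is one clean image swap, with no per-package scripts to go wrong.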

If Richard Brown is proposing that things like MicroOS, with rolling single-snapshot updates (and ideally an immutable rootfs), should entirely replace conventional servers, I could fully embrace that. If he is proposing that rolling should be the only form for all Linux distros for all uses, which he also advocates, I feel he is dead wrong. For example, for development work and generic desktops, having used both now, if I had to choose, I would pick Leap over Tumbleweed.

I’m of two minds about this. So I continue to use Leap, but I have Tumbleweed there in another partition so that I can switch fairly easily.

Each has its advantages and its disadvantages. But then the grass is always greener on the other side of the field.

If you’re deploying an environment into production (desktop or server), then it would be Leap for sure…

Home user systems: take your pick. I enjoy Tumbleweed, with no major issues. The big package counts depend on what changed; e.g. the gcc rebuild was big, but most updates I get are not that big, bandwidth- or package-count-wise. It’s been a while since I saw anything close to 1000 packages in one update, though perhaps it adds up to that over a week at times. Since the first of the month there have been six updates in total, and the kernel updates caused no issues with Nvidia installed the hard way; my qemu machines are all up and running. Probably the one thing about Tumbleweed is the need to get your hands dirty at times, for the likes of Nvidia drivers; older Nvidia hardware requires patched drivers if you want to keep them going.

I probably download more updates into my package caches for building rpms than I do for my system updates…

A strong blog post by Richard Brown, who is a unique technologist.
And, as a unique technologist, he chose this moment to also evangelize MicroOS and describe how he architected his personal network server infrastructure.
My only slight criticism is that Richard writes for an audience with a fairly high level of technical experience and expertise, one that already understands the reasons behind his choices. For many, the technical choices need to be translated into the common everyday tasks and activities that less technical people base their decisions on.

Which is really cool.

Because it’s basically a modernized update of what I have implemented for years and still practice today.
Richard builds on MicroOS.
Because I (and many others like me) preferred a minimal system for single (or restricted) use before MicroOS existed, or want the flexibility to use virtualization instead of containers, I have been using JeOS or a very minimal “server” install of Leap.
I strongly advocate a multi-tenant approach.
I utilize a combination of various virtualization technologies and containers.
Richard advocates containers.
Richard relies heavily on Transactional Server to manage updates and Snapper to roll back when necessary.
For those who can sustain downtime for as long as it takes to identify a problem, diagnose it, decide to undo, and then determine exactly how to use Snapper to undo it, that might be OK. But it would be unacceptable in many situations where the QoS requirement is high for any of a number of reasons.
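For reference, the Snapper undo step itself is short once the decision to roll back has been made; a sketch, assuming a btrfs root with snapshots enabled (the snapshot number is illustrative):

```shell
# List snapshots to identify the last known-good state.
snapper list

# Roll the root filesystem back to snapshot 42 (number is illustrative)
# and reboot into the restored state.
snapper rollback 42
reboot
```

As the post argues, the commands are not the hard part; the downtime is dominated by the diagnose-and-decide steps that come before them.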

I don’t know if it’s a matter of development style, but I generally set up development the way Richard and I describe: in virtual machines or containers. Within those development environments, though, I deploy code managers and otherwise ensure that coding targets specific platforms using specific standardized frameworks that don’t change, regardless of anything else that might be happening on the machine.

IMO the one most important thing Richard omits from his blog, and what I consider the most obvious strong point of Transactional Server on any distro version (it’s available on Tumbleweed, Leap, and MicroOS), is its imperviousness to any compromise that depends on writing something to disk… because, of course, the root partition is mounted read-only.
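The read-only root works because updates are applied into a new snapshot rather than onto the live system; roughly, on a Transactional Server role, the cycle looks like this:

```shell
# Apply a full distribution upgrade into a NEW snapshot; the running
# (read-only) root filesystem is never modified.
transactional-update dup

# The new snapshot only becomes the active root on the next boot.
reboot

# If the result is bad, discard it and return to the previous snapshot.
transactional-update rollback
```

So an attacker (or a broken package script) writing to the live root simply fails, and a bad update never replaces a known-good root until you choose to boot into it.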


I knew this would be an interesting topic. Richard Brown has convinced me that rolling makes perfect sense for typical containers, which are often built on demand anyway and are updated as whole-image installs. Also, OBS makes it easy to auto-rebuild the individual packages destined for containers on Tumbleweed. The OCI busybox Tumbleweed image as a base image, while maybe not as small as Alpine can be, is certainly small enough given the other advantages, as the real strength is in leveraging OBS as the pipeline from a pile of code to production delivery in containers. I also like the idea of MicroOS, rolling with image updates, as the host OS for the containers.

I actually use LXC images for some development purposes, as the environment is more mutable than what a container offers and more performant than a VM. But I do personally prefer working on a native OS directly.

One of the things I’d like to emphasize is that Richard’s blog post is no doubt an attempt to start a discussion about this idea, hence his somewhat provocative writing style. And given this thread and some others I’ve seen in various places, that attempt is pretty successful.
I had a small conversation with Richard on Discord about this Hack Week project, and I like the concept and the philosophy, especially for server setups.

Small comment on the use of Busybox which I wasn’t aware of…
As I described in my presentation back in 2016 (2016 - The Year of IoT and Taking Down the Internet), a major reason for those incidents is that, unlike conventional Linux, which typically delivers user tools as individual app files, Busybox compiled a vast number of those tools into a single binary, with symlinks to retain legacy/conventional access.

The downside of that was that Busybox couldn’t be upgraded, and therefore even today a great many devices on the Internet are sitting there as targets for intrusion.

That may be less of an issue today and especially on openSUSE since

  • A large number of those same tiny User apps are now integrated into the Linux kernel
  • Each time a container is upgraded, everything in it gets upgraded anyway… Individual apps can be replaced with newer versions because there is a replacement mechanism, which doesn’t exist in many embedded “IoT” devices.


You probably aren’t using LaTeX.

@dyfet and @tsu2, can you provide your use cases?

What benefits do these technologies offer to the average user?

@dyfet, perhaps your Thread title is not descriptive enough for our Forum readers; perhaps it should be updated?

Well, I was going to, but I found a fundamental issue, which goes back to Richard’s attitude about Alpine Linux. Yes, the openSUSE Tumbleweed busybox image is about as small. But Alpine’s tiny image also includes apk, and so it can install arbitrary packages from the Alpine repos; you can specify apk commands in your Dockerfile RUN steps, etc. The openSUSE busybox image has no zypper and no apparent means to install anything from repos, nor even a base rpm database so that you could maybe use it as an initialized chroot. Unless I am missing something, that seems to me a major use-case/usability deal breaker at the moment.
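For comparison, the Alpine workflow being contrasted here looks roughly like this with buildah (the mariadb package is from the Alpine repos as described; the exact invocation and entrypoint are my sketch, not anything from the blog):

```shell
# Start from the tiny Alpine base and install mariadb from Alpine's own
# repos via apk -- the step that has no openSUSE-busybox equivalent.
ctr=$(buildah from docker.io/library/alpine:latest)
buildah run "$ctr" -- apk add --no-cache mariadb

# Set the command the container runs in production (path is illustrative).
buildah config --cmd '/usr/bin/mariadbd' "$ctr"

# Commit the working container as a new, still-small image.
buildah commit "$ctr" mariadb-alpine
```

The same sequence could equally be written as Dockerfile RUN steps; the point is that a package manager inside the base image is what makes this one-step install possible.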

It’s also true that for most production containers those binaries are only executed at initial image build, whether in a Dockerfile RUN command or a buildah script, so they are often only used to support container config and setup. The container, in production, normally just starts a daemon process directly.

Where did you find an openSUSE busybox image?
I only found a busybox package, but despite its description, when I installed it into a JeOS I didn’t find any changes… According to the description, and what I know about busybox, I expected a long list of binaries to be replaced with symlinks, but when I inspected the contents of /bin I found nothing had changed. Thinking I was somehow mistaken about which binaries were supposed to be replaced, I inspected the contents of the busybox package and found I wasn’t mistaken.

So, bottom line: so far I haven’t found that installing busybox as a package does anything I can discern.
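For what it’s worth, the busybox binary itself can create those symlinks on demand; whether the openSUSE package is intended to be used this way is my assumption, but the applet mechanism is standard busybox:

```shell
# Show which applets this busybox binary was compiled with.
busybox --list

# Create symlinks for all applets in a directory of your choosing
# (-s makes symlinks rather than hardlinks); nothing in /bin is touched.
mkdir -p /usr/local/busybox
busybox --install -s /usr/local/busybox
```

If the openSUSE package ships the binary without running an install step like this, that would explain why /bin appears unchanged after installing it.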


My main use case is as I described, managing a team of Developers, and processing code through development stages and testing.
Each stage or test is done in its own isolated VM.
I developed this flow long before I became aware of Continuous Integration projects like Jenkins CI.
It’s all portable of course (one of the benefits of virtualization), can be run on anyone’s machine, on a server or locally on a machine.

A case can be made that the concepts can or should be applied on common personal workstations…
Apps like torrenting comes to mind for its security issues.
The benefit of better isolation of processes is that any kind of compromise for any reason (badly written code?) restricts the compromise only to that one app.
In fact, way back when systemd first appeared, I opined that it could provide the basis for a future “very secure” architecture by isolating every application, service, and process within its own container. To a certain degree, this is already being done in the x64 architecture: every running application is supposed to think it is the only one using resources and can access practically every address in the memory map, because apps run in their own virtual environments (not to be confused with virtualization).


Yes, I had no idea either where these things he talked about were until I asked Richard Brown. What you are looking for is in the registry; you should be able to directly

docker pull

from it. It is a 7.5 MB image. It is NOT the same as JeOS, and it’s not “just enough” to do anything other than run a hand-made executable copied into it; in particular, it has no zypper. The Alpine base image is about the same size, but it comes with apk, so I can for example create a mariadb Alpine docker image that remains rather small simply by running apk (Alpine’s package manager) in the container to add a pre-made mariadb from the Alpine repo directly, or from RUN commands in a Dockerfile. I have no idea how to even do the equivalent starting from an openSUSE busybox base image at this point, and I would not want to compile by hand, locally, things that are already in a repo; that feels like a major step backwards. I am simply hoping he is using something like kiwi, or some other tool, to create the images in this registry from regular Tumbleweed packages, since so far that part is not described anywhere I have yet found. Kiwi seems to me a likely candidate, though I suppose once you deal with the openSUSE dependency tree, busybox as a base will no longer be as small.

There is a minimal Debian image that DigitalOcean makes for docker, which is kind of a “just enough Debian” to run apt. I now suspect we are also not going to get much below what JeOS already offers for generic uses, and JeOS can potentially become the openSUSE equivalent of the DigitalOcean minideb base image. I now feel Richard Brown simply did not really understand the Alpine use cases, since that busybox image is his suggested replacement for it. I am also guessing the stand-alone busybox package cannot simply be installed on top of an already existing image to gain any benefit, if it does not kick out and overwrite binaries from other packages.

I did not see an option to rename a thread, but maybe something like “openSUSE Docker/Podman; MicroOS, Leap, or Tumbleweed?” is perhaps a bit clearer…?

For me, it is also about finding the most efficient way to leverage OBS for creating containers.

Nope thank goodness :wink:

The point of my post is not ‘Leap vs Tumbleweed’ but perhaps more ‘Leap vs MicroOS’

For my own desktops, Leap never had a chance; I don’t even remember installing it, even for testing purposes.

But Leap served well for a server. Now, with MicroOS, there is literally no use case I could possibly see myself using Leap for, and my experience has been so good that I’m seriously thinking of extending that advice: MicroOS is the only server OS I’d currently recommend.

But MicroOS is a different operating system, with a different concept and mindset required; you cannot just hop between Leap and MicroOS the way you can between Leap and Tumbleweed.

Fair enough. I’ll admit that if I were running a server, I would want it to use a fairly minimal underlying OS.

I think that MicroOS can make a case for being the Guest on a multi-tenant Server.
But the question remains what would be best for the base HostOS, and for Workstations.

Tumbleweed can at least be trialed with the following caveats…

  • You absolutely must have fast and reliable broadband. There is no getting around the massively larger number of packages that are upgraded, and at a much faster pace, than on Leap.
  • With change there is an inescapable increase in risk. For most Workstation use, the ability to roll back, change the kernel, or otherwise easily undo a problematic change means this might not be a great issue. But in situations where a contracted QoS specifies very restrictive standards of several “nines,” I can’t see any architect intentionally increasing risk in any way. Enough effort goes into designing sufficient fault tolerance and robustness; why do anything that goes in the opposite direction at all?
  • Complex User Workstations, which might do everything a business requires <and> serve Users’ personal desires like gaming, multimedia, streaming, and social media, can become extremely complicated, with vast exposure to anything at all going wrong. When a machine is used this way, does it make sense to roll everything back just to fix one problem? What if the exposure is so great that problems crop up more often than not? This is very different from isolating numerous functions in their own containers, which can be individually addressed without affecting all the others on the machine.

I can’t see any sane SysAdmin/Architect willing to chance more headaches in any way.
That is why CentOS became a favorite for so long, although I’ve heard murmurs that rapidly changing hardware is causing difficulties.