[IDEA] Onion package management

In the real world, containers play a big role in industry, and entertainment takes advantage of them too (look at Flatpak and Steam). Docker is a great technology.

But… I have an idea to create something that is both new and old.
It uses old mechanisms, but in quite a new way.

I call it package tracing, or Onion Package Management.
Zypper and other package managers allow you to change the root directory. When working in this mode, they also change the package database path. How about integrating many package databases in an onion-like, layered manner, and changing the root directory used for installing packages? It could help reduce space usage. With some environment variables, like PATH and LD_LIBRARY_PATH, it could help keep many system-like tiers, but with reduced space. It won't be a real container unless we also create hardlinks, in the layer's file system, to the real libraries/executables/other files from the previous layer. It could be helpful for Steam, which uses SteamTricks to resolve dependencies and install them on the system.

Imagine Steam is using my solution now: it installs packages into a package tracing directory (the root set in zypper) and sets PATH and LD_LIBRARY_PATH to the correct package tracing subdirectories. Now the magic. It sets ONION_PACKAGES=/:/opt/Steam. Steam requests the installation of libsomething. Zypper finds that libsomething requires libbenice, but libbenice is already installed in /lib64, so zypper installs only libsomething into /opt/Steam/lib64. That's all!
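To make this concrete, here is a minimal sketch (in Python, not real zypper code) of how an onion-style resolver could decide which dependencies are already satisfied by a lower layer and which ones must be installed into the top layer. The layer list, the tiny per-layer databases and the install() helper are all made up for illustration.

    # Hypothetical sketch of onion-style dependency resolution.
    # Layers come from something like ONION_PACKAGES=/:/opt/Steam
    # (lowest layer first, top layer last).
    import os

    LAYERS = os.environ.get("ONION_PACKAGES", "/:/opt/Steam").split(":")

    # Pretend per-layer package databases (package name -> version).
    LAYER_DB = {
        "/":          {"libbenice": "1.0"},
        "/opt/Steam": {},
    }

    # Pretend dependency metadata (normally read from repo metadata).
    DEPENDS = {"libsomething": ["libbenice"]}

    def provided_by(pkg):
        """Return the lowest layer that already provides pkg, or None."""
        for layer in LAYERS:
            if pkg in LAYER_DB.get(layer, {}):
                return layer
        return None

    def install(pkg, top_layer=LAYERS[-1]):
        """Install pkg into the top layer, reusing deps from lower layers."""
        for dep in DEPENDS.get(pkg, []):
            layer = provided_by(dep)
            if layer is None:
                install(dep, top_layer)            # dep missing everywhere
            else:
                print(f"{dep}: reused from layer {layer}")
        LAYER_DB.setdefault(top_layer, {})[pkg] = "new"
        print(f"{pkg}: installed into {top_layer} (only the top layer grows)")

    install("libsomething")    # only libsomething ends up in /opt/Steam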

Of course, there is one bad thing: what about removing libbenice from /? I think that:

  1. Each tool that uses package tracing should run a dependency check each time before it starts,
    or
  2. There should be a global database

About point 2:
We can keep a single database, but store the installation prefix in it.
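A minimal sketch of what I mean by point 2, in Python, assuming the global database is just one table that also records the prefix each package was installed under; the column layout and the reverse-dependency check are made up for illustration.

    # Hypothetical single global database: one entry per installed package,
    # including the prefix (layer) it was installed under.
    GLOBAL_DB = {
        # name:         (version, prefix,       requires)
        "libbenice":    ("1.0",   "/",          []),
        "libsomething": ("2.3",   "/opt/Steam", ["libbenice"]),
    }

    def safe_to_remove(pkg):
        """Refuse removal if any package, in any prefix, still requires pkg."""
        users = [name for name, (_ver, _prefix, reqs) in GLOBAL_DB.items()
                 if pkg in reqs]
        if users:
            print(f"cannot remove {pkg}: still required by {users}")
            return False
        return True

    safe_to_remove("libbenice")    # blocked: /opt/Steam still needs it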

Also see the GNU Guix project, which aims to be an inter-distribution package management system, but is very different from snap/flatpak.

Your ideas about package management are probably not entirely new, and to your credit they do try to address known inefficiencies observed when distributing containerized apps… But you should recognize that others have thought about such things, and then try to understand why decisions were made for how things work now. In other words, understand what exists before starting to improve it; otherwise what you come up with won't be better, because you won't be solving the problems others have already thought about.

In particular, consider how containerized apps are created (and create some on your own to become better acquainted)… You can build your app as only a snippet of code that re-uses components outside of your container, but in that case your app will only work when those outside dependencies exist. Instead, if you build your containerized app with everything your app needs, it will be exceptionally portable, but the included components can duplicate components already on the system. That's disk-inefficient, but it maximizes deployment options. That's why a container that can run on any distro actually contains a mini version of an entire OS.

Also, even within any particular package management system, there already is a master tool. So, for instance, on any RPM-based system, although you can use tools like zypper, yum and dnf, you can always use rpm, which will work everywhere on any RPM system.

HTH and IMO,
TSU

Thanks for the reply. Maybe, instead of creating a new filesystem, create an altered RPM db in advance. Zypper knows what will happen if we agree to an operation such as a dist upgrade, installation or removal. I know almost nothing about the RPM db, so correct me if I'm wrong, but… what about creating the altered RPM db in advance? In this case, zypper could calculate the next operation while the previous zypper instance is still downloading packages or even installing them. Of course, if the previous zypper instance did something wrong, we also terminate the newer zypper instances and remove the RPM dbs created in advance. Also, when something goes wrong, we send an e-mail to root.
Imagine this situation:
I ask to install kdevelop. Zypper calculates the entire change and writes a new RPM db in a special location. My brother (we use the same computer) would like to install a game, so he logs in as root and types zypper in wesnoth. The newer instance of zypper looks at the RPM db created by the previous one and calculates its changes relative to the state of that newer db. Again, a newer version of the RPM db is created. Both instances start downloading packages. When my instance finishes its work correctly, and the zypper called by my brother has downloaded all dependencies plus the wesnoth package, it installs wesnoth. After it all finishes, the newest RPM db replaces the oldest.
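A rough sketch of that scenario, assuming the RPM db can be treated as something we can simply copy; the commit/rollback rules and the mail-to-root step are my own invention, not anything rpm or zypper actually provides.

    # Hypothetical speculative-db scheme: each new request is resolved
    # against a copy of the newest pending db, not against the live one.
    import copy

    live_db = {"installed": ["glibc"]}     # stands in for /var/lib/rpm
    pending = []                           # stack of speculative db copies

    def request_install(pkg):
        base = pending[-1] if pending else live_db
        new_db = copy.deepcopy(base)
        new_db["installed"].append(pkg)    # "mark pkg as installed"
        pending.append(new_db)
        return len(pending) - 1            # index of this transaction

    def finish(index, ok):
        global live_db
        if ok:
            live_db = pending[index]       # the newer db replaces the older one
        else:
            del pending[index:]            # drop this copy and all later ones
            print("mail to root: transaction failed, later requests dropped")

    t1 = request_install("kdevelop")       # my request
    t2 = request_install("wesnoth")        # my brother's request
    finish(t1, ok=True)                    # kdevelop installed correctly
    finish(t2, ok=True)                    # wesnoth installed correctly
    print(live_db["installed"])            # ['glibc', 'kdevelop', 'wesnoth']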

I do not understand all you say, but I see you talk about "I" and "my brother" installing system software. It could be that you mean it correctly, but saying it like this (and I admit many people say "I" when describing things done on their system) is sheer nonsense. The system does not know "I" (or you may prefer "you" here) or "your brother". The system only knows UserIDs. And for installing system software there is but one UserID able to do that: UserID=0, better known under the user name "root". Other users, even if the user name is "I" or "my_brother", cannot install or change anything in the system area of files (or something is severely broken on the system).

Sorry - my English is very poor. I will try to describe what I mean.

  1. This topic isn't related to my last post
  2. The idea is about allowing the user to ask the system to install something while zypper is already working
  3. When a user asks the system to install something while another instance of zypper is running, the changes are calculated against a modified copy of the RPM db, and packages are downloaded while the previous zypper instance is still running
  4. When the previous instance of zypper finishes its work without errors, the altered RPM db (generated by the zypper that worked previously) replaces the oldest one, and we can install the packages given to the zypper that is currently waiting for the previous instance to finish
  5. When the previous instance of zypper finishes its work with errors, we remove the entire set of altered RPM dbs, send an e-mail to root and do nothing more

The idea is about allowing the user to ask the system to install packages while the system's RPM management is locked.

Then this is all beyond my comprehension. When the system administrator/manager is managing the system (as root) and during that session runs zypper, (s)he is doing just that, and no second zypper will be started at the same time (when (s)he knows what (s)he is doing).

Sorry, I will leave the thread because I can not follow your intentions.

Your comment suggests you're looking to address the commonly seen issue that multiple instances of package management (i.e. YaST Software Management, PackageKit, zypper) cannot be run simultaneously.

  • Circumstantial evidence suggests this is managed by libzypp (see the lock-file sketch just after this list).
  • There are probably good reasons for package management to block multiple instances from running simultaneously. Even if there is no single definite reason, it could simply be a matter of erring on the side of caution, because there are so many libzypp functions to account for.
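For reference, a small sketch of how that single-instance rule can be observed from outside: libzypp records its lock in a pid file. The path /run/zypp.pid is what I see on current openSUSE systems, but treat it as an assumption and verify it on yours.

    # Sketch: check whether another libzypp-based tool currently holds the lock.
    # Assumes the lock file is /run/zypp.pid (verify the path on your system).
    import os

    LOCK = "/run/zypp.pid"

    def zypp_locked():
        try:
            pid = int(open(LOCK).read().strip())
        except (FileNotFoundError, ValueError):
            return False                   # no lock file -> nobody holds the lock
        try:
            os.kill(pid, 0)                # signal 0 only checks existence
        except ProcessLookupError:
            return False                   # stale lock file, process is gone
        except PermissionError:
            return True                    # process exists but is not ours
        return True

    print("package management locked:", zypp_locked())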

Your train of thought treads down the path of conflict management.
To gain some understanding of the scope and nature of what you're trying to address, I'd suggest you study what happens when files are accessible by more than one person (e.g. on a network share, though the situation arises in many scenarios) and why care must be taken to avoid file corruption and/or loss of data. Managing file conflicts when files are accessed by multiple users is not exactly the same problem, but it is likely easier to envision, and it raises similar issues.

The most basic scenario can be described by what happens in a Network Share.
A file can be opened (checked out) by User1 and User2 simultaneously.
User1 makes some change to the file (e.g. adds or deletes some content) and saves his changes.
The problem is that User2 now has a file that is no longer consistent with the new state of the file stored on the Network Share.
User2 is unaware of this, and is working on a copy of the file that doesn't have the modifications User1 has made, so:

  • If User2 simply closes his copy of the file, then no harm, no foul.
  • If User2 attempts to save his copy of the file, regardless of whether he makes any changes of his own, then conflicts arise and need to be resolved.

There is a large amount written on how such conflicts should be resolved, and much of it suggests rules to implement.
But, there actually is no perfect set of rules that can be written, and some data must be lost in some scenarios… For example if User1 and User2 make changes to the exact same data in a way that the changes can’t be merged perfectly or even contradict each other, then someone or something has to decide whose data changes should be discarded.

That is why generally speaking the simplest and best solution is to avoid conflicts in the first place.
MSWindows enforces the rule that only the first User who “checks out” a file can make modifications. Until that User checks the file back in, everyone else who also checked out the file has a Read-Only copy.
In Linux, there is no such fixed rule, so by default conflicts can occur; but, recognizing the consequences, applications used to access shared data are generally written to enforce some kind of rule(s) to avoid conflicts.
Similarly, it’s possible to theorize that libzypp was designed to support only one application use at a time.
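To illustrate the "enforce a rule in the application" approach, here is a minimal sketch of the kind of advisory lock a Linux program can take so that only one instance modifies shared data at a time. The lock file name is arbitrary, and this illustrates the general technique only, not how libzypp actually implements its lock.

    # Sketch: cooperative single-writer rule using an advisory flock().
    # Programs that follow the same rule will refuse to run concurrently;
    # programs that ignore the rule are not stopped (the lock is advisory).
    import fcntl
    import sys

    LOCKFILE = "/tmp/shared-data.lock"     # arbitrary name for the example

    lock = open(LOCKFILE, "w")
    try:
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)   # exclusive, non-blocking
    except BlockingIOError:
        sys.exit("another instance is already modifying the shared data")

    # ... safely modify the shared data here ...
    print("got the lock, working")

    fcntl.flock(lock, fcntl.LOCK_UN)       # also released automatically on exit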

That is why what you suggest at least raises the possibility of problems, and the golden rule should be to avoid any possibility of a conflict at all.

HTH,
TSU

@tsu2:
I know a little about conflicts and file locking; I'm a C/PHP/JS developer. I know that in the Unix world there can even be a lock on part of a file, but many applications simply don't use this API and instead create a lock which is itself a file.
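For what it's worth, this is the byte-range API I mean (POSIX record locks via fcntl); the file name, size and offsets are just for illustration.

    # Sketch: locking only part of a file (POSIX record lock), as opposed to
    # the "create a separate lock file" convention that most tools use.
    import fcntl
    import os

    f = open("/tmp/example.dat", "a+b")    # throwaway file for the example
    f.truncate(1024)

    # Lock bytes 100..199 only; other processes can still lock other ranges.
    fcntl.lockf(f, fcntl.LOCK_EX, 100, 100, os.SEEK_SET)

    # ... update exactly that region of the file ...

    fcntl.lockf(f, fcntl.LOCK_UN, 100, 100, os.SEEK_SET)
    f.close()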

But my idea is not about locking. It comes down to this solution:
Keep in mind that user A and user B could be the same user or different users.
In the first step, user A asks zypper to install Wesnoth.
Zypper creates a local copy of the rpmdb and applies to it the changes that will take effect once Wesnoth is installed (it marks Wesnoth as installed).
User B asks to install Firefox.
Zypper creates a local copy of the previously created copy of the rpmdb and applies to it the changes that will take effect once Firefox is installed (it marks Firefox as installed).
Zypper finishes installing Wesnoth.
A. Some errors occurred, so zypper removes the second (Wesnoth) and third (Firefox) rpmdb.
B. No errors occurred, so zypper copies the second rpmdb (the first copy) onto the first (the original rpmdb).
B. Zypper starts downloading and installing Firefox.
B.A. Some errors occurred, so zypper removes the third (Firefox) rpmdb.
B.B. No errors occurred, so zypper copies the third rpmdb (the second copy) onto the first (the first copy, which is now the base rpmdb).

To be clear: my idea isn't especially about running many instances of package management, but rather about allowing a zypper or rpm daemon to run which takes tasks from a queue. Programs like zypper, YaST2 and PackageKit could add operations to this queue. Also, if many instances of package managers plus locking turns out to be the better solution, maybe I will implement it (or someone who is interested in it will).
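A very small sketch of the daemon-with-a-queue variant, assuming the front ends (zypper, YaST2, PackageKit) would only enqueue requests while a single worker executes them strictly one after another; run_transaction() is a stand-in for the real rpm/zypper call, not an existing API.

    # Sketch: one worker thread serializes all package operations,
    # so front ends never compete for the rpm db.
    import queue
    import threading

    jobs = queue.Queue()

    def run_transaction(op, pkg):
        # Stand-in for "really call zypper/rpm here".
        print(f"running: {op} {pkg}")

    def worker():
        while True:
            op, pkg = jobs.get()
            try:
                run_transaction(op, pkg)   # only one transaction at a time
            finally:
                jobs.task_done()

    threading.Thread(target=worker, daemon=True).start()

    # Any front end would just enqueue its request:
    jobs.put(("install", "kdevelop"))
    jobs.put(("install", "wesnoth"))
    jobs.join()                            # wait until the queue is drained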

The point is that if the issue is conflict resolution (my speculation, but IMO a reasonable guess), the safe decision is to avoid conflicts in the first place. You can suggest all the rules for conflict resolution you want, but you'd already have opened a Pandora's box of possibilities… i.e. are the rules you propose valid, and do they work for those scenarios? You've proposed rules for only two applications being installed simultaneously; what happens if the parameters expand, like more than two simultaneous installations? Is there a better resolution or more desirable result possible by using different rules? Or, if the rules were similar but the packages different, could that cause different types of conflicts that would necessitate new and different rules? And so on…

But before you even explore the kaleidoscope of possible interactions based on different combinations, you should first ask whether there is any expected real benefit. More than likely, disk I/O would be the constraining bottleneck; on some systems, maybe memory or CPU (although that is far less likely). If there is no tangible benefit to changing how things work now (e.g. no time saved), then you'll have to weigh the worth of all the additional complexity and possible points of failure.

IMO,
TSU