Starting with openSUSE 42.2, there seems to be an artificially created conflict between the following packages:
virtualbox-host-kmp-default (needed by virtualbox);
virtualbox-guest-kmp-default (needed by virtualbox-guest-tools, virtualbox-guest-x11).
As a result, one can only have either host or guest tools installed at any time, but not both simultaneously.
Previous versions of openSUSE were able to function in all of the following modes, given that proper software was installed:
1. conventional bare-metal;
2. host for Xen;
3. host for Qemu/KVM;
4. host for VirtualBox;
5. guest for Xen, both paravirtual and fully virtualized;
6. guest for Qemu/KVM;
7. guest for VirtualBox.
The last one made it possible to dual-boot into Windows and use VirtualBox there to run openSUSE from its raw disks, as well as other guests, while still retaining the ability to boot back into openSUSE and use it as a VirtualBox host to run the very same guests. As of now, however, I have to choose carefully between option 4 and option 7, as they have become mutually exclusive, or at least so it seems. Doesn't this bother anyone except me? I searched the forums and the wiki, but found no mention of this newly introduced issue, not even a single complaint. Or am I missing something?
Hi, I don't remember the exact details at the moment, but at some point near the release of 42.2 there were problems on some systems when both the guest-kmp and the host-kmp were installed, so an "artificial" conflict was included in the packaging to avoid that situation on default installs.
I think you can override that if you want and if your system does not show adverse effects when both packages are installed.
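If you do want both sets of modules, something like the following might work. This is a sketch, not a tested recipe: the package names are the ones from this thread, and the exact solver prompt wording varies between zypper versions.

```shell
# Attempt to install both kmps; zypper's interactive solver will detect
# the packaged conflict and offer resolutions, one of which is typically
# to break the conflicting package "by ignoring some of its dependencies".
sudo zypper install virtualbox-host-kmp-default virtualbox-guest-kmp-default

# Last resort (bypasses dependency checking entirely, so use with care):
# download the rpm and install it ignoring the declared conflict.
# sudo rpm -i --nodeps virtualbox-guest-kmp-default-<version>.rpm
```

Note that a forced install like this may be undone again on the next `zypper dup`, since the solver will still see the conflict.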
Maybe other forum members remember the details about that (wolfi323?). Or a better search of the Forums around the end of 2016 might offer some meaningful result.
In short, the main reason the conflict was reintroduced (it existed in 13.2 already too) was that the guest-kmp was "broken" for a while and also broke the installation of other kmps (like nvidia) on the host. The purpose was to prevent it from getting installed on the host in the first place (it gets pulled in automatically on the host too for some reason).
Shouldn’t be necessary any more as that problem should be fixed.
It may not make sense today, but will sometime in the future.
Practically all virtualization technologies are trying to implement "nested" virtualization (i.e. Guests which are themselves configured as a virtualization HostOS), in this special case by granting the Guest direct access to the CPU virtualization extensions and bypassing the HostOS.
Every virtualization technology calls this by a different name.
In VBox's case I tested this about 6 months ago and it didn't really work, but that isn't unusual… First, although offered as an option by practically every common virtualization technology, it's very bleeding edge with varying success. And VBox has a long history of leaving some components not fully implemented for several releases.
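For reference, later VirtualBox releases (6.0 and newer, so after the time of this thread) expose nested hardware virtualization as a per-VM switch. A sketch, assuming a hypothetical VM named "opensuse-guest":

```shell
# Enable nested hardware virtualization for the guest, so the guest OS
# sees the CPU's VT-x/AMD-V extensions and can itself act as a host.
# The VM must be powered off when changing this setting.
VBoxManage modifyvm "opensuse-guest" --nested-hw-virt on

# Check that the setting took effect:
VBoxManage showvminfo "opensuse-guest" | grep -i "Nested"
```

On releases current when this thread was written, the switch does not exist and nested virtualization is effectively unavailable in VBox, which matches the test result described above.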
So, it <can> make sense for a Guest to also need HostOS modules installed.