virt-manager throws an error after downgrade from Tumbleweed to 13.2

After downgrading from Tumbleweed to 13.2, the KVM virtual machine manager doesn’t find a guest I created in Tumbleweed. Upon starting it, I get an error dialog with these details:

Error launching manager: list index out of range

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/engine.py", line 927, in _do_show_create
    self._get_create_dialog().show(src.topwin, uri)
  File "/usr/share/virt-manager/virtManager/create.py", line 174, in show
    self.reset_state(uri)
  File "/usr/share/virt-manager/virtManager/create.py", line 382, in reset_state
    (index, inst_repos) = util.getInstallRepos()
  File "/usr/share/virt-manager/virtinst/util.py", line 607, in getInstallRepos
    str = locations[index]
IndexError: list index out of range

and the guest is not shown in the window as it used to be.

I was advised to try installing the latest version of virt-manager from http://software.opensuse.org/package/virt-manager (using “Show unstable packages”), but I am not quite convinced that I want to use unstable packages. So before going that way I wanted to ask:

Is there another way to fix that?

I recommend you just do a force re-install of libvirt.

zypper in -f libvirt

That will re-install libvirt using whatever repositories are currently configured.
I assume the TW repo has been disabled or removed, and your OSS repositories are enabled.
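
If you want to double-check afterwards which repository the installed libvirt packages actually came from, this should show it (-s adds version/repository details, -i restricts the search to installed packages):

zypper se -si libvirt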

For that matter,
If you’ve updated your system since your downgrade, you should have gotten a bunch of notifications, if not actual downgrade options. Before anything else, run the following if you haven’t already; it should make a number of changes automatically, so force re-installs will only be needed for the packages that didn’t change on their own.

zypper up

TSU

Done. No change. No other notifications/options either.

Please post your repo list, using the following command:

zypper lr -d

TSU

Looks like this bug:
https://bugzilla.opensuse.org/show_bug.cgi?id=933242

Although that’s reported against Tumbleweed…
Apparently the latest zypper update broke it on 13.2 as well, so you should reopen that bug report and mention that it is now broken on 13.2 too.


A workaround seems to be to disable all repos:

sudo zypper mr -d --all
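
If you try that, the repos can all be re-enabled again afterwards with:

sudo zypper mr -e --all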

The package in the Virtualization repo (version 1.2.1) should work though according to the bug report.
So you could also install that as I already suggested.
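
If you want to go that route, something like this should do it (the URL follows the usual OBS pattern for the Virtualization project on 13.2, so better verify it on software.opensuse.org first):

sudo zypper ar -f http://download.opensuse.org/repositories/Virtualization/openSUSE_13.2/ Virtualization
sudo zypper in --from Virtualization virt-manager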

So, it appears that there are still bits from TW left in your system.
The alternative to doing an update, and maybe a force re-install with the Virtualization repo added, is probably a complete uninstall of libvirt followed by a re-install, if a force re-install isn’t sufficient.

Either way can be tried, and if it doesn’t work then do the other.
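
For the complete uninstall/re-install route, that would be roughly the following (libvirt is split into several subpackages on openSUSE, so you may have to repeat this for e.g. libvirt-daemon and libvirt-client as well):

sudo zypper rm libvirt
sudo zypper in libvirt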

I’m unclear how that workaround is supposed to work (as described in the bugzilla). Disabling all repos doesn’t seem like an effective fix to me in any way, except maybe to block an online update from the TW repo (which in the current situation shouldn’t even exist), and it supposes that the existing bits on disk still work.

TSU

No.
zypper has been updated in 13.2 recently, and with this update the repo list output format has been changed to the same as in Tumbleweed.
And that’s apparently what causes the problem.

Done. And it’s working, like all your answers do! :slight_smile: Thank you so much.

Just to ask - Am I safe to continue with the Virtualization repo or how do I handle this long term?

You can continue with the Virtualization repo, yes. Should be safe.
You’ll always get the latest version then, and you could also install the latest versions of other virtualization software like VirtualBox (but packages won’t be switched to this repo automatically; you have to do it manually on a case-by-case basis, e.g. via the “Versions” tab in YaST).
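
If you prefer the command line over YaST, the per-package switch can also be done with something like this (the package name is just a placeholder):

sudo zypper in --from Virtualization <package>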

Ideally, an update to fix this should be released for 13.2 though. For this someone has to reopen the bug report and mention that it is now an issue in 13.2 too since the recent zypper update.
As I don’t use virt-manager myself (no hardware virtualization support here), it would be good if you could do that.

Great.
Ok, I have added a comment to the bug you linked, marked the status as Reopened, and added a reference to this thread. I hope that is the correct thing to do.

Well, I just noticed that an update is already running:

So reopening the bug wouldn’t have been necessary.
Hm, the maintainers will probably close it again themselves. Personally I prefer to do that only when an update is actually released… :wink:

Good :slight_smile:

BTW, something else strange happened. I have just rebooted and there is no sound at all.
I opened YaST > Sound and both the SB Audigy (which I used) and the built-in audio show as “Not configured”. I picked the Audigy, Quick automatic setup, Next… it asked to reinstall alsa-firmware and an awesfx package, I confirmed, closed YaST - still no sound.

Any idea what might be happening and why it happened after this change?

Another reboot and sound is back. Weird.

Hm. Maybe some process/application grabbed the audio device and prevented others from outputting sound?
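
If it happens again, it might be worth checking which processes are holding the sound devices at that moment, e.g. with:

fuser -v /dev/snd/*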

Sounds unrelated to adding the Virtualization repo though, especially if you only installed virt-manager from there…

You should not really have to configure it.
If it isn’t configured, the kernel will pick the drivers/settings automatically during boot, and this should just work nowadays.

Hm. No sound again (didn’t even reboot).

Back in YaST > Sound, I deleted the SB and reconfigured it (Normal setup); the test sound played without problems (it got the files from the original CD). Then I closed YaST and there was no sound in any app. And the KDE mixer shows no devices.

Also, in KMix’s audio setup the only listed device is PulseAudio Sound Server. But hitting the Play button there doesn’t produce any sound at all.

No app using sound was running (except the browser, but nothing had been playing in it for the last hour).

Reboot #3 and everything looks normal again. Hopefully it will stay that way.

BTW the guys updated the bug status.

Many thanks once again wolfi!

But the error seems to be thrown by virt-manager, not zypper, so updating zypper shouldn’t be relevant and isn’t likely the issue (unless there is more to what is happening than what is described).
I’m under the impression that the bugzilla was about the libvirt package in the virtualization repo being patched, not anything to do with zypper.

So, still mystified about what “disabling all repos” did in that bugzilla.

TSU

It is.

I’m under the impression that the bugzilla was about the libvirt package in the virtualization repo being patched, not anything to do with zypper.

Yes, it is a “bug” in virt-manager. Apparently it runs “zypper lr -u” to get a list of enabled repos, and it couldn’t cope with the changed output format of the latest zypper version, so it crashed.

It runs “zypper lr -u” because of this:

Enhancement that gets the host’s installation location from install.inf and also collects the repos provided by zypper. These locations are then presented as potential installation locations when creating a VM.

The update adapts virt-manager to accept that new output format.

So, still mystified about what “disabling all repos” did in that bugzilla.

“zypper lr” will print an empty list then, and virt-manager doesn’t crash.
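
Just to illustrate the failure mode, here is a rough sketch of that kind of parsing (this is not the actual virtinst code; the column positions and the “oss” heuristic are only made up for illustration):

import subprocess

def get_install_repos():
    # run zypper and keep only the table rows ("1 | repo-oss | ... | http://... |")
    output = subprocess.check_output(["zypper", "lr", "-u"]).decode()
    rows = [line for line in output.splitlines()
            if "|" in line and line.split("|")[0].strip().isdigit()]
    if not rows:
        # no enabled repos at all -> return early with an empty list,
        # presumably why "disable all repos" avoids the crash
        return (0, [])

    locations = []
    index = 0
    for i, row in enumerate(rows):
        fields = [f.strip() for f in row.split("|")]
        # the URI is assumed to sit in a fixed column; if a zypper update
        # inserts a new column, the URI is no longer found there and the
        # repo is silently skipped
        if len(fields) > 6 and fields[6].startswith(("http", "ftp", "dir:")):
            locations.append(fields[6])
        if "oss" in fields[1].lower():
            index = i   # remember the row number of the main OSS repo

    default = locations[index]   # IndexError once fewer URIs were parsed than rows seen
    return (index, locations)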

After some long thought,
I can’t think of a reason for zypper to be invoked except one corner scenario supporting LXC. Because LXC isn’t virtualization, only isolation, that is the one case where it makes sense for libvirt to use the native package manager to install and maintain a Container… and even so, if the LXC container is a pre-built image (which is how LXC containers were deployed a year ago when I was looking at this), then zypper would be embedded in the image.

In other, plainer words, I can’t think of a single reason for zypper to be running in the “Host” app, only in the “Guest” or “Container” environment.

Except for that one very unusual corner case (an LXC openSUSE container installed from scratch), I cannot think of a reason why libvirt would use an embedded zypper package.

The problem for me in evaluating this issue is that I can’t find an easily viewable public source, like a GitHub or SVN repo. The OBS project lists its patch history, but I don’t see a link to the source that would let me inspect/verify why zypper is even there.

TSU

PS. Shortly after writing this post I thought of a second, but again very rare, case of running zypper on the Host: when implementing LXC containers, it’s possible to run a command from the Host that makes changes in the Container, because the Container’s file system is completely visible to the Host. This is very rare because there are very few scenarios where the command shouldn’t instead be run from within the Container.
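
For completeness, that host-side case would look something like this (the rootfs path is just the usual LXC default and the package name a placeholder; adjust to wherever the container actually lives):

sudo zypper --root /var/lib/lxc/<container>/rootfs in <package>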