If you did an offline upgrade, it's critical to update the system immediately afterwards with the following command, to pull down the latest fixes, patches, and updates:
zypper up
Then, reboot and re-evaluate what works and what doesn’t.
Following a lead from another thread, I created a user and group qemu, made qemu the primary group of user qemu, added the bin and libvirt groups to user qemu, and rebooted.
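For the record, the commands were roughly these (a sketch from memory; exact options may vary on your system):
groupadd qemu
useradd -g qemu -G bin,libvirt qemu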
Now I get
# systemctl status libvirtd
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Wed 2014-10-29 20:05:38 EDT; 8min ago
 Main PID: 2935 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─2935 /usr/sbin/libvirtd --listen
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route - destination and gw are both 0/0
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route from netlink message: No such file or directory
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route - destination and gw are both 0/0
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route from netlink message: No such file or directory
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route - destination and gw are both 0/0
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route from netlink message: No such file or directory
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route - destination and gw are both 0/0
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route from netlink message: No such file or directory
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route - destination and gw are both 0/0
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route from netlink message: No such file or directory
I can now log in with VMM; however, all the guest machines are paused and fail to run.
From what I see, something is not generating the virtual network interfaces for the VMs.
I find that when starting libvirtd initially, I get this:
# systemctl -l status libvirtd
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Wed 2014-10-29 22:13:06 EDT; 31min ago
 Main PID: 32692 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─32692 /usr/sbin/libvirtd --listen
Oct 29 22:13:06 VERN systemd[1]: Starting Virtualization daemon...
Oct 29 22:13:06 VERN systemd[1]: Started Virtualization daemon.
Oct 29 22:13:06 VERN libvirtd[32692]: libvirt version: 1.1.2
Oct 29 22:13:06 VERN libvirtd[32692]: Configured security driver "none" disables default policy to create confined guests
Oct 29 22:13:06 VERN libvirtd[32692]: Failed to query host NUMA topology, disabling NUMA capabilities
Oct 29 22:13:06 VERN libvirtd[32692]: Failed to get host CPU
Oct 29 22:13:07 VERN libvirtd[32692]: Failed to query host NUMA topology, disabling NUMA capabilities
Oct 29 22:13:07 VERN libvirtd[32692]: Failed to query host NUMA topology, disabling NUMA capabilities
It's only after this point that I see the error mentioned in the previous post, and the error repeats ad nauseam once the VMM is fired up.
Could there have been an upgrade in Xen versions between 12.3 and 13.1?
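A quick way to check what is actually installed now (a plain rpm query; the grep pattern is just my guess at catching the relevant packages):
rpm -qa | grep -i xen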
# cat /var/log/libvirt/libxl/libxl.log
xc: debug: hypercall buffer: total allocations:7 total releases:7
xc: debug: hypercall buffer: current allocations:0 maximum allocations:1
xc: debug: hypercall buffer: cache current size:1
xc: debug: hypercall buffer: cache hits:6 misses:1 toobig:0
libxl: error: libxl.c:87:libxl_ctx_alloc: Is xenstore daemon running?
failed to stat /var/run/xenstored.pid: No such file or directory
[the same four "xc: debug: hypercall buffer" lines repeat several more times]
libxl: error: libxl.c:87:libxl_ctx_alloc: Is xenstore daemon running?
failed to stat /var/run/xenstored.pid: No such file or directory
[the same "xc: debug: hypercall buffer" block repeats many more times]
Makes me think so.
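Worth noting: the repeated "Is xenstore daemon running?" line in that log suggests xenstored never started after the upgrade. A quick, generic way to check for the process:
ps ax | grep xenstored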
I also ran into a new error, I think:
# systemctl -l status libvirtd
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Thu 2014-10-30 07:11:21 EDT; 6min ago
 Main PID: 2971 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─2971 /usr/sbin/libvirtd --listen
Oct 30 07:11:24 VERN libvirtd[2971]: Failed to query host NUMA topology, disabling NUMA capabilities
Oct 30 07:11:24 VERN libvirtd[2971]: Failed to query host NUMA topology, disabling NUMA capabilities
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:26 VERN libvirtd[2971]: this function is not supported by the connection driver: virConnectListAllDomains
As I go through this, I'm coming across other relevant info.
Years ago I experienced this error on a KVM machine (which may or may not have had a different cause, but with the same result):
Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
I resolved it by completely uninstalling libvirt, verifying all related files were removed, and then re-installing.
That more or less addresses what has been described in this thread…
The OS has been upgraded (and updated).
If libvirt didn’t successfully upgrade, then remove everything and re-install.
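Something along these lines should do it (the package list is from memory and may differ on your release):
zypper rm libvirt libvirt-client libvirt-daemon libvirt-python
zypper in libvirt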
Note that this may result in a working libvirt that isn't already loaded with your Guests; you may need to add them back into VM Manager manually.
Of course, things like snapshots can't be done through a non-working VM Manager. But if you want to do that (I can't think of a reason why you would), you should be able to use the command line; anything you can do in VM Manager, you should also be able to do with virsh in a console.
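For example, something like this from a console (the domain and snapshot names here are made up):
virsh list --all
virsh snapshot-create-as mydomu snap1
virsh snapshot-list mydomu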
The more I think about this, and seeing that xend is going away, the wiser it seems to move my config to the newer xl toolset. I believe I have pulled my xend configs for the domUs using the command "xm list -l domU" (run once per domU), which I saved as text files. I really don't know what broke xend and libvirt, and maybe the latter isn't broken if I get xl running with the proper configs.
I suspect libvirt might work once it's talking to a functioning hypervisor.
Just pondering before I go doing something foolish, again.
To do this as cleanly as possible, I rebooted into the desktop kernel instead of the xen kernel. Then I uninstalled, rebooted again into the desktop kernel, and re-installed.
No change, and the same errors.
Now, after the initial upgrade, I noticed that neither xend nor libvirt was enabled or running. That makes sense if you're downgrading xend; I don't know regarding libvirt. To confirm this:
xl list
Name                                ID   Mem VCPUs   State   Time(s)
Domain-0                             0 24110    16   r-----    602.5
xm list
Error: Unable to connect to xend: No such file or directory. Is xend running?
So, since Dom0 is the only listed domain under the new tools, I'm gonna go out on a limb here and say the next step would be to get my domUs migrated and running.
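If that goes the way I hope, it should be roughly a matter of pointing xl at the old config files (the path and name below are just my guess; note that xl can generally read old xm-style config files, but not the SXP dumps that "xm list -l" produces):
xl create /etc/xen/vm/mydomu
xl list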
This was followed by running locate. (If you use this utility, you need to install mlocate and then run "updatedb" to create your filename database.)
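On openSUSE that setup should look roughly like:
zypper in mlocate
updatedb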
locate libvirt
If the above didn't return clean, I manually removed whatever was listed.
This basically ensures libvirt is removed, including its configuration files.
Then I re-installed:
zypper in libvirt
At least in my situation, this resolved some kind of misconfiguration, and libvirt knew how to access the localhost libvirt server again.
I suspect the issue is similar no matter what the underlying virtualization technology is… There is a "server" side and a "user/client" side to libvirt, and the "client" needs to know how to communicate with the "server." I've guessed, without any supporting info, that it's a simple configuration issue: it might be a corrupted file or a file with incorrect information. That led to my approach of doing more than simply uninstalling; I needed to know that all configuration files were removed as well.
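As a quick sanity check of that client-to-server path, you can point virsh at an explicit connection URI (these are the standard URIs for Xen and KVM respectively):
virsh -c xen:/// list --all
virsh -c qemu:///system list --all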
I am proud to say I've made some progress in recovering from the upgrade. I have the domUs up and operating almost as they were previously, accessible via VNC. However, I still have loads of questions that remain unanswered.
Best I can tell, xend has been downgraded and is sort of useless. I'm almost leaning in the same direction with libvirt and other related tools too. YMMV, although I'm gonna keep posting what I learn along the way, because I find it hard to believe that I'm the only one suffering from this virtualization fiasco…
In there it tells you that xend has been downgraded and will go away in the future. And that is why I don't know how this will affect other related tools that worked against xend.
Following the aforementioned documents, I reconfigured/renamed my network bridge to comply with the xenbr0 naming convention. Since I already had the old br0 bridged interface, I found the easiest way was the following, as root.
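A minimal sketch of what I mean, assuming the bridge config lives in /etc/sysconfig/network the way it does here (file names and the restart step may differ on your setup):
cd /etc/sysconfig/network
mv ifcfg-br0 ifcfg-xenbr0
rcnetwork restart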