openSUSE 12.3 -> 13.1 upgrade: Xen unable to connect with VMM

I ran an offline upgrade from 12.3 to 13.1

I am pretty sure I’m on the dom0, but can’t seem to connect with the Virtual Machine Manager.

I get the following error.

Unable to connect to libvirt.

Failed to connect socket to ‘/var/run/libvirt/libvirt-sock’: No such file or directory

Verify that:

  • A Xen host kernel was booted
  • The Xen service has been started

Any suggestions on how I can get back into the VMs?

How did you do that?
Did you use a CD/USB? Or did you upgrade a Xen VM with an ISO image?
What steps did you use?

Downloaded and burned 13.1 x86_64 DVD

Rebooted the system from the DVD and upgraded.

If you did an offline upgrade, it’s always critical to update the system immediately afterwards with the following command, to pull down the latest fixes, patches, and updates.

zypper up

Then, reboot and re-evaluate what works and what doesn’t.

TSU

I would say:

  • create snapshots of all the virtual machines
  • run an online zypper up
  • get Xen running again:
    • What doesn’t run? (error messages / /var/log/messages) -> fix it
    • upgrade the Xen version
    • reinstall

Those are three different ways of fixing Xen.
We would need more information for the first way.
After that you can import and use your VM snapshots again.

No change after the updates, which were done via Apper.

Again trying to connect with VMM and get the following error.

Unable to connect to libvirt.

Failed to connect socket to ‘/var/run/libvirt/libvirt-sock’: No such file or directory

Verify that:

  • A Xen host kernel was booted

uname -a
Linux VERN 3.11.10-21-xen #1 SMP Mon Jul 21 15:28:46 UTC 2014 (9a9565d) x86_64 x86_64 x86_64 GNU/Linux

  • The Xen service has been started

From ps -A
1482 ? 00:00:00 xend
1489 ? 00:00:04 xend

So there is a problem with libvirt, as it fails to run. (From syslog)

2014-10-29T16:16:14.480603-04:00 VERN systemd[1]: Starting Suspend Active Libvirt Guests…
2014-10-29T16:16:16.779605-04:00 VERN libvirtd[2990]: libvirt version: 1.1.2
2014-10-29T16:16:16.780124-04:00 VERN libvirtd[2990]: invalid argument: Failed to parse user ‘qemu’
2014-10-29T16:16:16.780643-04:00 VERN libvirtd[2990]: Initialization of QEMU state driver failed: invalid argument: Failed to parse user ‘qemu’
2014-10-29T16:16:16.781113-04:00 VERN libvirtd[2990]: Driver state initialization failed

Suggestions?

Progress;

Following a lead from another thread, I created a user and group qemu, made the qemu group primary to user qemu, and added bin and libvirt groups to user qemu, and rebooted.
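For anyone else hitting the “Failed to parse user ‘qemu’” error, the steps above can be sketched as follows. This is only a sketch: the `--system` flags, shell, and group memberships are assumptions based on what worked here, so adjust to your setup.

```shell
# Create a 'qemu' group, then a matching system user whose primary
# group is 'qemu', with 'bin' and 'libvirt' as supplementary groups.
# Run as root.
groupadd --system qemu
useradd --system -g qemu -G bin,libvirt -s /sbin/nologin qemu

# Verify: the output should show gid 'qemu' and groups bin, libvirt.
id qemu
```

Then reboot (or restart libvirtd) so the daemon can resolve the user.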

Now I get


# systemctl status libvirtd
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Wed 2014-10-29 20:05:38 EDT; 8min ago
 Main PID: 2935 (libvirtd)                                                                                  
   CGroup: /system.slice/libvirtd.service                                                                   
           └─2935 /usr/sbin/libvirtd --listen                                                               
                                                                                                            
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route - destination and gw are both 0/0                  
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route from netlink message: No such file or directory    
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route - destination and gw are both 0/0                  
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route from netlink message: No such file or directory    
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route - destination and gw are both 0/0                  
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route from netlink message: No such file or directory    
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route - destination and gw are both 0/0                  
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route from netlink message: No such file or directory    
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route - destination and gw are both 0/0
Oct 29 20:13:52 VERN libvirtd[2935]: Cannot create route from netlink message: No such file or directory

I can now log in with VMM; however, all the guest machines are paused and fail to run.

From what I see, something is not generating the virtual network interfaces for the VMs.

Suggestions?

I find that when starting libvirtd initially I get this:


# systemctl -l status libvirtd
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Wed 2014-10-29 22:13:06 EDT; 31min ago
 Main PID: 32692 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─32692 /usr/sbin/libvirtd --listen


Oct 29 22:13:06 VERN systemd[1]: Starting Virtualization daemon...
Oct 29 22:13:06 VERN systemd[1]: Started Virtualization daemon.
Oct 29 22:13:06 VERN libvirtd[32692]: libvirt version: 1.1.2
Oct 29 22:13:06 VERN libvirtd[32692]: Configured security driver "none" disables default policy to create confined guests
Oct 29 22:13:06 VERN libvirtd[32692]: Failed to query host NUMA topology, disabling NUMA capabilities
Oct 29 22:13:06 VERN libvirtd[32692]: Failed to get host CPU
Oct 29 22:13:07 VERN libvirtd[32692]: Failed to query host NUMA topology, disabling NUMA capabilities
Oct 29 22:13:07 VERN libvirtd[32692]: Failed to query host NUMA topology, disabling NUMA capabilities

It’s only after that I see the error mentioned in the previous post, and the error repeats ad nauseam when firing up VMM.

And no, the default route points to br0.

Still searching.

More:

System is a Dell Precision T5500


processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 26
model name	: Intel(R) Xeon(R) CPU           E5520  @ 2.27GHz
stepping	: 5
cpu MHz		: 1596.000
cache size	: 8192 KB
fpu		: yes
fpu_exception	: yes
cpuid level	: 11
wp		: yes
flags		: fpu de tsc msr pae mce cx8 apic sep mtrr mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nopl aperfmperf pni est ssse3 cx16 sse4_1 sse4_2 popcnt hypervisor lahf_lm ida dtherm
bogomips	: 4527.15
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

(processors 1 through 15 report identical values and are omitted here)

grep --color vmx /proc/cpuinfo

Produced no output.
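As an aside, the empty grep is expected here: when the kernel boots as a Xen dom0, the hypervisor owns VT-x, so the vmx flag is masked out of dom0’s /proc/cpuinfo. Asking the hypervisor is more reliable; a sketch (works with the old xm toolstack shown in this thread):

```shell
# dom0's /proc/cpuinfo hides vmx/svm under Xen; query the hypervisor
# instead. "hvm" in virt_caps means VT-x/AMD-V is usable for HVM guests.
xm info | grep virt_caps
```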


# zypper lr -d
#  | Alias                     | Name                               | Enabled | Refresh | Priority | Type   | URI                                                                             | Service
---+---------------------------+------------------------------------+---------+---------+----------+--------+---------------------------------------------------------------------------------+--------
 1 | KDE Extra                 | KDE Extra                          | Yes     | Yes     |   99     | rpm-md | http://download.opensuse.org/repositories/KDE:/Extra/KDE_Current_openSUSE_13.1/ |        
 2 | KDE SC packages           | KDE SC packages                    | Yes     | Yes     |   99     | rpm-md | http://download.opensuse.org/repositories/KDE:/Current/openSUSE_13.1/           |        
 3 | LibreOffice STABLE        | LibreOffice STABLE                 | Yes     | Yes     |   99     | rpm-md | http://download.opensuse.org/repositories/LibreOffice:/Stable/openSUSE_13.1/    |        
 4 | VLC VideoLan Client       | VLC VideoLan Client                | Yes     | Yes     |   99     | rpm-md | http://download.videolan.org/pub/vlc/SuSE/12.3/                                 |        
 5 | Wine                      | Wine                               | Yes     | Yes     |   99     | rpm-md | http://download.opensuse.org/repositories/Emulators:/Wine/openSUSE_13.1/        |        
 6 | openSUSE-13.1-1.10        | openSUSE-13.1-1.10                 | Yes     | No      |   99     | yast2  | cd:///?devices=/dev/disk/by-id/ata-HL-DT-ST_DVDRAM_GH24NS50_K3G9AM94921         |        
 7 | packman                   | packman                            | Yes     | Yes     |   99     | rpm-md | http://packman.inode.at/suse/openSUSE_13.1/                                     |        
 8 | packman-essentials        | packman-essentials                 | Yes     | Yes     |   99     | rpm-md | http://packman.inode.at/suse/openSUSE_13.1/Essentials/                          |        
 9 | packman-multimedia        | packman-multimedia                 | Yes     | Yes     |   99     | rpm-md | http://packman.inode.at/suse/openSUSE_13.1/Multimedia/                          |        
10 | repo-debug                | openSUSE-13.1-Debug                | No      | Yes     |   99     | NONE   | http://download.opensuse.org/debug/distribution/13.1/repo/oss/                  |        
11 | repo-debug-update         | openSUSE-13.1-Update-Debug         | No      | Yes     |   99     | NONE   | http://download.opensuse.org/debug/update/13.1/                                 |        
12 | repo-debug-update-non-oss | openSUSE-13.1-Update-Debug-Non-Oss | No      | Yes     |   99     | NONE   | http://download.opensuse.org/debug/update/13.1-non-oss/                         |        
13 | repo-non-oss              | openSUSE-13.1-Non-Oss              | Yes     | Yes     |   99     | yast2  | http://download.opensuse.org/distribution/13.1/repo/non-oss/                    |        
14 | repo-oss                  | openSUSE-13.1-Oss                  | Yes     | Yes     |   99     | yast2  | http://download.opensuse.org/distribution/13.1/repo/oss/                        |        
15 | repo-source               | openSUSE-13.1-Source               | No      | Yes     |   99     | NONE   | http://download.opensuse.org/source/distribution/13.1/repo/oss/                 |        
16 | repo-update               | openSUSE-13.1-Update               | Yes     | Yes     |   99     | rpm-md | http://download.opensuse.org/update/13.1/                                       |        
17 | repo-update-non-oss       | openSUSE-13.1-Update-Non-Oss       | Yes     | Yes     |   99     | rpm-md | http://download.opensuse.org/update/13.1-non-oss/                               |        


# rpm -qa xen*
xen-libs-4.3.2_02-27.1.x86_64
xen-4.3.2_02-27.1.x86_64
xen-libs-32bit-4.3.2_02-27.1.x86_64
xen-kmp-desktop-4.3.0_14_k3.11.6_4-1.3.x86_64
xen-doc-html-4.3.2_02-27.1.x86_64
xen-xend-tools-4.3.2_02-27.1.x86_64
xen-tools-4.3.2_02-27.1.x86_64
xen-kmp-desktop-4.3.2_02_k3.11.10_21-27.1.x86_64

Went to look for some log files.

/var/log/libvirt/libvirtd.log no longer exists.

Could there have been an upgrade in Xen versions between 12.3 and 13.1?

# cat /var/log/libvirt/libxl/libxl.log
xc: debug: hypercall buffer: total allocations:7 total releases:7
xc: debug: hypercall buffer: current allocations:0 maximum allocations:1
xc: debug: hypercall buffer: cache current size:1
xc: debug: hypercall buffer: cache hits:6 misses:1 toobig:0
libxl: error: libxl.c:87:libxl_ctx_alloc: Is xenstore daemon running?
failed to stat /var/run/xenstored.pid: No such file or directory

(the hypercall-buffer block above repeats many more times, with the xenstore error appearing again partway through)

Makes me think so.
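The “Is xenstore daemon running?” line in that log is the part worth chasing. A quick sketch for checking it; the unit name is an assumption (on Xen of this vintage the support daemons usually live in a xencommons service, so verify with `systemctl list-unit-files | grep xen`):

```shell
# Is the xenstore daemon actually up? libxl stats the pid file below.
ps -C xenstored -o pid,cmd
ls -l /var/run/xenstored.pid

# The unit that starts xenstored/xenconsoled at boot (name may differ
# between releases):
systemctl status xencommons
```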

I also came up with a new error, I think!


# systemctl -l status libvirtd
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Thu 2014-10-30 07:11:21 EDT; 6min ago
 Main PID: 2971 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─2971 /usr/sbin/libvirtd --listen


Oct 30 07:11:24 VERN libvirtd[2971]: Failed to query host NUMA topology, disabling NUMA capabilities
Oct 30 07:11:24 VERN libvirtd[2971]: Failed to query host NUMA topology, disabling NUMA capabilities
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:25 VERN libvirtd[2971]: Unable to issue hypervisor ioctl 3166208: Permission denied
Oct 30 07:11:26 VERN libvirtd[2971]: this function is not supported by the connection driver: virConnectListAllDomains

As I go through this I’m coming across other relevant info.


# xm info
host                   : VERN
release                : 3.11.10-21-xen
version                : #1 SMP Mon Jul 21 15:28:46 UTC 2014 (9a9565d)
machine                : x86_64
nr_cpus                : 16
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 2
cpu_mhz                : 2261
hw_caps                : bfebfbff:28100800:00000000:00003b00:009ce3bd:00000000:00000001:00000000
virt_caps              : hvm
total_memory           : 24573
free_memory            : 2074
free_cpus              : 0
max_free_memory        : 23248
max_para_memory        : 23244
max_hvm_memory         : 23181
xen_major              : 4
xen_minor              : 3
xen_extra              : .2_02-27.1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : 27404
xen_commandline        : 
cc_compiler            : gcc (SUSE Linux) 4.8.1 20130909 [gcc-4_8-branch revision 202388
cc_compile_by          : abuild
cc_compile_domain      : 
cc_compile_date        : Wed Oct  1 14:43:47 UTC 2014
xend_config_format     : 4

and I think this gives a better view of the installed packages.


# rpm -q -a |grep xen
kernel-xen-3.11.10-21.1.x86_64
xen-libs-4.3.2_02-27.1.x86_64
xen-4.3.2_02-27.1.x86_64
patterns-openSUSE-xen_server-13.1-13.6.1.x86_64
libvirt-daemon-driver-xen-1.1.2-2.36.1.x86_64
libvirt-daemon-xen-1.1.2-2.36.1.x86_64
xen-libs-32bit-4.3.2_02-27.1.x86_64
xen-kmp-desktop-4.3.0_14_k3.11.6_4-1.3.x86_64
kernel-xen-3.11.6-4.1.x86_64
xen-doc-html-4.3.2_02-27.1.x86_64
xen-xend-tools-4.3.2_02-27.1.x86_64
xen-tools-4.3.2_02-27.1.x86_64
xen-kmp-desktop-4.3.2_02_k3.11.10_21-27.1.x86_64

I think this may also be relevant.

http://wiki.xen.org/wiki/MigrationGuideToXen4.1%2B#Toolstack_upgrade_notes

The adventure continues :open_mouth:

So how do I go about this if the VMs aren’t running?

tia

Wil

Folks, it looks like they upgraded Xen, not just openSUSE.

I hope there is someone that can assist me in the migration path.

Thanks

Wil

Is the XEN configuration right after the upgrade?
Snapshots of the VMs should run after being imported again.

Honestly, I don’t know. I do know the VMs are not running, and attempts to use xend and libvirtd error out when trying to run them.

I wasn’t prepared for this in an upgrade.

Wil

This may or may not work.

When I experienced this error on a KVM machine years ago (which may or may not be caused by something different but with the same result),

Failed to connect socket to ‘/var/run/libvirt/libvirt-sock’: No such file or directory

I resolved it by completely uninstalling libvirt, verifying all related files were removed, and then re-installing.

More or less addresses what has been described in this thread…

  • The OS has been upgraded <and updated>.
  • If libvirt didn’t successfully upgrade, then remove everything and re-install.

Note that this may result in a working libvirt that isn’t already loaded with your guests; you may need to add them back into vm manager manually.

Of course, things like snapshots can’t be done with a non-working vm manager. But if you do want to do that (I can’t think of a reason why), you should be able to use the command line: anything you can do in vm manager you should also be able to do with virsh in a console.
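To illustrate, a few virsh equivalents of common vm manager actions (a sketch; ‘mydomu’ is a made-up guest name, and snapshot support depends on the connection driver, so the Xen/libxl driver may not implement it):

```shell
virsh list --all           # show all guests, running or shut off
virsh start mydomu         # power a guest on ('mydomu' is a placeholder)
virsh dominfo mydomu       # basic details about a guest

# Named snapshot, only where the driver supports it:
virsh snapshot-create-as mydomu pre-upgrade
```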

Might work for you, too.

TSU

That being the case, do I want to uninstall all of the following?


# rpm -q -a |grep libvirt
libvirt-daemon-driver-vbox-1.1.2-2.36.1.x86_64
libvirt-daemon-driver-libxl-1.1.2-2.36.1.x86_64
libvirt-daemon-driver-secret-1.1.2-2.36.1.x86_64
libvirt-daemon-driver-network-1.1.2-2.36.1.x86_64
libvirt-1.1.2-2.36.1.x86_64
libvirt-daemon-config-network-1.1.2-2.36.1.x86_64
libvirt-daemon-driver-qemu-1.1.2-2.36.1.x86_64
libvirt-daemon-driver-nwfilter-1.1.2-2.36.1.x86_64
libvirt-python-1.1.2-2.36.1.x86_64
libvirt-daemon-driver-interface-1.1.2-2.36.1.x86_64
libvirt-daemon-driver-xen-1.1.2-2.36.1.x86_64
libvirt-daemon-xen-1.1.2-2.36.1.x86_64
libvirt-daemon-driver-storage-1.1.2-2.36.1.x86_64
libvirt-daemon-driver-lxc-1.1.2-2.36.1.x86_64
libvirt-daemon-driver-nodedev-1.1.2-2.36.1.x86_64
libvirt-daemon-config-nwfilter-1.1.2-2.36.1.x86_64
libvirt-daemon-1.1.2-2.36.1.x86_64
libvirt-client-1.1.2-2.36.1.x86_64
libvirt-daemon-driver-uml-1.1.2-2.36.1.x86_64

And then how do I verify all have been completely removed?

I’m not so interested in doing a snapshot right now. My primary concern is getting the VMs back online in the easiest manner possible.

Hope you can stick with me through all of this.

Wil

TSU,

The more I think about this, and noticing that xend is going away, it may be wiser to move my config to the newer xl toolset. I believe I have pulled my xend configs for the domUs using the command “xm list -l domU”, which I saved as text files. I really don’t know what broke xend and libvirt, and maybe the latter isn’t broken if I get xl running with the proper configs.
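The config export I used looks roughly like this (a sketch; the domU names in the loop and the backup path are placeholders for your own):

```shell
# Dump each domU's xend-format (SXP) config to a text file for
# safekeeping before migrating to the xl toolstack.
mkdir -p /root/xend-configs
for dom in domu1 domu2; do   # placeholder domU names
    xm list -l "$dom" > "/root/xend-configs/$dom.sxp"
done
```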

I suspect libvirt might work against a functioning hypervisor.

Just pondering before I go doing something foolish, again.

Wil

TSU and all,

To do this as cleanly as possible, I rebooted into the desktop kernel instead of the xen kernel. Then uninstalled, rebooted again back to the desktop kernel, and re-installed.

No change and same errors.

Now, after the initial upgrade, I noticed that neither xend nor libvirtd was enabled or running. That makes sense if they’re deprecating xend; I don’t know regarding libvirt. To confirm this:

xl list

Name ID Mem VCPUs State Time(s)
Domain-0 0 24110 16 r----- 602.5

xm list

Error: Unable to connect to xend: No such file or directory. Is xend running?

ps -A | grep xen

121 ? 00:00:00 xenwatch
122 ? 00:00:00 xenbus
989 ? 00:00:00 xen_pciback_wor
1006 ? 00:00:00 xenstored
1017 ? 00:00:00 xenconsoled

So since dom0 is the only listed domain under the new tools, I’m gonna go out on a limb here and say the next step would be to get my domUs migrated and running.

Additional help would be enormously appreciated.

Will

Yes, IIRC I first uninstalled the libvirt package

zypper rm libvirt

Followed by running locate (if you use this utility, you need to install mlocate and then run “updatedb” to create your filename database):

locate libvirt

If the above didn’t return clean, then I manually removed what was listed.
This basically ensures that libvirt is removed completely, including configurations.
Then, I installed

zypper in libvirt

At least for me, in my situation, it resolved some kind of misconfiguration, and libvirt knew how to access the localhost libvirt server again.

I suspect the issue is similar no matter what the underlying virtualization technology is. There is a “server” side and a “user/client” side to libvirt, and the client needs to know how to communicate with the server. I’ve guessed, without any supporting info, that it’s a simple configuration issue; it might be a corrupted file or a file with incorrect information. That led to my approach of doing more than simply uninstalling: I needed to know that all configuration files were removed as well.
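Collected into one sketch, the cycle described above looks like this. It is destructive: back up /etc/libvirt and /var/lib/libvirt first, and treat those paths as assumptions about where your leftovers live.

```shell
zypper rm libvirt   # remove the package (zypper lists dependents too)
updatedb            # refresh mlocate's filename database
locate libvirt      # anything still listed is a leftover to inspect

# Remove leftovers by hand (typically configs under /etc/libvirt and
# state under /var/lib/libvirt), then reinstall cleanly:
zypper in libvirt
```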

TSU

I am proud to say I’ve made some progress in recovering from the upgrade. I have the domUs up and operating almost as they were previously, accessible via VNC. However, I still have loads of questions which remain unanswered.

Best I can tell, xend has been deprecated and is sort of useless. I’m almost leaning in the same direction with libvirt and other related tools too. YMMV, although I’m gonna keep posting what I learn along the way, because I find it hard to believe that I’m the only one suffering from this virtualization fiasco.

For anyone going down this road I highly recommend reviewing;
http://wiki.xenproject.org/wiki/Migration_Guide_To_Xen4.1%2B and,
http://wiki.xenproject.org/wiki/Network_Configuration_Examples_(Xen_4.1%2B), then
http://wiki.xen.org/wiki/XL and pick your xen version. And there may be others.

In there it tells you that xend has been deprecated and will go away in the future. That is why I don’t know how this will affect other related tools that worked against xend.

Following the aforementioned documents, I reconfigured/renamed the network bridge to comply with the xenbr0 naming convention. Since I already had the old br0 bridged interface, I found the easiest way was the following, as root.


cd /etc/sysconfig/network
systemctl stop network
cp ifcfg-br0 ifcfg-xenbr0
mv ifcfg-br0 xifcfg-br0
systemctl start network

Then I went into YaST and reconfigured the firewall appropriately. There may be other ways, but this is what worked for me. :wink:

Luckily I still had valid config files in /etc/xen/vm/“domU”, and started each using the following command syntax:


xl create /etc/xen/vm/"domU"

and,


xl vncviewer "domU"

connects VNC to the domU console.

Now to figure out how to get them to start on boot.
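For the on-boot question, one common convention, sketched below; it is an assumption to verify on your release that openSUSE ships a xendomains service that starts every config linked under /etc/xen/auto.

```shell
# Link a domU config into /etc/xen/auto and enable the service that
# starts everything found there at boot. 'mydomu' is a placeholder.
mkdir -p /etc/xen/auto
ln -s /etc/xen/vm/mydomu /etc/xen/auto/mydomu
systemctl enable xendomains
```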

Wil