Using rbds as physical disks passed to Xen domains is on my list of things to play with. I'm working with two Xen dom0s that are Ceph clients, and it is really simple to map an rbd as a block device on each dom0.
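Each dom0 has to be a working Ceph client first (ceph-common installed, /etc/ceph/ceph.conf plus a client keyring in place); the package name and the quick sanity checks below are my assumptions, not part of the original run.

xen0:~ # zypper install ceph-common
xen0:~ # ceph -s
xen0:~ # rbd ls rbd

With that in place, mapping on xen0: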
xen0:~ # rbd lock ls test1
xen0:~ # rbd map rbd/test1
/dev/rbd0
xen0:~ # ls -l /dev/rbd
rbd/ rbd0 rbd0p1 rbd0p2
xen0:~ # ls -l /dev/rbd/rbd/test1
lrwxrwxrwx 1 root root 10 Nov 19 21:50 /dev/rbd/rbd/test1 -> ../../rbd0
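Once mapped, rbd itself can show and tear down the mapping; these commands aren't in the transcript above, they're just the standard way to check. (Don't actually unmap here, the domain needs the image mapped on both dom0s.)

xen0:~ # rbd showmapped
xen0:~ # rbd unmap /dev/rbd0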
The test1 domain's xl config is on a CephFS that both dom0s share:
xen1:/cephfs/space/etc/xen/vm # cat test1
< … snip … >
disk = [ "vdev=xvda,target=/dev/rbd/rbd/test1",
         "file:/cephfs/space/etc/xen/images/openSUSE-Leap-15.1-DVD-x86_64.iso,xvdb:cdrom,r" ]
< … snip … >
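For context, a complete config around that disk line would look roughly like the sketch below; the snipped parts (name, type, memory, vcpus, vif) are filled in with assumed values just to make it self-contained, they are not from the real file.

name   = "test1"
type   = "hvm"
memory = 2048
vcpus  = 2
vif    = [ "bridge=br0" ]
disk   = [ "vdev=xvda,target=/dev/rbd/rbd/test1",
           "file:/cephfs/space/etc/xen/images/openSUSE-Leap-15.1-DVD-x86_64.iso,xvdb:cdrom,r" ]

The same image then gets mapped on the second dom0 before the domain is created there: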
xen1:/cephfs/space/etc/xen/vm # rbd map rbd/test1
/dev/rbd0
xen1:/cephfs/space/etc/xen/vm # ls -l /dev/rbd/rbd/test1
lrwxrwxrwx 1 root root 10 Nov 19 21:51 /dev/rbd/rbd/test1 → …/…/rbd0
xen1:/cephfs/space/etc/xen/vm # xl create test1
Parsing config from test1
xen1:/cephfs/space/etc/xen/vm # xl migrate test1 xen0
migration target: Ready to receive domain.
Saving to migration stream new xl format (info 0x3/0x0/1686)
Loading new save file <incoming migration stream> (new xl fmt info 0x3/0x0/1686)
Savefile contains xl domain config in JSON format
Parsing config from <saved>
xc: info: Saving domain 13, type x86 HVM
xc: info: Found x86 HVM domain from Xen 4.12
xc: info: Restoring domain
xc: info: suse_precopy_policy: domU 13, too many iterations (6/5)
xc: info: Restore successful
xc: info: XenStore: mfn 0xfeffc, dom 1, evt 1
xc: info: Console: mfn 0xfefff, dom 0, evt 2
migration target: Transfer complete, requesting permission to start domain.
migration sender: Target has acknowledged transfer.
migration sender: Giving target permission to start.
migration target: Got permission, starting domain.
migration target: Domain started successsfully.
migration sender: Target reports successful startup.
Migration successful.
test1:~ # lsblk
NAME              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                11:0    1  3.8G  0 rom
xvda              202:0    0   36G  0 disk
├─xvda1           202:1    0    8M  0 part
└─xvda2           202:2    0   36G  0 part
  ├─system-swap   254:0    0  1.8G  0 lvm  [SWAP]
  ├─system-root   254:1    0 19.8G  0 lvm  /
  └─system-home   254:2    0 14.5G  0 lvm  /home
test1:~ #
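The transcript doesn't show it, but xl list on each dom0 is the quick way to confirm where test1 is actually running after a migration:

xen0:~ # xl list test1
xen1:~ # xl list test1

Migrating it back to xen1: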
xen0:~ # xl migrate test1 xen1
migration target: Ready to receive domain.
Saving to migration stream new xl format (info 0x3/0x0/1686)
Loading new save file <incoming migration stream> (new xl fmt info 0x3/0x0/1686)
Savefile contains xl domain config in JSON format
Parsing config from <saved>
xc: info: Saving domain 55, type x86 HVM
xc: info: Found x86 HVM domain from Xen 4.12
xc: info: Restoring domain
xc: info: suse_precopy_policy: domU 55, too many iterations (6/5)
xc: info: Restore successful
xc: info: XenStore: mfn 0xfeffc, dom 1, evt 1
xc: info: Console: mfn 0xfefff, dom 0, evt 2
migration target: Transfer complete, requesting permission to start domain.
migration sender: Target has acknowledged transfer.
migration sender: Giving target permission to start.
migration target: Got permission, starting domain.
migration target: Domain started successsfully.
migration sender: Target reports successful startup.
Migration successful.
It's not a bad setup, but I have little experience with it. The complex part of this approach is having to keep the rbdmap config in sync across all the Xen dom0s:
https://docs.ceph.com/docs/master/man/8/rbdmap/?highlight=rbdmap
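rbdmap reads /etc/ceph/rbdmap at boot and maps whatever is listed there, so every dom0 that might host the domain needs the same entry. A sketch of what that would look like for test1 (the client id and keyring path are assumed):

xen0:~ # cat /etc/ceph/rbdmap
# RbdDevice      Parameters
rbd/test1        id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
xen0:~ # systemctl enable --now rbdmap.service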
I'm thinking the qcow2 issue is in the qemu code … I found "is unexpected" in the qemu source but don't have time to run it down.
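If anyone wants to chase that, a plain grep over a qemu source checkout for the message is probably the place to start (the checkout location here is just an example):

xen1:~ # grep -rn "is unexpected" qemu/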