Newbie in KVM

Hello.

I want to start learning KVM, after a long time using VirtualBox. I’ve already read several threads here in the subforum, but it seems there’s more than one way to get started --i.e., which packages to install first of all.

By any chance, is there some kind of “official” guide to getting started with KVM, installing and managing virtual machines? That is, assuming the 42.3 repositories are already stable/complete enough…

Thanks beforehand.

Installing KVM should always be done using the YaST virtualization module.
When you launch the module, it will offer to install KVM, Xen, or LXC.
Choose KVM. You can install any combination of the three, or all of them, but I don’t recommend doing that for a first install.

When your install proceeds, it will install libvirt, a virtualization management system that supports about a dozen different virtualization and isolation technologies (not just the 3 mentioned). You will also be prompted whether to install a bridging device, which you should accept, although I don’t often use it. Bridging devices are common to nearly all virtualization technologies and are a common way to implement and configure virtual networks. You can later create and configure numerous bridging devices bound to the same or different physical network adapters, each configured differently, eg NAT, Host-Only, different IP address ranges, with or without its own DHCP, etc.
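
To illustrate, here is roughly what defining an extra NAT virtual network with its own DHCP range looks like through libvirt (a sketch only; the network name, bridge name and address range are placeholders I made up):

  # testnet.xml
  <network>
    <name>testnet</name>
    <forward mode='nat'/>
    <bridge name='virbr1'/>
    <ip address='192.168.150.1' netmask='255.255.255.0'>
      <dhcp>
        <range start='192.168.150.100' end='192.168.150.200'/>
      </dhcp>
    </ip>
  </network>

  # register and start the network
  virsh net-define testnet.xml
  virsh net-start testnet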

When your virtualization install is completed, you’ll have KVM installed with libvirt, which will also provide you with two tools:

  • virt-manager, a graphical tool for managing your Guests, creating and managing virtual networks, configuring storage pools, and more
  • virt-install, for creating new Guests from the command line

I recommend the SLES 11 SP4 documentation for everything that comes after installation. Although installations on SLES and openSUSE are different, KVM and libvirt are pretty much the same for everything else. You can also ask questions here.

SLES 11 SP4 KVM documentation
https://www.suse.com/documentation/sles11/singlehtml/book_kvm/book_kvm.html

TSU

Thanks sir.

  1. The linked docs are for SLES 11, while Leap 42.3 is AFAIK based on SLES 12. Wouldn’t this be a little better? (I just found it…) https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/index.html

  2. WTH is this about windows-virtio.iso --or however it’s called…-- being long deprecated already? So now I’m forced to buy the driver pack from Novell if I want virtio hdd, network and RAM balloon drivers in a Windows guest?

  3. How can I actually make/restore VM backups? Is that what’s called “snapshots” here?

  4. Could someone help me understand what “libvirt” and “qemu” are, and how it is that they are similar but entirely different products, or whatever? This topic has always gotten on my nerves a bit…

Thanks beforehand.

Hi
You’re not forced into anything; why shouldn’t an entity charge to build/sign and maintain them?

You could try the Fedora ones, or build them yourself…?
https://fedoraproject.org/wiki/Windows_Virtio_Drivers

libvirt is a virtualization management framework. QEMU is a hardware emulator. KVM is a specific virtualization API. Colloquial use of “KVM” for virtualization almost unanimously means “QEMU with KVM support”, meaning QEMU on a Linux host kernel with KVM support, running a guest that is also KVM-aware.
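
To make that concrete, the difference is visible in how QEMU is invoked (a rough sketch; the image name and memory size are placeholders):

  # QEMU alone: pure software emulation, works anywhere but is slow
  qemu-system-x86_64 -m 2048 -hda guest.qcow2

  # QEMU using the KVM kernel API: hardware-assisted, near-native speed
  qemu-system-x86_64 -enable-kvm -m 2048 -hda guest.qcow2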

First, a summary and links to my previous posts about documentation… My criticism has been harsh because IMO it’s especially important for recognized and recommended documentation to be accurate. Really good and comprehensive documentation is hard to come by, and poor documentation is one of the worst things there is, because of the effect documentation has on disseminating truths rather than falsehoods.

I’ve reviewed the SLES 12 KVM documentation and find it leaves much to be desired. I have a particular beef with that documentation currently easing into the same improperly named concepts as the openSUSE community documentation (which I object to even more strenuously than the SLES 12 documentation).

My previous posted opinion about SLES 12 virtualization documentation
https://forums.opensuse.org/showthread.php/524724-quot-New-quot-SUSE-virtualization-documentation-(Applies-also-to-openSUSE)

And, the following is my earlier criticism of the openSUSE community documentation. In fact, I posted enough detail that if someone were properly compensated or otherwise motivated, each item I identified could be verified and the documentation fixed.
https://forums.opensuse.org/showthread.php/514397-Where-has-official-(PDF)-documentation-gone?highlight=tsu2+virtualization

Note that there is some good stuff in the SLES 12 documentation, but because of the misinformation that is also there, a student would either have to already know enough to know what to disregard, or be prepared to unlearn/relearn a number of concepts in order to move ahead.

Anyway, the bottom line is that IMO the SLES 11 SP4 KVM documentation is <exceptionally> good and complete… and at least for now it can be considered an excellent reference. Only after the User becomes well versed in that documentation does the SLES 12 SP2 KVM documentation become an interesting read, because there <are> some nuggets of really good stuff in there, too.

Now, to your other points…

  1. Virtio is an interesting technology. I haven’t delved deeply into the underlying facts, I’ve just relied on general documentation. Only Windows still uses virtio; on Linux, virtio was at one time considered “better” but was deprecated (about 2 years ago?). You should be able to obtain the Windows virtio drivers for free (I did awhile back, and I don’t think anything has changed).

  2. There are many ways to make backups…

  • You can simply copy the entire machine
  • You can execute an ordinary backup from within the Guest
  • You can execute an ordinary backup from the Host
  • You can “clone” the Guest, which creates a copy with changes that permit the clone to run simultaneously on the same network; you can’t do that with a regular copy (see the sketch after this list)
  • If you consider High Availability a type of backup (it’s actually more of a fault-tolerance solution), then there are HA setups. Although normally expected to be deployed on bare metal, they’ll work for virtualized machines as well.
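
A minimal cloning sketch using libvirt’s virt-clone, assuming a shut-down Guest named “win81” (the Guest name is a placeholder):

  # Copies the domain definition and its disks, generating a new name,
  # MAC address and disk paths so original and clone can run side by side
  virt-clone --original win81 --name win81-clone --auto-clone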

Snapshots aren’t generally considered a type of backup; they’re more typically used to roll back changes when you’re about to do something highly risky or experimental. They were a godsend before we had Btrfs. I still do snapshotting a lot when I develop apps or test something someone posted in these Forums. The problem with snapshots is that excessive numbers of them are supposed to degrade performance (I don’t remember the reason why). Keep in mind also that if a disk becomes corrupted, it’s far easier to recover a single-file virtual disk. If you have a number of snapshots, you may have to integrate them before attempting a recovery, and I don’t know about trying to integrate when something is known to be corrupted.
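
For reference, snapshotting through libvirt looks roughly like this (a sketch; the Guest and snapshot names are placeholders):

  # Take a named snapshot of the Guest
  virsh snapshot-create-as win81 before-update --description "pre-update state"

  # List snapshots, then roll back to one
  virsh snapshot-list win81
  virsh snapshot-revert win81 before-update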

Libvirt - It’s a virtualization management technology that supports a large number of virtualization technologies (far more than the KVM, Xen and LXC which YaST can set up). There are alternatives to libvirt; some people use Vagrant, for example. Libvirt provides both a graphical tool (virt-manager) and command-line tools (virsh, virt-install). The alternative is to use the native KVM commands, which is entirely command line.
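
Day-to-day virsh management looks like this (the Guest name “win81” is a placeholder):

  virsh list --all          # show all defined Guests and their state
  virsh start win81         # boot a Guest
  virsh shutdown win81      # ask the Guest to shut down cleanly
  virsh dominfo win81       # show CPU/memory/state details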

QEMU - It used to be a separate technology, and it uniquely supports fully emulating architectures different from x86. So, for instance, you could run an old 8008 virtual machine. Or a DEC Alpha. Or an 80286. Or, very useful nowadays, ARM. And it supports numerous virtual I/O devices. About 2 years ago, QEMU was integrated into both KVM and Xen, enabling both KVM and Xen to support paravirtualization and full virtualization. (Hey, here is an example… although I noted that both KVM and Xen support both types of virtualization, each virtualization technology actually defines the terminology the opposite way, and that is an example of where our most recent documentation is wrong (unless and until fixed).) Again, at least for KVM users it’s important to learn from the SLES 11 SP4 documentation. The Xen documentation seems to be OK, even the most recent.
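
Full emulation of a foreign architecture looks roughly like this (a sketch only; the kernel/initrd paths are placeholders, and the exact machine/CPU options vary by QEMU version):

  # Emulate a 64-bit ARM machine on an x86 Host -- note the different binary
  qemu-system-aarch64 -M virt -cpu cortex-a57 -m 1024 \
      -kernel Image -initrd initrd.img -nographic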

IMO,
TSU

Just a FYI -
This thread gave me a reason to review what I wrote in those two previous posts describing documentation.

What I posted originally is still pretty accurate today (there are very few things I would change, due primarily to advances in technology since the posts were written), and for a User new to virtualization it should provide plenty of good stuff as an introduction to a number of topics.

Some changes as of today that differ from what I originally posted…

  • Due to Hypervisor changes, full emulation today can benefit from hardware assist. This means, for example, you can run an ARM virtual machine on an x64 physical processor and achieve some pretty good performance. In the old days, this type of emulation was possible but dog slow.
  • The latest versions of openSUSE replaced LXC management; the original was a YaST management tool. Today it’s managed by libvirt, the same recommended tooling used to manage KVM and Xen.

TSU

@tsu2:

  1. Although this is a different Linux distro,
    https://pve.proxmox.com/wiki/Windows_10_guest_best_practices
    here it advises using the “paravirtualized” virtio drivers as a best practice…

  2. I thought the “best practice” way for making VM backups was this one (again, from this other distro):
    https://pve.proxmox.com/wiki/Backup_and_Restore
    Hence the term “snapshot” (although they use a totally different format with extension .vma.lzo, according to what I read…)

  3. So KVM, Xen and LXC themselves are “virtualization technologies”, just like Windows, Linux, macOS, etc. are “operating systems”?
    If so, did QEMU use to be yet another separate virtualization technology on its own, but has now been merged into KVM and Xen?
    And so, is libvirt just a kind of “virtualization manager” encompassing many virtualization technologies? If so, why does the SLES 11 doc treat libvirt and QEMU as 2 different management approaches?

Yes, there is nothing I see in your reference that contradicts what I stated.
For <Windows> Guests, virtio drivers are considered “better.”
But, not for any other GuestOS.
Today, ordinary defaults are considered “better” than any old virtio drivers you can dig up.
It may be more clear when you actually install the virtio drivers <inside> the Windows Guest (procedure similar to installing Guest Tools or Additions).

I haven’t looked at Proxmox for awhile.
Looks like it’s evolved since when I looked at it before.
The procedures, tools and steps described look interesting, and my impression is that it’s a management system able to support more complex strategies with several types of backups (besides snapshots).
Note a probably important feature: a separate storage pool is created for backups. That might matter for performance as well as organization (I’d have to look more closely and possibly set up some monitoring to verify what I suspect). But, as I mentioned in my previous post, this substantially increases the complexity of your “virtual disk”, which might be comprised of numerous files. How easy would it be to recover a corrupted disk, particularly if you don’t want to roll back to a backup?
The big test for any backup system is whether it works, the scenarios it supports and how quickly you can return to a running state. To my eye, the Proxmox system looks encouraging enough to consider and test.

Actually, LXC is not virtualization, although it’s often categorized as such and discussed alongside virtualization technologies because of some shared characteristics. LXC is isolation only, running on unvirtualized hardware, except for its use of Linux bridging devices to implement virtual networks.

Yes, up to a few years ago QEMU was its own, completely separate virtualization technology.
Note though the limits of its integration within KVM and Xen… certain minor parts seem to be part of “normal” KVM and Xen, while the major parts that support full emulation are still completely separate and are invoked using different commands than the normal, regular ones. With continuing changes to the hypervisors, maybe this separation will eventually disappear and there will be only one mode, but today full emulation is separate from the normal modes, in which the GuestOS architecture must be similar to the HostOS architecture (generally x86 or x64, with I/O devices designed for this architecture).

Especially if you want to think of libvirt’s more versatile capabilities, libvirt should be considered completely separate from the underlying virtualization technology, whatever it might be. And although in openSUSE libvirt is generally always installed to manage KVM, Xen and LXC, the more senior Administrator should always keep in mind that the native commands can be used instead of libvirt’s virsh if something is wrong with libvirt. But for normal use, virsh should always be used instead of the native command set, because defaults and behaviors are designed to work with how libvirt organizes your virtualization assets.

Your question about QEMU and libvirt is the first time I remember anyone noticing this documentation organization… :slight_smile:
It’s not in the SLES documentation (a long time ago I posted about this as an important missing part), but it’s possible to configure libvirt to connect to a “QEMU server” instead of, or side by side with, a “KVM server.” So, although it’s not described in the KVM documentation… yes, you can set up libvirt’s virt-manager (at least), and possibly virt-install (I haven’t tried this), to manage QEMU virtual machines. Who knows, maybe someone long ago made a decision to avoid a lot of possible misunderstanding, since people might not remember that KVM and QEMU virtual machines, despite seeming similar in many ways, are actually very different technologies and would have to go through a complete conversion process to be run and managed as the other.
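
In libvirt terms the distinction shows up in the Guest’s domain definition; this fragment is just a sketch of the relevant attribute:

  <!-- hardware-assisted Guest, run via the KVM kernel module -->
  <domain type='kvm'>
    …
  </domain>

  <!-- pure software emulation, run by QEMU alone -->
  <domain type='qemu'>
    …
  </domain>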

So, based on the existing documentation you can easily be misled into thinking a libvirt vs QEMU comparison can be made… but that’s not the case. Instead, think of libvirt as an agnostic management system; the better comparison is KVM vs QEMU.

HTH,
TSU

Whoa… thanks very much again mr tsu2.

Actually, I once saw Proxmox in action. From what I could see, an important advantage of its backup system is that backups (or snapshots) are auto-compressed; it created a .vma.lzo file whose size depended on the space actually used on the virtual hdd, not its total size. For example, for a Windows 8.1 guest with a virtual hdd of 30 GB total size but only 15 GB used, the resulting backup file was circa 6 GB.
I initially thought this could be a good “friendly” general standard for KVM backups, but seemingly it’s a Proxmox thing…

As a separate doubt: was KVM --or possibly QEMU as well-- designed from the very beginning to have a server-client architecture? Was it originally intended for network purposes or the like? This would certainly differ from VirtualBox… that one indeed feels more like an “application” than a “server”.

Finally, the SLES doc advises using YaST to properly install KVM, just like you said. But isn’t there by chance a way to do it from the command line?

Again thanks very much beforehand.

Nowadays storage is so cheap that unless you’re transporting bytes across a bottleneck, compression might only be an extra risk factor. Or, you can consider ways to use less disk space… For instance, while I favor the “golden image” strategy to create clones, one could also create what is sometimes called a “linked clone”… similar to snapshots, the clone doesn’t stand alone but is “added” to another base image. The base image can be updated, which affects all linked clones as well, and the linked clones only track changes from the base image (see the sketch below). But given a choice, I always favor simplicity and reliability over saving a few hundred megabytes at a time.
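
With qcow2 disks, a linked clone is simply a new image backed by the base image (a sketch; the file names are placeholders):

  # Create an overlay that records only the blocks that differ from the base;
  # the overlay starts near-empty and grows only with the clone's own changes
  qemu-img create -f qcow2 -b golden-base.qcow2 linked-clone.qcow2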

Also, you can minimize excessive virtual disk free space by using sparse files (growable disks) and by zeroing out and compressing the disk as needed (very rare; disk usage naturally grows and rarely shrinks).
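
A rough sketch of compacting a qcow2 disk this way, with the Guest shut down for the second step (file names are placeholders):

  # Inside the Guest first: fill free space with zeros, then remove the filler
  # (dd stopping with "no space left" is expected here)
  dd if=/dev/zero of=/zero.fill bs=1M; rm /zero.fill

  # On the Host: rewrite the image, dropping zeroed blocks (-c also compresses)
  qemu-img convert -O qcow2 -c old.qcow2 compacted.qcow2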

When you’re talking about a client/server architecture, you need to be specific about what you mean. Just the fact that today’s virtualization runs on hypervisors suggests a client-server architecture, although not a true one. When you’re talking about libvirt management, libvirt implements a client-server paradigm that makes it easier to understand the management architecture, flows and methods. As an additional benefit, it’s common for practically all virtualization to support remote connections, which can be considered a security feature. When you deploy a dedicated HostOS, you want to minimize the attack surface, and promoting or allowing local logins is always considered a major security issue. By restricting to network access, the system is able to apply far better security.
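
Libvirt’s client-server design is easiest to see in its connection URIs (the host name and user below are placeholders):

  # Local connection to the system-level libvirt daemon
  virsh -c qemu:///system list --all

  # The same management commands, run against a remote Host over SSH
  virsh -c qemu+ssh://root@kvmhost.example.com/system list --all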

There is nothing that prevents someone from installing any virtualization <not> using YaST, particularly if you don’t intend to use libvirt (eg you use Vagrant instead to manage) or want to set up a different configuration. Just use zypper or the YaST Software Manager instead of the “Install Hypervisors…” module.

In the case of KVM, as I described, you can use the native KVM commands instead of virsh. And you can create your own Linux bridge devices manually or through another tool. (BTW - I found that if you create bridge devices using different technologies like VirtualBox, VMware, libvirt, etc., then because they are all the same thing underneath, all these virtualization technologies can use devices created by another technology. But never run more than one virtualization at a time.)
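
Creating a bridge by hand looks roughly like this (a sketch, assuming eth0 is the physical adapter; on a real Host you’d want to make it persistent through the network configuration rather than ad-hoc commands):

  ip link add name br0 type bridge      # create the bridge device
  ip link set dev eth0 master br0       # enslave the physical NIC
  ip link set dev br0 up                # bring the bridge up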

TSU

Thanks sir.
Though I meant how to install the “entire KVM suite” described in the SLES doc (which would indeed mean libvirt), but through the command line instead of YaST.

And finally, I know this may be a bit subjective, but could KVM in general have certain advantages over VirtualBox?

Doing a small “bump”…

A default KVM install and architecture generally complies with Production deployment principles.

Virtualbox does not.
A shortlist of issues off the top of my head…

  • Last time I installed VBox on a Linux box (admittedly awhile ago; today I install VBox on a Windows HostOS primarily), some files were installed into User directories, not all in the main application directory. If this has changed, then it’s one less issue.
  • VBox defaults to being invoked by a logged in User. There are commands to start Guests on boot, but even then it was my impression that the Guests would still run in a User-mode context instead of “unattended standalone” in their own security context. Part of this might be attributable to the next…
  • Perhaps the biggest issue is how VBox is often installed with the group vboxusers enabled and a User account a member of that group by default. Although convenient for Users, this should be considered a substantial security risk, especially on Production systems where no one is supposed to log in locally to the HostOS (or only rarely).

There are additional reasons which wouldn’t be considered serious but would be significant.
One of the biggest differentiators between all virtualization technologies is the User-mode tools for managing the technology… Otherwise, in today’s hardware-assisted virtualization world, there are generally few performance and security differences between any of them (except Xen; Xen stands alone from everything else). The VBox VM manager is designed primarily for the less sophisticated everyday User. Libvirt’s virt-manager makes things like storage pools and virtual networks more visible.

The same is true for other technologies as well, eg
VMware sells a wide product line, each product focused on a specific use, feature set and budget.
Microsoft’s Hyper-V in recent years has also split its technology offerings, into fewer pieces than VMware, but it still allows installing a bare hypervisor with a minimal OS, its management tools, or both combined.

TSU

So if I understood correctly, KVM complies better with Production environment “best practices” in general. But does it also tend to have a few advantages for desktop/end users?

Finally, reiterating the other question: the SLES doc advises using YaST to properly install KVM, just like you said. But isn’t there by chance a way to do it from the command line? Meaning, how do I install the “entire KVM suite” the SLES doc mentions (which would indeed mean libvirt), but through the command line instead of YaST?

Thanks again.

For the most part, when you break down the various virtualization technologies to their most basic functionality, you have the “platform”, which is the hypervisor, and you have the “User Tools”, which are the various commands and possibly graphical apps used to perform the various needed functions like install, configuration, organization and general management.

There are relatively few differences between hypervisors because nowadays everything is “hardware assisted”, ie uses CPU extensions. There are some differences in how the hypervisors hook into the CPU, but the end result is generally the same performance.

This applies generally to all non-Xen virtualization no matter what the vendor.
But, although Xen is very different from everything else, it’s also very good.

So, from that starting point you can begin to explore virtualization in general.

  • If you use any virtualization other than Xen, you can apply what you’ve learned to any of the other virtualization technologies consistently. Terminology and architecture are generally the same.
  • If you choose Xen, then you’re committed solely to it, at least in the beginning, and later you’ll need to keep flipping back and forth to make sure you’re keeping things straight.
  • If you are building virtual machines that will eventually be ported to the Cloud, each Cloud Provider is based on a particular technology, so choosing the same technology personally can ease migration and porting issues. For example, Amazon Web Services is built on Xen, so if your objective is to eventually deploy to AWS, that is what you should choose.
  • If you have the freedom and flexibility, try each and explore. Determine the primary backers of each.
  • Many management functions might be considered common, but others, like live migration, are still considered leading or bleeding edge by some (KVM’s is still experimental today) or not offered at all (eg VBox). If you’re supporting a large farm of virtual machines on a large number of hosts, maybe this feature is important to you. And virtual machine management is no different from many other complex, multi-function applications: everyone has a different idea how to best deliver the proper User Experience to its Target Audience.

As for installing virtualization by means other than YaST…
Well, if you have an idea of what is happening, or inspect the Pattern spec file, I imagine it wouldn’t be difficult to “build your own,” but it’s low on my personal list since what YaST does can be implemented by command line as well as by the graphical tool. I usually don’t go out of my way to dig into how something works when it’s already doing a flawless job, until something happens… or I need the components to do something else. But if this piques your curiosity, then go ahead and nose around (again, I’d probably recommend the pattern spec file as a starting point; it’s typically found on OBS or can be viewed in a text editor).
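
In practice, the package selection YaST makes corresponds to zypper patterns, so a command-line install is roughly the following (a sketch, assuming the standard Leap pattern names; verify them first with “zypper search -t pattern kvm”):

  # Hypervisor plus the libvirt management stack (virsh, virt-manager, virt-install)
  zypper in -t pattern kvm_server kvm_tools

  # Enable and start the libvirt daemon
  systemctl enable --now libvirtd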

TSU

Do you happen to know how to pass USB devices through from an openSUSE host to a Windows virtual guest, and how to properly unmount them as well? Commonly USB drives and hard disks, and if possible other kinds of USB devices…

Thanks.

You can do a search on “usb” in the documentation; it states that USB devices in general are supported. Storage is the most common and the most likely to support pass-through.

But, I wouldn’t generally advise doing a hardware pass through for any kind of storage.
If the HostOS recognizes the storage device, then it becomes an extension of the HostOS file system and so can be configured as a shared folder or a network share.
Remember, doing a hardware pass through of anything grants monopolistic control of the object, removing even the HostOS access. Sharing the file tree allows multiple Guests and the Host continuous access.

https://en.opensuse.org/User:Tsu2/virtfs#Overview
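
If you do need true pass-through anyway, libvirt supports handing a USB device to a Guest via a hostdev definition (a sketch; the Guest name and the vendor/product IDs are placeholders -- read the real IDs from lsusb):

  # usb-device.xml
  <hostdev mode='subsystem' type='usb'>
    <source>
      <vendor id='0x0951'/>
      <product id='0x1666'/>
    </source>
  </hostdev>

  # Attach to a running Guest, and detach again when done (the “unmount”)
  virsh attach-device win81 usb-device.xml --live
  virsh detach-device win81 usb-device.xml --live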

TSU

Also, it shouldn’t be overlooked that besides creating a “shared directory”, you can also create a virtual disk out of the USB storage and <then> add it to your Guest as an additional virtual disk (which can then be mounted any way you wish, eg fstab, manual mount, etc).
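
A sketch of that approach with virsh, assuming the Host sees the USB disk as /dev/sdb (the device names and Guest name are placeholders):

  # Hand the block device to the Guest as virtual disk vdb
  virsh attach-disk win81 /dev/sdb vdb --live

  # Remove it from the Guest again before unplugging the device
  virsh detach-disk win81 vdb --live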

Anything is better than doing a hardware pass-through.

TSU