Just a little guide
Congrats on creating a very attractive video with easy-to-follow steps.
Some comments:
Your “What is KVM?” section might strike someone as being what you describe, but it can be nit-picked technically. Still, if someone only wants a very general, practical description of what KVM does, it’s OK and the nit-picking details can be overlooked.
Although I haven’t seen a technical analysis of what makes Windows run faster in a VM, my speculation is different from yours. Based on my observation it’s:
- Reading from a (generally) compressed, compact and unfragmented file. In your video, the real physical blocks comprise at most 50MB (rather than the full disk size), and a lot less if you created a growable disk. So my speculation is that it’s a lot simpler and faster to locate the physical disk blocks despite the virtual disk file layer. (A quick way to check this is shown after this list.)
- CPU and memory today are “direct access” thanks to the Intel and AMD CPU virtualization extensions. Compared to older technologies, this is no longer the handicap virtualization once was.
- Although I don’t believe virtualized I/O provides the benefit you speculate, it has become very good. Unlike the CPU extensions, though, virtualized device I/O very much depends on the virtualization technology you’re using (KVM, VMware, VBox, etc.).
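You can see the virtual vs. allocated size difference for yourself with qemu-img and du (the image path below is only an example):
qemu-img info /var/lib/libvirt/images/win10.qcow2           # reports “virtual size” vs. “disk size”
du -h --apparent-size /var/lib/libvirt/images/win10.qcow2   # apparent (virtual) size
du -h /var/lib/libvirt/images/win10.qcow2                   # space actually allocated on disk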
Your 2-GPU setup is an interesting requirement because, of course, you want to deploy a Windows gaming platform, and providing a dedicated GPU with passthrough performance is good.
Your CPU configuration is interesting, and I’d be interested in knowing the reasoning behind your choices. Be aware that Linux and Windows are similar in that they are SMP kernels, which generally means tasks are automatically scheduled across all available CPUs and cores, with the distribution based on the kernel’s own algorithm. Besides SMP scheduling at the physical level, there is also no relationship (i.e. affinity) between the virtual CPU configuration and what actually happens at the physical level. So, generally, this means that ordinarily there is no advantage or effect in configuring multiple virtual CPU sockets and cores. The only exceptions I can think of are application requirements: if a gaming app specifically requires and looks for a multi-CPU/multi-core configuration, or is designed for better parallelism based on what it sees, then maybe. But again, keep in mind that what is seen virtually still won’t affect what is happening physically.
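If someone does want to experiment with affinity anyway, a minimal sketch using libvirt’s own tools (the guest name “win10” below is just an example):
virsh vcpuinfo win10      # shows each vCPU and the physical CPU it is currently running on
virsh vcpupin win10 0 2   # pin vCPU 0 to host core 2
virsh vcpupin win10 1 3   # pin vCPU 1 to host core 3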
Unless something has changed, libvirt calls its GUI “vm manager” and not “virt manager” – just quibbling.
I strongly recommend placing your ISO install files in their own directory and not in the same directory as a Guest.
Organizationally, this means a single install resource for all the VMs you create. For instance, I noticed in your video that you had created a previous Windows VM before creating the one for your video. Did you really have a copy of your ISO in each Guest directory? Better to place all your ISOs in a common place, and maybe even on a network share if you have VMs running on different machines.
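For example (all paths are just illustrative), keep the ISOs in one directory and register it as a libvirt storage pool so every Guest can use it:
mkdir -p /var/lib/libvirt/isos
mv ~/Downloads/*.iso /var/lib/libvirt/isos/
virsh pool-define-as isos dir --target /var/lib/libvirt/isos   # make the directory visible to libvirt
virsh pool-start isos
virsh pool-autostart isos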
Disk format is an important choice. The default qcow2 is a good performance choice, but you may want to make a different choice for considerations not just of performance but also fault tolerance, recoverability, portability, or universality (use with different virtualization technologies). No choice is bad unless you make it blindly.
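As a rough illustration (sizes and paths are made up):
qemu-img create -f qcow2 /var/lib/libvirt/images/win10.qcow2 60G   # grows on demand, supports snapshots and compression
qemu-img create -f raw /var/lib/libvirt/images/win10.img 60G       # simplest, most portable across technologies
qemu-img convert -p -O vmdk /var/lib/libvirt/images/win10.qcow2 /var/lib/libvirt/images/win10.vmdk   # convert later if needed, e.g. to VMDK for VMware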
Win10 doesn’t like Haswell?!
Well, that would be a really interesting issue if it were true, since practically everything made by Intel has been Haswell for at least the last 1.5 years. I am aware that even today there are a number of Haswell-specific extensions which are either buggy or have taken a long time to be used by software, like what is now supported in the latest version of VMware (Workstation 11), which delivers improved performance. Just speculating: if true, maybe Win10 has had a problem using these new extensions as well, but IMO that would likely be temporary and addressed by the time Win10 is launched in late July 2015.
IMO,
TSU
Thanks TSU, those are good points.
On the passthrough, I have noticed that I can get a 500 Hz mouse polling rate in Windows 10 by setting it as such in Linux. Windows 10 at the moment doesn’t have a good way of changing the polling rate. It runs better on a software-by-software basis because you can set the CPU topology to be custom tailored for the program being used, i.e. use however many threads the program is optimized for.
As of right now, Windows 10 won’t do its first boot in KVM unless you set the CPU architecture/model to Core Duo. I looked at both the KVM mailing list and the Windows Preview board and no one seems to know why, lol. I was running this with the stock 13.2 kernel (3.16), so after the first boot Haswell may work with 4.0+, but SandyBridge gave the best overall stability for me. It is nice to take advantage of Btrfs’s error checking for a Windows system, and its compression shrinks down the Windows image, which is super neat.
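For anyone hitting the same first-boot problem, this is roughly how the CPU model can be changed (the guest name is just an example; QEMU’s names for these models are core2duo, SandyBridge, Haswell):
virsh edit win10
# then set the <cpu> element to something like:
#   <cpu mode='custom' match='exact'>
#     <model fallback='allow'>core2duo</model>
#   </cpu>
# with plain QEMU the equivalent is the -cpu switch:
qemu-system-x86_64 -enable-kvm -cpu SandyBridge -m 4096 -drive file=win10.qcow2,format=qcow2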
I will have to do some more reading on container types.
My test rig:
Intel i7-5820K overclocked to 4.5 GHz
ASRock X99 WS
Crucial Ballistix Sport @ 2600 MHz (8 GB x4, quad channel)
AMD Radeon R9 280 on the host system
AMD Radeon HD 4550 on the VM (soon to be a Fury)
Crucial MX200 500 GB for root and home
Samsung 550 PRO for where the VMs are stored
I plan on doing some benchmarking when I get the Fury
All good stuff. My guess is that these odd bugs in Win10 will be figured out before its official launch, currently set for the end of July.
BTW - I read my previous post and I guess people will notice I often wrote “MB” when I meant “GB” – I assume people will recognize the absurdity of a size when it’s obviously wrong…
TSU
I have followed the steps in the video (for a Windows 7 guest), but after adding the PCI passthrough for my NVIDIA GTX 680 (and its HDMI audio) and running the guest, both monitors go blank and power off. At the same time I can see there is disk activity (obviously the guest is booting), but all interaction with the computer is gone. The only way out is to press the Reset button on the PC box.
My hardware is:
Intel i7-3770 3.4 GHz, 8 MB cache, quad core, Hyper-Threading
ASUS P8Z77-V (Intel Z77 chipset, socket 1155): up to 32 GB DDR3 1333/1600, 4 x SATA 3.0 Gb/s, 2 x SATA 6.0 Gb/s, Gigabit LAN, 8-channel DTS audio, 2 x PCIe 3.0 x16, 1 x PCIe 2.0 x16 (x4), 2 x PCIe 2.0 x1, 2 x PCI, 1 x DVI, 1 x D-Sub, WiFi
1x NVIDIA GTX 680
In BIOS setup I have primary display set to PCIE
iGPU Memory = Auto
iGPU Multi-Monitor = Disabled (Enabled doesn’t change anything)
Boot option filter: UEFI and Legacy
Launch Video OpROM policy: UEFI first
I have 2 monitors and I tried plugging the second one into the Intel DVI output, but it stays blank all the time. It is powered on only if I set the primary display to iGPU, but then the other one turns off and booting into Tumbleweed doesn’t power it on (I guess because it uses the NVIDIA driver).
Another thing I noticed at first: if I add a passthrough for the mouse as you explain (Logitech M100), booting the guest takes full control over it, and after the guest is powered off the mouse pointer stays frozen in the host. The only cure for that is either a reboot or going through init 3, then init 5. So I continued without that when installing the guest.
Can you help me get this thing running?
Just to add more info: as soon as I added the iommu kernel parameter, journalctl is flooded with these messages:
Jul 04 18:45:31 i7 kernel: dmar: DRHD: handling fault status reg 3
Jul 04 18:45:31 i7 kernel: dmar: DMAR:[DMA Read] Request device [07:00.0] fault addr ffff0000
DMAR:[fault reason 06] PTE Read access is not set
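In case it helps, this is how the device at the address named in the fault can be looked up:
lspci -nnk -s 07:00.0    # identify the device at the faulting address, its IDs and bound driver
lspci -tv                # tree view, to see which bridge that address sits behind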
@heyjoe,
For starters,
You should take a look at the other actively posted thread
https://forums.opensuse.org/showthread.php/508317-KVM-vfio-passthrough-to-linux-guest-problem
You need to also run lspci to determine whether the videocard has been installed into the Host using the vfio driver.
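Something like the following (the PCI address below is only an example; substitute the one lspci reports for your card):
lspci | grep -i vga      # find the address of your videocard
lspci -nnk -s 01:00.0    # then check its details; the “Kernel driver in use:” line tells you whether vfio-pci, nouveau or nvidia has claimed it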
If so, then you need to follow the instructions in my post that describe how to shift from running KVM to running QEMU and manually configure your Guest.
If you have questions or if you verify you’re not using vfio for your video card, re-post.
TSU
Thanks @tsu2.
Some time ago I read a lot on the topic and here are the steps which I followed:
As you can see, it includes a lot of reading. And as it led to no result at that time (about 2 months ago), I simply gave up.
But today I saw the video in this thread and decided to try again, so I simply followed the steps in it, as it looked quite simple. It doesn’t mention anything additional such as vfio etc., and things obviously work for him?
The question is how I make it work too. I am using Tumbleweed, updated today; you know the hardware and BIOS settings too.
I would be very grateful if you could provide a simple, step-by-step procedure which is proven to work, as there are lots of articles here and there but the info is not for openSUSE and perhaps only experts understand all the specifics.
The essence of the other thread is that if the detected videocard is installed using vfio (it’s an automatic decision; I haven’t researched whether the videocard can be installed without vfio, but there are reasons why vfio is better), then the videocard (or any other device using vfio) cannot run under KVM.
Read my post
https://forums.opensuse.org/showthread.php/508317-KVM-vfio-passthrough-to-linux-guest-problem?p=2718019#post2718019
That is why I took pains to write a longer post laying out the situation.
The video posted by @lessershoe worked for him, likely because the videocard he was using isn’t installed using vfio
So, you need to run lspci to see if you’re subject to the same issue in that other thread.
TSU
Ok, @tsu2.
I read everything again and followed the steps in the VFIO section of the SLES doc:
i7:~ # lspci | grep -i "nvidia"
01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 680] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller (rev a1)
i7:~ # readlink /sys/bus/pci/devices/0000\:01\:00.0/iommu_group
../../../../kernel/iommu_groups/1
i7:~ # ls -l /sys/bus/pci/devices/0000\:01\:00.0/iommu_group/devices/0000\:01\:00.0
lrwxrwxrwx 1 root root 0 Jul 5 12:05 /sys/bus/pci/devices/0000:01:00.0/iommu_group/devices/0000:01:00.0 -> ../../../../devices/pci0000:00/0000:00:01.0/0000:01:00.0
i7:~ # echo 0000:01:00.0 | tee /sys/bus/pci/devices/0000\:01\:00.0/driver/unbind
0000:01:00.0
i7:~ # lspci -n -s 01:00.0
01:00.0 0300: 10de:1180 (rev a1)
i7:~ # modprobe vfio-pci
i7:~ # echo 10de 1180 | tee /sys/bus/pci/drivers/vfio-pci/new_id
10de 1180
i7:~ # ls /dev/vfio/
1 vfio
i7:~ # qemu-system-x86_64 /home/heyjoe/vm/win7test.qcow2 -device vfio-pci,host=01:00.0,id=video0
qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=video0: vfio: error, group 1 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=video0: vfio: failed to get group 1
qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=video0: Device initialization failed
qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=video0: Device 'vfio-pci' could not be initialized
So now there are 2 questions:
- How do I boot into the win7test guest (which I created following the steps in the video) using QEMU?
- How should my monitors be connected? (currently both are connected to the 2 DVI outputs of the NVIDIA card)
Just to add: I repeated the same procedure for the HDMI audio too (to have all devices in the iommu_group bound), then used the same command:
qemu-system-x86_64 /home/heyjoe/vm/win7test.qcow2 -device vfio-pci,host=01:00.0,id=video0
This time there was no error message. For a few seconds nothing happened, then just like before both screens turned blank, then turned off, and the only way to get out of this black hole was a hardware reset of the machine. Obviously the exact same result as when following the video.
I wonder what to do.
Impatience!
I recommended identifying the features and driver of your videocard using lspci as the next step, not rushing ahead and configuring vfio… unless you did as I recommended without saying you verified first.
In other words, if your problems are caused by something else, then configuring vfio isn’t likely to solve your problem (except by coincidence).
TSU
I already pasted the lspci output and followed exactly the steps in the SLES doc you showed in the other thread. You also know that the NVIDIA GTX 680 is using the nvidia driver, as per my earlier post.
What impatience, and what is “something else”?
Sorry,
You posted individual lspci commands, which makes the device information hard to read. See what the OP in the other thread posted: the device block, which is easy to read… and complete. From individual commands I can’t be sure you’ve collected all the relevant information; you might be missing something or returning misleading information… which is what happened. By making a mistake very early on, much of your subsequent information is invalidated.
Your mistake:
This is incorrect
i7:~ # readlink /sys/bus/pci/devices/0000\:01\:00.0/iommu_group
Should be
i7:~ # readlink /sys/bus/pci/devices/0000:01:00.0/iommu_group
After that, your belief that you should be using IOMMU group 1 is a mistake, and everything based on that assumption fails.
Also,
I don’t agree with your approach of querying and probing for vfio, because there is no likely way to verify that any of your results are relevant to your video card. You have to look at your lspci device results (the device block) or hwinfo, as described in the documentation, to return information that is certain to be valid and relevant.
IMO,
TSU
There seems to be some misunderstanding. Let me try to clarify:
Is this what you asked for? (I am currently without intel_iommu=on in grub)
i7:~ # lspci
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor DRAM Controller (rev 09)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller (rev 04)
00:16.0 Communication controller: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 (rev 04)
00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
00:1a.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
00:1c.0 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 1 (rev c4)
00:1c.1 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 2 (rev c4)
00:1c.3 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 4 (rev c4)
00:1c.4 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c4)
00:1c.7 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 8 (rev c4)
00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation Z77 Express Chipset LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller (rev 04)
01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 680] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller (rev a1)
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
03:00.0 PCI bridge: PLX Technology, Inc. PEX 8114 PCI Express-to-PCI/PCI-X Bridge (rev bc)
04:04.0 SCSI storage controller: Adaptec ASC-29320ALP U320 (rev 10)
05:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
06:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03)
07:01.0 Multimedia audio controller: Creative Labs SB Audigy (rev 03)
07:01.1 Input device controller: Creative Labs SB Audigy Game Port (rev 03)
08:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
i7:~ #
i7:~ # lspci -v -s 01:00.0
01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 680] (rev a1) (prog-if 00 [VGA controller])
Subsystem: Gigabyte Technology Co., Ltd Device 353c
Flags: bus master, fast devsel, latency 0, IRQ 40
Memory at f6000000 (32-bit, non-prefetchable) [size=16]
Memory at e8000000 (64-bit, prefetchable) [size=128]
Memory at f0000000 (64-bit, prefetchable) [size=32]
I/O ports at e000 [size=128]
[virtual] Expansion ROM at f7000000 [disabled] [size=512]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Endpoint, MSI 00
Capabilities: [b4] Vendor Specific Information: Len=14 <?>
Capabilities: [100] Virtual Channel
Capabilities: [128] Power Budgeting <?>
Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
Capabilities: [900] #19
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia
i7:~ # lspci -v -s 01:00.1
01:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller (rev a1)
Subsystem: Gigabyte Technology Co., Ltd Device 353c
Flags: bus master, fast devsel, latency 0, IRQ 17
Memory at f7080000 (32-bit, non-prefetchable) [size=16]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Endpoint, MSI 00
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
Not sure why this is a mistake. The first one is what the console gave me when pressing the TAB key; I just ended it with .0/iommu_group.
Could you explain, please?
This is not my approach; I really don’t have one. I am simply following the steps from the SLES doc link you gave for VFIO. When I asked for a complete, verified, step-by-step procedure (for non-experts like me), what I understood from your reply was that your links in the other thread give exactly that. So I followed them. I hope that clarifies things.
Can you help please? Let me know if I should give any additional info by running certain commands. I really hope to get this working.
Thanks.
The above is what you want. Compare it with the output from the other forum thread: there is no reference to vfio, so more than likely you shouldn’t need to do the special vfio configuration – you’re using the nvidia kernel driver.
So, your current problem is due to something other than what has been posted in any Forum thread I’m aware of.
All we know at this point is that there is no certainty your videocard has been set up using IOMMU yet, or whether that is even recommended.
But the SUSE documentation does suggest a follow-up command that should normally be used to verify the presence of the video card (if it was installed) by inspecting the contents of its IOMMU group; the following is what you should run:
ls -l /sys/bus/pci/devices/0000:01:10.0/iommu_group/devices/0000\:01:00.0
You may want to compare the output of running the command with and without the backslashes. Maybe that’s just the way things are on your machine, maybe it’s not, but comparing the outputs from the two commands can provide some guidance moving forward.
OK, with all this interest in passing a dedicated display/videocard to a Guest, I decided to take a look at what might be recommended. The following summarizes what I found, and it can in no way be considered a recipe for success. It is only a jumping-off point into uncertainty, because it’s the limit of what is known today.
The first thing to note is that this is really living on the bleeding edge of virtualization, but it has been around long enough that a number of patches and configurations have already been implemented in the kernel, which avoids all sorts of recipes related to adding IOMMU and vfio support. Without describing all the gory details, just ensure you’re running a kernel >= 3.15 (the current openSUSE 13.2 default as of this writing is 3.16).
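A quick sanity check might look something like this (assuming your kernel exposes /proc/config.gz, which the openSUSE kernels normally do):
uname -r                                  # kernel version, should be >= 3.15
zgrep -i 'CONFIG_VFIO' /proc/config.gz    # vfio support built in or available as modules
modinfo vfio-pci | head -n 3              # the vfio-pci module is present
dmesg | grep -i -e DMAR -e IOMMU          # whether an IOMMU was actually enabled at boot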
One Approach
One interesting approach I found is the following, on a Fedora system, which has a great many similarities to an openSUSE system:
http://www.firewing1.com/howtos/fedora-20/create-gaming-virtual-machine-using-vfio-pci-passthrough-kvm
You can skip all the way down to the script that starts with the line
cat << EOF > /etc/sysconfig/vfio-bind
This script attempts to re-bind video cards to vfio and sets up a systemd Unit file (service) to manage these devices.
Read the whole article, including the reader comments. More than likely all the steps described in this article will work on an openSUSE machine because of the similarities between Fedora and openSUSE subsystems, but of course this is all very experimental.
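For the curious, the core of that script boils down to something like the following sketch (the PCI addresses are examples only and must match your own card and its audio function):
#!/bin/bash
# hand the GPU and its HDMI audio function over to vfio-pci
modprobe vfio-pci
for dev in 0000:01:00.0 0000:01:00.1; do
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    # detach the device from whatever driver currently owns it
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    # tell vfio-pci to claim devices with this vendor/device ID
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
done
# afterwards the Guest can be started with both functions passed through, e.g.:
# qemu-system-x86_64 -enable-kvm -m 4096 \
#     -drive file=/var/lib/libvirt/images/win7test.qcow2,format=qcow2 \
#     -device vfio-pci,host=01:00.0 -device vfio-pci,host=01:00.1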
Another Approach
Another article, this one on Ubuntu, looks like a worthwhile read and a different approach, but because the subsystems are so different from openSUSE’s, only someone who feels up to translating the commands into openSUSE equivalents would want to try this:
https://www.pugetsystems.com/labs/articles/Multiheaded-NVIDIA-Gaming-using-Ubuntu-14-04-KVM-585/
HTH, and good luck living on the bleeding edge…
TSU
Tsu,
Re. back slashing:
i7:~ # readlink /sys/bus/pci/devices/0000\:01\:00.0/iommu_group
../../../../kernel/iommu_groups/1
i7:~ # readlink /sys/bus/pci/devices/0000\:01:00.0/iommu_group
../../../../kernel/iommu_groups/1
Not really:
i7:~ # ls -l /sys/bus/pci/devices/0000:01:10.0/iommu_group/devices/0000\:01:00.0
ls: cannot access /sys/bus/pci/devices/0000:01:10.0/iommu_group/devices/0000:01:00.0: No such file or directory
Correct:
i7:~ # ls -l /sys/bus/pci/devices/0000:01:00.0/iommu_group/devices/0000\:01:00.0
lrwxrwxrwx 1 root root 0 Jul 6 10:43 /sys/bus/pci/devices/0000:01:00.0/iommu_group/devices/0000:01:00.0 -> ../../../../devices/pci0000:00/0000:00:01.0/0000:01:00.0
I already mentioned I am using Tumbleweed, where the kernel is already 4.0.5. Sorry to say, but I don’t have the time to try out every distro and every article out there. I tried that 2-3 months ago and gave a link with the unfortunate result. Obviously it makes no sense to repeat all that and rely on hope and luck. That’s why I asked for a:
- simple
- verified
- working
- step-by-step solution
- for openSUSE
- for non-experts (like me)
(just like the nice video by the OP)
Do you have such solution which you have gone through yourself?
For those requirements, you’ll probably have to wait a bit longer. I noticed your reference to TW, but I generally try to post info that is applicable to all who read my posts.
According to the general timeline based on Google results…
- People started trying to implement vfio only about a year and a half or so ago (2013?), but those early pioneers found that the dependencies and configurations were extremely complex.
- As recently as about 6 months ago, kernels were released with integrated iommu, but vfio still needed to be configured by hand. Only within the past 6 months have kernels been released with both integrated iommu and vfio. This means that only within the past very few months have experimenters begun to toy with these new features, and people will report their successes and failures. This is nowhere close to a certain, stable solution as of yet… but those who wish to follow can be hopeful, since the foundation building blocks are now in place.
Recommend you evaluate the state of things by simply doing a Google search periodically.
Also, for anyone out there experimenting,
It seems to me from its description that vfio is a solution to implement KMS in a secure user mode. Although using vfio is probably the preferable approach, openSUSE has long supported the “no KMS” setting at bootup to implement graphics drivers in user mode. I don’t know if it’s still functional with the latest kernels, and it might subject the user to some corner-case security issues, but it could be worth a try, particularly for GPUs with known extra difficulties today, like NVIDIA. Anyone trying this should still test whether iommu can be implemented for its memory-management benefits, but it may or may not work.
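If anyone wants to try that on openSUSE, the boot parameters would be added roughly like this (a sketch only; adjust to your own setup):
# edit /etc/default/grub and extend the default command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="... nomodeset intel_iommu=on"
# then regenerate the bootloader configuration and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg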
TSU