So, after tinkering with this topic for pretty much the whole second half of 2020, I've come to a conclusion: I won't go any further down this road.
What happened:
After I finally managed to get the KVM running with my main GPU passed thru, I got extremely bad performance.
As a test I used Half-Life 2 - a game so undemanding that even my toaster has enough electronics in it to run it at high settings flawlessly. Running it "native" on the host, whether the Linux version on Leap 15.2 or the original Windows version on Windows 7, my main GPU pushes a solid 200+ frames per second with all settings maxed out. But running it in the KVM I only get about 20 to 60 frames, even at low to mid settings - numbers I remember from when I first played it back in 2005 or so on a friend's system.
Don't get me wrong - I don't want to blame anyone except myself and my hardware, as many highly skilled programmers worked for years to make this even possible - but that underwhelming performance is the final nail in the coffin. I've finally decided to transform my current system into a big, hopefully not-so-power-hungry ZFS cluster and build a new system. Until I have the money to do so, I just have to hope my hardware doesn't die before I get all my data migrated into the planned 8-drive raidz2.
As for the issue I mentioned in one of my other threads about some "applications" that may cause issues with network shares: I guess I have to use iSCSI instead of regular SMB to trick the OS into thinking the storage is attached locally instead of remotely. All I have to figure out is how to make the zfs pool available as an iSCSI target - but that's another topic for another time.
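(For future reference, the usual route on Linux seems to be exposing a zvol through targetcli - just a rough sketch, assuming targetcli-fb is installed and a pool named "tank"; every name below is a placeholder:)

zfs create -V 500G tank/iscsi-vol                 # a zvol as block-device backing store
targetcli /backstores/block create name=vol dev=/dev/zvol/tank/iscsi-vol
targetcli /iscsi create iqn.2021-01.local.storage:vol
targetcli /iscsi/iqn.2021-01.local.storage:vol/tpg1/luns create /backstores/block/vol
# plus an ACL for the initiator (or demo mode) before a client can log in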
Just out of curiosity: does anyone have any idea why the performance is so bad when running in a KVM with a passed-thru GPU? Isn't pass-thru meant to provide the hardware to the VM as if it were running on bare metal, instead of "emulation magic"?
board: asus crosshair v formula-z
cpu: amd fx 8350
main gpu: asus amd r9 290x
2nd gpu: zotac nvidia gt 1030
fun fact: even HL2 is a toaster game these days - yet the GT 1030 doesn't have the required power to run it - only single-digit frames per second
It took me quite some time and work to get it working at all - although I wasn't able to get it done the very way I wanted to - and I had additional plans and a few related topics I wanted to get to. But as it makes such a hugely impactful difference in something as light as HL2, I haven't even tested my main games like GTA V or Ghost Recon Wildlands - I imagine they would fail even harder.
Maybe it's a misunderstanding on my part about pass-thru - the kernel still needs to run a driver which enables the user-land calls made inside the VM to be translated into calls actually executed on the hardware - but as I've seen many videos on YouTube where people build expensive and powerful rigs that run even multiple VMs with way better performance than I get running Windows natively on my hardware, I thought I might be able to pull off a similar stunt. I guess it all comes back to my initial thought: my hardware doesn't seem to be fully compatible with virtualization the way I wanted to use it.
Hi
Hmmm, my son runs HL2 (on WinX though) with 2x GT710s and gets around 70 FPS @ 1080p (same motherboard as I use, an Intel DQ77MK, but only the 4-core Xeon CPU)?
You could just use render offload to run Steam with the GT1030 on Leap? I have a 1030 SC now, works a treat… I play around with Minecraft, that gets ~140 FPS @ 1080p with the GT1030 (it's in a PCIeX4 slot).
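(With the proprietary driver in place, running a single app on the nVidia card via PRIME render offload is just two environment variables - these are the standard names from nVidia's render-offload documentation:)

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia steam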
Only the vfio-pci module is used for my Passthrough devices…
As I couldn't believe the 1030 isn't able to run HL2 - it's WAY better than ANY setup we had back in the mid-2000s - I gave it another test run on Windows. And, to my surprise, or, for lack of a better term, as expected, it did a pretty solid job of at least getting into the low three-digit range.
I only know this behavior from one very specific issue: using the Windows VESA driver instead of installing the one from AMD/nVidia. So I suspect the even worse performance of the 1030 when I tried the native Linux version is a similar issue, as I hadn't installed the nVidia driver but was running the nouveau driver. Just out of curiosity I'll see whether installing the nVidia driver results in HL2 at least running as it should.
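(Which kernel driver a card is actually bound to can be checked with lspci - generic usage, the output lines below are only illustrative:)

lspci -nnk | grep -A3 -i vga
#   Kernel driver in use: nouveau
#   Kernel modules: nouveau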
But never mind - I guess the attempt to keep using my system by layering a KVM on top died with what I experienced yesterday.
Just as a follow-up: yes, after installing the nVidia driver HL2 actually did run with quite solid performance - but that had no impact on the still-bad performance of the passed-thru GPU in the VM. I don't know if this is some limitation of the vfio driver.
/etc/default/grub GRUB_CMDLINE_LINUX_DEFAULT options added:
intel_iommu=on,igfx_off
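(After editing /etc/default/grub the config needs to be regenerated before the option takes effect - on openSUSE that's typically:)

grub2-mkconfig -o /boot/grub2/grub.cfg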
cat /etc/modprobe.d/10-kvm.conf
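# ignore guest accesses to MSRs that KVM doesn't implement (instead of
# injecting a #GP into the guest), and don't flood dmesg with messages about them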
options kvm ignore_msrs=1
options kvm report_ignored_msrs=0
cat /etc/modprobe.d/11-vfio.conf
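# bind the GPU, its HDMI audio function and the extra SATA controller to
# vfio-pci at boot so the normal host drivers never claim them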
alias pci:v000010DEd0000128Bsv000019DAsd00005360bc03sc00i00 vfio-pci
alias pci:v000010DEd00000E0Fsv000019DAsd00005360bc04sc03i00 vfio-pci
alias pci:v00001B4Bd00009215sv00001B4Bsd00009215bc01sc06i01 vfio-pci
options vfio-pci ids=10de:128b:19da:5360,10de:0e0f:19da:5360,1b4b:9215:1b4b:9215
options vfio-pci disable_vga=1
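(For anyone reproducing this: the vendor:device pairs used above come straight out of lspci, and the subsystem IDs show up with -v. Generic usage; the bus addresses and output below are only illustrative:)

lspci -vnn
# 01:00.0 VGA compatible controller [0300]: ... [10de:128b]
#         Subsystem: ... [19da:5360]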
I get way better performance with the extra SATA card and the SSD allocated, even though it's an older model (3.0Gb/s) SSD running WinX.
Also note the GT710 used for passthrough is a PCIeX1 model sitting in an adapter for power - it needs additional power for the 19W it draws, since the slot it's in only provides 10W.
I’m assuming your card is in the main PCIeX8/16 slot? Note the 1030 is only a 64bit card (PCIeX4).
As for the Zotac nVidia 1030: I did notice that, although the physical card-edge connector is a full x16, the card only uses 4 lanes. As far as I know PCIe is specified to allow basically any number of lanes, but in the real world the most common widths seem to be powers of 2 (4, 8, 16), plus the special x1, which the spec makes mandatory. My board has a total of four physical x16 slots, but they can only be used in a limited number of specific configurations I don't know off the top of my head.
As my UEFI doesn't offer a way to specify which card should be used as the main card, the 1030 has to sit in the nearest-to-CPU slot. My main GPU, the Asus AMD R9 290X, is in the second slot that supports full x16 electrically, which according to the board manual is the one to use in a dual-GPU setup. Both the UEFI info page and dedicated software for reading out this information show that both cards get the maximum number of lanes they support.
As for additional kernel options: I only have amd_iommu=on and iommu=pt set - although it doesn't seem to make any difference whether I set them or not - as if I'm re-setting values that are already the defaults.
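(Whether the IOMMU actually came up, and how the devices are grouped, can be checked like this - a generic snippet, not specific to this board:)

dmesg | grep -i -e 'AMD-Vi' -e 'iommu'
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/}; n=${n%%/*}      # extract the group number from the path
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"                   # show the device sitting in that group
done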
As for whether I've set legacy or UEFI: I don't know, as I was never asked for that option - but since the hard disk image is partitioned as MBR instead of GPT, it looks like it's using legacy. The real hardware is also set to legacy mode, as I use the provided fakeRAID - which is only available in legacy mode. Also: although Windows 7 is a UEFI-capable OS, I never bothered to build a UEFI install medium. As guest OS I also use Windows 10.
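(Whether a raw image carries an MBR or a GPT label can be checked directly against the file - the path below is just a placeholder:)

parted /var/lib/libvirt/images/win10.img print
# "Partition Table: msdos" means MBR, "Partition Table: gpt" means GPT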
Hi
Windows 7 and up are all UEFI-capable directly from the ISO image - just boot it in UEFI mode…
I would look at creating a new test image - I assume you're using the GUI to create your VMs? In there you should be able to select the pc-q35-4.0 option? Note you will need the QEMU UEFI vars/code firmware files defined (copied somewhere); again, hopefully you can do that in the GUI?
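(On the plain QEMU command line the same thing would look roughly like this - a sketch assuming the firmware paths of openSUSE's qemu-ovmf-x86_64 package; adjust to wherever the OVMF files actually live:)

cp /usr/share/qemu/ovmf-x86_64-vars.bin my-vm-vars.bin    # writable per-VM copy of the UEFI vars
qemu-system-x86_64 -machine pc-q35-4.0 \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/qemu/ovmf-x86_64-code.bin \
    -drive if=pflash,format=raw,file=my-vm-vars.bin \
    -m 4096 -drive file=win10.img,format=raw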
About Windows 7: well, although the OS itself was designed to be UEFI-compatible, when the first discs hit the shelves back in 2009 someone screwed up the master images and forgot to add the required UEFI bootloader. So, in order to boot Windows 7 in UEFI mode you had to copy the UEFI bootloader over from some WinPE 3.0 and fix up the image yourself (or just use WinPE as a small boot image and start the setup from it). I don't know exactly, but it took M$ quite some time to get new masters done and release discs that could actually boot into UEFI - can't remember whether that was with the Win7 SP1 master or at some other point.
Anyway - as said: I'm done with it. I made the final decision not to try virtualization any more, but rather to save up, buy new hardware and build a whole new rig while converting my current system into a storage server. For several of the reasons I kept struggling with my current setup I only recently learned about a possible solution - but that will require some more learning and testing, and maybe a lot of asking here about various topics.
As for virtualization in general: yeah, cool, it's a thing - but with my current hardware and the currently available software it still doesn't seem like a road I see myself going down. Sure, there's maybe some proprietary stuff out there - but before I start investing in expensive software just to figure out it still won't work with my hardware, I'd rather invest in new hardware and stay in the free lane.