Step-by-step help for VGA passthrough?

Hi,

I am looking for a way to run Windows 7 guest with VGA passthrough on openSUSE 13.2 host.

My hardware is:

ASUS P8Z77-V, Intel Socket 1155, Intel® Z77 chipset, up to 32GB DDR3 1333/1600, 4 x SATA 3.0Gb/s, 2 x SATA 6.0Gb/s, Gigabit LAN, 8-Channel DTS Audio, 2 x PCIe 3.0 x16, 1 x PCIe 2.0 x16 (x4), 2 x PCIe 2.0 x1, 2 x PCI, 1 x DVI, 1 x D-Sub, WiFi

Intel i7-3770 3.4GHz, 8MB cache, Quad Core, Hyper-Threading Technology, 77W

Corsair 32GB (4 x 8GB) DDR3 1600MHz, CMZ32GX3M2A1600C10

Gigabyte GTX 680, N680OC-2GD, 2048MB GDDR5, 256-bit, 1 x DVI-I, 1 x DVI-D, 1 x DisplayPort, 1 x HDMI

I already have KVM with the Win7 guest installed.

I know there are lots of pretty advanced guides around the web for Arch and other distros, but I am looking for a simple, step-by-step, verified method for openSUSE, as I am not a sysadmin or expert. I simply need to get rid of the dual-boot for my secondary OS (Win7). I asked about this some time ago in a thread started by someone else, but we really got nowhere and I was told to wait a bit longer. So I waited a few months, and here I am again :slight_smile:

So what am I to do to make this work?

I don’t know that anyone has put the work into exploring KVM VGA passthrough on SUSE/openSUSE.
But plenty has been written about this topic on other distros, which provides the nuts and bolts for exploring how it might be done on openSUSE.
Note, though, that those extensive discussions provide plenty of information but not necessarily successful results.

I responded on this topic not that long ago
https://forums.opensuse.org/showthread.php/508317-KVM-vfio-passthrough-to-linux-guest-problem?p=2718019#post2718019
If you want to explore, there is this one extremely long (234-page) Arch Linux forums thread, which was recently closed amid a storm of name-calling and accusations. Still, it’s frequently referenced as providing a lot of basic information.
https://bbs.archlinux.org/viewtopic.php?id=162768
The following Debian guide is entirely applicable to exploring on openSUSE, except that you would install KVM using YaST instead of the Debian install command.
https://wiki.debian.org/VGAPassthrough
The following KVM.org articles are relevant
VGA device assignment - KVM
How to assign devices with VT-d in KVM - KVM

Note that your hardware, as described, is incomplete or insufficient for this purpose.
When you do any kind of device passthrough to a Guest, the Guest is given sole access to that device. This means that typically you would not pass through your default video card; you need to install a second video card and pass that one through to your Guest.
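For orientation, the usual first step on the host is enabling the IOMMU (VT-d). A hypothetical sketch of what that looks like on openSUSE (paths per the GRUB2 layout; treat this as a hint, not a verified recipe):

```shell
# Hypothetical sketch - not a verified openSUSE recipe. VT-d/IOMMU must be
# enabled before any PCI device can be handed to a Guest.
#
# 1. Enable VT-d in the motherboard firmware (BIOS/UEFI setup).
#
# 2. In /etc/default/grub, append intel_iommu=on to the kernel command line:
#
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on"
#
# 3. Regenerate the bootloader config and reboot:
#
#      grub2-mkconfig -o /boot/grub2/grub.cfg
#
# 4. After rebooting, `dmesg | grep -i -e DMAR -e IOMMU` should show the
#    IOMMU being enabled.
```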

If you want to explore and run into difficulties, you can post your issues here and we’ll try to help out.
But otherwise I don’t think openSUSE is much different from other distros: there is no certain guide that ensures a successful result.

TSU

TSU,

Some time ago we talked in this thread, in which the OP showed quite a simple, straightforward working case. At that time I was running Tumbleweed and you advised me to wait a while longer, so I waited… a few months :slight_smile:

However, seeing others using the same video card and a similar chipset and CPU made me question whether I need to wait (i.e. the software is not quite ready) or whether I am missing something.

Last night I was lucky to get in touch with the author of http://vfio.blogspot.com/, who kindly explained that it is possible to assign the Intel graphics to the host OS and pass through the external GPU. However, it seems impossible to switch the external GPU back to the host (unassign it) because of serious instabilities caused by a closed NVIDIA driver bug marked as WONTFIX. So it seems the only way to have good 3D acceleration on both host and guest is to buy a second GTX 680. But then the question would be: what is better - one video card per OS, or SLI for double performance?

My need is to use 3D apps in both host and guest in parallel and be able to exchange data between them (through files) without having to use a dual-boot.

If you want to explore and run into difficulties

No. I am looking for a layman’s solution - similar to the video from the linked thread (which seems to be deleted now). The person who shared it showed how things work in a matter of minutes, without having to read tons of scattered info around the web. Unfortunately for me the result was a blank screen and a hung system. I guess my problem might be the inability to use both the internal and external GPUs?

In the end, it may not be possible to identify which configuration delivers the best performance without building each one and benchmarking, and I suspect it may also depend on whether you favor better performance on one display or want both displays to perform equally well.

It’s highly likely that SLI will deliver the best overall performance, but that is based on various assumptions…

  • SLI is particularly good at distributing loads
  • The hardware bridge between the GPU chips means that both GPUs can be banging away extremely efficiently as loads change.

But implementing SLI with typical virtualization solutions suggests that one display will be “native” (the integrated GPU, likely the Host) and the other (the second GPU, likely the Guest) will be displayed through a method that involves various latencies (a remote transport protocol like XRDP, re-rendering).

If you implement SLI and your supreme objective is display performance, I’d recommend you eliminate from consideration any solution that involves any kind of emulation/virtualization, and consider only solutions that provide isolation (and maybe not even that).

Passing a GPU directly to a VM means dedicating that one GPU only to supporting that OS instance (Host OS or Guest). It is less efficient than SLI, but it delivers exactly the GPU’s capabilities to its OS. As described, and as you seem to know, dedicating a GPU means that the other OS has no access to it.
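In practice, dedicating the card this way is usually done by binding it to the vfio-pci driver via its PCI vendor:device IDs so the host driver never claims it. A small sketch of extracting those IDs from `lspci -nn` output - the sample line below is an assumed example for a GTX 680, not necessarily the exact output on this board:

```shell
# Example line as printed by `lspci -nn` for a GTX 680 (an assumed sample):
line='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 680] [10de:1180] (rev a1)'

# Pull out the vendor:device pair - the bracketed ID containing a colon:
id=$(printf '%s\n' "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]')
echo "$id"   # prints 10de:1180

# That ID (plus the card's HDMI audio function, if present) would then go
# into a modprobe option, e.g.:  options vfio-pci ids=10de:1180,10de:0e0a
```

The `10de:0e0a` audio ID is likewise an assumption; the card’s video and audio functions typically sit in the same IOMMU group and must be passed through together.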

Although I can’t point you to any existing research or methods, logically in your position I would consider using the following technologies…

  • Since it may not even be necessary to run completely separate machines, choose something that does only isolation, like chroot, systemd-nspawn, UML, OpenVZ, Docker, or LXC. Each of these runs an isolated instance of the Linux OS in either a chroot or a Linux container. They run on the hardware without emulation, so they should perform the same as an OS running on bare metal.
  • Linux supports multiple displays, which can be attached to various processes; an example is screen. By running a native display, you get pure hardware performance without the latencies involved in the methods commonly implemented by virtualization managers. Virt managers don’t use screen because then you would be restricted to viewing only locally on the machine, and they want to enable remote connections by Admins and Users.

Note that the technologies I recommend won’t likely match any currently described configuration/solution on the Internet today, but that is because your objective (ultimate display performance from multiple OSes on the same hardware) is fairly unusual.
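Of the isolation options in the first bullet, systemd-nspawn is probably the lightest to try. A sketch under stated assumptions - the container root path and repo URL are hypothetical, and the bootstrap steps need root, so they are left commented:

```shell
# Hypothetical container route: bootstrap a minimal openSUSE root filesystem,
# then boot it as a container with systemd-nspawn (run the commented steps as
# root on a real system):
#
#   zypper --root /var/lib/machines/leap ar http://download.opensuse.org/distribution/13.2/repo/oss/ oss
#   zypper --root /var/lib/machines/leap --gpg-auto-import-keys in -y patterns-openSUSE-base

build_nspawn_cmd() {
    # Compose the boot command for a given container root directory.
    printf 'systemd-nspawn -D %s --boot' "$1"
}

build_nspawn_cmd /var/lib/machines/leap   # prints: systemd-nspawn -D /var/lib/machines/leap --boot
```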

TSU

Considering that this is the first time in my life I have heard of the things you listed, I wonder how I am supposed to use the theoretical information you give. I think you are missing one of the most important sentences of my question:

I am looking for a simple, step-by-step, verified method for openSUSE, as I am not a sysadmin or expert.

Have you personally gone through the things you suggest (verified them), and can you provide the necessary information (steps) for a practical realization? Sorry to ask this way, but I am interested in actual solutions, not theoretical ones.

I’ll try to be more direct than TSU, then.

There is no simple step-by-step instruction; you are treading methods that are new and untested on openSUSE. You would first need to remove the SLI bridge so the cards work as separate cards, since SLI makes two cards work as one. But beyond that I can’t help. What you want to do is technologically tricky, and I doubt all that many people have tried it.

I guess I am doomed then.

On Wed, 07 Oct 2015 17:36:01 +0000, heyjoe wrote:

> I guess I am doomed then.

Well, you could take the approach of working to learn what you need to
know. That’s generally how users of Linux approach problems: not with
“give me step-by-step instructions” but “help me understand how to solve
this problem”.

Don’t take it personally - learning stuff won’t hurt you. :wink:

Jim

Jim Henderson
openSUSE Forums Administrator
Forum Use Terms & Conditions at http://tinyurl.com/openSUSE-T-C

Hey Jim, no worries at all!

Actually, as I mentioned, I have been fighting with this for many nights without success. That’s why I decided to ask if anyone has done it before and could share an easier path.
The problem is that I can’t find a useful “manual” which explains the basics first. The info on the topic is really scattered around the web, and it seems to be exchanged mainly between experts who talk to each other in some pretty advanced code-language :slight_smile:

Surely I will research this more in depth when I have time, and hopefully, if I find something, I will share it here.

Thank you friends.

I went through some of the links and noted how complex and difficult achieving this could be.

Perhaps you may want to consider something like I did here. My main work desktop (a modest i3, 16GB) is connected to two 24" monitors. Recently an older i3 4GB desktop became redundant, but it still has a lot of grunt in it, so I put it alongside the main box on the same network, hooked it to the secondary monitor’s DVI input (the HDMI is connected to the main box), and installed synergy on both machines to share the keyboard and mouse. Works a treat, and synergy also exists for Windows, so one box could be Windows native (mine are both openSUSE), and files are shared through the network.
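For reference, the synergy layout for a setup like that is only a few lines. A hypothetical ~/.synergy.conf for two boxes named main and spare (the hostnames are placeholders - use your own):

```shell
# Write a minimal two-screen synergy layout ("spare" sits to the right of "main"):
cat > "$HOME/.synergy.conf" <<'EOF'
section: screens
    main:
    spare:
end

section: links
    main:
        right = spare
    spare:
        left = main
end
EOF

# On the box with the keyboard and mouse:  synergys --config ~/.synergy.conf
# On the other box:                        synergyc main
```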

Later I installed a third, smaller monitor I had in storage, both to avoid the source switching on the second monitor and to put the monitor to use, as they tend to fail if unused for long periods.

There’s the cost of the second box, of course, but perhaps you have parts around you could reuse, so the outlay wouldn’t be too painful. For example, my main box had 24GB RAM, but switching 8GB to the second box had no perceived impact on the main box’s performance - in my use case, of course.

Just an idea, good luck!

Thanks for sharing, brunomcl. I have thought about that too.

Indeed that might be easier from a software viewpoint, but it surely isn’t financially suitable for me. I need a lot of RAM and CPU power, so assembling a second box from spare parts is not an option (and I don’t have any spare parts anyway).