Did you just download the rpm with no conflicts found, or was there something more to it? I'm afraid it might break my X if it doesn't work. But that would already be a great solution!
Unfortunately, such a performance drop is a no-go for me: I do need CUDA for its performance in some computationally expensive calculations (gromacs and gamess), and these are the benchmarks for future CPU/GPU acquisitions for several people here… But thanks anyway; it would be a much cleaner solution (than mixing 15.1 and 15.2 version files)!
I do hope that solving it for optirun paves the way to making offload work too…
For me this is easy because I have many machines to pass it back and forth between. In your case, you should install VirtualGL through openSUSE Software from the official repos, making sure not to keep the Leap 15.1 repository afterwards. After that, copy your libvglfaker with
then make sure the Leap 15.1 repository is no longer in your zypper/YaST list and update, which will upgrade VGL to the broken version. From there on you can do
I think the long-term solution is that VGL 2.6.4+ will fix this and/or an official developer will fix it in the next update. It seems that the bumblebee repository has its own VGL 2.6.4, but I cannot tell you whether that would work.
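The copy-and-restore idea above can be sketched as a small shell snippet. The library path is an assumption (the usual openSUSE x86_64 location) and the backup location is my own choice; adjust both to your system:

```shell
# Assumed install path of the VGL faker library on openSUSE x86_64.
VGLFAKER=/usr/lib64/libvglfaker.so
BACKUP="$HOME/libvglfaker.so.15.1"

# 1. While the working (15.1) VirtualGL is still installed, keep a copy:
if [ -e "$VGLFAKER" ]; then
    cp "$VGLFAKER" "$BACKUP"
fi

# 2. Drop the Leap 15.1 repo and update; zypper then pulls the broken 15.2 VGL:
#      sudo zypper rr <15.1-repo-alias>
#      sudo zypper dup

# 3. Put the working library back over the broken one:
if [ -e "$BACKUP" ]; then
    sudo cp "$BACKUP" "$VGLFAKER"
fi
```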
To be exactly clear, I do not recommend this, but you can copy, paste, and save the following as vgl151.ymp:
<metapackage xmlns:os="http://opensuse.org/Standards/One_Click_Install" xmlns="http://opensuse.org/Standards/One_Click_Install">
<group distversion="openSUSE Leap 15.2">
<repositories>
<repository recommended="true">
<name>openSUSE:Leap:15.1</name>
<summary>openSUSE Leap 15.1</summary>
<description></description>
<url>http://download.opensuse.org/distribution/leap/15.1/repo/oss/</url>
</repository>
</repositories>
<software>
<item>
<name>VirtualGL</name>
<summary>A toolkit for displaying OpenGL applications to thin clients</summary>
<description>VirtualGL is a library which allows most Linux OpenGL applications to be
remotely displayed to a thin client without the need to alter the
applications in any way. VGL inserts itself into an application at run time
and intercepts a handful of GLX calls, which it reroutes to the server's
display (which presumably has a 3D accelerator attached.) This causes all
3D rendering to occur on the server's display. As each frame is rendered
by the server, VirtualGL reads back the pixels from the server's framebuffer
and sends them to the client for re-compositing into the appropriate X
Window. VirtualGL can be used to give hardware-accelerated 3D capabilities to
VNC or other remote display environments that lack GLX support. In a LAN
environment, it can also be used with its built-in motion-JPEG video delivery
system to remotely display full-screen 3D applications at 20+ frames/second.
VirtualGL is based upon ideas presented in various academic papers on
this topic, including "A Generic Solution for Hardware-Accelerated Remote
Visualization" (Stegmaier, Magallon, Ertl 2002) and "A Framework for
Interactive Hardware Accelerated Remote 3D-Visualization" (Engel, Sommer,
Ertl 2000.)</description>
</item>
</software>
</group>
</metapackage>
then double-click it to run, and be sure not to stay subscribed to the repository after it downgrades VirtualGL.
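For those who prefer plain zypper over one-click install, the same downgrade can be done on the command line. The repo alias is made up here, and the URL is the one from the .ymp above; this is an untested sketch, guarded so it is a no-op on systems without zypper:

```shell
REPO=leap151-oss   # throwaway repo alias (my own choice)

if command -v zypper >/dev/null 2>&1; then
    # Add the Leap 15.1 OSS repo (URL taken from the .ymp above):
    sudo zypper ar http://download.opensuse.org/distribution/leap/15.1/repo/oss/ "$REPO"

    # Install the older VirtualGL, explicitly allowing a version downgrade:
    sudo zypper in --oldpackage --from "$REPO" VirtualGL

    # Lock the package so a later update does not immediately re-upgrade it:
    sudo zypper al VirtualGL

    # Remove the repo again so nothing else gets mixed in from 15.1:
    sudo zypper rr "$REPO"
fi
```

The package lock replaces "be sure not to stay subscribed": even with the repo gone, a plain update would otherwise offer the 15.2 VirtualGL again.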
I’ll try this solution as soon as possible! Let’s hope.
From what I gather from your discussions with Malcolm, this loading/unloading would be the reason offloading and primus are not working here? I tried stopping the bumblebeed service for that matter, but it didn't work. What would actually be needed to make it work?
Anyway: Malcolm, in case my command outputs were missed (previous page), I'm reproducing them below:
optirun ./nbody runs with proper graphics and an astonishing ~300-400 GFLOP/s with 5120 bodies (the same parameters on the CPU yielded only 1 GFLOP/s). Doubling the number of bodies reaches ~800 GFLOP/s! rotfl!
I'll reboot to see if everything keeps working, just in case, and afterwards run the other tests. I'll come back here to report!
P.S.: Now that we know that bumblebee loads and unloads the device, it was to be expected that offloading still wouldn't work. Still, I tested it and got the same error.
Working like a charm now with bumblebee and optirun (a top speed of ~1.4 TFLOP/s, apparently). rotfl! rotfl!
I would really like to give primus offload a try (if, of course, it does not break the current installation).
SJLPHI, one question: how did you find out that there was an issue in VirtualGL? I was expecting something along the lines of "out of bounds" for the error you previously described for the "smoke test" (which also works beautifully, by the way), and unless I'm mistaken there is no message in my outputs complaining loudly about any library (especially that one specifically). So, how?
Thanks a lot, all of you!
And Malcolm, I'm still open to making some attempts at primus offload (as long as it does not risk the rest of the installation) if you are still willing (though probably only after Xmas now).
I am glad that you got it all sorted out. To answer your question… I did a lot of reading online, looking at the errors thrown by
sudo systemctl status bumblebeed
Then I went on to check porting nvidia-settings:
optirun -vv nvidia-settings -c :8
which ended up returning an error saying that libvglfaker.so has the undefined symbol glXGetProcAddressARB,
similar to https://www.gitmemory.com/issue/VirtualGL/virtualgl/139/690349348
(By the way, VGL_VERBOSE=1 does work for nvidia-settings, but it does not for CUDA graphics porting.) That had me checking the dependencies of libvglfaker.so.
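The undefined-symbol check can be reproduced with standard binutils; the library path below is an assumption (the usual openSUSE x86_64 location), and the snippet is a no-op if the library is not present:

```shell
LIB=/usr/lib64/libvglfaker.so   # assumed VGL faker location

if [ -e "$LIB" ]; then
    # nm -D lists the dynamic symbol table; an undefined symbol shows type "U".
    nm -D "$LIB" | grep glXGetProcAddressARB

    # ldd -r additionally performs relocations against the library's
    # dependencies and reports any symbols that stay unresolved at load time:
    ldd -r "$LIB" 2>&1 | grep -i undefined
fi
```

If glXGetProcAddressARB shows up as "U" in the nm output but none of the libraries ldd pulls in define it, any program that dlopens the faker (as VirtualGL's interposer setup does) will fail with exactly the undefined-symbol error above.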
Long story short: you basically stumbled upon a problem I had learned to work around, so I decided to do some research to fix it, and well… we have a temporary solution until VGL gets patched.
Hi
I have an HP Pavilion laptop with an NVIDIA GeForce 8400M GS card.
I've installed openSUSE Leap 15.2 with KDE.
I would like to install the NVIDIA driver, but I see two options; which one should I choose, the 400 series (G04) or the 600 series (G05)?
palomin:~ # zypper se x11-video-nvidiaG0*
Loading repository data…
Reading installed packages…

S | Name                | Summary                                                 | Type
--+---------------------+---------------------------------------------------------+--------
  | x11-video-nvidiaG04 | NVIDIA graphics driver for GeForce 400 series and newer | package
  | x11-video-nvidiaG05 | NVIDIA graphics driver for GeForce 600 series and newer | package
The chances of breaking things are low at this point. Just make sure that after the upgrade nothing creates anything NVIDIA-related in /etc/xorg.conf.d/; otherwise, just refer to the original instructions I've compiled. In the worst case, feel free to create a new thread about it.
Yes, you are absolutely right. Sorry, I have been sloppy about it lately; overwhelmed by my workload. Also, for your information, your computer will still function even if you delete everything in
/etc/X11/xorg.conf.d/
you may just have to reconfigure a couple of things here and there as a result.
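A slightly safer version of that "delete everything" step is to move the snippets aside so they can be restored if X misbehaves after the reboot; the backup location below is my own choice, not from the post:

```shell
CONF_DIR=/etc/X11/xorg.conf.d
BACKUP_DIR="$HOME/xorg.conf.d.bak"

# Move the config snippets aside instead of deleting them outright;
# X falls back to autodetection when the directory is empty.
if [ -d "$CONF_DIR" ] && [ -n "$(ls -A "$CONF_DIR" 2>/dev/null)" ]; then
    mkdir -p "$BACKUP_DIR"
    sudo mv "$CONF_DIR"/* "$BACKUP_DIR"/
fi
```

Restoring is just the reverse mv, which beats re-deriving a working xorg.conf snippet from scratch.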
I have a USB 3.1 NVMe SSD "traveling" Linux stick, and when I change from computer to computer I typically have to erase all contents of that directory and reboot.