nVidia + Cuda + Bumblebee + Codecs: "Recipe" not working on openSUSE 15.2

Hi guys.

I’m having issues making the [“Cuda + Nvidia + bumblebee + codecs ‘safe’ way”](https://forums.opensuse.org/showthread.php/537906-Cuda-Nvidia-bumblebee-codecs-quot-safe-quot-way) recipe work on a fresh install. :frowning:

(unfortunately I’m also not sure how long I’ll be able to keep on doing fresh installs, since this notebook has been “off duty” for too long: I’ll have to put it back to work soon)

I followed the instructions to the letter on my openSUSE 15.2 installation to have CUDA and bumblebee working with my GP107M (GTX1050M).

The outputs of my commands follow:

optirun --status
Bumblebee status: Ready (3.2.1). X inactive. Discrete video card is off.
optirun glxgears
[  182.087121] [ERROR]Cannot access secondary GPU - error: Could not load GPU driver

[  182.087150] [ERROR]Aborting because fallback start is disabled.
optirun glxspheres
[  189.007213] [ERROR]Cannot access secondary GPU - error: Could not load GPU driver

[  189.007258] [ERROR]Aborting because fallback start is disabled.
sudo lspci  |grep -i nvidia
01:00.0 VGA compatible controller: NVIDIA Corporation GP107M [GeForce GTX 1050 Mobile] (rev a1)
glxinfo | grep OpenGL
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) UHD Graphics 630 (Coffeelake 3x8 GT2) 
OpenGL core profile version string: 4.6 (Core Profile) Mesa 19.3.4
OpenGL core profile shading language version string: 4.60
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 19.3.4
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 19.3.4
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions:

Worse: “lsmod | grep nvidia” gives no output, and neither “/proc/driver/nvidia/version” nor any “/dev/nvi*” exists! :’(

Can anybody please help me here somehow? :slight_smile:

P.S.: Just for a bit of “context”: I had both bumblebee and nvidia (without cuda) working on this very machine up until last Monday, when I made some huge mistake during an update and decided (had) to reinstall it from scratch after a /home backup.

Hi
You have such a new card; if you install either suse-prime or switcheroo-control, as well as the nvidia drivers from the repo, you should get it working just fine.

https://en.opensuse.org/SDB:NVIDIA_SUSE_Prime

https://gitlab.freedesktop.org/hadess/switcheroo-control/

Or just prime render offload https://download.nvidia.com/XFree86/Linux-x86_64/435.17/README/primerenderoffload.html

note: you should not need to configure any xorg, just use the commands…
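A minimal sketch of the install step, assuming the nvidia repository is already added (package names as found in the Leap 15.2 repos):

# driver + OpenGL + compute libraries, plus the prime switcher
zypper in suse-prime x11-video-nvidiaG05 nvidia-glG05 nvidia-computeG05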

Hi Malcolm.

I decided not to use suse-prime because, from what I could understand, it would need logins and logouts. :frowning:

From what I could gather, switcheroo-control would suffer from the same drawback, am I right?

Prime render offload, on the other hand, is a solution I was unaware of, and it not only seems free of this drawback but also looks even easier than bumblebee: do you know if it works with CUDA software? Also, is there some sort of tutorial more specific to the case of openSUSE and/or CUDA?

Thanks a lot! lol!

What’s the driver version you’ve installed? Typically G04 has been giving trouble, and basically following my instructions and then installing the G05 package (which uninstalls G04 and “cuda”) fixes the problem. Also, CUDA ends up staying and working.

Also, bumblebee does work with CUDA: you just prefix the cuda-compiled executable with “optirun”. You don’t need anything special for compiling with NVCC.
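For example, a minimal sketch (the source and program names are made up here, just to show the flow):

# compile normally with nvcc; bumblebee plays no role at build time
nvcc -o my_cuda_app my_cuda_app.cu
# run on the discrete GPU through bumblebee
optirun ./my_cuda_app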

Can you also show

dmesg |grep -i nvidia

to see if there are any complaints about the Nvidia drivers at boot. Also, you need to make sure that your kernel version matches your bumblebee build.
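For example, a quick way to compare them (a sketch; the exact package names depend on which repos you used):

# running kernel
uname -r
# installed kernel module packages and their versions
zypper se -si bbswitch nvidia-gfxG05-kmp-default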

There may be a simple issue with the load state configured in your /etc/modprobe.d/50-bbswitch.conf

options bbswitch load_state=-1 unload_state=1

Try load_state=0 as well. The default is 0; -1 works for all of my Lenovos + the Alienware.
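That is, a sketch of the alternative to test in the same file:

# /etc/modprobe.d/50-bbswitch.conf -- try the default load state instead
options bbswitch load_state=0 unload_state=1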

Hi
Only suse-prime requires a logout/login; switcheroo and offload do not… In the GNOME DE, switcheroo offers a menu option to use the discrete card as well as the command line… and yes, it works with cuda if installed/present…


 switcherooctl launch glxinfo | grep "OpenGL renderer"
OpenGL renderer string: GeForce GT 1030/PCIe/SSE2

glxinfo | grep "OpenGL renderer"
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics P4000 (IVB GT2)

__NV_PRIME_RENDER_OFFLOAD=1 __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"
OpenGL renderer string: GeForce GT 1030/PCIe/SSE2


switcherooctl launch nvidia-settings

https://forums.opensuse.org/attachment.php?attachmentid=958&stc=1

https://forums.opensuse.org/attachment.php?attachmentid=960&stc=1


Hi SJLPHI

[quote="SJLPHI, post:4, topic:143591"]
What’s the driver version you’ve installed? Typically G04 has been giving trouble, and basically following my instructions and then installing the G05 package (which uninstalls G04 and “cuda”) fixes the problem. Also, CUDA ends up staying and working.
[/quote]

It is G05, certainly:

zypper se nvidia
Loading repository data...
Reading installed packages...

S | Name                           | Summary                                                               | Type
--+--------------------------------+-----------------------------------------------------------------------+--------
  | libnvidia-nscq-450             | NVSwitch Configuration and Query library                              | package
  | libnvidia-nscq-455             | NVSwitch Configuration and Query library                              | package
  | libnvidia-nscq-460             | NVSwitch Configuration and Query library                              | package
  | nvidia-computeG04              | NVIDIA driver for computing with GPGPU                                | package
i | nvidia-computeG05              | NVIDIA driver for computing with GPGPU                                | package
  | nvidia-diagnosticG04           | Diagnostic utilities for the NVIDIA driver                            | package
  | nvidia-fabricmanager-450       | Fabric Manager for NVSwitch based systems                             | package
  | nvidia-fabricmanager-455       | Fabric Manager for NVSwitch based systems                             | package
  | nvidia-fabricmanager-460       | Fabric Manager for NVSwitch based systems                             | package
  | nvidia-fabricmanager-devel-450 | Fabric Manager API headers and associated library                     | package
  | nvidia-fabricmanager-devel-455 | Fabric Manager API headers and associated library                     | package
  | nvidia-fabricmanager-devel-460 | Fabric Manager API headers and associated library                     | package
  | nvidia-gfxG04-kmp-default      | NVIDIA graphics driver kernel module for GeForce 400 series and newer | package
  | nvidia-gfxG04-kmp-preempt      | NVIDIA graphics driver kernel module for GeForce 400 series and newer | package
i | nvidia-gfxG05-kmp-default      | NVIDIA graphics driver kernel module for GeForce 600 series and newer | package
  | nvidia-gfxG05-kmp-preempt      | NVIDIA graphics driver kernel module for GeForce 600 series and newer | package
  | nvidia-glG04                   | NVIDIA OpenGL libraries for OpenGL acceleration                       | package
i | nvidia-glG05                   | NVIDIA OpenGL libraries for OpenGL acceleration                       | package
  | nvidia-texture-tools           | NVIDIA Texture Tools                                                  | package
  | pcp-pmda-nvidia-gpu            | Performance Co-Pilot (PCP) metrics for the Nvidia GPU                 | package
  | skelcd-EULA-NVIDIA-compute     | EULA for media                                                        | package
  | x11-video-nvidiaG04            | NVIDIA graphics driver for GeForce 400 series and newer               | package
i | x11-video-nvidiaG05            | NVIDIA graphics driver for GeForce 600 series and newer               | package

I was expecting it to run, since I had already made bumblebee work on 15.1 (without cuda, however). I’ll have more questions when compilation is needed… :stuck_out_tongue:

It gave me the following output:

dmesg | grep -i nvidia
[   23.172452] audit: type=1400 audit(1608399795.195:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe" pid=1170 comm="apparmor_parser"
[   23.172453] audit: type=1400 audit(1608399795.195:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe//kmod" pid=1170 comm="apparmor_parser"

I had tried that in previous installations, but I will give it a try again (as soon as the /home restoration from backup ends) and get back to you.

Thanks for the instructions on switcheroo; I had no idea it would work so much better than suse-prime.

Those screenshots are almost a tutorial… :wink:

Do you know where I could find similar instructions for offload? The link previously provided seems a bit “nvidia inner workings”, and I could not find anything as good in my searches.

Thanks a lot!

Hi
Have a read here (there are some more posts; use the advanced forum search for “offload” in the last three months):

https://forums.opensuse.org/showthread.php/540415-ASUS-A15-TUF-Gaming-506-IU-Laptop-Ryzen7-4800h-–running-linux-on-it-amp-hybrid-graphics-(AMD-Nvidia)

What I did was create an alias to start from the command line, but there’s no reason it can’t be scripted, with a desktop menu item created to start a particular application. So many ways to do things in linux :wink:
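For instance, a minimal sketch of such an alias (the name “nvrun” is made up here):

# in ~/.bashrc: prefix any command to render it on the NVIDIA card
alias nvrun='env __NV_PRIME_RENDER_OFFLOAD=1 __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0 __GLX_VENDOR_LIBRARY_NAME=nvidia'

Then, for example: nvrun glxinfo | grep "OpenGL renderer"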

Some applications should just use the card if it’s there, e.g. blender(?). I just start it here and it sees the card.

One note though: I run Tumbleweed, and this is a desktop, not a laptop; my laptop with dual graphics is all AMD…

I’m pretty sure there is a mismatch between your bumblebee/kernel/nvidia versions right now. You should make sure that you install nvidia G05 from the nvidia repository, using zypper in -r nvidia-g05…

As malcolmlewis’s recommendation suggests, bumblebee is unsupported and outdated; it’s more or less just me trying to keep it working, for my personal taste. The most advanced machine I have is a Lenovo T480 with an Nvidia MX150.

If you want me to work with you to get bumblebee set up on your computer, I’m happy to, but for future reference suse-prime is probably the way to go, as bumblebee doesn’t get very much support… it’s just me right now. I personally still choose bumblebee because it is the best option for my laptops with an intel CPU + Nvidia dGPU where some of the display connections are ported through the iGPU and others through the dGPU, and I need them both working at the same time, which is what bumblebee is best at. Not to mention, it’s also useful when I want to do something very specific with the dGPU for cuda.

Also, does

optirun glxspheres

run while logged in as su by any chance? There was an issue that I had to resolve due to power management.

Check out


/etc/modprobe.d/09-nvidia-modprobe-pm-G05.conf
#options nvidia NVreg_DynamicPowerManagement=0x01

Also, try running the optirun commands with the -v (verbose) option.
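For example, to see where the driver load fails:

optirun -v glxspheres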

Hi
I do wonder, with the newer hardware the user has, whether bumblebee won’t cut it any more?

I cannot really answer that. The MX150 is apparently slightly newer than the GTX 1050 (https://www.notebookcheck.net/GeForce-GTX-1050-Mobile-vs-GeForce-MX150_7503_8000.247598.0.html), so I am not sure whether that is a problem.

Well, that was interesting!
I installed suse-prime and ran “prime-select nvidia” as su - . I managed to really mess up the displays somehow, but eventually I got things so that I had stable displays on the HDMI and USB-C monitors.

The laptop lcd monitor did not display, but it was “there”; fortunately, most of the Primary screen attributes were shunted over to the HDMI screen.

I had read the SDB:NVIDIA_SUSE_Prime page but did not check the requirements in the procedure diligently enough. There was no xorg.conf, but there were a 20-displaylink.conf and a 90-nvidia.conf in xorg.conf.d that both have “ServerLayout”, “Device” and “Screen” sections in them. Should I rename them to *.tmp? If I do, is there anything else I should do?

Two steps forward and one sideways…
Brad

Please ignore the previous post…wrong thread.
Brad

Ok, I’ve got good news from the “bumblebee front”! rotfl!

Some context: since the recipe installation takes a long time and has many reboots, I usually went to the kitchen to get an extra glass of water (summer here) in the meantime.

The problem is that this interval was just enough for me not to see a scandalous blue screen at boot time (a Windows-BSOD shade of blue) that was something new to me: MOK! :stuck_out_tongue:

(I would suggest that it should be included in the recipe)

I read about it at https://en.opensuse.org/SDB:NVIDIA_drivers while looking for a solution. I just tried the gigantic “mokutil” command line, rebooted, chose Enroll MOK, Continue, Yes, and rebooted again.
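A sketch of that gigantic command line, as I understood it from the SDB page (the exact key path and filename depend on the driver version, so treat them as an assumption):

# import the driver package's signing key so Secure Boot will accept the nvidia modules
sudo mokutil --import /usr/share/nvidia-pubkeys/MOK-nvidia-gfxG05-460.27.04-default.der

And now: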

optirun --status
Bumblebee status: Ready (3.2.1). X inactive. Discrete video card is off.
optirun glxgears
22411 frames in 5.0 seconds = 4482.138 FPS
22366 frames in 5.0 seconds = 4473.068 FPS
22500 frames in 5.0 seconds = 4499.950 FPS
optirun glxspheres
Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
GLX FB config ID of window: 0xad (8/8/8/0)
Visual ID of window: 0x21
Context is Direct
OpenGL Renderer: GeForce GTX 1050/PCIe/SSE2
418.996853 frames/sec - 467.600488 Mpixels/sec
420.243253 frames/sec - 468.991471 Mpixels/sec
343.473713 frames/sec - 383.316664 Mpixels/sec
308.200603 frames/sec - 343.951873 Mpixels/sec

Strangely, “glxinfo | grep OpenGL” yields the same output:

glxinfo | grep OpenGL
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) UHD Graphics 630 (Coffeelake 3x8 GT2) 
OpenGL core profile version string: 4.6 (Core Profile) Mesa 19.3.4
OpenGL core profile shading language version string: 4.60
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 19.3.4
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 19.3.4
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions:

Also, “/proc/driver/nvidia/version” is still non-existent, and “lsmod | grep nvidia” gives me no output. :frowning:

However, looking for “/dev/nvi*”:

ls -la /dev/nvi*
crw-rw----+ 1 root video 238, 1 Dec 20 18:38 /dev/nvidia-uvm-tools

At least one file, but I remember that there should be more. Also, “dmesg | grep -i nvidia” now gives much more output:

dmesg |grep -i nvidia
[    1.426305] integrity: Loaded X.509 cert 'Local build for nvidia-gfxG05 460.27.04 on 2020-12-19: 5f561852256111ea49325f56d3536fbc4df207c7'
[   20.651208] nvidia: module license 'NVIDIA' taints kernel.
[   20.737335] nvidia-nvlink: Nvlink Core is being initialized, major device number 241
[   21.000232] nvidia 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
[   21.115737] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  460.27.04  Fri Dec 11 23:35:05 UTC 2020
[   24.176150] audit: type=1400 audit(1608500338.218:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe" pid=1160 comm="apparmor_parser"
[   24.176153] audit: type=1400 audit(1608500338.218:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe//kmod" pid=1160 comm="apparmor_parser"
[   25.426946] nvidia-uvm: Loaded the UVM driver, major device number 238.
[   26.700306] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  460.27.04  Fri Dec 11 23:24:19 UTC 2020
[   27.553518] [drm] [nvidia-drm] [GPU ID 0x00000100] Loading driver
[   27.553519] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:01:00.0 on minor 1
[   30.915209] [drm] [nvidia-drm] [GPU ID 0x00000100] Unloading driver
[   30.946439] nvidia-modeset: Unloading
[   31.412181] nvidia-uvm: Unloaded the UVM driver.
[   31.435948] nvidia-nvlink: Unregistered the Nvlink Core, major device number 241
[   31.564567]  cfg80211 mei_me acer_wireless acpi_pad intel_lpss_pci intel_lpss rfkill intel_pch_thermal mei btrfs libcrc32c xor raid6_pq dm_crypt algif_skcipher af_alg hid_logitech_hidpp hid_logitech_dj hid_generic usbhid uas usb_storage sg dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua i915 crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel rtsx_pci_sdmmc mmc_core i2c_algo_bit aesni_intel drm_kms_helper syscopyarea xhci_pci sysfillrect sysimgblt aes_x86_64 fb_sys_fops xhci_hcd glue_helper crypto_simd drm cryptd usbcore serio_raw rtsx_pci i2c_hid wmi video pinctrl_cannonlake pinctrl_intel button dm_mod bbswitch(O) efivarfs [last unloaded: nvidia]
[  180.619878] nvidia-nvlink: Nvlink Core is being initialized, major device number 241
[  180.620119] nvidia 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=none,decodes=none:owns=none
[  180.735604] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  460.27.04  Fri Dec 11 23:35:05 UTC 2020
[  181.252933] nvidia-uvm: Loaded the UVM driver, major device number 238.
[  181.308877] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  460.27.04  Fri Dec 11 23:24:19 UTC 2020
[  181.380779] [drm] [nvidia-drm] [GPU ID 0x00000100] Loading driver
[  181.380785] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:01:00.0 on minor 1
[  201.324156] [drm] [nvidia-drm] [GPU ID 0x00000100] Unloading driver
[  201.346922] nvidia-modeset: Unloading
[  201.880964] nvidia-uvm: Unloaded the UVM driver.
[  201.911652] nvidia-nvlink: Unregistered the Nvlink Core, major device number 241
[  212.572188] nvidia-nvlink: Nvlink Core is being initialized, major device number 241
[  212.572423] nvidia 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=none,decodes=none:owns=none
[  212.687959] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  460.27.04  Fri Dec 11 23:35:05 UTC 2020
[  213.206148] nvidia-uvm: Loaded the UVM driver, major device number 238.
[  213.259467] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  460.27.04  Fri Dec 11 23:24:19 UTC 2020
[  213.330913] [drm] [nvidia-drm] [GPU ID 0x00000100] Loading driver
[  213.330914] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:01:00.0 on minor 1
[  224.716655] [drm] [nvidia-drm] [GPU ID 0x00000100] Unloading driver
[  224.751746] nvidia-modeset: Unloading
[  225.325689] nvidia-uvm: Unloaded the UVM driver.
[  225.356886] nvidia-nvlink: Unregistered the Nvlink Core, major device number 241

Later I’ll try a test compilation for cuda. :wink: Also, I’ll see if it keeps working across some reboots.

Two questions still however:

Given this huge change in the scenario (sorry for my mistake), and looking at those outputs: should I look at something else to make certain that bumblebee is working properly?

Moreover, I’ll give prime offload a try later too: it should not have any incompatibility issues with the bumblebee configuration, am I right?

Thanks a lot anyway already!

It looks like you got your setup compiled and working. Yes, you need to register the keys with mokutil when installing kernels with Nvidia/bumblebee. Maybe I should add that to my instruction set, as you are recommending.

If you want glxinfo of your Nvidia card, run

optirun glxinfo |grep OpenGL

I don’t know about the version file, but try grep -i nvidia, because it may appear as Nvidia in lsmod.

No, optirun glxspheres should be enough to show that bumblebee is working correctly. Also, you can download the cuda samples, assuming the path is set and you installed cuda 11.0:

cuda-install-samples-11.0.sh

then compile and execute them.
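A sketch of the whole flow (the target directory and sample path are my assumptions):

# copy the samples tree into $HOME (the script ships with the CUDA toolkit)
cuda-install-samples-11.0.sh ~
cd ~/NVIDIA_CUDA-11.0_Samples/1_Utilities/deviceQuery
make
optirun ./deviceQuery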

Be warned, though, that some samples, such as smokeParticles, compile, but the default memory allocation for the GPU is out of bounds and they will not run without modification to the example source code.

I cannot say anything about prime offload since I have 0 experience with it.

Hi SJLPHI.

Since the bumblebee method seems to be working, and it appears that the offload method would possibly require its removal, I went forward with bumblebee.

deviceQuery seems ok:

CUDA Device Query (Runtime API) version (CUDART static linking) 

Detected 1 CUDA Capable device(s) 

Device 0: "GeForce GTX 1050" 
  CUDA Driver Version / Runtime Version          11.2 / 11.2 
  CUDA Capability Major/Minor version number:    6.1 
  Total amount of global memory:                 4040 MBytes (4236312576 bytes) 
  ( 5) Multiprocessors, (128) CUDA Cores/MP:     640 CUDA Cores 
  GPU Max Clock rate:                            1493 MHz (1.49 GHz) 
  Memory Clock rate:                             3504 Mhz 
  Memory Bus Width:                              128-bit 
  L2 Cache Size:                                 524288 bytes 
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384) 
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers 
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers 
  Total amount of constant memory:               65536 bytes 
  Total amount of shared memory per block:       49152 bytes 
  Total shared memory per multiprocessor:        98304 bytes 
  Total number of registers available per block: 65536 
  Warp size:                                     32 
  Maximum number of threads per multiprocessor:  2048 
  Maximum number of threads per block:           1024 
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64) 
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535) 
  Maximum memory pitch:                          2147483647 bytes 
  Texture alignment:                             512 bytes 
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s) 
  Run time limit on kernels:                     No 
  Integrated GPU sharing Host Memory:            No 
  Support host page-locked memory mapping:       Yes 
  Alignment requirement for Surfaces:            Yes 
  Device has ECC support:                        Disabled 
  Device supports Unified Addressing (UVA):      Yes 
  Device supports Managed Memory:                Yes 
  Device supports Compute Preemption:            Yes 
  Supports Cooperative Kernel Launch:            Yes 
  Supports MultiDevice Co-op Kernel Launch:      Yes 
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0 
  Compute Mode: 
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) > 

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.2, CUDA Runtime Version = 11.2, NumDevs = 1 
Result = PASS

However: The samples never run when a graphical interface is to be shown. :frowning:

I tried compiling twice (first with “optirun make”, then just “make”). The best testing example seems to be “NVIDIA_CUDA-11.2_Samples/5_Simulations/nbody”: it works flawlessly with the flags -cpu (obviously off the gpu) and -benchmark (which is not supposed to open any window). 900 times faster is a very good start…

Unfortunately, whenever I try it without either flag, be it directly “./nbody” or “optirun ./nbody”, it just flashes a window and fails:

optirun ./nbody 
Run "nbody -benchmark -numbodies=<numBodies>]" to measure performance. 
        -fullscreen       (run n-body simulation in fullscreen mode) 
        -fp64             (use double precision floating point values for simulation) 
        -hostmem          (stores simulation data in host memory) 
        -benchmark        (run benchmark to measure performance)  
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)  
        -device=<d>       (where d=0,1,2.... for the CUDA device to use) 
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation) 
        -compare          (compares simulation results running once on the default GPU and once on the CPU) 
        -cpu              (run n-body simulation on the CPU) 
        -tipsy=<file.bin> (load a tipsy model file for simulation) 

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled. 

> Windowed mode 
> Simulation data stored in video memory 
> Single precision floating point simulation 
> 1 Devices used for simulation 
GPU Device 0: "Pascal" with compute capability 6.1 

> Compute 6.1 CUDA device: [GeForce GTX 1050] 
CUDA error at bodysystemcuda_impl.h:186 code=219(cudaErrorInvalidGraphicsContext) "cudaGraphicsGLRegisterBuffer(&m_pGRes[i], m_pbo[i], cudaGraphicsMapFlagsNone)"

More interestingly: I played around a bit with the optirun flag -b. Options “auto” and “virtualgl” fail in the same (standard) way as above. Option “none” changes the error slightly:

CUDA error at bodysystemcuda_impl.h:186 code=999(cudaErrorUnknown) "cudaGraphicsGLRegisterBuffer(&m_pGRes[i], m_pbo[i], cudaGraphicsMapFlagsNone)"

However, option “primus” opens a failed window (showing nothing) and gets locked until a CTRL+C:

Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance. 
        -fullscreen       (run n-body simulation in fullscreen mode) 
        -fp64             (use double precision floating point values for simulation) 
        -hostmem          (stores simulation data in host memory) 
        -benchmark        (run benchmark to measure performance)  
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)  
        -device=<d>       (where d=0,1,2.... for the CUDA device to use) 
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation) 
        -compare          (compares simulation results running once on the default GPU and once on the CPU) 
        -cpu              (run n-body simulation on the CPU) 
        -tipsy=<file.bin> (load a tipsy model file for simulation) 

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled. 

> Windowed mode 
> Simulation data stored in video memory 
> Single precision floating point simulation 
> 1 Devices used for simulation 
GPU Device 0: "Pascal" with compute capability 6.1 

> Compute 6.1 CUDA device: [GeForce GTX 1050] 
X Error of failed request:  BadMatch (invalid parameter attributes) 
  Major opcode of failed request:  152 (GLX) 
  Minor opcode of failed request:  11 (X_GLXSwapBuffers) 
  Serial number of failed request:  40 
  Current serial number in output stream:  41 
primus: warning: dropping a frame to avoid deadlock 
primus: warning: timeout waiting for display worker 
primus: warning: recreating incompatible pbuffer 
^C[60107.986480] [WARN]Received Interrupt signal.

Does anybody have any idea on this one (while this is seemingly becoming really OT now)?

P.S.: for some reason, pymol is also suffering and does not run with optirun… :(

Hi
Try offload instead? Or just run it without optirun and see if it finds the gpu.


./nbody -benchmark

Run "nbody -benchmark -numbodies=<numBodies>]" to measure performance.
    -fullscreen       (run n-body simulation in fullscreen mode)
    -fp64             (use double precision floating point values for simulation)
    -hostmem          (stores simulation data in host memory)
    -benchmark        (run benchmark to measure performance) 
    -numbodies=<N>    (number of bodies (>= 1) to run in simulation) 
    -device=<d>       (where d=0,1,2.... for the CUDA device to use)
    -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
    -compare          (compares simulation results running once on the default GPU and once on the CPU)
    -cpu              (run n-body simulation on the CPU)
    -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Pascal" with compute capability 6.1

> Compute 6.1 CUDA device: [GeForce GT 1030]
3072 bodies, total time for 10 iterations: 2.926 ms
= 32.258 billion interactions per second
= 645.152 single-precision GFLOP/s at 20 flops per interaction


switcherooctl launch ./nbody

or

__NV_PRIME_RENDER_OFFLOAD=1 __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0 __GLX_VENDOR_LIBRARY_NAME=nvidia ./nbody

Shows;

https://forums.opensuse.org/attachment.php?attachmentid=962&stc=1
