Move from Bumblebee to suse-prime


I have the following laptop:

  • Model: Acer Aspire VN7-792G
  • Processor: i7-6700HQ
  • Gfx: Geforce GTX-960M 2GB

Operating System:

  • Leap 42.1
  • Kernel: Standard 4.1.20

Currently I have Bumblebee installed, which works almost fine except for some dmesg messages during use.
Now I have heard of the suse-prime project, and that suse-prime could be more suitable for playing games with Steam.

If this is the case, I would appreciate it if someone could guide me through moving from Bumblebee to suse-prime, so that I don't miss any steps that could break my system.

These are the steps I did to get Bumblebee installed:

  • zypper in -t pattern devel_kernel
  • zypper ar Bumblebee
  • zypper ref
  • zypper in bbswitch bumblebee dkms nvidia-bumblebee nvidia-bumblebee-32bit primus VirtualGL-devel VirtualGL-32bit bbswitch-kmp-default powertop
  • systemctl enable dkms
  • systemctl enable bumblebeed
  • usermod -a -G video,bumblebee user1
  • edit /etc/bumblebee/bumblebee.conf ; replace auto -> bbswitch
  • mkinitrd && reboot
  • systemctl status bumblebeed

I had to add the following kernel parameters in order to avoid freezes due to missing Skylake support:
nouveau.modeset=0 elevator=deadline i915.preliminary_hw_support=1 i915.enable_rc6=0
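For reference, this is a sketch of how such parameters are usually made persistent on openSUSE, assuming the default GRUB2 setup ("splash=silent quiet" here only stands in for whatever options are already on the line):

```shell
# /etc/default/grub -- append the parameters to the existing line:
GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet nouveau.modeset=0 elevator=deadline i915.preliminary_hw_support=1 i915.enable_rc6=0"

# then regenerate the bootloader config as root:
grub2-mkconfig -o /boot/grub2/grub.cfg
```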


  • What are the major differences between Bumblebee and suse-prime ?
  • How do I start an application using my discrete gfx-card ?
  • Is it necessary to run nvidia-settings in order to set up Antialiasing, for instance ?

Hi, a lot of questions… so let’s start by pointing you to this guide

Please note that while Bumblebee on Leap is rock solid in my experience, suse-prime may still have stability issues and perhaps has not been extensively tested on Skylake chips: be prepared to tinker with your system to get it working. (I found it safer to set up a new test partition with a fresh install for prime while preserving my Bumblebee-enabled setup for day-to-day work.)
Please be aware that you have to uninstall Bumblebee before installing suse-prime (but keep the file you find in /usr/src to avoid downloading it again for suse-prime).

Main difference: Bumblebee adds a latency of about 0.8 ms when redrawing the display, because the video buffer has to be copied from the rendering GPU to the integrated graphics driving the display. That may be noticeable in games where sometimes little graphics computation is actually done but fast redrawing is needed.
Other than that, I saw no clear advantage of suse-prime over Bumblebee: some heavy-graphics benchmarks show better results with one, some with the other. There is no clear winner when the redraw rate is somewhere below 1000 FPS, while suse-prime can reach 7000 FPS or so with very light benchmarks (and I guess that your setup with a GTX 960M is not far from mine…).

To select the Nvidia GPU you have to run “prime-select nvidia” as superuser, then log out and back in.
Then, before shutting down or rebooting, you have to run “prime-select intel”, since you are likely to boot to a black screen if you shut down in “nvidia mode”.

You have to start “nvidia-settings” to tweak the graphics setup, although I saw no great effect in my limited tests so far.

Please write back if you need more details.

Thank you very much for the quick response.

I was thinking about suse-prime and its better performance because the game Doom4@Steam will have heavy performance requirements.

I try to keep my installation safe and clean and avoid adding any third-party repos or tinkering with my system.
I already had a lot of trouble working around the Skylake issue, which froze my system, and also with that annoying wifi card (QCA6174).

However, I have no concerns about giving suse-prime a try, except that I always have to log out and log in again whenever I need the Nvidia card enabled. This is very inconvenient and feels more like a hack.

The log-out/log-in cycle is needed because Xorg does not allow switching the render device on the fly.
When you are not on battery you could leave the Nvidia GPU engaged all the time, and possibly put the “prime-select intel” command in some boot-up and shutdown scripts just to be sure to reboot in “intel mode”.
Another trick is, before even logging in to the desktop, to go to the console (CTRL+ALT+F1), log in as root, issue the “prime-select nvidia” command, then switch to the graphical login (CTRL+ALT+F7) and log in with your normal user. Remember to switch back to intel on the console before shutting down…
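The shutdown-script idea could be sketched as a small systemd unit whose ExecStop runs while the system shuts down. This is a hypothetical, untested sketch; the unit name and the prime-select path are assumptions, so verify them on your install:

```ini
# /etc/systemd/system/prime-intel-on-shutdown.service (hypothetical)
[Unit]
Description=Switch back to intel mode before shutdown

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# ExecStop of a oneshot service runs during shutdown
ExecStop=/usr/bin/prime-select intel

[Install]
WantedBy=multi-user.target
```

Enable it once with “systemctl enable prime-intel-on-shutdown.service”.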

Hi OrsoBruno,

thanks for the tricks, I will test them.
But I’m surprised that selecting nvidia on the F1 console works although Xorg is already running on the F7 graphical target.
I thought that Xorg needed to be restarted.


Going forward, I would like to move from Bumblebee to suse-prime. But I’m not sure whether I have to remove all the packages I installed for Bumblebee, or just a few of them, before installing suse-prime.
At that time I installed the following packages:

  • bbswitch
  • bumblebee
  • dkms
  • nvidia-bumblebee
  • nvidia-bumblebee-32bit
  • primus
  • VirtualGL-devel
  • VirtualGL-32bit
  • bbswitch-kmp-default

I guess that I have to uninstall the following packages:

  • bumblebee
  • nvidia-bumblebee
  • nvidia-bumblebee-32bit

Afterwards the NVIDIA drivers:
Then, suse-prime from

Is there anything I have to consider ?

Hi, you have to **uninstall “primus”** as well; bbswitch and bbswitch-kmp-default are not strictly needed but should do no harm.
Copy the file in /usr/src to a safe place if you don’t want to download it again (it should be deleted when uninstalling nvidia-bumblebee); then copy it back to /usr/src before installing nvidia.ymp.
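Putting this together, here is a dry-run sketch of the removal step: it only prints the commands for review (the /usr/src glob is a guess; check which driver file is actually there before copying):

```shell
# Print the migration commands instead of executing them; run them by hand
# as root once reviewed.
migration_plan() {
  printf '%s\n' \
    'cp -a /usr/src/nvidia* /root/nvidia-src-backup/' \
    'zypper rm bumblebee nvidia-bumblebee nvidia-bumblebee-32bit primus' \
    'systemctl disable bumblebeed'
}
migration_plan
```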

I have uninstalled primus and bumblebee, and installed nvidia.ymp and suse-prime.
I created the two scripts posted by Brunolab at

But unloading the nvidia modules did not work, even after executing “prime-select intel” in advance.
As a result it is not possible to turn off the discrete gfx card via /proc/acpi/bbswitch.

# modprobe -r nvidia_uvm nvidia
modprobe: FATAL: Module nvidia is in use.
# lsmod | grep nvidia
nvidia_modeset     749568   3
nvidia                10096640  80  nvidia_modeset
drm                      385024  6    i915,drm_kms_helper,nvidia

However, I got the following results using glxspheres:
2222.1 frames/sec - 2480 Mpixels/sec

Congratulations, these are fake numbers IMHO, but they confirm you are using the Nvidia GPU via prime; check glmark2 for something more meaningful, available on software.opensuse.org thanks to malcolmlewis.

Something changed in the Nvidia drivers in the 361.xx and 364.xx series; maybe the guide was written by Brunolab for the 352.xx series and should be updated; I didn’t check that myself, sorry.

Have you changed the file /etc/X11/xdm/Xsetup accordingly?
After calling prime-select intel, did you log out from KDE and log in again in order to restart the X-Server?
This is the necessary procedure to activate the intel or nvidia graphics after calling prime-select intel | nvidia.
Only by restarting the X server after calling prime-select intel does the nvidia kernel module go out of actual use; only then can it be unloaded and the card switched off.

You are right about some changes in the nvidia module 361.42.
Apparently, together with the nvidia module, not only nvidia_uvm but also nvidia_modeset gets loaded now.
Therefore nvidia_modeset must also be unloaded before the nvidia module can be unloaded and the nvidia card switched off altogether.
So the script now should look like this:

# unload the Nvidia kernel modules (nvidia_modeset and nvidia_uvm first, then nvidia)
modprobe -r nvidia_modeset nvidia_uvm nvidia
# power off the discrete card via bbswitch
tee /proc/acpi/bbswitch <<<OFF

Right after I start “glmark2”, I get a “Segmentation fault”.
I noticed that there are some bug reports on regarding this issue.

It is a real shame that I cannot compare the performance of Bumblebee and suse-prime this way.
I think I will go with Bumblebee because it is more convenient.
If a game still does not run smoothly, I will switch to suse-prime.

glmark2 works here on Leap (Bumblebee + Nvidia 361.28) and Tumbleweed (suse-prime + Nvidia 364.15).
You might have better chances with GpuTest:

I have installed GpuTest and tested with suse-prime and also with bumblebee.
Despite the fact that I still got the error message “Segmentation fault” after each test, here are the results:

Antialiasing: off
Nvidia settings: Default
Nvidia version: 361.28
All tests run in fullscreen 1920x1080

suse-prime (test command: sh <test>.sh):

PixMark Piano…7…429
PixMark Volplosion…21…1285
TessMark X32…212…12731

Bumblebee (test command: optirun sh <test>.sh, PMMethod: bbswitch):

PixMark Piano…7…430
PixMark Volplosion…20…1215
TessMark X32…116…7007

Conclusion: the results of the first tests are only slightly different, but the difference in the last ones is significant.
I do not know whether this will really impact performance when playing games.

Similar results here on the GTX 960M. On Bumblebee you may get higher scores by:

vblank_mode=0 primusrun

or similar and find surprises like score: 15310 (255 FPS) for plot_3d.
Bumblebee is not that bad after all… and still a good choice for general purpose use.

primusrun does not work at all on my Skylake system: “Segmentation fault” is the only message I get from it, whereas optirun at least produces results before the segmentation fault.
This applies to fullscreen and windowed tests.

Does it make sense to use nvidia-settings in order to set up Antialiasing and/or Anisotropic Filtering ?

Hi OrsoBruno,

I did it.

  • uninstalled everything related to Bumblebee/suse-prime and bbswitch
  • installed the current stable standard kernel, version 4.5.2
  • installed bumblebee, dkms, bbswitch, bbswitch-kmp and the nvidia packages
  • since bbswitch-kmp did not match the new kernel, I installed the corresponding source package
  • used dkms to build and install the kernel module bbswitch.ko
  • mkinitrd
  • reboot
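The dkms step could look like this; the bbswitch version (0.8) is an assumption, so adjust it to the source package you actually installed. The commands are only printed here for review and should be run as root:

```shell
# Build and install bbswitch for the running kernel via dkms (printed for review).
mod=bbswitch/0.8
kernel=$(uname -r)
echo "dkms add $mod"
echo "dkms build $mod -k $kernel"
echo "dkms install $mod -k $kernel"
```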

Everything works as expected, but unfortunately only with optirun: approx. 325 frames/sec - 370 Mpixels/sec.
primusrun only reaches 60 frames/sec - 66 Mpixels/sec.
According to the output of optirun and primusrun, both commands are using the discrete gfx card, and /proc/acpi/bbswitch shows the card switched on.

What do you think ?

primusrun normally goes in sync with the monitor refresh rate, hence what you see: 60 frames/sec - 66 Mpixels/sec
To have it run at full speed for testing purposes try:

vblank_mode=0 primusrun glxspheres

You should see something between 300 and 400 frames/sec if all is working as expected.

Yes, that’s it !
“vblank_mode=0 primusrun /usr/bin/glxspheres” shows 415 fps - 460 Mpx/s.