NVIDIA Optimus without bumblebee

It’s getting better: I remembered something!
This makes the cursor move around on both screens (more on that later):


Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration" "on"
    Option "ConstrainCursor" "off"
EndSection

This way I was able to move the cursor around on both connected monitors on the Plasma 5 desktop. By maximizing and grabbing Systemsettings on the second screen I was able to drag it to the first, and then used Systemsettings - Display & Monitor to check the situation:
Running the desktop on the Intel shows one connected monitor, i.e. “Laptopscreen”. Running the desktop on the NVIDIA shows two connected monitors, i.e. “Laptopscreen” and “VGA-0”, the latter being the one the NVIDIA sends its output to. If I disable VGA-0 in there, log out and log back in, the desktop works OK:


knurpht@linux-zs4n:~> glxinfo | grep direct
direct rendering: Yes
    GL_AMD_multi_draw_indirect, GL_ARB_ES2_compatibility, 
    GL_ARB_direct_state_access, GL_ARB_draw_buffers, 
    GL_ARB_draw_indirect, GL_ARB_draw_instanced, GL_ARB_enhanced_layouts, 
    GL_ARB_indirect_parameters, GL_ARB_instanced_arrays, 
    GL_ARB_map_buffer_range, GL_ARB_multi_bind, GL_ARB_multi_draw_indirect, 
    GL_EXT_depth_bounds_test, GL_EXT_direct_state_access, 
    GL_NV_bindless_multi_draw_indirect, 
    GL_NV_bindless_multi_draw_indirect_count, GL_NV_blend_equation_advanced, 
    GL_AMD_multi_draw_indirect, GL_ARB_ES2_compatibility, 
    GL_ARB_direct_state_access, GL_ARB_draw_buffers, 
    GL_ARB_draw_indirect, GL_ARB_draw_instanced, GL_ARB_enhanced_layouts, 
    GL_ARB_indirect_parameters, GL_ARB_instanced_arrays, 
    GL_ARB_map_buffer_range, GL_ARB_multi_bind, GL_ARB_multi_draw_indirect, 
    GL_EXT_depth_bounds_test, GL_EXT_direct_state_access, 
    GL_NV_bindless_multi_draw_indirect, 
    GL_NV_bindless_multi_draw_indirect_count, GL_NV_blend_equation_advanced, 
    GL_EXT_gpu_shader5, GL_EXT_map_buffer_range, GL_EXT_multi_draw_indirect, 
knurpht@linux-zs4n:~> glxinfo | grep OpenGL
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce 610M/PCIe/SSE2
OpenGL core profile version string: 4.4.0 NVIDIA 352.55
OpenGL core profile shading language version string: 4.40 NVIDIA via Cg compiler
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.5.0 NVIDIA 352.55
OpenGL shading language version string: 4.50 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.1 NVIDIA 352.55
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
OpenGL ES profile extensions:
knurpht@linux-zs4n:~> 

sddm still lives off-screen, though.

Am I wrong in thinking that we need to tell the NVIDIA not to use the VGA-0 display before sddm starts?

FYI: there is no actual second monitor, just the laptop and its two GPUs.

No, you’re not wrong. Your adjustments in “Systemsettings - Display & Monitor” are simply a workaround applied to the KDE environment. You want proper behaviour on a system-wide basis, and to get that you need the nvidia kernel driver to behave properly (which it isn’t). Note that you should be able to gloss over the problem via nvidia-settings, by saving a display server config that doesn’t use the phantom display device. However, that is once again a workaround; the real issue is the nvidia driver not working properly in the first place. The correct solution is to file a bug with them and get them to fix it, “it” being the detection of devices that don’t exist (VGA-0). This is not a new problem to them, so they should be able to track it down and patch it.

I got that far in the meantime. Still, I think it should be possible to use xrandr or xorg.conf.d/90-nvidia.conf to stop this behaviour, even though that too is a workaround.

It is:

  • with xrandr, you could set the LVDS as the primary, or turn off the phantom VGA output (see the sketch below)
  • with xorg.conf, you could configure the LVDS as the primary
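
A rough sketch of those xrandr calls (the output names LVDS-0 and VGA-0 are assumptions taken from this thread; check what xrandr itself lists on your machine):

# make the laptop panel the primary output
xrandr --output LVDS-0 --primary

# or simply switch the phantom output off
xrandr --output VGA-0 --off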

This works for avoiding the second output:

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration" "on"
    Option "ConstrainCursor" "off"
    Option "UseDisplayDevice" "none"
EndSection

But still no sddm showing.

Try putting

# Do not move up. Only now xrandr shows the outputs
lvds=$(xrandr | grep -i -e "lvds" -e "edp" | head -n1 | cut -d " " -f 1)

xrandr --output "$lvds" --off
xrandr --output "$lvds" --auto

at the end of /etc/prime/prime-offload.sh.

Bo

I did not look at which point you have your script run, but for the desired behaviour to take effect at boot, the suggested workarounds have to be applied before the greeter launched by the display manager starts up. You can do that either by configuring the display server itself (i.e. X in this case, via an xorg.conf-type file) or by putting the appropriate xrandr commands in a script that the display manager runs.
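
For the display manager route, sddm runs a display setup script before its greeter comes up (the DisplayCommand setting in sddm.conf, which commonly points to /usr/share/sddm/scripts/Xsetup). A minimal sketch, assuming that default path and the VGA-0 name from this thread:

#!/bin/sh
# /usr/share/sddm/scripts/Xsetup -- runs before the sddm greeter starts
xrandr --output VGA-0 --off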

Regarding the xrandr commands suggested, I don’t see how they would take care of the phantom VGA-0 output that the nvidia driver is creating, deeming connected and, worse, directing output to. Rather, I’d simply run "xrandr --outuput VGA-0 --off".

The lines originate from nvidia-prime. I think turning the LVDS off and on again may have an effect.

It would be nice if somebody fixed it and did a pull request on GitHub. I only have a TV at home as an external monitor (so HDMI).

Bo

My thoughts exactly, except for the outuput :slight_smile:
It didn’t work; now trying

xrandr --output VGA-0 --off

at the end of the offload script.
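
Put together, the tail of /etc/prime/prime-offload.sh would then look roughly like this (only a sketch that combines the two snippets from this thread, not the script as shipped):

# Do not move up. Only now xrandr shows the outputs
lvds=$(xrandr | grep -i -e "lvds" -e "edp" | head -n1 | cut -d " " -f 1)

# toggle the panel off and back on so the driver re-detects the outputs
xrandr --output "$lvds" --off
xrandr --output "$lvds" --auto

# and drop the phantom output the nvidia driver invents
xrandr --output VGA-0 --off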

Nice initiative! I haven’t tested it yet; my Optimus system is legacy and I’m still stuck at 13.1.
However, I looked into the prime-select script and I’m wondering whether it will really work as intended. Here are my points.

cat <<< '/usr/X11R6/lib64
/usr/X11R6/lib
      ' > /etc/ld.so.conf.d/nvidia-gfxG04.conf

At the beginning of the script I would make sure that no /etc/ld.so.conf.d/nvidia-gfxG* file exists; note that it will be recreated on a driver update, so it is better to check every time. Besides, this also works for G03, and one day there will be a G05.

if [ -e /etc/ld.so.conf.d/nvidia-gfxG0*.conf ]; then
    rm /etc/ld.so.conf.d/nvidia-gfxG0*.conf
fi

and instead create the symlinks like this:

ldconfig /usr/X11R6/lib
ldconfig /usr/X11R6/lib64

Removing nvidia-gfxG04.conf and running ldconfig, as the script does here, will not remove the intended symlinks; you have to remove them explicitly.

rm -f /etc/ld.so.conf.d/nvidia-gfxG04.conf

echo "Running ldconfig"
ldconfig

Instead:

if [ -e /usr/X11R6/lib/libGL.so.1 ]; then
    rm /usr/X11R6/lib/libGL.so.1
    rm /usr/X11R6/lib64/libGL.so.1
fi

Hope this is understandable.
Many thanks!

This package is still experimental, so I decided to have it working only for newer Optimus cards (which I have) to begin with. I think we are still going for the ld.so.conf.d entry, since nvidia relies on that on a default install. I will hopefully improve the script over the weekend, and will let you know when it should be ready for legacy.

Bo

I made an update; it should work with legacy drivers. (At least I hope so, as I have no way of testing.)

Hope this is understandable.
Many thanks!

You’re welcome :slight_smile:

Bo

Prime-select is working quite well here on a fresh Leap 42.1 installation and the Nvidia performance is great, but there is one drawback:
When switching back to Intel graphics, the Nvidia kernel modules (nvidia, nvidia_uvm) are not unloaded and the Nvidia card is not powered off, as it is with bumblebee.
Therefore the power consumption remains high (~32 W), compared to bumblebee (~17 W).
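
That the driver stays resident after switching back is easy to verify (just an illustrative check):

lsmod | grep nvidia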

Even manually removing the nvidia modules does not reduce power consumption, since switching off the Nvidia card is not implemented yet.
If one wants to use the Nvidia card exclusively, prime-select is the way to go.

We could use bbswitch to turn the discrete card off. Ubuntu has some magic for this in its gpu-manager, but it is not that stable. It would be great if nvidia implemented it.

Bo

Situation still the same: the desktop runs on the NVIDIA, sddm is off-screen.
Could this be a systemd issue, i.e. sddm being started before the prime-offload script runs, hence using the wrong output?
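
One way to check the actual ordering would be to compare timestamps in the journal, for example (just a quick look; the grep pattern is only a guess at which messages are relevant):

journalctl -b | grep -iE 'sddm|prime'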

I tried bbswitch, installed the version from the main repositories together with bbswitch-kmp-default.
And it worked, though some manual input is necessary:

I changed the default settings of bbswitch with
nano /etc/modprobe.d/50-bbswitch.conf
to:

# load_state=1 powers the card on when the module loads;
# unload_state=-1 leaves the power state untouched when it is unloaded
options bbswitch load_state=1 unload_state=-1

I don’t know whether this is necessary, but with bumblebee my GTX 670MX would otherwise not get initialized correctly.

When in the intel state of prime-select:

modprobe -r nvidia_uvm nvidia
tee /proc/acpi/bbswitch <<<OFF

Control with:

cat /proc/acpi/bbswitch

It should return OFF.

Power consumption goes down from about 32 W to about 18 W. Intel graphics is in use, the Nvidia card is switched off.

To get the Nvidia card powered on again:

tee /proc/acpi/bbswitch <<<ON
modprobe nvidia nvidia_uvm
prime-select nvidia

Log off, log in again.
→ Nvidia is active and the performance is better than with bumblebee, glxspheres going wild.
Power consumption about 45 W.

Back to intel graphics:

prime-select intel

Log out, log in.
Continue with

modprobe -r nvidia_uvm nvidia
tee /proc/acpi/bbswitch <<<OFF

as described above for the intel state of prime-select.
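
Since the same commands come back every time, they could be wrapped in a small helper, roughly like this (a hypothetical script that just glues together the commands from this post; run it as root, and the log out/log in for prime-select is still needed):

#!/bin/bash
# gpu-switch.sh (hypothetical) - wraps the manual bbswitch/prime-select steps above
# usage: gpu-switch.sh off   -> after "prime-select intel" and a re-login
#        gpu-switch.sh on    -> power the card up and select the nvidia profile
case "$1" in
    off)
        modprobe -r nvidia_uvm nvidia
        tee /proc/acpi/bbswitch <<< OFF
        ;;
    on)
        tee /proc/acpi/bbswitch <<< ON
        modprobe nvidia nvidia_uvm
        prime-select nvidia
        ;;
    *)
        echo "usage: $0 on|off" >&2
        exit 1
        ;;
esac

Afterwards, cat /proc/acpi/bbswitch should report OFF or ON accordingly, as above.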

Power consumption was measured with an external power meter, with the PC almost idle.

Glad you got it working! However, after suspend the module doesn’t function that well… NVIDIA should really take some ownership of that module, or do something themselves in their kernel module.

It would be possible to do something automatic like Ubuntu does with their gpu-manager; however, I am just a bit concerned about the stability of the system. Switching between the intel and nvidia GPUs is more stable on openSUSE than on Ubuntu, maybe due to there being less magic involved.

BTW, to quote an NVIDIA engineer [1] on doing proprietary modules:

“While this may not please everyone, it does allow us to provide the most consistent GPU experience to our customers, regardless of platform or operating system.”

and

“Supporting Linux is important to Nvidia, and we understand that there are people who are as passionate about Linux as an open source platform as we are passionate about delivering an awesome GPU experience,”

He must be joking! It is really a nightmare getting an Optimus machine to run stably.

Bo

[1] Nvidia Responds to F-Bomb From Linus Torvalds | WIRED

After switching several times from Intel graphics to Nvidia graphics and back with prime-select nvidia | intel, I have to say that this is a sensible way to get the most out of both graphics cards.

The manual input needed is inconvenient, as is logging out and in again to restart the X server, but overall it is a stable solution and the graphical performance of the Nvidia card is much better than with bumblebee!
Helicopter flight in X-Plane is a very smooth experience now.

Thanks for introducing prime to openSUSE!

Nice to know! Would you like to share some numbers or benchmarks?