I would like to install the CUDA toolkit with zypper such that I always have the latest version compatible with the installed NVIDIA driver. I also want to resolve the issue “GTK4 crash with install libvulkan-intel - #13 by kryzet” without using the workaround suggested there, because on my laptop it causes increased fan noise. How can I do that?
@kryzet Hi, there are numerous ways… Use the Nvidia CUDA Leap 15 repo, or use the CUDA and driver run files (this is what I use).
For the GTK issue, use the Nvidia GPU for Vulkan by setting the MESA_VK_DEVICE_SELECT environment variable for your GPU and remove the libvulkan-intel package.
Would this give me a version that’s always in sync with the NVIDIA driver installed?
Is this only an issue with GTK?
How do I do that? I’m not particularly familiar with Vulkan. I’m also curious whether I can roll back Vulkan or Mesa such that once the bug is fixed, whichever of them was pinned is automatically unpinned and updated as usual.
Would this give me a version that’s always in sync with the NVIDIA driver installed?
AFAIK it adds the cuda repository;
https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=OpenSUSE&target_version=15&target_type=rpm_local
It then stays in sync with that repository, so whatever driver it ships is the one in use. Likewise, you would need to remove the openSUSE Nvidia repo and lock its packages and the open driver so they don’t get installed.
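In practice the steps above might look like this (a sketch only; the repo URL follows NVIDIA’s network-install instructions rather than the local rpm linked above, and the repo alias and lock patterns are illustrative — check zypper lr and zypper se nvidia for the actual names on your system):

```shell
# Add NVIDIA's own CUDA repository (network variant) and refresh
zypper addrepo https://developer.download.nvidia.com/compute/cuda/repos/opensuse15/x86_64/cuda-opensuse15.repo
zypper refresh

# Install the toolkit; the cuda meta package pulls in a matching driver
zypper install cuda

# Remove the openSUSE NVIDIA repo (alias is illustrative; see `zypper lr`)
zypper removerepo NVIDIA

# Lock the distribution driver packages so they are never reinstalled
zypper addlock 'nvidia-driver-G06*' 'nvidia-open-driver-G06*'
```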
Is this only an issue with GTK?
AFAIK, yes…
How do I do that? I’m not particularly familiar with Vulkan. I’m also curious whether I can roll back Vulkan or Mesa such that once the bug is fixed, whichever of them was pinned is automatically unpinned and updated as usual.
I wouldn’t… Can you show the output from inxi -GSaz please, so I can tell you what to use in the /etc/environment file.
What about nvidia-open-driver-G06-signed-cuda-kmp-default? I also saw nvidia-drivers-insync-latest and didn’t know what it does differently from the open driver package.
> inxi -G
Graphics:
Device-1: Intel Alder Lake-S [UHD Graphics] driver: i915 v: kernel
Device-2: NVIDIA GA107BM / GN20-P0-R-K2 [GeForce RTX 3050 6GB Laptop GPU]
driver: nouveau v: kernel
Device-3: IMC Networks Integrated Camera driver: uvcvideo type: USB
Display: wayland server: X.org v: 1.21.1.20 with: Xwayland v: 24.1.8
compositor: gnome-shell v: 49.1 driver: gpu: i915
resolution: no compositor data resolution: 1920x1080
API: EGL v: 1.5 drivers: iris,kms_swrast,swrast
platforms: gbm,wayland,x11,surfaceless,device
API: OpenGL v: 4.6 compat-v: 4.5 vendor: intel mesa v: 25.3.0
renderer: Mesa Intel UHD Graphics (ADL-S GT0.5)
API: Vulkan v: 1.4.328 drivers: intel,llvmpipe surfaces: N/A
Info: Tools: api: eglinfo, glxinfo, vulkaninfo x11: xprop
@kryzet I’m not sure about the cuda versioned one… I think nvidia-drivers-insync-latest is just a meta pkg, there is no data in it…
It’s inxi -GSaz info; -G doesn’t provide PCI IDs…
> inxi -GSaz
System:
Kernel: 6.17.8-1-default arch: x86_64 bits: 64 compiler: gcc v: 15.2.1
clocksource: tsc avail: acpi_pm
parameters: initrd=\opensuse-tumbleweed\6.17.8-1-default\initrd-fedc2ef701a0c73144a10e20910a3d4de2e7d08e
root=UUID=4ed6fe72-5f5a-45ea-ab61-58bae3a94612 splash
resume=/dev/system/swap quiet security=selinux selinux=1 enforcing=1
mitigations=auto rootflags=subvol=@/.snapshots/1/snapshot
Desktop: GNOME v: 49.1 tk: GTK v: 3.24.51 wm: gnome-shell
tools: gsd-screensaver-proxy avail: xscreensaver dm: GDM v: 49.1
Distro: openSUSE Tumbleweed 20251121
Graphics:
Device-1: Intel Alder Lake-S [UHD Graphics] vendor: Lenovo driver: i915
v: kernel alternate: xe arch: Xe process: Intel 10nm built: 2020-21 ports:
active: eDP-1 empty: DP-1, DP-2, HDMI-A-1, HDMI-A-2, HDMI-A-3, HDMI-A-4
bus-ID: 00:02.0 chip-ID: 8086:468b class-ID: 0300
Device-2: NVIDIA GA107BM / GN20-P0-R-K2 [GeForce RTX 3050 6GB Laptop GPU]
vendor: Lenovo driver: nouveau v: kernel non-free: 550-580.xx+
status: current (as of 2025-08; EOL~2026-12-xx) arch: Ampere code: GAxxx
process: TSMC n7 (7nm) built: 2020-2023 pcie: gen: 4 speed: 16 GT/s
lanes: 8 link-max: lanes: 16 ports: active: none
empty: DP-3,HDMI-A-5,eDP-2 bus-ID: 01:00.0 chip-ID: 10de:25ec
class-ID: 0300
Device-3: IMC Networks Integrated Camera driver: uvcvideo type: USB
rev: 2.0 speed: 480 Mb/s lanes: 1 mode: 2.0 bus-ID: 1-6:2 chip-ID: 13d3:54b6
class-ID: 0e02 serial: <filter>
Display: wayland server: X.org v: 1.21.1.20 with: Xwayland v: 24.1.8
compositor: gnome-shell driver: gpu: i915 display-ID: 0
Monitor-1: eDP-1 model: AU Optronics 0x7ead serial: <filter> built: 2023
res: 1920x1080 dpi: 142 gamma: 1.2 size: 344x193mm (13.54x7.6")
diag: 394mm (15.5") ratio: 16:9 modes: 1920x1080
API: EGL v: 1.5 hw: drv: intel iris platforms: device: 1 drv: iris
device: 2 drv: swrast gbm: drv: kms_swrast surfaceless: drv: iris wayland:
drv: iris x11: drv: iris inactive: device-0
API: OpenGL v: 4.6 compat-v: 4.5 vendor: intel mesa v: 25.3.0 glx-v: 1.4
direct-render: yes renderer: Mesa Intel UHD Graphics (ADL-S GT0.5)
device-ID: 8086:468b memory: 14.97 GiB unified: yes display-ID: :0.0
API: Vulkan v: 1.4.328 layers: 1 device: 0 type: integrated-gpu name: Intel
UHD Graphics (ADL-S GT0.5) driver: mesa intel v: 25.3.0
device-ID: 8086:468b surfaces: N/A device: 1 type: cpu name: llvmpipe
(LLVM 21.1.5 256 bits) driver: mesa llvmpipe v: 25.3.0 (LLVM 21.1.5)
device-ID: 10005:0000 surfaces: N/A
Info: Tools: api: eglinfo, glxinfo, vulkaninfo x11: xprop
I’ve found Stefan’s Installation of NVIDIA drivers on openSUSE and SLE to cover a lot of the options and practicalities, such as what to install and what to lock. Stefan also covers how to clear out whatever is already installed before trying anything new.
I can’t find an answer to my question there.
@kryzet You need to add the following to the /etc/environment file;
## GeForce RTX 3050 as default Vulkan Device
MESA_VK_DEVICE_SELECT="10de:25ec"
## Intel Vulkan Mesa/GTK Bug workaround
## should be resolved in Mesa 25.3.1 release
## boo#1254121
GSK_RENDERER=gl
Edit: added bug reference.
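As a quick check that the variable takes effect, one can compare vulkaninfo with and without it set (this assumes the Vulkan tools package is installed; the PCI ID is the RTX 3050’s chip-ID from the inxi output above):

```shell
# Devices in default order
vulkaninfo --summary | grep deviceName

# With the NVIDIA GPU selected (vendorID:deviceID)
MESA_VK_DEVICE_SELECT=10de:25ec vulkaninfo --summary | grep deviceName
```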
@kryzet if you want full control of versions, then use the cuda run file (you can extract it and use the driver from there) and rebuild the driver after kernel updates; it only takes a few minutes and a reboot or two. It’s what I do here…
If I understand correctly, the Tumbleweed proprietary repository’s drivers are in sync with Tumbleweed’s kernel, while the CUDA repository’s drivers are synchronized with the Leap 15 kernel. How exactly should I install the driver?
Not long ago, a user adapted the wiki to reflect the newest developments in how to install CUDA. Have you already tested this?
https://en.opensuse.org/SDB:NVIDIA_drivers#CUDA
That doesn’t provide the full toolkit.
What is the “full toolkit”? Do you mean nvcc?
For me, I ensure the following packages are installed: kernel-default-devel libglvnd-devel make gcc gcc-c++
Download the cuda run file and, as my user, chmod 0755 cuda...run; then switch to a tty, log in as root, switch to multi-user and run the file, following the install options.
For the likes of nccl and cudnn, that’s a separate download and install…
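For reference, a non-interactive version of the run-file steps above could look like this (the file name is a placeholder, and --silent/--toolkit are documented installer flags; verify against the installer’s --help output):

```shell
# Make the installer executable (as your normal user)
chmod 0755 cuda_*.run

# From a tty, as root: leave the graphical session so the GPU is free
systemctl isolate multi-user.target

# Install only the toolkit non-interactively; drop --toolkit to be
# offered the bundled driver as well
sh cuda_*.run --silent --toolkit
```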
For the driver (after the cuda install, a driver update, or a new kernel), again just download the run file, chmod it and use;
./NVIDIA-Linux-x86_64-$RUN_VERSION.run \
--ui=none \
--no-questions \
--accept-license \
--disable-nouveau \
--no-install-libglvnd \
--no-cc-version-check
Hmm… It seems like I already have all of these packages installed somehow.
Should I install cuda-cloud-opengpu as well or should I only use the runfile?
@kryzet stick with the repo files…
@kryzet OK for example;
Leap 16.0 using the repositories (I use nvidia container toolkit)
zypper se -si nvidia
S | Name | Type | Version | Arch | Repository
---+-------------------------------+---------+----------------------------------------+--------+-------------------------
i | kernel-firmware-nvidia | package | 20250516-160000.2.2 | noarch | repo-oss (16.0)
i | libnvidia-container-tools | package | 1.18.0-1 | x86_64 | nvidia-container-toolkit
i | libnvidia-container1 | package | 1.18.0-1 | x86_64 | nvidia-container-toolkit
i | libnvidia-egl-gbm1 | package | 1.1.2-160000.3.2 | x86_64 | repo-oss (16.0)
i | libnvidia-egl-wayland1 | package | 1.1.20-lp160.51.1 | x86_64 | repo-non-free (16.0)
i | libnvidia-egl-x111 | package | 1.0.3-lp160.21.1 | x86_64 | repo-non-free (16.0)
i | libnvidia-gpucomp | package | 580.105.08-lp160.44.1 | x86_64 | repo-non-free (16.0)
i | nvidia-common-G06 | package | 580.105.08-lp160.44.1 | x86_64 | repo-non-free (16.0)
i | nvidia-compute-G06 | package | 580.105.08-lp160.44.1 | x86_64 | repo-non-free (16.0)
i | nvidia-compute-utils-G06 | package | 580.105.08-lp160.44.1 | x86_64 | repo-non-free (16.0)
i+ | nvidia-container-toolkit | package | 1.18.0-1 | x86_64 | nvidia-container-toolkit
i | nvidia-container-toolkit-base | package | 1.18.0-1 | x86_64 | nvidia-container-toolkit
i+ | nvidia-driver-G06-kmp-default | package | 580.105.08_k6.12.0_160000.5-lp160.44.1 | x86_64 | repo-non-free (16.0)
i | nvidia-driver-G06-kmp-meta | package | 580.105.08-lp160.24.1 | x86_64 | repo-non-free (16.0)
i | nvidia-gl-G06 | package | 580.105.08-lp160.44.1 | x86_64 | repo-non-free (16.0)
i | nvidia-modprobe | package | 580.105.08-lp160.20.1 | x86_64 | repo-non-free (16.0)
i | nvidia-persistenced | package | 580.105.08-lp160.2.1 | x86_64 | repo-non-free (16.0)
i | nvidia-userspace-meta-G06 | package | 580.105.08-lp160.24.1 | x86_64 | repo-non-free (16.0)
i | nvidia-video-G06 | package | 580.105.08-lp160.44.1 | x86_64 | repo-non-free (16.0)
i | openSUSE-repos-Leap-NVIDIA | package | 20250714.a450212-lp160.3.1 | x86_64 | repo-oss (16.0)
I have a check script…
./nvidia_check
libcudadebugger.so.1 -> libcudadebugger.so.580.105.08
libcuda.so.1 -> libcuda.so.580.105.08
libcuda is installed
ERROR: libnccl is NOT installed
ERROR: libcudnn is NOT installed
./nvidia_check: line 14: nvcc: command not found
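The author’s nvidia_check script wasn’t posted; a minimal sketch producing similar output could query the dynamic linker cache (the check_lib helper and the library list are assumptions based on the output above):

```shell
#!/bin/sh
# Report whether a shared library is known to the dynamic linker.
check_lib() {
    if ldconfig -p 2>/dev/null | grep -q "$1"; then
        echo "$1 is installed"
    else
        echo "ERROR: $1 is NOT installed"
    fi
}

check_lib libcuda
check_lib libnccl
check_lib libcudnn

# nvcc only appears once the toolkit's bin directory is on PATH
command -v nvcc >/dev/null 2>&1 && nvcc --version || echo "nvcc: command not found"
```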
Compare with the run files (and extras) on Tumbleweed;
zypper se -si nvidia
S | Name | Type | Version | Arch | Repository
---+-------------------------------+---------+--------------+--------+-------------------------
i | kernel-firmware-nvidia | package | 20251018-1.1 | noarch | Main Repository (OSS)
i | libnvidia-container-tools | package | 1.18.0-1 | x86_64 | nvidia-container-toolkit
i | libnvidia-container1 | package | 1.18.0-1 | x86_64 | nvidia-container-toolkit
i+ | libnvidia-egl-gbm1 | package | 1.1.2-2.4 | x86_64 | Main Repository (OSS)
i+ | libnvidia-egl-wayland1 | package | 1.1.20-1.1 | x86_64 | Main Repository (OSS)
i+ | nvidia-container-toolkit | package | 1.18.0-1 | x86_64 | nvidia-container-toolkit
i | nvidia-container-toolkit-base | package | 1.18.0-1 | x86_64 | nvidia-container-toolkit
And my check script;
./nvidia_check
libcudart.so.12 -> libcudart.so.12.8.57
libcudart.so.12 -> libcudart.so.12.9.79
libcudart.so.13 -> libcudart.so.13.0.88
libcuda.so.1 -> libcuda.so.580.105.08
libcuda.so.1 -> libcuda.so.580.105.08
libcudadebugger.so.1 -> libcudadebugger.so.580.105.08
libcuda is installed
libnccl.so.2 -> libnccl.so.2.27.6
libnccl is installed
libcudnn_engines_precompiled.so.9 -> libcudnn_engines_precompiled.so.9.11.0
libcudnn_graph.so.9 -> libcudnn_graph.so.9.11.0
libcudnn_ops.so.9 -> libcudnn_ops.so.9.11.0
libcudnn.so.9 -> libcudnn.so.9.11.0
libcudnn_heuristic.so.9 -> libcudnn_heuristic.so.9.11.0
libcudnn_adv.so.9 -> libcudnn_adv.so.9.11.0
libcudnn_cnn.so.9 -> libcudnn_cnn.so.9.11.0
libcudnn_engines_runtime_compiled.so.9 -> libcudnn_engines_runtime_compiled.so.9.11.0
libcudnn is installed
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Aug_20_01:58:59_PM_PDT_2025
Cuda compilation tools, release 13.0, V13.0.88
Build cuda_13.0.r13.0/compiler.36424714_0
No. The proprietary drivers are compiled on the end-user system when you install the package and when the kernel-default-devel (or corresponding flavor) package is updated. Technically it is similar to using DKMS.
If you mean the drivers in the NVIDIA openSUSE CUDA repository – they are exactly the same and are compiled on the end-user system. So they are not inherently coupled with any specific kernel.
I have successfully installed nvidia-open-driver-G06-signed-cuda-kmp-default and cuda-cloud-opengpu matching the version of the driver, and wrote an /etc/environment with the content you posted above. The only exception is that the last line is prefixed with the pound symbol (#).
I see…
So I can use the openSUSE Leap repository to get the latest version of the toolkit, and it’ll continue to work correctly when updated as long as everything comes from that one repository?