@mchnz The only thing I would check is the Nvidia parameters with cat /proc/driver/nvidia/params | sort
Thanks. I presume this would be a simple check to do instead of running applications such as digikam? Would I be looking for anything in particular, or just that the cat returns some output?
Here’s what I’m seeing:
cat /proc/driver/nvidia/params | sort
CreateImexChannel0: 0
DeviceFileGID: 0
DeviceFileMode: 438
DeviceFileUID: 0
DmaRemapPeerMmio: 1
DynamicPowerManagement: 3
DynamicPowerManagementVideoMemoryThreshold: 200
EnableDbgBreakpoint: 0
EnableGpuFirmware: 18
EnableGpuFirmwareLogs: 2
EnableMSI: 1
EnablePCIeGen3: 0
EnablePCIERelaxedOrderingMode: 0
EnableResizableBar: 0
EnableS0ixPowerManagement: 1
EnableStreamMemOPs: 0
EnableUserNUMAManagement: 1
ExcludedGpus: ""
GpuBlacklist: ""
GrdmaPciTopoCheckOverride: 0
IgnoreMMIOCheck: 0
@mchnz Think you missed some lines?
I have:
cat /proc/driver/nvidia/params | sort
CreateImexChannel0: 0
DeviceFileGID: 0
DeviceFileMode: 438
DeviceFileUID: 0
DmaRemapPeerMmio: 1
DynamicPowerManagement: 2
DynamicPowerManagementVideoMemoryThreshold: 200
EnableDbgBreakpoint: 0
EnableGpuFirmware: 18
EnableGpuFirmwareLogs: 2
EnableMSI: 1
EnablePCIeGen3: 0
EnablePCIERelaxedOrderingMode: 0
EnableResizableBar: 0
EnableS0ixPowerManagement: 0
EnableStreamMemOPs: 1
EnableUserNUMAManagement: 1
ExcludedGpus: ""
GpuBlacklist: ""
GrdmaPciTopoCheckOverride: 0
IgnoreMMIOCheck: 0
ImexChannelCount: 2048
InitializeSystemMemoryAllocations: 1
KMallocHeapMaxSize: 0
MemoryPoolSize: 0
ModifyDeviceFiles: 1
NvLinkDisable: 0
OpenRmEnableUnsupportedGpus: 1
PreserveVideoMemoryAllocations: 1
RegisterPCIDriver: 1
RegistryDwords: ""
RegistryDwordsPerDevice: ""
ResmanDebugLevel: 4294967295
RmLogonRC: 1
RmMsg: ""
RmNvlinkBandwidthLinkCount: 0
RmProfilingAdminOnly: 1
S0ixPowerManagementVideoMemoryThreshold: 256
TemporaryFilePath: "/var/tmp"
UsePageAttributeTable: 1
VMallocHeapMaxSize: 0
My /etc/modprobe.d/50-nvidia-tweaks.conf contains:
blacklist nouveau
softdep nvidia post: nvidia-drm nvidia-uvm
options nvidia-drm modeset=1
##Power Management
## Disable runtime D3 power management features
##options nvidia NVreg_DynamicPowerManagement=0x00
## Allow the GPU to go into its lowest power state when no applications are running
options nvidia NVreg_DynamicPowerManagement=0x02
## For suspending, make sure not using tmpfs!
options nvidia NVreg_PreserveVideoMemoryAllocations=1
options nvidia NVreg_TemporaryFilePath=/var/tmp
## Enable the PAT feature
options nvidia NVreg_UsePageAttributeTable=1
## Support for CUDA Stream Memory Operations in user-mode applications.
options nvidia NVreg_EnableStreamMemOPs=1
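One thing to note: if these options are changed, they may need to be rebuilt into the initrd before /proc/driver/nvidia/params reflects them on the next boot. A minimal sketch, assuming dracut builds your initrd:
sudo dracut -f                         # rebuild the initrd so the new module options are included
sudo reboot
cat /proc/driver/nvidia/params | sort  # re-check the values afterwards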
I suggest you review the relevant README here https://download.nvidia.com/XFree86/Linux-x86_64/570.172.08/README/
FWIW I’m running CUDA 12.9.1 with the 575.64.05 open driver; I also use cudnn and nccl, which are further manual steps.
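For what it’s worth, a quick way to confirm which driver and toolkit ended up installed (assuming the CUDA toolkit is on your PATH):
nvidia-smi                       # loaded driver version and the highest CUDA version it supports
nvcc --version                   # installed CUDA toolkit version
modinfo nvidia | grep ^version   # driver kernel module version on disk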
You’re right, I somehow edited out some lines.
Using the steps I took, nvidia-common-G06-575.57.08 was installed, which automatically installed /usr/lib/modprobe.d/50-nvidia.conf, which contains:
# Nouveau must be disabled to load the nvidia kernel module:
blacklist nouveau
# Add soft dependencies for extra modules as adding the module loading to
# /usr/lib/modules-load.d/*.conf for systemd consumption, makes the
# configuration file to be added to the initrd but not the module, throwing an
# error on plymouth about not being able to find the module.
# Ref: /usr/lib/dracut/modules.d/00systemd/module-setup.sh
# Even adding the modules is not the correct thing, as we don't want it to be
# included in the initrd, so use this configuration file to specify the
# dependency.
softdep nvidia post: nvidia-uvm nvidia-drm
# Enable complete power management. From:
# file:///usr/share/doc/packages/nvidia-common-G06/html/powermanagement.html
options nvidia NVreg_TemporaryFilePath=/var/tmp
options nvidia NVreg_EnableS0ixPowerManagement=1
options nvidia NVreg_PreserveVideoMemoryAllocations=1
# Nvidia modesetting support. Set to 0 or comment to disable kernel modesetting
# and framebuffer console support. This must be disabled in case of SLI Mosaic on X.
options nvidia-drm modeset=1
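As a quick check that the modeset option actually took effect once the nvidia_drm module is loaded:
cat /sys/module/nvidia_drm/parameters/modeset   # should print Y when kernel modesetting is enabled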
Using the above process on Tumbleweed 20250730 resulted in installing nvidia-open-driver-G06-signed-cuda-kmp-default-575.57.08, with the CUDA rpms seeming to vary between 12.9 and 12.8.
Thanks for the links - if I have issues I’ll give them a look. Most importantly, cat pictures are being generated, so I’m just happy to have arrived at that.
This is likely a naive question, but does Nvidia maintain their openSUSE repos such that zypper will pick up new updates to packages?
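I suppose I can at least check the refresh setting and what each repo currently offers (a sketch, assuming the repo alias is cuda):
zypper lr -d        # the Refresh column shows whether a repo auto-refreshes
zypper ref cuda     # refresh just that repo
zypper lu -r cuda   # list updates currently available from it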
BTW…I’m not really interested in fluffy cat pics; but if you know how to create cute, intelligent chocolate labrador retriever pics, I’m in.
Cheers
I guess this is somewhat rare, but the cuda repo has just done a major jump from the G06-575 to the G06-580 driver.
This proved my understanding was incomplete, and my configuration of my system was unprepared. I did a zypper dup and the fun began. Zypper, not finding nvidia-open-driver-G06-signed-cuda-kmp-default-580.*, decided to apply the cuda repo’s nvidia-open-driver-G06-580.65.06-1.noarch. This resulted in both the 575 and 580 drivers being installed, the 575 open-signed driver and the 580 driver from the cuda repo:
rpm -q -a | grep nvidia-open-driver-G06
nvidia-open-driver-G06-580.65.06-1.noarch
nvidia-open-driver-G06-signed-cuda-kmp-default-575.57.08_k6.15.8_1-1.9.x86_64
This doesn’t seem to have caused any issues. After a reboot, the 580 driver is the one that is now active. However, it’s not what I had intended to use.
If I want to stick with openSUSE’s open-signed cuda driver, I need to give some thought to how best to achieve that. Perhaps I should have set the cuda repo not to auto-refresh, or maybe use locks on the packages from the cuda repo to prevent these unanticipated consequences. Or maybe I should use the openSUSE non-cuda driver, plus the supporting packages from the nvidia repo, and just add what’s needed for CUDA from the cuda repo.
I suspect what I should do is, once a happy working/stable situation is achieved, apply locks to the driver and kernel (and related packages such as virtualbox), and every now and then review what has changed and decide whether the time is right to unlock everything and allow a dup to roll me forward to the latest. That is what I was doing when using the nvidia repo (before I got interested in having CUDA on board).
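In practice that would come down to something like the following (a sketch, using the package names above and assuming the repo alias is cuda):
sudo zypper mr -R cuda                       # stop the cuda repo auto-refreshing
sudo zypper addlock 'kernel-default*' nvidia-open-driver-G06-signed-cuda-kmp-default
zypper locks                                 # review current locks
sudo zypper removelock 'kernel-default*'     # later, when ready to roll forward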
I think I’m going to restore from backup and ponder all of the above.
@mchnz and cuda has moved to 13.0.0, hence the driver update…
Have a look at the run files, as then you can control things as required?
So you’re recommending using the run files from Nvidia rather than anything openSUSE-specific? It might be simpler than having one foot in the openSUSE repos and the other in the Nvidia repos.
I’ve been looking further into my old habit of locking down the kernel and drivers. I’ve written a script that reports back to me what is installed and what is available to be installed:
Running kernel version 6.15.8
Installed nvidia-open-driver-G06-signed-cuda-kmp-default driver: 575.57.08 built for kernel 6.15.8
Installed nvidia support packages for version: 575.57.08
Highest available kernel-default package: 6.15.8 (from repo-oss)
Highest available nvidia-open-driver-G06-signed-cuda-kmp-default driver: 575.57.08 (from repo-oss) (built for 6.15.8)
Highest available nvidia 575 support packages: 575.57.08 (from cuda)
Highest available nvidia support packages: 580.65.06 (from cuda)
Script attached, should anyone be interested:
#!/bin/bash

# Print the highest version of a package available from a given repo.
# Arguments: repo alias, package name, optional major version to restrict to,
# and an optional flag to extract the kernel version embedded in a kmp version string.
function zypper_highest_available() {
    repo="$1"
    package_name="$2"
    required_major_version="$3"
    extract_kernel="$4"
    zypper --no-refresh se --repo "$repo" -s --match-exact $package_name | \
        gawk -v mp="$required_major_version" -v ek="$extract_kernel" -F'|' '
            in_body { split($4, parts, "[ _-]"); if (mp == "" || parts[2] ~ "^" mp "[.]") { val = parts[ek ? 3 : 2]; sub("k", "", val); print val }}
            $1 ~ /---/ { in_body=1 }' | \
        sort -V | tail -1
}
kernel_repo=repo-oss
driver_repo=repo-oss
support_repo=cuda
kernel_package=kernel-default
driver_package=nvidia-open-driver-G06-signed-cuda-kmp-default
support_package=nvidia-video-G06
driver_version=$(rpm -qa --queryformat '%{VERSION}\n' $driver_package | awk -F_ '{print $1}')
driver_kernel_version=$(rpm -qa --queryformat '%{VERSION}\n' $driver_package | awk -F_ '{sub("k","", $2); print $2}')
support_package_version=$(rpm -qa --queryformat '%{VERSION}\n' $support_package)
running_kernel=$(uname -r | awk -F- '{print $1}')
highest_available_kernel=$(zypper_highest_available $kernel_repo $kernel_package)
highest_available_driver=$(zypper_highest_available $driver_repo $driver_package)
highest_available_driver_built_for=$(zypper_highest_available $driver_repo $driver_package "" "extract_kernel")
highest_support_version=$(zypper_highest_available $support_repo $support_package)
driver_major_version=$(echo $driver_version | awk -F. '{ print $1 }')
highest_support_version_for_current_major=$(zypper_highest_available $support_repo $support_package $driver_major_version)
echo "Running kernel version $running_kernel"
echo "Installed $driver_package driver: $driver_version built for kernel $driver_kernel_version"
echo "Installed nvidia support packages for version: $support_package_version"
echo "Highest available $kernel_package package: $highest_available_kernel (from $kernel_repo)"
echo "Highest available $driver_package driver: $highest_available_driver (from $driver_repo) (built for $highest_available_driver_built_for)"
echo "Highest available nvidia $driver_major_version support packages: $highest_support_version_for_current_major (from $support_repo)"
echo "Highest available nvidia support packages: $highest_support_version (from $support_repo)"
@mchnz With the run file, since there is no repository, one can control the products and versions installed; there is no need to lock packages, etc. You just have to manually rebuild the driver on a kernel update, which is simplified with a script. I also use nvidia-persistence, a config file, and tweak some boot options.
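In rough outline, the rebuild step after a kernel update might look like this (a sketch; the file name is whatever driver version was downloaded, and installer flags can vary between releases):
sudo sh ./NVIDIA-Linux-x86_64-580.65.06.run --silent   # re-run the installer so the module is rebuilt against the new kernel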
Yeah, mixing driver and supporting packages from across repos is a bit of a sticky wicket.
However, given some folk have made the effort to create nvidia-open-driver-G06-signed-cuda-kmp-default, I thought I’d make some effort to investigate whether I could work with it. But it is a bit nasty, especially in this kind of version-bump situation.
To avoid many locks, I modified the version of my previously posted script to generate an rpm that restricts dependencies; the generated spec is:
Name: nvidia-driver-dependencies
Version: 1.0
Release: 1
Summary: Dependency constraints package for kernel and nvidia
License: MIT
BuildArch: noarch
Requires: nvidia-open-driver-G06-signed-cuda-kmp-default >= 575.57.0, nvidia-open-driver-G06-signed-cuda-kmp-default <= 575.57.999
Requires: kernel-default >= 6.15.0, kernel-default <= 6.15.999
Requires: nvidia-video-G06 >= 575.57.0, nvidia-video-G06 <= 575.57.999
Requires: nvidia-compute-utils-G06 >= 575.57.0, nvidia-compute-utils-G06 <= 575.57.999
Requires: nvidia-settings >= 575.57.0, nvidia-settings <= 575.57.999
Conflicts: nvidia-driver-G06-kmp-default
Conflicts: nvidia-driver-G06
Conflicts: nvidia-open-driver-G06-kmp-default
%description
Dependency package constraints for kernel and nvidia
%prep
%build
%install
mkdir -p %{buildroot}%{_datadir}/nvidia-driver-dependencies
touch %{buildroot}%{_datadir}/nvidia-driver-dependencies/.placeholder
%files
%{_datadir}/nvidia-driver-dependencies/.placeholder
%changelog
If I lock this generated rpm, I can leave everything else unlocked and all repos active. Then I will only receive dup updates I feel comfortable with. I would later update this one rpm to allow things to step forward beyond that comfort range.
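Building, installing, and locking the constraints package is then straightforward (a sketch, assuming the spec above is saved as nvidia-driver-dependencies.spec; the output path depends on your rpmbuild %_topdir):
rpmbuild -bb nvidia-driver-dependencies.spec
sudo zypper in <topdir>/RPMS/noarch/nvidia-driver-dependencies-1.0-1.noarch.rpm
sudo zypper addlock nvidia-driver-dependencies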
I’m not sure if I will run with this, but I thought I should give it a try.
@mchnz Once the 580 driver arrives you can update cuda to 13.0.
nvidia-smi
Thu Aug 7 20:40:36 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.65.06 Driver Version: 580.65.06 CUDA Version: 13.0 |
+-----------------------------------------+------------------------+----------------------+
I wasn’t originally planning to move up the driver release branches, and definitely wasn’t thinking of the 580 driver, but I’ll give it a shot.
I have no real need for a signed driver, so I could switch to nvidia-open-driver-G06-kmp-default, but I’ll stay the course just to see how good/bad my current approach might be.
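For completeness, a quick way to confirm whether Secure Boot is actually enforcing module signatures on a given box (assuming mokutil is installed):
mokutil --sb-state   # reports whether SecureBoot is enabled or disabled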
Today (23 August 2025) I tried to install CUDA on my Tumbleweed system.
I have an MSI GeForce RTX 5070 Ti.
I installed G06, but Blender doesn’t find CUDA.
I’m a little confused by the myriad of instructions you’ve written here in this post.
I’d like a step-by-step guide, so I can also tell you, “I stopped at step xxxx, with this error message,” if something doesn’t work.
Did you read the already linked article from Stefan Dirsch?
I’ve now read the article, thanks a lot.
OK, when I try
zypper in nvidia-open-driver-G06-signed-kmp-meta
I hit the first problem:
no element provides ‘nvidia-open-driver-G06-signed-kmp = 570.172.08’ mandatory for nvidia
- do not install …
- install anyway …
Simply use the following command instead:
zypper in nvidia-open-driver-G06-signed-kmp-default
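Once that is installed and you have rebooted, a quick sanity check before retrying Blender might be (a sketch, assuming the usual G06 userspace packages are also installed):
lsmod | grep nvidia   # the nvidia kernel modules should be loaded
nvidia-smi            # should list the RTX 5070 Ti and report the driver/CUDA version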