I’m currently using the “new feature branch” 575.64 drivers on my 750 Ti … to even get the nvidia_uvm driver to load automatically I need this line in my /etc/modprobe.d/90-nvidia.conf file (see the first two lines):
# Modprobe won't load nvidia_uvm without this !?
softdep nvidia post: nvidia_uvm
# See "modinfo nvidia" for others
# Sleep/Hibernate
options nvidia \
NVreg_PreserveVideoMemoryAllocations=1 \
NVreg_EnableStreamMemOPs=1
# For Wayland
options nvidia_drm \
modeset=1
# Placeholder for nvidia_modeset
# See 'systool -vm nvidia_modeset'
# Placeholder - Unified memory
# For 575 drivers - not sure about 570
# use 'systool -vm nvidia_uvm' to see all settings
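After editing anything under /etc/modprobe.d you’ll likely want to rebuild the initrd so the options apply at boot … a minimal sketch, assuming a dracut-based initrd like openSUSE uses:

sudo dracut --force
# after a reboot, confirm the modules actually loaded:
lsmod | grep -E '^nvidia'
# and see what the softdep resolved to:
modprobe --show-depends nvidia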
For now I also have a 100-nvidia-uvm.conf file … so I can load and unload it at will when I change options … it currently doesn’t “depend” on any other module … I’ll add them to the 90-nvidia.conf file when I’m satisfied that UVM is working properly
dart@windeath:~> cat /etc/modprobe.d/100-nvidia-uvm.conf
# Unified memory
# For 575 drivers - not sure about 570
# use 'systool -vm nvidia_uvm' to see all settings
options nvidia_uvm \
uvm_page_table_location=vid \
uvm_force_prefetch_fault_support=1 \
uvm_fault_force_sysmem=1
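The load/unload cycle itself is just this (a sketch … modprobe -r will refuse if anything still holds the module, e.g. a running CUDA app):

sudo modprobe -r nvidia_uvm    # unload
sudo modprobe nvidia_uvm       # reload, picking up the options above
systool -vm nvidia_uvm         # confirm the values took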
But still no shared memory showing up
dart@windeath:~> glxinfo -B
name of display: :1
display: :1 screen: 0
direct rendering: Yes
Memory info (GL_NVX_gpu_memory_info):
Dedicated video memory: 2048 MB
Total available memory: 2048 MB
Currently available dedicated video memory: 1157 MB
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: NVIDIA GeForce GTX 750 Ti/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 575.64
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL version string: 4.6.0 NVIDIA 575.64
OpenGL shading language version string: 4.60 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 575.64
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
I’m sure I’ll just have to wait for the 580 drivers to have any chance of this working, but I’m curious: is anyone else seeing more total (system-backed) memory for their card, regardless of which generation it is?
Memory info (GL_NVX_gpu_memory_info):
Dedicated video memory: 2048 MB
Total available memory: 2048 MB
$ glxinfo -B
name of display: :0
display: :0 screen: 0
direct rendering: Yes
Memory info (GL_NVX_gpu_memory_info):
Dedicated video memory: 12282 MB
Total available memory: 12282 MB
Currently available dedicated video memory: 10819 MB
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: NVIDIA GeForce RTX 4070 SUPER/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 575.64
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL version string: 4.6.0 NVIDIA 575.64
OpenGL shading language version string: 4.60 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 575.64
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
Even though we still have no shared memory I did find that loading nvidia_uvm with the
uvm_page_table_location=vid
did seem to help a bit … maybe because I’m a member of the video group … the other choice is “sys” but “vid” seems to work a bit better … after loading nvidia_uvm I picked up a few points in my vkmark/Unigine Heaven scores
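To double-check which value is actually in effect you can read it straight from sysfs (assuming the driver exposes the parameter there, as most module params are):

cat /sys/module/nvidia_uvm/parameters/uvm_page_table_location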
The video and render groups? Yes, I added myself to those as well … @malcolmlewis kindly advised me to do that.
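For anyone following along, joining both groups is just (generic commands, nothing openSUSE-specific):

sudo usermod -aG video,render "$USER"   # then log out and back in
id -nG                                  # verify the new group membership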
Maybe this?
$ vulkaninfo | grep -A 10 "memoryHeaps"
memoryHeaps: count = 2
memoryHeaps[0]:
size = 12878610432 (0x2ffa00000) (11.99 GiB)
budget = 11344216064 (0x2a42b0000) (10.57 GiB)
usage = 0 (0x00000000) (0.00 B)
flags: count = 1
MEMORY_HEAP_DEVICE_LOCAL_BIT
memoryHeaps[1]:
size = 24920580096 (0x5cd61e000) (23.21 GiB)
budget = 24920580096 (0x5cd61e000) (23.21 GiB)
usage = 0 (0x00000000) (0.00 B)
flags:
None
memoryTypes: count = 5
memoryTypes[0]:
heapIndex = 1
propertyFlags = 0x0000:
None
--
memoryHeaps: count = 1
memoryHeaps[0]:
size = 33227440128 (0x7bc828000) (30.95 GiB)
budget = 33227440128 (0x7bc828000) (30.95 GiB)
usage = 6523162624 (0x184cf9000) (6.08 GiB)
flags: count = 1
MEMORY_HEAP_DEVICE_LOCAL_BIT
memoryTypes: count = 1
memoryTypes[0]:
heapIndex = 0
propertyFlags = 0x000f: count = 4
MEMORY_PROPERTY_DEVICE_LOCAL_BIT
Nice command … I like that one … seeing the same thing but on a much smaller scale
… I’ve been messing with kernel command-line parameters, IOMMU, nvidia_uvm etc … but just can’t find the thing that tells Linux (or nvidia) to link them together (yet) … and yeah @malcolmlewis is a veritable fountain of information
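In case it’s useful, the read-only checks I keep coming back to while experimenting:

cat /proc/cmdline                             # what the kernel actually booted with
sudo dmesg | grep -i -e iommu -e nvidia-uvm   # IOMMU and UVM driver messages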
… BTW I tried Hyprland … worked fine for me with nvidia
An unrelated FYI (sorta - it’s a parameter and I’m the OP) … I’ve seen posts saying that flickering/refresh issues might be helped with
nvidia_modeset.hdmi_deepcolor=N
on the kernel command line (“Y” is the default) … older search results will say “=0/1” but it’s actually a boolean now, so use “Y/N”
The modprobe file equivalent would be
options nvidia_modeset \
hdmi_deepcolor=N
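And to confirm what’s in effect after a reload or reboot (standard sysfs module-parameter path, assuming the driver exposes it read-only):

cat /sys/module/nvidia_modeset/parameters/hdmi_deepcolor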
@dart364 the nvidia_uvm module loads on demand, as in it’s loaded when an application needs it…
Perhaps a read here may help? https://developer.nvidia.com/blog/unified-memory-cuda-beginners/
I’d suggest git cloning the demo suite and run some tests.
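Something along these lines, assuming the cuda-samples repo is the one meant (sample layout and build system differ between releases):

git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples
# the unified-memory demos live under Samples/ (e.g. UnifiedMemoryStreams);
# newer releases build with CMake, older ones ship per-sample Makefiles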
@dart364 FWIW Intel ARC running the Xe driver at present will use system RAM as an aux device…
Device: Mesa Intel(R) Arc™ A380 Graphics (DG2) (0x56a5)
Memory info (GL_ATI_meminfo):
VBO free memory - total: 6088 MB, largest block: 6088 MB
VBO free aux. memory - total: 124970 MB, largest block: 124970 MB
Texture free memory - total: 6088 MB, largest block: 6088 MB
Texture free aux. memory - total: 124970 MB, largest block: 124970 MB
Renderbuffer free memory - total: 6088 MB, largest block: 6088 MB
Renderbuffer free aux. memory - total: 124970 MB, largest block: 124970 MB
Device: Mesa Intel(R) Arc™ A310 Graphics (DG2) (0x56a6)
Memory info (GL_ATI_meminfo):
VBO free memory - total: 4048 MB, largest block: 4048 MB
VBO free aux. memory - total: 29866 MB, largest block: 29866 MB
Texture free memory - total: 4048 MB, largest block: 4048 MB
Texture free aux. memory - total: 29866 MB, largest block: 29866 MB
Renderbuffer free memory - total: 4048 MB, largest block: 4048 MB
Renderbuffer free aux. memory - total: 29866 MB, largest block: 29866 MB
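If you want to check whether a card is on the Xe or i915 driver, lspci shows the bound kernel module:

lspci -k | grep -E -A 3 -i 'vga|3d|display'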
Thanks @malcolmlewis … I’ll check those links out and probably clone the git … always ready to compile something
… My first real game was Doom in the 90’s and I want my last game to be Doom as well … need that shared memory tho as well as a new computer … had to put a new radiator in my car but I did it myself and saved $1600 … still took a bit of a chunk out of my saved up computer money tho … 
Yup … Intel and AMD do it properly … it’s been getting a lot of attention on the Nvidia forums and has actually gotten some attention from Nvidia soooo 
@dart364 dang cars 
Well AFAIK there is a performance hit on using system RAM… Can’t beat a lot of VRAM.
Yup I know … I’ll get plenty on whatever card I buy … but it’s better than locking up the desktop and/or crashing when you run out … Nvidia is famous for skimping on VRAM on their cards
@dart364 Oh, and also make sure the motherboard has Resizable Bar support.
@malcolmlewis Will do … thanks for the heads up … just checked the board on my short list and yup … I’m good 
So I went on the Nvidia boards and pointed out their complete disregard of this problem, along with a previous thread of 75 posts that includes new cards (and Windows) … I asked for a “pre-alpha 580 driver” to test/address it (I only have a 750 Ti, I don’t have much to lose) … I figure since AMD and Intel handle this fine, if I can call enough attention to it maybe something will get done … wish me luck
You need to enable Resizable Bar in the BIOS. You may need a BIOS upgrade to get it working.
To use Resizable Bar with Nvidia you need RTX 3000 series or newer.
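Two quick ways to check whether it’s actually active (output varies by GPU and firmware; lspci needs root to show capabilities):

sudo lspci -vv | grep -i 'resizable'    # PCIe Resizable BAR capability, if exposed
nvidia-smi -q | grep -i -A 3 'bar1'     # BAR1 aperture as the driver sees it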