Resolution in headless desktop when accessing remotely via x11vnc piped over ssh

I am curious whether there is a better approach for obtaining a specific resolution on a remote headless workstation (running openSUSE LEAP-15.3) when it is displayed locally via x11vnc piped over ssh.

I currently have the desired resolution of 1280x1024, but the approach I used successfully is not what I would call ‘elegant’ or ‘simple’ (possibly my expectations are too high and I am limited by the specific method I employed to access this remote PC via vnc).

The remote workstation is a very old PC with no monitor attached to its nvidia GTX-260’s two DVI ports. It is connected to my home LAN via ethernet.

I access it by opening two terminal sessions on a PC in a different room of my condominium. In the first I send:

ssh -t -L 5900:localhost:5900 oldcpu@ip-address-remote-PC 'x11vnc -localhost -nolookup -nopw -ncache 10 -noxdamage -display :0'

and then, to display the remote PC locally, in the second terminal I send:

vncviewer -encodings "tight copyrect hextile" localhost:0

I am very happy with the above access method. What was difficult was getting the 1280x1024 resolution: for the longest time (a few hours) I could not get above 1024x768, until I finally installed a custom /etc/X11/xorg.conf file on the remote PC.
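The two steps above can be wrapped in one small script. This is only a sketch of my own (the script, its DRY_RUN guard, and the variable names are mine, not from the original commands; ip-address-remote-PC is the same placeholder as above):

```shell
#!/bin/sh
# Sketch: open the ssh tunnel running x11vnc, then attach a local viewer.
# With DRY_RUN=1 (the default) the commands are only printed for review.
HOST="${1:-oldcpu@ip-address-remote-PC}"
DRY_RUN="${DRY_RUN:-1}"

SSH_CMD="ssh -t -L 5900:localhost:5900 $HOST 'x11vnc -localhost -nolookup -nopw -ncache 10 -noxdamage -display :0'"
VIEW_CMD='vncviewer -encodings "tight copyrect hextile" localhost:0'

if [ "$DRY_RUN" = "1" ]; then
    # Just show what would run, so the commands can be inspected first.
    printf '%s\n' "$SSH_CMD" "$VIEW_CMD"
else
    eval "$SSH_CMD" &    # tunnel + x11vnc in the background
    sleep 2              # crude wait for x11vnc to start listening
    eval "$VIEW_CMD"
fi
```

Running it with DRY_RUN=0 would actually start the tunnel and viewer; the two-second sleep is a crude stand-in for properly waiting on port 5900.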

The /etc/X11/xorg.conf file that eventually worked for me was:

Section "Device"
   Identifier "VNC Device"
   Driver "dummy"
   VideoRam 256000
EndSection

Section "Monitor"
   Identifier "VNC Monitor"
   HorizSync 5.0 - 90
   VertRefresh 5.0 - 90
   Modeline "1280x1024_60.00"  108.88  1280 1360 1496 1712  1024 1025 1028 1060  -HSync +Vsync
EndSection

Section "Screen"
   Identifier "VNC Screen"
   Device "VNC Device"
   Monitor "VNC Monitor"
   SubSection "Display"
      Modes "1280x1024_60.00"
   EndSubSection
EndSection

Yes … I could have split the above into separate files under /etc/X11/xorg.conf.d … but it was easier to edit the one file while I was ‘hacking about’. I obtained the ‘Modeline’ used in the ‘Monitor’ section using ‘gtf’.
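For reference, the Modeline came from gtf; a guarded sketch (the fallback echo simply reproduces the line already placed in xorg.conf above, for machines where the gtf tool is not installed):

```shell
# Generate a GTF modeline for 1280x1024 @ 60 Hz; if the gtf tool is not
# installed, fall back to the modeline already used in xorg.conf.
if command -v gtf >/dev/null 2>&1; then
    gtf 1280 1024 60
else
    echo 'Modeline "1280x1024_60.00"  108.88  1280 1360 1496 1712  1024 1025 1028 1060  -HSync +Vsync'
fi
```

The printed line can be pasted directly into the Monitor section of xorg.conf.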

What is interesting to me is that if I use “nouveau” in the above instead of “dummy”, X fails to load (and I then need to ssh into the PC remotely and, from the command line, edit or remove the xorg.conf file). I also had to specify a VideoRam amount, or the dummy driver would not go to a higher resolution. So I simply picked 256000 (which is 256,000 KBytes) out of thin air (i.e. an arbitrary selection). By default, without that VideoRam entry, X was assigning only around 4 MBytes of video RAM (and refusing to provide higher resolutions).
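The ~4 MB default being too small is plausible from simple arithmetic (my own back-of-the-envelope check, not from the logs):

```shell
# Rough check: bytes needed for one 1280x1024 frame at 32 bits (4 bytes)
# per pixel, expressed in KiB.
W=1280; H=1024; BPP=4
FRAME_KB=$(( W * H * BPP / 1024 ))
echo "one frame: ${FRAME_KB} KiB"   # 5120 KiB (~5 MiB), already over a ~4 MB allocation
```

So a single 1280x1024 32-bit frame alone needs about 5 MiB, which is why the default allocation capped the dummy driver at lower modes; 256000 KB is far more than needed, but harmless.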

If instead I simplify the xorg.conf (or do not use an xorg.conf file at all) I can use nouveau, although the resolution is then only 1024x768. If I look in /var/log/Xorg.0.log with the nouveau driver specified (or no driver specified), I can see an entry noting that no monitors were detected, and hence a fallback to 1024x768 resolution.

What is also puzzling is that, according to inxi -F, the nouveau driver is running:

System:    Host: oldcorei7 Kernel: 5.3.18-59.10-default x86_64 bits: 64 Desktop: KDE Plasma 5.18.6 Distro: openSUSE Leap 15.3 
Machine:   Type: Desktop Mobo: ASUSTeK model: P6T DELUXE V2 v: Rev 1.xx serial: <superuser/root required> 
           BIOS: American Megatrends v: 1108 date: 09/21/2010 
CPU:       Topology: Quad Core model: Intel Core i7 920 bits: 64 type: MT MCP L2 cache: 8192 KiB 
           Speed: 1604 MHz min/max: 1600/2668 MHz Core speeds (MHz): 1: 1604 2: 1604 3: 1606 4: 1609 5: 1604 6: 1603 7: 1604 
           8: 1604 
Graphics:  Device-1: NVIDIA GT200 [GeForce GTX 260] driver: nouveau v: kernel 
           Display: x11 server: X.Org 1.20.3 driver: nouveau note: display driver n/a resolution: 1280x1024~60Hz 
           OpenGL: renderer: llvmpipe (LLVM 11.0.1 128 bits) v: 4.5 Mesa 20.2.4 

Interestingly, the Xorg.0.log file indicates “DUMMY” instead of “NOUVEAU” … i.e.:

    34.764] (II) DUMMY(0): Modeline "1280x1024_60.00"x60.0  108.88  1280 1360 1496 1712  1024 1025 1028 1060 -hsync +vsync (63.6 kHz)
    34.764] (**) DUMMY(0):  Default mode "1280x1024": 135.0 MHz, 80.0 kHz, 75.0 Hz
    34.764] (II) DUMMY(0): Modeline "1280x1024"x75.0  135.00  1280 1296 1440 1688  1024 1025 1028 1066 +hsync +vsync (80.0 kHz d)
    34.764] (**) DUMMY(0):  Default mode "1280x1024": 108.0 MHz, 64.0 kHz, 60.0 Hz
    34.764] (II) DUMMY(0): Modeline "1280x1024"x60.0  108.00  1280 1328 1440 1688  1024 1025 1028 1066 +hsync +vsync (64.0 kHz d)
.... etc ...
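One way to double-check which driver actually produced those lines is to filter the log for the driver tag; a sketch of my own, where the here-doc stands in for the real /var/log/Xorg.0.log (the sample lines are reconstructed from the excerpt above):

```shell
# Pull out the driver tag (DUMMY vs NOUVEAU) and mode name from
# Xorg.0.log-style "(II) ... Modeline" lines.
MODES=$(sed -n 's/.*(II) \([A-Z]*\)(0): Modeline "\([^"]*\)".*/\1 \2/p' <<'EOF'
[    34.764] (II) DUMMY(0): Modeline "1280x1024_60.00"x60.0  108.88  1280 1360 1496 1712  1024 1025 1028 1060 -hsync +vsync
[    34.764] (II) DUMMY(0): Modeline "1280x1024"x75.0  135.00  1280 1296 1440 1688  1024 1025 1028 1066 +hsync +vsync
EOF
)
echo "$MODES"
```

Against the live system one would run the same sed expression over /var/log/Xorg.0.log; the tag printed (DUMMY here) shows which DDX driver X actually loaded, regardless of what inxi reports for the kernel driver.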

Another observation (which is possibly harmless? < unsure >) is an error from xrandr about ‘size of gamma’:

oldcpu@oldcorei7:/var/log> xrandr
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 320 x 175, current 1280 x 1024, maximum 1280 x 1024
default connected primary 1280x1024+0+0 0mm x 0mm
   1280x1024     60.00*   75.00  
   1280x960      85.00    60.00 
.... deleted remainder

Obviously this is a subject matter I do not know much about.

So I don’t need forum support to get the resolution that I want: 1280x1024 over vnc. I have 1280x1024 now.

But I am curious: was there a better way? Was this an unnecessary kludge on my part? I suspect so < unsure >

I have used Xvfb and x0vncserver with various parameters to specify a framebuffer screen that I then access via VNC (or you can use Xvnc, which I think is equivalent to the two combined). Specifically I do this with a WSL (Windows Subsystem for Linux) install on my PC, and in that case there is no need to do any X11 configuration. I just found it very simple to configure.

Maybe I am just being a bit too simplistic but as I say it works for me.

So what is the end game for the remote system? Can’t you just use ssh -X to log in and use the GUI apps locally (or just the cli)?

Does the card have HDMI out? I have some HDMI-to-VGA converters that make a headless system think there is a monitor attached; I also use one here on my 4-port HDMI card to sort out the numbering of the three other screens.

I need the GUI for the processing apps that I want to run.

I’ve used it in the past to run GUI-based deep machine learning apps, analysing chess games (which can run for 8 hours or more analysing a single game, dependent on the depth setting). Yes … I could (and have) “ssh -X” in … and I was doing that a lot while coming up with the xorg.conf file I created.

I have also used it in the past to run ffmpeg on home videos to stabilize the video, which, dependent on the video length (and the number of videos if I am running a ‘batch’ job over a directory of videos), can take quite a while … and I am pondering using it for some video deep learning apps, where, while I have no GPU, I can just let this PC run indefinitely. In the case of ffmpeg I can simply ssh in (with no graphics needed, except when I want to check the stabilization, in which case I copy files back to my main PC to view locally) …

The monitor close to it accepts only HDMI and DisplayPort (and my wife uses that monitor most of the time with her laptop). And the old remote PC has only DVI graphics output … which is why I was considering a DVI-to-HDMI adapter … or a DVI-to-DisplayPort adapter for my wife’s ultrawide monitor … but as long as I can run it headless, such is not needed.

And I confess … I could not resist the opportunity to see what I could do with vnc and a remote display of the kde desktop … although, as you point out, there are alternatives (i.e. “ssh -X”).

Well perhaps consider a GPU upgrade in the box? Look at NVENC? or an AMD equivalent?

Looks interesting …

I will have to open up this old desktop PC and look at its ASUSTeK P6T Deluxe V2 motherboard’s bus, to see what sort of video card (with GPU/machine-learning support) it will function with. I recall looking at graphics cards for this old PC some time back, and noting that the motherboard is getting old and the selection is not great anymore.

… of course 1/2 the fun/battle is trying to make do with old hardware … One could just throw money at this and get completely new hardware via a new PC, but for me that would take away some of the fun. lol! lol!

6 RAM slots, triple channel RAM, 3 PCIeX16 slots, NVIDIA SLI or ATI CrossFireX, eSATA, PATA + 6 SATA; pretty nice. It doesn’t seem to me any PCIe GPU you want should be any problem, unless maybe some sort of incompatibility trying to use it with the 260.


I’m in ‘price ticker’ shock :open_mouth: at present, when I see how much these nvidia cards cost with a good GPU for machine learning. :open_mouth: The relatively high price may put me off.

Look at a Quadro T600 4GB; they are reasonably priced, all things considered.

Or a T400, which may be more suitable for your hardware; less than US$130 here on Amazon…

Thanks. That’s interesting. I see here in Thailand a price not too distant (~US$ 134 equivalent) may be possible for a T400.

This is a very old PC … I can’t recall when I bought it, but it may have been 12 years ago? The old Asus P6T motherboard in this PC only has 6GB RAM in it at present, and its slots are PCIe 2.0 (while the T400 uses PCIe 3.0). My understanding though is that PCIe 3.0 cards will work in a PCIe 2.0 motherboard.

I note the nvidia proprietary driver still works with the T400 … and hopefully running it ‘headless’ won’t prevent accessing the nvidia card’s GPU functions. In fact, running the nvidia driver may be necessary to access the GPU functions (for machine learning)? < unsure >

Also, since this PC is very old, the power supply has likely degraded with age, but I don’t believe the T400 demands a lot of power (the spec for the Leadtek nvidia T400 card states only 30 watts required).

The 6GB of RAM I have on the Asus P6T motherboard may be too little to do very much (I may need to explore upgrading this; it’s been years since I looked at what this motherboard is capable of).

The 2GB on the T400 GPU may also be too little to do much (despite it being Turing architecture, I think), but it might be an inexpensive way to explore what can be done with Machine Learning, in a quicker fashion than on a PC with no such GPU.


Yes, the T series is Turing, so all current :wink: It will work in the PCIe 2.0 slot, a little slower though, but it may well be an improvement, and it has the encoder/decoder enabled. The other thought is using a cloud offering for some real Nvidia power?

Check your ffmpeg -buildconf output to see if enable-nvdec/nvenc are there.

It’s a thought - I suspect < unsure > I may need to learn another language (python) or other apps to use such a cloud offering.

Thanks for that reminder.

I read this morning that one needs to check nvdec and nvenc to see if ffmpeg was built with them, but I totally forgot the ffmpeg command to check … I was about to go surfing to find the command - and you saved me the time.

ffmpeg -buildconf
ffmpeg version 3.4.9 Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 7 (SUSE Linux)
  configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --incdir=/usr/include/ffmpeg --extra-cflags='-fmessage-length=0 -grecord-gcc-switches -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -g' --optflags='-fmessage-length=0 -grecord-gcc-switches -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -g' --disable-htmlpages --enable-pic --disable-stripping --enable-shared --disable-static --enable-gpl --disable-openssl --enable-avresample --enable-libcdio --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libcelt --enable-libcdio --enable-libdc1394 --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libzimg --enable-libzvbi --enable-vaapi --enable-vdpau --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-libx264 --enable-libx265 --enable-libxvid
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100


The output on my main PC is above (a different PC running LEAP-15.2). I think the Packman-packaged version of ffmpeg (for LEAP-15.2) is not built with those options (I still need to check the LEAP-15.3 build; the PC in question runs LEAP-15.3). If ffmpeg is not built with those options enabled, then I would need to rebuild ffmpeg with them (or find a version already built with them). But there is no need to do that until after I get the hardware … and I am still contemplating, ensuring I know the full scope of what I could be getting into.
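Checking a -buildconf dump for the NVENC/NVDEC flags can be scripted; a sketch of my own, where CONF is a shortened excerpt of the configuration line pasted above (not the full string):

```shell
# Check an ffmpeg -buildconf dump for NVENC/NVDEC/VAAPI support flags.
# CONF is a shortened excerpt of the configuration line quoted above.
CONF='--enable-gpl --enable-vaapi --enable-vdpau --enable-libx264 --enable-libx265'
for flag in nvenc nvdec vaapi; do
    if printf '%s\n' "$CONF" | grep -q -- "--enable-$flag"; then
        echo "$flag: enabled"
    else
        echo "$flag: not in build flags"
    fi
done
```

Against a live install one would pipe the real output instead, e.g. `ffmpeg -buildconf | grep -- '--enable-nv'`; in the build quoted above neither nvenc nor nvdec appears, only vaapi/vdpau.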

I note on the Packman packaged ffmpeg that “ffmpeg -codecs” yields:

 DEV.LS h264                 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_v4l2m2m h264_vdpau h264_cuvid ) (encoders: libx264 libx264rgb **h264_nvenc** h264_v4l2m2m h264_vaapi nvenc nvenc_h264 )
 D.VIL. hap                  Vidvox Hap
 DEV.L. hevc                 H.265 / HEVC (High Efficiency Video Coding) (decoders: hevc hevc_v4l2m2m hevc_cuvid ) (encoders: libx265 **nvenc_hevc** **hevc_nvenc** hevc_v4l2m2m hevc_vaapi )

but I don’t think that means nvenc is enabled.

I’m on Tumbleweed, my build has it all enabled…

ffmpeg version 4.4.1 

Interesting … Is this a custom build you did for yourself?

Actually, I could update this old PC (which I access remotely) to Tumbleweed … although the downside is that if I do go for the T400 (or T600) nvidia graphics card, I would likely need to install the proprietary nvidia graphics driver, which, with (1) Tumbleweed’s constant kernel (and other) updates, and (2) the PC being remote and headless, would add an additional layer of complexity on top of the Machine Learning.

I’m still thinking about it … (it’s fun though to consider such)

Yes, along with all the media stuff I need :wink:

Hi, nah, I run the latest (495.46) here along with cuda; it’s all of a few minutes and an extra reboot (I boot to multi-user) to update after the kernel changes version, which, since you’re on a headless system, wouldn’t be an issue remotely.

Hmm … ok … this has me thinking of then updating the very old “headless” PC from LEAP-15.3 to Tumbleweed :

This is a bit off topic (albeit I may need to sort the resolution again after installing Tumbleweed), but for the update to Tumbleweed, I prefer to download all first, prior to installing … hence I am wondering if instead of

zypper cc -a && zypper ref && zypper dup --allow-vendor-change

I can instead use:

zypper cc -a && zypper ref && zypper dup --allow-vendor-change --download-in-advance

I think I will give that a try, and see if I get errors from the “--download-in-advance”.

Update to Tumbleweed underway. Last time I updated (from LEAP-15.2 to 15.3) this way on this old headless workstation, it ran for about 3.5 hours … so I anticipate something similar this time going from LEAP-15.3 to Tumbleweed.

I left my custom /etc/X11/xorg.conf file in place, so hopefully, if the update to Tumbleweed goes ok, I will get the resolution I want when I connect to the headless workstation via vnc (with this PC still using its old GTX-260 nvidia graphics card driving no physical monitor, and ‘dummy’ specified in the xorg.conf file).