Run X without using the graphics chip?

Is it possible to do this? I’m just thinking maybe I can run KDE without using the GPU so that I can use the GPU for computation (CUDA).

The system I’m thinking of using is a laptop with an Nvidia GTX 260M video card.

I’ve seen people talk about the framebuffer, but I don’t quite understand it, because some people seem to be talking about using the framebuffer even when they have a video card in their system.

I don’t know how, or why, you’d want to do that. The operating system doesn’t know how to speak to the GPU; that’s the job of the graphics card electronics. And the GPU doesn’t speak the language of the system or its programs, so it wouldn’t know what to do with a simple command. They’re separate processing units, meant to do different tasks. One cannot substitute for the other.

I guess you could force fbdev or vesa (generic, unaccelerated drivers) to load by adding an entry for the driver within /etc/X11/xorg.conf.d/50-device.conf.
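For what it’s worth, a minimal entry of that sort might look something like this (the identifier string is just an example, adjust it to your setup):

```
Section "Device"
  Identifier "Default Device"
  # Force a generic, unaccelerated driver instead of the nvidia one:
  Driver     "fbdev"    # or "vesa"
EndSection
```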

However, like udaman, I’m not sure how you can use the GPU directly from userspace. That stuff is beyond me…

Well, GPUs can be and are used to accelerate calculations. It can be done; it is used in supercomputers all the time these days. But these are headless machines, i.e. even though they may have a GPU installed, they do not connect to a keyboard or video display. They are talked to via terminals. Note that these may well be X Window System based terminals, but they are separate display/input devices nevertheless. So at minimum you would need two machines: one to work as the compute engine/server and one to act as the terminal. It may be possible to have a second graphics card/chip in a single machine, one for display and one for computing. Note that it takes special programming (e.g. special versions of FORTRAN), any old GPU may not be suitable, and even a suitable off-the-shelf consumer GPU is typically only going to do fast single-precision math.

So this is not exactly a weekend project :slight_smile:

> It may be possible to have a second graphics card/chip in a single machine, one for display and one for computing.

I worked for a company that developed Linux video servers for large LED screens (for sports and outdoor events). They developed an application utilising NVIDIA GPUs for fast processing. The actual video data was sent out via a 1 Gbit Ethernet connection to a Cisco switch and distributed to the screen panels from there. There was also a user GUI display connected to the video card, IIRC.

OK, I guess it’s not possible. I’ll have to stick to text mode when running the CUDA program. Thanks for the input.

On 2010-09-30 23:36, fading wrote:
>
> Is it possible to do this?

No.

Any display means using whatever graphics chip is connected to the output display.

> I’ve seen people talk about frame buffer, but I don’t quite understand
> it because it seems that some people are talking about using frame
> buffer even when they have a video card on their system.

It also uses the graphics hardware. That includes the graphics chip, even if only the bare minimum of it.


Cheers / Saludos,

Carlos E. R.
(from 11.2 x86_64 “Emerald” at Telcontar)

This is what I was thinking about (referring to deano_ferrari post #3)… is there a guide to this? So if I load fbdev or vesa, all graphics processing will be done by the CPU, right? It won’t touch the GPU?

I’m not worried about the GPU program. I know it will work in CLI mode. But if the GPU has to handle both the display and the computation, the computation will time out if there’s heavy display processing. So when I want to use the GPU program, I switch to runlevel 3 (init 3) and run it there. What I want to do is some processing of the generated data while the GPU program is running, and for that I need gnumeric (or Excel under Wine or VirtualBox), which requires X.
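Roughly, what I do at the moment is something like this (the program name here is just a placeholder for my actual CUDA code):

```
# leave X entirely, then start the long-running CUDA job
init 3
./md_sim input.cfg    # placeholder for the real program
```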

You can run X remotely on another machine. You could add a second card to do the display.
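For the remote X case, the simplest form I know of is X forwarding over ssh, run from whatever machine has the display (the hostname and user here are just examples):

```
# run gnumeric on the CUDA box, display it on this machine's X server
ssh -X user@cuda-box gnumeric
```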

Yes, I agree, it looks like those two are the only ways. Getting a dual-card system is the route I think I want to pursue. Thanks again!

But even if you were to use remote X, the console would be using VGA mode, which may or may not preclude the GPU from being used; I don’t know.

I suppose you could make it use a serial console and disable the local console entirely.
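If anyone wants to try that, the usual way (as far as I know) is a console= parameter on the kernel command line in the boot loader, something like:

```
console=ttyS0,115200
```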

I run a full openSUSE 11.3 KDE desktop on a low-end GT 240, and I can run the CUDA examples at the same time - there’s no problem doing this. Or do you want to free up the GPU entirely? Perhaps you could just install two GPUs.
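By “the CUDA examples” I mean the samples from the GPU Computing SDK; on my install that amounts to roughly the following (the exact paths depend on where you installed the SDK):

```
cd ~/NVIDIA_GPU_Computing_SDK/C
make
./bin/linux/release/deviceQuery
./bin/linux/release/nbody
```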

With a basic desktop (there are at least a couple that are simpler than KDE or GNOME), there shouldn’t be much taken away from the computing ability. The time before the program times out can be adjusted as well (there’s a sketch of that below). Only once have I had to do as much as turn off the 3D effects to run a CUDA script in KDE4 on a G210 card. I haven’t had to go as far as using the onboard video for the display and the Nvidia card for CUDA yet. Have you had problems with it timing out? Maybe a basic motherboard with a low-power CPU that has onboard video and a slot for your graphics card would work out well as an inexpensive solution if there are problems?
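Regarding adjusting the timeout: the only knob I know of is the nvidia driver’s watchdog option, and as I understand it that disables the check rather than lengthening it, so treat it as something to experiment with:

```
Section "Device"
  Identifier "Default Device"
  Driver     "nvidia"
  # Turn off the watchdog that kills GPU kernels which run too long
  # (risk: a stuck kernel can freeze the display):
  Option     "Interactive" "False"
EndSection
```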

fading wrote:

>
> gogalthorp;2231106 Wrote:
>> You can run X remotely on another machine. You could add a second card
>> to do the display.
>
> Yes, I agree, it looks like those two are the only ways. Getting a
> dual-card system is the route I think I want to pursue. Thanks again!

Apparently, anything that does not use the shaders or a lot of memory should be OK for CUDA. That’s why the CLI is OK. Turning off 3D effects should be OK too. But if I run something that uses 3D while running CUDA, it’ll time out. The example programs are short, so they probably won’t trigger it. Try running an MD simulation for a couple of days together with a few 3D effects and you’ll see what I mean. It does depend on the program, but the ones I’m using don’t have a way to adjust the time before they time out, AFAIK.