I don’t know how, or why, you’d want to do that. The operating system doesn’t know how to speak to the GPU directly; that’s the job of the graphics driver and card electronics. And the GPU doesn’t speak the language of the system or its programs, so it wouldn’t know what to do with a simple command. They’re separate processing units, meant for different tasks. One cannot substitute for the other.
Well, GPUs can be and are used to accelerate calculations. It can be done; it’s used in supercomputers all the time these days. But these are headless machines, i.e. even though they may have a GPU installed, they don’t connect to a keyboard or video display. They are talked to via terminals. Note these may well be X Window server based terminals, but separate display/input devices nevertheless. So at minimum you would need two machines: one to work as the compute engine/server and one to act as the terminal. It may be possible to have a second graphics card/chip in a single machine, one for display and one for computing. Note it takes special programming (e.g. CUDA, or special versions of FORTRAN such as CUDA Fortran), not just any old GPU is suitable, and even a suitable off-the-shelf consumer card may only give you fast single-precision (or integer) math, not full-speed double precision.
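To make the “special programming” point concrete, here is a minimal sketch of the kind of GPU offload being described, using CUDA C (the kernel and variable names are illustrative; it assumes the NVIDIA CUDA toolkit and an NVIDIA card, and is compiled with `nvcc`):

```cuda
// Minimal sketch of offloading a calculation to the GPU with CUDA.
// Compile with: nvcc saxpy.cu -o saxpy   (requires an NVIDIA GPU)
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element: y[i] = a*x[i] + y[i].
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];   // single-precision math on the GPU
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory (CUDA 6+)
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // launch on the GPU
    cudaDeviceSynchronize();                         // wait for the kernel

    printf("y[0] = %f\n", y[0]);  // 2*1 + 2 = 4
    cudaFree(x); cudaFree(y);
    return 0;
}
```

The point is that the work has to be explicitly written as GPU kernels like this; an ordinary program gains nothing from the card.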
> It may be possible to have a second graphics card/chip in a single machine, one for display and one for computing.
I worked for a company that developed Linux video servers for large LED screens (for sports and outdoor events). They developed an application utilising NVIDIA GPUs for fast processing. The actual video data was sent out via a 1 Gb Ethernet connection to a Cisco switch and distributed to the screen panels from there. There was also a user GUI display connected to the video card, IIRC.
This is what I was thinking about (referring to deano_ferrari post #3)…is there a guide to this? So if I load the fbdev or vesa driver, all graphics rendering will be done by the CPU, right? It won’t touch the GPU?
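For reference, forcing X onto an unaccelerated driver is usually done with a `Device` section in the X.org configuration. A sketch, assuming a classic `xorg.conf` (the identifier is illustrative; on newer setups the snippet would go in a file under `/etc/X11/xorg.conf.d/` instead):

```
# Sketch: force X to use the unaccelerated "vesa" driver (or "fbdev"),
# so desktop rendering is done by the CPU. Identifier is illustrative.
Section "Device"
    Identifier "DisplayCard"
    Driver     "vesa"        # or "fbdev"
EndSection
```

One caveat: this stops X from using the GPU’s acceleration engines, but the card still scans out the framebuffer, so “won’t touch the GPU” is only approximately true.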
I’m not worried about the GPU program. I know it will work in CLI mode. But if the GPU has to handle both the display and the computation, the computation will time out if there’s heavy display processing. So when I want to use the GPU program, I switch to runlevel 3 (init 3) and run it. What I want to do is some processing of the generated data at the same time the GPU program is running, so I need gnumeric (or Excel under Wine or VirtualBox), which requires X.
I run a full openSUSE 11.3 KDE desktop on a low-end GT 240, and I can run the CUDA examples at the same time; there’s no problem doing this. Or do you want to free up the GPU entirely? Perhaps you could just install two GPUs.
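With two cards installed, the compute program can be pointed at the card that is not driving the display. A sketch using the CUDA runtime API (the device index is an assumption; which card is which should be checked, e.g. with the `deviceQuery` sample):

```cuda
// Sketch: with two GPUs installed, run compute work on the card that is
// NOT driving the display. Device index 1 is an assumption; verify the
// numbering first (e.g. with the CUDA deviceQuery sample).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA devices found: %d\n", count);
    if (count > 1)
        cudaSetDevice(1);   // e.g. device 1 = dedicated compute card
    // ... launch kernels here; the display GPU stays free ...
    return 0;
}
```

Alternatively, the `CUDA_VISIBLE_DEVICES` environment variable restricts which cards a CUDA program can see, without changing the program itself.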
With a basic desktop (there are at least a couple that are simpler than KDE or Gnome), there shouldn’t be much taken away from the computing ability. The time before the program times out can be adjusted as well. Only once have I had to do as much as turn off the 3D effects to run a CUDA script in KDE4 on a G210 card. I haven’t had to go as far as using the onboard video for the display and the Nvidia video card for CUDA yet.

Have you had problems with it timing out?

Maybe a basic motherboard with a low-power CPU that has onboard video and a slot for your graphics card would work out well as an inexpensive solution if there are problems?
> gogalthorp;2231106 Wrote:
>> You can run X remotely on another machine. You could add a second card
>> to do the display.
> Yes, I agree, it looks like those two are the only ways. Getting a dual-card
> system is the way I think I want to pursue. Thanks again!
Apparently, anything that does not use the shaders or a lot of memory should be OK alongside CUDA. That’s why CLI is OK. Turning off 3D effects should be OK too. But if I run something that uses 3D while running CUDA, it’ll time out. The example programs are short, so they probably won’t trigger it. Try running an MD simulation for a couple of days together with a few 3D effects and you’ll see what I mean. It does depend on the program, but the ones I’m using don’t have a way to adjust the time before they time out, AFAIK.
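For what it’s worth, the timeout being described is the display watchdog: when a GPU also drives a display, the driver kills kernels that run longer than a few seconds. A CUDA program can at least detect whether the watchdog applies and recognise when it fired; the property and error names below are from the CUDA runtime API, the rest of the program is omitted:

```cuda
// Sketch: check whether the display watchdog applies to this GPU, and
// detect the timeout error after a long-running kernel.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // Non-zero when the GPU also drives a display: kernels that run too
    // long (a few seconds) will be killed by the watchdog.
    printf("watchdog active: %d\n", prop.kernelExecTimeoutEnabled);

    // ... after launching a long-running kernel:
    cudaError_t err = cudaDeviceSynchronize();
    if (err == cudaErrorLaunchTimeout)
        fprintf(stderr, "kernel killed by display watchdog; "
                        "split the work into shorter launches\n");
    return 0;
}
```

The usual workaround when the timeout can’t be raised is to break the simulation into many short kernel launches, so no single launch exceeds the watchdog limit.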