One machine from multiple machines?

Hi all,

I don't know if this is the right thread, but I have an idea: build a virtual machine that uses multiple real computers. To explain better: for example, four i7 8-core computers with 16 GB of RAM each would become one virtual unit working as 32 cores with 64 GB of RAM, without rebuilding applications, because I can't rebuild mental ray or the other rendering engines. The advantages: it's low-cost hardware, it can be upgraded, and there's no need to rebuild software.

Is it possible with openSUSE 12.2 x64 (or SUSE Enterprise) using a virtualization system?

Thank you in advance, and sorry for my poor English,

Matt

Operating systems are typically tied pretty tightly to a single “system”,
so while virtualization is good at splitting one physical system into many
virtual ones, it is not typically (at least in the consumer land in which we
live) good at doing the opposite.

There are technologies to take advantage of multiple systems for a single
purpose, but most of them involve an OS on every system plus an application
written to be distributed across all of them; those setups are considered
more like clusters. Since the applications have to be written for that
purpose, it's maybe not what you’re after.
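To make the “written for that purpose” part concrete, here is a minimal sketch of what cluster-aware code looks like, using MPI via the mpi4py Python module (my choice for illustration; nothing in this thread says the OP's tools use MPI). Each copy of the program runs on a different node and explicitly picks its own slice of the work; none of this happens automatically just because the machines are networked.

```python
# Minimal sketch of a cluster-aware program (MPI via mpi4py).
# Illustrative only: the point is that the application itself must
# split the work; the OS or hypervisor will not do it for you.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # which process/node am I?
size = comm.Get_size()   # how many processes/nodes in total?

frames = list(range(1, 101))      # e.g. 100 frames to render
my_frames = frames[rank::size]    # each rank takes every size-th frame

print(f"rank {rank} of {size} would handle frames: {my_frames}")
```

You would launch it with something like `mpirun -np 32 --hostfile hosts python split_work.py`, with the four machines listed in the host file. An unmodified renderer has no such logic built in, which is why it can't just spread itself across boxes.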

Good luck.

Hi,

Thanks for your answer. Yes, the goal is to not modify the software.

Matt

Haven’t touched 3D rendering for quite a while, but I remember there were quite a few renderers that had render clients running in a farm. This meant a couple of other computers were called in to help with rendering. IIRC Houdini had this working for Linux. One would design on a workstation with e.g. 2 cores and render with 4 clients of 4 cores each, i.e. 2 + (4*4) = 18 “rendering blocks” visible in the render monitor window.

Here’s something on mental ray and distributed rendering on Linux: the Softimage User’s Guide.
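For a sense of what a render farm does at its simplest, here is a rough sketch of a dispatcher pushing frame ranges to render clients over SSH. The hostnames and the render_scene command are made up purely for illustration; real tools (mental ray standalone, Houdini's farm clients, etc.) ship their own dispatch mechanisms, described in guides like the one linked above.

```python
# Hypothetical render-farm dispatcher: hand out frame chunks to clients.
# Hostnames and the "render_scene" CLI are invented for illustration only;
# rendered output is assumed to land on shared storage such as an NFS mount.
import subprocess

render_hosts = ["node01", "node02", "node03", "node04"]   # hypothetical hosts
first_frame, last_frame = 1, 100

frames = list(range(first_frame, last_frame + 1))
for i, host in enumerate(render_hosts):
    chunk = frames[i::len(render_hosts)]                  # interleave frames
    frame_list = ",".join(str(f) for f in chunk)
    # Fire and forget; a real farm manager would also track failures/retries.
    subprocess.Popen(["ssh", host, f"render_scene --frames {frame_list}"])
```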

Hi,

mental ray does that too, but only the standalone version, not the one included by default in Maya. Our internal renderer supports multicore, but not rendering over the network, so I need to find a solution to build a cluster.

Houdini is good too, but we’re working under Maya and Guerilla Render.

Matt

A cluster is indeed the term for what you’re looking for. Although I’m very interested in clustering (for completely different reasons), I don’t know much more about it than what it is, and that software to build a cluster is available. I don’t know if that will help if the versions are standalone…

Yes,
The OP seems to be describing compute clustering, which is different from the network “clustering” that is more commonly discussed.

When you’re talking about this type of clustering, there are many implementations…

  • Application-level clustering, e.g. Hadoop and MapReduce, to crunch data in a massively parallel way (see the sketch after this list)
  • Highly parallelized networked nodes, which is what I think you are asking about here. The typical bottleneck, again, is usually the pathways between nodes. At the last few computing conferences I’ve seen various vendors display multiple-backplane servers where it’s possible to mix and match pathways, redundancy and identity within the same box holding numerous individual “servers.” The recent Chinese supercomputer which held the world’s no. 1 rating for a while was built on multiple off-the-shelf ATI Radeon GPUs connected with a special custom network.
  • Grid computing is a type of massive clustering that ordinarily doesn’t emphasize speed and works with “often disconnected” nodes, e.g. www.worldcommunitygrid.org/
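Since the first bullet mentions Hadoop and MapReduce, here is a toy, single-machine sketch of the MapReduce idea in Python, just to show how the application is structured around independent map and reduce steps so a framework can scatter them across many nodes. On a real cluster, Hadoop would distribute the map calls and the shuffle for you.

```python
# Toy word count in the MapReduce style. Runs locally; a real framework
# (Hadoop, etc.) would distribute the map calls and the shuffle across nodes.
from collections import defaultdict

documents = ["render the frame", "render the scene", "the farm renders"]

def map_step(doc):
    # emit (word, 1) for every word in the document
    return [(word, 1) for word in doc.split()]

def reduce_step(word, counts):
    return word, sum(counts)

# "shuffle": group intermediate pairs by key
grouped = defaultdict(list)
for doc in documents:
    for word, count in map_step(doc):
        grouped[word].append(count)

result = dict(reduce_step(word, counts) for word, counts in grouped.items())
print(result)   # {'render': 2, 'the': 3, 'frame': 1, ...}
```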

So,
In other words, you don’t usually start with the infrastructure; you consider your problem and its objectives, and <then> look for the most appropriate implementation.

HTH,
TSU