I have a home server… FTP and mail… and instead of upgrading the hardware on this server, with all the mess that involves, I have bought a second server (a much cheaper one) and I want to share the resources (processor, RAM and especially HDD space!!!) of the second one with the first one… I am running openSUSE 11 on both machines… and I don’t know if there’s a program I should be using or if it’s simply a matter of configuration… (sorry if I have posted in the wrong forum…)
I’m going to guess not too many people are going to answer; it’s a bit unclear what you really want.
If you’re after clustering then a volunteer forum may not be the best place. I suspect you’re looking for a remote mount; with little to no investigation, two come to mind: sshfs or NFS. This would solve the shared HDD space, but I suspect it would incur some loss in speed, YMMV.
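For instance, a quick sshfs mount might look something like this (the hostname, user and paths are just placeholders):

  # on the first server: install sshfs and mount a directory from the second server
  zypper install sshfs
  mkdir -p /mnt/server2
  sshfs user@server2:/srv/storage /mnt/server2

  # and to unmount it again:
  fusermount -u /mnt/server2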
As for the rest, I had a quick google for clustering Postfix and ran away quite quickly, but it would seem possible, as would an FTP cluster. As for the setting up and the how, I’m pretty sure that goes beyond a volunteer forum into a lot of self-learning and specific mailing lists.
IMO, I have to admit, I would have thought a minimal headless/X-less home server with FTP and mail would need very little in the way of resources; perhaps a migration would be for the best, or a reconfiguration of unneeded services on the server.
Disk space could be added easily using NFS. It can be done using YaST, and when you get stuck the forums here are quite capable of helping with NFS.
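To give an idea, the hand-edited equivalent of what YaST sets up might look roughly like this; the path and addresses are only examples:

  # on the second server, export a directory in /etc/exports:
  /srv/ftpdata   192.168.1.10(rw,sync,no_subtree_check)

  # then (re)start the NFS server and re-read the exports:
  rcnfsserver restart
  exportfs -ra

  # on the first server, mount it (or add a matching line to /etc/fstab):
  mount -t nfs 192.168.1.11:/srv/ftpdata /srv/ftpdata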
Nevertheless, you should spend some time thinking about what you want to do with the disk space and where you want to put what.
About the use of the CPU/RAM (when it is not being used for the NFS serving), I suppose, as feathermonkey suggests, that more inside knowledge of your setup and the possibilities of remote execution is needed.
As you can read from the previous posters, you have to be more clear about what you want to achieve and what for.
Well, the storage is needed for the FTP server… indeed NFS would be a solution… I have a lot of data on the current server… it’s set up with RAID 1… and I’m not looking for speed, but I do need the space (on the FTP server there are 30 or so users <friends, family> plus back-ups from my computer…), so trying to migrate the data to bigger HDDs is a pain in the butt, but I also want the extra CPU and RAM… well, mostly CPU… and yes, I was looking for someone who could help me with clustering… the extra CPU is going to be used for… let’s just say “compiling”… I tried to do something solo… but I got stuck and I am in need of a “tutorial”… a crash course on the subject… If you don’t know how to help, or for other reasons you can’t, I’ll be satisfied with some concrete documentation… I just can’t seem to find any… so… any material on the subject would be of use… Thank you for your help
Like I said, you might be lucky if some cluster manager drops in (I have seen some in the past), but this is a very specialized setup you’re looking for. I can help with NFS, like the others, but this is beyond my skills (yet).
As I said, any help would be appreciated… I know Red Hat has some programs for the enterprise version that handle clustering, and I know a program for Ubuntu and how to partially set it up… but I need something on SUSE and with documentation… so… thank you… and if you find someone willing to help, please redirect them here… much appreciated
Was that “heartbeat”? Maybe this helps:
Start the software installer, check the top 4 options, and search for “cluster”.
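Or, if you prefer the command line, something along these lines should find and fetch the relevant packages (package names can differ a bit between repositories and releases, so take this as a rough pointer only):

  zypper search cluster
  zypper install heartbeat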
Clustering is not something to be done easily. First one should have some good knowledge of the subject (including heartbeat indeed, networking, client-server aspects). Then one must design one’s solution taking into account the application (can it be split into an application server and a database server, is data to be mirrored, what should run where in normal day-to-day operation, what should run where on failure, what should run where during maintenance) and the available hardware and tools. Also which subnets to use where (separate heartbeat network, load sharing, etc., etc.).
All sorts of solutions are to be found, including very poor man’s clustering and takeover.
Why do you think that clustering is needed?
Now, I’m far from all-knowing, and Google is letting me down… but I did find this: RedHat: RE: vsftpd beginner’s tutorial? which states
Memory is NOT important for a dedicated FTP server, about 256 MB RAM is sufficient for a very big server
Which rules out RAM; that leaves CPU, but I can’t honestly see that being CPU-intensive.
So I’m led to believe your Postfix/MTA is doing some major CPU eating, i.e. aggressive virus scanning and/or spam filtering. Surely this is a case of configuring Postfix to run on the other box and using remote mounts, rather than looking at clustering.
In all honesty, I would expect the bottleneck to be the disks or the network rather than the CPU/RAM, unless we’re talking ancient hardware.
@FeatherMonkey As I said, I do not need the CPU for server processes, I need it for personal use…
@Knurpht Yes, I have tried to use heartbeat but I got stuck at the configuration part… I have done some heavy reading today, but it looks like it was not heavy enough…
@hcvv Wow dude, you are talking about load-balancing clusters and HA clusters and data servers combined… I don’t have that many servers and, at least for the moment, I only want to cluster 2 servers… one server is an Intel Xeon E5220 with 4 GB RAM and 6 HDDs (500 GB each) in RAID 1… the second one is an Intel Xeon L5508 with 2 GB RAM and 4 HDDs (1 TB each) in RAID 1… I want to cluster them… the HDD space will be used for storage by the FTP and back-ups, and the CPU will be used for my personal compiling…
Well, forgive me for offering an opinion, which may be totally inappropriate, but I have been trying to understand what you want to achieve. From the criteria you have laid down, I would have done this:
- Use Samba (smb) to share portions of the disks of server 1 with server 2 and vice versa. Samba does much the same job as NFS but has certain advantages and disadvantages: it is slightly faster than NFS but may be less secure, and it is much more flexible, since Windows systems can view the shares as well. There is also a wealth of experience with Samba at this site, since swerdna, especially, has his own web site devoted to it.
- Mount the share(s) from server 2 on server 1 using ‘mount.cifs’. This notionally and seamlessly grafts the server 2 share onto the file system of server 1. Do the same the other way round. When you have done that, server 1 can use any of its programs (e.g. compilers) on its own local file system and on the share(s) from server 2, and vice versa. (A rough sketch of the share and the mount follows at the end of this list.)
- Let’s say server 1 is your front end (on your actual desktop) and you want to access additional storage on the server 2 share(s). No problem, just access that storage from server 1 as though it were a local resource. Suppose you want to compile a program on server 1 but you want to use the CPU of server 2. Just VNC or ssh -X to server 2 to get a remote desktop or window. Then access the server 1 share(s) from server 2 (so this is doubling back through the network, in a sense) and compile using the server 2 CPU. This presupposes that you have compatible facilities on both server 1 and server 2, i.e. the same OS, the same compilers and the same link libraries. However, you may be able to work it so you only have one set of compilers and libraries, but you would then need to add the locations of the remote (on the share(s)) libraries to the paths for ldconfig etc.
- The only thing you won’t be able to do, with the above setup, is use both the server 1 and server 2 CPUs on the same compilation at the same time. I hope you were not wanting to do that.
- All of the above uses bog-standard techniques, so it would be easy to get it working. Hence a better engineering solution, I feel.
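To make that a bit more concrete, here is a rough sketch of the kind of share and mount I have in mind; the share name, paths, addresses and user name below are made up purely for illustration, so adjust them to your own boxes:

  # on server 2, a share definition added to /etc/samba/smb.conf:
  [storage]
      path = /srv/storage
      read only = no
      valid users = yourusername

  # give the user a Samba password and restart Samba:
  smbpasswd -a yourusername
  rcsmb restart

  # on server 1, mount that share somewhere convenient:
  mkdir -p /mnt/server2
  mount.cifs //192.168.1.11/storage /mnt/server2 -o user=yourusername

  # and for the compile-on-the-other-CPU idea, a remote shell/window:
  ssh -X yourusername@192.168.1.11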
Well those are my thoughts. Sorry if they were not germane to the discussion. Plodding off now.
Thank you for your opinion… for the moment I’ve set up NFS and it’s working OK (that was the priority…), now I’m going to struggle with clustering, because I do want to use both servers on the same compilation :D… and because there is probably going to be a third server in the future… maybe more… ATM the crisis is averted… I am going to read more about clustering… and maybe post any relevant findings here… for anybody else to use… but it’s going to take time… weeks, months… so… speak to you guys later…
Well, the last time I heard someone discuss clustering they recommended identical or near-identical boxes; I can see how that would be easier to set up and balance. As for distributed compiling, there are tools for SUSE, I just don’t know where to find them. Distributed compiling is a lot less dependent on parity of machine power/resources.
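One tool along those lines that I believe is packaged for SUSE is distcc (icecream, which came out of SUSE, is similar); very roughly, and assuming distcc is installed on both machines, it works like this (the addresses and job count are only examples):

  # on the second box, run the distcc daemon and allow clients from the LAN:
  distccd --daemon --allow 192.168.1.0/24

  # on the box you compile on, list the hosts and let make fan the jobs out:
  export DISTCC_HOSTS="localhost 192.168.1.11"
  make -j8 CC="distcc gcc"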
Hi
Beowulf? There is OSCAR, but you would need to build from the src rpms; else PVM/MPI. It’s been a while since I had mine running, just a small three-node one, crunching SETI…
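As a very small taste, and assuming something like Open MPI is installed on both nodes with password-less ssh between them, running a program across the two boxes looks roughly like this (the hostnames are placeholders):

  # hostfile listing the nodes and how many processes each may take
  echo "server1 slots=4"  > myhosts
  echo "server2 slots=4" >> myhosts

  # compile an MPI program and run 8 ranks spread over both machines
  mpicc hello.c -o hello
  mpirun --hostfile myhosts -np 8 ./hello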
–
Cheers Malcolm °¿° (Linux Counter #276890)
SUSE Linux Enterprise Desktop 11 (x86_64) Kernel 2.6.32.12-0.7-default
up 8:14, 3 users, load average: 0.16, 0.14, 0.18
GPU GeForce 8600 GTS Silent - Driver Version: 173.14.25