Server redundancy


I have an openSUSE 13.1 server used as a file server (Samba and NFS), web server, mail server and NIS server.
I would like to “clone” it so that it can be replaced quickly if something fails (say, a motherboard crash).

I thought about using rsync or something similar to clone the whole disk, but I expect there will be problems.

So I think the correct approach may be High Availability. I see there are a lot of options for achieving it (none of them easy, of course, but I don’t mind; I will learn a lot in the process).

Could anyone suggest which approach to use and which software to use as a starting point?


One solution that is ‘budget friendly’ (meaning only hardware costs and time spent learning) would be to set up Pacemaker + Corosync + GlusterFS (or Ceph).

The first two are the high-availability tools (Corosync handles communication between nodes so that a failure is detected, and Pacemaker executes commands upon failure); GlusterFS and Ceph are distributed file systems.
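To make the division of labor concrete, here is a minimal sketch of a two-node corosync.conf; the cluster name and node addresses are placeholders you would adapt:

```
# /etc/corosync/corosync.conf (sketch; names and addresses are placeholders)
totem {
    version: 2
    cluster_name: ha-cluster
    transport: udpu
}
nodelist {
    node {
        ring0_addr: 192.168.1.10
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.1.11
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1    # special quorum handling for exactly two nodes
}
```

Pacemaker then sits on top of this membership layer; the actual resources (floating IPs, mounts, services) are defined with its `crm` or `pcs` tools.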

I’d say setting all of this up in a very basic two-node configuration, with absolutely no prior knowledge, takes a day or two, since there are good guides on setting things up. Planning in advance is a good idea.

I’d rather look at DRBD for a two-node HA setup.
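For two nodes, the DRBD side boils down to one resource file replicated to both machines; the host names, backing disk and addresses below are placeholders:

```
# /etc/drbd.d/r0.res (sketch; hosts, disk and addresses are placeholders)
resource r0 {
    protocol C;              # fully synchronous replication
    device    /dev/drbd0;    # the block device applications actually use
    meta-disk internal;
    on node1 {
        disk    /dev/sda3;   # dedicated backing partition on each node
        address 192.168.1.10:7789;
    }
    on node2 {
        disk    /dev/sda3;
        address 192.168.1.11:7789;
    }
}
```

The `on <hostname>` names must match each machine’s `uname -n`, and the need for a dedicated backing partition is exactly why an existing server usually has to be re-partitioned.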

That would require a complete re-partition of the old server, though.

I’d generally recommend moving to a virtualized server.

You can deploy your server easily on any physical machine running that virtualization without hardware compatibility worries.
You can recover from a non-working scenario as fast as you can deploy or use a stand-by server running virtualization and pointing to your server copy.
You never have to worry about hardware upgrades or recovering a backup to new hardware.
If you want to continually update your backup copy, you can use technologies like DRBD to keep a system continuously up to date; but if your server files change infrequently, you may prefer instead to use a virtualization clone tool or simply copy files (cloning enables multiple copies to run simultaneously by modifying the MAC address and more).

If you do this, then for whatever virtualization choice you make there will be a P2V (Physical to Virtual) utility that should make your migration easy.


Should it be done with the same OS version (13.1), or will it be possible to use Leap 42.2 on the new node?


What virtualization software do you recommend?
I have used VMware and VirtualBox.
I see that the approach you recommend is fairly straightforward, but I think both VMware and VirtualBox reduce performance substantially.


I would use something recent (such as 42.2, or 42.3 when it’s released) because 13.1 did not receive the Samba security patch.

Also, repositories no longer compile software for it, so setting things up might be problematic. Looking at the software you run, it shouldn’t be hard to clone it to a new instance and adjust the configuration files from 13.1 to 42.2.

You should of course not start new things with an old, unsupported version of openSUSE. What first comes to mind is that you will need additional software that is no longer available for 13.1, because most repositories are gone already.

That depends entirely on your requirements and scale.
For a tiny solution supporting your personal use or a very small company, desktop virtualization like VirtualBox or VMware Workstation can probably do the job.

But if you find yourself needing to support a LAN with at least three servers providing network services, then I’d recommend looking at one of the enterprise-type virtualization technologies for their better management and monitoring features; in some cases (KVM, Xen and VMware vSphere) you also get live migration. For even bigger server farms, Linux containers should be considered.
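As an illustration of what live migration looks like under KVM/libvirt: it is a single command. The guest and host names here are placeholders, and both hosts need access to the same shared storage:

```
# Live-migrate the running guest "vmA" to host2 over SSH
virsh migrate --live vmA qemu+ssh://host2/system

# Confirm where it is now running
virsh --connect qemu+ssh://host2/system list
```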

As for reduced performance: if you see that, there is usually a reason. Nowadays, with hardware acceleration, it shouldn’t be noticeable if you have ample resources. A common cause of “poor performance” is that many of these technologies write memory to a backing file to protect against various possible corruptions and to ensure recovery; writing RAM to disk, of course, can greatly slow performance. Disable that and you won’t see the latency. Most non-enterprise virtualization users don’t bother to investigate optimization, so they never understand why things might be slow.


I have been reading a lot and trying things. The alternatives that seem to fit best for me are Xen, KVM and VMware (ESXi). The server I’m using is configured with software RAID 10. ESXi does not support software RAID; Xen and KVM do. Both Xen and KVM seem fine; I don’t know if there is a reason to select one over the other.
Live migration seems a fine tool; I think it would let me keep a backup copy up to date so that, if the main server crashes, I could switch to the mirror. What I’m not sure about is whether the switch can be automatic; I don’t know if I’m explaining myself right. Let’s say I have physical servers P1 and P2 and virtual machines A, B, C, D. Is it possible to have A and B running on P1 and C and D running on P2, but use live migration to keep copies of A, B, C and D up to date on both P1 and P2, with a synchronization mechanism that, in case of failure of P1 or P2, switches all of the VMs to the other host?

No, it won’t. Virtualization does not solve your original problem: how to replicate data between servers so you have an up-to-date copy. Virtualization is cool, but that does not mean it can solve everything. It makes the task of switching workloads between servers slightly easier (at the expense of adding yet another layer to manage), but it does not magically solve the task of replicating data between servers.

You may want to consider the Docker way instead of virtualization (less overhead, but less flexibility to choose the operating system).
You will still need to choose some mechanism to set up the redundancy.
There are good walk-throughs online for setting up Docker with Kubernetes.
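In Kubernetes, redundancy is declared rather than scripted: a Deployment keeps a fixed number of container replicas running and restarts them on failure. A minimal sketch (the image and names are just examples):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2              # Kubernetes keeps two copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:stable
          ports:
            - containerPort: 80
```

Note that, as with virtualization, this only handles the stateless part; persistent data still needs its own replication mechanism.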

  1. You should create a list of your solution requirements. A short list might start with something like the following; add or remove what you want:
    Entire Server or application redundancy
    If application redundancy, are there static “presentation” parts which can be separated from the data?
    What is your downtime tolerance? - None? Less than an hour? Half a day? A day? Longer?
    Are your hardware resources limited or specified?
    Exactly what are you trying to protect against? Single machine catastrophe? Building Catastrophe? Regional Catastrophe? Fire? Earthquake? Hurricane? Power? Intrusion? Something else?
    Do you have budget constraints?

  2. From the above, you should then be able to get a start on creating your next list of possible strategies and possibly technologies.
    Although Live Migration might be helpful in some solutions, it migrates: the original instance is eventually deleted. Live Migration is more often helpful for shifting loads between machines, not as a backup/failover solution.
    Static applications and data don’t need any kind of costly or difficult duplication, they can often be built or copied once and then maintained with little effort.
    Anything that changes dynamically, perhaps even constantly, can be difficult to design and deploy. In general you might look at HA (High Availability) solutions; on Linux this is often based on DRBD, which bit-copies from one machine to another. A simple way to avoid this is to deploy on a single machine, but with extra internal redundancy built in. That strategy is often based on deploying a SAN, or similarly a souped-up PC with extra CPUs, RAID 10, maybe redundant NICs, etc. Major PC manufacturers sell “server” models which typically integrate this extra hardware redundancy.

  3. After the above, you might then be ready to Google some solutions or ask specific questions in these Forums for exactly what you might want to do.

On the subject of using virtualization: whether you decide on KVM, Xen, VMware or something else, the following is a short list of common features (typically available, but you should always verify for your chosen technology) which are likely relevant to your needs…

  • Ability to snapshot (Save machine state at a specific point in time). This allows “one-click” rollback of the entire machine to any snapshot you choose.
  • Clone (create a near-exact copy of a virtual machine, with the changes needed to deploy it on the same network). If there is no need to run the copy simultaneously on the same network, then a simple file copy of the virtual machine files will do.
  • Virtual Networking (ability to isolate or allow communication between virtual machines or groups of), not restricted to physical hardware
  • Ability to over-provision: the ability to store inactive virtual machines without unnecessarily tying up resources, and the ability to simultaneously run multiple machines whose combined nominal resources exceed the available hardware, provided the individual virtual machines aren’t all actively using their allocations at once.