I’m not sure if this is the correct forum to post to, but since it is about my virtual servers…
I have a webserver running SuSE 11 and I am setting up another SuSE server to function as a backup server.
What I want to do is run a daily backup from /srv/www/htdocs on the main server to /srv/www/htdocs on the backup server, so that when one of the servers goes down, I will have a backup running and there will be no downtime for the websites.
What is the best way to do this?
(Let me know if you have any more questions or need more information)
Hello,
First, if you’re looking for a SLES solution, you’ll likely want to post in the SLES forums and not these openSUSE forums.
That said…
Your question really comes down to what type of fault tolerance and redundancy you want to implement to protect against particular kinds of failure, and which virtualization technology you are using.
So, for instance, some technologies like VMware can migrate or copy bits “live” across the network (well, the paid versions of vSphere can).
Others may not support that type of live migration or cloning, or only offer a partial implementation (e.g. Hyper-V).
If you just want to protect against Host issues, you can do something like storing your disk files on a shared resource such as a SAN or iSCSI target. That won’t protect against disk-file corruption, but it would protect you against a Host going down… you simply fire up another Host that has immediate access to the same disk-file source. Be aware of the load some shared file systems will place on your network. OS architecture is a big factor here unless you modify things: openSUSE 12.3 and later, like most contemporary distributions, place most of their runtime mounts in RAM as tmpfs file systems, while SLES may not do so by default. Minimal disk access means less impact on the shared file system.
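As a quick sanity check on that last point, you can see how much of a running system already lives in RAM-backed tmpfs; and purely as a hypothetical sketch (the device path, mount point, and file system are examples, not taken from your setup), an fstab line for a shared iSCSI-backed disk-file store might look like this:

# List the RAM-backed tmpfs mounts the running system already uses
mount -t tmpfs

# Hypothetical /etc/fstab entry putting the VM disk-file store on a shared
# iSCSI LUN that both Hosts can see (_netdev waits for the network to be up):
# /dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2013-01.example:store-lun-0  /var/lib/libvirt/images  ext3  _netdev  0  2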
You can also implement commercial software that supports live migration of bits; the best known is probably DoubleTake, which, as virtualization has taken off, has built a big marketing and solutions pitch around its product (it’s very good).
If you’re on a budget and don’t have to be 24/7/365, of course you can always either shut down and clone on a regular basis, or simply replicate the necessary bits (e.g. files if it’s a file server, or shipping the database if you’re running an RDBMS like MySQL). Your proposal falls into this category: just rsync the website files from one machine to the other during a slow period (if one exists). You may not want to rsync the entire website, just the directories containing files that change.
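A minimal sketch of that, assuming key-based ssh login from the main server to the backup server is already set up (the hostname "backupserver" and the schedule are examples only, adjust to your environment):

# /etc/cron.d/htdocs-sync -- push the web root to the backup server at 03:15 each night
15 3 * * * root rsync -az --delete -e ssh /srv/www/htdocs/ backupserver:/srv/www/htdocs/

The trailing slash on the source copies the contents of htdocs rather than nesting a second htdocs directory, and --delete keeps the two copies identical; drop it if you want files removed on the main server to survive on the backup.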
So, there are probably a zillion possible solutions to guard against various types of malfunctions, and, just like backups, you may want to implement a few strategies, not just one.