Hi
I want to make regular backups using rdiff, with crontab to schedule them. Can someone advise me on what I should and should not include in the backup, looking at the root directory? I will be backing up to an external USB hard drive; as it is mounted under /media I assume I must exclude this from the backup.
I would welcome any tips or advice on setting up this schedule.
It depends, of course. User data in /home is a candidate for backup in most cases (an exception might be a server-only system without real user data).
Most of the system you can get back easily enough with a reinstall.
/etc contains most of your system configuration data. Thus, when you have to reinstall, it is often nice to be able to see how it was (maybe not to simply restore it, but to merge it intelligently; the same goes for moving to the next openSUSE level with a reinstall)…
When you serve a website, the data is often in /srv.
MySQL puts its databases in /var by default. In any case I found /var valuable to have when restoring, so include it if you have the space.
In any case, all these system directories are rather stable in nature. Thus, after a first backup, subsequent backups (when using something like rsync) will take much less time.
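To illustrate (a minimal sketch only; the destination /media/backupdisk is an assumption, substitute your own mount point), such a selective rsync backup could be:

    # Copy the valuable directories to the external disk. -a preserves
    # permissions, owners and timestamps; --delete mirrors deletions.
    rsync -a --delete /home /etc /srv /var /media/backupdisk/backup/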
A lot of suggestions of all types will follow mine. Success in deciding!
On 2011-06-23 22:06, jimt123 wrote:
>
> OK, thanks for the advice. I am just a home user, but disk space is not
> an issue, so I would prefer to back up more rather than less.
Back it all up, except /dev, /proc, /sys, and the directory where your
external media is mounted. You could use “rdiff-backup”. By all I mean all,
yes, the installation binaries too. It saves time when restoring part or all.
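For example (just a sketch; the target directory /media/backup/root is an assumption, and you may want further excludes), one rdiff-backup invocation plus a crontab line to schedule it nightly:

    # Full-system backup; rdiff-backup keeps reverse increments of older versions
    rdiff-backup --exclude /dev --exclude /proc --exclude /sys \
        --exclude /media / /media/backup/root

    # root's crontab (edit with "crontab -e"): run it every night at 02:30
    30 2 * * * rdiff-backup --exclude /dev --exclude /proc --exclude /sys --exclude /media / /media/backup/root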
--
Cheers / Saludos,
Carlos E. R.
(from 11.4 x86_64 “Celadon” at Telcontar)
I wouldn’t bother backing up binaries; those you can reinstall later. Mostly you want your personal files and data files, server data such as MySQL and Apache backups from /var and /srv, your /etc configuration files, and the hidden configuration files under /home/userid/.
People back up /home/userid but miss the .mozilla, .ssh and .evolution directories that contain configuration, address books, mail, memos, etc. Backup utilities like PartitionImage can back all of those up; it shrinks the backups and splits them into multiple volumes if you want to use DVDs as external storage.
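A common way those hidden directories get missed is copying with a shell glob, which skips dotfiles; copying the directory itself avoids that (the paths here are just examples):

    # WRONG: * does not match hidden entries, so .mozilla, .ssh, … are skipped
    cp -a /home/userid/* /media/backup/home/

    # Better: copy the directory itself; rsync takes dotfiles along automatically
    rsync -a /home/userid/ /media/backup/home/userid/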
I second robin_listas: back up everything except the directories mentioned in post #5, and except the path where your actual backup lives (recursive backups mean looking for trouble). You may exclude a few more:
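For instance (common candidates only, my assumption rather than a definitive list), collected in an exclude file such as /root/backup-excludes.txt:

    # pseudo filesystems, rebuilt at boot
    /dev/*
    /proc/*
    /sys/*
    # temporary and cache data
    /tmp/*
    /var/tmp/*
    # other mounts, including the backup disk itself
    /media/*
    /mnt/*
    lost+found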
Use rsync. It does exactly what it is supposed to do: create an exact copy with minimal impact on system resources. rsync may have a slight hiccup when backing up changing files on a hot server, but this is usually harmless. Other (more sophisticated?) tools will do nothing better. You want an exact copy of your data, but you do not want to copy existing file fragmentation.
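A sketch of such an rsync call, using the exclude file above (the target path is again an assumption):

    # One exact, attribute-preserving copy of the root filesystem;
    # -A/-X keep ACLs and extended attributes, -H preserves hard links.
    rsync -aAXH --delete --exclude-from=/root/backup-excludes.txt / /media/backupdisk/daily0/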
Doing good backups is an art. It is much more than just making a copy of your valuable data, and this holds especially true when you want automated backups triggered by cron. Imagine the following scenario: you have a valuable database stored under /var/lib/mysql/. Some stupid user (could be me) deletes rows from a table without noticing it. The next night a backup is made, and the data is gone from the backup copy too. Some other table may be backed up while it is in an inconsistent state, so that MySQL cannot read it back from the binary (hot) backup. Other applications have similar problems.
What is the point of all this? You need a backup strategy. Your backup script should make clean dumps of the database tables before backing them up, as sketched below. When you make daily (= nightly) backups, you should keep at least one week’s worth of complete backups. This enables you to go back when things have gone wrong.
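As a sketch of such a dump step (the target path is a placeholder, and I assume the credentials are configured, e.g. in /root/.my.cnf), run from the backup script before the file-level copy:

    # A plain-text dump is consistent and restorable, unlike a copy of
    # the binary table files taken while the server is running.
    mysqldump --single-transaction --all-databases > /var/backups/mysql-dump.sql

The dump lands under /var and so travels along with the normal backup.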
Now someone might ask how to keep 7 or more copies of the complete system on the (limited) backup media. There is a trick involved: you make a rolling backup. Let’s say you have several directories to keep the backups of the last 7 days; call them daily0/ daily1/ … daily7/. rsync always backs up to daily0/, but first you delete daily7/ and move (rename) daily6/ to daily7/, then daily5/ to daily6/ … and daily1/ to daily2/. This is a very quick operation.

Then we copy the latest backup daily0/ to daily1/, and here is the trick: we use cp -al for this copy operation. We do not create a physical copy of each file (that would use disk space) but a hard link, so our “hard link copy” uses almost no disk space at all.

Now rsync comes into play, and it is incredibly clever. The next time rsync changes some file in daily0/, it writes the new version as a new file and replaces the old one; this breaks the hard link, so daily1/ silently keeps the old version. This is completely transparent. The user will see 7 complete backups, but the required space is the space for one backup plus the space for all files that changed during the last 7 days. In most typical situations it is enough to have two hard disks of the same storage capacity, one for the backup and one for your system, as long as the system disk is no more than about 70% full.
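Put together, the rotation might look like the following cron-driven script (a sketch only: the mount point /media/backupdisk and the exclude file from the earlier post are assumptions):

    #!/bin/bash
    # Rolling 7-day backup: daily0 is the newest copy, daily7 the oldest.
    BACKUP=/media/backupdisk

    # Drop the oldest copy and shift the remaining ones down one day.
    rm -rf "$BACKUP/daily7"
    for i in 6 5 4 3 2 1; do
        [ -d "$BACKUP/daily$i" ] && mv "$BACKUP/daily$i" "$BACKUP/daily$((i+1))"
    done

    # Hard-link copy of the latest backup: costs almost no disk space.
    [ -d "$BACKUP/daily0" ] && cp -al "$BACKUP/daily0" "$BACKUP/daily1"

    # Sync the live system into daily0; files rsync replaces there get a
    # new inode, so daily1 keeps the old version via its hard link.
    rsync -aAXH --delete --exclude-from=/root/backup-excludes.txt / "$BACKUP/daily0/"

Run it from root’s crontab, e.g. 30 3 * * * /root/rolling-backup.sh for a nightly run at 03:30.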
The whole procedure is explained in detail in a link I gave some time ago in this forum: