Application to back up my root partition prior to Tumbleweed

Hello. On my main work machine I have openSUSE 11.4 standard KDE on two partitions, standard root and standard /home. I’m going to add the Tumbleweed repos and follow the evolution of openSUSE until 11.5/12.0 is released.

But I cannot afford to have my main work machine off the air. So I want to back up the root partition each time I do a major upgrade from the Tumbleweed repos.

So I thought I would just image the root partition in compressed/reduced form to a USB drive prior to updating.

First I looked at Partimage but it doesn’t do the EXT4 filesystem.

Second I thought about Clonezilla but it doesn’t allow compression (it states that the target for the image must be at least as big as the source partition); thus DD is just as limited.

Third I looked at the System backup and restore facility in Yast but it seems to be undocumented (i.e. I can’t find it.)

Then I thought why not just use cp because the root filesystem of 11.4 for me is only occupying 6Gb ATM. I propose to use “cp -a -u -v” from a live CD to copy the root files to a USB drive with an EXT4 partition.

So two questions (a sketch of what I mean follows below):

  • is there a flaw in backing up the system/root with “cp -a -u -v”?
  • is there better imaging software for a small job like this?
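
For concreteness, what I have in mind from the live CD is roughly this (the device names are only placeholders for my root partition and the USB partition):

mkdir -p /mnt/root /mnt/usb
mount /dev/sda2 /mnt/root
mount /dev/sdb1 /mnt/usb
mkdir -p /mnt/usb/root-backup
cp -a -u -v /mnt/root/. /mnt/usb/root-backup/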

Just create a compressed tarball of the top level directories:


tar -czvspf /media/usbdrive/fullbackup.tar.gz /boot /bin /etc /home /sbin /tmp /usr /var 

When you need to restore, just extract the tarball:


tar -C / -xzvspf /media/usbdrive/fullbackup.tar.gz

Don’t try to back up /proc, /dev and /sys.
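
If you’d rather start from / than list the top-level directories, excludes should do the same job; a sketch in GNU tar syntax (untested, adjust the paths to your setup):

tar --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/media \
    -czvspf /media/usbdrive/fullbackup.tar.gz /

Excluding /media also stops tar from trying to put the backup file inside itself.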

Good luck,
Hiatt

Clonezilla does do some compression, and the recommendation you read is just to be safe; it will be approx 1/2 of the working size using partclone’s “sorta tar.gz” compression. Its speed will depend on your I/O to the target, so another installed hard drive will be the fastest.

This afternoon I backed up my root partition using Clonezilla to a 3.7 GB folder (approx 9 GB used on the system partition). It took 11 min to another hard drive.

To achieve even more compression you’d have to use something like fsarchiver, and it will be much slower doing bzip or lzma.

What I’d suggest is to go here: Clonezilla-SysRescCD - Wellcome

Get the ISO and burn it; you’ll have Clonezilla and SystemRescue (partimage & fsarchiver) to use, so you can make up your own mind.

Fsarchiver using the fastest lzma compression takes about three times longer than Clonezilla to generate a workable backup.
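
If you want to try fsarchiver, the calls are roughly these, from memory, so check the man page (device and paths here are placeholders; -z picks the compression level, and the higher levels are the slow lzma ones):

fsarchiver savefs -z7 /mnt/backup/root.fsa /dev/sda2
fsarchiver restfs /mnt/backup/root.fsa id=0,dest=/dev/sda2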

Good luck :-)

Don’t try to back up /proc, /dev and /sys

I’d rather back up by booting into a live CD, mounting the root partition, and mounting an ext4 USB drive to collect the tar-ed filesystem. So then I would want to back up all of the root filesystem, including /proc, /dev and /sys. Presumably copying those directories would be OK when the OS was not running. You see, if my OS breaks down after a Tumbleweed update, I would not want to reverse the update; rather, I would delete everything on the root partition from a live CD and restore the tar-ed image of the complete root partition. Does that make sense?
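
So concretely, the recovery from the live CD would be roughly this (placeholder device names again):

mkdir -p /mnt/root /mnt/usb
mount /dev/sda2 /mnt/root
mount /dev/sdb1 /mnt/usb
rm -rf /mnt/root/*
tar -C /mnt/root -xzvspf /mnt/usb/fullbackup.tar.gz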

clonezilla does do some compression and the recommendation you read is just to be safe, it will be approx 1/2 of the working size using partclone’s “sorta tar.gz” compression.

I’ll have a play with it and see how big an image it makes, thanks.

Both the cp and the tar methods will work OK (where tar offers the compression, and yes, the options to keep owner/timestamps are important).
Doing it from a live CD (or any rescue system or multi-boot), thus having the file system not in use and consistent over the backup time, is the very best.

Another option would be to dd it to another medium. This will certainly leave all of the file system exactly as it is, but to dd it back you need a partition of exactly the same size as before (which most probably is the case in your situation).

Since tar and cp can preserve timestamps, permissions, ownership, links and so on, I can’t see a point in making a dd-style image, because isn’t it exactly as big as the original partition, even though 50% might be empty?

And thanks for confirming that you see no trouble with cp for taking an image.

Exactly. I mentioned it mainly because many people will read this and ask themselves if dd would be an option. But I could have laid more stress on the fact that the medium to save it on must be at least as big as the source.

The big difference one should be aware of is that cp and tar (and brothers/sisters) copy files, while dd copies disk blocks regardless of their contents.

On 03/22/2011 02:06 AM, swerdna wrote:
>
> thus DD is just as limited.
>

please! don’t drag me into this one also!! ;-)


DenverD
CAVEAT: http://is.gd/bpoMD
[NNTP posted w/openSUSE 11.3, KDE4.5.5, Thunderbird3.1.8, nVidia
173.14.28 3D, Athlon 64 3000+]
“It is far easier to read, understand and follow the instructions than
to undo the problems caused by not.” DD 23 Jan 11

Here is a backup script that I used on an old server. Just cut and paste.

#!/bin/bash

# timestamp used in the backup file names, e.g. 110322-0945
TodaysDateTime=$(date +%y%m%d-%H%M)
TarFile=/raid/Backups/Xena/xenaRootBackup-$TodaysDateTime.tar.bz2
LogFile=/raid/Backups/Xena/xenaRootBackup-$TodaysDateTime.log.txt
ErrorFile=/raid/Backups/Xena/xenaRootBackup-$TodaysDateTime.error.txt
ExcludeFile=/raid/Backups/Xena/exclude.list
# archive / with bzip2, skipping the paths in exclude.list; normal output
# goes to the log file, errors to the error file
tar -X $ExcludeFile -cvjpf $TarFile / 1>$LogFile 2>$ErrorFile
md5sum $TarFile >>/raid/Backups/Xena/MD5SUMS
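
The exclude.list just holds the paths tar should skip, one per line. The actual file isn’t reproduced above, but for a root backup it would be roughly this (illustrative only; the backup directory itself belongs in it, so tar doesn’t try to archive its own output):

/proc
/sys
/dev
/raid/Backups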

Directory Listing:

-rw-r--r-- 1 root root 605 Mar 5 2008 MD5SUMS
-rw-r--r-- 1 root root 154 Aug 27 2007 exclude.list
-rw-r--r-- 1 root root 653690652 Jan 4 2006 xenaRootBackup-060104.tar.bz2
-rw-r--r-- 1 root root 2383196609 Mar 6 2006 xenaRootBackup-060306.tar.bz2
-rw-r--r-- 1 root root 1113620307 Jun 15 2006 xenaRootBackup-060615.tar.bz2
-rw-r--r-- 1 root root 1408685932 Oct 30 2006 xenaRootBackup-061030.tar.bz2
-rw-r--r-- 1 root root 1936828068 Mar 6 2007 xenaRootBackup-070306.tar.bz2
-rw-r--r-- 1 root root 1089313113 Aug 10 2007 xenaRootBackup-070810.tar.bz2
-rw-r--r-- 1 root root 1474199175 Aug 21 2007 xenaRootBackup-070821.tar.bz2
-rw-r--r-- 1 root root 230 Oct 18 2007 xenaRootBackup-071018-1127.error.txt
-rw-r--r-- 1 root root 36771010 Oct 18 2007 xenaRootBackup-071018-1127.log.txt
-rw-r--r-- 1 root root 1909159212 Oct 18 2007 xenaRootBackup-071018-1127.tar.bz2
-rw-r--r-- 1 root root 335 Mar 5 2008 xenaRootBackup-080305-0818.error.txt
-rw-r--r-- 1 root root 37443564 Mar 5 2008 xenaRootBackup-080305-0818.log.txt
-rw-r--r-- 1 root root 2208016293 Mar 5 2008 xenaRootBackup-080305-0818.tar.bz2
-rwxr-xr-x 1 root root 401 Mar 5 2008 xenaRootBackup.sh*

Also, you can pipe dd through gzip to get a compressed image. It has been a while and I need to look at my notes on how I did it, but it was something like this.

dd if=/dev/sda1 | gzip > Image-110322.dd.gzip

The dd command does work best when booted off a live CD like you said.
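
When you need it back it’s the same pipe in reverse, presumably something like this (double-check the device name first, since this overwrites the whole partition):

gunzip -c Image-110322.dd.gzip | dd of=/dev/sda1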

Dave W

I like it when you publish scripts, etc., but please do so between CODE tags. It improves readability enormously: Posting in Code Tags - A Guide.

Sorry, I have never posted a script before. What do I need to use to play that video? It does not work in Windows or mplayer.

Dave W

I think the pictures make it clear enough. It is not that difficult. Go for Advanced (lower right when you add a post) and then use the # button. Or simply type the tags yourself: [TAG] and [/TAG], but then with CODE instead of TAG.

On 2011-03-22 12:06, swerdna wrote:
> Since tar and cp can preserve timestamps, permissions, ownership, links
> and so on, I can’t see a point in making a dd-style image, because isn’t
> it exactly as big as the original partition, even though 50% might be empty?

The advantage is that dd is way faster backing up or restoring - plus, it
includes the boot code.

The resulting image can also be compressed.


Cheers / Saludos,

Carlos E. R.
(from 11.2 x86_64 “Emerald” at Telcontar)

It does not include the boot code if he copies a partition (as was his intention).

On 2011-03-22 11:36, hcvv wrote:
> Both the -cp- and he -tar- methods will work OK (where tar offers the
> compression and yes the options to keep owner/timestamps are important).

To be precise (pedantic ;-)), tar does not compress by itself; the compression
is external. It has the problem that if there is corruption, you can
lose the entire tar.

There is an alternate method that first compresses each individual file, then
archives the resulting compressed files in a tar. This is safer: a
compression error or corruption only damages a file or two. YaST uses this
method (although YaST backup is inappropriate in this case).
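
A crude illustration of the idea with plain tools (run it on a scratch copy only, because gzip replaces each original file; the paths are placeholders):

cd /mnt/scratch-copy
find . -type f -exec gzip -9 '{}' \;
tar -cpf /media/usbdrive/backup-pergz.tar .

That way corruption costs you a file or two instead of the whole archive.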

There are compression formats, like rar, that include damage control
(forward error recovery). Unfortunately, it is not free and I’m unsure if
it saves all Linux attributes.

As for Clonezilla, I don’t know what method it uses.


Cheers / Saludos,

Carlos E. R.
(from 11.2 x86_64 “Emerald” at Telcontar)

I have never had a corrupted compressed file in more than 20 years of using different compression tools on Unix/Linux. And these things are done in such a routine, automated way by system managers that IMHO the standard compression tools can be trusted.

Earlier, tar did not have its compression option and one piped the tar output to the compression tool of one’s choice (and vice versa for extraction). tar’s man page now gives two options (-z and -Z) to choose between gzip and compress, but of course it is still possible to use another one by piping.
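
For example, to pipe through bzip2 instead (the paths are placeholders):

tar -cvpf - /etc | bzip2 > /media/usbdrive/etc.tar.bz2
bunzip2 -c /media/usbdrive/etc.tar.bz2 | tar -xvpf -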

rotfl…

@dwestf do you recall what sort of compression % you get on dd with gzip?

The question was not to me, but for what it is worth:
It would depend on the contents not only of the “used” part, but also of the “unused” part of the file system.
E.g. when the partition was written with all zeros before the file system was created, not much has happened since its creation (thus leaving the empty parts zeros), and not much is used by real data, you can imagine the compression rate! But when the file system has already been running a long time and all parts of it have already been used for data one or more times, the compression rate will be the same as the mean rate of compression in general.
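
One practical consequence: you can improve the compression of a dd image by filling the free space with zeros just before imaging. Something like this, with the file system mounted at /mnt/root as a placeholder (and remove the filler file afterwards):

dd if=/dev/zero of=/mnt/root/zerofill bs=1M
rm /mnt/root/zerofill

The first command is expected to stop with a “no space left on device” error; at that point the free space is all zeros and will compress to almost nothing.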