openSUSE 13.1: How do I back up my Btrfs partitions to a file on another filesystem?

I’ve gotten used to using Image for Linux, Clonezilla, and others. I attempted to back up my partition via Clonezilla,
but I got a superblock error. I tried partclone.btrfs on a live CD and got the same error.

I’m new to the world of Btrfs, so I’m sure I am missing something.

I experimented with “btrfs”, then went back to “ext4”. I’ll be looking at other responses in this thread.

I do my backups with “dar”. That’s a command-line utility, a bit like “tar”. I back up to an external drive. You may need to install “dar” from the repos, since it is not part of a standard install. It backs up files and directories rather than complete file systems. If you have to recover, you would probably need to go into rescue mode to reinstall grub. But apart from that, it does the job. And since it is a file-level backup, it should not notice that you are using “btrfs”.
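For reference, a minimal “dar” invocation might look like the sketch below. The mount point /mnt/external, the archive basename, and the restore directory are made-up examples; check the options against “man dar” before relying on them.

```shell
# Sketch: back up /home into a dar archive on an external drive.
# dar appends slice numbers itself, producing e.g. home_backup.1.dar.
dar -c /mnt/external/home_backup -R /home

# Restore later into a (hypothetical) empty directory:
dar -x /mnt/external/home_backup -R /restore
```

“-c” creates an archive, “-x” extracts one, and “-R” sets the root directory the file tree is taken from or restored to.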

On 2013-12-22 23:26, klepto wrote:
>
> I’ve gotten used to using Image for Linux, Clonezilla, and others. I
> attempted to backup my partition via Clonezilla
> but I got a superblock error. I tried partclone.btrfs on a live cd and
> got the same error.
>
> I’m new to the world of Btrfs so I’m sure that I am missing something
> for sure.

My guess is that these utilities try to be clever by skipping unused disk blocks, but btrfs is simply
not supported. You can instead do an image copy with plain “dd”, which copies everything byte by
byte. You need free space bigger than the partition. It is possible to compress it later with gzip or
similar.


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” (Elessar))

Yes, dd could get the job done. I also saw that you can turn a USB device into a RAID node and duplicate everything there.
I am probably never going back from Btrfs, as my days of rsyncing and rdiff-ing are done. Copy-on-write is where it’s at; it reminds me of the
good ole/bad ole days of the Volume Shadow Copy Service from my Windows days. Btrfs is much more robust and easier to manage.

There is also “btrfs send”, which is somewhat similar to the old “dump”; as it is based on snapshots, it can be used live and can also create incremental streams (between two different snapshots).
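A sketch of what that looks like in practice, assuming a btrfs filesystem mounted at /mnt/btrfs and a backup location at /backup (both hypothetical paths; this needs root):

```shell
# Take a read-only snapshot and stream it to a file:
btrfs subvolume snapshot -r /mnt/btrfs /mnt/btrfs/snap1
sync
btrfs send /mnt/btrfs/snap1 > /backup/full.btrfs

# Later: an incremental stream containing only the changes
# between snap1 and a newer snapshot snap2.
btrfs subvolume snapshot -r /mnt/btrfs /mnt/btrfs/snap2
btrfs send -p /mnt/btrfs/snap1 /mnt/btrfs/snap2 > /backup/incr.btrfs

# Restoring goes through "btrfs receive" into another btrfs filesystem:
# btrfs receive /mnt/other < /backup/full.btrfs
```

Snapshots must be read-only (“-r”) to be sent, and “-p” names the parent snapshot the incremental stream is computed against.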

On 2013-12-23 02:16, klepto wrote:
> days. Btrfs is much much more robust and easier to manage.

Robust it is not. I know how to reliably crash it just by writing normal files.


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” (Elessar))

You’ll have to tell me how that is done. I’ll settle for dd, but I’ll do some research on other means that are less time consuming.
dd with bzip2 has to be soul-crushingly slow.

On 2013-12-23 05:26, klepto wrote:
>
> You’ll have to tell me how that is done. I’ll settle for dd but do some
> research on other means that are less time consuming.
> dd with bzip2 has to be soul crushing slow.

dd is fast; it runs as fast as your hardware can go and is not CPU bound.

Let’s assume your btrfs partition is on device /dev/sdXY, and your storage place is /backup. The
command would be:


dd if=/dev/sdXY  of=/backup/image_of_sdXY_made_on_date bs=100M

or


cp /dev/sdXY  /backup/image_of_sdXY_made_on_date

I don’t know of a way to compress it on the fly without first creating an uncompressed image. Or it
doesn’t occur to me.

I might write a script to create several smaller chunks and compress them. Not today, but it is
doable. Maybe it exists already.
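A chunked, compressed image is in fact a one-liner with “split”. The sketch below uses a small scratch file in place of the partition so it can be tried safely; for the real thing you would read from /dev/sdXY as root.

```shell
# A 4 MiB scratch file stands in for the partition:
dd if=/dev/urandom of=src.img bs=1M count=4 2>/dev/null

# Compress on the fly and cut the stream into 1 MiB chunks
# (backup.gz.part_aa, backup.gz.part_ab, ...):
dd if=src.img bs=1M 2>/dev/null | gzip | split -b 1M - backup.gz.part_

# Restore: concatenate the chunks in order and decompress.
cat backup.gz.part_* | gunzip > restored.img
cmp src.img restored.img && echo "round trip OK"
```

No temporary uncompressed copy is ever written; the chunks together are the compressed image.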


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” (Elessar))


dd if=/dev/sdXY bs=100M | bzip2 > /backup/image_of_sdXY_made_on_date

Replace bzip2 with your favorite compression program.
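The restore direction is the same pipeline reversed; a sketch, assuming the image was made with bzip2 as above (careful: “of=” overwrites the target partition, so triple-check the device name):

```shell
# Decompress the saved image and write it back onto the partition:
bunzip2 < /backup/image_of_sdXY_made_on_date | dd of=/dev/sdXY bs=100M
```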

On 2013-12-23 14:26, arvidjaar wrote:
>
> robin_listas;2610838 Wrote:
>>
>>
>> I don’t know of a way to compress it on the fly
>
>
> Code:
> --------------------
>
> dd if=/dev/sdXY bs=100M | bzip2 > /backup/image_of_sdXY_made_on_date
> --------------------
>
>
> Replace bzip2 with your favorite compress program.

Question. I don’t know how pipes are handled internally (Linux had not been invented yet when I studied,
so there are things I don’t know :wink: ). I assume that the output of the first program is saved to a
file, and this is passed as input to the second program; but I don’t know if that file is the entire
huge file, or just a chunk of the file, which grows on one end and is deleted on the other end
as soon as the other program picks it up.

Do you understand what I mean? I don’t know whether the whole process needs entire gigabytes of temporary
disk space to work or not. I don’t have this clear.


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” (Elessar))

No, pipes are handled entirely in memory. I do not know the pipe buffer size in Linux; in other OSes it was relatively small (on the order of several KB).
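This is easy to check from the shell. PIPE_BUF (the size up to which pipe writes are guaranteed atomic) is 4096 bytes on Linux, and the kernel’s in-memory pipe buffer is typically 64 KiB; either way, data streams through in small chunks with no temporary file anywhere:

```shell
# The atomic write size POSIX guarantees for pipes (4096 on Linux):
getconf PIPE_BUF /

# 8 MiB streamed straight through a pipe; nothing ever touches disk:
dd if=/dev/zero bs=1M count=8 2>/dev/null | wc -c
```

The second command prints 8388608 even on a full filesystem, which is why the dd | bzip2 pipeline above needs no gigabytes of scratch space.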

On 2013-12-23 15:26, arvidjaar wrote:
>
> robin_listas;2610843 Wrote:
>>
>> Question. I don’t know how pipes are handled internally (Linux was not
>> invented yet when I studied,
>> so there are things I don’ know :wink: ). I assume that the output of the
>> first program is saved to a
>> file, and this is passed as input to the second program
>
> No, pipes are handled entirely in memory. I do not know pipe buffer size
> in Linux; in other OSes it was relatively small (in order of several
> KB).

Ah, good. I was mistaken, then.


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” (Elessar))