Hi everyone,
I am using rsync to keep duplicates of my files on an external hard disk (in case something goes wrong).
I would like to ask, though, whether there is a way to couple rsync with tar and bzip2 so that all the copied files at the rsync destination are packed and compressed together into a single file.
On 2013-02-05 12:36, alaios wrote:
>
> Hi everyone,
> I am using rsync as a way of having duplicates of my files in an
> external hard disk (in case something goes wrong).
>
> I would like to ask you though if there is a way to couple rsync with
> tar and bzip2 in a way that all the copied files at the rsync destination
> are packed and compressed together to a single file.
>
> Would that be possible?
You can compress the copy, of course, but then a second rsync run to
refresh or update the backup would copy absolutely everything again,
because the compressed files or archives are not recognized as matching the originals.
So the answer is “no”.
If Linux had a compressed read/write filesystem (as NTFS is in Windows)
then we could use that transparently. But, quite unbelievably, Linux
does not have a compressed, r/w filesystem.
You can, on the other hand, use a program like “rdiff-backup”. The
current backup is a plain rsync copy; the older ones are stored as rdiffs (reverse deltas), which should be smaller.
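For reference, a minimal rdiff-backup invocation could look like this (the paths are only placeholders for illustration):

rdiff-backup /home/user/Documents /mnt/backup/Documents
# restore the state from 10 days ago into a scratch directory
rdiff-backup -r 10D /mnt/backup/Documents /tmp/Documents-restored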
–
Cheers / Saludos,
Carlos E. R.
(from 12.1 x86_64 “Asparagus” at Telcontar)
On 2013-02-05 14:56, alaios wrote:
>
> Thanks a lot for the answer.
> Would a pipelining work like
>
> rsync -rav -e ssh /home/user/Documents/Documents-Sensitive/
> user@server:/home/user/Documents/ | tar -jvf myfile.tar.bz2 ?
No, nothing would.
Rsync needs to compare the source and destination files, and they have
to be exactly the same: at least the same name and timestamp, and it can
also verify the checksum. You cannot alter the copy in any way.
Only a transparent, compressed, r/w filesystem would work, and that does
not exist in Linux, only in Windows. Maybe btrfs, perhaps.
–
Cheers / Saludos,
Carlos E. R.
(from 12.1 x86_64 “Asparagus” at Telcontar)
I guess, though, that I can split it with a bash script,
with the first command doing the rsync and the second one taking the destination folder and compressing it into a single file.
alaios wrote:
> I guess though that I can split it with a bash script
> with the first command doing the rsync and the second one taking the
> destination folder and compressing it to a single file.
If you want to do that, why would you use rsync in the first place? Why
not just use tar?
Take a look at luckybackup. It has advanced options, where you can add a tar command.
But, have you considered using tar all the way?
tar -cjvf /media/Backup/`date +%d%m%y`.tar.bz2 /home/knurpht/Test
This generates a compressed tar file from the Test folder in my home directory, with a datestamp in the name of the generated tarball. LuckyBackup allows you to run a command like this one before and after the actual rsync operation.
If you want to stick to the CLI, you could create a bash script that does the tarring before or after rsync, along the lines of the sketch below.
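A minimal sketch of such a script, assuming the external disk is mounted at /mnt/backup (a placeholder path, adjust to your setup):

#!/bin/bash
# Sketch only: sync first, then pack the synced copy into one dated archive.
SRC=/home/user/Documents
DST=/mnt/backup/Documents        # assumed mount point of the external disk
rsync -a --delete "$SRC/" "$DST/"
tar -cjf /mnt/backup/documents-$(date +%d%m%y).tar.bz2 -C "$DST" .

The date format just mirrors the one used in the tar example above.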
> Hi everyone,
> I am using rsync as a way of having duplicates of my files in an
> external hard disk (in case something goes wrong).
>
> I would like to ask you though if there is a way to couple rsync with
> tar and bzip2 in a way that all the copied files at the rsync destination
> are packed and compressed together to a single file.
>
> Would that be possible?
As Carlos has said, not really, no - because rsync depends on comparing
the source and destination files, so if you change the data in transit,
then the comparison will fail.
But it sounds like you’re focused on a “how” rather than a “what”. It
sounds like what you want to do is create a compressed backup. That is
something that can be done, but not with the tools you’re specifying.
A couple of possibilities:
1. Use the tar command to create incremental/differential backups with a
weekly full-backup schedule. tar can compress using bzip2 or gzip, and by
making incremental or differential backups periodically and a full
backup weekly (for example), you can accomplish this sort of thing (see the tar sketch after this list).
The difference between incremental and differential has to do with how
much data is backed up: whether each partial backup is compared against the
last full backup alone (differential) or against the last full backup plus the partial backups made since (incremental).
2. You could create a compressed filesystem using FUSE - something like
compFUSEd would do this - it creates a layer over an existing filesystem
that stores the files in a compressed format (from reading the brief
description, it sounds like what encfs does for encryption: you see the
files in the directories, but they’re stored compressed and need to be
accessed through the FUSE layer). That would be an efficient option for
storage, because otherwise you’d have to allocate a file and mount that
as a compressed filesystem using FUSE or a loopback mount.
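A rough sketch of the tar-based approach from option 1, using GNU tar’s --listed-incremental snapshot file (all paths are placeholders):

# Weekly full backup; the .snar snapshot file records what was saved.
tar -cjf /mnt/backup/full-$(date +%Y%m%d).tar.bz2 \
    --listed-incremental=/mnt/backup/docs.snar /home/user/Documents

# Daily run: only files changed since the previous run end up in the archive.
tar -cjf /mnt/backup/incr-$(date +%Y%m%d).tar.bz2 \
    --listed-incremental=/mnt/backup/docs.snar /home/user/Documents

To get differential rather than incremental behaviour, keep a copy of the snapshot file taken right after the full backup and reuse that copy for every partial run.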
Duplicity incrementally backs up files and directories by encrypting tar-format volumes with GnuPG and uploading them to a remote (or local) file server. In theory many remote backends are possible; right now local, ssh/scp, ftp, rsync, HSI, WebDAV, and Amazon S3 backends are written. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Currently duplicity supports deleted files, full unix permissions, directories, symbolic links, fifos, etc., but not hard links.
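Basic duplicity usage looks roughly like this (the target URL and paths are just examples):

duplicity /home/user/Documents file:///mnt/backup/duplicity
duplicity file:///mnt/backup/duplicity /tmp/restored-documents

The first command backs up, the second restores; by default duplicity encrypts with GnuPG and asks for a passphrase unless you pass --no-encryption.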
On 2013-02-05 17:38, Jim Henderson wrote:
> 2. You could create a compressed filesystem using FUSE - something like
> compFUSEd would do this - it creates a layer over an existing filesystem
Compressed filesystems interest me a lot, so I googled it. I found the
home web page, and the last update was in 2007. I can’t consider that a
viable alternative. Quite unfortunate.
–
Cheers / Saludos,
Carlos E. R.
(from 12.1 x86_64 “Asparagus” at Telcontar)
The software search seems to be broken.
I found both zfs-fuse and fusecompress (a FUSE-based filesystem with transparent compression) in the main OSS repos when searching in YaST.
I would also rather not mess with a compressed filesystem that is no longer maintained.
Another idea for my backups, since my hard disk is quite small (about 200 GB), is to get an external hard disk, sync my files there, and just keep plain duplicates of the files.
My question then is: which filesystem for an external hard disk of 3 TB?
Sorry that I did not make it clear.
I only care about it being accessible from Linux. These are Linux files and I would not risk saving them on NTFS or FAT; I had serious problems with that in the past!
How can I, through YaST, mount the hard disk at every startup and make it accessible only to root? I guess it is good to have rsync running with root permissions, writing directly to the hard disk.
On 2013-02-07 11:16, alaios wrote:
>
> vazhavandan;2525093 Wrote:
>> If you want to use this on other Oses you may need to use a compatible
>> file system like FAT
>> Otherwise we go with ext4?
>
> Sorry that I did not made it clear.
> I only care being accessible from linux. These are linux files and I
> would not risk saving in ntfs or fat.
I’ll just mention that if you use ext4 on USB pen drives, it is
recommended not to create a journal. Use this:
mke2fs -t ext4 -O ^has_journal /dev/sdf1
> How I can, through Yast mount the hard disk always at startup and make
> it accessible only from root? I guess that is good to have the rsync
> running by root permissions writing directly on the hard disk.
Just make the appropriate entry in fstab and a mount point under /mnt.
I’m assuming pluggable external media, so use the option “nofail” too…
–
Cheers / Saludos,
Carlos E. R.
(from 12.1 x86_64 “Asparagus” at Telcontar)
> Yes, that is an external Western Digital 3TB hard disk. Can you just
> give me a bit of background on the two options you are suggesting?
You are not using a pen drive, so the first suggestion does not apply.
The second one is just the traditional manual mounting of a filesystem,
as explained in any Linux/Unix book. Create an empty directory, for
example /mnt/example, and create an entry in fstab, for example:
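Something along these lines (the UUID is a placeholder; get the real one with blkid):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/example  ext4  defaults,nofail  0  2

To keep it root-only, chmod 700 /mnt/example after mounting; that sets the permissions of the filesystem root and they persist across remounts, so rsync run as root can write there while ordinary users cannot enter it.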