I’m currently using zip to a USB thumb drive to back up my OS/2; that’s adequate but not ideal. I could extend the technique to using tar for my Linux partitions, but I’d like to take a look at more sophisticated options. Ideally, I’d like a package that could
Back up entire partitions at medium or long intervals, with compression
Do incremental backups at shorter intervals
Automatically figure out the correct incremental or full backup for restoring a file
Manage external backup media
Not require Internet access
Lend itself to scripting
A GUI would be nice but not essential
Backing up the OS/2 (FAT32, JFS and bootable JFS) partitions would be nice but not essential.
Deja Dup has some but not, AFAIK, everything that I want. Any advice or recommendations? Thanks.
Have a look at luckybackup. It supports various profiles, pre-backup scripts, post-backup scripts, incremental backups, selective restore, etc. Its interface looks quite simple, but the option to run plain bash scripts makes it a real beast.
If scripting matters and a GUI is optional, then take a look at “rsync”.
But, in the case of FAT32, it’s a bit picky – the problem is timestamp accuracy – “rsync” uses a 1-second time-difference window while FAT has a 2-second timestamp resolution – you’ll have to make use of the “--modify-window” option to work around the FAT file issues …
I can’t find any JFS issues with “rsync” – it may very well be OK …
“luckyBackup” seems to be using “rsync” «under the hood» …
“rsync”, called from scripts, works quite OK for backups to a local NAS via NFS and for backups to a DVD-RAM …
The problems with luckyBackup are that it expects the same target every time you run it and that it doesn’t support compression. I want multiple-generation off-site backup; that means that it has to know what files it wrote to a data key (DVD is too small) that it has no access to.
I tend to work on a high water mark basis; once I get used to something, it’s painful to revert to something less functional. In the mainframe world, you can recover from fat finger fumble by restoring a previous version of a file, and the backup software knows what tape volume contains each backup. I’m looking for equivalent functionality in Linux. I don’t need the ability to create duplicate backups, although that would be nice.
BTW, FAT really isn’t an issue; there are only two small partitions that I can easily zip or tar instead of taking incremental backups.
It uses rsync under the hood, which means that only what is needed is transferred (the first backup takes a lot of time, but the following ones normally do not).
When it creates a new backup increment it does so by using cp -al, so all files are hard-linked to the already existing increment. Thus each file’s contents exist only once on the storage. After the rsync that follows, only those files that changed or are new take real extra space.
While there is a tool to configure it (how many increments, and also e.g. daily increments inside a loop of weekly increments, etc.), I personally took the essence and wrote my own script based on the principles.
Edit: it seems that it is now rewritten in Perl. Which still means you can read it.
Edit 2: it is in the OSS repo.
I assume you’re talking about the IBM world – coming from the DEC world «mini-computer», yes, VAX/VMS and the 16-bit OS’s such as RSX-11/M(+) and RSTS/E also had backup and restore utilities which behaved pretty much as you described – with backup volumes … I suspect that, the 36-bit TOPS-10 and TOPS-20 OS’s also had similar utilities, which may well have influenced the 16-bit and 32-bit products.
But we’re in the UNIX® world, and therefore the equivalent is “tar”, which also handles (tape) volumes and compression – but not as nicely as the mainframe utilities …
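For what it’s worth, GNU tar can also do the full-plus-incremental cycle mentioned earlier in the thread, via its snapshot file. A sketch with invented paths:

```shell
mkdir -p /tmp/tar_src /tmp/tar_out
echo "one" > /tmp/tar_src/a.txt

# Full backup: the snapshot file records what was archived.
tar -czf /tmp/tar_out/full.tar.gz \
    --listed-incremental=/tmp/tar_out/snapshot \
    -C /tmp tar_src

# Only files changed since the snapshot land in the increment.
echo "two" > /tmp/tar_src/b.txt
tar -czf /tmp/tar_out/incr.tar.gz \
    --listed-incremental=/tmp/tar_out/snapshot \
    -C /tmp tar_src
```

Restoring means extracting the full archive, then each increment in order – which is exactly the bookkeeping the mainframe utilities do for you.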
On the other hand, disk space is currently somewhat cheaper than what it used to be and therefore, for personal/home use, why bother with (compressed) backup volumes?
Except of course, if you’re using magnetic tape (currently, AFAICS, only cartridges) as your backup medium.
DEC’s VAX/VMS and RSX-11M used “Files-11” as the file system, which featured file versioning – any file opened for writing created a new version of the file on closure – unless one was heavily into RMS programming … «The file system had a built-in database functionality …»
Insecure users used to copy a working file to itself to always keep a copy of the file, which protected oneself against accidental deletion – the default “delete” command only deleted the newest version of the file …
Up to 32,767 (2¹⁵ − 1) versions of any given file could be held on disk.
Yes, my background is heaviest on MVS, although I’ve used other software from CDC, DEC, IBM and UNIVAC. The way VMS supported multiple generations of files was, IMHO, much cleaner than the way IBM did in OS/360 et al.
No, tar is the equivalent of various single volume dump programs, e.g., DFDSS, FDR, but it is not the equivalent of, e.g., ABR, DFHSM.
I’m currently using 32 GB USB thumb drives, and the compressed zip files take up 20 GB of that. I’m keeping off-site backups at two locations, and would like to do the same for Linux. I’d like to also keep incremental backups at both locations as well. So the media cost is more than for a simple mirror.
I don’t currently have tape drives, although if I could find supported (including creating AWS files) drives at a reasonable price to read old 3420 (open reel) and 3480 (cartridge) tapes I’d be interested. At today’s prices, USB thumb drives look more economical than tape for backup, at least for a home user, although tapes (and I don’t mean QIC-80) might be more reliable.
Can’t speak for your location but, around here 4 TB external HDD with USB 3.0 (WD, Toshiba, Seagate) are available for about € 100.
512 GB USB 3.0 sticks are between € 70 and € 150.
1 TB SD card: € 485 – 512 GB SD cards between € 100 and € 285.
The price of the external HDDs with USB is pretty much unbeatable – I have a cheap 500 GB one on the wall alongside, and plugged into, my DSL router for, amongst other things, FAX reception – it’s been running 24×7 for about 8 or 9 years now – reliability doesn’t seem to be an issue …
My personal scheme for private, personal, “at home” backups follows these rules:
1. “The photographer’s rule”: «Don’t delete the photos on the camera’s card until there are verified copies on at least 3 physical disks.»
2. “My personal rule”: «Make sure that critical items are archived in files with a date stamp in the file name, and make sure that the archives are backed up on various external disks. Copies of non-critical items should exist on at least one external disk.»
For private use, I prefer “rsync” file copies rather than backup sets.