Looking for a backup solution

In the 15.3 repositories there are quite a few backup options, but I can’t decide on one. I used Duplicity for a while, but it filled the drive with incremental files that required the original full backup to be present, which meant I had to clear out the whole mess and start over every so often. I’m looking at storeBackup, but I’m afraid it will disappear in the future, as its web site is already gone.

I have four computers to back up and a Synology NAS to use. Synology claims their backup system supports Linux, but by Linux they mean Ubuntu (which is not Linux in my book). They do have an RPM file, but neither zypper, YaST, nor rpm will acknowledge it, so it’s not a solution.

SUSE must have something they use. I wonder what it is?

So. I’m looking for recommendations.

Bart

Grsync (with SSH connectivity) might work for you. See this thread (post #13 onwards)…
https://forums.opensuse.org/showthread.php/559542-Using-Wine-for-NAS-synchronisation?p=3065169#post3065169

Looking at your space problem (which I do not understand completely), I use a method based on the fact that, in every backup taken, files that are unchanged are hard links to the same file in the other backup instances.

I, e.g., keep up to 10 different backups. When a file changed between every one of them, there are 10 different versions of it (of course), but when a file didn’t change, there is only one version and it has 10 hard links, thus saving a lot of space.

Based on cp -al and rsync.
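A minimal sketch of that rotation (not my actual script), assuming the backups live under /backup and the source is /home – both paths are just placeholders:

#!/bin/bash
# Drop the oldest backup, then shift the rest up by one.
rm -rf /backup/backup.9
for i in 8 7 6 5 4 3 2 1 0
do
  [ -d /backup/backup.$i ] && mv /backup/backup.$i /backup/backup.$((i+1))
done

# Recreate the directory tree as hard links to the newest backup
# (on the very first run there is nothing to link to, hence the test) ...
[ -d /backup/backup.1 ] && cp -al /backup/backup.1 /backup/backup.0

# ... then let rsync replace only the files that are changed or new.
rsync -a --delete /home/ /backup/backup.0/

Because rsync replaces a changed file with a new one instead of overwriting it in place, the hard-linked copy in the older backups survives untouched.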

A ready-made product is rsnapshot (https://rsnapshot.org), which can be installed from the standard OSS repo.

It is a plain script, thus easy to adapt when you want that (that is what I did), but it is ready for use as delivered.

Of course, the first backup will take some time because each and every file must be copied. The next backup will first copy the directory tree, then make all the hard links, and after that rsync will copy only those files that are changed or new (and delete files that do not exist anymore – but remark, those files will still be in the older backups), so it is normally much quicker.

Synology, QNAP, & Co. tend to use a customized Linux on their boxes – which supports NFS.

  • You don’t mention which OS is running on the 4 boxes you want to back up – can you please indicate which OS(es) these boxes are running?

For the Redmond stuff, they have a Samba server in their boxes – for Apple, they have something else as a solution.
Back to Linux and, NFS …

  • What I tend to do is, to create a new directory tree in the NAS box, parallel to any user directories added by an administrator on the box.
  • A usable name for this new directory is “NFS” – owned by the NAS box’s administrator user – usually “admin”, even though that user is often disabled for security reasons.
  • The “NFS” directory on the NAS is added to the box’s NFS exports list – preferably as the one and only NFS export. It is usually a good idea to set up the NAS box’s NFS server to support all known NFS versions – that makes the openSUSE auto-mounter behave properly during the handshake when the NFS clients auto-mount the NAS box’s export.

I use this with a QNAP NAS and, the openSUSE auto-mounter usually negotiates an NFS v4 mount …
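As an illustration, the auto-mounter side of that on openSUSE can be as small as two map entries – the NAS hostname “nas-001” and the mount point “/mnt/NAS-001” here are assumptions, chosen to match the paths in the script further below:

# /etc/auto.master – hand the /mnt/NAS-001 directory to the automounter:
/mnt/NAS-001  /etc/auto.nas-001

# /etc/auto.nas-001 – the map: “NFS” appears as /mnt/NAS-001/NFS on demand:
NFS  -fstype=nfs4  nas-001:/NFS

After “systemctl enable --now autofs”, the export is mounted on first access and unmounted again after the idle timeout.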

  • I then create, per Linux user, a directory below the NAS box’s NFS directory – temporarily disable root_squash in the NAS box’s “exports” file to allow root access from one of the NFS clients. Using root from that NFS client, change the ownership, group and protections of the per-user directories in the NAS box’s NFS export directory. The NAS box doesn’t need to know about these users – it simply accepts the UID and GID presented by the NFS clients – you just need to make sure that none of the NAS box’s own users (needed for the Samba world) have a UID and/or GID which conflicts with the Linux NFS clients’ users …
  • The NFS export of the NAS box should never be accessible by the Redmond/Samba world.
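For illustration, the temporary root_squash step could look like this – the export path, network range and user name are placeholders:

# On the NAS – temporary /etc/exports entry, reverted afterwards:
/share/NFS  192.168.1.0/24(rw,sync,no_root_squash)

# On one NFS client, as root – create and hand over a per-user directory:
mkdir /mnt/NAS-001/NFS/bart
chown bart:users /mnt/NAS-001/NFS/bart
chmod 0750 /mnt/NAS-001/NFS/bart

The UID and GID of “bart” on the client are what end up on the NAS – which is why the warning above about conflicting UIDs/GIDs matters.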

Once the NAS box’s NFS exports have been set up, simply NFS auto-mount from the Linux boxes to the NAS.

  • The Linux users can then simply copy their files to be backed up to the NAS box or, use rsync to automate the backups.

The rsync commands I use from a batch file tend to look like this:


#!/bin/bash

systemHostName=$(hostname --short)
effectiveUserID=$(whoami)

#
# Cannot use -a --archive: both imply -rlptgoD
# -g --group "preserve group" is possibly not supported by a QNAP TS-131P.
#

if [[ -d /mnt/NAS-001/NFS/$effectiveUserID/$systemHostName ]]
then

  echo "** .config/:"
  /usr/bin/rsync -rlpto --specials --backup --update --8-bit-output --omit-dir-times --omit-link-times --one-file-system --whole-file --progress --stats --human-readable /home/$effectiveUserID/.config/ /mnt/NAS-001/NFS/$effectiveUserID/$systemHostName/dot.config

  echo "** .local/share/:"
  /usr/bin/rsync -rlpto --specials --backup --update --exclude=gegl-0.?/*** --exclude=gvfs-metadata/*** --exclude=flatpak/*** --8-bit-output --omit-dir-times --omit-link-times --one-file-system --whole-file --progress --stats --human-readable /home/$effectiveUserID/.local/share/ /mnt/NAS-001/NFS/$effectiveUserID/$systemHostName/dot.localShare

  echo "** .mozilla/:"
  /usr/bin/rsync -rlpto --specials --backup --update --8-bit-output --omit-dir-times --omit-link-times --one-file-system --whole-file --progress --stats --human-readable /home/$effectiveUserID/.mozilla/ /mnt/NAS-001/NFS/$effectiveUserID/$systemHostName/dot.mozilla

  echo "** User Files:"
  /usr/bin/rsync -rlpto --specials --backup --update --exclude=.* --exclude=.*/*** --exclude=public_html --8-bit-output --cvs-exclude --omit-dir-times --omit-link-times --one-file-system --whole-file --progress --stats --human-readable /home/$effectiveUserID/ /mnt/NAS-001/NFS/$effectiveUserID/$systemHostName/home

  echo "** User .* Files:"
  /usr/bin/rsync -lpto --specials --backup --update --8-bit-output --omit-link-times --one-file-system --whole-file --progress --stats --human-readable /home/$effectiveUserID/.bashrc /mnt/NAS-001/NFS/$effectiveUserID/$systemHostName/home
  /usr/bin/rsync -lpto --specials --backup --update --8-bit-output --omit-link-times --one-file-system --whole-file --progress --stats --human-readable /home/$effectiveUserID/.profile /mnt/NAS-001/NFS/$effectiveUserID/$systemHostName/home
  /usr/bin/rsync -lpto --specials --backup --update --8-bit-output --omit-link-times --one-file-system --whole-file --progress --stats --human-readable /home/$effectiveUserID/.signature /mnt/NAS-001/NFS/$effectiveUserID/$systemHostName/home
  /usr/bin/rsync -lpto --specials --backup --update --8-bit-output --omit-link-times --one-file-system --whole-file --progress --stats --human-readable /home/$effectiveUserID/.emacs /mnt/NAS-001/NFS/$effectiveUserID/$systemHostName/home
  /usr/bin/rsync -lpto --specials --backup --update --8-bit-output --omit-link-times --one-file-system --whole-file --progress --stats --human-readable /home/$effectiveUserID/.vimrc /mnt/NAS-001/NFS/$effectiveUserID/$systemHostName/home
  echo ""

fi

If you really want to, you can also explicitly back up Vim’s backup and undo files – I usually place them in ‘~/.vim/tmp/’ …
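(For reference, a sketch of the ‘~/.vimrc’ lines that put them there – assuming that directory already exists:)

" Keep Vim's backup, swap and undo files in one place:
set backupdir=~/.vim/tmp//
set directory=~/.vim/tmp//
set undodir=~/.vim/tmp//
set undofile

The trailing ‘//’ makes Vim encode the full path into the file names, so files with the same name in different directories don’t collide.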

For Digital Camera files (only one copy per camera per NAS is needed) I tend to use an rsync command like this –


  echo "** Photos:"
  rsync -rlpto --specials --exclude=*.db --exclude=.directory --exclude=.dtrash/ --8-bit-output --cvs-exclude --omit-dir-times --omit-link-times --one-file-system --whole-file --progress --stats --human-readable --ignore-existing /home/$effectiveUserID/Bilder/ /mnt/NAS-001/NFS/$effectiveUserID/Photos

Having written all that, especially for the case of photos, I’m tending to head off in the direction of large USB drives connected only when backing up …

  • I’ve never, ever, backed up sensitive files – such as KWallet files or, Home Banking files – on a NAS …

Hi
Have you investigated running a docker container on the NAS?

WOW! So much to absorb! Thank you all!

@Malcomlewis: I had not thought about that! I could run a copy of Ubuntu and load Synology’s app, but that would require me to SSH into each of my computers and then store the files on the NAS, right? Never messed with Docker. I should.

@dcurtisfra: Now, this looks interesting! Written in bash, completely customizable, and I can set it up with cron so I don’t forget to back up. You brought up a point I had not considered, though: not backing up the KeePass files and banking stuff. Both of those would be super important to have. Can I not include encryption in the backups? I am looking at having the backups on my NAS backed up to Synology’s C2 cloud, encrypted of course.

Oh! All my computers are running openSUSE Leap 15.3 of course! (BIG grin)

Bart

Hi
Well, the tools you have used in the past have SSH capabilities, so you could trigger them with a systemd service/timer. Have a look at minio; it can encrypt etc. I’m currently working with it for kubernetes backups, if I could just get my self-signed certificates to work (grrr).
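A minimal sketch of such a service/timer pair, run as a user unit – the script path and unit names are made up for illustration:

# ~/.config/systemd/user/nas-backup.service
[Unit]
Description=rsync backup to the NAS

[Service]
Type=oneshot
ExecStart=%h/bin/backup-to-nas.sh

# ~/.config/systemd/user/nas-backup.timer
[Unit]
Description=Run the NAS backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable it with “systemctl --user enable --now nas-backup.timer”; “Persistent=true” catches up on runs missed while the machine was off.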

Locally I use cronopete on my desktop to back up directly to a USB device at present; I will probably just connect my StarTech NAS via USB, but that’s down the track with things to do :slight_smile:

Might be worth checking out rsnapshot considering the setup you have.

I’ve been using Unison for many years and I’m quite fond of it. It’s not under active maintenance anymore but rock-stable; it has a GUI but is much faster in “batch mode”.

I have been using Unison for many, many years, backing up across the network (if both machines have Unison, for example, it is faster, because the Unison server on both ends runs at the same time, probing each disk individually, then comparing the two db files). Unison works over SSH. I just open Unison, click on the profile for the other machine, and let it do its thing. I presume you could also cron it.
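For anyone curious, a Unison profile is just a small text file under ‘~/.unison/’ – a sketch, with the host and path names invented:

# ~/.unison/laptop.prf
root = /home/bart
root = ssh://laptop//home/bart

# Only the trees that matter:
path = Documents
path = Photos
ignore = Name *.tmp

# No questions asked – suitable for cron:
batch = true

Running “unison laptop” then syncs both roots non-interactively.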

Unison has NEVER failed me in all these years, and sure has been a “Wheww! It’s there!” source for when my fingers or system slips.

BTW: Shouldn’t this be in a different forum? Admins, once you check, you are permitted to delete this post if you wish. :stuck_out_tongue_winking_eye:

Moving to Applications…

I put it where I did as I wasn’t looking for support for a specific application. I thought I did good. I guess not.

Bart

@dcurtisfra

I’m pretty chuffed about your system and excited to try to create my own using the basis you provided. I will certainly learn a bunch of things.

The question I have though, is concerning the end result. What is the structure of the directory where the backup is stored? I assume there will be one big pile of stuff after the initial backup, followed by smaller files representing changed and added files.

What sort of maintenance should I expect concerning keeping these backups? Do all of the files after the first one have to be kept in order to be able to restore a single file? If I removed the original file and made a complete backup again, would that mean all the files after the original would be of no use?

And, what about restoring these files?

Bart

Yes chit-chat was probably ok for general discussion, but applications is appropriate too…lots of alternative solutions available here as you can see. :wink:

Hey there, Bart. Nice to see you are still here. And, no biggie, LOL.

The structure of the backup directory is a mirror of the user’s directory tree.

Because rsync creates a mirror of the client’s directory structure on the server, each file can be restored individually.
Please take a look at the rsync option “–backup” –

-b, --backup
With this option, preexisting destination files are renamed as each file is transferred or deleted. You can control where the backup file goes and what (if any) suffix gets appended using the --backup-dir and --suffix options.

Note that if you don’t specify --backup-dir, (1) the --omit-dir-times option will be forced on, and (2) if --delete is also in effect (without --delete-excluded), rsync will add a “protect” filter-rule for the backup suffix to the end of all your existing excludes (e.g. -f "P *~"). This will prevent previously backed-up files from being deleted. Note that if you are supplying your own filter rules, you may need to manually insert your own exclude/protect rule somewhere higher up in the list so that it has a high enough priority to be effective (e.g., if your rules specify a trailing inclusion/exclusion of ‘*’, the auto-added rule would never be reached).
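If you would rather collect the renamed files per run instead of leaving suffixed copies next to the originals, ‘--backup-dir’ can be pointed at a dated directory – a sketch, reusing the paths from the script above:

  /usr/bin/rsync -rlpto --specials --backup --backup-dir=../Attic/$(date +%F) --update /home/$effectiveUserID/.config/ /mnt/NAS-001/NFS/$effectiveUserID/$systemHostName/dot.config

A relative ‘--backup-dir’ is resolved against the destination, so the replaced files land under ‘…/$systemHostName/Attic/<date>/’, mirroring the source tree.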

It’s NFS – the UNIX® rule applies – “Everything is a file.”

  • The ‘/mnt/NAS/NFS/’ directory (mount point) is a directory exactly the same as any other directory on the user’s system – it ain’t a “shared drive” – UNIX® users never, ever, “see” drives – they only “see” and use directories … Only the UNIX® system administrators know about drives and other devices but, even they only ever write files to directories – never to devices. Yes, yes, administrators have tools to deal with devices and, system programmers have system calls which can write and read directly to and from devices …

BTW – my personal view of small system backups –

  • Encryption is fine if, you’re worried about other people physically breaking into your location and attempting to access your files.

It’s OK if, and only if, you’re absolutely certain that, you’ll never, ever, lose the encryption key.
If the encryption key is lost, for what ever reason, the backup is worthless.
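(If you do decide to encrypt – say, for the copy that goes off to Synology’s C2 cloud – a symmetric GnuPG pass over a tar stream is one simple approach; a sketch, with the paths invented:)

# Pack and encrypt – gpg prompts for a passphrase:
tar -cf - /mnt/NAS-001/NFS/bart | gpg --symmetric --cipher-algo AES256 --output bart-backup.tar.gpg

# Restore:
gpg --decrypt bart-backup.tar.gpg | tar -xf -

Lose the passphrase and the point above applies in full: the backup is worthless.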

  • Adopt the large system approach to serious backup storage – physically distance your backup media from your system’s physical location.

Large systems often have backup media storage locations at a distance of at least 50 km from the system itself – if the system is destroyed, the company’s files are still available to keep the company up and running.
One can argue that, “Cloud” solutions provide this level of security but, what if the “Cloud” servers are actually located next door to your system’s location? «A possible argument for Cloud servers physically located on another continent … »

  • With the current price of rotating disks, large capacity (Terabyte) backup media is affordable for small systems – no current need to consider magnetic tape as a backup medium for small systems.

There’s also, possibly, no need for compressed archives – except for those compressed file archives produced by – for example – e-Mail clients.
The Archive/Backup volumes (the term began with magnetic tape Archives/Backups) are the large capacity disks.

  • Is RAID a solution?

Yes, it protects against physical Disk failures but, it doesn’t protect against data/file loss – if a file is deleted or, corrupted or, encrypted then, that file property is mirrored across the RAID.

  • Is a NAS a viable backup solution?

A NAS is located on the network and, therefore, vulnerable.
A rotating disk in a USB enclosure is often only attached to the system during the backup procedure.