So I use a second SSD to mirror my main SSD, so I never lose root or home. I made a backup every couple of weeks. Unexpectedly, the backup SSD went poof and disappeared. Now my openSUSE installation dumps me to the command line in recovery mode. Why so fragile? What do I need to do to make it boot? The main drive is fine, but I’ve got no backup, so I need to step carefully.
“A start job is running for dev-disk-by\x2duuid-…” (90 second timeout)
“Give root password for maintenance”
Crucial M4, if you’re curious. Apparently it has an earned reputation for pulling this stunt.
Whenever the default boot fails, the system will boot into the backup. You may want to test by shutting down the system, unplugging one of the SSDs, and booting again.
“Mirror” how? Literal (strict) mirroring creates clones of partition labels and UUIDs, which, generally speaking, is a serious problem. That boot went poof when the mirror croaked makes me think you might either have been running on the mirror while thinking you were running the original, or at least booting from the mirror. You might want to consider formal mirroring: RAID1.
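If you do go the RAID1 route, a minimal sketch with mdadm (the device names here are assumptions; verify yours with lsblk first, and note that creating the array wipes both members):

# Assumed devices /dev/sdX and /dev/sdY -- check with lsblk, this is destructive!
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
# Watch the initial sync
cat /proc/mdstat
# Record the array so it assembles at boot
mdadm --detail --scan >> /etc/mdadm.conf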
I hope you see that this problem is just a disk (or one or more disk partitions) that was defined in /etc/fstab to be mounted at boot, and that mount did not succeed. It has nothing to do with the use you make of them (some sort of backup).
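If you want the machine to keep booting even when that disk is absent, the usual fix is the nofail mount option in /etc/fstab. A sketch; the UUID and mount point are placeholders, use the ones from your own fstab:

# placeholder UUID and mount point -- substitute your own
# nofail: boot continues if the device is missing
# x-systemd.device-timeout=10: wait 10 s instead of the default 90 s
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/backups ext4 defaults,nofail,x-systemd.device-timeout=10 0 2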
Yeah, I’m using the word “mirror” fast and loose; there’s no actual drive mirroring. I dd the first section of /dev/sda, which covers GPT + EFI + root, through gzip into a file (while booted from a USB stick). On the fly, with the system booted, I dump a tgz of the /home files. If things go sideways with a zypper dup upgrade, I simply dd the binary image back onto /dev/sda directly (only had to do this once so far). I haven’t ever had an accident with /home, but tarballs are old hat. I don’t care if I lose a couple weeks of stuff … just not everything else up to that point. Each root image is about 5.5GB, so I can save the last 4 images no problem. I never really thought about what would happen if the backup disk went poof. I lost my small backup/restore script! So I had to do all the math again on the sectors for dd. I don’t really trust complicated software backup solutions … I want low-level, get-er-done-with-stock-programs power.
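For the record, a sketch of that kind of dump and restore (the sector count is a made-up example; get the real end sector of your root partition from gdisk -l /dev/sda or parted /dev/sda unit s print first):

# Booted from the USB stick, main system not mounted.
# COUNT = last sector of the root partition + 1 -- example value only!
COUNT=123456789
dd if=/dev/sda bs=512 count=$COUNT status=progress | gzip -c > /mnt/backups/sda-root.img.gz
# Restore, also from the USB stick:
gunzip -c /mnt/backups/sda-root.img.gz | dd of=/dev/sda bs=512 status=progress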
If you are making tar.gz files of your home directory, then you are eating lots of space on full backups, where you could actually save space using rsync with hard links.
I will try to spin a couple of scripts up from memory. So, test them before relying on them.
I am going to keep 90 days’ worth of full backups and back up /home every hour. 24 x 90 = 2160 backups in my example. I will assume the drive is mounted on /mnt/backups.
Make your backup directories. Only do this once:
mkdir /mnt/backups/rsync
cd /mnt/backups/rsync
for i in {1..2160}
do
mkdir backup${i}
done
You now have 2160 directories to backup to.
Now your rsync script:
#!/bin/bash
mydir="/mnt/backups/rsync"
# Make sure the backup drive is mounted
if [ ! -d "${mydir}" ]
then
    echo "Backup drive is not mounted!"
    exit 1
fi
# Move backup directories up one
cd "${mydir}" || exit 1
rm -r backup2160
for i in $(seq 2159 -1 1)
do
    mv backup${i} backup$((i+1))
done
# Make a new backup1 directory
mkdir backup1
# Every time the backup runs the log will be stored in the backup_out.txt file
# If you want to exclude directories in /home use this syntax:
# --exclude='/home/username/path'
# You can also use a syntax like this if you have lots of users and want to
# exclude certain directories for all of them:
# --exclude='/home/*/path'
rsync -ivaAHRWX --no-motd --link-dest="${mydir}/backup2" /home "${mydir}/backup1/" >"${mydir}/backup_out.txt" 2>&1
Because this uses hard links, if one file is unchanged across all 2160 backups it only takes up space once. The other advantage is that if you delete a file from your main drive, you still have that file for 90 days.
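You can check the linking yourself: an unchanged file in two adjacent backups shows the same inode number and a hard-link count above 1 (the file path is a placeholder):

# Same inode and a link count > 1 means the file is stored only once
stat -c '%i %h %n' /mnt/backups/rsync/backup1/home/user/notes.txt /mnt/backups/rsync/backup2/home/user/notes.txt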
I know you didn’t ask for all that information, but I love rsync.
Almost forgot: You of course need to cron the script to get it to run every hour.
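A minimal crontab entry for that (the script path is an assumption; put the script wherever you like and make it executable):

# root crontab (crontab -e): run hourly, on the hour
0 * * * * /usr/local/bin/home-backup.sh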
Tons of variations on this theme. Backup every hour or once a week: at your service. Keeping 10 copies or 100, or a cycle of 24 within a cycle of 7, again within a cycle of …: at your service.
I use rsync as explained above (with links for unchanged files on the backup side), remotely, on an old system (it does not even need a GUI installed). Using another system has the obvious advantage that with a burned-out system, you still have your data. And when possible, put the backup system in another room or even another building.
You can also use the backup system to back up several systems. Just organise things.
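A sketch of the remote variant, pulled from the backup machine over ssh (“workstation” and the paths are placeholders; --link-dest works the same, it just points at the previous backup on the receiving side):

# Run on the backup machine, pulling /home from the client
rsync -aAHX --link-dest=/mnt/backups/rsync/backup2/home root@workstation:/home/ /mnt/backups/rsync/backup1/home/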
BTW, I do not back up / as a whole (installing fresh is often faster than recovering). But I do back up /boot (for the config files, not really for the kernels), /etc (not for blindly restoring after installation, but for reference), of course /home, and from my web server system /srv.
First backup will take some time (everything has to be copied), but all following ones are done in a flash (except when you just downloaded a lot of pictures from your camera).
And, as for all backup policies: do not forget to test recovery before you go live. You would not be the first one who was clever enough to make backups, but then had to ask for help here when it came to recovery (especially those who back up with clones and not on file level as we discuss here). That help usually comes forward, of course, but it takes time, and you will always need recovery at a moment when you are in a hurry.
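With the scheme above, a recovery test is just rsync in the other direction; a sketch (restore one user to a scratch location first; the paths are assumed):

# Dry run first (-n) to see what would come back
rsync -naAHX /mnt/backups/rsync/backup1/home/user/ /tmp/restore-test/
# Looks good? Run again without -n
rsync -aAHX /mnt/backups/rsync/backup1/home/user/ /tmp/restore-test/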
The following unit invokes a script monitoring the configuration files of my collection of photo albums. Whenever a change occurs, a copy of these is saved and the whole storage is updated by copying new images and movies from their SSD to HDD:
karl@hofkirchen:~> systemctl status save-jalbum-settings.service
Unit save-jalbum-settings.service could not be found.
karl@hofkirchen:~> systemctl --user status save-jalbum-settings.service
● save-jalbum-settings.service - Save jAlbum Project Files
Loaded: loaded (/home/karl/.config/systemd/user/save-jalbum-settings.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2018-05-20 05:50:03 CEST; 7h ago
Main PID: 2586 (bash)
CGroup: /user.slice/user-1000.slice/user@1000.service/save-jalbum-settings.service
├─ 2586 /bin/bash /home/karl/bin/save-jalbum-settings.sh
└─26208 sleep 60
Mai 20 05:50:03 hofkirchen systemd[2579]: Started Save jAlbum Project Files.
karl@hofkirchen:~> find /home/karl/.config/systemd/
/home/karl/.config/systemd/
/home/karl/.config/systemd/user
/home/karl/.config/systemd/user/save-jalbum-settings.service
/home/karl/.config/systemd/user/default.target.wants
/home/karl/.config/systemd/user/default.target.wants/save-jalbum-settings.service
karl@hofkirchen:~>
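The unit file itself is not shown above; a minimal sketch of what such a user service could look like (the script path matches the status output, the rest of the body is an assumption):

# ~/.config/systemd/user/save-jalbum-settings.service
[Unit]
Description=Save jAlbum Project Files

[Service]
ExecStart=/home/karl/bin/save-jalbum-settings.sh

[Install]
WantedBy=default.target

Enable and start it with: systemctl --user enable --now save-jalbum-settings.service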