[atomic-update] My little project for transactional updates on read-write systems

Hello everyone,

Since the excellent transactional-update (TU) package is not officially supported on read-write (R/W) systems, I created a small program to perform atomic updates on openSUSE systems with R/W root filesystems: Tumbleweed, Slowroll, Leap, etc.

There are some changes compared to TU, so I will detail the "How it works" section from GitHub here as well.

How it works:

  • On performing an update or running a command using atomic-update, a new root filesystem snapshot is created
  • The new snapshot is used to boot an ephemeral container to see which services are in a failed state, for later comparison
  • All changes are made against this new snapshot and not to the currently running system’s snapshot
  • The snapshot is booted again in an ephemeral container to see if the changes broke any new services
  • If the changes are successful, the new snapshot is set as the default snapshot. The changes can be either applied live or the system rebooted into the new default snapshot
  • If the changes are unsuccessful, the new snapshot is discarded
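
For reference, the flow above boils down to roughly the following commands (a simplified sketch, not the actual implementation; snapshot number 42 and the exact options are illustrative):

snapper create --read-write --print-number --description "atomic update"  # new R/W snapshot, say number 42
systemd-nspawn --directory /.snapshots/42/snapshot --ephemeral --boot     # boot it once to record already-failed units
zypper --root /.snapshots/42/snapshot --non-interactive dup               # apply changes to the new snapshot only
systemd-nspawn --directory /.snapshots/42/snapshot --ephemeral --boot     # boot it again to check for newly failed units
btrfs subvolume set-default /.snapshots/42/snapshot                       # on success, make it the default (newer btrfs-progs accept a path here)
snapper delete 42                                                         # on failure, discard the snapshot instead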

Performing updates like this has a number of benefits:

  • Prevent a broken system due to system crash, power loss, etc.
  • Prevent non-interactive updates from breaking the system when conflicts/errors cause zypper to abort (the default action on conflicts/errors) in the middle of an update
  • Prevent updates from leaving the system in an inconsistent state when package scripts fail during an otherwise successful update
  • Avoid having to reboot into read-only grub snapshots to perform rollback

Downsides:

  • Updates must be either applied live or the system rebooted shortly thereafter to avoid losing changes made to the old root filesystem.
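
A quick way to check whether you are still running the old root (and would therefore lose any changes made to it) is to compare the booted and default snapshots, e.g.:

snapper list | tail             # snapper marks the currently booted and the default snapshot in its listing
btrfs subvolume get-default /   # subvolume that will be mounted on the next boot
findmnt -no SOURCE /            # subvolume the running system is actually using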

PRs and Issues are welcome :smiley_cat:

P.S. I have performed several dups and am currently using it on all my Tumbleweed and Slowroll systems, but you should still tread carefully and try it out in a VM first. :warning:


Don’t like tinkering with virtual machines. I presume rolling back to the previous default subvolume will undo the changes made by atomic-update. Correct?

Yep, that should do it. Either using snapper rollback or setting the default subvolume directly using btrfs.
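
For example (the snapshot number and subvolume ID are placeholders):

# roll back with snapper; creates a writable copy of the old snapshot and makes it the default for the next boot
snapper rollback <number>
# or set the default subvolume directly; find the ID with 'btrfs subvolume list /'
btrfs subvolume set-default <subvol-id> /
reboot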

Some trouble:

3400g:~ # atomic-update --debug dup
2024-04-01 19:16:36,373: INFO: Starting atomic distribution upgrade...
2024-04-01 19:16:36,431: DEBUG: Snapper root config name: root
2024-04-01 19:16:36,463: DEBUG: Active snapshot number: 2792, Default snapshot number: 2792
2024-04-01 19:16:37,281: DEBUG: Latest atomic snapshot number: 2807
2024-04-01 19:16:37,281: INFO: Using snapshot 2792 as base for new snapshot 2807
2024-04-01 19:16:37,936: DEBUG: Btrfs root device: /dev/nvme0n1p2
2024-04-01 19:16:37,937: DEBUG: Setting up temp mounts...
2024-04-01 19:16:37,958: INFO: Verifying snapshot prior to update...
2024-04-01 19:16:37,958: DEBUG: Booting container
2024-04-01 19:16:37,959: DEBUG: Getting container id
2024-04-01 19:16:38,980: DEBUG: Container ID = rootfs-556ed3bff677b299
2024-04-01 19:16:38,980: DEBUG: Waiting for container bootup to finish...
2024-04-01 19:18:41,022: ERROR: Timeout waiting for bootup of ephemeral container from snapshot. Cancelling task...
2024-04-01 19:18:41,022: INFO: Cleaning up...
2024-04-01 19:18:41,022: DEBUG: Stopping ephemeral systemd-nspawn containers...
2024-04-01 19:18:41,048: DEBUG: Cleaning up temp mounts...
2024-04-01 19:18:41,251: DEBUG: Cleaning up temp dirs...
2024-04-01 19:18:41,255: DEBUG: Cleaning up unfinished snapshots...
3400g:~ # 

Thanks for the debug output. I’ve made some improvements to handle long-running container bootups; at the very least it should yield more debug info on what’s causing the hangup! :eyes:

Yep. More debug info:

3400g:~ # atomic-update --debug dup
2024-04-01 21:20:53,039: INFO: Starting atomic distribution upgrade...
2024-04-01 21:20:53,093: DEBUG: Snapper root config name: root
2024-04-01 21:20:53,127: DEBUG: Active snapshot number: 2792, Default snapshot number: 2792
2024-04-01 21:20:53,891: DEBUG: Latest atomic snapshot number: 2807
2024-04-01 21:20:53,891: INFO: Using snapshot 2792 as base for new snapshot 2807
2024-04-01 21:20:54,594: DEBUG: Btrfs root device: /dev/nvme0n1p2
2024-04-01 21:20:54,594: DEBUG: Setting up temp mounts...
2024-04-01 21:20:54,615: INFO: Verifying snapshot prior to update...
2024-04-01 21:20:54,615: DEBUG: Booting container
2024-04-01 21:20:54,616: DEBUG: Getting container id
2024-04-01 21:20:55,636: DEBUG: Container ID = rootfs-fe4bcc3c7079c9b6
2024-04-01 21:20:55,637: DEBUG: Waiting for container bootup to finish...
/usr/bin/atomic-update:134: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
  logging.warn("Timeout waiting for bootup of ephemeral container from snapshot")
2024-04-01 21:21:56,653: WARNING: Timeout waiting for bootup of ephemeral container from snapshot
2024-04-01 21:21:56,653: DEBUG: systemd-analyze time output:
Failed to get shell PTY: There is no system bus in container rootfs-fe4bcc3c7079c9b6.
2024-04-01 21:21:56,666: DEBUG: systemctl list-jobs output:
Failed to get shell PTY: There is no system bus in container rootfs-fe4bcc3c7079c9b6.
2024-04-01 21:21:56,666: DEBUG: Getting failed systemd units
2024-04-01 21:21:56,676: ERROR: Could not decode JSON output of failed systemd units. Cancelling task...
2024-04-01 21:21:56,676: DEBUG: systemctl --failed output:
Failed to get shell PTY: There is no system bus in container rootfs-fe4bcc3c7079c9b6.
2024-04-01 21:21:56,676: INFO: Cleaning up...
2024-04-01 21:21:56,676: DEBUG: Stopping ephemeral systemd-nspawn containers...
2024-04-01 21:21:56,690: DEBUG: Cleaning up temp mounts...
2024-04-01 21:21:56,868: DEBUG: Cleaning up temp dirs...
2024-04-01 21:21:56,873: DEBUG: Cleaning up unfinished snapshots...
3400g:~ # 

Could you provide the output of:

# this would create a temp container in a new temp btrfs subvol based on the current root subvol
systemd-nspawn --directory / --ephemeral --boot
# once booted, log in, then shut it off with systemctl poweroff

Perhaps there is some SELinux/AppArmor policy preventing this. I have AppArmor enabled on my system but not SELinux.
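
If anyone wants to check the same on their system, something along these lines should work (assuming aa-status is installed; getenforce only exists when the SELinux userspace tools are present):

aa-status --enabled && echo "AppArmor is enabled"            # exits 0 when AppArmor is enabled
getenforce 2>/dev/null || echo "SELinux tools not installed"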

3400g:~ # systemd-nspawn --directory / --ephemeral --boot
Spawning container 3400g-54d1c1d3c01962cf on /.#machine.c24e77d9ce8d9176.
Press Ctrl-] three times within 1s to kill container.
systemd 255.4+suse.17.gbe772961ad running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA -SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK -XKBCOMMON -UTMP +SYSVINIT default-hierarchy=unified)
Detected virtualization systemd-nspawn.
Detected architecture x86-64.

Welcome to openSUSE Tumbleweed!

Hostname set to <3400g>.
bpf-lsm: BPF LSM hook not enabled in the kernel, BPF LSM not supported
/usr/lib/systemd/system/plymouth-start.service:15: Unit uses KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update the service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
Queued start job for default target Graphical Interface.
[  OK  ] Created slice Slice /system/getty.
[  OK  ] Created slice Slice /system/modprobe.
[  OK  ] Created slice User and Session Slice.
[  OK  ] Started Dispatch Password Requests to Console Directory Watch.
[  OK  ] Reached target Local Encrypted Volumes.
[  OK  ] Reached target Local Integrity Protected Volumes.
[  OK  ] Reached target Remote File Systems.
[  OK  ] Reached target Slice Units.
[  OK  ] Reached target Swaps.
[  OK  ] Reached target System Time Set.
[  OK  ] Reached target Local Verity Protected Volumes.
[  OK  ] Listening on Device-mapper event daemon FIFOs.
[  OK  ] Listening on Journal Socket (/dev/log).
[  OK  ] Listening on Journal Socket.
         Mounting Huge Pages File System...
         Starting Load AppArmor profiles...
         Mounting FUSE Control File System...
         Starting Journal Service...
         Starting Apply Kernel Variables for 6.8.1-1-default...
         Starting Remount Root and Kernel File Systems...
         Starting Create Static Device Nodes in /dev gracefully...
[  OK  ] Mounted Huge Pages File System.
[  OK  ] Mounted FUSE Control File System.
systemd-remount-fs.service: Main process exited, code=exited, status=1/FAILURE
systemd-remount-fs.service: Failed with result 'exit-code'.
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
systemd-remount-fs.service: Triggering OnFailure= dependencies.
[  OK  ] Finished Apply Kernel Variables for 6.8.1-1-default.
[  OK  ] Finished Create Static Device Nodes in /dev gracefully.
[  OK  ] Created slice Slice /system/failure-notification.
         Starting Remount Root and Kernel File Systems...
[  OK  ] Started Journal Service.
[  OK  ] Finished Load AppArmor profiles.
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
         Starting Remount Root and Kernel File Systems...
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
         Starting Remount Root and Kernel File Systems...
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
         Starting Remount Root and Kernel File Systems...
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
         Starting Flush Journal to Persistent Storage...
         Starting Create Static Device Nodes in /dev...
[  OK  ] Finished Flush Journal to Persistent Storage.
[  OK  ] Finished Create Static Device Nodes in /dev.
[  OK  ] Reached target Preparation for Local File Systems.
[UNSUPP] Starting of Btrbk.automount unsupported.
[DEPEND] Dependency failed for Local File Systems.
[  OK  ] Stopped Dispatch Password Requests to Console Directory Watch.
[UNSUPP] Starting of Crucial.automount unsupported.
[UNSUPP] Starting of Sandisk.automount unsupported.
[UNSUPP] Starting of Seagate.automount unsupported.
[  OK  ] Reached target Timer Units.
[  OK  ] Listening on System Extension Image Management (Varlink).
         Mounting /home/Albums...
[  OK  ] Reached target Login Prompts.
[  OK  ] Reached target System Time Synchronized.
[  OK  ] Reached target Network.
[  OK  ] Reached target Network is Online.
[  OK  ] Reached target System VNC service.
[  OK  ] Reached target Path Units.
[  OK  ] Reached target Socket Units.
[  OK  ] Started Emergency Shell.
[  OK  ] Reached target Emergency Mode.
         Starting Tell Plymouth To Write Out Runtime Data...
         Starting Create Volatile Files and Directories...
[  OK  ] Mounted /home/Albums.
[  OK  ] Finished Tell Plymouth To Write Out Runtime Data.
[  OK  ] Finished Create Volatile Files and Directories.
         Starting Rebuild Journal Catalog...
         Starting Write boot and shutdown times into wtmpdb...
[  OK  ] Finished Rebuild Journal Catalog.
         Starting Update is Completed...
[  OK  ] Finished Update is Completed.
[  OK  ] Finished Write boot and shutdown times into wtmpdb.
You are in emergency mode. After logging in, type "journalctl -xb" to view
system logs, "systemctl reboot" to reboot, or "exit"
to continue bootup.
Give root password for maintenance
(or press Control-D to continue): 
3400g:~ # 

Thanks, I think I might’ve nailed down the problem! :hugs:


Yep:

3400g:~ # atomic-update dup
2024-04-02 07:57:24,993: INFO: Starting atomic distribution upgrade...
2024-04-02 07:57:26,217: INFO: Using snapshot 2792 as base for new snapshot 2813
2024-04-02 07:57:27,219: INFO: Verifying snapshot prior to update...
2024-04-02 07:57:29,623: INFO: Checking for packages to upgrade
2024-04-02 07:57:31,754: INFO: Nothing to do. Exiting...
2024-04-02 07:57:31,754: INFO: Cleaning up...
3400g:~ # 

Some video (GIF) examples of atomic-update in action; really just an excuse for me to try out the awesome asciinema package :wink:

Performing dup

[GIF: atomic-dup]


Testing to see what happens when breaking a snapshot

[GIF: break-snapshot]

Applying updates live does not break the system during a major DE upgrade.
Tested against the following DEs in a TW VM:

  1. Gnome 45 → 46
  2. KDE 5 → 6

atomic-update (AU) is bug-for-bug compatible with transactional-update (TU) w.r.t. applying live, so TU should also have no issues!

Unlike a traditional update, where files belonging to currently running programs are replaced in place, AU/TU when applying live simply bind-mounts the new /usr, /etc, and /boot dirs on top of the existing ones. This ensures running programs do not crash from losing access to their old files, while new instances of the same program use the newer, updated files.
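
Conceptually, applying live amounts to something like the following (a simplified sketch; the snapshot number is just an example taken from the logs above, and the real code does more bookkeeping):

SNAP=/.snapshots/2813/snapshot        # the freshly updated snapshot
for d in usr etc boot; do
    mount --bind "$SNAP/$d" "/$d"     # overlay the updated tree on top of the running one
done
# already-running programs keep their open (old) files; newly started programs
# resolve paths through the bind mounts and see the updated files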

The current version of atomic-update is sluggish:

erlangen:~ # atomic-update --version
atomic-update v0.1.7
erlangen:~ # time atomic-update dup
2024-04-10 12:17:30,406: INFO: Starting atomic distribution upgrade...
2024-04-10 12:17:31,060: INFO: Using snapshot 3353 as base for new snapshot 3356
2024-04-10 12:19:48,914: INFO: Verifying snapshot prior to update...
2024-04-10 12:19:54,456: INFO: Checking for packages to upgrade
2024-04-10 12:19:57,224: INFO: Nothing to do. Exiting...
2024-04-10 12:19:57,224: INFO: Cleaning up...

real    2m27.766s
user    0m3.883s
sys     0m1.615s
erlangen:~ # 

Could you perform a test run with the debug option, like so:

sudo atomic-update --debug run false

erlangen:~ # atomic-update --debug run false
2024-04-10 21:27:34,697: INFO: Starting atomic transaction...
2024-04-10 21:27:34,736: DEBUG: Snapper root config name: root
2024-04-10 21:27:34,751: DEBUG: Active snapshot number: 3361, Default snapshot number: 3361
2024-04-10 21:27:35,319: DEBUG: Latest atomic snapshot number: 3364
2024-04-10 21:27:35,319: INFO: Using snapshot 3361 as base for new snapshot 3364
2024-04-10 21:27:35,856: DEBUG: Btrfs root device: /dev/nvme1n1p2
2024-04-10 21:27:35,856: DEBUG: Setting up temp mounts...
2024-04-10 21:29:53,202: INFO: Verifying snapshot prior to update...
2024-04-10 21:29:53,202: DEBUG: Booting container
2024-04-10 21:29:53,209: DEBUG: Getting container id
2024-04-10 21:29:54,271: DEBUG: Container ID = rootfs-656aae7179ce05df
2024-04-10 21:29:54,271: DEBUG: Waiting for container bootup to finish...
2024-04-10 21:29:58,612: DEBUG: Getting failed systemd units
2024-04-10 21:29:58,650: DEBUG: Number of failed units = 11
2024-04-10 21:29:58,650: DEBUG: Failed units = apache2.service, failure-notification@apache2.service, failure-notification@minidlna.service, failure-notification@rpcbind.service, failure-notification@sshd.service, failure-notification@vncmanager.service, minidlna.service, sshd.service, vncmanager.service, rpcbind.socket, vsftpd.socket
2024-04-10 21:29:58,650: DEBUG: Stopping container...
2024-04-10 21:29:58,654: INFO: Running command 'false' within chroot...
2024-04-10 21:29:58,656: ERROR: Command returned exit code 256
2024-04-10 21:29:58,656: INFO: Discarding snapshot 3364
2024-04-10 21:29:58,664: INFO: Cleaning up...
2024-04-10 21:29:58,664: DEBUG: Stopping ephemeral systemd-nspawn containers...
2024-04-10 21:29:58,675: DEBUG: Cleaning up temp mounts...
2024-04-10 21:29:59,034: DEBUG: Cleaning up temp dirs...
2024-04-10 21:29:59,036: DEBUG: Cleaning up unfinished snapshots...
erlangen:~ # 

Thanks, looks like some mounts in /etc/fstab are timing out.
Could you provide the output of:

cat /etc/fstab

erlangen:~ # cat /etc/fstab 
UUID=19CF-0B54                             /boot/efi               vfat   defaults                      0  0
UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344  /                       btrfs  defaults                      0  0
UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344  /.snapshots             btrfs  subvol=/@/.snapshots          0  0
# subvolumes exempted from snapshotting
UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344  /var                    btrfs  subvol=/@/var                 0  0
UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344  /usr/local              btrfs  subvol=/@/usr/local           0  0
UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344  /srv                    btrfs  subvol=/@/srv                 0  0
UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344  /root                   btrfs  subvol=/@/root                0  0
UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344  /opt                    btrfs  subvol=/@/opt                 0  0
UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344  /home                   btrfs  subvol=/@/home                0  0
UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344  /boot/grub2/x86_64-efi  btrfs  subvol=/@/boot/grub2/x86_64-efi  0  0
UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344  /boot/grub2/i386-pc     btrfs  subvol=/@/boot/grub2/i386-pc  0  0
#-----------------------------------------------------------------------------------------------------------
UUID=0e9f8bb1-2e36-4a6e-aab8-50a12a269d37  /home_SSD               btrfs  defaults                      0  0
UUID=68BA-53B2                             /GARMIN                 vfat   user,noauto                   0  0
UUID=0267-906F                             /GARMIN-KART            vfat   user,noauto                   0  0
LABEL=FR735                                /FR735                  vfat   user,x-systemd.automount,x-systemd.idle-timeout=10 0  0
UUID=2f0030b8-7257-4cba-be3e-b33154cda052  /WD25                   ext4   user,noauto                   0  0
#-----------------------------------------------------------------------------------------------------------
UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344  /Btrbk                  btrfs  subvolid=5,x-systemd.automount,x-systemd.idle-timeout=10 0  0
UUID=8a723ba5-c46f-45df-b708-0cf9c541da27  /Backup                 btrfs  subvolid=5,x-systemd.automount,x-systemd.idle-timeout=10 0  0
UUID=47e6d9ee-e910-4ea4-8c8f-7ac75f49a4d3  /Crucial                btrfs  subvolid=5,x-systemd.automount,x-systemd.idle-timeout=10 0  0
UUID=2260f160-cc05-47cc-9893-cc32c050177d  /Seagate                btrfs  subvolid=5,x-systemd.automount,x-systemd.idle-timeout=10 0  0
#-----------------------------------------------------------------------------------------------------------
# Examples
//fritz.box/FRITZ.NAS                      /fritz.box              cifs   noauto,username=mistel        0  0
UUID=78383e24-1ed7-45ad-9a6b-65b8b98b93c2  /Sandisk                btrfs  subvolid=5,x-systemd.automount,x-systemd.idle-timeout=10 0  0
6700k:/home/karl /home_karl_6700k nfs4 rw,_netdev,x-systemd.automount,x-systemd.timeout=5,x-systemd.idle-timeout=10 0 0
erlangen:~ # 

Pushed an update that skips mounting network mounts.

If it doesn’t address the slowness:

# run as root
snapper -c root create -c number -d 'Test snap'
snapper list | tail
# assuming we created snapshot number 42
time mount -v -o subvol=@/.snapshots/42/snapshot UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344 /mnt;
time for i in dev proc run sys; do mount -v --rbind --make-rslave /$i /mnt/$i; done;
time chroot /mnt mount -v -a -O no_netdev;
# unmount everything
mount -l | grep '/mnt' | awk '{print $3}' | awk '{print length, $0}' | sort -rn | awk '{print $2}' | awk '{system("umount " $0)}';
# delete test snapshot
snapper rm 42

Sometimes it’s fast:

erlangen:~ # atomic-update --debug run false
2024-04-12 07:37:18,398: INFO: Starting atomic transaction...
2024-04-12 07:37:18,445: DEBUG: Snapper root config name: root
2024-04-12 07:37:18,460: DEBUG: Active snapshot number: 3361, Default snapshot number: 3361
2024-04-12 07:37:19,046: DEBUG: Latest atomic snapshot number: 3370
2024-04-12 07:37:19,046: INFO: Using snapshot 3361 as base for new snapshot 3370
2024-04-12 07:37:19,609: DEBUG: Btrfs root device: /dev/nvme1n1p2
2024-04-12 07:37:19,609: DEBUG: Setting up temp mounts...
2024-04-12 07:37:19,623: INFO: Verifying snapshot prior to update...
2024-04-12 07:37:19,623: DEBUG: Booting container
2024-04-12 07:37:19,623: DEBUG: Getting container id
2024-04-12 07:37:20,632: DEBUG: Container ID = rootfs-b926c6697f774605
2024-04-12 07:37:20,632: DEBUG: Waiting for container bootup to finish...
2024-04-12 07:37:23,885: DEBUG: Getting failed systemd units
2024-04-12 07:37:23,919: DEBUG: Number of failed units = 11
2024-04-12 07:37:23,919: DEBUG: Failed units = apache2.service, failure-notification@apache2.service, failure-notification@minidlna.service, failure-notification@rpcbind.service, failure-notification@sshd.service, failure-notification@vncmanager.service, minidlna.service, sshd.service, vncmanager.service, rpcbind.socket, vsftpd.socket
2024-04-12 07:37:23,919: DEBUG: Stopping container...
2024-04-12 07:37:23,923: INFO: Running command >>> false <<< within chroot...
2024-04-12 07:37:23,925: ERROR: Command returned exit code 256
2024-04-12 07:37:23,925: INFO: Discarding snapshot 3370
2024-04-12 07:37:23,930: INFO: Cleaning up...
2024-04-12 07:37:23,930: DEBUG: Stopping ephemeral systemd-nspawn containers...
2024-04-12 07:37:23,938: DEBUG: Cleaning up temp mounts...
2024-04-12 07:37:24,296: DEBUG: Cleaning up temp dirs...
2024-04-12 07:37:24,298: DEBUG: Cleaning up unfinished snapshots...
erlangen:~ # 

Sometimes it hangs:

erlangen:~ # atomic-update --debug run false
2024-04-12 07:39:57,043: INFO: Starting atomic transaction...
2024-04-12 07:39:57,047: DEBUG: Snapper root config name: root
2024-04-12 07:39:57,062: DEBUG: Active snapshot number: 3361, Default snapshot number: 3361
2024-04-12 07:39:57,752: DEBUG: Latest atomic snapshot number: 3370
2024-04-12 07:39:57,752: INFO: Using snapshot 3361 as base for new snapshot 3370
2024-04-12 07:39:58,341: DEBUG: Btrfs root device: /dev/nvme1n1p2
2024-04-12 07:39:58,341: DEBUG: Setting up temp mounts...
2024-04-12 07:39:58,354: INFO: Verifying snapshot prior to update...
2024-04-12 07:39:58,354: DEBUG: Booting container
2024-04-12 07:39:58,354: DEBUG: Getting container id
2024-04-12 07:39:59,363: DEBUG: Container ID = rootfs-c4a0abef4e6077e0
2024-04-12 07:39:59,363: DEBUG: Waiting for container bootup to finish...
2024-04-12 07:40:30,313: DEBUG: Getting failed systemd units
2024-04-12 07:40:30,347: DEBUG: Number of failed units = 13
2024-04-12 07:40:30,347: DEBUG: Failed units = apache2.service, failure-notification@apache2.service, failure-notification@fetchmail.service, failure-notification@minidlna.service, failure-notification@rpcbind.service, failure-notification@sshd.service, failure-notification@vncmanager.service, fetchmail.service, minidlna.service, sshd.service, vncmanager.service, rpcbind.socket, vsftpd.socket
2024-04-12 07:40:30,347: DEBUG: Stopping container...
2024-04-12 07:40:30,350: INFO: Running command >>> false <<< within chroot...
2024-04-12 07:40:30,352: ERROR: Command returned exit code 256
2024-04-12 07:40:30,352: INFO: Discarding snapshot 3370
2024-04-12 07:40:30,359: INFO: Cleaning up...
2024-04-12 07:40:30,359: DEBUG: Stopping ephemeral systemd-nspawn containers...
2024-04-12 07:40:30,368: DEBUG: Cleaning up temp mounts...
2024-04-12 07:40:30,746: DEBUG: Cleaning up temp dirs...
2024-04-12 07:40:30,748: DEBUG: Cleaning up unfinished snapshots...
erlangen:~ # 
erlangen:~ # time mount -v -o subvol=@/.snapshots/3369/snapshot UUID=0e58bbe5-eff7-4884-bb5d-a0aac3d8a344 /mnt;
mount: /dev/nvme1n1p2 mounted on /mnt.

real    0m0.004s
user    0m0.003s
sys     0m0.001s
erlangen:~ # time for i in dev proc run sys; do mount -v --rbind --make-rslave /$i /mnt/$i; done;
mount: /dev bound on /mnt/dev.
mount: /proc bound on /mnt/proc.
mount: /run bound on /mnt/run.
mount: /sys bound on /mnt/sys.

real    0m0.008s
user    0m0.003s
sys     0m0.005s
erlangen:~ # time chroot /mnt mount -v -a -O no_netdev;
/boot/efi                : successfully mounted
/                        : ignored
/.snapshots              : successfully mounted
/var                     : successfully mounted
/usr/local               : successfully mounted
/srv                     : successfully mounted
/root                    : successfully mounted
/opt                     : successfully mounted
/home                    : successfully mounted
/boot/grub2/x86_64-efi   : successfully mounted
/boot/grub2/i386-pc      : successfully mounted
/home_SSD                : successfully mounted
/GARMIN                  : ignored
/GARMIN-KART             : ignored
mount: /FR735: can't find LABEL=FR735.
/WD25                    : ignored
/Btrbk                   : successfully mounted
/Backup                  : successfully mounted
mount: /Crucial: can't find UUID=47e6d9ee-e910-4ea4-8c8f-7ac75f49a4d3.
mount: /Seagate: can't find UUID=2260f160-cc05-47cc-9893-cc32c050177d.
/fritz.box               : ignored
mount: /Sandisk: can't find UUID=78383e24-1ed7-45ad-9a6b-65b8b98b93c2.
/home_karl_6700k         : ignored

real    0m0.096s
user    0m0.000s
sys     0m0.012s
erlangen:~ # 

Occasional container bootup delays are expected when a systemd service times out or when the host is under load. Verification can be disabled with the --no-verify option.
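
For example (assuming the flag goes before the subcommand, as with --debug in the logs above):

atomic-update --no-verify dup   # skip the pre-/post-update container verification boots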

Seems like the initial 2m delay caused by "Setting up temp mounts..." was fixed? :thinking:
