To keep a very, very long story short: I want to clone an existing HDD to a new HDD. I used GNU ddrescue to do the job. Unfortunately, I accidentally sent the image to a file in /tmp on the source HDD. Of course the drive filled up and the ddrescue session stopped. I used rm to discard the unwanted image file and thought that was the end of the matter. Unfortunately, it wasn’t. I made a second attempt at cloning the drive: ddrescue -f /dev/sda /dev/sdb ddrescue.log
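For the record, what I should have done is keep the mapfile (and any image) on a third drive that isn’t part of the clone. A rough sketch, with /dev/sdc1 and the mount point as placeholders for whatever spare device is actually to hand:

  mount /dev/sdc1 /mnt/spare                               # a third drive, not involved in the clone
  ddrescue -f /dev/sda /dev/sdb /mnt/spare/ddrescue.map    # -f because the output is a block device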
The target HDD is unbootable. I missed that the target has 512-byte logical sectors and 4096-byte physical sectors, while the source drive has 512-byte logical and physical sectors. A thread on this issue is coming soon to a forum near you…
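For anyone who wants to verify the sector sizes on their own drives, blockdev from util-linux reports both; substitute your own device names:

  blockdev --getss --getpbsz /dev/sda   # logical sector size, then physical sector size (source)
  blockdev --getss --getpbsz /dev/sdb   # same for the target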
The machine was powered down, SATA cables pulled, and an attempt to boot the target HDD was made. It failed outright: BIOS couldn’t find anything to boot. I then reconnected the source HDD and found it would boot up but couldn’t finish startup. Even in failsafe mode, things go only so far (trying to start /tmp). What follows was posted at the end of another thread, suffering a bad case of topic drift.
NOTE: Each HDD has two partitions: 70 MB ext4 (for /boot, I assume) and ~690 GB LVM. As far as I can tell, the sizes are identical. After that… see above re: sector size issues.
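If it matters, the layouts can be compared side by side from a rescue environment with both drives attached; a sketch (the columns are standard lsblk ones):

  lsblk -o NAME,SIZE,FSTYPE,TYPE /dev/sda /dev/sdb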
boot.lvm looks for LVM volumes. It finds one, and starts up the swap space as /dev/system/swap
boot.lvm then says “Reading all physical volumes” and then “Found volume group ‘system’ using metadata type lvm2”
At this point the record is confusing because there seem to be responses from earlier activities. It seems that group “system” goes through a check that is successful.
boot.lvm reports “3 logical volumes in volume group ‘system’ now active”
A check is started on /dev/system/home (later reported as clean)
systemd-fsck reports /dev/sda1: recovering journal (this applies to the 70 MB ext4 partition) and that is also found to be clean.
/dev/sda1 is remounted and all the signs are that it’s in acceptable condition.
Starting /boot
Starting /home
Starting Load Random Seed - all of these get OKs
Somewhere in all of the above, there are two file system checks on /dev/disk/by-id/ata-Hitachi… (the full name of the drive used for booting). They seem to be OK.
I can’t find where it starts, but there is a message “Job dev-disk-by\x2did-ata\x2dWDC_…[name of WD drive used to receive the clone of the Hitachi drive]…part1.device/start timed out. Dependency failed. Aborted start of /tmp”.
At this point the dominoes fall over in rapid succession - systemd reports:
Job remote-fs-pre.target/start failed with result “dependency”.
Job local-fs-pre.target/start failed with result “dependency”.
Triggering OnFailure= dependencies of local-fs.target.
Job tmp.mount/start failed with result “dependency”.
Job local-fs-pre.target/start failed with result “dependency”.
Job dev-disk-by\x2did-ata\x2dWDC_…[name of WD drive used to receive clone of Hitachi drive]…part1.device/start failed with result ‘timeout’.
Welcome to emergency mode (yada yada yada)
boot.lvm: can’t deactivate volume group “system” with 3 open logical volumes
systemd reports the time spent in startup, and the show ends with “Give root password for login:”
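In case it helps anyone advise, here is roughly what I can inspect from that emergency shell once I give the root password (assuming the journal and /etc/fstab are readable at that point):

  journalctl -xb | grep -i tmp    # how the /tmp job died during this boot
  cat /etc/fstab                  # which device /tmp is supposed to be mounted from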
So it appears that everything in the LVM volume is fine until the boot reaches /tmp, and /tmp is where the file that filled the drive lived.
The question, I guess, is how to say “forget what’s in /tmp - it’s all temporary anyway”. Of course, it may well be that anything downstream from /tmp is also chewed up. Until I can get /tmp mounted, I guess there’s no way to be certain about that.
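My best guess (and it is only a guess) is that the answer lives in /etc/fstab: either mount /tmp as a tmpfs, or mark the existing entry nofail so a missing or slow device doesn’t abort the boot. Something along these lines, edited from the emergency shell; the device name and filesystem type below are placeholders for whatever the real entry actually says:

  # Option 1: replace the /tmp line in /etc/fstab with a tmpfs
  tmpfs  /tmp  tmpfs  defaults,mode=1777  0 0

  # Option 2: keep the existing device, but don’t let a timeout kill the boot
  /dev/disk/by-id/ata-WDC_...-part1  /tmp  ext4  defaults,nofail,x-systemd.device-timeout=10s  0 2

I have no idea yet whether that is the right lever, but it seems like the first place to look.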