-
Need to fix broken LVM2 volume
To keep a very, very long story short: I want to clone an existing HDD to a new HDD. I used GNU ddrescue to do the job. Unfortunately, I accidentally sent the image to a file in /tmp on the source HDD. Of course the drive filled up and the ddrescue session stopped. I used rm to discard the unwanted image file and thought that was the end of the matter. Unfortunately, it wasn't. I did a second try at cloning the drive: ddrescue -f /dev/sda /dev/sdb ddrescue.log
The target HDD is unbootable. I missed that it has 512B logical sectors and 4096B physical sectors, while the source drive has 512B logical and physical sectors. A thread on this issue is coming soon to a forum near you...
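In hindsight, the mismatch is easy to check before cloning. A quick sanity check (just a sketch, and assuming the kernel names really are sda for the Hitachi and sdb for the WD):
Code:
cat /sys/block/sda/queue/logical_block_size    # 512 on the Hitachi source
cat /sys/block/sda/queue/physical_block_size   # 512 on the Hitachi source
cat /sys/block/sdb/queue/logical_block_size    # 512 on the WD target
cat /sys/block/sdb/queue/physical_block_size   # 4096 on the WD target (Advanced Format)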
The machine was powered down, SATA cables pulled, and an attempt to boot the target HDD was made. It failed outright: BIOS couldn't find anything to boot. I then reconnected the source HDD and found it would boot up but couldn't finish startup. Even in failsafe mode, things go only so far (trying to start /tmp). What follows was posted at the end of another thread, suffering a bad case of topic drift.
NOTE: Each HDD has two partitions: 70 MB EXT4 (for /boot I assume) and ~690 GB LVM. As far as I can tell, the sizes are identical. After that... see above re: sector size issues.
- - - -
boot.lvm looks for LVM volumes. It finds one, and starts up the swap space as /dev/system/swap
boot.lvm then says "Reading all physical volumes" and then "Found volume group 'system' using metadata type lvm2"
At this point the record is confusing because there seem to be responses from earlier activities. It seems that group "system" goes through a check that is successful.
boot.lvm reports "3 logical volumes on volume group "system" now active"
A check is started on /dev/system/home (later reported as clean)
systemd-fsck reports /dev/sda1: recovering journal (this applies to the 70Mb EXT4 partition) and that is also found to be clean.
/dev/sda1 is remounted and all the signs are that it's in acceptable condition.
Starting /boot
Starting /home
Starting Load Random seed - all get OK's
Somewhere in all of the above, there are two file system checks on /dev/disk/by-id/ata-Hitachi... (the full name for the drive used for booting). They seem to be OK.
I can't find where it starts, but there is a message "Job dev-disk-by\x2did-ata\x2dWDC_...[name of WD drive used to receive clone of Hitachi drive]...part1.device/start timed out. Dependency failed. Aborted start of /tmp".
At this point the dominoes fall over in rapid succession - systemd reports:
Job remote-fs-pre.target/start failed with result "dependency".
Job local-fs-pre.target/start failed with result "dependency".
Triggering OnFailure= dependencies of local-fs.target.
Job tmp.mount/start failed with result "dependency".
Job local-fs-pre.target/start failed with result "dependency".
Job dev-disk-by\x2did-ata\x2dWDC_...[name of WD drive used to receive clone of Hitachi drive]...part1.device/start failed with result 'timeout'.
Welcome to emergency mode (yada yada yada)
boot.lvm: can't deactivate volume group "system" with 3 open logical volumes
systemd reports on the time spent in the startup and the show ends with Give root password for login:
So it appears that everything seen in the LVM volume is fine until reaching /tmp and that's where the file that filled the system lived.
The question, I guess, is how to say "forget what's in /tmp - it's all temporary anyway". Of course, it may well be that anything downstream from /tmp is also chewed up. Until I can get to /tmp, I guess there's no way to be certain about that.
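From that emergency shell, a few stock systemd commands should pin down exactly which unit timed out (a sketch only, nothing specific to this box):
Code:
systemctl --failed           # lists the failed units (tmp.mount, local-fs.target, ...)
systemctl status tmp.mount   # shows why the /tmp mount did not come up
journalctl -b                # full log of the current boot, if the journal is available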
Sometimes I sits and thinks, sometimes I just sits...
-
Re: Need to fix broken LVM2 volume
On 2014-01-23 00:16, RBEmerson wrote:
> boot.lvm looks for LVM volumes. It finds one, and starts up the swap
> space as /dev/system/swap
> boot.lvm then says "Reading all physical volumes" and then "Found volume
> group 'system' using metadata type lvm2"
>
> At this point the record is confusing because there seem to be responses
> from earlier activities. It seems that group "system" goes through a
> check that is successful.
> boot.lvm reports "3 logical volumes on volume group "system" now active"
Just a note: you can not reliably have both disks plugged in at the same time, because, as one is a clone of the other, they have the same identifiers (label, UUID, and "Disk identifier"). The system can get confused by this, until a check is done of the identifiers used in fstab and by GRUB.
So try to boot the original disk alone. Or the cloned copy alone.
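(If both disks ever do have to be connected at once, something like this would show whether labels or UUIDs collide; just a sketch:)
Code:
blkid | sort                   # duplicated LABEL= / UUID= values betray the clone
lsblk -o NAME,SIZE,LABEL,UUID  # the same information, laid out per device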
--
Cheers / Saludos,
Carlos E. R.
(from 12.3 x86_64 "Dartmouth" at Telcontar)
-
Re: Need to fix broken LVM2 volume
I only wish one disk or the other would boot! The old Hitachi (or source) drive at least shows signs of booting (see the summary of the log above), while the WD (or target) drive is utterly silent.
ADDED: For clarity's sake, at this point only one of these two drives is being physically connected. That is, if I try to boot the WD drive, the Hitachi drive is physically disconnected, and vice versa.
Sometimes I sits and thinks, sometimes I just sits...
-
Re: Need to fix broken LVM2 volume
The following is lifted from the thread I started about how to correctly clone two HDDs. In the course of that discussion, the following happened:
[...]I ran dmesg, with the idea of capturing all the info from the startup. My next thought was to pipe the output to a temporary file, trim it down with emacs, and write it to a stick, and post that stuff here. Hmmm... temporary file... hmmm... speaking of temporary files, I wonder what happens when I do "dir /tmp"?
In fact, /dev/sda's (the old Hitachi HDD) /tmp exists and the directory is easily listed. Which says /dev/sda has a functional tmp directory tree.
The problem is a very simple one: /dev/sda's /etc/fstab is broken. It wants to mount the /tmp tree from a drive that doesn't exist! Going through /etc/fstab again:
Code:
/dev/system/swap swap swap defaults 0 0
/dev/system/root / ext4 acl,user_xattr 1 1
/dev/disk/by-id/ata-Hitachi_HDS721075CLA332_JP2740HP04Y2NH-part1 /boot ext4 acl,user_xattr 1 2
/dev/system/home /home ext4 acl,user_xattr 1 2
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
usbfs /proc/bus/usb usbfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
/dev/disk/by-id/ata-WDC_WD7500BPKX-22HPJT0_WD-WX41A93N7736-part1 /tmp ext4 acl,user_xattr 1 2
This line: /dev/disk/by-id/ata-WDC_WD7500BPKX-22HPJT0_WD-WX41A93N7736-part1 /tmp ext4 acl,user_xattr 1 2 should be changed to mount the /tmp found on the Hitachi HDD (AKA /dev/sda). Remember that, at the moment, the WD HDD does not exist. It is physically disconnected from the desktop box.
"All" that's left is to figure out how to mount the Hitachi HDD's /tmp in the right place. I just don't know enough about editing fstab to do the job without making a permanent mess.
Sometimes I sits and thinks, sometimes I just sits...
-
Re: Need to fix broken LVM2 volume
BUMP
No ideas on how to edit the fstab to use the [correct] /tmp in the startup drive instead of, as the above fstab shows, one in another drive?
Wrap this one up and this thread can be tied off, closed, and left to fade to the bottom of the topic list.
Sometimes I sits and thinks, sometimes I just sits...
-
Re: Need to fix broken LVM2 volume
On 2014-01-23 15:06, RBEmerson wrote:
>
> BUMP
>
> No ideas on how to edit the fstab to use the [correct] /tmp in startup
> drive instead, as the above fstab shows, of one in another drive?
You need to start a rescue openSUSE CD or USB stick. For 13.1 I suggest the XFCE image from the openSUSE site; for 12.2 or older, I suggest the GNOME or KDE install/live disks.
But practically any Linux distro should do.
You have to boot that, then mount both the boot partition and, inside the LVM, at least the system partition in the correct structure:
Code:
LVM -> "/"
--- /boot
Then, using any editor on that live system, just edit that fstab file. I
don't know the exact name the "/tmp" will have. It can not be the first
partition of your original disk. It should be somewhere inside that LVM,
or even none at all.
Yes, no line at all should also boot. It will then use the "/tmp"
directory of your "/" partition.
Once this edit is done, you also have to recreate initrd on the original
disk.
Ugh. There are missing details all along the above... I do not know how
to mount LVM devices from a rescue disk.
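For what it is worth, a rough sketch of what that might look like from a rescue shell, assuming the volume group really is called "system" as the boot log says, and mounting under /mnt (the editor and paths are only examples):
Code:
vgscan                         # look for volume groups
vgchange -ay system            # activate the "system" VG
mount /dev/system/root /mnt    # the "/" logical volume
mount /dev/sda1 /mnt/boot      # the small ext4 boot partition
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt
vi /etc/fstab                  # comment out or fix the /tmp line
mkinitrd                       # rebuild the initrd on the original disk
exit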
--
Cheers / Saludos,
Carlos E. R.
(from 12.3 x86_64 "Dartmouth" at Telcontar)
-
Re: Need to fix broken LVM2 volume
All of the discussion below applies to one HDD, the one used as the source in a failed cloning attempt. Startup attempts use only the HDD in question. There is no use of live systems on any additional device (e.g., DVD or stick).
- - - -
AFAIK, much of the file system mounts at startup. That is, from CLI, I can access many of the directory trees on the drive. The failure happens with literally the last entry in the fstab, the line that tries to mount /tmp.
Put another way, I do not have a problem accessing the contents of the LVM structure on the drive.
The primary question is: "how do I eliminate the by-id reference in the line shown below, replacing it with an fstab entry, with no by-id qualifier, that will mount the startup HDD's /tmp properly"?
Code:
/dev/disk/by-id/ata-WDC_WD7500BPKX-22HPJT0_WD-WX41A93N7736-part1 /tmp ext4 acl,user_xattr 1 2
Remember that the by-id entry above shows an unusable ID.
A related question is: "How do I remove all by-id entries, in this fstab, so I can clone the startup system on another drive?"
Sometimes I sits and thinks, sometimes I just sits...
-
Re: Need to fix broken LVM2 volume
If you are sure you are looking at the drive and not the virtual file system of the booted CD/DVD, then just get rid of it; /tmp is not normally a separate mount. I really have no idea where that line came from, unless you specified a separate mount point for /tmp at some time. If there is a separate /tmp partition in the LVM, you could just reference that instead of the WD.
Of course, always back up any important file before you change it, in case you need to drop back.
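For example (the volume name below is purely hypothetical; lvs would show whether any such volume actually exists), with the backup taken first:
Code:
cp /etc/fstab /etc/fstab.bak   # keep a fallback copy
lvs                            # list the logical volumes in the "system" VG
# if a /tmp volume turns up, the fstab line could point at it instead of the WD disk:
# /dev/system/tmp  /tmp  ext4  acl,user_xattr  1 2    (hypothetical volume name)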
-
Re: Need to fix broken LVM2 volume
On 2014-01-23 17:06, RBEmerson wrote:
> Put another way, I do not have a problem accessing the contents of the
> LVM structure on the drive.
Ah, good. That's easier.
> The primary question is: "how do I eliminate the by-id reference in the
> line shown below, replacing it with an fstab entry, with -no- by-id
> qualifier, that will mount the startup HDD's /tmp properly"?
> Code:
> --------------------
>
> /dev/disk/by-id/ata-WDC_WD7500BPKX-22HPJT0_WD-WX41A93N7736-part1 /tmp ext4 acl,user_xattr 1 2
> --------------------
> Remember that the by-id entry above shows an unusable ID.
Just comment it out. Put a '#' in front. Quick and fast. You probably
can use the editor 'joe' or 'vi', both should be available.
This should boot. Instead of a "/tmp" partition, you get a "/tmp"
directory. Available space is much smaller, of course.
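That is, the offending line would simply end up looking like this:
Code:
# /dev/disk/by-id/ata-WDC_WD7500BPKX-22HPJT0_WD-WX41A93N7736-part1 /tmp ext4 acl,user_xattr 1 2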
Warning: the space might be nil, as a result of your previous action of
dumping the clone into there. In that case, just boot in runlevel 1, and
delete that huge image file in /tmp.
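Roughly like this (the image file name is only a placeholder; use whatever name ddrescue was given):
Code:
# append a "1" to the kernel line in the GRUB menu to get runlevel 1, then as root:
df -h /            # check whether "/" really is full
du -sh /tmp/*      # see what is eating the space
rm /tmp/clone.img  # placeholder name; substitute the real image file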
The correct method needs identifying which device has the /tmp "space"
on the LVM. I don't know this. I can make guesses.
It could be somewhere under "/dev/mapper/".
It could be somewhere under "/dev/disk/by-id/"
Or any of the available "/dev/disk/by-*/"
You can run "blkid", and get a listing of what filesystem devices are
available on your system. On mine I get this (not complete):
Code:
> cer@Telcontar:~> blkid
> /dev/sda1: LABEL="a_boot_1" UUID="93f0311e-2a93-49ca-b836-d362ffc84486" TYPE="ext2"
> /dev/sda2: LABEL="a_boot_2" UUID="5135ab82-1374-4c30-b9d0-4b56d6d6d6c6" TYPE="ext2"
> /dev/sda3: LABEL="a_boot_3" UUID="6d9d4270-3fd1-4027-9316-c614d6c090e4" TYPE="ext2"
> /dev/sda5: LABEL="a_one" UUID="9404bbbd-9eeb-4fb9-96b1-1c42b0f776ff" TYPE="reiserfs"
> /dev/sda6: LABEL="a_swap" UUID="4c547811-211b-4d16-9efa-e426e5d77d2c" TYPE="swap"
> /dev/sda7: LABEL="a_main" UUID="0381840a-71fa-4d58-96bb-bc8f8da80ef7" TYPE="ext4"
> /dev/sda8: LABEL="a_vmware" UUID="63896768-f881-4ff9-84dd-f3dd95580d80" TYPE="xfs"
> /dev/sda9: LABEL="a_test2" UUID="00eb9a40-d067-459e-a22f-1d3b667dddbb" TYPE="ext4"
> /dev/sda10: LABEL="a_test3" UUID="b3e6b180-3ee5-45f5-b27a-6b2cd9c18d67" TYPE="reiserfs"
> /dev/sda11: UUID="825b22e8-af55-0e83-9372-7666fb8987fd" UUID_SUB="4256daee-2d7c-e3bb-9535-8c8d6415c578" LABEL="Telcontar:0" TYPE="linux_raid_member"
....
> /dev/md0: LABEL="raid5" UUID="451fb568-860a-4ee3-b238-1423bfb0a034" TYPE="xfs"
Notice that it does not list "id", but "uuid". Both "uuid" and "label"
are cloned, the rest are not - which answers your next question.
> A related question is: "How do I remove -all- by-id entries, in this
> fstab, so I can clone the startup system on another drive?"
Just replace the entries with the equivalent 'uuid' or 'label' entries.
Your choice :-)
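To illustrate with entries from my own blkid listing above (your values and mount options will differ; this only shows the shape of the lines):
Code:
UUID=93f0311e-2a93-49ca-b836-d362ffc84486  /boot  ext2  acl,user_xattr  1 2
LABEL=a_boot_1                             /boot  ext2  acl,user_xattr  1 2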
You can get all of the entries with this:
Code:
ls -l /dev/disk/by-*
Notice that they are symlinks to entries of the type "../../sda10". Do
not use those in your fstab, only to identify which entries point to the
same destination - like this:
Code:
> cer@Telcontar:~> ls -l /dev/disk/by-* | grep sda10
> lrwxrwxrwx 1 root root 11 Jan 7 14:57 ata-ST3500418AS_5VM2RSY4-part10 -> ../../sda10
> lrwxrwxrwx 1 root root 11 Jan 7 14:57 scsi-1ATA_ST3500418AS_5VM2RSY4-part10 -> ../../sda10
> lrwxrwxrwx 1 root root 11 Jan 7 14:57 scsi-SATA_ST3500418AS_5VM2RSY4-part10 -> ../../sda10
> lrwxrwxrwx 1 root root 11 Jan 7 14:57 wwn-0x5000c5001914354b-part10 -> ../../sda10
> lrwxrwxrwx 1 root root 11 Jan 7 14:57 a_test3 -> ../../sda10
> lrwxrwxrwx 1 root root 11 Jan 7 14:57 pci-0000:00:1f.2-scsi-0:0:0:0-part10 -> ../../sda10
> lrwxrwxrwx 1 root root 11 Jan 7 14:57 b3e6b180-3ee5-45f5-b27a-6b2cd9c18d67 -> ../../sda10
The 'a_test3' is the label, and the 'b3e6b180-...' is the uuid.
--
Cheers / Saludos,
Carlos E. R.
(from 12.3 x86_64 "Dartmouth" at Telcontar)
-
Re: Need to fix broken LVM2 volume
The following comments were posted to an earlier thread, now closed. The comments are quite relevant to this thread.
 Originally Posted by gogalthorp
You can use by-label, but then you must ensure that any mounted file system has a unique label. IMO that is the best way to ensure a cloned boot. But again, you can not have both drives mounted at the same time because of duplicate naming.
By-id is OK, but then you must edit the fstab and reinstall GRUB and maybe rebuild the initrd to reflect the new IDs. But you can have both drives mounted at the same time.
Using /dev/sdx# naming avoids this problem, but the sdx can change depending on what is attached to the system.
So there is no perfect way; it depends on how you plan to use the system.
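As a hedged aside on the label route: for ext2/3/4, a label can be set or changed with e2label, for example (the names are only examples):
Code:
e2label /dev/sda1 hitachi_boot          # label the boot partition
e2label /dev/system/home hitachi_home   # label a logical volume's filesystem
# swap labels are set when the swap area is created, e.g. mkswap -L <label>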
Sometimes I sits and thinks, sometimes I just sits...