How to mount an LVM mirror set from a different load?

Hello everyone,
I have 4 sata hard drives (2) 500GB and (2) 1.5TB. I loaded opensuse 11.1 and created an LVM mirror raid 1 set on the 500GB’s, for boot, swap, and root, and another mirror set of 1.5TB drives for data.
After all was done, I eventually wanted to start over so I unplugged the 1.5TB raid set and wiped the first 11.1 load and reloaded opensuse 11.1 again and now really like my load.
Here is where I need help, I plugged back in the 1.5TB drives to mount the data volume into my current suse load, but don’t know what to do since the drives are mirrored with LVM.
Can someone help me get started? I think the drives are at the bottom of the fdisk - sdc and sdd.

Here is a copy of my current fstab:
/dev/md1 swap swap defaults 0 0
/dev/md2 / ext3 acl,user_xattr 1 1
/dev/md0 /boot ext3 acl,user_xattr 1 2
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
usbfs /proc/bus/usb usbfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
/dev/disk/by-id/ata-ST3500630AS_5QG0RACG-part4 /xens ext2 acl,user_xattr 1 2

And here is a copy of my current fdisk output:
Device Boot Start End Blocks Id System
/dev/sda1 2 13 96390 fd Linux raid autodetect
/dev/sda2 14 2101 16771860 fd Linux raid autodetect
/dev/sda3 2102 54317 419425020 fd Linux raid autodetect
/dev/sda4 54318 60801 52082730 83 Linux

/dev/sdc1 1 130542 1048578583+ fd Linux raid autodetect
/dev/sdd1 1 130542 1048578583+ fd Linux raid autodetect

Any help on this would be really great.

Having upgraded from SUSE 10.x to 11.1, I had a similar problem: the SUSE 11.1 disk manager did not understand my LVM configuration at all and was offering to wipe the drives for me, so I opted to pop out the drives until I had done my base install.

As a result, now that the system is up and running again with 11.1, there is apparently no GUI way to non-destructively create an LVM configuration from existing LVM PV/VG/LV data.

So I did this to recover the VGs/PVs without data loss:

Recovery of RAID and LVM2 Volumes
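For anyone following along, the general shape of that recovery looks roughly like this. This is a sketch only, assuming the mirror is an mdadm RAID 1 array holding an LVM physical volume; the md device, VG name ("datavg"), LV name, and mount point below are example assumptions, not taken from the thread:

```shell
# Re-assemble the RAID 1 array from its member partitions
# (mdadm can usually find them itself from the superblocks)
mdadm --assemble --scan
# or explicitly, e.g.: mdadm --assemble /dev/md3 /dev/sdc1 /dev/sdd1

# Let LVM rediscover physical volumes and volume groups on the array
pvscan
vgscan

# Activate the volume group (the name "datavg" is an assumption here)
vgchange -ay datavg

# List the logical volumes, then mount the data LV somewhere
lvs
mkdir -p /mnt/data
mount /dev/datavg/data /mnt/data
```

The key point is that the LVM metadata lives on the PV itself, so once the md array is assembled, a scan plus `vgchange -ay` should bring the volumes back without touching the data.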

And this has worked for me. But now I have the problem that boot.lvm won't automatically execute at boot time, possibly because I skipped the "official" SUSE way, which only lets you create LVMs after re-initializing your drives (eep!).

So while I can't yet put my filesystems in /etc/fstab, since boot.lvm has to have run to make the VGs/PVs visible, the install went relatively well and I have my data.

(Presently, the RAID LVs don't exist during boot because boot.lvm has not run, and the fstab check stops my system from booting because it can't check the drives.)
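One way to keep a missing volume from halting the boot is to stop fstab from fscking or auto-mounting it until boot.lvm is sorted out: set the sixth field (fs_passno) to 0 so fsck skips it, and add noauto so boot doesn't try to mount it. The device and mount point here are example assumptions, not my actual fstab:

```
/dev/datavg/data  /data  ext3  acl,user_xattr,noauto  0 0
```

You then mount it by hand (or from a late boot script) once the LVs exist.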

Well, this sounds bad, but at least I have my data back, and hopefully today I can debug why boot.lvm is not automatically running (maybe I am missing a flag somewhere to start LVM?).


Because I did not use the GUI to add the LVM data, the symbolic link S10boot.lvm, which executes boot.lvm on boot, was missing from /etc/init.d/boot.d.

So I used the "advanced" YaST view of System Services (Runlevel) to enable it (the simple view does not show boot.d scripts).

All good now… perhaps somebody might find this information useful.
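The same thing can be done from the command line instead of YaST; a sketch, assuming the standard SysV init layout that SUSE 11.1 uses:

```shell
# Enable the boot.lvm init script so its boot.d symlink is created
insserv /etc/init.d/boot.lvm
# or, equivalently on SUSE:
chkconfig boot.lvm on

# Verify that the S??boot.lvm symlink now exists
ls -l /etc/init.d/boot.d/ | grep boot.lvm
```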

Sounds wonderful! I'll have to try this out some time, since I had to delete the RAID set and re-create the LVM RAID set under SUSE 11.1 a while ago. I'll try to duplicate your effort some time and report back.