I'm not sure if this is the right group; if there's a better one,
please let me know …
To an existing openSUSE 10.3 system with two 80 GB SATA hard disk
drives (sda, sdb) that have become too small, I have added two
500 GB disks (sdc, sdd) which I want to run as a mirrored pair.
To this end, I have partitioned both new disks identically, with a
500 MB ext2 partition (sdc1/sdd1) that may one day become a /boot
partition, and a Linux RAID partition (sdc2/sdd2) taking up the
remaining capacity. I have joined sdc2 and sdd2 into a RAID1 array
(md0), which I made the sole PV of an LVM VG called, very
creatively, “RAID”.
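For reference, the setup is roughly equivalent to the following
commands (reconstructed from memory, so the exact options may
differ):

  # create the mirror with an internal write-intent bitmap
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        --bitmap=internal /dev/sdc2 /dev/sdd2
  # put LVM on top of the mirror
  pvcreate /dev/md0
  vgcreate RAID /dev/md0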
This is all working fine, but each time I reboot the machine, LVM
complains, on the boot console and in /var/log/boot.msg:
Found duplicate PV Lgrl5nNfenRUg9bIwM20q1hfMrWylyyL: using /dev/sdd2 not /dev/sdc2
3 logical volume(s) in volume group “system” now active
Found duplicate PV Lgrl5nNfenRUg9bIwM20q1hfMrWylyyL: using /dev/sdd2 not /dev/sdc2
3 logical volume(s) in volume group “system” now active
This happens well after these partitions have been claimed by md
for the mirror, as witnessed earlier in the boot log:
md: bind<sdd2>
md: bind<sdc2>
md: raid1 personality registered for level 1
raid1: raid set md0 active with 2 out of 2 mirrors
md0: bitmap initialized from disk: read 30/30 pages, set 5 bits
created bitmap (466 pages) for device md0
Also, “vgdisplay --verbose RAID” correctly displays:
--- Physical volumes ---
PV Name /dev/md0
PV UUID Lgrl5n-Nfen-RUg9-bIwM-20q1-hfMr-WylyyL
PV Status allocatable
Total PE / Free PE 119108 / 32324
so apparently LVM is really using /dev/md0, and not /dev/sdd2 as
those two strange messages claim. So what are those messages trying
to tell me? Is my VG now using disk mirroring as intended, or isn't
it?
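Could this be related to the device filter in /etc/lvm/lvm.conf?
I have not changed it from the distribution default. I am guessing
that something along these lines (not applied yet) would tell LVM
to ignore the raw component partitions and scan only the md device,
but I would rather understand the messages before changing anything:

  # guess, in the devices { } section of /etc/lvm/lvm.conf:
  # reject the RAID component partitions, accept everything else
  filter = [ "r|/dev/sd[cd]2$|", "a|.*|" ]
  # or is md_component_detection = 1 supposed to take care of this?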
And another thing: when running “top” I notice a process
“md0_raid1” frequently consuming 15% CPU even when the machine is
otherwise completely idle.
Is that normal, or a sign of trouble?
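In case it helps with diagnosing either issue, I can post the
output of the following; I have not yet checked whether an initial
resync or bitmap housekeeping might explain that CPU usage:

  cat /proc/mdstat
  mdadm --detail /dev/md0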
Thanks,
--
Tilman Schmidt
Phoenix Software GmbH
Bonn, Germany