mdadm at boot time

I’m planning to migrate from Ubuntu Server to openSUSE server.
I’m testing the migration in a virtual machine and I’m having problems with mdadm.
Every time openSUSE boots I need to run mdadm --assemble /dev/md0 and then activate the volume group with vgchange -ay VGNAME.
At boot time boot.md and mdadmd are running (the startup scripts only scan), but I believe mdadm is not assembling the array, because I have to assemble it manually every time.
Any idea?

Thanks
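
For reference, these are the exact commands I have to run after every boot before the data is reachable (VGNAME stands for my real volume group name):

# manual steps needed after each boot of the openSUSE guest
mdadm --assemble /dev/md0     # assemble the RAID 5 array
vgchange -ay VGNAME           # activate the volume group sitting on it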

qsummon wrote:

>
> I’m planning to migrate from Ubuntu Server to openSUSE server.
> I’m testing the migration in a virtual machine and I’m having problems
> with mdadm. Every time openSUSE boots I need to run
> mdadm --assemble /dev/md0 and then activate the volume group with
> vgchange -ay VGNAME.
> At boot time boot.md and mdadmd are running (the startup scripts only
> scan), but I believe mdadm is not assembling the array, because I have
> to assemble it manually every time.
> Any idea?

Try creating /etc/mdadm.conf
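
Something along these lines should populate it (a sketch; assemble the array by hand first and check the output before appending):

# assemble once manually, then record the array definition
mdadm --assemble /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf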


Per Jessen, Zürich (10.6°C)
http://en.opensuse.org/User:pjessen

Thanks, Jessen
Already got one, here is a copy.
Any idea why I need to assemble the RAID every time I boot?

Thanks

nfs-test01:~ # cat /etc/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

ARRAY /dev/md0 level=raid5 auto=md metadata=00.90 UUID=5d1bb32b:da6964c8:46a7ba41:b057fcdb

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <nfs-test01>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Thu, 29 Oct 2009 21:44:05 -0500
# by mkconf $Id$

On 2010-10-10 08:06, qsummon wrote:
>
> I’m planning to migrate from Ubuntu Server to openSUSE server.
> I’m testing the migration in a virtual machine and I’m having problems
> with mdadm.
> Every time openSUSE boots I need to run mdadm --assemble /dev/md0 and
> then activate the volume group with vgchange -ay VGNAME.

The partition type is fd?
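
A quick way to check, assuming sdb is one of the member disks:

# RAID members should show type "fd" (Linux raid autodetect)
# if boot-time autodetection is expected to kick in
fdisk -l /dev/sdb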


Cheers / Saludos,

Carlos E. R.
(from 11.2 x86_64 “Emerald” at Telcontar)

qsummon wrote:

>
> Thanks, Jessen
> Already got one, here is a copy.
> Any idea why I need to assemble the RAID every time I boot?
>

Are the partition types set up as 0xFD ?


Per Jessen, Zürich (9.9°C)
http://en.opensuse.org/User:pjessen

No partition type. I created the VG and LVM logical volume directly on the disk, it was working with no issues, and Ubuntu recognizes it at boot time. I believe it is not necessary to create a partition for it.
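
For reference, this is roughly how the stack was built, straight on the md device with no partition table (a sketch; lvtest fills the whole VG, as the listings below show):

# LVM directly on the bare RAID device
pvcreate /dev/md0
vgcreate vgtest /dev/md0
lvcreate -l 100%FREE -n lvtest vgtest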

Here is the output from fdisk, lvdisplay, and vgdisplay on the virtual machine I’m using for the test (I have the same setup on my physical production server):

FDISK
Disk /dev/md0: 3219 MB, 3219652608 bytes
2 heads, 4 sectors/track, 786048 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn’t contain a valid partition table

LVDISPLAY
  --- Logical volume ---
  LV Name                /dev/vgtest/lvtest
  VG Name                vgtest
  LV UUID                JEZ2eO-hPid-DS73-VZj7-ir7w-kJdp-80vBIO
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                3.00 GiB
  Current LE             767
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:7

VGDISPLAY

nfs-test01:~ # vgdisplay -v vgtest
    Using volume group(s) on command line
    Finding volume group "vgtest"
  --- Volume group ---
  VG Name               vgtest
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.00 GiB
  PE Size               4.00 MiB
  Total PE              767
  Alloc PE / Size       767 / 3.00 GiB
  Free  PE / Size       0 / 0
  VG UUID               VfCfkF-8SwO-fn1A-3OWr-B1yC-MIJ3-zjfV1T

  --- Logical volume ---
  LV Name                /dev/vgtest/lvtest
  VG Name                vgtest
  LV UUID                JEZ2eO-hPid-DS73-VZj7-ir7w-kJdp-80vBIO
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                3.00 GiB
  Current LE             767
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:7

  --- Physical volumes ---
  PV Name               /dev/md0
  PV UUID               OnNyZr-5Csu-Wq3z-Ofb9-C1dX-27Qo-xMt9cG
  PV Status             allocatable
  Total PE / Free PE    767 / 0

I went through the following exercise on the openSUSE server (virtual machine):
I deleted the 3 drives I was using for the mdadm RAID (that RAID was created on the Ubuntu server),
created 3 new drives, and created a new mdadm RAID:
mdadm --create /dev/md0 --chunk=128 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

My first assumption was that the old RAID was not working at boot time because it had been created on a different OS, but this NEW RAID is still not working at boot time either.
I have to manually run mdadm --assemble -s in order to see the disks.
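
For what it’s worth, the superblocks can also be double-checked before assembling (a sketch, using the whole-disk members created above):

# confirm each member disk carries an md superblock
mdadm --examine /dev/sdb /dev/sdc /dev/sdd
# then scan-assemble verbosely to see what mdadm finds
mdadm --assemble --scan --verbose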

Just a quick reminder of my project:
I have a physical server I want to migrate from Ubuntu to openSUSE.
I’m testing the migration on a virtual machine I built with the same parameters as the physical one (just smaller drives).
I assigned the 3 disks to the openSUSE server, and mdadm is not working at boot time.

Thank you very much
Marcelo

Why not use the YaST partitioner to create the RAID filesystem when installing? It should result in the correct initrd sequence to detect and assemble the RAID, and it should also put the required modules in the initrd.

What does your GRUB kernel line look like? Hopefully something like this?

kernel /boot/vmlinuz-2.6.31.12-0.1-default root=/dev/md0 blah blah

I haven’t checked, but I think that root=/dev/md0 will cause the initrd to assemble the RAID for the root filesystem, so that it can be mounted and the boot sequence can continue with a valid /etc. After that, /etc/mdadm.conf should direct the assembly of further filesystems. As you realise, the RAID for / has to be a special case.
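
If the initrd turns out to lack md support, rebuilding it may help. This is a hedged sketch from memory of openSUSE’s mkinitrd; check man mkinitrd on 11.3 before running it:

# pull the md and lvm2 features into the initrd, then reboot to test
mkinitrd -f "md lvm2"
# peek inside the rebuilt (gzipped cpio) initrd for the mdadm bits
zcat /boot/initrd | cpio -it | grep -i mdadm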

The boot and OS partitions (root, /etc) are not on an mdadm array; I’m booting from a single SSD.
I want to be able to recognize an existing RAID 5 I already created on Ubuntu, not create a new one.
How do I get access to the YaST partitioner? I’m only seeing the general settings under the /etc/sysconfig editor.
I’m running openSUSE 11.3 as a server.

Thanks

qsummon wrote:

>
> No partition type. I created the VG and LVM logical volume directly on
> the disk, it was working with no issues, and Ubuntu recognizes it at
> boot time. I believe it is not necessary to create a partition for it.

Can we see the output from “cat /proc/mdstat” please?

I’m not concerned with your LVM setup, that’s not where the problem is.


Per Jessen, Zürich (8.6°C)
http://en.opensuse.org/User:pjessen

On 2010-10-11 16:06, qsummon wrote:
>
> I went through the following exercise on the openSUSE server (virtual
> machine):
> I deleted the 3 drives I was using for the mdadm RAID (that RAID was
> created on the Ubuntu server), created 3 new drives, and created a new
> mdadm RAID:
> mdadm --create /dev/md0 --chunk=128 --level=5 --raid-devices=3 /dev/sdb
> /dev/sdc /dev/sdd
>
> My first assumption was that the old RAID was not working at boot time
> because it had been created on a different OS, but this NEW RAID is
> still not working at boot time either.
> I have to manually run mdadm --assemble -s in order to see the disks.
>
> Just a quick reminder of my project:
> I have a physical server I want to migrate from Ubuntu to openSUSE.
> I’m testing the migration on a virtual machine I built with the same
> parameters as the physical one (just smaller drives).
> I assigned the 3 disks to the openSUSE server, and mdadm is not working
> at boot time.

You will have to expand the explanation of the disk layout as seen by the guest. I thought you had a
software RAID made of partitioned disks. In that setup, partitions have to be of type FD for Linux
to attempt to assemble the array during boot.
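
Setting the type goes roughly like this (a sketch; sdb is a hypothetical member disk):

fdisk /dev/sdb
# at the fdisk prompt:
#   t   - change a partition's type
#   fd  - Linux raid autodetect
#   w   - write the table and quit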

Now it seems there is an LVM setup on the hard disks, and RAID on top of the LVM. Or is it the other
way round? Please explain in detail.

I don’t have this clear, and anyway, since I don’t like LVM and don’t understand it, I withdraw.


Cheers / Saludos,

Carlos E. R.
(from 11.2 x86_64 “Emerald” at Telcontar)

This is the output before manually assembling the RAID:

nfs-test01:~ # more /proc/mdstat
Personalities : [linear]
unused devices: <none>

This is the output after running mdadm --assemble:
nfs-test01:/etc # mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 4 drives.

nfs-test01:/etc # cat /proc/mdstat
Personalities : [linear] [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sde[3] sdd[2] sdc[1]
3144192 blocks level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

Thanks

I use volume groups and logical volumes. I’m not using partition tables (I believe they are not needed). I have the same setup in all my VMs and on my physical server.
Here is the complete disk layout (see the command sketch after the list):

DISK 1
/boot (only ext3 non-lvm)

Volume group VGROOT
Logical Volumes under it
/home
/opt
/root
/swap
/tmp
/usr
/var

DISKS 2, 3, 4 (set up as RAID 5)
Volume Group VGTEST
Logical Volume lvtest
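
The same layout can be printed from the live system like this (a sketch):

# one-line summaries of physical volumes, volume groups and logical volumes
pvs
vgs
lvs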

On 2010-10-11 21:06, qsummon wrote:

> I want to be able to recognize an existing RAID 5 I already created on
> Ubuntu, not create a new one.

But you would learn how YaST creates it. I think it would perhaps copy /etc/mdadm.conf to the initrd, or
something of the sort.
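
If the initrd is the missing piece, a hedged way to get RAID support into it on 11.x (the module name is an assumption; on newer kernels raid5 lives in raid456):

# 1. append the RAID module to INITRD_MODULES in /etc/sysconfig/kernel,
#    e.g. INITRD_MODULES="... raid456"
# 2. then rebuild the initrd:
mkinitrd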


Cheers / Saludos,

Carlos E. R.
(from 11.2 x86_64 “Emerald” at Telcontar)

I couldn’t find the YaST partitioner. The only thing I’m seeing is mdadm in the /etc/sysconfig editor (in YaST).
Where is the partitioner?
I’m using openSUSE 11.3.

YaST > System > Partitioner

and if you don’t have it, perhaps you don’t have the yast2-storage package installed?
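
For example:

# install the YaST storage/partitioner module
zypper install yast2-storage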

On 2010-10-12 15:06, qsummon wrote:
>
> I couldn’t find yast partitioner. The only thing I’m seeing is mdadm in
> /etc/sysconfig Editor (in yast)
> where is the partitioner?
> I’m using Open Suse 11.3

What? YaST is the jewel in the crown, the single thing that differentiates SUSE/openSUSE from any
other distro.

It is a program. It is named “yast” in the menu, or run “yast2” in a root xterm. Then go to the
System section; the partitioner is there.

But in order to learn how YaST sets up such a system, you would need to do this from the installer, I
think.


Cheers / Saludos,

Carlos E. R.
(from 11.2 x86_64 “Emerald” at Telcontar)

I’m not seeing the partitioner under System.
I’m only seeing
/etc/sysconfig Editor
Date and Time
Language
System services (run level)

On 2010-10-12 19:06, qsummon wrote:
>
> I’m not seeing the partitioner under System.
> I’m only seeing
> /etc/sysconfig Editor
> Date and Time
> Language
> System services (run level)

Then start the software manager module, search for things with “yast” in the name, and start adding
modules.


Cheers / Saludos,

Carlos E. R.
(from 11.2 x86_64 “Emerald” at Telcontar)

qsummon wrote:

>
> This is the output before manually assembling the RAID:
>
> nfs-test01:~ # more /proc/mdstat
> Personalities : [linear]
> unused devices: <none>
>
> This is the output after running mdadm --assemble:
> nfs-test01:/etc # mdadm --assemble /dev/md0
> mdadm: /dev/md0 has been started with 4 drives.
>
> nfs-test01:/etc # cat /proc/mdstat
> Personalities : [linear] [raid6] [raid5] [raid4]
> md0 : active raid5 sdb[0] sde[3] sdd[2] sdc[1]
> 3144192 blocks level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
>
> unused devices: <none>
>
> Thanks

Okay, no partition types needed.

I took another look at your mdadm.conf - your ARRAY lines are a little
different to what I have on a vanilla openSUSE installation:

DEVICE containers partitions
ARRAY /dev/md0 UUID=8b34420b:eafcc774:19203a7a:fb36d9f8
ARRAY /dev/md1 UUID=c7b3e5d5:f1815443:776c2c25:004bd7b2
ARRAY /dev/md2 UUID=19a4169b:6dcc4e04:776c2c25:004bd7b2

I don’t know if your extra parameters make a difference though.
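
If you want to experiment, you could try the minimal form with your own UUID (back up /etc/mdadm.conf first; the UUID below is copied from your posted file):

ARRAY /dev/md0 UUID=5d1bb32b:da6964c8:46a7ba41:b057fcdb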


Per Jessen, Zürich (9.4°C)
http://en.opensuse.org/User:pjessen