I’m planning to migrate from Ubuntu Server to openSUSE server.
I’m testing the migration in a virtual machine, and I’m having problems with mdadm.
Every time openSUSE boots I need to run mdadm --assemble /dev/md0 and then activate the volume group with vgchange -ay VGNAME.
During boot, boot.md and mdadmd are running, but the startup scripts only scan; I believe mdadm is not assembling the array, since it needs to be assembled manually every time.
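For reference, this is the exact sequence I run by hand after every boot (VG/LV names are from my test VM; the mount point is just an example):

mdadm --assemble /dev/md0            # assemble the RAID 5 array
vgchange -ay vgtest                  # activate the volume group on top of it
mount /dev/vgtest/lvtest /srv/data   # example mount point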
Any idea?
On 2010-10-10 08:06, qsummon wrote:
>
> I’m planning to migrate from ubuntu server to open suse server
> I’m testing the migration in a virtual machine and I’m having problems
> with mdadm
> Every time Open suse boots I need to run mdadm --assemble /dev/md0 and
> activate the volume group vgchange -ay VGNAME
Is the partition type fd?
–
Cheers / Saludos,
Carlos E. R.
(from 11.2 x86_64 “Emerald” at Telcontar)
No partition type. I created the VG and LVM directly on the disk; it was working with no issues, and Ubuntu recognizes it at boot time. I believe it is not necessary to create a partition for it.
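Roughly how the LVM was created on top of the array (a sketch; names and sizes are from my test VM):

pvcreate /dev/md0                 # whole md device as a physical volume, no partition table
vgcreate vgtest /dev/md0          # volume group directly on /dev/md0
lvcreate -L 3G -n lvtest vgtest   # 3 GiB logical volume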
Here is the output from fdisk, lvdisplay, and vgdisplay from the virtual machine I’m using for the test (I have the same setup on my physical production server):
Disk /dev/md0 doesn’t contain a valid partition table
LVDISPLAY
--- Logical volume ---
LV Name /dev/vgtest/lvtest
VG Name vgtest
LV UUID JEZ2eO-hPid-DS73-VZj7-ir7w-kJdp-80vBIO
LV Write Access read/write
LV Status available
open 0
LV Size 3.00 GiB
Current LE 767
Segments 1
Allocation inherit
Read ahead sectors auto
currently set to 6144
Block device 253:7
VGDISPLAY
nfs-test01:~ # vgdisplay -v vgtest
Using volume group(s) on command line
Finding volume group "vgtest"
--- Volume group ---
VG Name vgtest
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 8
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 3.00 GiB
PE Size 4.00 MiB
Total PE 767
Alloc PE / Size 767 / 3.00 GiB
Free PE / Size 0 / 0
VG UUID VfCfkF-8SwO-fn1A-3OWr-B1yC-MIJ3-zjfV1T
--- Logical volume ---
LV Name /dev/vgtest/lvtest
VG Name vgtest
LV UUID JEZ2eO-hPid-DS73-VZj7-ir7w-kJdp-80vBIO
LV Write Access read/write
LV Status available
open 0
LV Size 3.00 GiB
Current LE 767
Segments 1
Allocation inherit
Read ahead sectors auto
currently set to 6144
Block device 253:7
--- Physical volumes ---
PV Name /dev/md0
PV UUID OnNyZr-5Csu-Wq3z-Ofb9-C1dX-27Qo-xMt9cG
PV Status allocatable
Total PE / Free PE 767 / 0
I went through the following exercise on the openSUSE server (virtual machine).
I deleted the 3 drives I was using for the mdadm RAID (this RAID was created on Ubuntu Server).
I created 3 new drives and created a new mdadm RAID:
mdadm --create /dev/md0 --chunk=128 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
My first assumption was that the old RAID was not working at boot time because it was created on a different OS, but this NEW RAID is still not working at boot time either.
I have to manually run mdadm --assemble -s in order to see the disks.
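One thing I suspect (a sketch, not yet tested on my side): openSUSE seems to assemble at boot only the arrays listed in /etc/mdadm.conf, so the array definition may need to be persisted there, e.g.:

mdadm --detail --scan >> /etc/mdadm.conf   # append an ARRAY line describing /dev/md0
insserv boot.md                            # make sure the boot.md script is enabled (assumption)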
Just a quick reminder of my project.
I have a physical server I want to migrate from Ubuntu to openSUSE.
I’m testing the migration on a virtual machine I built with the same parameters as the physical one (just smaller drives).
I assigned the 3 disks to the openSUSE server, and mdadm is not working at boot time.
Why not use the YaST partitioner to create the RAID filesystem when installing? It should result in the correct initrd sequence to detect and assemble the RAID, and it should also put the required modules in the initrd.
What does your GRUB kernel line look like? Hopefully something like this:
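kernel /boot/vmlinuz root=/dev/md0 ro

(Just a sketch; the important part is root=/dev/md0.)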
I haven’t checked, but I think that root=/dev/md0 will cause the initrd to assemble the RAID for the root filesystem so that it can be mounted and the boot sequence continues with a valid /etc. After that /etc/mdadm.conf should direct the assembly of further filesystems. As you realise, the RAID for / has to be a special case.
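For example, /etc/mdadm.conf could contain something like this (UUID illustrative; the real line comes from mdadm --detail --scan):

DEVICE /dev/sdb /dev/sdc /dev/sdd
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx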
The boot and OS partitions (root, /etc) are not on an mdadm array; I’m booting from a single SSD.
I want openSUSE to recognize an existing RAID 5 I already created on Ubuntu, not to create a new one.
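To check that the old array is at least visible to openSUSE, something like this should print the Ubuntu-created superblock (sketch):

mdadm --examine /dev/sdb   # show the RAID superblock of one member disk, if present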
How do I get access to the YaST partitioner? I’m only seeing the general settings under the /etc/sysconfig Editor.
I’m running openSUSE 11.3 as a server.
>
> No partition type. I created the VG and LVM directly on the disk; it
> was working with no issues, and Ubuntu recognizes it at boot time. I
> believe it is not necessary to create a partition for it.
Can we see the output from “cat /proc/mdstat” please?
I’m not concerned with your LVM setup, that’s not where the problem is.
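For a healthy 3-disk RAID 5 it should look roughly like this (block count illustrative):

Personalities : [raid5] [raid4]
md0 : active raid5 sdd[2] sdc[1] sdb[0]
      3145600 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]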
On 2010-10-11 16:06, qsummon wrote:
>
> I went through the following exercise on the openSUSE server (virtual
> machine).
> I deleted the 3 drives I was using for the mdadm RAID (this RAID was
> created on Ubuntu Server).
> I created 3 new drives and created a new mdadm RAID:
> mdadm --create /dev/md0 --chunk=128 --level=5 --raid-devices=3 /dev/sdb
> /dev/sdc /dev/sdd
>
> My first assumption was that the old RAID was not working at boot time
> because it was created on a different OS, but this NEW RAID is still
> not working at boot time either.
> I have to manually run mdadm --assemble -s in order to see the disks.
>
> Just a quick reminder of my project.
> I have a physical server I want to migrate from Ubuntu to openSUSE.
> I’m testing the migration on a virtual machine I built with the same
> parameters as the physical one (just smaller drives).
> I assigned the 3 disks to the openSUSE server, and mdadm is not working
> at boot time.
You will have to expand the explanation of the disk layout as seen by the guest. I thought you had a software RAID made of partitioned disks. In that setup, partitions have to be of type FD for Linux to attempt to assemble the array during boot.
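For example, with fdisk the type is changed like this (sketch of the interactive session):

fdisk /dev/sdb
  t     # change a partition’s type
  fd    # “Linux raid autodetect”
  w     # write the table and exit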
Now it seems there is an LVM setup on the hard disks, and RAID on top of the LVM. Or is it the other way round? Please explain in detail.
This isn’t clear to me, and anyway, as I don’t like LVM and don’t understand it, I withdraw.
–
Cheers / Saludos,
Carlos E. R.
(from 11.2 x86_64 “Emerald” at Telcontar)
I use volume groups and logical volumes. I’m not using partition tables (I believe they’re not needed). I have the same setup in all my VMs and on my physical server.
Here is the complete layout of the disks:
DISK 1
/boot (only ext3 non-lvm)
Volume group VGROOT
Logical Volumes under it
/home
/opt
/root
/swap
/tmp
/usr
/var
DISK 2, 3, 4 (set as RAID 5)
Volume Group VGTEST
Logical Volume lvtest
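So the stacking is, bottom to top: /dev/sdb + /dev/sdc + /dev/sdd -> RAID 5 /dev/md0 -> LVM PV -> VG vgtest -> LV lvtest. These LVM commands confirm it:

pvs -o pv_name,vg_name   # shows /dev/md0 belonging to vgtest
lvs -o lv_name,vg_name   # shows lvtest inside vgtest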
I couldn’t find the YaST partitioner. The only thing I’m seeing is mdadm in the /etc/sysconfig Editor (in YaST).
Where is the partitioner?
I’m using openSUSE 11.3.
On 2010-10-12 15:06, qsummon wrote:
>
> I couldn’t find the YaST partitioner. The only thing I’m seeing is
> mdadm in the /etc/sysconfig Editor (in YaST).
> Where is the partitioner?
> I’m using openSUSE 11.3.
What? YaST is the jewel in the crown, the single thing that differentiates SUSE/openSUSE from any other distro.
It is a program. It is named “yast” in the menu, or you can call “yast2” in a root xterm. Then go to the System section; the partitioner is there.
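If the module is installed, it can also be started directly from a root shell (module name is my assumption):

yast2 disk   # should open the expert partitioner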
But in order to learn how YaST sets up such a system, you would need to do this from the installer, I think.
–
Cheers / Saludos,
Carlos E. R.
(from 11.2 x86_64 “Emerald” at Telcontar)
On 2010-10-12 19:06, qsummon wrote:
>
> I’m not seeing the partitioner under System.
> I’m only seeing
> /etc/sysconfig Editor
> Date and Time
> Language
> System services (run level)
Then start the software manager module, search for things with “yast” in the name, and start adding
modules.
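For example (package name from memory, so treat it as an assumption):

zypper search yast2 | grep -i partition   # find partitioning-related YaST modules
zypper install yast2-storage              # I believe this is the partitioner module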
–
Cheers / Saludos,
Carlos E. R.
(from 11.2 x86_64 “Emerald” at Telcontar)