Software RAID - Expert partitioner confusion

Hi, I’m new to the forum and to openSUSE 12.2, having decided to rebuild my SUSE 9.3 server.

I want to use software RAID 1 on two 1 TB SATA drives. I have been following the 'Advanced Disk Setup’ how-to, section 3.3.1, but I’m a bit confused about the procedure. I understand that partitions have to be ‘added’ to RAID, but do I need to add all the partitions on each of the two drives to RAID?

Given that I want to finish up with a RAID 1 array with the partitions below, should I add all of these to RAID with the exception of /swap? And do I change the file system ID to 0xFD Linux RAID on all partitions?

/
/boot
/usr
/var
/var/log
/var/spool
/var/spool/mail
/home
/home/shares
/tmp
/swap

Do I have this right?

///////////////////////////////////////
Bignige
////////////////////////////////////////

Hi Bignige, you can set up the RAID 1 with mdadm (good man pages). Create the partitions, create the array, format it, etc., from the terminal before installing SUSE. After the command

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

the array is instantly created and should work. The two drives should have symmetrical partitions, but you can leave parts of the drives outside the array. Use --query, --detail and --examine to check the array. Took me some time, but it works great. Good luck. Marc
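As a rough sketch of that workflow, assuming /dev/sda1 and /dev/sdb1 are one mirrored pair (repeat for each further /dev/mdX pair and adjust the names to your own layout):

  # mark both partitions as Linux RAID (sets type 0xFD on an MS-DOS disk label)
  parted /dev/sda set 1 raid on
  parted /dev/sdb set 1 raid on

  # create the mirror
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

  # check the result
  mdadm --query /dev/md0
  mdadm --detail /dev/md0
  mdadm --examine /dev/sda1
  cat /proc/mdstat

  # put a filesystem on the array
  mkfs.ext4 /dev/md0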

You do know that 9.3 is way, way out of support, right?

On 2013-02-09 21:36, gogalthorp wrote:
>
> You do know that 9.3 is way way out of support right?

I understand that he is trying to rebuild his old 9.3 server in another
server with 12.2. Different machine.


Cheers / Saludos,

Carlos E. R.
(from 12.1 x86_64 “Asparagus” at Telcontar)

Yes… I’m upgrading to 12.2 after getting a corrupt file system on the 9.3 server.

Marc suggested I use mdadm. Would that let me create an un-partitioned RAID which I could then partition during the SUSE install, or would I still need to partition first and then create the RAID? Also, I’m still unsure whether /tmp & /swap should be on the RAID.

Take a look at the slideshow on this page: openSUSE 11.4 raid + lvm - vm installation guide

It’s old, but it hasn’t changed that much. It shows you how to create a RAID 1 + LVM setup with the expert partitioner during installation by doing things in the right order. The example uses virtual disks (don’t worry about that).

  • I meant that you should skip the vm-create stuff. Just look at the slideshow!
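For reference, the layering the slideshow builds through the expert partitioner corresponds roughly to the following on the command line (the volume group and logical volume names here are only illustrative):

  # use the finished RAID 1 array as an LVM physical volume
  pvcreate /dev/md0

  # create a volume group on top of it
  vgcreate system /dev/md0

  # carve out a logical volume and format it
  lvcreate -L 30G -n root system
  mkfs.ext4 /dev/system/root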

Thanks. I’ve just been looking at the slideshow. Looks fairly clear; I will give it a try.

On 2013-02-09 23:36, Bignige wrote:
>
> yes…i’m upgrading to 12.2 after getting a corrupt file system on the
> 9.3 server.

Warning.

“Upgrade” here has a precise meaning. It means that you follow a particular procedure so that all packages of an install are upgraded to the packages of the following version (or a later one). There is no hardware change and there are no partition changes. Configuration files remain; some are updated, some replaced, and you have to do certain things.

The procedures are documented here:

Online upgrade method

Offline upgrade method

Chapter 16. Upgrading the System and System Changes

Installing another system fresh in the same partitions as the previous one, albeit keeping “/home”, is not an upgrade in this context.

Replicating an older install on another computer with a fresh install of a newer version is not an upgrade either.

Or to be precise, a “System Upgrade” in the context of openSUSE
documentation.

So, what is it you are doing, exactly? Because you have confused it even
more with your response above.

> also i’m still unsure if /tmp &
> /swap should be on the raid?

That’s up to you. There are no “shoulds”. You weigh the pros and cons and decide based on your needs and requirements. :-)


Cheers / Saludos,

Carlos E. R.
(from 12.1 x86_64 “Asparagus” at Telcontar)

Perhaps I used the word ‘upgrade’ incorrectly. What I am doing is replacing two hard disks and the memory and then doing a fresh install of 12.2. I will copy /home & /home/shares to the new system. I am doing this because my 9.3 install got corrupted due to a suspected fault on one RAID 1 device. I was not able to get the server to complete the boot process, e.g. INIT: cannot execute /etc/init.d/boot. I ran the rescue system, ran fsck and copied /boot, /etc, /bin, /lib and other directories back to the remaining good disk in case some files had become corrupt, but I still could not get past the boot error. My gut feeling is that somehow the boot structure was damaged irrevocably, so I decided to fit new drives etc. and start again.

My previous system had the RAID running as fake HW RAID using a Promise FastTrak card, but there are no drivers for it for 12.2, and I figured it was probably quicker and easier to use software RAID rather than try to compile a new driver from the partial source available.

I had /tmp & /swap on RAID before; I don’t know what the pros and cons of leaving them off are. From what I’ve read, /swap doesn’t get used much on modern systems.

Thanks for your advice. It’s been a great help.

Cheers

Nigel

On 2013-02-10 13:06, Bignige wrote:

> my previous system had the raid running as fake hw raid using a promise
> fastrack card but there are no drivers for it for 12.2 and I figured it
> was probably quicker and easier to use software raid rather than try to
> compile a new driver from the available partial source available.

Right.

The most common strategy is just to use the YaST installer to create the RAID partitions as needed, i.e., one RAID device per partition. It is in theory also possible to create just one RAID device and then partition it inside; I tried this once and did not succeed.

The advantage is that YaST should be able to handle it all properly. Should.
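In outline, that one-big-array alternative would be something like the sketch below (a partitionable md device that you partition after creating the array), though as said, I would not count on the installer coping with it:

  # mirror the whole disks into a single array
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

  # then put a partition table inside the array
  parted /dev/md0 mklabel msdos
  parted /dev/md0 mkpart primary ext4 1MiB 30GiB
  # the kernel exposes the result as /dev/md0p1, /dev/md0p2, ...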

> I had /tmp & /swap as raid before, I don’t know what the pros and cons
> of leaving them of are. from what I’ve read /swap doesn’t get used much
> on modern systems.

Swap outside the RAID is faster, but then one disk failure and the system goes down, negating the advantage of the RAID. Similarly for /tmp. That’s the basic trade-off :-)

It is true that swap is not used that much nowadays, because memory sizes are bigger. But it depends… I have 8 GiB of RAM, and swap is in use, 1034472 KiB at the moment.
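If you want to see how much swap your own workload actually touches, a quick check on the running system:

  free -k           # memory and swap totals, in KiB
  cat /proc/swaps   # per-device swap size and current usage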


Cheers / Saludos,

Carlos E. R.
(from 12.1 x86_64 “Asparagus” at Telcontar)

Well, I created all my partitions, added them to RAID 1 and installed 12.2. The install went fine, but on reboot I get:

boot : “no operating system installed” (or similar)

I assume that the BIOS can’t find the boot location, or the boot loader is only on one of the two hard disks… something like that.

Output from parted shows all the partitions on /dev/sda & /dev/sdb, and all the RAID partitions listed as /dev/mdxxx etc.

I assume GRUB is the boot loader, but I don’t see the menu.lst (or whichever file) where the boot devices are referenced. Not having a lot of experience with RAID boot issues, how do I troubleshoot this and get SUSE to boot up?

Cheers

Nigel

The “no OS” error is caused by not having any boot code in the MBR. Be sure you install GRUB in the MBR.
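If the system is already installed and just won’t boot, one way to do that (assuming 12.2’s default GRUB 2, run from the installed system or from a rescue chroot of it, and adjusting device names to yours) is roughly:

  # put boot code in the MBR of both mirror members
  grub2-install /dev/sda
  grub2-install /dev/sdb

  # regenerate the boot menu
  grub2-mkconfig -o /boot/grub2/grub.cfg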

So my understanding is that the SUSE install failed to install GRUB on the physical drives /dev/sda & /dev/sdb because it tried to put GRUB on the RAID device /dev/md120 (where /boot is located)?

If this is correct, should GRUB be installed on both /dev/sda & /dev/sdb and then configured to point to /dev/md120?

Is the best way to achieve this to install GRUB via the command line, or is there a way to re-install SUSE with some options added so that GRUB installs correctly? There seems to be little documentation on installing GRUB on RAID 1.

Could you comment on the above and possibly point me in the right direction?

Nigel

It is easiest to do it right in the install. Just look and see where things are going, and change as needed when the partition scheme is shown for approval.

You can have it in both places, but you must have some form of MBR code to boot; otherwise you get that error.

So I decided to re-install 12.2, partitioned and added RAID 1. However, at the pre-install summary I get, highlighted in red, “it was not possible to determine the exact order of disks for device map”. See the image showing the message and partition list.

http://www.kilner-vacuum-lifting.com/20130212_170925.jpg

I tried pointing the Location to /dev/md0 or /dev/md1, and then I just get a message complaining about “/dev/md0 no longer available”.

Note: / is the 30 GB partition and /boot is the 196.11 MB partition.

How should I change the boot options to get GRUB2 to install?

I do not see any need to change boot options. It should be OK to install the bootloader into the MBR (which is implied by /dev/sda).

On installing I get:

creating software RAID /dev/md0 from /dev/sda1 /dev/sdb1

system error code - 6008

cannot open /dev/sda1 device or resource busy!

How can that be? I removed all partitions before the re-install.

The message comes up for every RAID device to be created… I click ignore for each one and then the install hangs.

Any ideas?

I can only refer you to https://bugzilla.novell.com/show_bug.cgi?id=733299. Unfortunately, such problems are nearly impossible to solve unless developers can reproduce them.

I suspect that even though you deleted the partitions, the MD signatures were happily left on disk, so when you create partitions of the same size, it picks up the old MD arrays.
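One way to clear the stale metadata before re-partitioning, assuming the old members were /dev/sda1 and /dev/sdb1 (repeat for every partition that was ever in an array), run from the installer’s rescue shell:

  # stop any array that got auto-assembled
  mdadm --stop /dev/md0

  # erase the old RAID superblocks on the member partitions
  mdadm --zero-superblock /dev/sda1
  mdadm --zero-superblock /dev/sdb1

  # if wipefs is available, clear any other stale signatures as well
  wipefs -a /dev/sda1
  wipefs -a /dev/sdb1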

OK… thank you. I’ve checked the Bugzilla report and tried installing by importing partitions, but there were none to import.

I am now going to buy a compatible RAID card. Hopefully I will then be able to install.

Thanks very much for your assistance.

All the problems were caused by the RAID card (Promise), which had gone faulty. I bought a Dynamode SATA/RAID card (Silicon Image 3114 chipset), installed it, set up RAID 1 in the BIOS and installed 12.2.

Went like a dream! No drivers needed, and it cost £23.

Thanks for all your help guys.