RAID1 install for SuSE 11.0? (where does the bootloader go?)

Hi All: Where's the preferred location for the boot loader in a RAID1 install? I noticed that openSuSE 11.0 recognizes the RAID, which is great, but the installer warns there may be a problem if I don't choose the optimal boot loader location.

(The installer wanted to install the bootloader to the MBR - and I let it - but there isn't an MBR on a RAID1, right?). The install went swimmingly and seemed perfect until I turned off and restarted the computer - now it cannot boot. I think I borked the RAID by putting the bootloader in the MBR as the installer wanted.

Trying to figure this out,
:'( Pattim :'(

Of course the MBR exists. In fact you have two, one on each disk. Remember that there is no software RAID until the kernel boots, so at that point you have two separate disks. Usually the bootloader is put on both disks. YaST ought to do it for you but sometimes gets it wrong. You can do it from the interactive GRUB shell:

grub
device (hd0) /dev/sda # tell grub which disk (hd0) refers to
root (hd0,0) # or whatever the partition containing /boot is
setup (hd0) # writes stage1 into sda's MBR
quit

at which point you get messages about finding boot stage files, etc.

and repeat for /dev/sdb (but with the hd0 name again), as sketched below.
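A sketch of that second pass, assuming the same partition layout on both disks - reusing (hd0) works because grub only needs a BIOS drive alias for whichever disk it is currently writing to:

grub
device (hd0) /dev/sdb # remap (hd0) to the second disk
root (hd0,0) # the mirror of the /boot partition
setup (hd0) # writes stage1 into sdb's MBR
quit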

You can do this from the rescue system.

Just to add a note to @ken_yap's post . . . you need to handle /boot/grub/menu.lst and /boot/grub/device.map as if there were no array, as grub cannot see the array. So, e.g., if sda1+sdb1 are the array, say /dev/md0, then in device.map it is still (hd0) aligned with sda and (hd1) aligned with sdb. And in menu.lst the root line uses (hd0,0), i.e., pointing grub to /dev/sda1, not the array. This works because sda1 and sdb1 are identical (you could also have a stanza with root being (hd1,0), pointing to sdb1). Then on the kernel line you have the clause (using this example) "root=/dev/md0".
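To make that concrete, a minimal sketch (the kernel and initrd file names are placeholders - yours will differ):

/boot/grub/device.map:

(hd0) /dev/sda
(hd1) /dev/sdb

/boot/grub/menu.lst stanza:

title openSUSE 11.0 (boot from sda1, root on the array)
root (hd0,0) # grub reads the kernel straight off sda1
kernel /boot/vmlinuz root=/dev/md0
initrd /boot/initrd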

Thank you very much!!!
I guess I was thinking that since the motherboard has an NVIDIA chipset that obviously sees an array, since it reports “Optimal” during POST, maybe changing just one drive’s MBR would break the array. I know each physical drive carries something, somewhere on itself identifying its position and membership in an array… that doesn’t get messed up by changing the MBR, does it? Or is the MBR where this info is stored? Now the NVIDIA boot agent is reporting a cable problem when I try to boot from the RAID - is this because SuSE changed the MBR? (I have re-installed SuSE twice so the cable is obviously OK.) I’m a little fuzzy on how GRUB gets called in the boot sequence in this case. Maybe the NVIDIA boot agent can’t find GRUB… I don’t suppose it’s smart enough to look for a “/boot” directory and I don’t know whether there is a “virtual” MBR on the RAID device.

I should have mentioned that during POST, both my ADAPTEC SCSI RAID0 (for swap) and SATA RAID1 (for system) are identified, so I guess this is called “hardware RAID” - and SuSE 11.0 recognized them both and let me put swap on the SCSI RAID and “/”, “/boot”, and “/home” on the SATA RAID (I didn’t mess with the individual sda and sdb members of the SATA RAID). In the openSuSE Installer’s Partitioning section, it showed the SCSI RAID (but no individual disks) as well as the SATA RAID partition (which I installed to - these used to be referred to as “md”) as well as sda and sdb (the two RAID members). Odd.

So, during install, should I have said install to the MBR of the SATA RAID, sda, or sdb (I don’t think there’s the ability to say “both sda and sdb”)? I think I recall on these forums that some folks were putting their bootloaders on CDs or floppies. I suppose I could try that, but I’d prefer to go with (a working version of) openSuSE’s default in these cases.

Thank You!!!

The RAID metadata is held in a reserved area (the superblock) on each member device; the RAID device itself is just a container for whatever type of filesystem you want to hold inside it.
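For Linux software RAID, a sketch of how to look at that metadata - mdadm can print the superblock recorded on a member partition:

mdadm --examine /dev/sda1 # shows array UUID, RAID level, and this member's role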

Thanks Ken! I should have mentioned this system is Novell SuSE Linux Certified, which I guess is why the 11.0 installer recognized the hardware RAID. So during system install, I guess my options are to locate the bootloader in the MBR of sda, sdb, or the RAID (it used to be called “md0”, I think), but not to choose “/boot” as the location? I guess I can try all three of the former to see which works - of course, that’ll take a couple of days… :(

I’m concerned that you may be mixing apples and oranges (@ken, double-check me on this) . . .

The Adaptec SCSI is a hardware RAID controller: the bios and the OS will see the array as its own discrete disk device, like any other disk; there is no visibility into the disks in the array. When installed, the controller hooks the bios so that its interface can be run for managing the array. That device is what is referenced for partitioning, for grub, for the MBR.
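One quick way to see this from the running system, as a sketch: with true hardware RAID the array shows up as a single ordinary disk and its member disks are not listed at all:

cat /proc/partitions # the Adaptec array appears here as one disk device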

A chipset will see an array only if its internal disk controller supports RAID or there is a separate discrete RAID device on the board whose I/O is routed through the chipset controller. The chipset does not see a separate hardware RAID controller as such; as explained above, it is just a disk.

Chipset RAID controllers, which are actually software RAID, support either traditional disk use or a RAID array. The array is controlled through a bios extension. Typically the OS can see both the array (as it is passed from the bios) and the underlying disks. In Linux, the array will be assigned its own device name and device-mapper will manage the I/O between the array file system and the underlying disk partitions, which are also seen. In this setup, the kernel and grub should access using the device-mapper assigned device names, so for example /boot/grub/device.map will not use “(hd0) /dev/sda” but “(hd0) /dev/<mapper-id>”. When grub is installed to the MBR, i.e., (hd0), it is going to the mapped device level, because this is where the partition table is held.
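A sketch of what that device.map looks like - the mapper name below is hypothetical (dmraid gives nvidia chipset arrays an nvidia_ prefix; check ls /dev/mapper/ or dmraid -s for the real one):

(hd0) /dev/mapper/nvidia_abcdefgh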

I, apparently mistakenly, took your posts to indicate you were using Linux software RAID, where the mdadm driver creates an array atop disk partitions; this is where device names like /dev/md0 come from. The OS also sees the underlying disks. So, similar to chipset RAID, the kernel’s root file system is the array, hence “kernel root=/dev/md0” in grub’s menu.lst boot stanza. However, grub itself cannot see the array, because the OS has not started it yet and the array has not been passed from the bios as it is in both of the setups above; this is why grub must boot from what it knows, i.e., /dev/sda1 or /dev/sdb1. This also explains why a linux OS array with a RAID type which is not mirrored (e.g., sda1+sdb1 in RAID 0) cannot be booted by grub at all: when it accesses the underlying sda1, the kernel file is not intact there - only half of it is on that partition, the other half being on sdb1. In that setup there needs to be a separate /boot partition outside the array, or /boot needs to be in a mirrored array type like RAID 1.
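For reference, a minimal sketch of how such an mdadm array is created atop two partitions - not something to run on an existing system, since it destroys the data on the members:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1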

I hope I have helped with, not added to, the confusion.

Thank you - it explained at least a few things. I noticed during install that the Adaptec RAID (used only for swap) showed just a “physical” drive (sda), but the chipset mobo RAID showed both the /dev/mapper device and also the underlying drives (sdb, sdc), as your explanation also indicated. I had previously learned enough to know to install to the mapper rather than sdb or sdc. Anyway, GRUB was more opaque when I went to reinstall the bootloader from the openSuSE 11.0 repair function on the DVD. There didn’t appear to be an option to install to a mapper, but rather a few possibilities that weren’t mutually exclusive - three checkboxes for MBR, boot partition, and one other (the first three in the list before “dedicated partition” - which has a dropdown box for a physical device). I checked these first three and let it run - it didn’t complain that I had checked all three (?) and now it seems to boot OK, so I’m assuming there may even be more than one bootloader installed…

After booting, the YaST applet reports that the bootloader is installed in the MBR, but it doesn’t really say (or I can’t read it right) which MBR. Thanks for taking the time to enlighten me. I’ve noticed that some things are harder (for me) to learn than others. <yawn!> G’nite! ;)

Glad that info was of some value. Yes, you installed grub’s stage1 to more than one location, but that doesn’t matter - as long as one of them works. The only way you can know where the bootstrap is installed, other than testing behaviors by process of elimination, is to actually look at the MBR and boot sectors in hex.
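A sketch of that inspection, assuming grub legacy (its stage1 embeds readable strings in the sector it occupies):

dd if=/dev/sda bs=512 count=1 2>/dev/null | hexdump -C # full hex view of sda's MBR
dd if=/dev/sda bs=512 count=1 2>/dev/null | strings # shortcut: stage1 shows strings like "GRUB", "Geom", "Read", "Error"

Repeat against /dev/sdb, and against the /boot partition's first sector (e.g., /dev/sda1) to see whether stage1 also landed there.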