Activating DMRAID on boot-up

Hi all,

I’m having a problem with activating my RAID 1 on boot-up. I’ve done some searching, and there are two possible solutions that I’ve found:

  1. Use INSSERV to start DMRAID

  2. Use MKINITRD to create a new INITRD file which loads the DMRAID module.

Neither solution showed any detail of how to accomplish this, and which files to edit or what order I should use to tackle either.
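For anyone else searching, here is a rough sketch of what the two approaches amount to on SUSE 11.x. The script name `boot.dmraid` and the `INITRD_MODULES` variable are assumptions based on the standard SUSE layout; check your own system before running anything:

```shell
# Approach 1: enable the dmraid boot script so "dmraid -ay" runs during
# the early boot scripts (assumes /etc/init.d/boot.dmraid exists):
insserv boot.dmraid

# Approach 2: pull device-mapper support into the initrd and rebuild it.
# Add dm_mod to the INITRD_MODULES line in /etc/sysconfig/kernel, then:
mkinitrd    # regenerates /boot/initrd for the running kernel
```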

With the second solution, I have another SUSE 11.2 installation on another hard drive. Would it be OK to boot into that and create a new INITRD with DMRAID activated, or would it be better to break the RAID-set, boot into one of the drives, create an INITRD for that RAID 1 system, and then recreate the RAID-set? The only issue I can see is that fstab, device mapper and grub would need the new pdc_xxxxxxxxxx value, which could be changed from the second installation.
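If anyone needs to find the new set name after recreating the array, dmraid itself will report it (a sketch, run as root; the pdc_ prefix is specific to Promise fake RAID):

```shell
# List the RAID sets dmraid discovers, with their names (e.g. pdc_xxxxxxxxxx):
dmraid -s
# After "dmraid -ay", the activated set also appears under /dev/mapper:
ls /dev/mapper/
# That name is what has to be substituted in /etc/fstab and in grub's menu.lst.
```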

I take it that the module would be loaded before /etc/fstab would be read. I’m not a total newbie, but any guidance at newbie level would be welcomed, as I’m sure there are lots of people with the same issue.

My system is:
Asus M4A78T-E with fakeraid SB750 controller
AMD Black Edition - AMD Phenom II X4 3.4 GHz Processor
2 x Samsung 1TB drives
Suse 11.2

I have Windows XP on another hard drive purely for overclocking and syncing the 1TB drives, but I can’t actually boot into the SUSE system until this issue is sorted out.

Many thanks

There are 3 ways to do RAID.

  1. Hardware RAID card. This is good and OS-agnostic.
  2. Software RAID. This is good but does not work with a Windows dual boot.
  3. Fake RAID. This is a BIOS RAID and requires additional drivers. It may or may not work in Linux, depending on the chipset.

So which one do you have?

I have the latter: Fake RAID on the motherboard, as mentioned above.

Where are all the SUSE gurus?

Did I mention that Fake RAID may not work? It appears to depend on the exact chipset: some claim to have gotten it to work with dmraid, some can't. Most gurus will not be dual booting and running RAID. If they do, they will be using a true hardware RAID solution, or running Linux only and using software RAID.

Hi
I use software raid… I have a SIL3114 chip on this motherboard, I
should look (one day) at kicking into life the sata_sil driver and see
what happens.

So what is the exact name of the chip involved? Do you have the PCI
id’s?
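For the PCI IDs, `lspci -nn` prints the vendor:device pair in brackets at the end of each line. A small sketch of extracting it with sed, demonstrated on a sample line (the bracketed `1002:4393` shown is the ID commonly reported for the SB700/SB800 SATA controller in RAID mode; verify against your own `lspci -nn` output):

```shell
# A line in the format `lspci -nn` prints, with the vendor:device ID in
# brackets at the end; pull the ID out with sed.
line='00:11.0 RAID bus controller [0104]: ATI Technologies Inc SB700/SB800 SATA Controller [RAID5 mode] [1002:4393]'
pci_id=$(echo "$line" | sed 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\]$/\1/')
echo "$pci_id"    # 1002:4393
```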


Cheers Malcolm °¿° (Linux Counter #276890)
SUSE Linux Enterprise Desktop 11 (x86_64) Kernel 2.6.32.24-0.2-default
up 4 days 0:21, 3 users, load average: 0.01, 0.07, 0.02
GPU GeForce 8600 GTS Silent - Driver Version: 260.19.21

Thanks guys for your comments.

I’ve also heard that it can be chipset dependent. However, I don’t see why there should be a problem, since the onboard RAID controller does provide a mechanism for the OS to read the drives as you would in software RAID (although I’ve never tried it). I’m using pdc: Promise FastTrack (S,0,1,10).

dmraid -ay seems to activate the RAID-set with no issues once I’ve booted into SUSE, so all I’m trying to do is enable dmraid -ay at boot time.

All I’m asking is: is it better to use INSSERV or INITRD to start dmraid at boot time? How would I do this, which files should I edit, and where in the boot process does this fall? I read somewhere that boot.dmraid should take care of the activation, but obviously it doesn’t, or it is not even being called.
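To at least check whether boot.dmraid is installed and wired into the boot sequence, something like the following should work (a sketch assuming the standard SUSE init-script layout; run as root):

```shell
# Is the script present at all?
ls -l /etc/init.d/boot.dmraid
# Is it linked into the boot sequence?
chkconfig --list boot.dmraid
# If it exists but is disabled, enable it:
insserv boot.dmraid
```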

From the number of views this thread is getting, a lot of people have the same issue, so come on guys and gals, for those that know a little bit about the boot process I’m asking about, can you at least point us in the right direction?

FYI: there is finally a Linux version of RAIDXpert available, for anyone using the AMD SB800 RAID controller.

Below is my lspci:

00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge
00:02.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (ext gfx port 0)
00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2)
00:07.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 3)
00:11.0 RAID bus controller: ATI Technologies Inc SB700/SB800 SATA Controller [RAID5 mode]
00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
00:12.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller
00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
00:13.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller
00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 3c)
00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller
00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA)
00:14.3 ISA bridge: ATI Technologies Inc SB700/SB800 LPC host controller
00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge
00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller
00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
01:00.0 VGA compatible controller: ATI Technologies Inc Device 68b8
01:00.1 Audio device: ATI Technologies Inc Device aa58
02:00.0 Ethernet controller: Attansic Technology Corp. Atheros AR8121/AR8113/AR8114 PCI-E Ethernet Controller (rev b0)
03:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. Device 3403

Hi
Create a /boot that is not on the RAID. You can then add the module to
the kernel sysconfig entry via YaST to load at boot; that will
add/rebuild the initrd for you. Then set the service to start as well.
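What the YaST step above amounts to, under the assumption that the variable involved is `INITRD_MODULES` in /etc/sysconfig/kernel (standard on SUSE 11.x), is appending the module name and rebuilding the initrd. A sketch, demonstrated on a sample line rather than the live file:

```shell
# Append dm_mod inside the closing quote of an INITRD_MODULES line:
line='INITRD_MODULES="ahci"'
new=$(echo "$line" | sed 's/"$/ dm_mod"/')
echo "$new"    # INITRD_MODULES="ahci dm_mod"
# On the real system, edit /etc/sysconfig/kernel this way, then rebuild:
# mkinitrd
```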


Cheers Malcolm °¿° (Linux Counter #276890)
SUSE Linux Enterprise Desktop 11 (x86_64) Kernel 2.6.32.24-0.2-default
up 5 days 7:16, 6 users, load average: 1.04, 0.77, 0.34
GPU GeForce 8600 GTS Silent - Driver Version: 260.19.21

Sounds like a good idea. I’ll give it a try on the weekend and post back as soon as I can with the results.

Thanks

I decided to use my backup installation to boot into the RAID 1. I activated the RAID from the backup installation to see which modules I needed using lsmod, and inserted them into YaST’s sysconfig editor under Kernel/Modules loaded on boot. But it still didn’t work, so maybe I’m doing something wrong, like not using the right values, or not calling the /etc/init.d/boot.dmraid script somewhere.

Currently, I’m trying to add the dmraid and device-mapper modules to initrd to make a new one, so I think this may be the way to go.
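A sketch of the manual route, in case it helps anyone. The `-f` feature flag and the feature name are assumptions based on SUSE's mkinitrd; check `man mkinitrd` on your release before relying on this:

```shell
# Force device-mapper support into a freshly built initrd:
mkinitrd -f dm
# Verify the module actually landed in the image (initrd is a gzipped cpio):
zcat /boot/initrd | cpio -t | grep dm_mod
```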

Still, if anyone has a better solution or knows how to create a new initrd for SUSE 11.x then by all means add a post.

Hi all,

I haven’t had a chance to read all the posts to see if there was a solution to my FakeRAID problem. I have managed to solve it myself though. After many weeks of trying to use a custom kernel and initrd, to no avail, it turned out that I had to get rid of my extended partition and just use primary partitions across the 1TB drive.

I tested briefly that I could boot from either drive, and it worked well, as both were blank beforehand. Originally I had separate partitions for /boot, /, /home, /usr and swap, but because I am now limited to just 4 primary partitions, I’m just using /boot, /, /home and swap. Furthermore, I have another 1TB drive which I use to back up /etc, /usr and /home, as it’s a pain to reinstall everything if root screws up.

I hope that someone can try this with 2 blank drives, and post back results so that others may benefit. Use my experience at your own risk. Good luck!