Installation problem with dmraid

Hi,

I recently had my two HDDs in a fakeraid setup (the motherboard is an Asus M2V, the controller should be VIA). I then deleted the RAID setup via the BIOS controller and deactivated RAID, i.e. set the mode to SATA instead of RAID. I then installed M$ XP, which no longer detected a RAID and installed fine on one of the disks.
When I tried to install OpenSUSE 11.1, the installer detected the old RAID with the old partitions; no partitions on the individual disks were found.
When I go into the console and try to delete the RAID metadata via
dmraid -x
the error message is: Raid set deletion is not supported in ddf1 format
I tried to delete the metadata via:
dmraid -E -r /dev/sda
dmraid -E -r /dev/sdb
I get an error message there, too, and the metadata is left untouched.
My last resort was to overwrite the metadata via
dd if=/dev/zero of=/dev/sda seek=metadataOffset
but even this does not succeed.
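For reference, what dmraid has picked up can be inspected first (just a sketch, I'm not pasting the full output here):
dmraid -r    # lists the raw devices carrying metadata and the format (ddf1 in my case)
dmraid -s    # shows the raid set dmraid assembles from that metadata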

I’m now out of ideas on how to proceed. Is there a way to install OpenSUSE on my system or deactivate dmraid once and for all? Any help is appreciated :frowning:

Not sure if this will help you or not. Did you try using the fdisk program to partition it?

Both disk drives were partitioned and formatted under XP.
Today I tried the boot parameters:
nodmraid
and
dmraid=off
but to no avail. I know that under Fedora nodmraid deactivates the dmraid module, but it doesn't seem to work under OpenSUSE.
It’s like dmraid is pulling the RAID info out of the EEPROM of the RAID controller but ignoring the fact that RAID is turned off in the BIOS :\

I hope there is a way to get OpenSUSE running but I don’t know how to proceed. Is there a way to turn off dmraid?

Again, I am not sure whether this is going to work or not. Using fdisk (in Linux), you can change the partition type.
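Something along these lines, if you want to try it (illustrative only; double-check the device name and back up first):
fdisk /dev/sda    # or whichever disk you are targeting
# inside fdisk:
#   p    print the current partition table
#   t    change a partition's type (e.g. 83 = Linux, 7 = NTFS)
#   w    write the changes and quit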

I do not understand how to proceed. Do I have to use fdisk on the ddf1 RAID? I already did that, but it only shows the old partitions I once had on the RAID. I haven't deleted them because I don't want to mess up my current partitions on the disk.
When using the installer’s “expert mode” during disk partitioning on sda/sdb, there is an error saying the disks are already in use. The installer would let me delete the old RAID partitions, but I didn’t dare execute the changes because I don’t want to mess up the current partitions.

You need to remove the old RAID partitions. Please back up your data and then proceed.

I did that yesterday evening. I booted from the Suse installation DVD and ran fdisk on the ddf1 RAID. I deleted all partitions, made the changes stick, and rebooted. As expected, the old RAID partitions were gone and an empty RAID was detected by the installer. I still could not manage to install on any of the former RAID HDDs. Sadly, this also destroyed my M$ XP installation, so I couldn’t boot into any OS.
This is what I tried to avoid…
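For the record, the fdisk part was nothing special, roughly this (the mapper name below is only illustrative, it is whatever shows up under /dev/mapper):
fdisk /dev/mapper/ddf1_xxxx    # illustrative name for the detected ddf1 set
# inside fdisk:
#   p    print the old partition table
#   d    delete a partition (repeated for each one)
#   w    write the now-empty table and quit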
Luckily I remembered a posting about a similar problem. The solution was to activate RAID in the BIOS and build a spare RAID (a JBOD array). I did that, deleted that RAID afterwards, and switched the BIOS from RAID back to SATA.
After that the Suse installer no longer recognized the RAID and instead let me install on the single disks. I really hope that bug is fixed before Suse supports JBOD raid the way it supports fakeraid 0…

I knew that it was going to destroy XP. That’s why I said to back up before touching the partitions.

The problem, and I hit it often too, is that the BIOS fakeraid firmware writes metadata to the physical drive, and the Linux kernel then automagically recognizes this metadata and disables access to the underlying raw device before you have any chance to intervene. You can’t run fdisk on anything to clear it. The kernel does not allow you to do what it thinks is shooting yourself in the foot. You have to have a way to tell the kernel to ignore fakeraid metadata before the kernel loads.
The problem persists even when you disable the fakeraid option in your BIOS or SATA card, or when you move the drives to other machines. Once the metadata is there, Linux treats it as a fundamental property of the drive and doesn’t let you “break” it, only use it.
“dmraid -rE” doesn’t even work, at least not from a suse installer shell where you most need it.
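You can actually watch the kernel doing this from a shell: once dmraid has run, the mappings are already sitting on top of the raw disks. Something like this shows them (device names will differ on your box):
dmsetup ls       # the device-mapper devices dmraid has created
dmsetup table    # their tables, i.e. which raw disks they claim
dmraid -s        # the raid set assembled from the on-disk metadata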

This is a common problem, which is why, as he said, there are options in the Fedora/CentOS and other installers just for this purpose. I see no especially user-friendly option in the openSUSE installer that addresses the need, but there is a way.

Two parts. First, add this to the kernel options at the syslinux boot prompt:
brokenmodules=dm_mod

(Other boot command options are documented here: Linuxrc - openSUSE)
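
With that option in place you can check from the installer shell that device-mapper never grabbed the disks (just a quick sanity check):
lsmod | grep dm_        # dm_mod should not show up
cat /proc/partitions    # sda, sdb, ... should appear as plain disks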

That was still not good enough for me, though. Even if I avoided going into YaST by specifying text or ssh mode and then not starting YaST after logging in, I still could not erase that metadata. dmraid -x and dmraid -rE still gave errors. I could access the drives with dd, but I could not find the metadata offset to wipe it. It wasn’t at the beginning, end, or middle of the drive. “dmraid -rE” displays a sector number in its error message, but erasing 100 megs starting at that point didn’t do it, nor did erasing the last or the first 100M of the drive.
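The sort of thing I was trying looked roughly like this (the sector number is only a placeholder for whatever dmraid printed, and none of it actually cleared the metadata):
SECTOR=123456    # placeholder: use the sector number from the dmraid -rE error
dd if=/dev/zero of=/dev/sda bs=512 seek=$SECTOR count=204800    # ~100M starting at that sector
dd if=/dev/zero of=/dev/sda bs=1M count=100    # first 100M
dd if=/dev/zero of=/dev/sda bs=1M count=100 seek=$(( $(blockdev --getsz /dev/sda) / 2048 - 100 ))    # last 100M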

I HAD to go back into the server’s BIOS and re-enable SATA RAID, then reboot, then enter the RAID BIOS and use the menus there to delete the array config.

Then reboot, motherboard BIOS, disable SATA RAID, reboot, and then everything was fine.

Apparently using dd to rewrite the entire drive works, but that would take forever with eight 500G drives, even with 8 background dd processes running concurrently!!
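If someone does go that route, running the wipes in parallel is easy enough (assuming the disks are sda through sdh); it just doesn't make a single 500G pass any faster:
for d in /dev/sd[a-h]; do
    dd if=/dev/zero of="$d" bs=1M &    # one background wipe per disk
done
wait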

Thank you, Seneca Data, for playing around with the onboard fakeraid before shipping me my “no OS/FreeDOS” server… grrr.