Formatting RAID drives

Hello,

I have an x64 system running openSUSE 11.1 with two 500 GB drives (not RAID) holding swap, /, /home, and /local. I needed to install a 6-slot PCI expansion chassis which also has room for up to 8 additional hard drives.

I added four 1 TB drives and a SATA controller to the expansion box, and everything works.

When I go to the Partitioner in YaST, all 6 drives are listed, but when I went to the RAID setup it said: ‘ERROR - there are not enough suitable unused devices.’ The 4 new drives are blank and unformatted.

Okay, I had never used the RAID setup in YaST, so I figured, no problem, I’ll set up the RAID in the hardware. I did that using all 4 new drives as a RAID-10, no problem.

When I go back to the Partitioner, again all the drives are there (listed individually), but I’m afraid that if I format the new drives individually it will conflict with the hardware RAID setup.

And if I did format the 4 new drives in the Partitioner what mount points do I use?

I have the option to do a low-level format in the hardware RAID setup but I don’t see how that helps with the mount points.

My limited experience with setting up RAID was on a very similar machine, where I set up a RAID with 2 drives in the hardware prior to (or in conjunction with) a new install of SUSE; the RAID was recognized as one drive, and everything worked without me having to go to the Partitioner.

Any suggestions on setting up a RAID with new drives on an already functioning machine would be appreciated.

Thanks,
Kevin

If you properly set up RAID at the hardware/firmware level, openSUSE should recognize that. I don’t know what is happening in your case.

If you want to set it up at the software level, you can use the mdadm command to do that. Read the HOWTOs first.
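A hedged sketch of what that might look like for four new drives in a RAID-10 (the device names /dev/sdc through /dev/sdf are assumptions, not taken from the poster's machine; verify with lsblk first, since mdadm --create destroys the member disks' contents):

```shell
# Identify the four new, empty disks first -- the names below are assumptions!
lsblk -d -o NAME,SIZE,MODEL

# Create a RAID-10 array from the four 1 TB drives (DESTROYS their contents)
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Watch the initial sync and confirm the array is assembling
cat /proc/mdstat

# Persist the array definition so it assembles at boot (openSUSE path)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
```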

What you need to do is create a partition and mark it as RAID. So you go through the normal steps of creating a partition, but when you get to formatting, choose “Do not format” and set the partition type to RAID.
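The same marking can be done from the command line; a sketch using parted (the device name is hypothetical, so substitute your own after checking lsblk):

```shell
# Hypothetical device name -- substitute your own (check with lsblk)
DISK=/dev/sdc

# Create a GPT label and one partition spanning the disk (DESTROYS contents)
sudo parted -s "$DISK" mklabel gpt
sudo parted -s "$DISK" mkpart primary 1MiB 100%

# Flag the partition as a Linux RAID member -- the command-line
# equivalent of YaST's "do not format / RAID" choice
sudo parted -s "$DISK" set 1 raid on

# Verify the partition table and the raid flag
sudo parted -s "$DISK" print
```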

The difference you saw is the difference between hardware RAID and software RAID. With hardware RAID you don’t need any drivers or anything; the drives show up, as you said, as one drive. The cheaper RAID controllers are in fact software RAID, so you need a driver for them. When the package says the RAID controller works on Windows, Linux, and some other OSes, then you know it is a hardware RAID.

On Fri, 2009-06-19 at 00:56 +0000, BenderBendingRodriguez wrote:
> The difference you saw is the difference between hardware RAID and
> software RAID. With hardware RAID you don’t need any drivers or
> anything; the drives show up, as you said, as one drive. The cheaper
> RAID controllers are in fact software RAID, so you need a driver for
> them. When the package says the RAID controller works on Windows,
> Linux, and some other OSes, then you know it is a hardware RAID.

Actually, for SOFTWARE RAID you don’t need any drivers, etc. For HW
RAID, if it’s not a standalone subsystem, there will be some kind of
driver, though it (like SATA/SCSI) may well already be built into your
kernel or available as a module.

(life is all about drivers… so even with software RAID you do have
drivers in the kernel for getting to storage, etc… just talking about
the RAID feature specifically)
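If you want to see which RAID and storage drivers your kernel actually has loaded, a quick sketch (module names vary by hardware, so the grep pattern here is only illustrative):

```shell
# The kernel's built-in software-RAID "personalities", if md is active
head -n1 /proc/mdstat

# Loaded storage/RAID kernel modules (pattern is illustrative)
lsmod | grep -E 'raid|md_mod|ahci|sata|megaraid'
```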

In general the most flexible “HW” RAID devices are standalone
subsystems. These just require “normal” common storage driver support.

Anything else involves some risk, unless it is a free driver that is
well supported and comes with Linux. Risk in Linux is mixed. For
example, LSI wanted support for their RAID cards, but didn’t like to do
the work, and dropped support after a VERY short period of time (3 years
or so). There was a period where things like the older MegaRAID series
did not work well with Linux. In Linux, you need “champions” of driver
support… or things can decay.

FRAID controllers (Fake RAID), as you mentioned, are (usually) onboard
controllers, usually SATA, that require a larger-than-normal,
proprietary, (usually) closed driver/firmware. We can only hope for more
free/open FRAID support in the future (not sure there is anything
working for anything contemporary at the moment). FRAID tends to lean
more on your system’s CPU instead of having a dedicated RAID processor
(saves money). Because of the issues with FRAID support in Linux, and
the fact that a lot of the heavy lifting is done by the CPU anyhow, in
general you might be much better off with Linux software RAID.
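If you are unsure whether an onboard controller has written fake-RAID metadata to your disks, the dmraid tool (shipped with openSUSE at the time) can report it without activating anything; a sketch:

```shell
# List any BIOS/fake-RAID metadata blocks found on attached disks
sudo dmraid -r

# Show the RAID sets dmraid would assemble from that metadata
sudo dmraid -s
```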

HW RAID controllers are a mixed bag. Some are supported (with real free
drivers); others are not well supported (free or otherwise). Cheap HW
RAID controllers have such poor dedicated processors that, again, you may
find that Linux software RAID is a better choice. Even the “powerhouse”
multi-thousand-dollar (USD) controllers of the past may be almost
worthless given today’s extreme CPUs.

HW RAID gives up its value the fastest. RAID subsystems survive the
longest (esp. given their NO dependence on drivers). And SW RAID, duh,
will outlive them all, as long as it works well for your scenario (since
it does use your CPU). FRAID? I’m aFRAID it’s too difficult to tell if
this will ever become a viable preferred choice… it’s not right now.

You do need drivers for software RAID, be it fake RAID or a cheap PCI RAID expansion card. But most of those drivers are built into the kernel (fake RAID). What I mean is that real HW RAID is seen as a normal disk, while the cheap controllers are in fact software RAID with software drivers. As I said, if you see that the only supported OS is Windows, then it is software RAID, because there is a driver for Windows only. :slight_smile:

On Fri, 2009-06-19 at 14:46 +0000, BenderBendingRodriguez wrote:
> You do need drivers for software RAID, be it fake RAID or a cheap PCI
> RAID expansion card. But most of those drivers are built into the
> kernel (fake RAID). What I mean is that real HW RAID is seen as a
> normal disk, while the cheap controllers are in fact software RAID with
> software drivers. As I said, if you see that the only supported OS is
> Windows, then it is software RAID, because there is a driver for
> Windows only. :slight_smile:

:slight_smile: Actually, FRAID is usually NOT built into the kernel, or at least
requires some “extra” piece that is not freely distributable (in the
FOSS sense of the word). Feel free to educate me if that’s totally
wrong though.

  1. SW RAID (always there, works, free duh)
  2. HW RAID subsystem (always there, works, pricey)
  3. HW RAID controllers (mixed support, some yes/no, price varies)
  4. HW FRAID controllers (mixed support, usually no, usually free)

Thank you all for your responses,

I did a little more research and found a ‘Cool Solutions’ article that described setting up RAID 0, 1, and 5 under YaST. Thinking maybe YaST couldn’t deal with RAID-10, I went back to the hardware setup and set the 4 new drives as RAID-5.

They still showed up as individual drives, so no luck there.

Following BenderBending’s advice, I created the partitions and marked the new drives as RAID. I selected a mount point and formatted them; everything worked flawlessly.
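For anyone following along, the command-line equivalent of that last step might look like this (the array device /dev/md0 and the mount point /local2 are assumptions; the thread doesn't say which names were actually used):

```shell
# Assumed names: /dev/md0 for the assembled array, /local2 for the mount point
sudo mkfs.ext3 /dev/md0          # ext3 was the openSUSE 11.1 default filesystem
sudo mkdir -p /local2
sudo mount /dev/md0 /local2

# Make the mount permanent across reboots
echo '/dev/md0  /local2  ext3  defaults  1 2' | sudo tee -a /etc/fstab

# Confirm the new space is visible
df -h /local2
```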

Interestingly, Kdiskfree did not report the space available but Dolphin did!

Thank you all again - another problem solved easily with SUSE.

And Bender, your linux geek love making instructions would be GREAT t-shirt material!

K.