installation on raid

i’m about to install 11.3 64-bit on an hp xw8400 workstation. i have installed two sata drives and set the bios-level intel storage matrix controller to raid1.

i’m familiar with raid controllers on large systems like openvms and tru64, but i would like to view a tutorial or how-to as it applies to opensuse - for example, the differences between md and dmraid, and recovery options and methods. i’ve never worked with this intel product before either.

can someone point me to any articles?

thanks.

ewhite20 wrote:

>
> i’m about to install 11.3 64 on an hp xw8400 workstation. i have
> installed two sata drives and set the bios level intel storage matrix
> to raid1.
>
> i’m familiar with raid controllers on large systems like openvms and
> tru64. but i would like to view a tutorial or how-to as it applies to
> opensuse. for example, the differences between md and dmraid.

mdraid is software RAID; it runs as part of Linux. dmraid is support
for on-board RAID solutions, such as your Intel Storage Matrix.
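If you are ever unsure which of the two a running system is actually using, here is a minimal sketch that just looks for md arrays in /proc/mdstat and for device-mapper nodes under /dev/mapper (treat it as an illustration, not a diagnostic tool):

```python
#!/usr/bin/env python3
"""Rough check: is this box using mdraid (/proc/mdstat) or dmraid (/dev/mapper)?"""
import os


def md_arrays():
    """md array names from /proc/mdstat (only present when the md driver is loaded)."""
    try:
        with open("/proc/mdstat") as f:
            return [line.split()[0] for line in f if line.startswith("md")]
    except FileNotFoundError:
        return []


def dm_nodes():
    """Device-mapper nodes; dmraid (and LVM) create these. 'control' always exists."""
    try:
        return [n for n in os.listdir("/dev/mapper") if n != "control"]
    except FileNotFoundError:
        return []


print("mdraid arrays:", md_arrays() or "none")
print("device-mapper nodes:", dm_nodes() or "none")
```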


Per Jessen, Zürich (17.6°C)
http://en.opensuse.org/User:pjessen

On-board RAID is usually BIOS (FAKE) RAID and usually requires external drivers that may not be available for Linux. If you are not dual booting with Windows I recommend software RAID. But if dual booting you will need a real hardware RAID card; FAKE RAID won’t cut it.

i didn’t realize the intel raid controller was without a brain. guess i should have sprung for sas drives, since the lsi sas controller does have one.

are there any reliability issues with using the intel raid with software? obviously i don’t have a big investment in the sata drives, so not doing raid is not a deal killer. although i have used software raid in the past, i was not really interested in doing it on a linux workstation.

put another way, i want to build this as a solid production workstation to run until 11.3 is sunset. i don’t want to be “fiddling” with it to keep it going. 10.3 was solid (retired my last 10.3 machine a week ago) and i’m keeping my fingers crossed that 11.3 will continue the tradition. if the intel raid is too finicky, i’ve got no problem abandoning it.

Well I have no direct knowledge of the Intel chip but most FAKE RAID chips require software support. I saw that someone said it worked on 11.2 but not on 11.3. I just know that unless you have real hardware RAID or use software RAID it is a crapshoot.

Real hardware RAID is transparent: you throw bits at the controller and it handles it from there. The OS does not need to know or care where or how they are stored; it just looks like another disk device.

ewhite20 wrote:

> are there any reliability issues to using the intel raid with
> software?

Probably not.

> obviously i don’t have a big investment in the sata drives, so not
> doing raid is not a deal killer. although i have used software raid
> in the past, i was not really interested in doing it on a linux
> workstation.

Why not?? All my office workstations (not plain desktops) have software
RAID.


Per Jessen, Zürich (20.9°C)
http://en.opensuse.org/User:pjessen

overhead. which may not be an issue, i’ve not used linux software raid.

i’m also wondering how these drives will fare with the software raid. western digital claims they make two kinds of drives - those for raid environments and those for desktop non-raid environments, the difference being in error recovery. i purchased desktop drives (lower power consumption and larger buffer size). again, may not be an issue but i’ve never tested it.

To be honest RAID is a big pain and provides little benefit for a desktop system. RAID 0 (striped) does increase speed at the cost of increased risk to the data. RAID 1 (mirror) increases uptime at the cost of disk space, and you still need to do backups to assure data integrity.

With modern processors the additional overhead for software RAID is very, very small. And FAKE RAID uses the BIOS and support drivers to do its thing and thus is more like software RAID in terms of overhead. That all takes CPU cycles, so there is no real gain there.

Hello ewhite20,

Firstly, let me echo the sentiment of an earlier poster: only use the on-board fake-raid (intel ichrXXX) IFF you need to dual boot and share data between windoZe and gnu/linux.

If yes, gnu/linux, including OpenSUSE, supports and works just fine with intel’s fake-raid using dmraid (device mapper raid). The only caveat I would make: OpenSUSE 11.3 may have released with a foobared udev … so the initial install MIGHT be a problem … hope someone can clarify this issue. I personally use dmraid (fake-raid) for an array I have on a CHEAP adaptec controller … hee hee … I bought it thinking it was a real raid card … lol. From this array, I boot win7 and have an extra ext4 partition. I have had no problems with OpenSUSE 11.1, 11.2, 11.3 or Fedora Core 13 with this configuration. And it is NOT a hassle … it just works! I even had a drive failure recently … worked as expected!

mdraid (a.k.a. kernel raid, linux kernel raid, software raid) is a better solution than dmraid (fake-raid):

  1. Portable … the disks will work in ANY gnu/linux box
  2. Performance … much better than my cheap adaptec … not sure how it compares to the intel ICHRXXX junk, but I suspect the same. mdraid has been around longer and is more mature than dmraid.
  3. Flexibility … mdraid is AWESOME … my main system consists of 4 disks managed by mdraid. I was doing raid benchmarking recently … and when completed … I was able to repartition and reformat each drive in my system without ever rebooting … obviously keeping the same mapping to the root partition. (see the sketch after this list for a minimal setup)
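If you decide to go the mdraid route, getting a two-disk mirror up is only a couple of mdadm calls. A minimal sketch, driven from python purely for illustration - the partition names are made up, it needs root and the mdadm package, and it will of course wipe whatever is on those partitions:

```python
#!/usr/bin/env python3
"""Minimal sketch: build a two-disk RAID1 mirror with mdadm.
The partition names below are examples only - this will destroy whatever
is on them. Needs root and the mdadm package."""
import subprocess

ARRAY = "/dev/md0"
MEMBERS = ["/dev/sda2", "/dev/sdb2"]   # example partitions, adjust to your layout


def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# Create the mirror from the two member partitions.
run(["mdadm", "--create", ARRAY, "--level=1", "--raid-devices=2"] + MEMBERS)

# Show the new array, then print the line you would append to
# /etc/mdadm.conf so it is assembled automatically at boot.
run(["mdadm", "--detail", ARRAY])
run(["mdadm", "--detail", "--scan"])
```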

Enjoy!

oxala

ewhite20 wrote:

>
> pjessen;2209782 Wrote:
>> ewhite20 wrote:
>>
>> > obviously i don’t have a big investment in the sata drives, so not
>> > doing raid is not a deal killer. although i have used software
>> > raid in the past, i was not really interested in doing it on a
>> > linux workstation.
>>
>> Why not?? All my office workstations (not plain desktops) have
>> software RAID.
>>
>
> overhead. which may not be an issue, i’ve not used linux software
> raid.

You’re worried about overhead - on a workstation? (I’m assuming it’s
not a 486).

> i’m also wondering how these drives will fare with the software raid.
> western digital claims they make two kinds of drives - those for raid
> environments and those for desktop non-raid environments. the
> difference being in error recovery.

Actually, I think you’ll find the difference is in MTBF and duty-cycle.
The drives intended for use in RAID have an expected 24 hours/day
duty-cycle and their MTBF is calculated with that as a factor. Same
for the desktop drives, but here the duty-cycle is assumed to be 8 hours
per day.


Per Jessen, Zürich (25.6°C)
http://en.opensuse.org/User:pjessen

gogalthorp wrote:

>
> To be honest RAID is a big pain

To be honest, not at all.

> and provides little benefit for a desktop system.

It depends - if your infrastructure is such that a dead workstation is
easily swapped for a new one, you’re right, there’s little benefit.
Otherwise …

> RAID 1 (mirror) increases up times at the cost of disk space and you
> still need to do backup to assure data integrate.

cost of disk-space? like EUR100 per Terabyte?


Per Jessen, Zürich (25.8°C)
http://en.opensuse.org/User:pjessen

go to the knowledge base area of western digital’s website (wdc.com) and search for desktop enterprise raid. look for the article titled “the difference between…”.

I should have said FAKE RAID is a pain. Just look at all the posts here that deal with it.

Lots of things can cause a dead workstation, not just a bad drive! You are far more likely to lose a power supply than a drive. Also drives tend to degrade rather than fail catastrophically, so you get warnings and can make a backup and replace the drive.

that is “at the cost of disk space” because the data is redundant: you get 1/2 or less of the space you paid for as usable space.

There are places where RAID makes sense - where life is threatened or you need 5 9s uptime. For a typical desktop, not so much.

not certain i can agree with you on the failure part. i seem to have had more than my share of sudden drive deaths. oftentimes i’ll see two or three bus resets and then it’s gone. a few times i’ve seen creeping errors over the course of a few days, then bus resets and finally a failure. however, these are on large systems with external controllers and mainframe class operating systems (one of them is a pharmacy management system. yes, lots of 9s required.)

for most of today i’ve plundered the wdc web site. they are pretty consistent about not using desktop class drives in a raid setting. maybe it’s marketing. but had i given the drive purchase more thought (and done research into the intel controller), i could have bought enterprise class drives for 20 bucks more per drive. i did finally get MD raid up last night without the intel matrix hardware. mdadm looks quite decent. i’m really tempted to run with it. but if i’m to believe what i see on the wdc site, i’ll have issues sooner or later. so i think i’m going to go with single drives. live and learn.
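for anyone who finds this thread later: checking the health of an mdadm mirror afterwards is simple enough. a minimal sketch - /dev/md0 is just an example name, and it needs the mdadm package:

```python
#!/usr/bin/env python3
"""Minimal sketch: report whether an mdadm array is clean or degraded.
/dev/md0 is just an example device name; needs the mdadm package."""
import subprocess
import sys

ARRAY = "/dev/md0"   # example array name

# 'mdadm --detail' prints a "State :" line, e.g. "clean" or "clean, degraded".
out = subprocess.run(["mdadm", "--detail", ARRAY],
                     capture_output=True, text=True, check=True).stdout

state = next((line.split(":", 1)[1].strip()
              for line in out.splitlines() if "State :" in line), "unknown")
print(ARRAY, "state:", state)
sys.exit(1 if "degraded" in state else 0)
```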

this has been an education. thanks to all for the responses.

ewhite20 wrote:

>
> pjessen;2210202 Wrote:
>> ewhite20 wrote:
>>
>> > i’m also wondering how these drives will fare with the software
>> raid.
>> > western digital claims they make two kinds of drives - those for
>> raid
>> > environments and those for desktop non-raid environments. the
>> > difference being in error recovery.
>>
>> Actually, I think you’ll find the difference is in MTBF and
>> duty-cycle.
>>
>> The drives intended for use in RAID have an expected 24 hours/day
>> duty-cycle and their MTBF is calculated with that as a factor. Same
>> for the desktop drives, but here the duty-cycle is assumed to be 8
>> hour per day.
>>
>>
>>
>> –
>> Per Jessen, Zürich (25.6°C)
>> ‘User:Pjessen - openSUSE’ (http://en.opensuse.org/User:pjessen)
>
> go to the knowledge base area of western digitals website (wdc.com)
> and search for desktop enterprise raid. look for the article
> titled “the difference between…”.
>

Personally I prefer the information from the datasheets over the
marketing stuff. YMMV.


Per Jessen, Zürich (24.5°C)
http://en.opensuse.org/User:pjessen

gogalthorp wrote:

>
> I should have said FAKE RAID is a pain. Just look at all the posts
> here that deal with it.

Agree.

> Lots of things can cause a dead Workstation not just a bad drive! You
> are far more likely to lose a power supply then a drive.

Not in my experience. A desktop drive typically has a three-year
warranty (often equal to its lifetime), a server drive usually five
years. My power supplies always outlast my disk drives. Cheap fans
often go fairly quickly, but I gave up on cheap fans long ago :-)

> Also drives tend to degrade rather then fail catastrophically. So you
> have warnings and can replace a backup and replace the drive.

Very true if you run active SMART monitoring. We do, but when the cost
of a drive is less than a man-hour, it’s still a good investment, IMHO.
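
A minimal sketch of what an active check might look like - it just asks smartctl for each drive’s overall-health verdict; the device names are examples and it assumes the smartmontools package is installed:

```python
#!/usr/bin/env python3
"""Minimal sketch: poll the SMART overall-health verdict of each member disk.
Device names are examples; needs root and the smartmontools package."""
import subprocess

DISKS = ["/dev/sda", "/dev/sdb"]   # example member disks of the mirror

for disk in DISKS:
    # 'smartctl -H' prints the drive's overall-health self-assessment result.
    result = subprocess.run(["smartctl", "-H", disk],
                            capture_output=True, text=True)
    verdict = "PASSED" if "PASSED" in result.stdout else "CHECK THIS DRIVE"
    print(disk + ":", verdict)
```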

> that is “at the cost of disk space” because the data is redundant you
> have 1/2 or less of the space you paid for.

Well, I guess it’s a matter of one’s perspective. I’m paying for the
redundancy, not the space - I get exactly what I’m paying for,
space+redundancy.

> There are places that RAID makes sense. Where life is threatened you
> need 5 9s uptime. For a typical desktop not so much.

For a typical desktop in the back office, I agree - for developer
workstations, I think the extra EUR100 is easily worth it.


Per Jessen, Zürich (24.6°C)
http://en.opensuse.org/User:pjessen

ewhite20 wrote:

> for most of today i’ve plundered the wdc web site. they are pretty
> consistent about not using desktop class drives in a raid setting.
> maybe it’s marketing.

No, I don’t think it is. Partially perhaps, but a desktop drive really
was not intended to have the same duty-cycle as a server drive.


Per Jessen, Zürich (24.4°C)
http://en.opensuse.org/User:pjessen