software raid howto..

Hi forum members,

I am using openSUSE 11.1. I am a RAID newbie and would like to know if anyone has a simple software RAID howto that I could follow. I would like to RAID two 1-gig SATA drives.

rgds
Miguel

The YaST partitioner can guide you through this.

Hi,

I have set up a software RAID 1 with the YaST partitioner. Just create two partitions of exactly the same size and do not format or mount them. Then select the RAID menu and create a new RAID with the two partitions. Near the top you can choose the type of RAID.
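For anyone curious what YaST is doing under the hood, the command-line equivalent is roughly this. Just a sketch: the partition names /dev/sda1 and /dev/sdb1, the array name /dev/md0, and the mount point /mnt/raid are only examples, so substitute your own.

# create the mirror from the two equally sized, unformatted partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# put a filesystem on the new array and mount it
mkfs.ext3 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid

# record the array in mdadm.conf so it is assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf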

I had certain issues with my software RAID. Sometimes “access beyond end of device” errors occurred according to /var/log/messages, and an fsck.ext3 found errors. Can you please have a look at that when your RAID is up? I would really like to solve that issue… Thanks!

Regards
Volker

Hi all!

I’m a newbie too, please help me!

I have a working openSUSE 11.1 system on a single 250GB hdd and I have another drive as well; I’d like to build a RAID 1 array. Is it possible to build it with YaST, or do I have to re-install the system?

My advice:

Read several articles first. There are a number of links below. Understand the terminology and concepts. Go slowly. Don’t try to do everything at once. Reboot often and check your work. It can’t hurt.

Test, test, test, and test some more. Break it. Rebuild it. Then you can consider putting your faith in it. I have not tried to do an install of openSUSE on a RAID set, though I understand that it can be done.

What I have done that is least aggravating:

-YAST - Create partitions on each drive setting type to RAID, no formatting yet.

-Reboot

-YAST - Create RAID set and choose FS for formatting.

-Reboot

-Create directories for mounting and set appropriate permissions.

-YAST - Edit RAID set to apply mount point(s).

-Reboot

-Test, test, test. Break and rebuild (see the command sketch below).
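For the break-and-rebuild testing, something like this works from a shell. A rough sketch only: /dev/md0 and /dev/sdb1 are example names, substitute your own array and member partition.

# check the state of all md arrays
cat /proc/mdstat
mdadm --detail /dev/md0

# simulate a disk failure, then remove the “failed” member
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# add it back and watch the rebuild progress in /proc/mdstat
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat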

I have built several sets. Other than the first (<:-p), a recent one gave me fits until I realized a failing hard drive was accumulating bad sectors. My only experience is building with empty disks and then moving data to the new RAID set. However, one can easily build with empty partitions on a disk whose other partitions are in use.

I prefer having identical disks for building sets; however, I do have one set working flawlessly with two drives of different sizes. The remaining space on the larger drive is used only for some of my least important data and backups from different media.

If you desire to install openSUSE on a RAID set, check out this:

How to install openSUSE on software RAID - openSUSE

Hope these help:

Linux SATA RAID FAQ

OpenSUSE 11.1 Reference - 2.3 Soft RAID Configuration

How to install openSUSE on software RAID - openSUSE

Creating a RAID-1 on OpenSUSE (en) - http://www.roumazeilles.net/news/en/wordpress/2007/02/01/creating-a-raid-1-on-opensuse/

The Software-RAID HOWTO

Linux Raid - Linux-raid

Thanks for the answer, I’ll try the RAID after I’ve read all of the HowTos.
May I ask here if I don’t understand something?

thanks a lot!

I have to build a RAID 1 fileserver too. I’ll choose Linux because the filesystems are superior. (For everything else, I prefer Unix’s cleaner design.)

I’m concerned about the filesystem:

  • reiser is less and less supported and probably won’t be developed any more (Hans Reiser is in jail for a long time).
  • ext3 was yesterday (?)
  • ext4 had the “delayed allocation” bug, which can result in data loss (files truncated to zero length), and even a RAID cannot help in this matter.

This bug* is fixed in kernel 2.6.30 and possibly in patched 2.6.29 kernels (as in Fedora, to be released in about a week). I just read in c’t (an excellent German magazine) that the 2.6.28 kernel shipped with the latest Ubuntu (Jaunty) has been patched against that bug as well. So a very simple (but quite important) question: is the 2.6.27 kernel shipped with openSUSE 11.1 affected by that bug (I’m afraid so), and is it going to be patched any time soon?
I would prefer openSUSE (I already have an openSUSE fileserver, but it’s not RAID), because I know openSUSE better than Fedora, and Ubuntu is nice… but maybe not as a fileserver. :wink:

I don’t have much experience with raid either. So I will read, try, test and post results and questions here. I’m not sure when I’ll start however, since too many systems keep me busy at the moment.

* I know, it's not a bug but a feature misused by badly written applications. But people who lose data because of that don't care about what causes the mess.

@ dicktater
Thanks for the links.

And just like yesterday, ext3 is still working well for me.

Here’s something new (to clueless me) that I discovered whilst setting up a new array. When I rebooted after formatting the new array, I noticed disk activity on the two new drives in Gkrellm. WTF? sez me. So, I ran “top” and found the md2_resync process running but, thankfully, using only a marginal amount of processor time. So, I ran:

cat /proc/mdstat

This was something I had not seen before (heck, I wasn’t looking for it). Unless I was careless, I don’t recall resync being discussed in any of the literature that I read, and I certainly didn’t notice it before.

mdstat showed that the resync in progress would take about 2.5 hours to complete, even though I had yet to copy any data to the drives. OK, I know: this is a block-level process, so it matters not what the blocks contain. I’ll assume that it is best to wait until resync finishes its first pass before attempting to write data to the new array. Seems like a good way to pre-test drives for bad blocks.
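In case it helps anyone else, here is how the resync can be watched and, if you like, throttled. A sketch only: /dev/md2 is my array name, and the speed limits are system-wide kernel settings in KB/s, so the numbers are just examples.

# watch resync progress, refreshing every few seconds
watch -n 5 cat /proc/mdstat

# current system-wide resync speed limits (KB/s)
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# raise the minimum if you want the initial sync to finish sooner
echo 50000 > /proc/sys/dev/raid/speed_limit_min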

In looking up info on resync, I found this:

“(Incidentally I would recommend against making a RAID array from several drives from the same manufacturer. Especially if they’re the same model. Even more so if they’re the same batch.)”
Pathetic SATA performance | debianHELP

Huh? So, where did I get the idea that it was best to use identical drives? I guess I can understand avoiding drives from the same production run, but trying to match specs across different drives is not fun.

Here’s a question or two for anyone with more RAID experience than I:

I understand the advantage of having multiple partitions for better data organization and for using different filesystems to meet specific purposes. However, in my circumstance with my file server, I currently have six SATA drives (the mobo has six onboard SATA ports): 300, 400, 500, 500, 1000, and 1000 GB. I have created the following RAID 1 arrays:

md0 = 300GB (with a 100GB partition left over on one disk)
md1 = 500GB (identical drives)
md2 = 1000GB (identical drives)

I am not using LVM. It is part of my curriculum futuro, though. Maybe it should precede NFS? I dunno.

My concern is primarily redundancy. I currently organize my data to arrays with a strict and deep directory hierarchy. My intent is to replace the smaller arrays over time as needs dictate and money allows.

Are there any other reasons why one would break a large disk into more than one partition before setting up RAID? Given my situation above, what are some advantages and/or disadvantages to each of the scenarios below?

Examples:

2 1TB drives —> 1 @ 1000GB partition each

2 1TB drives —> 2 @ 500GB partitions each

2 1TB drives —> 4 @ 250GB partitions each
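To make the scenarios concrete, the second one would be built roughly like this. A sketch with made-up device names (/dev/sdc and /dev/sdd) and array names (md3, md4, since md0-md2 are taken here); the partitions would be created first with your tool of choice and typed as Linux RAID.

# each 1 TB disk carries two ~500 GB Linux RAID partitions:
# sdc1 + sdd1 -> md3, sdc2 + sdd2 -> md4
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2

One practical difference I can see: a whole-disk failure still degrades every array that has a member on that disk, but each smaller array resyncs independently, so a problem confined to one partition only forces a rebuild of that one array.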

For such things I recently discovered LVM, and boy, is it great!

You can always install it on top of RAID :slight_smile:

I was struggling to understand how LVM works, but now all I can say is WOW. The greatest thing about it is the idea of instant snapshots for backing up. You don’t have to turn off the system or anything to make a backup: simply snapshot, tar the snapshot, and off we go, back to running our server/desktop.
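A snapshot backup goes roughly like this. Just a sketch: the volume group vg0, the logical volume data, the snapshot size, and the /backup destination are made-up examples, and the VG needs enough free extents for the snapshot.

# create a snapshot of the live volume (its size is how much change it can absorb)
lvcreate --size 5G --snapshot --name data-snap /dev/vg0/data

# mount the snapshot read-only and archive it while the system keeps running
mkdir -p /mnt/snap
mount -o ro /dev/vg0/data-snap /mnt/snap
tar czf /backup/data-$(date +%F).tar.gz -C /mnt/snap .

# clean up
umount /mnt/snap
lvremove -f /dev/vg0/data-snap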

It CAN get a bit messy when you look at it, so it’s best to plan the partitioning on a piece of paper if you intend to use RAID together with LVM :slight_smile:

And if you ever add another disk, it’s as easy as connecting it, turning on the PC, and “extending” the LVM onto that disk :slight_smile:
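Extending onto a new disk is roughly this. A sketch again: /dev/sde, vg0, data, the +500G growth, and ext3 as the filesystem are all assumptions, so adjust to your setup. (On a RAID setup you would of course pvcreate the new md device rather than a bare disk.)

pvcreate /dev/sde                  # initialise the new disk as a physical volume
vgextend vg0 /dev/sde              # add it to the existing volume group
lvextend -L +500G /dev/vg0/data    # grow the logical volume
resize2fs /dev/vg0/data            # grow the ext3 filesystem to match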

Here is a simple HowTo from TuxRadar:
LVM made easy | TuxRadar