Solution for RAID

Hi folks, I have to replace my hard drives and buy new ones. The question is: which plan has better performance?

One: 1TiB SATA 6Gb/s 64MiB buffer x2 RAID1+0
Two: 500GiB SATA 3Gb/s 32MiB buffer x3 RAID1+5

Thanks.

That’s a difficult question to answer based on what you’ve posted so far. What do you plan to use the RAID array for? Oh, and you can’t do RAID 1+0 with the first configuration – you won’t have enough drives to both mirror and stripe; you’ll need to buy 4 drives to get RAID 1+0. Also, maybe I don’t understand what you mean by RAID 1+5; RAID 5 stripes data across the drives with distributed parity for recovery, so I’m not sure what you mean by the 1+5 configuration.

Now, having said that, here are my very general thoughts. I’m sure controversy will erupt, but I work with customers day in and day out who demand high performance from their I/O subsystems, and hence I answer your question from that perspective.

1) If heavy write, use RAID 1+0.
2) If heavy read and very little write, use RAID 5.
3) If a mix, and you have the money to spend, then I strongly recommend 1+0 over 5 – I just refuse to accept the parity overhead; even if it is small, it’s there nevertheless.
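To put a rough number on that parity overhead: a small random write on RAID 1+0 costs 2 disk I/Os (one to each mirror), while on RAID 5 it costs 4 (read old data, read old parity, write new data, write new parity). A back-of-the-envelope sketch, assuming ~150 IOPS per SATA spindle (the IOPS figure is an illustrative assumption, not a measurement):

```shell
# Small-write penalty: RAID 1+0 = 2 I/Os per write, RAID 5 = 4.
spindle_iops=150
raid10_disks=4
raid5_disks=3
echo "RAID 1+0 write IOPS: $(( spindle_iops * raid10_disks / 2 ))"
echo "RAID 5  write IOPS: $(( spindle_iops * raid5_disks / 4 ))"
```

So even with one more drive than a mirror pair, RAID 5 can end up with roughly a third of the random-write throughput of a 4-drive RAID 1+0.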

So, in summary, RAID 1+0 if you want performance, and can afford 2x the number of drives. RAID 5 if you can’t or want to keep it simple. AND make sure that you spend the $$ and get a card that supports hardware RAID, not software-based RAID (that is, don’t go cheap if you can help it).

That’s my opinion, and I’m sticking to it!

Now, what is the storage going to be used for?

HTH…

Actually I’m using software RAID right now: mirrored RAID for boot and swap, and striped RAID for the root and home partitions. But I need to find out whether three SATA 3Gb/s disks or two SATA 6Gb/s disks are better for a striped array.

Well, assuming your hardware can actually sustain 6Gb/s from the disks, then of course the higher the link speed, the faster it’s going to be – in theory.
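Keep in mind that 6Gb/s is the SATA link speed, not what the platters can deliver – a single spinning disk usually sustains well under 200MB/s, so either interface is rarely the bottleneck. You can check what your drives actually do with a rough sequential-read benchmark (replace /dev/sda with your actual device; needs root):

```shell
# Buffered sequential read speed from the drive itself
hdparm -t /dev/sda

# Cached read speed (exercises the RAM/controller path, not the disk)
hdparm -T /dev/sda
```

If both drives measure similar real throughput, three slower spindles striped will generally beat two faster ones for sequential work.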

So to summarize:

you are using RAID 1 for boot + swap
you are using RAID 0 for / and /home?

HTH…

Yes, correct.

Hmm – more confused than ever. Is this just 1 drive, or are these partitions on different drives?

If it’s 2 different drives, why not do something like:

RAID 1 on the current drives, and perform a standard installation – those things don’t change much over time, and (other than, say ‘/var’), they’re fairly read-intensive. So you would get mirroring for the boot drive, and your system, and thus you could tolerate a drive failure. This scheme would be for things like /, /usr, /boot, etc. /var would be the only write-intensive filesystem on this array, and as a whole shouldn’t stress the drive throughput that much.
Mount /tmp in RAM if you have lots of memory.
RAID 1+0 (or RAID 5, eek!) for /home, and anything else that you access or use a lot, like MP3 files, media files, etc.
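A sketch of that layout with Linux software RAID (mdadm) – the device names, partition numbers, and tmpfs size below are illustrative assumptions, not your actual setup:

```shell
# RAID 1 (mirror) for the system: /, /usr, /boot, /var, etc.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# RAID 1+0 for /home and media (needs four partitions/drives)
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
      /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# /tmp in RAM: add a tmpfs line like this to /etc/fstab
# tmpfs  /tmp  tmpfs  size=2G,mode=1777  0  0
```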

That seems to make more sense to me, and unless I’m missing something, the system should really cook along (from an I/O perspective).

HTH…

Actually I have a problem with my SUSE 11.4 right now: KDE takes too long to boot and consumes all my RAM for disk cache. Is that connected to my current RAID configuration?

This is my RAID configuration, on two 500GiB SATA 3Gb/s drives:

100MiB RAID 1 for boot (100+100)
1GiB RAID 1 for swap (1+1)
30GiB RAID 0 for root (15+15)
Rest as RAID 0 for home
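For reference, you can confirm how the arrays are laid out on a running system (md device names are whatever your installer assigned):

```shell
# Which md devices exist, their RAID levels, and member disks
cat /proc/mdstat

# Per-array detail: chunk size, sync status, device roles
mdadm --detail /dev/md0
```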

Not necessarily. What is your machine configuration, how much RAM, CPU type and speed, what type of Video Card do you have, etc?

When you say “consume all my RAM”, how are you noting that? How long is “too long to boot”? It’s possible that you just need to disable some services.

Athlon 640 / Asus Crosshair IV Formula with 890FX / 8GiB RAM / nVidia GTS 250. It consumes approx. 6.8GiB of my RAM, takes about 2 min to boot, and the other problem is that sometimes KDE logs out automatically.

I doubt your performance issues are related to your system configuration; it sounds quite healthy.

I’d look into what services could be disabled, and what is hanging or taking so long during boot/startup. Also, check into disabling unneeded KDE services.
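On the RAM point: Linux deliberately fills otherwise-idle RAM with disk cache and gives it back the moment applications need it, so 6.8GiB “used” is normal and not itself a problem. A quick way to check, and to see what starts at boot (a sketch; exact output format varies by distro, and chkconfig applies to sysvinit-era systems like openSUSE 11.4):

```shell
# The "-/+ buffers/cache" line shows memory actually held by
# applications, with cache excluded – that is the number to watch.
free -m

# List services enabled at boot so you can prune the unneeded ones
chkconfig --list | grep ':on'
```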