After a few months of research I still have a couple of questions about a RAID 5 array I plan to build soon.
System specs:
Asus P5Q Pro
Intel E6750
2x 1GB DDR2 800
Gb Ethernet LAN
openSUSE 11.0 64-bit
This is my personal server. I mainly use it for NAS and media functions (encoding, streaming, etc.). All my LAN hardware is gigabit and I do intend to use those speeds (even now, without the RAID, I push 300-400 Mb/s over the network).
I want to build a ~2 TB RAID 5 array (I’m still deciding between 4x 500, 750, or 1024 GB drives). BIOS RAID is not an option. I am not sure whether to use kernel RAID (pure software, the md driver managed with mdadm) or go with full hardware RAID. I would like a single large volume, and I would like throughput close to (or in excess of) 1 Gb/s.
The first question: will kernel RAID be able to provide the target 1 Gb/s (~125 MB/s) throughput? I know the hardware solution will be faster (and use less CPU), but if the software solution is fast enough I don’t see the need to buy a $500 RAID controller just yet.
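For reference, here is roughly what I understand the kernel RAID setup would look like with mdadm (the device names are just placeholders for my four drives, and the 64 KiB chunk size is only an example):

  # create a 4-drive RAID 5 array with a 64 KiB chunk size
  mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # watch the initial resync progress
  cat /proc/mdstat

If I have that right, the appeal is that the whole thing lives in the stock kernel with no proprietary driver involved.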
Second, do large partitions/volumes (over 2 TB) work? I have not been able to find a conclusive answer to this. I know traditional MS-DOS/MBR partition tables do not support partitions over 2 TB. The other confusion is that "partition" and "volume" seem to be used interchangeably. As I understand it, partitions are the way a volume is divided up: say you have a 500 GB hard drive; the volume would be 500 GB, but you could create up to 4 primary partitions with a total size of up to 500 GB. I think this concerns hardware RAID more than kernel RAID, since a hardware controller would present the OS with a single very large hard drive of over 2 TB. I will probably put the OS on a different volume because of the size issues, so this large volume is just for storage (mainly multimedia).
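If it helps, my current understanding is that a GPT disk label gets around the 2 TB limit of the old MS-DOS partition table. Something like this (parted syntax from memory, so treat it as a sketch; /dev/sdb stands in for whatever device the controller presents):

  # create a GPT label so a partition can exceed 2 TB
  parted /dev/sdb mklabel gpt
  parted /dev/sdb mkpart primary 0% 100%
  parted /dev/sdb print

With kernel RAID I believe no partition table is even required; you can put the filesystem directly on /dev/md0.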
If I go hardware, I am trying to decide between the HighPoint RocketRAID 3510 (Newegg.com - HighPoint RocketRAID 3510 SATA II Hardware RAID Controller with Intel 2Nd Generation PCI-Express I/O Processor RAID 0/1/5/6/10 JBOD - Controllers / RAID Cards) or the 3ware 9650SE-8LPML (Newegg.com - 3ware 9650SE-8LPML PCI Express SATA II Controller Card RAID Levels 0, 1, 5, 6, 10, 50, Single Disk, JBOD - Controllers / RAID Cards). I am leaning towards the 3ware because their other products seem to have good Linux support. I also want the RAID controller to be natively supported by the kernel; I have had bad luck with other (admittedly cheap) RAID solutions whose proprietary drivers did not work.
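One thing I figure I can check up front is whether the driver already ships with the stock kernel. If I understand correctly, the 3ware 9000 series uses the 3w-9xxx module, so something like:

  # check whether the 3ware 9000-series driver ships with the kernel
  modinfo 3w-9xxx

  # after installing the card, confirm it was picked up
  lsmod | grep 3w
  dmesg | grep -i 3ware

If modinfo finds the module on a stock openSUSE kernel, that would settle the "natively supported" question for me.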
In the future I will have to add another SATA/RAID adapter anyway, since the P5Q Pro only has 6x SATA connectors (for future expansion I intend to add a second RAID 5 array when I fill this first one, then cycle out the arrays as needed). This would also be an argument for hardware RAID. One concern with hardware RAID is that reported performance seems to be mixed: some people claim nice high speeds while others claim speeds as low as 8 MB/s (note that I have tested most of the drives I am considering and clocked real-world sustained speeds of about 60 MB/s). The array needs to sustain at least 125 MB/s. The workload is mostly reads, although it does do a fair share of writing (the data has to get there somehow).
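Once the array is up, I plan to sanity-check the 125 MB/s target with something crude but simple like this (paths and sizes are placeholders; dd is not a proper benchmark, but it is good enough for sustained sequential throughput):

  # raw sequential read from the array device
  hdparm -t /dev/md0

  # sequential write of a 4 GB file, bypassing the page cache
  dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=4096 oflag=direct

  # sequential read of the same file
  dd if=/mnt/raid/testfile of=/dev/null bs=1M iflag=direct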
Also, which file system should I use? Should I stick with ext3 or try something else? I have only used ext2/3 and ReiserFS, so I don’t know a whole lot about the others out there (I have seen XFS mentioned a lot with RAID 5 arrays).
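If I do go with XFS on kernel RAID, my understanding is that the filesystem should be aligned to the stripe geometry. For a 4-drive RAID 5 with a 64 KiB chunk that would be 3 data disks, so something like:

  # su = chunk size, sw = number of data disks (4 drives minus 1 parity)
  mkfs.xfs -d su=64k,sw=3 /dev/md0
  mount /dev/md0 /mnt/raid

I have also read that mkfs.xfs can detect md geometry automatically, but being explicit should not hurt. Corrections welcome if I have the su/sw math wrong.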
Any comments, suggestions, and/or advice are welcome. The purpose of this thread is to figure out what I should buy before I invest $500-1K in hardware. I know things like backups will be a pain with an array this large (they already are, and I only have 600 GB of data…).
One last thing I have been pondering but could never find any mention of anyone attempting: some of the Western Digital GP drives support multiple RPM speeds (5400 or 7200). From what I understand they idle at 5400 RPM and spin up to 7200 RPM when accessed. I have one in an external enclosure and it works nicely, but I am wondering what that would do to a RAID array. My guess is it would be like building an array from a mix of 5400 RPM and 7200 RPM drives. The power (and cooling) savings would be nice, but intuitively I think it would be bad for the array. Any ideas why no one has attempted this (or at least not admitted to trying it)?