Hi,
I have set up a software RAID with mdadm. It consists of 4 HDDs (one Samsung HD203WI and three HD204UI), each 2 TB.
When I run benchmarks with bonnie++ or hdparm, I get about 60 MB/s write speed and 70 MB/s read speed. Each single drive in the array reads at > 100 MB/s when tested with "hdparm -t".
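As a rough sanity check (using my own per-drive hdparm figure, and assuming sequential RAID5 reads can use the three data disks in parallel), the array should be much closer to three times a single drive than to what I measure:

```shell
# Back-of-the-envelope expectation for sequential reads on a 4-drive RAID5:
# three data disks contribute in parallel, the parity blocks are skipped.
single_mb_s=100   # per-drive read speed from "hdparm -t"
data_disks=3      # 4 drives minus 1 for parity
ideal_read=$((single_mb_s * data_disks))
measured_read=70
echo "ideal ~${ideal_read} MB/s vs measured ${measured_read} MB/s"
```

So the array is slower than one of its member disks, which is what puzzles me.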
Here is some information about my setup:
# mdadm -D /dev/md0
/dev/md0:
Version : 1.0
Creation Time : Thu Aug 11 18:22:42 2011
Raid Level : raid5
Array Size : 5860539648 (5589.05 GiB 6001.19 GB)
Used Dev Size : 1953513216 (1863.02 GiB 2000.40 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Aug 25 04:51:32 2011
State : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-asymmetric
Chunk Size : 256K
Name : raid:0
UUID : 7457b089:6b1627d6:ce57b6fd:9a201cf6
Events : 1486299
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 49 1 active sync /dev/sdd1
2 8 65 2 active sync /dev/sde1
3 8 81 3 active sync /dev/sdf1
My chunk size is 256K; I also tried 128K, but the performance was bad then, too.
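For reference, I created the array roughly like this (reconstructed from memory, so treat the exact flags as a sketch rather than the literal command I typed):

```shell
# Sketch of the original array creation; device names as in the
# mdadm -D output above, --chunk is given in KiB.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      --chunk=256 --metadata=1.0 \
      /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
```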
# tune2fs -l /dev/md0
tune2fs 1.41.14 (22-Dec-2010)
Filesystem volume name: <none>
Last mounted on: /home
Filesystem UUID: 02a772b5-21f2-4ed8-be22-a9d4c7dda2e8
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 366288896
Block count: 1465134912
Reserved block count: 73256745
Free blocks: 426107103
Free inodes: 365087866
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 674
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
RAID stride: 64
RAID stripe width: 192
Flex block group size: 16
Filesystem created: Tue Dec 7 18:40:27 2010
Last mount time: Thu Aug 25 02:26:32 2011
Last write time: Thu Aug 25 02:26:32 2011
Mount count: 31
Maximum mount count: 32
Last checked: Thu Aug 11 18:23:52 2011
Check interval: 15552000 (6 months)
Next check after: Tue Feb 7 17:23:52 2012
Lifetime writes: 4370 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
First orphan inode: 180879512
Default directory hash: half_md4
Directory Hash Seed: db9eb473-727f-447e-abee-b226dc2bbcfc
Journal backup: inode blocks
I adjusted the stride and stripe-width values as described in this howto: https://raid.wiki.kernel.org/index.php/RAID_setup#ext2.2C_ext3.2C_and_ext4 (at least I think I did it correctly…).
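Spelling out the arithmetic from that howto (assuming I read it right): stride = chunk size / filesystem block size, and stripe-width = stride × number of data disks, which for RAID5 is the drive count minus one:

```shell
chunk_kib=256     # mdadm chunk size
block_kib=4       # ext4 block size (4096 bytes, from tune2fs above)
drives=4
data_disks=$((drives - 1))   # RAID5: one drive's worth of capacity is parity

stride=$((chunk_kib / block_kib))
stripe_width=$((stride * data_disks))
echo "stride=$stride stripe_width=$stripe_width"
# Matches the tune2fs output above: RAID stride 64, stripe width 192.
```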
# blockdev --getra /dev/md0
64
# blockdev --getra /dev/sd{c,d,e,f}
256
256
256
256
I tried different readahead values; the best for me was 64. That was a surprise, because I had read in many places that increasing this value would not hurt performance. In my case, with values other than 64 or 192, I get read speeds of ~30-40 MB/s. I went as high as 32768, but the performance was always bad.
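Note that blockdev reports readahead in 512-byte sectors, not bytes, so the values I tried are smaller than they might look; the sweep itself was done roughly like this (the setra/hdparm lines are shown as comments since they need root and the actual md device):

```shell
# blockdev --getra / --setra work in 512-byte sectors, not bytes.
ra_sectors=64
ra_kib=$((ra_sectors * 512 / 1024))
echo "readahead of ${ra_sectors} sectors = ${ra_kib} KiB"

# The sweep I ran, for reference (as root):
# for ra in 64 128 192 256 512 1024 4096 32768; do
#     blockdev --setra "$ra" /dev/md0
#     hdparm -t /dev/md0
# done
```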
# fdisk -l /dev/sd{c,d,e,f}
Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 × 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00028e50
Device Boot Start End Blocks Id System
/dev/sdc1 2048 3907028991 1953513472 fd Linux raid autodetect
Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 × 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004a004
Device Boot Start End Blocks Id System
/dev/sdd1 2048 3907028991 1953513472 fd Linux raid autodetect
Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 × 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008f827
Device Boot Start End Blocks Id System
/dev/sde1 2048 3907028991 1953513472 fd Linux raid autodetect
Disk /dev/sdf: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 × 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006c778
Device Boot Start End Blocks Id System
/dev/sdf1 2048 3907028991 1953513472 fd Linux raid autodetect
As far as I know, a start sector of 2048 means the partitions are correctly aligned.
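To double-check that claim: a start sector of 2048 puts each partition at a 1 MiB offset, which divides evenly by both the 4 KiB filesystem block and the 256 KiB chunk:

```shell
start_sector=2048
sector_bytes=512
offset=$((start_sector * sector_bytes))   # 1048576 bytes = 1 MiB
chunk=$((256 * 1024))                     # 256 KiB mdadm chunk

echo "partition offset: $offset bytes"
if [ $((offset % chunk)) -eq 0 ]; then
    echo "offset is a multiple of the chunk size -> aligned"
fi
```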
I'm using openSUSE 11.4 x64 with the latest patches from the update repositories, on the 2.6.37.6-0.7-desktop kernel.
I have 4 GB of RAM and an Atom D525.
What is wrong with my setup? I'm out of ideas…
Regards
pepe