LVM/RAID troubles

I am trying to build a software RAID array from several WD10EADS 1 TB drives in an external enclosure connected via a Silicon Image PCIe card.

The card has an issue with the server at boot time (a "fragmented BIOS" error), but the drives are seen in the OS (openSUSE 11.0).

Regardless, all four drives (sdb, sdc, sdd, and sde) are seen as 931.5 GiB, which is the expected binary size of a 1 TB disk.

I tried RAID 5 and was disappointed with the performance, so I have been tweaking settings such as the read-ahead (blockdev --setra) and the stripe size, and have even tried RAID 10.
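For reference, the read-ahead tuning I've been experimenting with looks roughly like this (the device names are from my setup, and the 8192-sector value is just one of the settings I tried, not a recommendation):

```shell
# Check the current read-ahead (in 512-byte sectors) on the array
blockdev --getra /dev/md3

# Raise the read-ahead on the md device and on each member disk
# (8192 sectors = 4 MiB)
blockdev --setra 8192 /dev/md3
for d in /dev/sd[b-e]; do blockdev --setra 8192 "$d"; done
```

Note that --setra does not persist across reboots, so I re-apply it after each test cycle.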

However, there are times when I try to remove the LV and it does not work, forcing me to change runlevels before I can force the removal. I also unmount the volume via YaST, and on reboot I get errors about mounting a non-root volume. I can fix this easily by removing the line from /etc/fstab, but I would expect this to be cleaned up automatically: if YaST adds the fstab entry when the volume is created, destroying the volume should remove that entry as well.
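The removal sequence I'm attempting is roughly the following (the VG/LV names here are placeholders, not my actual names):

```shell
# Unmount, deactivate, then remove the logical volume
umount /dev/vg0/data
lvchange -an /dev/vg0/data   # deactivate first so lvremove doesn't report the LV as in use
lvremove /dev/vg0/data

# If umount fails with "device is busy", find what still holds it open
fuser -vm /dev/vg0/data
```

Even after this, the /etc/fstab line has to be deleted by hand, which is the cleanup I'd expect YaST to do.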

When I removed the md device (/dev/md3) on this machine and tried to rebuild the array with different parameters (the f2 RAID 10 layout instead of the default n2), the device was not "letting go" of the drives: on recreation of the array, one or more drives showed up as owned by another md device. I ran mdadm --zero-superblock on all four devices and tried to rebuild again. I switched to the f2 layout and also changed the chunk size from the default 64K to 256K and 512K on different attempts, but hit the same issues.

Also, on the initial creation with the defaults (n2 layout, 64K chunks), the drives went into a sync, as shown in /proc/mdstat. The rebuild as RAID 10 did not start the sync, instead listing it as pending.
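The teardown-and-recreate sequence I've been using looks roughly like this (device names are from my setup; the chunk size shown matches one of my attempts):

```shell
# Stop the array completely before wiping the member superblocks;
# an array that is still assembled will keep "owning" its drives
mdadm --stop /dev/md3
mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Recreate as RAID 10 with the far-2 layout and a 256K chunk
mdadm --create /dev/md3 --level=10 --layout=f2 --chunk=256 \
      --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial sync; this is where I see "resync=PENDING"
# instead of an active rebuild
cat /proc/mdstat
```

My understanding is that superblocks should only be zeroed after the array is stopped, so I may have wiped them while /dev/md3 was still assembled on some attempts, but I'm not certain that explains the pending sync.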

What gives?