
Thread: dmraid vs mdadm and RAID Status Monitoring

  1. #1
    Join Date
    Apr 2011
    Location
    Vancouver, BC, Canada
    Posts
    100

    Default dmraid vs mdadm and RAID Status Monitoring

    Hello,

    I'm pretty new to the question of FakeRAID versus JBOD with software RAID, and could use some help/advice. I recently installed 11.4 on a brand-new server system that I pieced together, using Intel ICH10R RAID on an ASUS P8P67 Evo board with a 2500K Sandy Bridge CPU. I have two RAID sets configured: one RAID-1 mirror for the system drive and /home, and the other consisting of four drives in RAID-5 for a /data mount.

    Installing 11.4 seemed a bit problematic. I ran into this problem: https://bugzilla.novell.com/show_bug.cgi?id=682766. I magically got around it by installing from the live KDE version with all updates downloaded before the install. When prompted, I specified that I would like to use mdadm (it asked me), however it proceeded to set up a dmraid array. I suspect this is because I have FakeRAID enabled in the BIOS. Am I correct in this? Or should I still be able to use mdadm with BIOS RAID enabled?

    Anyway, to make a long story short, I now have the server mostly running with dmraid installed instead of mdadm. I have read many stories online that seem to indicate that dmraid is unreliable compared to mdadm, especially when used with newer SATA drives like the ones I happen to be using. Is it worth re-installing the OS with the drives in JBOD and then having mdadm configure a Linux software RAID? Are there major implications either way if I keep dmraid or switch to mdadm?

    Finally, what could I use to monitor the health and status of a dmraid array? mdadm seems to have its own monitoring associated with it, from what I saw glancing over the man pages.
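
    From glancing over those man pages, something like this looks like it should work, though I haven't actually tried it yet:

    Code:
    # Status of the BIOS/FAKE RAID sets as dmraid sees them
    dmraid -s

    # mdadm's built-in monitor mode; can run as a daemon and mail
    # alerts (assumes local mail delivery is set up)
    mdadm --monitor --scan --daemonise --mail=root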

    Thanks for the advice/help!

  2. #2
    Join Date
    Nov 2009
    Location
    West Virginia Sector 13
    Posts
    16,288

    Default Re: dmraid vs mdadm and RAID Status Monitoring

    The problem is that FAKE RAID does not work well with Linux. You may be able to find a driver for your chipset, and maybe not. All RAID that is integrated on the motherboard seems to be the FAKE kind; at least I've not heard of another. There are two ways to get RAID to work with Linux: real hardware RAID and software RAID. Note that Linux software RAID will not work with Windows.

  3. #3
    Join Date
    Apr 2011
    Location
    Vancouver, BC, Canada
    Posts
    100

    Default Re: dmraid vs mdadm and RAID Status Monitoring

    It seems as though dmraid is fully working with my Intel ICH10R FAKE RAID. I tested it simply by pulling a drive. openSUSE noticed the issue immediately and CPU usage shot up, but everything was still operational. It didn't notify me, however, that the RAID had failed. I plugged the drive back in, and it re-synced and continues to work fine.

    I'm only running openSUSE, so Windows RAID compatibility is not an issue. I could see this being very problematic if I were running a dual boot, though.

    Are there any performance gains from running dmraid (FAKE RAID) over the 100% Linux software RAID (mdadm)? What about recovering from a RAID controller failure? If I found another Intel FAKE RAID controller, would it be easy to restore the drive config and get back up and running? Is it easy to restore a software RAID onto any JBOD controller?

  4. #4
    Join Date
    Nov 2009
    Location
    West Virginia Sector 13
    Posts
    16,288

    Default Re: dmraid vs mdadm and RAID Status Monitoring

    Well, good to know. It seems hit or miss whether any given FAKE RAID chipset will work. You hit on a key point about any hardware solution: in most cases you have to replace any failed hardware with the exact same hardware to be assured of a functioning array. Software solutions tend to be more standards-based.

  5. #5
    Join Date
    Apr 2011
    Location
    Vancouver, BC, Canada
    Posts
    100

    Default Re: dmraid vs mdadm and RAID Status Monitoring

    Hmmm.... so essentially, it's worth going with software RAID over FAKE RAID just on the possibility of my motherboard and/or BIOS RAID controller failing. I'm guessing that Intel RAID solutions are not standardized? That would just seem to make too much sense.... It would be nice if I could find some documentation on restoring a FAKE RAID to another Intel FAKE RAID controller... but right now Google's failing me.

    Is it fairly straightforward to restore a JBOD Linux software RAID? I've never played around with Linux software RAID, so I really have no idea what it entails....

  6. #6
    Join Date
    Nov 2009
    Location
    West Virginia Sector 13
    Posts
    16,288

    Default Re: dmraid vs mdadm and RAID Status Monitoring

    That's the problem: only Intel can say. Most hardware solutions, though they follow a similar pattern, may do things a little differently to distinguish themselves from their competitors.

    Software RAID works fine. But all RAID requires more love and care than non-RAID.
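
    One advantage of mdadm here: it writes its metadata into a superblock on each member disk, so the disks can be moved to any plain controller and reassembled there. Roughly like this (a sketch; device names will differ on your box):

    Code:
    # Inspect the md superblock on a member disk to see which
    # array it belongs to
    mdadm --examine /dev/sdc

    # Scan all disks and reassemble any arrays that are found
    mdadm --assemble --scan

    # Watch the arrays come back
    cat /proc/mdstat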

  7. #7
    Join Date
    Apr 2011
    Location
    Vancouver, BC, Canada
    Posts
    100

    Default Re: dmraid vs mdadm and RAID Status Monitoring

    So I've re-installed openSUSE as a software RAID. I disabled the Intel FAKE RAID and set up my 6 disks as JBOD. The layout is as follows:

    Code:
    /dev/md0    /boot    100 MB    (sda1 and sdb1 - RAID 1)
    /dev/sda2   swap     4 GB
    /dev/sdb2   swap     4 GB
    /dev/md1    /        20 GB     (sda3 and sdb3 - RAID 1)
    /dev/md2    /home    900 GB    (sda4 and sdb4 - RAID 1)
    /dev/md3    /data    5.46 TB   (sdc, sdd, sde, and sdf - RAID 5)
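    (For reference, I believe the equivalent arrays could be created by hand with something like the following; the installer did all of this for me, so this is just a sketch using the device names above.)

    Code:
    # Mirror pairs for /boot, / and /home
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4

    # Four-disk RAID-5 for /data
    mdadm --create /dev/md3 --level=5 --raid-devices=4 /dev/sdc /dev/sdd /dev/sde /dev/sdf
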
    This took several tries, as the KDE Live and Net installers did not want to install this setup for whatever reason (they kept crashing at the end of the install). The full install version of 11.4 did work fine. However, I have a couple of notes on this:

    1. During the install, the installer did not prompt me to use mdadm like it did when the FAKE RAID controller was active. I was able to specify the RAID through the partitioner, which seemed to work okay.

    2. Once installed, my CPU usage is a lot higher than it was when running dmraid. It typically sits at 12 to 15% for the md_raid processes when the computer is idle.

    3. My hard drive I/O is thrashing very badly. All drives are continually active for some reason, even when I'm not running any programs and am just logged into the machine. I/O across each RAID does not appear to be in sync when I monitor the activity in System Monitor.

    The last two points concern me a lot. I'm seriously contemplating taking my chances with dmraid for the performance gain and to stop my drives from thrashing so much.
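
    I haven't dug into per-device numbers yet; I assume something like iostat (from the sysstat package) would show which drives are actually busy:

    Code:
    # Extended per-device I/O statistics, refreshed every 2 seconds
    iostat -x 2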

    Thoughts and suggestions?

  8. #8
    Join Date
    Jun 2008
    Location
    UTC+10
    Posts
    9,683
    Blog Entries
    4

    Default Re: dmraid vs mdadm and RAID Status Monitoring

    I have RAID1 and I don't have issues with CPU usage at idle. Did you check which processes are using the CPU with htop or top? Maybe you have a document indexer running in the background?

  9. #9
    Join Date
    Apr 2011
    Location
    Vancouver, BC, Canada
    Posts
    100

    Default Re: dmraid vs mdadm and RAID Status Monitoring

    Here's the start of my top output....

    Code:
    top - 19:41:39 up  4:29,  1 user,  load average: 2.89, 2.87, 2.95
    Tasks: 132 total,   2 running, 130 sleeping,   0 stopped,   0 zombie
    Cpu0  :  0.0%us,  8.5%sy,  0.0%ni, 86.7%id,  3.7%wa,  0.0%hi,  1.0%si,  0.0%st
    Cpu1  :  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu2  :  0.4%us,  0.4%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu3  :  0.3%us,  0.3%sy,  0.0%ni, 91.7%id,  7.6%wa,  0.0%hi,  0.0%si,  0.0%st
    Mem:     16059M total,     1491M used,    14568M free,       75M buffers
    Swap:     8189M total,        0M used,     8189M free,      991M cached
    
      PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND 
      914 root      20   0     0    0    0 S   12  0.0  24:44.94 md3_raid5 
      941 root      20   0     0    0    0 D    9  0.0  19:02.27 md3_resync 
      939 root      20   0     0    0    0 D    2  0.0   6:02.72 md2_resync 
      889 root      20   0     0    0    0 S    1  0.0   4:45.27 md2_raid1  
    13447 root      20   0     0    0    0 S    1  0.0   0:03.67 kworker/0:0  
      331 root      20   0     0    0    0 S    0  0.0   0:01.26 md1_raid1 
        1 root      20   0 12460  688  560 S    0  0.0   0:00.89 init      
        2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd  
        3 root      20   0     0    0    0 S    0  0.0   0:00.30 ksoftirqd/0 
        6 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0 
        7 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0 
        8 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1
    It seems pretty obvious that the RAID is dictating the CPU usage.... I've had the box just sitting for ~5 hours now, and it's still thrashing. All rpm updates have been done too. If you take a look at the load averages, you can see they're pretty high for a box that is just on and not doing anything yet.

  10. #10
    Join Date
    Jun 2008
    Location
    UTC+10
    Posts
    9,683
    Blog Entries
    4

    Default Re: dmraid vs mdadm and RAID Status Monitoring

    Maybe it's just syncing; check the output of cat /proc/mdstat. Once synced, it will idle.
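
    If it is still syncing, you'll see a per-array progress bar in there. You can also throttle the background resync while you work on the box (these are the standard md sysctls, in KB/s per device):

    Code:
    cat /proc/mdstat

    # Cap the resync rate so the box stays responsive; raise it
    # again afterwards so the sync finishes sooner
    sysctl -w dev.raid.speed_limit_max=10000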
