
Thread: 12.3 with raid10

  1. #1

    Default 12.3 with raid10

Hello all. Fairly new to Linux but not to computers and networks. I used the Intel SATA RAID controller on a server to create a RAID10 array from 4x 2TB Western Digital RE4 hard disks, then installed openSUSE 12.3 on it. All is great, except that the RAID rebuilds after every reboot. For some reason the array is not marked clean before the system shuts down. I have read that having the root file system on the array can cause this, because the file system is unmounted before the array can be marked clean. "They" say you can modify a script to prevent this, but I have yet to see which script or what the modifications are (the closest thing I have found is sketched below, after my diagnostics). I would greatly appreciate any help on this. I have a production database on this server and would rather not rebuild it if I don't have to.

    mdadm --detail /dev/md126

    /dev/md126:
    Container : /dev/md/imsm0, member 0
    Raid Level : raid10
    Array Size : 3711641600 (3539.70 GiB 3800.72 GB)
    Used Dev Size : 1855820928 (1769.85 GiB 1900.36 GB)
    Raid Devices : 4
    Total Devices : 4

    State : active
    Active Devices : 4
    Working Devices : 4
    Failed Devices : 0
    Spare Devices : 0

    Layout : near=2
    Chunk Size : 64K


    UUID : dd2ab43b:37dd6ee2:8d78e0a9:ba8c4eec
    Number Major Minor RaidDevice State
    3 8 0 0 active sync /dev/sda
    2 8 16 1 active sync /dev/sdb
    1 8 32 2 active sync /dev/sdc
    0 8 48 3 active sync /dev/sdd

    cat /proc/mdstat

    Personalities : [raid10] [raid0] [raid1] [raid6] [raid5] [raid4]
    md126 : active raid10 sda[3] sdb[2] sdc[1] sdd[0]
    3711641600 blocks super external:/md127/0 64K chunks 2 near-copies [4/4] [UUUU]

    md127 : inactive sdc[3](S) sda[2](S) sdd[1](S) sdb[0](S)
    12612 blocks super external:imsm

    unused devices: <none>
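The closest thing I have found to the script "they" talk about is a systemd shutdown hook that asks mdadm to mark all running arrays clean before the final unmount. This is a sketch from my reading, not something I have verified on my box; I assume it would live at /usr/lib/systemd/system-shutdown/mdadm.shutdown (systemd runs executables in that directory late in shutdown) and would need to be marked executable:

Code:
#!/bin/sh
# ask mdadm to mark every running array clean as soon as possible,
# before the final unmount/poweroff
mdadm --wait-clean --scan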


    Thanks!

  2. #2
    Join Date
    Sep 2012
    Posts
    7,093

    Default Re: 12.3 with raid10

Did you install all available updates? There was an update for the 12.3 mdadm package; from the changelog it sounds like it may fix this.
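Something like this should show whether the updated package made it onto the box (standard zypper usage; I don't remember the exact patch name):

Code:
zypper refresh
zypper info mdadm      # compare the installed version with the repo version
zypper update mdadm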

  3. #3
    Join Date
    Jun 2008
    Location
    Netherlands
    Posts
    29,740

    Default Re: 12.3 with raid10

And a remark on the technique of the forums here.

As a new member (welcome!) you will not be aware of the CODE tags. The CODE tags are created by clicking on the # button in the tool bar of the post editor. You are kindly requested to copy/paste your computer text (where applicable: prompt, command, output and next prompt) directly from the terminal emulator in between CODE tags. That makes it stand out clearly from the "story telling", keeps the layout as it was in the terminal, and has further advantages in readability.

    Thank you for your cooperation.
    Henk van Velden

  4. #4

    Default Re: 12.3 with raid10

I applied all updates after the install via YaST and zypper. My mdadm version is:

    Code:
    mdadm --version
    mdadm - v3.2.6 - 25th October 2012
My md version is:

Code:
md --version
    mkdir (GNU coreutils) 8.17
    Copyright (C) 2012 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
    Are these up to date?
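Looking at that output again, I suspect the second command actually ran coreutils' mkdir through an alias. As far as I know the md RAID driver is part of the kernel itself, so I assume its version simply follows the kernel version:

Code:
uname -r       # the md driver ships with the kernel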

    Thanks!

  5. #5

    Default Re: 12.3 with raid10

I just checked another server that I use, which has 4x 250GB SSD drives in RAID10, also using the onboard Intel SATA controller, and it also rebuilds after reboots. This server has the same versions of the OS and mdadm on it. Have I built the RAID incorrectly? I built the arrays in the controller BIOS, then did the install of the OS. If I recall, the OS install noticed the setup in the controller and asked if I was OK with this. I am not sure of the exact message; it was a while ago. I said yes and all was well, until I noticed the resync after the reboots. Thanks!
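In case it helps diagnose this, my understanding is that the metadata and platform the installer picked up can be inspected like this (commands from the mdadm man page; device name as in the first post):

Code:
mdadm --detail-platform       # capabilities of the Intel (IMSM) controller
mdadm --examine /dev/sda      # on-disk metadata format of one member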

  6. #6

    Default Re: 12.3 with raid10

Made progress. After running all updates on one of the 12.3 boxes, my mdadm version is 3.2.6. I built a test box last night and bought 4x 500GB WD SATA3 drives for it. I went through the same procedure of building a RAID10 array from the Intel onboard controller on my MSI G series motherboard, installed 12.3 and updated it completely. It still marks the array dirty on reboot or shutdown. I then wiped everything and did the entire procedure over with openSUSE 13.1. That one does not mark the array dirty on reboot or shutdown. The mdadm version on the openSUSE 13.1 box is 3.3.

How can I update mdadm 3.2.6 on the 12.3 box to the version 3.3 that is on my 13.1 box? Please help. Thanks!

  7. #7
    Join Date
    Nov 2009
    Location
    West Virginia Sector 13
    Posts
    16,286

    Default Re: 12.3 with raid10

As I mentioned in the other thread you posted to, it sounds like a bug, so you need to report it to Bugzilla to have it fixed. 12.3 is still supported, so it may generate a solution.

Alternatively you can compile the newer code from source. Of course there is no way to know if something else will break from this operation. Then again, it could be a kernel problem.

In your position I'd do what had to be done to move to 13.1 and then move to the Evergreen repos to have the advantage of a long support life.
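Building from source would go roughly like this (a sketch; the tarball name and URL are from memory, so check kernel.org for the current one first):

Code:
wget https://www.kernel.org/pub/linux/utils/raid/mdadm/mdadm-3.3.tar.gz
tar xf mdadm-3.3.tar.gz
cd mdadm-3.3
make
su -c "make install"   # overwrites the packaged binary; a later package update may replace it again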

  8. #8

    Default Re: 12.3 with raid10

I have spent many hours on this and have not gotten 12.3 on RAID10 to stop resyncing on reboot. I would love to upgrade to 13.x, but the software I need to run on the box, Vicidial, is not supported on that version from a distribution standpoint. If I knew what I was doing I guess I could spend the time trying to make that work, but I think that's more than I can chew.

I even tried a neat little tool called raider. I did the install on a single disk and used raider to convert it to RAID10. It spat some errors out at the very end. Everything worked, but the last drive was removed and I could not add it back. It said it was RAID10 with 3 of 4 drives (UUU_) and did not resync when I rebooted. I just could not get the drive added. I think there is a fix for my problem, since this raider setup did not cause a resync on reboots. I have read in many places about a script that marks the array as clean before it is unmounted; I just, for the life of me, am unable to find that script. I would pay for that script, BTW.
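For the record, re-adding a dropped member should, as I understand it, go something like this (hypothetical device and array names, not what I actually ran):

Code:
mdadm --detail /dev/md0          # confirm which slot is missing
mdadm /dev/md0 --add /dev/sde    # re-add the disk; that member then resyncs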

  9. #9

    Default Re: 12.3 with raid10

    Quote Originally Posted by gogalthorp View Post
As I mentioned in the other thread you posted to, it sounds like a bug, so you need to report it to Bugzilla to have it fixed. 12.3 is still supported, so it may generate a solution.

Alternatively you can compile the newer code from source. Of course there is no way to know if something else will break from this operation. Then again, it could be a kernel problem.

In your position I'd do what had to be done to move to 13.1 and then move to the Evergreen repos to have the advantage of a long support life.
    Done, Bugzilla report that is. Thanks!

  10. #10

    Default Re: 12.3 with raid10

I have 12.3 on RAID10 with no resync happening on reboots. The problem is I had to disable my Intel onboard RAID controller to get it this way and just assemble the array during the install.

I still need to test what happens on drive failures, but so far so good. I am not sure about the boot partitions. I guess I will find out, though.
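The failure test I have in mind looks roughly like this (hypothetical device names; I will try it on the test box first):

Code:
mdadm /dev/md126 --fail /dev/sdd      # simulate a failed member
cat /proc/mdstat                      # should now show [UUU_]
mdadm /dev/md126 --remove /dev/sdd
mdadm /dev/md126 --add /dev/sdd       # triggers a rebuild of that member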

If everything goes well in testing I will then have to consider rebuilding the servers that are still using the RAID controller. I would just live with it, but the Asterisk system on these requires reboots; I have a cron job kicking one off every weekend. Is this too much stress on the disks? It's 4x Samsung 250GB SSD drives in one server and 4x Western Digital RE4 2TB drives in the other. Thoughts, please. Thanks!

