
Thread: New 13.1 Install on RAID1 Fails

  1. #1

    New 13.1 Install on RAID1 Fails

    I was running 12.3 x86-64 successfully on a RAID1 setup until one of my Seagate drives started returning corrupted data and rendered the system unbootable. The hardware specs are as follows:

    MSI 970A-G46 motherboard
    AMD FX-8120 8 core CPU @ 3.1 GHz (no OC)
    16 GB Patriot VIPER DDR3 Dual 1600MHz
    2 x 2 TB Seagate Barracuda HD
    MSI 6450 Radeon 2 GB DDR3 video

    I took the opportunity to move to 13.1 x86-64. I bought a brand new pair of drives (again, Seagate 2 TB Barracudas) for the install, and then I would migrate the data off the old RAID1 array to the new install.

    Not to be - after the install, the system would not boot, hanging with the message "waiting for device /dev/md125 to appear...... could not find /dev/md125"

    The partition layout is as follows:

    Partition Size Type
    sda1 258.00 MB xFD
    sda2 4.00 GB xFD
    sda3 256.00 GB xFD
    sda4 1.57 TB xFD
    sdb1 258.00 MB xFD
    sdb2 4.00 GB xFD
    sdb3 256.00 GB xFD
    sdb4 1.57 TB xFD

    RAID layout is as follows:

    /dev/md-boot sda1 sdb1 raid1
    /dev/md-swap sda2 sdb2 raid1
    /dev/md-root sda3 sdb3 raid1
    /dev/md-data sda4 sdb4 raid1
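    For reference, a layout like this could be built by hand with mdadm along these lines (a sketch only, not what the installer actually runs; the partition assignments come from the table above, but the array names under /dev/md/ and the --metadata=1.0 choice for the boot array are my assumptions):

    ```shell
    # Sketch: four RAID1 mirrors matching the layout above.
    # Requires root and real (empty) partitions; do not run against disks with data.
    mdadm --create /dev/md/boot --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md/swap --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mdadm --create /dev/md/root --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    mdadm --create /dev/md/data --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4
    ```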

    The install runs without any issue and completes normally. As noted above, upon first boot, it cannot find the root partition. (As a side note, this is the exact same layout scheme that I used on the 12.3 system.)

    If I run the install again, the partitioning section of the installer now lists the partitions as follows:

    Partition Size Type
    sda1 258.00 MB xFD
    sda2 4.00 GB xFD
    sda3 256.00 GB xFD
    sda4 1.57 TB Extended
    sda5 2.01 GB F swap
    sda6 1.56 TB F x83 Extended /
    sdb1 258.00 MB xFD
    sdb2 4.00 GB xFD
    sdb3 256.00 GB xFD
    sdb4 1.57 TB xFD

    If I launch the rescue system from the install DVD immediately after the installation completes, and then manually mount my /dev/md-root, I find that there is no /etc/mdadm.conf file.

    My conclusion at this point is that the installer never constructs /etc/mdadm.conf, and the system therefore fails to assemble (and mount) any of the RAID1 arrays at boot.

    I have been able to reproduce this multiple times with the same results.

    Since I was on a deadline and needed to recover the data from the previous RAID arrays from the 12.3 install (remember, I set that pair of drives aside and was using new drives), I fell back to installing a fresh 12.3 x86-64 instance on the new pair of drives. I used the exact same layout of partitions and arrays for the 12.3 install and had no issues.

    I have noted other posts about 13.1 installations involving RAID configurations that fail to boot, and I didn't want to trust a production load to a "hacked" install (I probably could have constructed an /etc/mdadm.conf via the rescue system immediately after installation completed, but before first boot, as noted above).
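    For anyone hitting the same thing, the manual workaround I'm alluding to would look roughly like this from the DVD's rescue system, before the first boot (a sketch, assuming the arrays assemble cleanly; the array names under /dev/md/ are my guesses, adjust to your setup):

    ```shell
    # Assemble the freshly installed arrays from their superblocks
    mdadm --assemble --scan
    # Mount the installed system (array names assumed)
    mount /dev/md/root /mnt
    mount /dev/md/boot /mnt/boot
    # Write the missing config from what the kernel actually sees
    mdadm --detail --scan >> /mnt/etc/mdadm.conf
    # Rebuild the initrd inside the installed system so it knows about the arrays
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    chroot /mnt mkinitrd
    ```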

    I'd really like to use 13.1 and be current, but this is kind of holding me back.... I'll wait until it's resolved... in 13.2?

  2. #2
    Join Date
    Nov 2009
    Location
    West Virginia Sector 13
    Posts
    16,288

    Re: New 13.1 Install on RAID1 Fails

    Did you report it to Bugzilla? If not, how will it get fixed?

  3. #3

    Re: New 13.1 Install on RAID1 Fails

    Let me first say that I have a similar setup and, because of a
    disk crash, decided on a fresh install as well. The way I approached
    it was:
    1) On the old 12.3 RAID1, add the new disk and let it sync, so that I'm sure
    I have all my data on a second disk again.
    Layout looks like this:
    /dev/sda(b)1 * 2048 210943 104448 fd Linux raid autodetect
    /dev/sda(b)2 210944 3907028991 1953409024 fd Linux raid autodetect

    on raid1 of sda(b)1 I have /boot
    on raid1 of sda(b)2 I have lvm running:
    /dev/system/home:system
    /dev/system/root
    /dev/system/swap
    /dev/system/server

    2) During the install, when it came to system partitioning, I realized
    that I really had to pay attention to the layout compared to what was
    suggested. That means: which partitions does the installer recognize,
    what does the installer assume doesn't exist and substitute its own
    suggestion for, are all the types (swap, filesystem, etc.) there, and do
    they have proper mount points ....

    3) Then the install and reboot went without any problem. No issues at all.


    Looking at the second partition table you posted, it looks like the
    installer did not identify sda2 as the swap partition and consequently
    created its own, which led it to turn sda4 into an
    extended partition.

    I guess what I'm trying to say is: pay close attention to the partition/RAID
    config while in the installer ...
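    A quick way to check that state for yourself is from a console (Ctrl+Alt+F2 during the install, or the rescue system); the device names below are just examples from this thread:

    ```shell
    # Which md arrays the kernel has assembled, and their member disks
    cat /proc/mdstat
    # Partition types: RAID member partitions should show type fd
    fdisk -l /dev/sda /dev/sdb
    # Per-array detail: level, state, and member devices (device name assumed)
    mdadm --detail /dev/md125
    ```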

    Hope this helps.
    4.4.138-59-default, GeForce 9800 GT passive, MEM 8GB,
    AMD FX(tm)-6300 Six-Core , Asrock 960GM/U3S3 FX

  4. #4

    Re: New 13.1 Install on RAID1 Fails

    Actually there is a difference: I'm not using LVM. Mine are separate standalone md arrays...
