
Thread: Partitions Raid 10

  1. #1

    Default Partitions Raid 10

    I am installing a server on SUSE 11. I would like to use RAID 10. My motherboard (Asus P5QPRO) supports RAID 10, so I set RAID 10 in the BIOS and started installing SUSE 11, but I am assailed by doubts about how to partition my 4 SATA drives.
    In "Custom partitioning (for experts)" in the SUSE installer I see:

    /dev/mapper/isw_bbhgjighdb_Volume10 465.7GB BIOS RAID isw_bbhgjighdb_VOLUME10
    /dev/sda 465.7GB ST3500320AS
    /dev/sdb 465.7GB ST3500320AS
    /dev/sdc 465.7GB ST3500320AS
    /dev/sdd 465.7GB ST3500320AS

    What are the right partitions for RAID 10? Thanks

  2. #2
    Join Date
    Sep 2008
    Location
    Dubai
    Posts
    1,903

    Default Re: Partitions Raid 10

    If you have RAID at the motherboard/BIOS level, you need to set up everything there, and it exposes only the final configuration to the OS as normal drives. You don't really do anything RAID-specific at the OS level. As far as the OS is concerned, it simply sees normal drives and partitions.
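
    If you want to double-check what the installer is actually seeing, something like this from a root shell should show it (a rough sketch; dmraid is what openSUSE uses for these isw_* Intel Matrix sets, and the volume name below is copied from your listing):

        # List the BIOS/firmware RAID sets detected on the member disks
        dmraid -s

        # The assembled volume appears under /dev/mapper
        ls -l /dev/mapper/

        # Partition the mapped volume, not /dev/sda..sdd individually
        fdisk -l /dev/mapper/isw_bbhgjighdb_Volume10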

    Have you properly set up RAID10 at the BIOS level?

  3. #3

    Default Re: Partitions Raid 10

    Thanks syampillai for your reply.
    In the Intel Matrix Storage Manager I set RAID 10 with a 64 KB stripe size. It is very simple: just one click!
    I see:
    RAID VOLUME
    ID 0 - NAME Volume 10 - LEVEL RAID 10 (RAID 0-1) - STRIP 64 KB - SIZE 931.5 GB (I have 4 HDs of 500 GB) - STATUS NORMAL - BOOTABLE YES
    PHYSICAL DISKS: my 4 HDs, 465.7 GB each, TYPE/STATUS Member Disk (0) for each disk.
    Do I need to create the partitions only on
    /dev/mapper/isw_bbhgjighdb_Volume10 465.7GB BIOS RAID isw_bbhgjighdb_VOLUME10?
    Should I create an ext3 partition (and swap, of course)?

  4. #4

    Default Re: Partitions Raid 10

    Once set up at the BIOS level, you should, theoretically, see only one disk at the OS level. In your case, it is showing the RAID disk and the 4 physical disks. I am not familiar with the Matrix Storage Manager.

    Anyway, install the OS on the RAID disk now, ignoring those physical disks /dev/sda, b, c and d.

    Another issue is the size being shown: the BIOS reports the RAID disk as 931.5 GB (which is correct), but the OS shows only 465.7 GB.

    Partitions:
    Sizing of partitions depends on your need. Do you want to set it up for a server environment?
    ext3 is good. As a rule of thumb, swap should be approximately double the available main memory.
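
    For example, on the 931.5 GB volume a plain server layout could look something like this (purely illustrative, not a prescription; adjust sizes to your RAM and workload):

        /boot   ext3   ~150 MB         (kernel and bootloader files)
        swap           ~2x RAM         (the rule of thumb above)
        /       ext3   rest of volume  (everything else)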

  5. #5

    Default Re: Partitions Raid 10

    Quote Originally Posted by mox88
    Thanks syampillai for your reply.
    In the Intel Matrix Storage Manager I set RAID 10 with a 64 KB stripe size. It is very simple: just one click!
    I see:
    RAID VOLUME
    ID 0 - NAME Volume 10 - LEVEL RAID 10 (RAID 0-1) - STRIP 64 KB - SIZE 931.5 GB (I have 4 HDs of 500 GB) - STATUS NORMAL - BOOTABLE YES
    PHYSICAL DISKS: my 4 HDs, 465.7 GB each, TYPE/STATUS Member Disk (0) for each disk.
    Do I need to create the partitions only on
    /dev/mapper/isw_bbhgjighdb_Volume10 465.7GB BIOS RAID isw_bbhgjighdb_VOLUME10?
    Should I create an ext3 partition (and swap, of course)?
    RAID 10 with a 64K chunk size is going to be a dog. I would recommend 256 or 512K chunks, depending on your workload.

    Also, you may want to consider using md devices (software RAID) from within Linux: if you change motherboards or yours breaks, your data may not survive intact. With software RAID you can plug the drives into another machine with at least as recent a kernel and retrieve your data. Also, many "BIOS" RAID solutions are just firmware-driven software of unknown performance and quality.
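
    If you do go the md route, the whole stripe of mirrors can be built in one step with a bigger chunk. A sketch, assuming each drive carries a single Linux RAID partition (sda1..sdd1 are placeholders for whatever partitions you create):

        # Native md RAID 10 over four partitions, 256 KB chunk
        mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=256 \
              /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

        # Verify the array
        cat /proc/mdstat
        mdadm --detail /dev/md0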

  6. #6

    Default Re: Partitions Raid 10

    Not just many: all motherboard-driven RAIDs are in fact software (and CPU) driven; that's why you need drivers for them.

    For real hardware RAID you don't need any drivers.

  7. #7

    Default Re: Partitions Raid 10

    They are still RAID, as they fit the definition; however, they are not "hardware driven", since they remain highly CPU-dependent for any calculations.

  8. #8

    Default Re: Partitions Raid 10

    Thanks for the answers! I have not understood whether I can configure RAID 10 from YaST; could someone kindly explain to me how it is done? I am really confused. Thanks

  9. #9

    Default Re: Partitions Raid 10

    Sorry for my previous post, it is not understandable...
    I am trying to use the SUSE 11 installer to set up RAID 10.
    I understand how to set 2 HDs in RAID 1 or RAID 0, but I don't understand how to set 4 HDs in RAID 10, nor how I should partition the 4 HDDs: 2 HDs in RAID 1 with 3 partitions:
    a - /boot
    b - swap
    c - /

    and the other 2 HDs in RAID 1 with ?????

    Thanks in advance for the help you give me.

  10. #10

    Default Re: Partitions Raid 10

    In YaST you have to do it as RAID 1 then RAID 0; however, doing it via the command line is much more flexible and can even use an uneven number of drives. Here's a link on how to do that with decent settings:

    RAID - MythTV

    Also, you should not do a mirror of stripes, but rather a stripe of mirrors.

    Otherwise, you lose some of the robustness in the tolerance of failure. If you stripe first, losing any single disk kills that entire stripe and leaves you with no redundancy, and losing one disk from each stripe wipes out all your data. However, with a good RAID 10 (a stripe of mirrors) you can lose up to 2 non-adjacent drives, one from each mirror, and keep running.

    To explain, let us assume you have 4 drives for your RAID 10. You have A, B, C and D

    A and B become /dev/md0, a RAID 0
    C and D become /dev/md1, a RAID 0

    /dev/md0 and /dev/md1 become /dev/md2, a RAID 1 of two RAID 0s (0+1)

    You lose drives A and D, and now you have no data.

    Now, my scenario works like this:

    A and B become /dev/md0, a RAID 1
    C and D become /dev/md1, a RAID 1

    /dev/md0 and /dev/md1 become /dev/md2, a RAID 0 of two RAID 1s (1+0)

    You lose drives A and D.

    B contains all the data that A had; D contains all that C had. You are still running.
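
    Put into commands, that stripe of mirrors looks roughly like this (a sketch; sda1..sdd1 stand in for the RAID partitions on drives A, B, C and D):

        # A+B and C+D each become a RAID 1 mirror
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

        # Stripe the two mirrors together: RAID 0 over RAID 1 = 1+0
        mdadm --create /dev/md2 --level=0 --raid-devices=2 --chunk=256 /dev/md0 /dev/md1

        # The filesystem goes on the top-level device
        mkfs.ext3 /dev/md2

    Lose A and D, and md0 keeps running on B while md1 keeps running on C, exactly as described above.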

