
Thread: Duplicate Raid 1 system disk

  1. #1
    Join Date
    Oct 2008
    Location
    Grenoble, France
    Posts
    19

    Default Duplicate Raid 1 system disk

    I have installed Opensuse 11.1 on a software Raid 1 architecture.
    I would like to clone this config on 4 other PCs.

    It is easy to create the partitions on the new hosts:
    1) save master config
    sfdisk -d /dev/sda >/mnt/SYSTEME/OpenSuse11.1/sda-partitions.txt
    sfdisk -d /dev/sdb >/mnt/SYSTEME/OpenSuse11.1/sdb-partitions.txt

    2) create partitions on the new host
    sfdisk /dev/sda </mnt/SYSTEME/OpenSuse11.1/sda-partitions.txt
    sfdisk /dev/sdb </mnt/SYSTEME/OpenSuse11.1/sdb-partitions.txt

    where /mnt/SYSTEME is an NFS file system.

    But how can I (automatically) duplicate the RAID config with mdadm?
    I need the same UUIDs (found in /etc/mdadm.conf) and the same architecture (as shown in /proc/mdstat). Doing this by hand is a little bit dangerous (it is easy to make mistakes!).

    I had a glance at mkraid and raidtab, but they no longer exist in OpenSuse 11.1.

    Thanks for your help or suggestions.

    Patrick

  2. #2
    Join Date
    Jun 2008
    Location
    Moscow, Russia
    Posts
    3,050
    Blog Entries
    1

    Default Re: Duplicate Raid 1 system disk


  3. #3
    Join Date
    Oct 2008
    Location
    Grenoble, France
    Posts
    19

    Default Re: Duplicate Raid 1 system disk

    Quote Originally Posted by Lazy_Kent
    Yes, /etc/mdadm.conf is optional. But, if I have fully understood the man page, if this file doesn't exist I have to pass specific options so that mdadm can identify the RAID config.
    Moreover, when I back up the master PC, I have this /etc/mdadm.conf file and it will be restored on all the PCs. And the UUIDs could then differ between this file and the RAID1 partitions...
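    One way to spot such a mismatch is to compare the UUIDs reported for the running arrays by `mdadm --detail --scan` against those stored in /etc/mdadm.conf. A minimal sketch - run here against sample files with made-up UUIDs, since the real command needs live arrays:

```shell
# Sample of what `mdadm --detail --scan` prints on the master
# (hypothetical UUIDs, for illustration only):
cat > scan.txt <<'EOF'
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=53e95a5e:92876e43:a2f6b8c1:11223344
EOF

# A restored mdadm.conf carrying a stale UUID from another machine:
cat > mdadm.conf <<'EOF'
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=deadbeef:92876e43:a2f6b8c1:11223344
EOF

# Extract just the UUID fields and compare them:
grep -o 'UUID=[0-9a-f:]*' scan.txt   > scan.uuids
grep -o 'UUID=[0-9a-f:]*' mdadm.conf > conf.uuids
if diff -q scan.uuids conf.uuids >/dev/null; then
    echo "UUIDs match"
else
    echo "UUID mismatch"
fi
```

On a real system you would replace scan.txt with the live output of `mdadm --detail --scan`.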

    One simple way could be a "dd" of /dev/sda to clone all the RAID partitions. It works fine, but restoring on the second PC with
    dd if=mybackupfile of=/dev/sda
    runs for 6 to 7 hours!
    sda is 250 GB but I only use 5 GB for the OS.

  4. #4

    Default Re: Duplicate Raid 1 system disk

    Quote Originally Posted by samontetro
    One simple way could be a "dd" of /dev/sda to clone all the RAID partitions. It works fine, but restoring on the second PC with
    dd if=mybackupfile of=/dev/sda
    runs for 6 to 7 hours!
    sda is 250 GB but I only use 5 GB for the OS.
    Well, so why not just clone those 5 GB?
    I hope you did not format the array as one 250 GB partition for /.
    So you have to be able to "identify" the 5 GB as /dev/sda<x> or /dev/mapper/<vg>-<lv> (for LVM) or whatever - otherwise it would be hard to mount and use them. And those you can copy using dd (dd does not need disks as input/output; it even works with plain files for both of them, to a certain extent).
    If not: there is a program simply called "dump" which should be able to dump a complete filesystem. Unfortunately, I doubt it is contained on rescue media.
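    Since dd works on plain files too, the "clone only the used part" idea can be rehearsed safely before touching /dev/sda. A sketch with made-up sizes - a 10 MB scratch "disk" of which only the first 2 MB matter:

```shell
# Create a 10 MB scratch disk image, then fill its first 2 MB
# with recognizable (random) data, keeping the full size:
dd if=/dev/zero    of=disk.img bs=1M count=10 2>/dev/null
dd if=/dev/urandom of=disk.img bs=1M count=2 conv=notrunc 2>/dev/null

# Clone only the first 2 MB instead of the whole image:
dd if=disk.img of=clone.img bs=1M count=2 2>/dev/null

# The used span is identical on both images:
cmp -n 2097152 disk.img clone.img && echo "first 2 MB identical"
```

On real hardware the same `count=` trick only helps if the used data actually sits at the start of the disk (or you address the partition device, e.g. /dev/sda2, directly).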

    Concerning the UUIDs in /etc/mdadm.conf, I would suggest installing some dummy system on each of the destination machines with the correct disk setup (partitioning, RAID, LVM) but only minimal software content. I hope (!) this is faster than a regular install. Then save the mdadm.conf containing the UUIDs for that machine.
    After this, copy the "data", but before booting the cloned system, mount it from some rescue system and replace the mdadm.conf file with the one saved earlier from the dummy system.

    Please be aware: I never had your problem, so what I explained is pure theory.
    (But as you cannot - yet - really "damage" anything on the other three systems, it should not be dangerous to try out.)

  5. #5
    Join Date
    Oct 2008
    Location
    Grenoble, France
    Posts
    19

    Default Re: Duplicate Raid 1 system disk

    I've made some progress with this duplication of an OpenSuse system on software RAID1. This is the procedure I use, but I still have a question...

    1) Backup:
    - boot with a live cd (knoppix in this case) and become root:
    su -
    - start nfs:
    /etc/init.d/portmap start
    /etc/init.d/nfs-common start
    - mount an NFS partition for backup
    mount -t nfs backuphost:/the/backup/filesystem /mnt
    - back up one disk with dd, compress, and split the backup into 2 GB files. The bs option is important: less than 2 hours with bs=1k, 4 hours with the default value!
    dd if=/dev/sda conv=sync,noerror bs=1k |gzip -9 -c | split -d -b 2000m - /mnt/sda.gz

    2) Restore
    - boot the new PC with a live cd (knoppix in this case) and become root:
    su -
    - start nfs:
    /etc/init.d/portmap start
    /etc/init.d/nfs-common start
    - mount the NFS partition with the backup read-only
    mount -t nfs -o ro backuphost:/the/backup/filesystem /mnt
    - restore /dev/sda with dd (same remark about block size):
    cat /mnt/sda.gz* |gzip -d -c |dd of=/dev/sda bs=1k
    - duplicate /dev/sda on /dev/sdb
    dd if=/dev/sda of=/dev/sdb bs=4k
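    The backup and restore pipelines above can be rehearsed end to end on a plain file instead of /dev/sda. A small round-trip sketch, with 1 MB split pieces standing in for the 2000m used above:

```shell
# Scratch "disk" to back up (4 MB of random data):
dd if=/dev/urandom of=disk.img bs=1M count=4 2>/dev/null

# Backup: dd, compress, split into numbered 1 MB pieces:
dd if=disk.img conv=sync,noerror bs=1k 2>/dev/null \
    | gzip -9 -c | split -d -b 1m - backup.gz.

# Restore: reassemble the pieces, decompress, write back out:
cat backup.gz.* | gzip -d -c | dd of=restore.img bs=1k 2>/dev/null

# The round trip should be lossless:
cmp disk.img restore.img && echo "restore matches original"
```

Random data barely compresses, which is why the pieces here still add up to roughly 4 MB; a mostly-empty 250 GB disk compresses far better, which is what makes this approach practical.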

    3) reboot
    But after rebooting, the RAID is not available. /proc/mdstat shows that only the /dev/sdb? partitions are active. So I have to re-add all /dev/sda? partitions to the RAID:
    mdadm --manage /dev/md0 --re-add /dev/sda2
    mdadm --manage /dev/md1 --re-add /dev/sda6
    etc.
    Looking at /proc/mdstat is also strange, as the recovery percentage can reach 140% for a /dev/md? device!?

    Any idea about these behaviors (RAID not available, odd recovery info)?

    4) With OpenSuse 11 you should edit /etc/udev/rules.d/*-persistent-net.rules to remove the entry for the previous network interface (the one from the backed-up system) and rename the current PC's interface from eth1 back to eth0, so that the network scripts work again.
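    Step 4 can be done with a single sed command. A sketch run against a scratch copy of the rules file (the MAC addresses are made up; the real file carries extra match keys such as DRIVERS and KERNEL, which this simplified copy omits):

```shell
# Scratch copy of a persistent-net rules file with two entries:
cat > 70-persistent-net.rules <<'EOF'
# Net device of the backed-up master PC
SUBSYSTEM=="net", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
# Net device of this PC, which udev renamed to eth1
SUBSYSTEM=="net", ATTR{address}=="66:77:88:99:aa:bb", NAME="eth1"
EOF

# Drop the master's stale entry, then rename eth1 back to eth0:
sed -i -e '/00:11:22:33:44:55/d' \
       -e 's/NAME="eth1"/NAME="eth0"/' 70-persistent-net.rules

# Only this PC's interface remains, now called eth0:
grep 'NAME=' 70-persistent-net.rules
```

On the cloned system the same sed line would target the real file under /etc/udev/rules.d/, with the master's actual MAC address substituted in.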

    Patrick
