Data migration from RAID 10 to RAID 1 + SSD

Hi,

I’m currently running a mixed RAID 10 setup with 4 hard disks (2 x 500 GB, 2 x 320 GB). I’ve recently purchased an Intel 520 240 GB SSD and am thinking of cutting down on the number of disks in use. I intend to end up with the SSD running standalone and a ‘media’ RAID 1 array based on the two 500 GB disks. Of course, the transition will be the fun part, and that’s what I want to ask advice on.

First of all, some information about my current setup:

$ cat /proc/mdstat

Personalities : [raid10] 
md0 : active raid10 sdd2[4] sdc1[6] sda1[5] sdb1[1]
      606256128 blocks super 1.2 256K chunks 2 far-copies [4/4] [UUUU]
$ df -h | grep -v tmpfs

Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1        95G   49G   46G  52% /
/dev/md0        569G  336G  205G  63% /mnt/md
# fdisk -l

Disk /dev/sda: 320.1 GB, 320071851520 bytes, 625140335 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000c1830

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   625139711   312568832   fd  Linux raid autodetect

Disk /dev/sdb: 320.1 GB, 320071851520 bytes, 625140335 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000f2c4b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048   625139711   312568832   fd  Linux raid autodetect

Disk /dev/sdc: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000c05f1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048   606261247   303129600   fd  Linux raid autodetect
/dev/sdc2       606261248   608364543     1051648   82  Linux swap / Solaris

Disk /dev/sdd: 500.1 GB, 500106780160 bytes, 976771055 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0007a5e3

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *        2048   201326591   100662272   83  Linux
/dev/sdd2       201326592   826464255   312568832   fd  Linux raid autodetect

Disk /dev/md0: 620.8 GB, 620806275072 bytes, 1212512256 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 1048576 bytes

My current plan is the following (a rough sketch of the corresponding commands follows after the list):

  1. Backup everything to another machine
  2. Go to runlevel 3
  3. Turn off swap (so that /dev/sdc2 is unused)
  4. Remove /dev/sdc1 from the RAID 10 array. Now /dev/sdc is completely available
  5. Create a degraded RAID 1 array (/dev/md1) with /dev/sdc only
  6. Transfer everything from /dev/md0 to /dev/md1
  7. Shutdown, remove /dev/sda and /dev/sdb from the system
  8. Install the SSD drive
  9. Fresh installation of 13.1
  10. Sync any data that is needed from /dev/sdd1 (the former / partition)
  11. Drop all data from /dev/sdd, and add it to the new RAID 1 array

At this point I should be all set. Is there anything I’m missing? Can I do things in a safer/faster way?
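
For concreteness, here is a rough sketch of the commands I have in mind for steps 3–6. Device names are taken from the fdisk output above; the filesystem type (ext4), the /mnt/md1 mount point and the --size value are placeholders rather than anything final.

swapoff /dev/sdc2                                    # step 3: stop using sdc2

mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1   # step 4: md0 keeps running,
                                                     # degraded, on the other 3 disks

# step 5: repartition sdc as one big RAID member (the old ~290 GiB sdc1 is too
# small for the 336 GB currently in use), then build a one-disk RAID 1 with a
# 'missing' slot for the second 500 GB disk to fill later; the --size (in KiB)
# is a round number chosen because sdd is about 1 MB smaller than sdc and the
# array has to fit on it in step 11
echo ',,fd' | sfdisk /dev/sdc
mdadm --create /dev/md1 --level=1 --raid-devices=2 --size=488000000 /dev/sdc1 missing
mkfs.ext4 /dev/md1
mkdir -p /mnt/md1 && mount /dev/md1 /mnt/md1

# step 6: copy everything across, preserving hard links, ACLs and xattrs
rsync -aHAX /mnt/md/ /mnt/md1/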

Sounds OK to me. The key is the backup of the important data. Personally, I’d forget about trying to transform the arrays in place: back up the data, do a fresh install with new partitions, and restore the data onto the new RAID 1.

I went for the degraded-array trick because my backup was over the network and I wanted the restore to finish quicker. All in all, the migration was uneventful, and I’m now done.
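
For anyone doing something similar, here is a rough sketch of what the last two steps can look like after the fresh install. Device names and mount points are assumptions (the old /home is just an example of data worth keeping), and the disks may be renamed once sda and sdb are pulled, so double-check with fdisk -l first.

mkdir -p /mnt/oldroot
mount /dev/sdd1 /mnt/oldroot                 # step 10: pick out whatever is
rsync -aHAX /mnt/oldroot/home/ /home/        # still wanted from the old /
umount /mnt/oldroot

# step 11: repartition sdd as a single RAID member and add it so the mirror
# starts resyncing; this fits because md1 was sized with the smaller sdd in mind
echo ',,fd' | sfdisk /dev/sdd
mdadm /dev/md1 --add /dev/sdd1
cat /proc/mdstat                             # watch the resync progress

# record the array so it assembles by name at boot
mdadm --detail --scan >> /etc/mdadm.conf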

Thanks!