How to start a RAID1 with a single disk? (Preferably via YAST)

Hello everyone

I’m currently fiddling around with building a home server to replace my aging Synology DiskStation setup. Having run several Synology units over the past 7 years, I’ve finally decided to ditch them, along with the limitations imposed by their hardware/software.

Therefore I’ve assembled a (relatively) low-wattage system, based on an i3-3250T and an ASUS P8P67 Deluxe motherboard (which I raided from my desktop).

I’ve thrown it into a case with 4 hot-swap bays for the data drives I’m about to run in the server. The main drive is backed up to an external drive, and the most important config files/docs etc. are backed up offsite.

My question stems from my desire to migrate the data from my Synology DS209, currently running a RAID1 with two 3TB drives (WD & Seagate), to my new home server.

I could just move the drives directly to the server’s hot-swap bays, and openSUSE would recognize them and be able to use them. (I did that with an even older DS207, which I then proceeded to wipe before creating a clean SW RAID1.)

Synology, however, formats the drives with 3 partitions (/, /tmp? & data). And the RAID1 from the DS209 is even ext3, which I’d like to upgrade to ext4.

I do not, however, have enough free space to make a complete duplicate of all the data on the drives, so I’m looking for a way to start the RAID1 with one drive.
I’ll yank a drive from the DS209, insert it into the new server, format it, and prep it to become one half of the RAID1 in the server.

Then I’m hoping to copy all the data from the DS209 to the new server’s (50%) RAID1. When that is done, I’ll yank the last drive, plug it into the server, and hopefully build/sync a complete software RAID1 from the 3TB drive already in the server.

I’m of course well aware that this is a risky operation, as there will be only one drive holding the data until the copy & rebuild (sync) is completed, but as I don’t have much choice at the moment, this is what I’m looking for.

I don’t, however, see an option like that in the YaST Partitioner.
It’s doable, and rather painless, through the DiskStation Manager web interface (DSM) on the DS209 (& the 207), and that is actually how I built the RAID1 there in the first place.

Does anyone have any good ideas, or did I lose everyone after the third line? :wink:

Regards

/Bawl

Although what you suggest may work,

since forever the <recommended> way has always been to back up your partition (or disk) using whatever you prefer, then restore to the new disk subsystem.

That involves far fewer “moving parts” and typically guarantees integrity and success in the least amount of time.

TSU

Thank you for taking the time to reply to my question.

I am of course aware that the ultimate way of doing this would be to have a redundant RAID1 (redundant-redundant, lol!) to which I could copy all the data, and then later restore it from there.

Alas, I’m in a pinch and haven’t got sufficient disk space to back up all the data, which is why I’m forced to look at this as a hack/quick-fix/last resort.

I could of course wait until I’m (financially) able to acquire an additional 2x3TB drives to mount in the new server, but that point is somewhere out in the future and I’d like to start messing about with my server sometime before that… :wink:

On 2014-02-27 18:46, Bawl wrote:

> I could of course wait until I’m (financially) able to acquire an
> additional 2x3TB drives to mount in the new server, but that point is
> somewhere out in the future and I’d like to start messing about with my
> server sometime before that… :wink:

You don’t need a full mirror on the destination. You can use a degraded
mirror, with one disk. Once it is populated, you remove one disk from the old
mirror, format it for the new mirror, and let it do the mirroring…
if it fails, you still have one side of the old mirror intact :slight_smile:
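A minimal sketch of the degraded-mirror approach, for anyone following along (not from this thread; the device names, partition numbers, and mount point are placeholders for your own setup):

```shell
# Create a RAID1 array with one real member; the literal keyword "missing"
# tells mdadm to start the array in degraded mode with an empty slot.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# Put a filesystem on the degraded array and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/newdata

# ...copy the data over, then add the second disk once it is freed up:
mdadm --manage /dev/md0 --add /dev/sdc1

# Watch the resync progress.
cat /proc/mdstat
```

Until the resync finishes, the lone member really is the only copy, so this is exactly the calculated risk described above.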


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)

If you are running openSUSE 13.1, why not go with Btrfs for the data store?
In fact, I am building a file server for my home entirely on Btrfs.
openSUSE 13.1 is a fully Btrfs-capable system; I have my server booting from a Btrfs system (“/”) partition.

Btrfs supports RAID 0/1/10 from the start: you can begin with a single device and expand to multiple devices later, as needed, on the fly on a fully working system.
I tested a setup where I installed the server on a single HDD with Btrfs, and after the initial config and update I added a second HDD to the filesystem and converted it to a RAID1 system disk on the fly on the running server, with no reboot and not even a significant slowdown of the system.

The only issue I have seen so far is that for anything but a simple single-device setup, you need to use the CLI.
YaST does not support a multi-device Btrfs setup, or even recognize one (I have a 2-disk RAID1 volume in my test system, with data and all, and YaST shows the disks as empty with no FS, while CLI utils like fdisk/df show the Btrfs volume just fine).
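For reference, the CLI view of such a multi-device volume looks roughly like this (the mount point /mnt/data is just an example):

```shell
# List Btrfs filesystems and their member devices.
btrfs filesystem show

# Show space usage per allocation profile (data/metadata, single/RAID1).
btrfs filesystem df /mnt/data

# Conventional tools see it too, but report only the mounted device node.
df -h /mnt/data
```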

Exactly what I’m looking for, but I can’t seem to find that option in YAST Partition Manager.

Any hints as to what I’m looking for?

That sounds most intriguing, and more or less exactly like what I was looking for, but what about the sticky warning about Btrfs?

https://forums.opensuse.org/showthread.php/478909-Advisory-June-2013-New-Users-beware-of-using-the-BTRFS-filesystem

Also, I’d of course like it to be configurable through YaST, but the CLI could be a way out if no GUI option exists.
(I may not be a complete n00b, and I prefer the CLI for certain things, but I still cling to the GUI crutch when trying out new things.) :wink:

Well, I think that warning is a bit outdated, as Btrfs has been accepted as a stable FS in many distros, including openSUSE 13.1. With the 13.1 release you can use it for a bootable system disk without any workarounds and such; to me that means it is stable and usable. I have been researching and playing with it for the last 5 months and it seems OK. I have also seen other people using it for data storage for a while with no ill effects. The major issue everyone agrees on is that the lack of good utilities makes it less user-friendly and more difficult to manage/recover, but so far I have not seen major failures reported by people testing/using it.
Also, RAID 5/6 is still in testing/dev mode, which is a bit of a downer for me as I am looking forward to using it; however, RAID 0/1/10 is there and seems to be stable per all reports.

Not sure if you have researched Btrfs yet, so here is a brief write-up for you:
It is a filesystem just like any other before it (similar to ZFS, but with some things better implemented, IMHO), with advanced capabilities built in as opposed to bolted on top.
When you use Btrfs you do not need software RAID and/or LVM, as all those add-on systems have been incorporated directly into Btrfs as native features.
It is a CoW (copy-on-write) type filesystem, which makes it self-healing and more robust against corruption, and it allows you to have a transparent RAID setup using one or more hardware devices of mixed types and sizes.
As someone put it: “you can have a RAID setup lumping HDDs, iSCSI and NAS targets, and USB flash devices into a single volume; you would be an idiot, but you can do that.”

You can choose to have the data laid out RAID 0/1/10 style (and soon 5/6) without needing multiple devices or partitions.
Unlike normal RAID, where you need to partition the disks to get a real RAID setup, Btrfs will apply the configuration on a single raw device, simply multiplying the data per the profile used.
I.e., if you choose RAID1 with a single HDD, you will have 2 copies of the data stored in different locations on the disk. That of course does not protect you from device failure, but it will protect you from data corruption due to bitrot or a bad cluster on the disk, as the system uses checksums and metadata internally to verify the data.
By default the metadata is always duplicated regardless of the setup used (DUP on a single device, RAID1 across multiple devices), meaning that even if you do not set up the FS as RAID, the metadata will still be mirrored on the disk.
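A small sketch of the single-device duplication described above (assuming /dev/sdb1 is a spare partition and /mnt a free mount point; mkfs.btrfs duplicates metadata by default on a rotational disk, and newer btrfs-progs also accept -d dup for data):

```shell
# Create a single-device filesystem with duplicated (DUP) metadata;
# this protects against bad sectors, not against whole-disk failure.
mkfs.btrfs -m dup /dev/sdb1

# Mount it and inspect which profile is in use per chunk type.
mount /dev/sdb1 /mnt
btrfs filesystem df /mnt
```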

What I like best about Btrfs is that you can easily add a device to the setup and convert/rebalance the data stored on the volume on the fly, without stopping any work on the system, even if you are expanding the system volume. Yes, you read that correctly: even a mounted system volume can be expanded on a live system without a reboot or halting the system.
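Growing a mounted volume really is a one-liner; a minimal sketch, assuming the underlying partition or device has already been enlarged:

```shell
# Grow the mounted Btrfs filesystem at "/" to fill its (enlarged) device,
# while the system stays up and the volume stays mounted.
btrfs filesystem resize max /
```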

I tried that myself. I had a system set up booting from a single HDD with Btrfs.
openSUSE 13.1 is a fully Btrfs-capable system: during the initial install you have the option to select Btrfs as the default and set up “/” as a bootable Btrfs volume.
So that is what I did.
After the install I had the whole system on sda1, a 1TB Btrfs partition.
When I got the system running and updated, I plugged a second 1TB hard drive into the case (I use a Supermicro 24-disk hot-swap case), which was recognized as “sdb”.

Using

btrfs device add /dev/sdb /

(note that the second argument is the mount point of the Btrfs filesystem, not another device node) and then running the rebalance as

btrfs balance start -dconvert=raid1 -mconvert=raid1 /

it converted the single-drive setup into a 2-device RAID1 on the live system,
with no reboot, and I was even browsing the web for help on the same system at the time (yeah, I built a server with a GUI, so what :slight_smile: )
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices

I’ve found this guide to doing it under Arch Linux, and I suppose it should be doable under openSUSE as well.

https://wiki.archlinux.org/index.php/Convert_a_single_drive_system_to_RAID#Create_the_RAID_device

Does anyone have any immediate reasons why this shouldn’t work?

It looks like ‘mdadm’ which isn’t exactly Arch exclusive… :wink:

I’m gonna start “decrypting” the wiki and see if I can’t get some sense out of it, as a relative CLI RAID n00b… :wink:
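For what it’s worth, the step such guides sometimes gloss over is making the new array persist across reboots. A sketch, assuming the array ended up as /dev/md0 (openSUSE keeps the config in /etc/mdadm.conf):

```shell
# Record the assembled array so it is recreated at boot.
mdadm --detail --scan >> /etc/mdadm.conf

# Inspect the array state and its member disks at any time.
mdadm --detail /dev/md0

# /proc/mdstat shows resync progress, speed, and ETA.
cat /proc/mdstat
```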

Thank you for your long and very informative answer. It seems like you’ve definitely found something smart for your needs, and I might just keep an eye on Btrfs in the future. But for now, the (adjusted) tutorial on the Arch Wiki linked in the post above seems to do exactly what I was asking for, so I’m gonna try that out and see if it can help me get the migration done.

Thank you all for your help.

I hope my question is now solved, and that I will be able to mark the thread as such in the near future.

I didn’t expect to find the solution to what I was looking for on the ‘mdadm’ page on Wikipedia, but what do you know… it was there…

https://en.wikipedia.org/wiki/Mdadm#Creating_an_array

It describes more or less the same thing as the Arch Wiki link I posted above, but it’s right there on the Wikipedia page.

Maybe I should have read that page a bit earlier.

Anyways, I learned something new and overcame yet another fear of “CLI dirty work”. :wink: And maybe someone else did too.

Thank you all for your help.