Convert install to RAID 1?

Hi all
I have an install of Tumbleweed which is not yet customized (except that I installed zram) and does not have much data to be saved, so I could do a re-install choosing to use both of the (new, identical) SSDs as a mirror. I did a vanilla install to one SSD before adding the other, as I had read a post (which I now cannot find) which led me to think that this might be the best way to make a system where failure of one drive would not affect the system running. The partitions are as follows:


fdisk -l
Disk model: KINGSTON SKC600M
<snip>
Device         Start       End   Sectors  Size Type
/dev/sda1       2048     18431     16384    8M BIOS boot
/dev/sda2      18432   1067007   1048576  512M EFI System
/dev/sda3  495923200 500118158   4194959    2G Linux swap
/dev/sda4    1067008 495923199 494856192  236G Linux filesystem
Partition table entries are not in disk order.

Disk /dev/sdb: 238.47 GiB, 256060514304 bytes, 500118192 sectors
Disk model: KINGSTON SKC600M
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
<snip>

I would be happy to have the swap partition as part of the RAID 1 array too, so as to ensure that failure of one disk does not affect functioning.
I booted from a rescue stick, started the partitioning tool in YaST and clicked “Add RAID”, adding everything into the RAID, but I get the error "System cannot be installed … there is no device mounted at ‘/’". Am I missing something obvious? I see that there are BTRFS balance commands that could be used to convert a data partition to RAID (I have put my understanding of those commands below the fstab), but I would like the whole system to be mirrored. Perhaps I could clone the first disk onto the second, but how would I modify the fstab to work with the RAID array? Currently my fstab is as follows:

~> cat /etc/fstab 
UUID=5a76215b-74bd-479e-9f2f-99a94573a618  /                       btrfs  defaults                      0  0 
UUID=5a76215b-74bd-479e-9f2f-99a94573a618  /var                    btrfs  subvol=/@/var                 0  0 
UUID=5a76215b-74bd-479e-9f2f-99a94573a618  /usr/local              btrfs  subvol=/@/usr/local           0  0 
UUID=5a76215b-74bd-479e-9f2f-99a94573a618  /srv                    btrfs  subvol=/@/srv                 0  0 
UUID=5a76215b-74bd-479e-9f2f-99a94573a618  /root                   btrfs  subvol=/@/root                0  0 
UUID=5a76215b-74bd-479e-9f2f-99a94573a618  /opt                    btrfs  subvol=/@/opt                 0  0 
UUID=5a76215b-74bd-479e-9f2f-99a94573a618  /home                   btrfs  subvol=/@/home                0  0 
UUID=5a76215b-74bd-479e-9f2f-99a94573a618  /boot/grub2/x86_64-efi  btrfs  subvol=/@/boot/grub2/x86_64-efi  0  0 
UUID=5a76215b-74bd-479e-9f2f-99a94573a618  /boot/grub2/i386-pc     btrfs  subvol=/@/boot/grub2/i386-pc  0  0 
UUID=0adf022b-f763-40dd-8f50-fc5a5ad0d738  swap                    swap   defaults                      0  0 
UUID=5a76215b-74bd-479e-9f2f-99a94573a618  /.snapshots             btrfs  subvol=/@/.snapshots          0  0 
UUID=320D-29E0                             /boot/efi               vfat   utf8                          0  2 
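
For reference, the balance commands I mentioned above would, as far as I understand it, look roughly like the sketch below. This is only my reading of the documentation, and it assumes the second SSD gets a matching data partition at /dev/sdb4 (my assumption; the real device names would need checking with lsblk first):

# add the second disk's data partition to the existing btrfs filesystem
btrfs device add /dev/sdb4 /

# convert data and metadata to the RAID1 profile across the two devices
btrfs balance start -dconvert=raid1 -mconvert=raid1 /

# check the result
btrfs filesystem show /
btrfs filesystem df /

As far as I can tell this would mirror the Btrfs filesystem itself, but not the BIOS boot, EFI or swap partitions.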

The background to this is that my previous SSD (not an NVMe) failed only 3 months after I started running Tumbleweed. I didn’t lose any data, but I have still not got back up to ‘productivity’ with my new SSD and want more than a backup strategy. Any thoughts please? :\

You could save your current installation by creating a RAID1 using only one SSD. When complete, rsync your current installation to it, reinstall Grub, then try booting from it. If it works, add the original SSD to the RAID while booted to it. Don’t expect details from me, as I only use EXTx and FAT32 filesystems for Linux. :stuck_out_tongue:
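
In the very roughest outline, and only for the mdadm side that I actually use (the device names below are examples you would have to adapt to your own layout):

# create a RAID1 with one slot deliberately left empty; "missing" keeps a place for the current SSD
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb4

# filesystem of your choice on the array, then copy the running system across
mkfs.ext4 /dev/md0        # I would use ext4; you would presumably use mkfs.btrfs
mount /dev/md0 /mnt
rsync -aAXH --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/mnt/*"} / /mnt

# chroot into /mnt, rebuild its fstab, reinstall grub2, then try booting from the array
# once it boots on its own, complete the mirror by adding the old SSD's partition
mdadm /dev/md0 --add /dev/sda4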

Forgot to mention: in addition to installing Grub, totally rebuild fstab with the new filesystem UUIDs.
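
For the new UUIDs, something along these lines would do (again only an example device name):

blkid /dev/md0      # UUID of the filesystem on the new array
blkid               # or list everything and pick out what the copy needs

and put those into the copy's /etc/fstab before trying to boot it.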

My RAID1 on this PC is actually 6 RAID devices for /home and various data types, across 12 partitions on 2 HDDs, with the OS itself on a 120GB SSD, on an 18GB filesystem. When an SSD croaks, I swap in the last clone I made of it.

I didn’t think of that… I suppose the first step that may not work is making a RAID1 with only one disk? Since I posted I have googled some more and see that BTRFS RAID can handle disks that differ, which I understand is what you are doing.

A 2-disk RAID1 with a failed disk works. :stuck_out_tongue:

What I am missing here is which RAID-forming firmware/software is meant. There are many products that can create RAID. I have heard of (not knowing details for most):

  • firmware RAID (probably not the subject here), which is unknown to the operating system because the OS sees only the resulting devices;
  • Linux Software RAID with mdadm;
  • LVM with RAID;
  • and doesn’t Btrfs have RAID functionality within the file system?

So as long as it is not clear which of these is to be used, I guess that much will remain unclear to many here.

And going back to this post, I read that BTRFS uses the same UUID for the RAID array as for (one of?) the original disk(s)? So would I have to alter the UUID references in my fstab?
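
I suppose that, once both devices were in the filesystem, I could check with something like this (device names are my guesses):

btrfs filesystem show /          # should list every member device under one filesystem UUID
blkid /dev/sda4 /dev/sdb4        # both should report that same UUID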

Thanks for clarifying my thinking, Henk.
*Firmware RAID:- Not an option for me (this is a mini PC with no further option of hardware expansion), so yes, the first option can be discarded.

*Linux Software RAID with mdadm:- This does need to be considered as an option. Since I posted I read this site https://www.complang.tuwien.ac.at/anton/btrfs-raid1.html headed “BTRFS and RAID”, which suggests using an md RAID1 for the swap partition (I have sketched what I think that would look like at the end of this post).

*LVM with RAID:- Clarification of my thinking here would be welcome…:question:. I understand that BTRFS manages its subvolumes (which I have in my installation) similarly to what an ext4 filesystem running on LVM can do? I see suggestions that putting RAID on top of LVM negates some of the advantages of LVM, such as having disk space re-allocated (say between /var and /opt) as needed. I wonder whether, if I put RAID on top of my installation, I would have similar problems?

*Btrfs RAID functionality within the file system:- Again, further clarification for me would be welcome. My understanding is that BTRFS can manage RAID 1 well. I really do want to stick with BTRFS (I have relied on snapshots to get me out of trouble on my other PC, and it has proved rock solid on my Netgear NAS for years). But now that I have thought about it, I am not sure whether BTRFS RAID has an advantage compared to md RAID in my scenario (two identical disks, with my aim being to mirror the BIOS boot, the swap and also the Linux filesystem, which carries a BTRFS filesystem).
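
As to the md RAID1 for swap suggested on that page, my reading of the man pages is that it would be something along these lines (device names for the two swap partitions are my guesses, and mdadm --create overwrites whatever is on them):

# build a two-device md mirror out of the two swap partitions
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# put swap on the mirror and enable it
mkswap /dev/md1
swapon /dev/md1

# record the array so it is assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf

The swap entry in fstab would then have to point at the new swap UUID (or at /dev/md1) instead of the old partition.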

I only tried to avoid a situation where people posting in this thread each have a specific RAID implementation in their minds, while assuming that the others posting here have the same implementation in mind. That is possibly not the case and can lead to endless confusion.

This possible confusion was of course started by the OP not mentioning which implementation he is asking about and just saying RAID 1.

Thus my advice to specify what the subject of this thread is.

Now that my thinking has moved past the “I can just press the button marked RAID” stage (I assumed RAID on installation of Tumbleweed is managed by BTRFS, and I further assumed that ‘everyone must be using it’:dont-know:), I have been reading various postings old and new on the internet. It seems to me that RAID managed by BTRFS has the advantage of better data integrity protection compared to RAID managed by mdadm (error checking by BTRFS would be able to correct an incidence of bitrot on the fly?). I am not sure if the ability of BTRFS RAID to work with disks of different sizes applies to a RAID1 mirror setup. As for me, I am reacting to an unexpected complete drive failure and looking for protection in case lightning strikes twice, so these advantages were not what I was looking for. I am aware that RAID would not protect me from all causes of drive failure.
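
If I have understood the bitrot point correctly, the checking itself is done by a scrub, something along these lines (using my root mount point as the example):

btrfs scrub start /       # read every block and verify checksums; with a RAID1 profile a bad copy is rewritten from the good mirror
btrfs scrub status /      # check progress and error counts afterwards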

On the other hand, recovery in the event of a drive failure seems to be easier under RAID managed by mdadm; there is documentation on the internet for this older RAID implementation. Also it seems that with root on RAID by BTRFS the system would not boot until the array was mounted in degraded mode, and that the failed disk would then best be replaced without further rebooting? So now I am thinking rather that I should look at running a TW installation on an mdadm RAID implementation, using the BTRFS filesystem with its advantages of snapshots and subvolumes (thus taking advantage of its volume-manager abilities). I already have (I believe!) the advantages of bitrot protection through BTRFS RAID on my NAS (which is just as well, as the hard drive on my original desktop is thirteen years old…).
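
From what I have read so far, the recovery steps I am comparing look roughly like this (my own summary with example device names, not something I have tested, so to be treated with caution):

# mdadm: after fitting a replacement disk and partitioning it, add the new partition back into the array
mdadm /dev/md0 --add /dev/sdc4

# btrfs RAID1: mount the surviving device with the degraded option, then replace the missing member
mount -o degraded /dev/sda4 /mnt
btrfs filesystem show /mnt                              # note the devid reported as missing
btrfs replace start <devid-of-missing> /dev/sdc4 /mnt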