Replacing drives in a RAID 1 setup.

I thought that I had better clarify this aspect. The home directory on my machine is on a RAID 1 setup with nothing else mapped to it. At some point, maybe soon, I should replace the drives before they have problems. I don’t have any spare SATA connections, but I could back up the home directory to another disk that is connected.

When I do change them I would like to fit a larger size. In fact I’d probably have to.

What’s the best way to proceed, and how do I go about doing it? I don’t have problems using YaST’s partitioner, but that’s about as far as my knowledge goes.

John

It may not matter much, but for the record, do you mean by “home directory” /home, or the home directory of one of your users?

The actual /home. I’m the only user on it. I have wondered where root’s home is, as in the past I have used the desktop as root. If that isn’t actually on /home, it might be part of a solution - for me anyway, being very much a desktop person.

However, the need to do what I want to do shouldn’t be that uncommon really, and maybe just having /home on it will help. I went this way to gain some redundancy. I had tried a small NAS but found that the cooling in them is often inadequate, so the disks don’t last long. I looked at adding a separate server, but that’s a more complicated solution.

John

Again, it is a very minor detail what “home directory” in fact is, because your question basically comes down to “a partition on RAID 1” (not being the root partition). But it is always wise, especially when we talk about computers, to call things by their proper names/designations/whatever, to avoid any confusion.

And yes, normally /home contains the home directory/ies of the “normal” user(s).
I would prefer not to have read about your abuse of the system by running the desktop as root :’(.

The home directory of root is /root. It is deliberately placed directly in the root partition because, if it were on /home like the other home directories, you would have great problems logging in to the system as root for repair purposes when /home is faulty and/or will not mount for some reason.

Root’s home is /root and it lives on the root partition (/), not on the home partition (/home). It needs to be there for emergency recovery purposes.

Note that logging into a GUI as root can damage your system and is bad security practice. The best advice here is: DON’T DO IT. You never have to; I have not logged into a GUI as root for years. It just is not needed.

Oh yes, and maybe it would be a good thing to explain what kind of RAID you have. There is e.g. hardware RAID, Linux Software RAID, RAID using Logical Volume Manager and maybe more.

E.g., when you have a hardware RAID device with hot-swappable disks, it is easy. Replace one disk and sync the mirror. Replace the other disk and sync the mirror. Done - not one second of downtime.

It’s Linux RAID, so software. I understand that if I plugged in an identical disk it would repair itself; however, I don’t think the identical aspect is possible, and in real terms an increase in size wouldn’t be a bad idea. It might be possible to fit a larger drive one at a time and use partitioning, but then both would need to be grown, which doesn’t sound like a good idea.
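From what I have read, the one-disk-at-a-time route with mdadm would look something like this. A sketch only: the device names (sdc1 for the old member, sde1 for the new disk’s partition) are hypothetical, and the last line assumes an ext4 filesystem.

```shell
# Hypothetical names: sdc1 = old RAID member, sde = new, larger disk.
# 1. Mark the old member failed and remove it from the array:
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1
# 2. Partition the new disk (one "Linux RAID" partition) and add it:
mdadm /dev/md0 --add /dev/sde1
# 3. Wait for the resync to finish before touching the second disk:
cat /proc/mdstat
# 4. Repeat for the other disk, then grow the array and the filesystem:
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0    # ext4; xfs_growfs for XFS
```

These commands need root and a real degraded/resyncing array, so treat them as an outline to check against the mdadm man page rather than a recipe.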

As to using a graphical desktop as root: I had a debate with the KDE boys about that a long time ago, and also, I think, with some SUSE people. The general opinion eventually was that it shouldn’t be a problem, and that when someone is the root user they need to take care whatever they are using. At one point KDE was severely limited for a root user, hence the discussion on the mailing list. Not being able to use the desktop as root is a little anti-KDE really, anyway.

Looking at root’s home, it still does seem to have what would be needed to run the desktop. I don’t think I would ever run a browser as root, so file managers that can also browse the web are probably a bad idea. Apart from that, if there are security issues then KDE just has them, and that is down to KDE, not the user. All such a user is likely to need is Dolphin, an editor and YaST, plus perhaps a few utilities, so in real terms it’s no different to using run-as-root from a normal user’s desktop. I started the debate because no applications of any sort were directly available, which just wastes people’s time linking to them. The desktop was also covered in bomb graphics, just in case someone forgot that they were using it. :)

John

When we were talking in private, I would just switch subjects and leave it to you to do with your system to your liking. But this is a public forum where all sorts of Linux users (noobs and gurus) are reading. For the beginning users we have to stress that logging in as root, certainly in the GUI, is seen as bad practice. And there is no need for it.

I’ve used Linux and KDE for a long time, but a PC for me is a tool for doing other things. I’ve run two servers professionally, one based on DEC and the other on NetWare.

As far as Linux is concerned, I am an odd sort of noob. I only use bash and the underlying utilities if I have to, and I usually manage with a certain amount of web searching when they are needed. As I don’t need to use them often, little is retained.

For this particular task, much of the info that is about is out of date, or so old that things may well have changed. There is some excellent documentation for Leap 42.2 on creating a RAID array, but as far as I can see none at all on repairing one or doing the sort of thing that I want to do. The man page for mdadm is the largest and most complex I have ever seen. I tried to get a bad-block report on the RAID and had an error message stating that there wasn’t a report available. The man page mentions that there can be variable aspects that seem to relate to the storage format. I’d assume, as mine was recreated recently, that it uses the Intel one, but there is rather a lot to look at, so I may be confused. There is also an older standard.

I have managed to determine that the array is degraded, but I am not sure in what way or how badly. There doesn’t seem to be any mechanism for informing a desktop user that this has happened. :’(
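For anyone else checking the same thing, the array state can be read from the command line roughly like this (assuming the array is /dev/md0, as later posts suggest):

```shell
# Quick overview of all md arrays; on a RAID 1, [U_] instead of [UU]
# means one member is missing:
cat /proc/mdstat
# Fuller report: state (clean/degraded), which member is active,
# which slot is removed:
mdadm --detail /dev/md0
```

Both need root for the --detail call; neither changes anything on the array.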

Several hours on the web haven’t helped much. I can do it piecemeal: remove a drive, add a larger one, partition it to match the other, rebuild, then do the same with the other drive and later grow the array, which it seems can take some time. However, the info I have based that on may well be out of date; hard to say. The other point is that I don’t know which disk is degraded, or if it’s both.

One interesting aspect of web content on the subject is that software RAID isn’t dismissed out of hand any more, but most of the comments about the various modes still assume a certain style of usage. :)

John

What I would most probably do is make an extra backup of /home, replace the hardware, set up a new Linux software RAID 1, create the file system on it and restore the backup. Simple and not too error-prone IMHO.

Others may prefer a different route to success.
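In rough commands, that route might look like the sketch below. The device names (sde1, sdf1), the ext4 filesystem and the /home2 backup location are all assumptions to be adjusted:

```shell
# 1. Back up /home to the spare disk (preserving permissions, links, ACLs, xattrs):
rsync -aHAX /home/ /home2/home-backup/
# 2. Unmount and stop the old array, swap the disks, then create a fresh RAID 1:
umount /home
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
# 3. Make a filesystem, mount it and restore:
mkfs.ext4 /dev/md0
mount /dev/md0 /home
rsync -aHAX /home2/home-backup/ /home/
```

The YaST partitioner can do step 2 and 3 graphically; the point is just that the data is rebuilt from the backup rather than from the old mirror.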

It looks like the error occurred on one of the disks 25 days ago. I only power down if I am going to be away for a while, so in real terms that might be 29 days ago. An odd question that may be important: is there any way I can tell how long ago my current install was done?
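(One hedged way to check on an RPM-based system is the package database; the package name "filesystem" here is an assumption, any base package laid down at install time would do:)

```shell
# Install date of a package that arrives with the initial installation:
rpm -qi filesystem | grep -i "install date"
# Or list the most recently and least recently installed packages:
rpm -qa --last | tail -n 5
```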

The disk run time is shown as 1163 days, which ties in with when the initial RAID was built under 12.3 - a bit over 3 years ago. The disks are 3.5" 10k SATA 2 enterprise class, so they should last as long as SCSI; that has been a very, very long time in the past.

The mdadm report seems a little odd to me:


dhcppc0:/home/john # mdadm --detail /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Sat Jun 22 17:08:19 2013
     Raid Level : raid1
     Array Size : 244197184 (232.88 GiB 250.06 GB)
  Used Dev Size : 244197184 (232.88 GiB 250.06 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Feb 27 23:48:27 2017
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : linux:0
           UUID : 81701053:ecc4e234:1aee9f74:12959d39
         Events : 6204376

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       49        1      active sync   /dev/sdd1

I assume the “removed” is from when I reinstalled. I installed Leap twice just a couple of days apart, but the creation date fits in with the initial build on 12.3. Or maybe I am running on one disk.

John

I’ve answered that myself: uname -a dates the kernel to the 19th of Feb. This suggests that the install used a defective RAID array. The install partitioning was a little odd too. It didn’t seem to account for the fact that /home was on a different disk, so it didn’t show that it would be formatted. I guessed that it would be, and it was, so I took no notice. It was completely cleared of what had been on it, which I had backed up anyway.

Using smartctl on the disks was interesting. That showed that one of the RAID drives was faulty, so I am running on one disk. I was not able to run a block scan, but it looks like a typical head crash. Someone did turn the machine off at about the right time while I was using it - these days it’s on a dual socket and someone used the wrong switch.

smartctl on the disk HP supplied with the machine really was interesting. Over 3-plus years, 300 million read errors, all corrected with ECC recovery, so no recorded errors. Write cache disabled, which may mean that it doesn’t have one. They are generally mean about what comes with machines; the machines themselves are usually very good. It has swap, which is never used, plus /var and /tmp on it.
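For reference, the smartctl invocations for this sort of check are along these lines (device name hypothetical, and all need root):

```shell
# Overall health verdict, then the full attribute and error-log tables:
smartctl -H /dev/sdc
smartctl -a /dev/sdc
# Start a long self-test in the background; read its result later:
smartctl -t long /dev/sdc
smartctl -l selftest /dev/sdc
```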

I have a vision of how best to renew the RAID, but time for bed. I’ll run through it in the morning, hoping for comment, as I am not sure about some assumptions I will need to make to keep it simple. ;)

John

As there is such a mass of info about using mdadm, I won’t feel comfortable using it, so I’m inclined to use YaST2, as there are clear instructions on building RAID with that.

I could back up the home directory using Linux’s backup utility, as I understand that will retain permissions and not get confused by links etc.

If I then log in as root and just remove the current /home directory, I’d guess that I will have problems rebooting to a desktop, as home won’t be there any more. I could create a temporary home on the spare disk, but would that need to have anything in it? I’d hope not, but maybe just my user directory?

Then power down, change the disks, reboot, use YaST2 to build the RAID, restore the directory mapping and then restore the files.

I could then do something similar to replace the spare drive. If I ever do any really large scans, it might be a good idea to increase the swap partition’s size at the same time.

I’m not too sure what to make of the SMART report on the disk supplied by HP. Amazing numbers, but reliable in an odd sort of way.

As I mentioned, this is a “vision”. ;)

John

OK, silence. Perhaps I can be helped in stages. :’(

I have found some kernel.org docs on mdadm, but there is much more to look at. One entry is here:

https://raid.wiki.kernel.org/index.php/Replacing_a_failed_drive

It seems to be mainly aimed at entire disks, and the comments about the suggested re-add first are worrying.

Anyway, neglecting that aspect, the first thing is to obtain a copy. There seems to be one very basic method of doing that: cp -a -r as root, with some comments about -r in the man page that I don’t fully understand, but which may mean that it won’t do what I need - a full copy including links, “rights” etc., with links copied correctly.

mv looked interesting, but it deletes the source - exactly what I want to do eventually, but it doesn’t alter mount points. The backup option seems to change names.

I have a partition on the spare drive which is /home2. The idea at the moment is to put a mirror image of /home into that.
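In command form, with the paths as above: cp -a (archive mode) already implies recursion, so the extra -r is redundant, and rsync is a common alternative that also preserves hard links, ACLs and extended attributes.

```shell
# Copy /home into /home2, preserving ownership, permissions, timestamps
# and symlinks (the /. makes cp copy the contents rather than the directory):
cp -a /home/. /home2/
# Or with rsync; -a = archive, -H hard links, -A ACLs, -X xattrs:
rsync -aHAX /home/ /home2/
```

Either way, run it as root so that files owned by other users keep their ownership.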

John

The best way to replace both disks seems to be to move /home, using rsync to copy the files across and also to make a backup.

I expected to find /home specified in fstab, but it isn’t. I need to know how the home directory is specified when it’s on a RAID 1 array.

There is also this entry in fstab:


/dev/disk/by-id/ata-VB0250EAVER_Z3TCNPGW-part4    /home/home2

This isn’t entirely correct: home2 is a partition on a separate disk with other partitions on it. When browsing files at this level, it is shown alongside home rather than under it - in other words, /home2 as far as the disk is concerned.

The general idea is to edit files to make home2 home, remove the current home on the RAID (having disabled it), and reboot or remount. Then remove the old RAID drives, fit the new ones and build a new RAID. Then rsync the data back onto it, convert it back to /home and the other back to home2. The PC should support hotplug, but I am not sure if I will risk it.

Easy if /home were currently in fstab, but it isn’t. :’(
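Some ways to trace how a filesystem is actually mounted when it is not obviously in fstab (these assume findmnt and systemd are available, which they are on Leap):

```shell
# What is mounted at /home, from which source device, with which options:
findmnt /home
# The device and filesystem UUID - fstab may refer to the array this way:
findmnt -o SOURCE,UUID,TARGET /home
# systemd may be mounting it via a unit (generated from fstab or hand-written):
systemctl status home.mount
systemctl list-units --type=mount
```

If home.mount shows up as "generated", it came from fstab after all; if it is a real unit file, that file is where the mount is specified.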

John

I’m not sure if I am being ignored or if no one knows.

The mystery deepens. I tried to find out what was going on with UUIDs. The files in /run/udev/data mention both of the disks, so I tried both. Searches for both produced no signs of anything assigning them to /home, only that the mdadm name was linux:0 and the partition label HomeOnRaid. So I tried searching for linux:0.

The only sign of any assignment was in /var/lib/hardware/udi/. The active disk has two files in there, and the UUID mentioned in mdadm.conf ties in with the active disk. The two files for the active disk are:

hwinfo.res.size = { '3,488394368,512' }
hwinfo.res.diskgeometry = { '61049296,2,4,1' }
hwinfo.hwclasslist = '00002000004001'
hwinfo.sysfsid = '/class/block/md0'
hwinfo.unixdevicelist = { '/dev/md0', '/dev/disk/by-id/md-name-linux:0', '/dev/disk/by-id/md-uuid-81701053:ecc4e234:1aee9f74:12959d39', '/dev/disk/by-label/HomeOnRaid', '/dev/disk/by-uuid/1c299175-94d4-4406-96e0-f2529590ba96', '/dev/md/linux:0' }
hwinfo.unixdevice = '/dev/md0'
hwinfo.baseclass = 262 (0x106)
hwinfo.active = 'unknown'
hwinfo.needed = 'no'
hwinfo.available = 'yes'
hwinfo.configured = 'no'
hwinfo.hwclass = 'disk'
hwinfo.model = 'Disk'
hwinfo.uniqueid = 'oCyc.Fxp0d3BezAE'

and

hwinfo.res.size = { '3,488394368,512' }
hwinfo.res.diskgeometry = { '61049296,2,4,1' }
hwinfo.hwclasslist = '00002000004001'
hwinfo.sysfsid = '/class/block/md127'
hwinfo.unixdevicelist = { '/dev/md127', '/dev/disk/by-id/md-name-linux:0', '/dev/disk/by-id/md-uuid-81701053:ecc4e234:1aee9f74:12959d39', '/dev/disk/by-label/HomeOnRaid', '/dev/disk/by-uuid/1c299175-94d4-4406-96e0-f2529590ba96', '/dev/md/linux:0' }
hwinfo.unixdevice = '/dev/md127'
hwinfo.baseclass = 262 (0x106)
hwinfo.active = 'unknown'
hwinfo.needed = 'no'
hwinfo.available = 'yes'
hwinfo.configured = 'no'
hwinfo.hwclass = 'disk'
hwinfo.model = 'Disk'
hwinfo.uniqueid = 'kT8g.Fxp0d3BezAE'

And the /run/udev/data files for it (there are none for the inactive disk):

S:disk/by-path/pci-0000:00:1f.2-ata-4-part1
S:disk/by-id/scsi-SATA_WDC_WD2500HHTZ-_WD-WXL1E92HTLN7-part1
S:disk/by-id/scsi-350014ee6037c52b6-part1
S:disk/by-id/wwn-0x50014ee6037c52b6-part1
S:disk/by-id/ata-WDC_WD2500HHTZ-04N21V0_WD-WXL1E92HTLN7-part1
S:disk/by-partlabel/primary
S:disk/by-id/scsi-SATA_WDC_WD2500HHTZ-0_WD-WXL1E92HTLN7-part1
S:disk/by-id/scsi-1ATA_WDC_WD2500HHTZ-04N21V0_WD-WXL1E92HTLN7-part1
S:disk/by-id/scsi-0ATA_WDC_WD2500HHTZ-0_WD-WXL1E92HTLN7-part1
S:disk/by-path/pci-0000:00:1f.2-scsi-6:0:0:0-part1
S:disk/by-partuuid/59815c5d-e71a-4cf0-80f5-7d42764c5990
W:24
I:5052080
E:SCSI_IDENT_LUN_ATA=WDC_WD2500HHTZ-04N21V0_WD-WXL1E92HTLN7
E:SCSI_IDENT_LUN_NAA_REG=50014ee6037c52b6
E:SCSI_IDENT_LUN_T10=ATA_WDC_WD2500HHTZ-04N21V0_WD-WXL1E92HTLN7
E:SCSI_IDENT_LUN_VENDOR=WD-WXL1E92HTLN7
E:SCSI_IDENT_SERIAL=WD-WXL1E92HTLN7
E:SCSI_MODEL=WDC_WD2500HHTZ-0
E:SCSI_MODEL_ENC=WDC\x20WD2500HHTZ-0
E:SCSI_REVISION=6A00
E:SCSI_TPGS=0
E:SCSI_TYPE=disk
E:SCSI_VENDOR=ATA
E:SCSI_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
E:ID_SCSI=1
E:ID_VENDOR=ATA
E:ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
E:ID_MODEL=WDC_WD2500HHTZ-0
E:ID_MODEL_ENC=WDC\x20WD2500HHTZ-0
E:ID_REVISION=6A00
E:ID_TYPE=disk
E:ID_WWN=0x50014ee6037c52b6
E:ID_WWN_WITH_EXTENSION=0x50014ee6037c52b6
E:ID_BUS=ata
E:ID_ATA=1
E:ID_SERIAL=WDC_WD2500HHTZ-04N21V0_WD-WXL1E92HTLN7
E:ID_SERIAL_SHORT=WD-WXL1E92HTLN7
E:ID_PART_TABLE_TYPE=gpt
E:ID_PART_TABLE_UUID=3a7151d1-971f-41d1-9f9e-191a7f1731f1
E:ID_PATH=pci-0000:00:1f.2-ata-4
E:ID_PATH_COMPAT=pci-0000:00:1f.2-scsi-6:0:0:0
E:ID_PATH_TAG=pci-0000_00_1f_2-ata-4
E:ID_SCSI_COMPAT=SATA_WDC_WD2500HHTZ-0_WD-WXL1E92HTLN7
E:ID_SCSI_COMPAT_TRUNCATED=SATA_WDC_WD2500HHTZ-_WD-WXL1E92HTLN7
E:ID_SCSI_DI=1
E:ID_SCSI_SN=1
E:ID_FS_UUID=81701053-ecc4-e234-1aee-9f7412959d39
E:ID_FS_UUID_ENC=81701053-ecc4-e234-1aee-9f7412959d39
E:ID_FS_UUID_SUB=7e922aca-a05f-e88b-c71a-b535d9489f90
E:ID_FS_UUID_SUB_ENC=7e922aca-a05f-e88b-c71a-b535d9489f90
E:ID_FS_LABEL=linux:0
E:ID_FS_LABEL_ENC=linux:0
E:ID_FS_VERSION=1.0
E:ID_FS_TYPE=linux_raid_member
E:ID_FS_USAGE=raid
E:ID_PART_ENTRY_SCHEME=gpt
E:ID_PART_ENTRY_NAME=primary
E:ID_PART_ENTRY_UUID=59815c5d-e71a-4cf0-80f5-7d42764c5990
E:ID_PART_ENTRY_TYPE=a19d880f-05fc-4d3b-a006-743f0f84911e
E:ID_PART_ENTRY_NUMBER=1
E:ID_PART_ENTRY_OFFSET=2048
E:ID_PART_ENTRY_SIZE=488394752
E:ID_PART_ENTRY_DISK=8:32
E:UDISKS_MD_MEMBER_LEVEL=raid1
E:UDISKS_MD_MEMBER_DEVICES=2
E:UDISKS_MD_MEMBER_NAME=linux:0
E:UDISKS_MD_MEMBER_ARRAY_SIZE=250.06GB
E:UDISKS_MD_MEMBER_UUID=81701053:ecc4e234:1aee9f74:12959d39
E:UDISKS_MD_MEMBER_UPDATE_TIME=1485888708
E:UDISKS_MD_MEMBER_DEV_UUID=7e922aca:a05fe88b:c71ab535:d9489f90
E:UDISKS_MD_MEMBER_EVENTS=5878496
E:UDISKS_IGNORE=1
G:systemd

S:disk/by-path/pci-0000:00:1f.2-scsi-7:0:0:0-part1
S:disk/by-id/wwn-0x50014ee6037ece43-part1
S:disk/by-id/scsi-SATA_WDC_WD2500HHTZ-0_WD-WX61EB2FH749-part1
S:disk/by-id/ata-WDC_WD2500HHTZ-04N21V0_WD-WX61EB2FH749-part1
S:disk/by-partuuid/c59d3713-9a92-4879-a713-5118a781b049
S:disk/by-id/scsi-0ATA_WDC_WD2500HHTZ-0_WD-WX61EB2FH749-part1
S:disk/by-id/scsi-1ATA_WDC_WD2500HHTZ-04N21V0_WD-WX61EB2FH749-part1
S:disk/by-id/scsi-350014ee6037ece43-part1
S:disk/by-id/scsi-SATA_WDC_WD2500HHTZ-_WD-WX61EB2FH749-part1
S:disk/by-partlabel/primary
S:disk/by-path/pci-0000:00:1f.2-ata-5-part1
W:8
I:3771298
E:SCSI_IDENT_LUN_ATA=WDC_WD2500HHTZ-04N21V0_WD-WX61EB2FH749
E:SCSI_IDENT_LUN_NAA_REG=50014ee6037ece43
E:SCSI_IDENT_LUN_T10=ATA_WDC_WD2500HHTZ-04N21V0_WD-WX61EB2FH749
E:SCSI_IDENT_LUN_VENDOR=WD-WX61EB2FH749
E:SCSI_IDENT_SERIAL=WD-WX61EB2FH749
E:SCSI_MODEL=WDC_WD2500HHTZ-0
E:SCSI_MODEL_ENC=WDC\x20WD2500HHTZ-0
E:SCSI_REVISION=6A00
E:SCSI_TPGS=0
E:SCSI_TYPE=disk
E:SCSI_VENDOR=ATA
E:SCSI_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
E:ID_SCSI=1
E:ID_VENDOR=ATA
E:ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
E:ID_MODEL=WDC_WD2500HHTZ-0
E:ID_MODEL_ENC=WDC\x20WD2500HHTZ-0
E:ID_REVISION=6A00
E:ID_TYPE=disk
E:ID_WWN=0x50014ee6037ece43
E:ID_WWN_WITH_EXTENSION=0x50014ee6037ece43
E:ID_BUS=ata
E:ID_ATA=1
E:ID_SERIAL=WDC_WD2500HHTZ-04N21V0_WD-WX61EB2FH749
E:ID_SERIAL_SHORT=WD-WX61EB2FH749
E:ID_PART_TABLE_TYPE=gpt
E:ID_PART_TABLE_UUID=09cf0368-8ae4-4c22-8d8d-7101056765ed
E:ID_PATH=pci-0000:00:1f.2-ata-5
E:ID_PATH_COMPAT=pci-0000:00:1f.2-scsi-7:0:0:0
E:ID_PATH_TAG=pci-0000_00_1f_2-ata-5
E:ID_SCSI_COMPAT=SATA_WDC_WD2500HHTZ-0_WD-WX61EB2FH749
E:ID_SCSI_COMPAT_TRUNCATED=SATA_WDC_WD2500HHTZ-_WD-WX61EB2FH749
E:ID_SCSI_DI=1
E:ID_SCSI_SN=1
E:ID_FS_UUID=81701053-ecc4-e234-1aee-9f7412959d39
E:ID_FS_UUID_ENC=81701053-ecc4-e234-1aee-9f7412959d39
E:ID_FS_UUID_SUB=2bf178b8-c772-f7a6-0bc7-fabab2e47241
E:ID_FS_UUID_SUB_ENC=2bf178b8-c772-f7a6-0bc7-fabab2e47241
E:ID_FS_LABEL=linux:0
E:ID_FS_LABEL_ENC=linux:0
E:ID_FS_VERSION=1.0
E:ID_FS_TYPE=linux_raid_member
E:ID_FS_USAGE=raid
E:ID_PART_ENTRY_SCHEME=gpt
E:ID_PART_ENTRY_NAME=primary
E:ID_PART_ENTRY_UUID=c59d3713-9a92-4879-a713-5118a781b049
E:ID_PART_ENTRY_TYPE=a19d880f-05fc-4d3b-a006-743f0f84911e
E:ID_PART_ENTRY_NUMBER=1
E:ID_PART_ENTRY_OFFSET=2048
E:ID_PART_ENTRY_SIZE=488394752
E:ID_PART_ENTRY_DISK=8:48
E:MD_DEVICE=md0
E:MD_DEVNAME=linux:0
E:MD_FOREIGN=no
E:MD_STARTED=yes
E:UDISKS_MD_MEMBER_LEVEL=raid1
E:UDISKS_MD_MEMBER_DEVICES=2
E:UDISKS_MD_MEMBER_NAME=linux:0
E:UDISKS_MD_MEMBER_ARRAY_SIZE=250.06GB
E:UDISKS_MD_MEMBER_UUID=81701053:ecc4e234:1aee9f74:12959d39
E:UDISKS_MD_MEMBER_UPDATE_TIME=1488227801
E:UDISKS_MD_MEMBER_DEV_UUID=2bf178b8:c772f7a6:0bc7faba:b2e47241
E:UDISKS_MD_MEMBER_EVENTS=6204374
E:UDISKS_IGNORE=1
G:systemd

John

For anyone else: I found a page that is more recent and in some ways helpful.

http://www.mece.ualberta.ca/~clange/Linux/openSUSE/42.1/instructions/openSUSE42.1_install.html#RAID

Not that much help for me, though. It mentions another way of examining the array. I get this:


dhcppc1:/home/john # mdadm --examine  /dev/sdc1 /dev/sdd1
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 81701053:ecc4e234:1aee9f74:12959d39
           Name : linux:0
  Creation Time : Sat Jun 22 17:08:19 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 488394480 (232.88 GiB 250.06 GB)
     Array Size : 244197184 (232.88 GiB 250.06 GB)
  Used Dev Size : 488394368 (232.88 GiB 250.06 GB)
   Super Offset : 488394736 sectors
   Unused Space : before=0 sectors, after=360 sectors
          State : active
    Device UUID : 7e922aca:a05fe88b:c71ab535:d9489f90

Internal Bitmap : -8 sectors from superblock
    Update Time : Tue Jan 31 18:51:48 2017
       Checksum : 73294ada - correct
         Events : 5878496


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 81701053:ecc4e234:1aee9f74:12959d39
           Name : linux:0
  Creation Time : Sat Jun 22 17:08:19 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 488394480 (232.88 GiB 250.06 GB)
     Array Size : 244197184 (232.88 GiB 250.06 GB)
  Used Dev Size : 488394368 (232.88 GiB 250.06 GB)
   Super Offset : 488394736 sectors
   Unused Space : before=0 sectors, after=360 sectors
          State : clean
    Device UUID : 2bf178b8:c772f7a6:0bc7faba:b2e47241

Internal Bitmap : -8 sectors from superblock
    Update Time : Sat Mar  4 11:27:18 2017
       Checksum : b2cfc538 - correct
         Events : 6204388


   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing, 'R' == replacing)

I’m not sure what this means - notice the state “clean” and the checksums on one of the drives. There is a possibility that my first Leap 42.2 install caused the problem. It’s hard to be sure either way from the point in time when it failed.

Indications from several different places are that the RAID mount to /home should be in fstab; however, it isn’t. I’ve spent a considerable amount of time trying to find out how the mount is achieved, with no luck whatsoever. UUIDs and names just lead to dead ends. I’ve come to the conclusion that I need to know this in order to fully understand what is going on and what I will hopefully be doing, so does anyone know?

That needs to come before I do anything else; it might happen again even if I reinstall. Rebuild-and-grow tutorials vary, and it seems they can lead to problems much later on.

John