RAID broke on upgrade

Hi everyone!

I upgraded my system from OpenSUSE 11.1 to 11.4. I used to have a RAID setup that worked perfectly in 11.1 but is now broken. I had four drives: two 1 TB drives and two 2 TB drives. The 1 TB drives were split into two partitions each, with a 900 GB partition from each set up as a RAID 1. The 2 TB drives, on the other hand, were each a single partition, so the entire pair was set up as a RAID 1. When I upgraded to 11.4, the RAID over the 900 GB partitions was re-established just fine, but the 2 TB RAID was not, and I can’t figure out how to remount it. Also, should I be able to mount a single partition of the RAID? I tried this:

sudo mount -t ext3 /dev/sda /home/raid/

but it says that it’s already mounted or that /home/raid/ is busy. I know it’s not already mounted, and I don’t see any reason why /home/raid/ would be busy. But shouldn’t that work? Or does putting 2 drives in a RAID make it so you can’t mount them individually? If so, what’s the point of it? I thought a RAID 1 setup was useful because it basically creates 2 identical drives so you always have a backup. How can you restore it if it breaks?

Inside my partitioner, it shows 3 pertinent drives:

/dev/sda
/dev/sdd
/dev/mapper/pdc_djieaaddgf

All are shown as 1.82 TB drives. /dev/sda and /dev/sdd have type WDC-WD20EVDS-63T, and /dev/mapper/pdc_djieaaddgf has type DM RAID pdc_djieaaddgf. None of them list filesystem types or mount points. How can I get this RAID mounted again without erasing it?

Thank you for your help in advance!

I am not able to assess your problem description fully, so I would suggest that you add some information, in particular more computer output instead of your own conclusions/guesses. E.g. when you say “I know it’s not already mounted”, you had better post the output of

mount

to show what the computer “thinks” instead of what you think.

Also, your opening “I upgraded my system from OpenSUSE 11.1 to 11.4” is not very informative, because there are many, many ways of going from 11.1 to 11.4, given different names like “update”, “upgrade”, “installed”, “installed new with keeping of /home”, etc. Thus, when you think it is important that we know this fact (and why would you have mentioned it otherwise?), tell us what you did.

And provide information about your partitioning:

fdisk -l

and about your to-be-mounted file systems:

cat /etc/fstab

and about your RAID; these are of course sources of information needed by anyone trying to help you. I hope you understand this.
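
For the RAID part I am not an expert, but as a sketch (assuming your arrays are plain Linux-md RAID, the kind the mdadm package manages), something like this should show what the kernel currently knows about them:

# show the active md arrays and which partitions are their members
cat /proc/mdstat

# print a one-line summary for every array the system has assembled
sudo /sbin/mdadm --detail --scan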

On 07/22/2011 06:06 PM, hyperutila wrote:

> I thought a RAID 1 setup was useful because it
> basically creates 2 identical drives so you always have a backup. How
> can you restore it if it breaks?

I can’t help with your current problem except to note that lots of folks
install RAID because they think it is the way to make sure there is a
viable backup…

But in fact, RAID1’s mirror copy is specifically there to ensure two things:

  1. Read reliability while in service… that is, if one disk dies while
    the system is running, the other just keeps going (while you replace
    the failed drive)… reliably…

  2. Read performance can go up markedly… the system can read from both
    drives at the same time, approximately halving the time needed to read
    any data…

HOWEVER, RAID1 is worthless as a backup. Why? Because if file xyz is
corrupted during a write on drive one, the corruption is immediately
mirrored onto drive two…

I’m not aware of any RAID setup which replaces a good backup… that is
not what RAID is for…
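
As I understand it, you can even watch that failover behaviour: /proc/mdstat marks each mirror half with a U, so a health check (a sketch only; I do not run RAID here) would be:

# [UU] means both mirror halves are up; [U_] means one has failed
# while the array keeps serving reads from the survivor
cat /proc/mdstat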

> Thank you for your help in advance!

Sorry, I can’t help at all… I’ve never used RAID (mine is fast enough
and, knock on wood, with a good backup external to the machine, reliable
enough…)


DD
Caveat-Hardware-Software
openSUSE®, the “German Engineered Automobiles” of operating systems!

Sorry about the lack of information. I upgraded by creating an installation DVD and running the installer, but choosing to upgrade instead of doing a clean install. Here’s the output from fdisk -l:

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e7b65

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1           16065  3907024064  1953504000   fd  Linux raid autodetect

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00014911

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *       16065    20980889    10482412+  83  Linux
/dev/sdc2        20980890    31455269     5237190   82  Linux swap / Solaris
/dev/sdc3        31455270  1953520064   961032397+  fd  Linux raid autodetect

Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e7b65

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1           16065  3907024064  1953504000   fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000766b2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1           16065  1922064794   961024365   fd  Linux raid autodetect
/dev/sdb2      1922064795  1953520064    15727635   83  Linux

Disk /dev/md1: 984.1 GB, 984088809472 bytes
2 heads, 4 sectors/track, 240256057 cylinders, total 1922048456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/mapper/pdc_djieaaddgf: 2000.0 GB, 1999999991808 bytes
255 heads, 63 sectors/track, 243152 cylinders, total 3906249984 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e7b65

                     Device Boot      Start         End      Blocks   Id  System
/dev/mapper/pdc_djieaaddgf1           16065  3907024064  1953504000   fd  Linux raid autodetect

Here’s fstab. It got recreated in the upgrade:

/dev/md1             /home/BackupSpace    ext3       defaults              1 2
proc                 /proc                proc       defaults              0 0
sysfs                /sys                 sysfs      noauto                0 0
debugfs              /sys/kernel/debug    debugfs    noauto                0 0
usbfs                /proc/bus/usb        usbfs      noauto                0 0
devpts               /dev/pts             devpts     mode=0620,gid=5       0 0

Finally, here’s the output from mount:

devtmpfs on /dev type devtmpfs (rw,relatime,size=890908k,nr_inodes=222727,mode=755)
tmpfs on /dev/shm type tmpfs (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
/dev/sdc1 on / type ext3 (rw,relatime,errors=continue,user_xattr,acl,commit=15,barrier=1,data=ordered)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
/dev/md1 on /home/BackupSpace type ext3 (rw,relatime,errors=continue,commit=15,barrier=1,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
gvfs-fuse-daemon on /home/dan/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,relatime,user_id=1000,group_id=100)
/dev/sr0 on /media/openSUSE-DVD-x86_640024 type iso9660 (ro,nosuid,nodev,relatime,uid=1000,gid=100,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks)

Thank you so much. Let me know if you need more information to diagnose this.

Thanks for the info. Most people might be satisfied with what you showed, but I am a bit more demanding: I like to see the statements that were used as well. Same amount of copying/pasting, only a few lines more. Now I am wondering whether you really have no entries for the root partition and swap in your fstab, or whether you did a sloppy copy, or listed the wrong file. Riddles!

E.g., see what I do:

henk@boven:~> cat /etc/fstab
/dev/disk/by-id/ata-Hitachi_HDT725032VLA380_VFJ201R23XUEXW-part2 /                    ext4       acl,user_xattr        1 1
/dev/disk/by-id/ata-Hitachi_HDT725032VLA380_VFJ201R23XUEXW-part3 /home                ext4       acl,user_xattr        1 2
/dev/disk/by-id/ata-Hitachi_HDT725032VLA380_VFJ201R23XUEXW-part1 swap                 swap       defaults              0 0
proc                 /proc                proc       defaults              0 0
sysfs                /sys                 sysfs      noauto                0 0
debugfs              /sys/kernel/debug    debugfs    noauto                0 0
usbfs                /proc/bus/usb        usbfs      noauto                0 0
devpts               /dev/pts             devpts     mode=0620,gid=5       0 0
/dev/disk/by-id/ata-Hitachi_HDT725032VLA380_VFJ201R23XUEXW-part5 /mnt/oldsys                    ext4       ro,acl,user_xattr        0 0
/dev/disk/by-id/ata-Hitachi_HDT725032VLA380_VFJ201R23XUEXW-part6 /mnt/oldsys/home                ext4       ro,acl,user_xattr        0 0
henk@boven:~>

which shows that I was a normal user doing this, what I did, and the complete output including the next prompt. All without any further explanation from me needed.

And as for your method of upgrading: there are people here who maintain that it is a working method. But I doubt a bit whether they would still say so for a jump from 11.1 to 11.4.

Oh wow, you’re right. I missed two lines from the output of fstab. Here’s another crack at it:

dan@linux-yxcg:~> cat /etc/fstab
/dev/disk/by-id/ata-WDC_WD10EACS-00D6B0_WD-WCAU40305422-part2 swap                 swap       defaults              0 0
/dev/disk/by-id/ata-WDC_WD10EACS-00D6B0_WD-WCAU40305422-part1 /                    ext3       acl,user_xattr        1 1
/dev/md1             /home/BackupSpace    ext3       defaults              1 2
proc                 /proc                proc       defaults              0 0
sysfs                /sys                 sysfs      noauto                0 0
debugfs              /sys/kernel/debug    debugfs    noauto                0 0
usbfs                /proc/bus/usb        usbfs      noauto                0 0
devpts               /dev/pts             devpts     mode=0620,gid=5       0 0

As for fdisk, this is the command I ran to get the output:

dan@linux-yxcg:~> sudo /sbin/fdisk -l

and for mount:

dan@linux-yxcg:~> mount

The rest is all correct.

Apart from the fact that I hope somebody sheds light on your attempt to “upgrade” from 11.1 to 11.4 by using the Upgrade feature of the 11.4 DVD, and having now seen your info (after two days!), I can say:

According to your fstab, on boot the system has to:
. use /dev/sdc2 as swap;
. mount the root partition / from /dev/sdc1 (which is the same as /dev/disk/by-id/ata-WDC_WD10EACS-00D6B0_WD-WCAU40305422-part1);
. mount /home/BackupSpace from /dev/md1.
And according to the output of the mount command (which shows what is actually mounted), the last two mounts are active. (I guess that the swap is also used correctly, but that is not our concern now.)

In your post #1 you tried to

mount -t ext3 /dev/sda /home/raid/

which I do not understand, because /dev/sda is a whole disk which (according to your fdisk -l output) is partitioned into one partition (sda1). Thus /dev/sda contains no file system of any type (let alone of type ext3), and so you cannot mount it anywhere. It is also not clear to me whether the directory /home/raid exists.
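
If you want to see what is actually holding those disks busy, a sketch (assuming the dmraid package is installed, which is what creates a /dev/mapper/pdc_* name) would be:

# is the kernel md layer claiming any of the disks?
cat /proc/mdstat

# which BIOS/fake-RAID sets has dmraid discovered and activated?
sudo /sbin/dmraid -s

# list all device-mapper devices (the pdc_... one should appear here)
sudo /sbin/dmsetup ls

A disk that is a member of an active md array or device-mapper set counts as busy, which would explain the error message from your mount attempt.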

We now have the fdisk -l output. What you could do now is tell us, for every one of the partitions on the real disks (sda, sdb, sdc and sdd), what you think they are intended for (we already have that for sdc1 and sdc2). That is much more informative than talking about “… two 1 TB drives and two 2 TB drives …”.

Also, we dearly miss any information about those RAID devices. Now, I am not an expert on these, and I hope others will join in here, but looking at the man page for mdadm I would guess that

mdadm --query /dev/sd<partition>

on any of the “Linux raid autodetect” partitions would give information about what the system thinks of the RAID configuration. As I see it now, the system detects only one RAID device (/dev/md1) and uses it (mounts it as ext3 on /home/BackupSpace), but I cannot see out of which pieces that RAID is formed.
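
Still guessing as a non-expert: if the 2 TB mirror really is a Linux-md array with /dev/sda1 and /dev/sdd1 as its members (an assumption based purely on your fdisk -l output), then reading the RAID superblocks is harmless and might already tell the whole story:

# read-only: print the md superblock of each candidate member
sudo /sbin/mdadm --examine /dev/sda1 /dev/sdd1

# if both report the same array, it can usually be reassembled without
# touching the data; /dev/md2 is just a free device name chosen here
sudo /sbin/mdadm --assemble /dev/md2 /dev/sda1 /dev/sdd1

# then mount it (the mount point must exist first)
sudo mount -t ext3 /dev/md2 /home/raid

But mind that pdc_ name: if dmraid has claimed the whole disks, they may have to be released first (dmraid -an deactivates its sets) before mdadm can assemble anything from the partitions.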