After updating with zypper, the RAID does not mount LVs

Hi,

I did a zypper upgrade of dozens of packages, then ran a verify and fixed a couple of problems it found. During the upgrade I aborted on one of the two or three problems it reported, and when I started zypper again it refused to run because of libstdc++.so.6; I found a copy at /usr/lib64/gcc/x86_64-suse-linux/4.5/libstdc++.so.6.0.14 and tricked zypper into using it.
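For the record, the "trick" was just a symlink; I'm reconstructing the exact paths from memory, so take them as approximate:

# roughly what I did: point the soname zypper wanted at the newer library
ln -sf /usr/lib64/gcc/x86_64-suse-linux/4.5/libstdc++.so.6.0.14 /usr/lib64/libstdc++.so.6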

After rebooting, I get dropped to a (repair filesystem)# console.

This is what I get from the console booting:

Loading required kernel modules
..doneSetting up hostname 'terra'..done
Setting up loopback interface ..done
Starting MD Raid grep: error while loading shared libraries: libpcre.so.0: cannot open shared object file: No such file or directory
..unused
Waiting for udev to settle...
Scanning for LVM volume groups...
  Reading all physical volumes.  This may take a while...
  No volume groups found
  No volume groups found
Activating LVM volume groups...
  No volume groups found
..done
Waiting for /dev/vg00/usr
/dev/vg00/var
/dev/vg00/home .............................. timeout!
Checking file systems...
fsck from util-linux-ng 2.16
Checking all file systems.
[/sbin/fsck.xfs (1) -- /usr] fsck.xfs -a /dev/vg00/usr
/sbin/fsck.xfs: /dev/vg00/usr does not exist
[/sbin/fsck.xfs (1) -- /var] fsck.xfs -a /dev/vg00/var
/sbin/fsck.xfs: /dev/vg00/var does not exist
[/sbin/fsck.xfs (1) -- /home] fsck.xfs -a /dev/vg00/home
/sbin/fsck.xfs: /dev/vg00/home does not exist
..failedblogd: no message logging because /var file system is not accessible
/usr/share/kbd/keymaps/i386/qwerty/us.map.gz is unavailable, using /etc/defkeymap.map instead.
Loading keymap Loading /etc/defkeymap.map
..doneno /usr/sbin -> Numlock off.
Stop Unicode mode
..done
fsck failed for at least one filesystem (not /).
Please repair manually and reboot.
The root file system is already mounted read-write.


This is the content of /etc/fstab:


/dev/md1        /               ext3    acl,user_xattr       1 1
/dev/sda2       none            swap    sw
/dev/sdb2       none            swap    sw
/dev/vg00/usr   /usr            xfs     defaults             1 2
/dev/vg00/var   /var            xfs     defaults             1 2
/dev/vg00/home  /home           xfs     defaults             1 2
proc            /proc                proc       defaults              0 0
sysfs           /sys                 sysfs      noauto                0 0
debugfs         /sys/kernel/debug    debugfs    noauto                0 0
usbfs           /proc/bus/usb        usbfs      noauto                0 0
devpts          /dev/pts             devpts     mode=0620,gid=5       0 0

And this is the output of fdisk -l:


Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x90b7be5d


   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         523     4194304   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2             523         784     2097152   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sdb3             784      243202  1947222104   fd  Linux raid autodetect
Partition 3 does not end on cylinder boundary.


Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x1a0ada96


   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         523     4194304   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2             523         784     2097152   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             784      243202  1947222104   fd  Linux raid autodetect
Partition 3 does not end on cylinder boundary.


Disk /dev/md1: 4294 MB, 4294901760 bytes
2 heads, 4 sectors/track, 1048560 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000


Disk /dev/md1 doesn't contain a valid partition table


I’m desperate; the box is a mail and web server, and I was a fool to upgrade it like this. I need to mount the volumes, even without RAID, so the system can at least serve mail.

Please help.

Thanks.

You omitted a few important things, although I have a gut feeling as to what you did:

  • What openSUSE version was it originally? (Was it 12.1 or 12.2? The error sounds familiar. Also, what you did was an extremely bad idea: you should never, ever symlink to libs or ‘trick’ things. Never.)
  • Which openSUSE version did you change your repos to and try to upgrade to?

The good thing is that it’s unlikely you destroyed any data on the LVM. An upgrade from a DVD/USB stick of the target openSUSE version is most likely going to succeed, or at least you can salvage all the data using the rescue or live media.

Edit:
I still don’t understand how you could possibly think ignoring critical error messages during an OS upgrade was a good idea.

Hi.

Thanks so much for the reply. I feel totally helpless.

I had been thinking of upgrading for a couple of weeks because of a glibc vulnerability I had tested for and confirmed we had. The version is SLES 11. I added an official repo and decided to update. The box is a dedicated server.
I was totally stupid, no other way to put it. But I really didn’t get critical messages during the update, just two or three issues with packages that were irrelevant for my needs and had to be ignored.
I went to https://en.opensuse.org/Package_repositories#Oss and added http://download.opensuse.org/distribution/11.4/repo/oss/
I just did a zypper update; about 700 packages needed to be updated, some were removed, and a few new ones were added. The process went really well, but at one point I aborted on an issue with an irrelevant package. When I launched zypper again, it wouldn’t run because it needed a lower version of libstdc++ (6.0.11); the thing is that 6.0.14 was installed, so I linked it to that and it went on …
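The commands were roughly these (the repo alias is just what I typed, from memory):

zypper ar http://download.opensuse.org/distribution/11.4/repo/oss/ oss-114    # add the 11.4 oss repo
zypper refresh
zypper update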

Now my problem is huge. I’ve been trying to bring up the RAID and mount the logical volumes manually.


(repair filesystem) #  cat /etc/mdadm.conf
CREATE owner=root group=disk mode=0660 auto=yes

MAILADDR opeixe@cronis.net

DEVICE containers partitions
ARRAY /dev/md1 level=raid1 num-devices=2 devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md3 level=raid1 num-devices=2 devices=/dev/sda3,/dev/sdb3
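From what I’ve read, with that file in place the arrays should normally come up with a plain assemble (I may be missing something):

mdadm --assemble --scan    # assemble every ARRAY listed in /etc/mdadm.conf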

In /etc/lvm/backup/vg00:

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing '/sbin/lvextend -l +76288 /dev/vg00/usr'"

creation_host = "s16813999"     # Linux s16813999 2.6.32.59-0.7-default #1 SMP 2012-07-13 15:50:56 +0200 x86_64
creation_time = 1365703431      # Thu Apr 11 20:03:51 2013

vg00 {
        id = "e51mlr-zA1U-n0Of-k3zE-Q5PP-aULU-7rTXhC"
        seqno = 8
        status = ["RESIZEABLE", "READ", "WRITE"]
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0

        physical_volumes {

                pv0 {
                        id = "12pHfn-ibCI-pS8a-YOcc-LVNy-UMyp-lg9tG2"
                        device = "/dev/md3"     # Hint only
                        status = ["ALLOCATABLE"]
                        dev_size = 3894444032   # 1.81349 Terabytes
                        pe_start = 384
                        pe_count = 475395       # 1.81349 Terabytes
                }
        }

        logical_volumes {

                usr {
                        id = "SxlDZT-KYf9-q4jS-i5kz-FzRl-Xttk-ilJLuP"
                        status = ["READ", "WRITE", "VISIBLE"]
                        segment_count = 2

                        segment1 {
                                start_extent = 0
                                extent_count = 1024     # 4 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                        segment2 {
                                start_extent = 1024
                                extent_count = 76288    # 298 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 206848
                                ]
                        }
                }

                var {
                        id = "lTHXSr-wUea-gqLI-n2KX-OBEE-fGRt-JLYWbk"
                        status = ["READ", "WRITE", "VISIBLE"]
                        segment_count = 2

                        segment1 {
                                start_extent = 0
                                extent_count = 1024     # 4 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 1024
                                ]
                        }
                        segment2 {
                                start_extent = 1024
                                extent_count = 50176    # 196 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 156672
                                ]
                        }
                }

                home {
                        id = "853Lhz-J6DX-DTgc-zleK-RHIb-XDOA-tHguo9"
                        status = ["READ", "WRITE", "VISIBLE"]
                        segment_count = 2

                        segment1 {
                                start_extent = 0
                                extent_count = 1024     # 4 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 2048
                                ]
                        }
                        segment2 {
                                start_extent = 1024
                                extent_count = 152576   # 596 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 4096
                                ]
                        }
                }

                srv {
                        id = "7KKWlv-ADsx-WeUB-i8Vm-VJhL-w0nX-5MhmP2"
                        status = ["READ", "WRITE", "VISIBLE"]
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 1024     # 4 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 3072
                                ]
                        }
                }
        }
}

I have brought up the RAID with

mdadm --create --verbose /dev/md3 --level=raid1 --raid-devices=2 /dev/sda3 /dev/sdb3

But when I run mdadm --examine /dev/md3 I get “No md superblock detected”, so I can’t vgcfgrestore the logical volumes.
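From the man page I gather that --examine is meant for the member partitions and --detail for the assembled array, so maybe I was just querying the wrong device:

mdadm --examine /dev/sda3    # superblock of a component device
mdadm --detail /dev/md3      # state of the assembled array

And if I understand the LVM docs, the restore I’m trying to reach would be something like this, using the PV UUID from the backup file above (please correct me if this is wrong):

pvcreate --uuid "12pHfn-ibCI-pS8a-YOcc-LVNy-UMyp-lg9tG2" --restorefile /etc/lvm/backup/vg00 /dev/md3    # recreate the PV with its old UUID
vgcfgrestore vg00     # restore the VG metadata from /etc/lvm/backup/vg00
vgchange -ay vg00     # activate the logical volumes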

My hope is that I can get the system to boot and reinstall whatever services I need to keep it going. I see no reason for the data to be lost.

I need help. I’m lost.

Please, what can I do?

Well, I have to admit that in 20 years of using Linux I have never seen this one before.

What you did was install openSUSE 11.4 packages (a release that hit end of life on 5 November 2012) on a SLES system (the Enterprise version of SUSE, for which you should have a subscription, and which therefore includes the glibc vulnerability patches). Without knowing your experience with Linux systems, I can’t really give any advice other than this: find a person who has extensive experience with recovery and/or Linux systems in general, and ask them to perform a complete backup of the system using a rescue and/or live system.
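If whoever you find wants a starting point: the very first step from the rescue/live system should be raw images of both disks, before anything else is touched. A sketch, assuming some external disk is mounted at /mnt/backup (adjust paths to your setup):

dd if=/dev/sda of=/mnt/backup/sda.img bs=1M conv=noerror,sync    # raw image of the first disk
dd if=/dev/sdb of=/mnt/backup/sdb.img bs=1M conv=noerror,sync    # raw image of the second disk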

Also, we mainly deal with openSUSE here, not SLES, whose forums you can find at http://forums.suse.com/; you should open a help thread there and explain your situation. They may be able to give more specific SLES-related advice about which tools on the SLES 11 installation media you can use to recover the OS, settings, and data, although I imagine they will give you the same advice: seek a professional Linux administrator to deal with it.

For future reference: never mix SLES and openSUSE packages; it will end badly, as it did here.