Hello,
first of all, openSUSE: good job, I like it and use it…
Where to start…
I have an old PC assembled from leftover hardware, but it does its job as an NFS and SMB file server and a development PC for web things.
It started out as openSUSE 11.4, which I zypper dup’ed to 12.1 and then 12.2; of course I will do the same for 12.3.
That included going from SysV init to systemd and from GRUB to GRUB2.
Well, today I was reminded by the updater (in KDE :shame:, the box usually runs at runlevel 3 only) that there were various updates, so I cancelled the update and went to a terminal where I ran
zypper up
and trillions of updates were listed … I just pressed y - I had to go.
Came home → kernel panic - the system did not boot, neither with the usual kernel nor in failsafe mode. :’(
But why did this happen?
Facts: bootloader GRUB2, systemd, openSUSE 12.2
# uname -a → Linux 3.4.28-2.20-desktop #1 SMP PREEMPT Tue Jan 29 16:51:37 UTC 2013 (143156b) i686 i686 i386 GNU/Linux
on a hyperthreaded Intel P4
At first I thought the old hard drive had just gone to heaven, or the motherboard.
The hard drive /dev/sda and its partitions are set up as LVM, so the structure is:
LVM volume group “system” with
system/root ‘/’
system/var
system/home
system/swap
- a separate partition /boot where all the GRUB stuff lives
- a single drive /dev/sdb mounted at /media (it’s the data storage which is exported)
Do not ask me why this LVM is set up like it is - why have ‘/’ on an LVM? Well, anyway, I guess that is what the installer proposed. (A quick way to look at such a layout is shown below.)
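For anyone untangling a similar box, this is roughly how such a layout can be inspected from a rescue system; the volume group name “system” and the mount points are simply what my setup happens to use:
pvs     # which partitions belong to LVM
vgs     # which volume groups exist (here: system)
lvs     # the logical volumes inside (root, var, home, swap)
lsblk   # overall picture of sda, sdb, their partitions and the LVs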
The boot failure was that the LVM was not recognised by GRUB2 or the new kernel - apparently the freshly built initrd could not activate the volume group.
Let’s say the kernel update did something wrong (at least as far as GRUB2 and the initrd are concerned).
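In hindsight, one could probably have checked from the rescue system whether the freshly built initrd contained the LVM pieces at all; something along these lines (the initrd name comes from the kernel version above, and the gzip/cpio layout is an assumption about how mkinitrd packs it on 12.2):
ls -l /boot/vmlinuz* /boot/initrd*                              # do the symlinks point at the new kernel?
zcat /boot/initrd-3.4.28-2.20-desktop | cpio -t | grep -i lvm   # are the LVM tools inside the initrd?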
soooooo…
Got into a rescue shell by booting from CD.
From once recovering a soft RAID I remembered something, but LVM is slightly different.
Figured the LVM out with
vgscan
→ the answer was that the volume group name was “system”
then did a
vgchange -a y system (if I remember correctly)
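Put together, the LVM activation was roughly this (vgscan, vgchange and lvscan are the standard LVM2 tools; “system” is just what my volume group is called):
vgscan                 # scan all disks for volume groups
vgchange -a y system   # activate the VG so its logical volumes appear under /dev/mapper
lvscan                 # confirm the LVs are now listed as ACTIVE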
Then I discovered the LVM devices listed in
/dev/mapper/system-root
/dev/mapper/system-var
/dev/mapper/system-home
Mounted them all into /mnt, starting with the root and adding var and home in their proper places,
added the boot partition, which is a separate one, at /mnt/boot,
and finally the drives were all mounted and assembled as they would be in the running system itself.
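Roughly the mount sequence, for reference (the /boot device shown is only an example - check with fdisk -l which partition really holds your /boot):
mount /dev/mapper/system-root /mnt         # root LV first
mount /dev/mapper/system-var  /mnt/var     # then var and home into their places
mount /dev/mapper/system-home /mnt/home
mount /dev/sda1 /mnt/boot                  # the separate boot partition (device name is just an example)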
so what now
well
mount --bind /proc /mnt/proc (I think)
and
mount --bind /dev /mnt/dev
followed by
chroot /mnt/
which took me into my old filesystem on the assembled LVM drive.
Then an issued mkinitrd made it a bootable system again.
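For completeness, the chroot/rebuild part as one sequence - on 12.2 a plain mkinitrd rebuilds the initrd for the installed kernels; binding /sys and regenerating the GRUB2 config are not in my notes, but they should not hurt:
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys                # often needed too, even if I got away without it
mount --bind /dev  /mnt/dev
chroot /mnt /bin/bash
mkinitrd                                   # rebuild the initrd (this is what fixed the LVM problem for me)
grub2-mkconfig -o /boot/grub2/grub.cfg     # optional: refresh the GRUB2 menu
exit                                       # leave the chroot, unmount, reboot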
What a waste of 3 hrs of time - well, but it was fun.
If this happens to you with an MD RAID device, the procedure would be similar:
assemble and inspect the RAID with mdadm, mount it, chroot into it and build a new boot image - roughly like this:
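A minimal sketch of the MD variant (device names like md0, sdb1, sdc1 are only examples):
mdadm --assemble --scan          # or explicitly: mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
cat /proc/mdstat                 # check that the array came up
mount /dev/md0 /mnt              # then bind-mount /proc and /dev, chroot and mkinitrd as above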
Hmmm… but the question remains:
Why did the running update get it wrong?