OpenSUSE 11 Physical to Linode to VMware /dev/sda missing

Hello everyone,

This is my first time posting here, so I appreciate everyone's help in advance.

I have an openSUSE box that was migrated from a physical machine to Linode without any issues. I was given the task of taking that box off Linode and putting it on a VMware server we have in another location. Simple job, I thought.

I shut the system down and booted into a rescue CD, mounted xvda, and rsync'd the files over to the new ext3 partition. After everything was moved over I edited /boot/grub/menu.lst and /etc/fstab, ran grub-install /dev/sda, and rebooted the machine.
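
For reference, the rescue-CD steps were roughly along these lines; the paths are approximate and /mnt/newroot just stands for wherever the new ext3 partition was mounted:

mount /dev/xvda /mnt/oldroot                # the old root filesystem
rsync -aAXH /mnt/oldroot/ /mnt/newroot/     # copy everything, preserving ACLs, xattrs and hard links
vi /mnt/newroot/boot/grub/menu.lst          # point root= at /dev/sda2
vi /mnt/newroot/etc/fstab
grub-install /dev/sda                       # run with the new root in place (chrooted, or via --root-directory=/mnt/newroot)
reboot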

Here is the issue I'm having: GRUB boots up fine :) but during boot the system complains that it cannot find /dev/sda2 (the / partition), and when it asks whether I want it to try to find the device by UUID, it shows an OLD OLD disk that hasn't been active since the physical machine.

I completely deleted any UUIDs from menu.lst and fstab. Does anyone know what's going on here? Any help or advice is appreciated; I've been googling this for a while now and I need to finish this up. I'm losing my mind on this and I don't know where to turn.

Thanks again! I will post menu.lst and fstab below.

Menu.lst

# Modified by YaST2. Last modification on Fri Jun 25 19:42:38 EDT 2010

default 0
timeout 8
gfxmenu (hd0,1)/boot/message
##YaST - activate

###Don’t change this comment - YaST2 identifier: Original name: linux###
title openSUSE 11.0 - 2.6.25.20-0.7
root (hd0,1)
kernel /boot/vmlinuz-2.6.25.20-0.7-pae root=/dev/sda2 resume=/dev/sda2 splash=silent showopts vga=0x314
initrd /boot/initrd-2.6.25.20-0.7-pae

###Don’t change this comment - YaST2 identifier: Original name: failsafe###
title Failsafe -- openSUSE 11.0 - 2.6.25.20-0.7
root (hd0,1)
kernel /boot/vmlinuz-2.6.25.20-0.7-pae root=/dev/sda2 showopts ide=nodma apm=off acpi=off noresume nosmp noapic maxcpus=0 edd=off x11failsafe vga=0x314
initrd /boot/initrd-2.6.25.20-0.7-pae

fstab:

/dev/sda1 swap swap defaults 0 0
/dev/sda2 / ext3 acl,user_xattr 1 1
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
usbfs /proc/bus/usb usbfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0

If you did not do a full binary copy of the partitions, the UUIDs would not be the same. A UUID is a label (semi-unique) that is generated at the time the partition is created; a full binary copy will preserve it, but an rsync does not, since that is just a file copy. As another possibility, you could add your own labels and mount by label, but again, labels would not be preserved by an rsync. In any case you need to mount the partitions that exist now, whose IDs no longer match the previous incarnation. You could simply use the device names, i.e. sdX# where X is the drive and # is the partition. That is not the best way, since those designations can also change for a lot of reasons, but it may get you operational, and then you can change to a more stable solution.
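
For example, from a rescue shell you can see what identifiers the new partitions actually got, and add a label if you prefer to mount by label (the device names and the "ROOT" label below are only examples):

blkid /dev/sda2           # shows the UUID (and any LABEL) the new filesystem really has
e2label /dev/sda2 ROOT    # give the ext3 root a label of your choosing

Then any one of these fstab lines points at the same partition:

/dev/sda2                 /   ext3   acl,user_xattr   1 1
LABEL=ROOT                /   ext3   acl,user_xattr   1 1
UUID=<value from blkid>   /   ext3   acl,user_xattr   1 1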

On 2015-06-08 03:16, tdale610 wrote:

> I shut the system down and booted into a rescue CD. Mounted xvda and
> rsync’d the files over to the new ext3 partition. I edited the
> /boot/grub/menu.lst file and /etc/fstab after everything was moved over.
> grub-install /dev/sda and rebooted the machine.

You have to boot a rescue system (of the same release), chroot the target,
and run mkinitrd.
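
Roughly like this, assuming the migrated root ended up on /dev/sda2 (adjust the device name to your layout):

mount /dev/sda2 /mnt     # root partition of the migrated system
chroot /mnt
mkinitrd                 # rebuilds the initrd with the storage drivers the new hardware needs
exit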

> Thanks again! I will post menu.lst and fstab below.
>
> Menu.lst

Next time use code tags. The ‘#’ button.


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

On 2015-06-08 04:56, gogalthorp wrote:
>
> If you did not do a full binary copy of the partitions the UUID would
> not be the same UUID is a label (semi-unique) that is generated at the
> time the partition is created. a full binary copy will preserve it but a
> rsync does not since that is just a file copy.

True!

I would simply edit fstab and grub config to point to the new UUIDs, or
create and use labels instead. And then run mkinitrd, because it keeps a
copy of fstab. Alternatively, change the UUIDs in the partitions to
match the originals; it can be done, I think, but I know not how.
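
(If anyone wants to chase that last option: tune2fs has a -U switch for ext filesystems and mkswap has one for swap, so something like the lines below might do it, but I have not tried it. The UUID values are placeholders for the ones in the old fstab/menu.lst.)

tune2fs -U <old-root-uuid> /dev/sda2    # stamp the old UUID onto the new ext3 root
mkswap -U <old-swap-uuid> /dev/sda1     # recreate swap carrying the old UUID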


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

I just finished uploading the openSUSE 11 DVD to the datastore on the VMware server. fdisk -l prints nothing and just returns to the command line, and inside /dev there isn't anything sda related. Any ideas? Like I said, this is VMware ESXi 6.

I wasn't able to edit my last post. I was able to get the drive to show up by changing how the disk is presented to the VM inside of VMware.

I was able to mount /dev/sda1 /mnt and chroot /mnt; however, when I run mkinitrd I get the following returned:

Rescue:/>mkinitrd

Kernel image: /boot/vmlinuz-2.6.25.20-0.7-pae
Initrd image: /boot/initrd-2.6.25.20-0.7-pae

/lib/mkinitrd/setup/11-storage.sh: Line 273: /dev/fd/62: No such file or directory
ls: cannot access /dev/stdin: No such file or directory
ls: cannot access /dev/stderr: No such file or directory
ls: cannot access /dev/stdout: No such file or directory
Fatal storage error. Device /dev/ does not have a driver.
Rescue:/>

Just another update: I loaded the DVD ISO, went to install, and did an automatic repair; it also failed on the mkinitrd. However, I was able to see it boot a little further. See the console screenshot below. Still stuck.
http://i.imgur.com/hwefJiV.png

You need to have access to /dev, /proc and /sys from within the chroot. The usual stanza is:

mount --bind /dev /<your-chroot>/dev
mount --bind /proc /<your-chroot>/proc
mount --bind /sys /<your-chroot>/sys

Still Stuck.

And how should we know what Id “1” in your /etc/inittab is?

A few suggestions if you continue to run into issues…

First,
I'd recommend you do all your repair work on a local version of VMware (like VMware Workstation, or maybe Player) instead of on ESXi. That would decrease complexity and possible frustration. After you complete your repairs you can always re-deploy to ESXi.

Second,
It should probably be completely obvious, but create a new copy of the disk file with each attempt. That way you always have a known backup in whatever state it was in, so you can return to an earlier version if an attempt doesn't go well. Don't use snapshotting, which might create dependencies no matter what "independent" snapshots promise. A sketch of one way to clone the disk is below.
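
On ESXi, one way to make that copy is vmkfstools from the ESXi shell; the datastore and file names below are only placeholders:

vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk /vmfs/volumes/datastore1/myvm/myvm-backup.vmdk

(In Workstation or Player you can simply copy the .vmdk files while the Guest is powered off.)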

Another approach I don't think I've seen you describe:
Have you simply mounted the openSUSE 11.0 ISO (I guess that's what you have) in the Guest's virtual CDROM and booted from it? If you do that, you should be able to try the automatic repair, if it can re-discover your openSUSE install.

I'd also likely mount a GParted Live ISO image in the Guest's virtual CDROM and boot from it.
I'd then at least get to see whether any of the partitions report problems, inspect and verify the physical layout of the disk, and verify how each partition is identified. With that information, you can modify fstab and/or the GRUB config accordingly.
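
From that live environment, a few commands will show what the Guest actually sees; the device names below assume the disk shows up as sda:

fdisk -l /dev/sda      # partition layout as the VM sees it
blkid                  # every partition with its current UUID and LABEL
e2fsck -f /dev/sda2    # optional: force a check of the ext3 root while it is unmounted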

And BTW, something this old is likely very susceptible to compromise. I highly recommend making plans to upgrade or rebuild to at least 13.1, or the current 13.2. If rebuilding is your plan, then you should just do that and forget about getting 11.0 to run.

IMO,
TSU

I've done all this; I've been working on this for the better part of two days now. I'm positive the issue is related to /boot, and mkinitrd is broken: it fails. I noticed that for THIS version, 11, there is a known mkinitrd bug. I suppose I found it!! Not exactly the bug I need to have floating around on the DVD I have to use, but I don't know.

I'm going to keep plugging away at this… if anyone else would like to chime in with an idea or a workaround, that would be great!

Thanks,

Tom

On 2015-06-08 06:06, tdale610 wrote:

> I was able to mount /dev/sda1 /mnt and chroot /mnt however when i run
> mkinitrd i get the following returned:

Did you remember to bind mount dev, sys, proc?


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))