booting a UUID image of 12.1

I recently installed opensuse 12.1 (32 bit) on a laptop that I knew was on its last legs, so while I still had the chance to easily do so I made an image of my install, which I’d spent weeks configuring, using fsarchiver. This weekend I started getting IO errors from that HD, so suse no longer boots, but I still have my image. I have tried a manual fsck but it really looks like the drive has had it.

I have used fsarchiver to image and restore Arch and various Debian- and Ubuntu-based distros countless times before. Under those distros all I normally have to do is adjust the grub config files (menu.lst or grub.cfg and device.map) as well as fstab to suit the partition layout of the new machine to get the image to boot, but this doesn’t seem to work for suse.

I’ve had a look through some of the imaging-related threads on this forum and it seems that the correct way to image a suse install is to use yast to change the bootloader to use device names instead of disk UUIDs BEFORE you create your image! I’d re-create my image if I could still boot into my original install to change this, but is there no way to do this afterwards, say by just editing some (yast?) config files, so that I may still be able to get my image to boot? At the moment it just waits for a drive that doesn’t exist. My fstab and grub are set up to reference the correct partitions, of course, but no dice.
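In case it helps anyone following along: a minimal sketch of how you might hunt down every file in a restored image that still names the old drive. The mini directory tree and the UUID below are invented stand-ins; on a real system you would point ROOT at wherever the restored partition is mounted and grep for the UUID that blkid reports for the old install.

```shell
# Assumption: ROOT stands in for the mount point of the restored image.
# The tree and UUID below are fabricated so the sketch is safe to run.
ROOT=./restored-root
mkdir -p "$ROOT/etc" "$ROOT/boot/grub"
printf 'UUID=0000-dead / ext4 defaults 1 1\n' > "$ROOT/etc/fstab"
printf 'kernel /vmlinuz root=UUID=0000-dead\n' > "$ROOT/boot/grub/menu.lst"
# List every file that still references the dead drive and so needs editing:
grep -rl 'UUID=0000-dead' "$ROOT"
```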

On 02/07/2012 12:36 PM, danboid wrote:
>
> suse no longer boots but I still have my image. I have tried a
> manual fsck but it really looks like the drive has had it.

so, by that i conclude that the image you have is on a different (good)
drive/media, correct?

> all I normally have to do is adjust the grub config files (menu.lst or
> grub.cfg and device.map) as well as fstab to suit the partition layout
> of the new machine

do you have a record of the partition layout of the system as it was on the
now-bad drive…like, do you have a copy of the pre-failure output of
any or all of these:


df -h
cat /proc/partitions
cat /etc/fstab
mount
sudo /sbin/fdisk -l
sudo cat /boot/grub/menu.lst

> My fstab and grub are setup to reference the correct partitions

ok good! then let’s see any/all of the info you have on the pre-fail
setup, along with the new but non-booting fstab and menu.lst, please…

copy/paste those back to this thread using the instructions here:
http://goo.gl/i3wnr

caution: there is a good chance i can’t actually ‘fix’ this problem, but
maybe a real guru here can (with the info i’ve asked you to provide)


DD
Read what Distro Watch writes: http://tinyurl.com/SUSEonDW

On 2012-02-07 12:36, danboid wrote:

> I have used fsarchiver to image and restore Arch and various Debian and
> Ubuntu-based distros countless times before. Under those distros all I
> normally have to do is adjust the grub config files (menu.lst or
> grub.cfg and device.map) as well as fstab to suit the partition layout
> of the new machine to get the image to boot but this doesn’t seem to
> work for suse.

I’m not aware of fsarchiver, but to change where you boot from you need to
alter grub.


Cheers / Saludos,

Carlos E. R.
(from 11.4 x86_64 “Celadon” at Telcontar)

DenverD:

Yes, I imaged suse onto a separate, known good external HD.

This is very unlikely to be a grub issue because, as I mentioned in my post, I have successfully imaged installs of Linux that use both grub 1 and 2 hundreds of times before without issue. I have imaged an earlier version of suse before, but that was always to/from the same drives, so I never encountered this issue as the UUIDs would of course match.

I’ve lost the link now, but I found a SLED page that mentioned having to adjust a zipl file. I found a similarly named file in a yast subfolder and I was expecting to see something a bit like fstab, mtab or the grub config files, but the file seemed to make no discernible mention of partition devices or UUIDs, so I had no idea what to change, and the Novell/SLED page didn’t give any details on what you had to modify in there.

It is either this zipl file or something else that is causing mtab to be misconfigured to still use the UUID of my old HD instead of the device names that I have correctly set up in fstab. Seeing as I’m using GRUB2 to dual-boot, update-grub does the rest; at least it does under other distros, just not suse.
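For GRUB2 the stale reference usually ends up in the root= kernel argument inside the generated grub.cfg. A throwaway sketch (the file contents and UUID are made up; on the real system you would inspect the generated config, typically /boot/grub2/grub.cfg on suse, and rerun grub2-mkconfig once fstab is fixed):

```shell
# Fabricated stand-in for a generated grub.cfg; safe to run anywhere.
cat > grub.cfg.example <<'EOF'
linux /boot/vmlinuz-3.1.0 root=UUID=0000-dead resume=/dev/sda1 quiet
EOF
# If the kernel line still boots by UUID, the config needs regenerating:
if grep -q 'root=UUID=' grub.cfg.example; then
    echo "still booting by UUID - regenerate the config"
fi
```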

I’m experienced enough with imaging to have created my own custom boot CD for just that purpose (Windows cloning only, but using Linux tools to do the job):

cim imaging | Free software downloads at SourceForge.net

This is definitely a Suse-specific problem, not me being a cloning or Linux newb, hence I need help from someone who has successfully imaged a UUID-based Suse install onto a different drive/partition map, or at least knows suse/yast well enough to know how to work around this issue. Otherwise it’s a lot of work to recreate the setup I had.

This was the page I found:

New default in SLES/SLED 10 SP1: mount “by Device ID”

I’ve not got my suse install to hand, but I know it didn’t have an /etc/zipl.conf like that doc mentions; instead there was a zipl file stashed away under a sub-dir of /usr/share/yast or something, but as I say it didn’t list any devices or UUIDs.

I don’t have the time now, but IIRC it is possible to change this. Haven’t done any imaging for a long time, I must say.

On 2012-02-07 15:46, danboid wrote:

> I experienced enough with imaging to have created my own custom boot cd
> for just that purpose (Windows cloning only but using Linux tools to do
> the job):

I know how to clone a system from images and make it boot, but I have no
idea what you are talking about.


Cheers / Saludos,

Carlos E. R.
(from 11.4 x86_64 “Celadon” at Telcontar)

Fixed it!

Here’s the necessary voodoo in case anyone else ends up in this situation:

1 - Use a live CD like SystemRescueCd to view/edit the /etc/fstab of your suse install and make sure the partition device names are correct

2 - Use Super Grub Disk’s “Detect any OS” option (I used the one under the floppy tools sub-menu of SRCD) to boot into suse

3 - Follow the resolution (use Yast’s bootloader tool to change the root device names) given for an existing, non-mainframe system at

New default in SLES/SLED 10 SP1: mount “by Device ID”

4 - Run ‘grub-install.unsupported /dev/sda’ (or whatever variation of that command is required for your setup) if you need to re-install grub
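Steps 1 and 4 above can be sketched in command form. Everything below runs against a throwaway demo directory, so it is safe to try; MNT, the UUIDs and the device names are all placeholders to adapt to your own layout, and steps 2 and 3 (Super Grub Disk and YaST’s bootloader tool) remain interactive:

```shell
# MNT stands in for the mount point of the restored root; the UUIDs and
# device names are invented placeholders, so nothing touches a real disk.
MNT=./demo-root
mkdir -p "$MNT/etc"
printf 'UUID=0000-dead /    ext4 defaults 1 1\n' > "$MNT/etc/fstab"
printf 'UUID=0000-beef swap swap defaults 0 0\n' >> "$MNT/etc/fstab"
# Step 1: point fstab at the partitions as the new machine names them:
sed -i -e 's|UUID=0000-dead|/dev/sda2|' -e 's|UUID=0000-beef|/dev/sda1|' "$MNT/etc/fstab"
cat "$MNT/etc/fstab"
# Step 4, on the real system once it boots (do NOT run against the demo dir):
#   grub-install.unsupported /dev/sda
```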

On 02/12/2012 11:36 PM, danboid wrote:
> Fixed it!

good for you! and, thanks for posting the fix details…


DD
Read what Distro Watch writes: http://tinyurl.com/SUSEonDW

I’m in a similar fix.

I backed up my system to /sdb1 using YaST backup. I then created a new 12.1 disk with larger partitions (the reason for the exercise). The disk booted just fine so I started a restore from /sdb1 to my “new” /sda1. The last thing the restore did was run the Boot Loader set-up. At this point there were errors updating the boot records (possibly the MBR) so I cancelled the boot loader operation. The system was rebooted and it stopped with the word GRUB on screen.

I worked my way through the boot process and the problem was obvious once I looked at /etc/fstab and /boot/grub/menu.lst - SuSE now uses /dev/disk/by-id, and the restore had restored the disk IDs of the original disk. So, GRUB could not find my new disk as it was looking for the old one.
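A sketch of that by-id rewrite on a throwaway copy. The by-id paths are invented examples; on the real system you would compare ‘ls -l /dev/disk/by-id’ against the restored /etc/fstab and /boot/grub/menu.lst and substitute either the new disk’s id or a plain device node:

```shell
# Fabricated fstab with old by-id paths; safe to run anywhere.
cat > fstab.example <<'EOF'
/dev/disk/by-id/ata-OLDDISK-part1 swap swap defaults 0 0
/dev/disk/by-id/ata-OLDDISK-part2 /    ext4 acl,user_xattr 1 1
EOF
# Rewrite each old by-id path to the matching plain device node,
# keeping the partition number via a back-reference:
sed -i 's|/dev/disk/by-id/ata-OLDDISK-part\([0-9]\)|/dev/sda\1|' fstab.example
cat fstab.example
```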

Surely YaST restore ought to handle this correctly/better than this? I understand the benefits of using UUIDs, but the restore process ought to get this right, or at least offer an option like "do you want to keep the current UUIDs or the original UUIDs?"

Certainly, for users with no low-level Linux competency, the backup/restore process in YaST ought to be bulletproof.

Now I see what the problem is, I will use danboid’s action list to (try to) put it right.

On 2012-03-07 02:26, dgoadby wrote:
>
> I’m in a similar fix.

Mmm.

Here it is usually better to start a new thread, and perhaps point to the
similar issue thread. Let us judge if it is the same thing, and do not
force us to read it again to compare. :-)

> I backed up my system to /sdb1 using YaST backup. I then created a new
> 12.1 disk with larger partitions (the reason for the exercise). The disk
> booted just fine so I started a restore from /sdb1 to my “new” /sda1.
> The last thing the restore did was run the Boot Loader set-up. At this
> point there were errors updating the boot records (possibly the MBR) so
> I cancelled the boot loader operation. The system was rebooted and it
> stopped with the word GRUB on screen.

Aha.

> I worked my way through the boot process and the problem was obvious
> once I looked at /etc/fstab and /boot/grub/menu.lst - SuSE now uses
> /dev/disk/by-id, and the restore had restored the disk IDs of the
> original disk. So, GRUB could not find my new disk as it was looking
> for the old one.

Actually, grub ignores the names of the disks; it is the kernel that
doesn’t. For grub you have to use references like (hd1,1), which refer to
the BIOS ordering of disks. Perhaps that’s an oversimplification.
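To make that concrete: legacy grub’s (hdX,Y) names are tied to kernel device nodes in /boot/grub/device.map, while the kernel mounts whatever the root= argument names. An illustrative pair of fragments (the device names and entry are examples only, not taken from either poster’s system):

```
# /boot/grub/device.map -- BIOS disk order, as grub sees it
(hd0)   /dev/sda
(hd1)   /dev/sdb

# /boot/grub/menu.lst entry -- grub loads the kernel from (hd0,1),
# i.e. the second partition of the first BIOS disk (counting from 0),
# while the kernel later mounts whatever root= names:
title openSUSE 12.1
    root (hd0,1)
    kernel /boot/vmlinuz root=/dev/sda2
```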

> Surely YaST restore ought to handle this correctly/better than this? I
> understand the benefits of using UUIDs but the restore process ought to
> get this right or at least offer an option like "do you want to keep the
> current UUIDs or the original UUIDs?

Perhaps. :-)

You can file the appropriate bugzilla reports.


Cheers / Saludos,

Carlos E. R.
(from 11.4 x86_64 “Celadon” at Telcontar)