I recently installed openSUSE 12.1 (32-bit) on a laptop that I knew was on its last legs, so whilst I still had the chance I made an image of my install, which I’d spent weeks configuring, using fsarchiver. This weekend I started getting IO errors from that HD and so suse no longer boots, but I still have my image. I have tried a manual fsck but it really looks like the drive has had it.
I have used fsarchiver to image and restore Arch and various Debian- and Ubuntu-based distros countless times before. Under those distros all I normally have to do is adjust the grub config files (menu.lst, or grub.cfg and device.map) as well as fstab to suit the partition layout of the new machine to get the image to boot, but this doesn’t seem to work for suse.
I’ve had a look through some of the imaging-related threads on this forum and it seems that the correct way to image a suse install is to use YaST to change the bootloader to use device names instead of disk UUIDs BEFORE you create your image! I’d re-create my image if I could still boot into my original install to change this, but is there no way to do this afterwards, say by editing some (YaST?) config files, so that I can still get my image to boot? At the moment it just waits for a drive that doesn’t exist. My fstab and grub are set up to reference the correct partitions, of course, but no dice.
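To make the question concrete, this is the kind of after-the-fact edit I’m hoping is possible from a live CD, sketched here on a sample fstab line with a made-up UUID and device name (not my real ones):

```shell
#!/bin/sh
# Sketch only: rewrite a root mount that uses a UUID so it uses a plain
# device name instead. The UUID and /dev/sda2 below are invented examples;
# on a real restore you would edit /etc/fstab inside the restored image.
old='UUID=1234abcd-5678-ef90-1234-abcdef012345 / ext4 acl,user_xattr 1 1'
printf '%s\n' "$old" | sed 's|^UUID=[^ ]*|/dev/sda2|'
# -> /dev/sda2 / ext4 acl,user_xattr 1 1
```

The same substitution would presumably need to go into the grub config too before the image could boot, which is the part that doesn’t seem to take effect under suse.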
On 02/07/2012 12:36 PM, danboid wrote:
>
> suse no longer boots but I still have my image. I have tried a
> manual fsck but it really looks like the drive has had it.
So, by that I conclude that the image you have is on a different (good)
drive/media, correct?
> all I normally have to do is adjust the grub config files (menu.lst or
> grub.cfg and device.map) as well as fstab to suit the partition layout
> of the new machine
Have you recorded the partition layout of the system as it was on the
now-bad drive? Like, do you have a copy of the pre-failure output of
any or all of these:
> I have used fsarchiver to image and restore Arch and various Debian and
> Ubuntu-based distros countless times before. Under those distros all I
> normally have to do is adjust the grub config files (menu.lst or
> grub.cfg and device.map) as well as fstab to suit the partition layout
> of the new machine to get the image to boot but this doesn’t seem to
> work for suse.
I’m not aware of fsarchiver, but to change where you boot from you need to
alter grub.
–
Cheers / Saludos,
Carlos E. R.
(from 11.4 x86_64 “Celadon” at Telcontar)
Yes, I imaged suse onto a separate, known good external HD.
This is very unlikely to be a grub issue because, as I mentioned in my post, I have successfully imaged installs of Linux that use both grub 1 and 2 hundreds of times before without issue. I have imaged an earlier version of suse before, but that was always to/from the same drives, so I never encountered this issue as the UUIDs would of course match.
I’ve lost the link now, but I found a SLED page that mentioned having to adjust a zipl file. I found a similarly named file in a YaST subfolder, and I was expecting to see something a bit like fstab, mtab or the grub config files, but the file seemed to make no discernible mention of partition devices or UUIDs, so I had no idea what to change, and the Novell/SLED page didn’t give any details on what you had to modify in there.
It is either this zipl file or something else that is causing mtab to be misconfigured to still use the UUID of my old HD instead of the device names that I have correctly set up in fstab. Seeing as I’m using GRUB2 to dual-boot, update-grub does the rest, or at least it does under other distros, just not suse.
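For what it’s worth, this is roughly how I’ve been spotting the stale references: a throwaway check run over the config files, shown here on sample lines (the by-id name is invented, not my real drive):

```shell
#!/bin/sh
# Count lines that still reference the old disk via /dev/disk/by-*
# instead of a plain device name. Sample lines stand in for real
# fstab/mtab contents; the by-id string is made up.
sample='/dev/disk/by-id/ata-OLDDRIVE_SN123-part2 / ext4 defaults 1 1
/dev/sda1 swap swap defaults 0 0'
printf '%s\n' "$sample" | grep -c '/dev/disk/by-'
# -> 1
```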
I’m experienced enough with imaging to have created my own custom boot CD for just that purpose (Windows cloning only, but using Linux tools to do the job):
This is definitely a suse-specific problem, not me being a cloning or Linux newb, hence I need help from someone who has successfully imaged a UUID-based suse install onto a different drive/partition map, or at least knows suse/YaST well enough to know how to work around this issue. Otherwise it’s a lot of work to recreate the setup I had.
I’ve not got my suse install to hand, but I know it didn’t have an /etc/zipl.conf like that doc mentions; instead there was a zipl file stashed away under a sub-dir of /usr/share/yast or something, but as I say it didn’t list any devices or UUIDs.
> I’m experienced enough with imaging to have created my own custom boot
> CD for just that purpose (Windows cloning only, but using Linux tools to
> do the job):
I know how to clone a system from images and make it boot, but I have no
idea what you are talking about.
–
Cheers / Saludos,
Carlos E. R.
(from 11.4 x86_64 “Celadon” at Telcontar)
I’m in a similar fix.
I backed up my system to /sdb1 using YaST backup. I then created a new 12.1 disk with larger partitions (the reason for the exercise). The disk booted just fine, so I started a restore from /sdb1 to my “new” /sda1. The last thing the restore did was run the Boot Loader set-up. At this point there were errors updating the boot records (possibly the MBR), so I cancelled the boot loader operation. The system was rebooted and it stopped with the word GRUB on screen.
I worked my way through the boot process and the problem was obvious once I looked at /etc/fstab and /boot/grub/menu.lst - SuSE now uses /dev/disk/by-id and restore had restored the disk IDs of the original disk. So, GRUB could not find my new disk as it was looking for the old one.
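The repair I’m going to attempt, sketched on a sample line (the by-id string is invented, not my actual drive’s): rewrite each /dev/disk/by-id reference in fstab and menu.lst back to a plain device node, keeping the partition number from the "-partN" suffix.

```shell
#!/bin/sh
# Sketch: map a by-id mount entry back to /dev/sdaN. The by-id name is
# made up; a real run would edit /etc/fstab and /boot/grub/menu.lst.
line='/dev/disk/by-id/ata-SAMSUNG_HM160HI_S123-part1 / ext4 acl,user_xattr 1 1'
printf '%s\n' "$line" | sed 's|^/dev/disk/by-id/[^ ]*-part\([0-9]*\)|/dev/sda\1|'
# -> /dev/sda1 / ext4 acl,user_xattr 1 1
```

This assumes everything was on the one disk that is now sda; after editing, the boot loader still has to be reinstalled so the MBR matches.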
Surely YaST restore ought to handle this better than this? I understand the benefits of using UUIDs, but the restore process ought to get this right, or at least offer an option like "do you want to keep the current UUIDs or the original UUIDs?"
Certainly, for users with no low-level Linux competency, the backup/restore process in YaST ought to be bulletproof.
Now I see what the problem is, I will use danboid’s action list to try and put it right.
On 2012-03-07 02:26, dgoadby wrote:
>
> I’m in a similar fix.
Mmm.
Here it is usually better to start a new thread, and perhaps point to the
similar-issue thread. Let us judge whether it is the same thing, and do not
force us to read it again to compare.
> I backed up my system to /sdb1 using YaST backup. I then created a new
> 12.1 disk with larger partitions (the reason for the exercise). The disk
> booted just fine so I started a restore from /sdb1 to my “new” /sda1.
> The last thing the restore did was run the Boot Loader set-up. At this
> point there were errors updating the boot records (possibly the MBR) so
> I cancelled the boot loader operation. The system was rebooted and it
> stopped with the word GRUB on screen.
Aha.
> I worked my way through the boot process and the problem was obvious
> once I looked at /etc/fstab and /boot/grub/menu.lst - SuSE now uses
> /dev/disk/by-id and restore had restored the disk IDs of the original
> disk. So, GRUB could not find my new disk as it was looking for the old
> one.
Actually, grub ignores the names of the disks; it is the kernel that
doesn’t. For grub you have to use references like (hd1,1), which refer to
the BIOS ordering of disks. Perhaps an oversimplification.
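For instance, a legacy grub menu.lst entry (devices and paths invented for illustration) mixes both notations in one stanza:

```
title openSUSE 12.1
    # (hd0,1) is grub's own name: second partition of the first BIOS disk.
    root (hd0,1)
    # root= is read by the kernel; this is where device names, UUIDs,
    # or /dev/disk/by-id paths matter.
    kernel /boot/vmlinuz root=/dev/sda2 splash=silent quiet
    initrd /boot/initrd
```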
> Surely YaST restore ought to handle this correctly/better than this? I
> understand the benefits of using UUIDs but the restore process ought to
> get this right or at least offer an option like "do you want to keep the
> current UUIDs or the original UUIDs?
Perhaps.
You can file the appropriate bug reports in Bugzilla.
–
Cheers / Saludos,
Carlos E. R.
(from 11.4 x86_64 “Celadon” at Telcontar)