prior to using openSUSE 11.0, i was able to create hardware images using g4l (ghost for linux) and openSUSE 10.1.
i realize that hardware images are problematic (they were meant to be a short-term fix).
recently we decided to upgrade to openSUSE 11.0 and re-create our base hardware image, then ghost that image to our ftp server with g4l.
the g4l process worked fine when creating the image on our ftp server.
the g4l process also worked fine when i burned the new 11.0 image to another box.
however, when i booted the newly imaged box up, i received the following error:
>>
Waiting for device /dev/disk/by-id/scsi-SATA_ST3160815AS_9RA6ECMJ-part2 to appear …Could not find /dev/disk/by-id/scsi-SATA_ST3160815AS_9RA6ECMJ-part2
Want me to fall back to /dev/disk/by-id/scsi-SATA_ST3160815AS_9RA6ECMJ-part2 (Y/n)
<<
if i select Y, the machine tries again in vain and dumps you back to /bin/sh.
if i select n, the machine immediately dumps you back to /bin/sh. at this point you are no one on the box; most of the OS is not loaded.
can someone help me understand…
is this a known issue in 11.0?
is there something that i need to change (in our ghosting process) to circumvent this issue?
geeky2 wrote:
> Waiting for device /dev/disk/by-id/scsi-SATA_ST3160815AS_9RA6ECMJ-part2
> to appear …Could not find
> /dev/disk/by-id/scsi-SATA_ST3160815AS_9RA6ECMJ-part2
>
> 1) is this a known issue in 11.0?
>
> 2) is there something that i need to change (in our ghosting process)
> to circumvent this issue?
I think the initrd does not have the disk driver for your second box. You should
boot the CD/DVD and select the repair option; that should rebuild the initrd.
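If the automatic repair does not rebuild it, here is a minimal sketch of doing it by hand from the rescue system on the install media. The device name /dev/sda2 is an assumption taken from the error above — adjust it to your actual root partition; mkinitrd is openSUSE's initrd generator.

```shell
# Sketch (assumptions: booted into "Rescue System" from the install DVD,
# and the imaged root filesystem lives on /dev/sda2 - adjust to your layout).
mount /dev/sda2 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt mkinitrd       # regenerate the initrd with this box's disk driver
umount /mnt/sys /mnt/proc /mnt/dev
umount /mnt
```

The bind mounts are needed so mkinitrd can see the real devices from inside the chroot.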
This might be due to the change in ext3 inode size:
**Inode Size on the Ext3 Filesystem Increased**
The inode size on the ext3 filesystem was increased from 128 to 256 bytes by default. This change breaks many existing ext3 tools, such as the Windows tool EXTFS.
If you depend on such tools, install openSUSE with the old value.
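If you ever need to recreate a filesystem with the old value after install, mkfs.ext3 accepts an explicit inode size via `-I`. A hedged sketch — /dev/sda2 is a placeholder device, and mkfs destroys everything on it:

```shell
# Sketch: format a partition with the old 128-byte inode size so older
# ext3 tools can still read it. WARNING: destroys data on the partition;
# /dev/sda2 is an example device - substitute your own.
mkfs.ext3 -I 128 /dev/sda2
# Verify the result:
tune2fs -l /dev/sda2 | grep 'Inode size'
```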
sorry for not replying back to this thread sooner - i DO REALLY appreciate ALL of the feedback i received!
i wanted to report back on my findings after checking into the replies / approaches suggested below.
i'll comment in reply order.
inode size change: ext3 -> ext2
i did an install from the CD and changed the default filesystem back to ext2. then i ghosted the new image back to our ftp server and attempted to re-image a box from the newly created image (now using ext2 instead of ext3).
results: the g4l re-image process failed.
use the repair facility.
i did use the repair facility - but even after allowing it to auto-correct, i still was not able to get the box to come up.
had i used clonezilla?
no - i have not used this tool. the only tool we currently use here is g4l. i will try it at some point in the future.
with that said, it was decided that we needed to push forward with a 100% software-only install process (for a number of reasons, both technical and business).
since my first posting, i have set up an installation server for 10.1 accessible via http and have been tweaking my boot CD along with my autoinst.xml file to complete the OS install. i collect additional input from the <ask> utility and am currently writing perl utilities to lay the remaining software artifacts on the target box from a separate install CD.
I had a similar problem using SystemImager (which, by the way, is way more flexible if you learn it - it allows you to do incremental updates. I use it for my lab of about 35 machines, and I even have a writeup for it at Installation of SystemImager | Department of Physics).
Anyways, the cause of your problem is that new versions of SUSE mount drives by persistent ID (the /dev/disk/by-id/... paths, which encode the drive's model and serial number) instead of the classic /dev/sda, /dev/sdb names. Every hard drive has a different serial number, so when you copy an image onto a different drive, mounting from fstab will fail. Before you make your image, change the entries in fstab from /dev/disk/by-id… to /dev/sda2 or /dev/sdb or whatever they should be (sda if there is only one drive). I believe there is a way to change this in YaST, but I don't remember where. I should note that there is some advantage to using persistent IDs, mainly having to do with preserving the order of mounted USB devices; the old method can get messed up if for some reason the drives get mounted out of order.
Anyways, for the case mentioned above, the change would be from:
/dev/disk/by-id/scsi-SATA_ST3160815AS_9RA6ECMJ-part2
to
/dev/sda2 (assuming it is the only drive in the computer)
The other partitions will also need to be changed; note that the "part2" at the end of the original line tells you which partition on the drive it is.
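The fstab edit above can be sketched as a single substitution. This assumes one SATA disk that shows up as /dev/sda on every target box, and fstab entries in the by-id form shown in this thread; adjust the target device name if your layout differs:

```shell
# Hedged sketch: rewrite each /dev/disk/by-id/...-partN entry in fstab to
# /dev/sdaN, keeping the partition number, before taking the image.
# A backup copy is left in /etc/fstab.bak.
sed -i.bak -E 's|/dev/disk/by-id/[^[:space:]]+-part([0-9]+)|/dev/sda\1|g' /etc/fstab
```

Note that the boot loader configuration may carry the same by-id path (e.g. in a root= kernel parameter), so check it for the same substitution before imaging.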