How to configure GRUB?

Hi all,

I wanted to resize my openSUSE partition, which I did with the GParted Live CD… and then GRUB just didn’t want to work. I tried to restore it with the repair option on the openSUSE DVD. It didn’t work. Then I tried Super Grub Disk. It works! But there’s only “openSUSE” in the list, no more Windows.

I don’t know how to add Windows to the list. Can somebody help me?

Oh, and another thing: the kernel used is “default”. What do I need to change in the GRUB configuration to boot with the PAE kernel?

Thanks for helping me.

On your system, look in:
/boot/grub/
and see if there is any sign of an old menu.lst. It may be named menu.lst~ or menu.lst.old. Either may contain your old menu.
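A quick way to check from a terminal (a sketch; GRUB_DIR is just a convenience variable introduced here, and the default path assumes a normal openSUSE layout):

```shell
# List any menu.lst variants left behind in the GRUB directory.
# GRUB_DIR is parameterised only so the same check works on any mount point;
# on a normal install it is /boot/grub.
GRUB_DIR=${GRUB_DIR:-/boot/grub}
ls -l "$GRUB_DIR"/menu.lst* 2>/dev/null || echo "no menu.lst files found in $GRUB_DIR"
```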

If there is nothing there, don’t worry. Post the output of:

fdisk -l

and the contents of your current /boot/grub/menu.lst.

The PAE kernel info would be in the old menu.lst file, if you have one. In any case, check whether the PAE kernel is installed; if not, just re-install it and it should reappear in the menu.
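To check from a terminal (a sketch; kernel-pae is the package name on openSUSE 11.1, and the zypper line is left as a comment since it needs root):

```shell
# Is the PAE kernel package installed? The rpm query is guarded so the
# snippet degrades gracefully on non-RPM systems.
if command -v rpm >/dev/null 2>&1; then
    rpm -q kernel-pae || echo "kernel-pae is not installed"
else
    echo "rpm is not available on this system"
fi
# If it is missing, re-installing it also regenerates the boot menu entry:
#   zypper install kernel-pae
```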

This may help too:
GRUB Boot Multiboot openSUSE Windows (2000, XP, Vista) using the Grub bootloader.
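For reference, a Windows stanza in a GRUB legacy menu.lst typically looks like this (the partition (hd0,1) is only an example, so point it at wherever Windows actually lives; makeactive is sometimes needed for older Windows versions):

```
title Windows
    rootnoverify (hd0,1)
    makeactive
    chainloader +1
```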

Okay, it works! But when it boots and shuts down, I don’t get the openSUSE picture with the progress bar, just text in console mode… how do I get that back?

Thanks again.

Post your /boot/grub/menu.lst

Modified by YaST2. Last modification on Wed Apr 29 20:59:21 CEST 2009

default 0
timeout 8
##YaST - generic_mbr
gfxmenu (hd0,5)/boot/message
##YaST - activate

###Don't change this comment - YaST2 identifier: Original name: linux###
title OpenSUSE 11.1
root (hd0,5)
kernel /boot/vmlinuz-2.6.27.21-0.1-pae root=/dev/disk/by-id/ata-TOSHIBA_MK3252GSX_X86CC8X5T-part6 repair=1 resume=/dev/disk/by-id/ata-TOSHIBA_MK3252GSX_X86CC8X5T-part5 splash=silent showopts vga=0x317
initrd /boot/initrd-2.6.27.21-0.1-pae

###Don't change this comment - YaST2 identifier: Original name: windows 1###
title Windows Vista
rootnoverify (hd0,1)
chainloader +1

I’m posting here the last post in a thread; you might want to go through the whole thread. I can’t confirm whether the link offering a solution will actually work:
Lost GRUB background picture after changing menu.lst - Page 4 - openSUSE Forums

Mmmh, that’s not exactly the same problem. I still have the green background in the OS selection list. But when I start “openSUSE 11.1”, I don’t get the openSUSE logo with the loading bar, just text on a black background. Same thing when I shut down the computer.

FWIW I have a very similar line:

kernel /boot/vmlinuz root=/dev/disk/by-id/ata-M_blah_part7    resume=/dev/disk/by-id/ata-M_blah_part5 splash=silent showopts vga=0x31a

Yours is much the same, except it has “repair=1” in it. Maybe take that out?

I tried removing “repair=1”; nothing changed… I think it’s something other than GRUB.

Back to you Carl, that was my one shot :wink:

I’m kind of lost on this too, John. Everything seems to be in place. When you’ve never had the problem yourself, it’s difficult to troubleshoot.

Possibly try this part that Malcolm posted
Lost GRUB background picture after changing menu.lst - Page 2 - openSUSE Forums

I have a few versions of SuSE Linux, plus some others, like Puppy, on my AMD64 box. GRUB has been generally reliable, but I encountered a few “anomalies” recently when I changed some repositories, updated with zypper, and installed a new ATI driver (I know, I know). In the ensuing drama, GRUB began to misbehave in some classic ways, such as the background image disappearing. I got it all resolved eventually. Some notes on my experience follow.

  1. With a few OSs choosable at boot time, I opted to go with a dedicated GRUB partition. The philosophy and pragmatics of this choice are described in detail at the great “GRUB Grotto” web site.

  2. Successive installs of any OS in a given machine tend to destabilize the startup environment, as each OS makes the tacit presumption that it is the greatest OS and should somehow dominate your life. SuSE’s boot environment setup screens are more helpful than most in this regard, but, if you accept the default choices, each new OS will muck up your nice stable prior installs.

  3. GRUB has its own disk/partition naming scheme that it imposes on the installed machine at bootup time. Trouble is, the rules for the name mapping are vague. And they seem to vary across OS versions and between CD and normal (i.e. post-boot) operation. For instance, I had a heckuva time installing the KDE 4.2.2 Live CD, as the install process redefined hd0 as sdb instead of sda.

  4. Do enough installs, and your drives will be sprinkled with odd boot and grub directories, some of them with quite valid, if incomplete and unintended, versions of device.map, menu.lst, and message files. And even grub.conf, although I am not sure it matters.

  5. The combination of 3 and 4 makes for some very pernicious timewasters.

  6. The safest course of action when installing a new OS is to specify “No bootloader” during the install. Then go back to your intended menu.lst and enter the new OS parameters.

  7. When reconstructing a destabilized GRUB setup, it is instructive to run the find command, from the GRUB command line, a few times. That way, you see just how many menu.lst files (for instance) inhabit your system, and you can then pick the one you want to run with. What to do with all the extraneous files is up to you–they are not “wrong” (they can actually be helpful if you get stuck in some GRUB reboot menu maze), just confusing.
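For example, from the GRUB command line (a sketch; the output lines are only illustrative of the form GRUB prints, one (hdX,Y) per partition that contains the file):

```
grub> find /boot/grub/menu.lst
 (hd0,5)
 (hd1,0)
grub> find /boot/grub/device.map
```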

The 4.2.2 Live CD and the latest ATI fglrx video driver actually are a great combination. glxgears starts out at 3900 fps, a respectable figure; after a minute or so, it shifts to 6400 fps, a new record for the box.

Hope this helps someone.

This last post seems quite informative (I am having some Grub problems also, from install/repair issues), but how does one get to the ‘Grub command line’?

Not at all. GRUB’s naming scheme is quite clear and simple: hd0 is the first disk in BIOS detection order (though this can be changed via the device.map), and 0 is the first partition on it. Nowhere does GRUB use names like hda or sda. “Aha,” you say, jumping up, “you’re wrong: what about this root=/dev/sda2 resume=/dev/sda1 thing on the GRUB options line?” That is just a kernel parameter, passed straight down to the kernel. GRUB doesn’t understand what’s in the parameter. For all GRUB cares, it could read resume=/dev/galacticblaster1.

Also, when you first install GRUB, you have to point to some partition names on the system doing the install. This again is where the partition name pops up. As long as the partition name points to the right device, GRUB will install its boot files. This could be /dev/hdXN or /dev/sdXN or something else, depending on the age of the kernel and distro. But internally GRUB stores (hdX,Y) names.
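As a cheat sheet, assuming a single-disk system where the BIOS’s first disk shows up in Linux as /dev/sda (the Linux names are illustrative only, since they are not something GRUB itself knows about):

```
(hd0)     whole first BIOS disk           ->  /dev/sda  (or /dev/hda on older kernels)
(hd0,0)   first partition on that disk    ->  /dev/sda1
(hd0,5)   sixth partition (1st logical)   ->  /dev/sda6
```

Note that GRUB legacy counts both disks and partitions from zero, while Linux numbers partitions from one.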

But your advice is good. If one intends to do much multibooting, then it’s best to maintain a separate GRUB to get on top of all the distro GRUBs conflicting with each other. One thing though: you must use a GRUB that can understand 256-byte inodes if any of your OSes use that size. openSUSE’s GRUB is new enough to; others may not be.


GRUB naming logic is as clear as you say; no argument. My observation is not that it is wrong, just that it is not some eternal truth. I have a machine with two hard drives: a nice new 500 Gig, and an older 80 Gig I use for backups, and general legacy stuff. I have been using the newer drive as my initial boot drive for months, and my on-disk GRUB has faithfully recognized it as hd0.

That’s why it came as a shock when, after an apparently successful install of the KDE 4.2.2 Live CD, I got dumped into a SuSE penalty box, with messages like: “VFS cannot open root device disk/by-id/blahblah/part6”, “unknown block (0,0)”, “unable to mount root filesystem on unknown block (0,0)”, and “hd0,11 … no such partition.” Whaaat?

Because I believed the “first drive in the BIOS boot order is GRUB’s hd0” factoid, I was slow to suspect that hd0 was now a reference to my legacy drive. So the “no such partition” complaint was exactly true. Google revealed that the “undesired reordering of hd0 and hd1 during installs” phenomenon is not uncommon; it even has its own fix, the map (hd0) (hd1), map (hd1) (hd0) sequence. I avoided the fix (it seemed to invite further instability) in favor of editing my menu.lst file. That helped get things running. But once I had a stable install, with the Live CD out of the picture, I had to re-edit the menu.lst file, because the boot order as perceived by GRUB had returned to the real boot order as dictated by the BIOS.
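For the record, the fix looks like this at the GRUB command line or in a menu.lst stanza (disk and partition numbers are examples); it swaps the BIOS view of the two disks for whatever gets booted next, and is most often used for chainloading Windows from a disk other than the first:

```
map (hd0) (hd1)
map (hd1) (hd0)
rootnoverify (hd1,0)
chainloader +1
```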

It is obvious that the presence of the Live CD adversely affected the functioning of a nice stable GRUB setup. I now suspect that many GRUB malfunctions, such as the lost background image, are caused by installation CDs and DVDs either imposing a new GRUB program on an existing system, or just rearranging the drive boot order.

I am reluctant to critique SuSE’s boot setup logic, as it tries so hard to be helpful. But the fact is that the installer is presented with a confusing set of parameter choices over a series of screens and pulldown menus. And the shifting meanings of terms like boot and root don’t help matters. And when the install medium scrambles the boot drive ordering, that’s just unfair.

Hope this helps.

You get there two ways:

  1. Your bootup fails, and you get the black-background boot selection menu. From there, pressing “c” drops you to the GRUB command line (and “e” lets you edit a menu entry).

  2. On a stable system, from a root shell, enter grub.

HTH

You are right that there are some traps there: installers can mess with the order by playing with the device.map, and optical drives can sometimes get inserted into the device order. Basically it’s as you say: each GRUB will be configured by its installer to its liking, and it won’t necessarily be the same as another GRUB’s configuration. So the solution of one GRUB to rule them all is correct. It’s also possible to use the master GRUB in the MBR and secondary GRUBs in the partition boot records, at the cost of a slower bootup due to the chainloading delays.
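A stanza in the master menu.lst that hands off to a GRUB installed in a partition boot record would look something like this (the partition number is only an example):

```
title openSUSE 11.1 (its own GRUB)
    rootnoverify (hd0,5)
    chainloader +1
```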

Even when a single GRUB is involved there are traps. The other day I did an install on a server with a normal SCSI RAID array and a Fibre Channel adapter. The installer helpfully :sarcastic: put the latter at sda. I thought that wouldn’t be a problem and installed to sdb. The problem came at the first reboot, when it expected the SCSI RAID at sda. I had to boot the rescue system to fix up menu.lst and fstab. On the next (identical) server, I wised up and disconnected the fibre first.
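The rescue-system fix itself is just editing two files. Here is a sketch of the device-name swap, working on a demo copy of fstab (the file /tmp/fstab.demo and the partition layout are made up for illustration; the real files would be /mnt/etc/fstab and /mnt/boot/grub/menu.lst after mounting the root partition):

```shell
# Demo fstab with the wrong disk letter (hypothetical layout).
printf '/dev/sda1 swap swap defaults 0 0\n/dev/sda2 / ext3 acl,user_xattr 1 1\n' > /tmp/fstab.demo

# Swap sda <-> sdb in one pass, going through a temporary token so the
# second substitution does not clobber the first.
sed -i -e 's,/dev/sda,/dev/TMP,g' \
       -e 's,/dev/sdb,/dev/sda,g' \
       -e 's,/dev/TMP,/dev/sdb,g' /tmp/fstab.demo

cat /tmp/fstab.demo
```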