How to make GRUB immune to HD changes?

Hello everyone,

Is there a way to make GRUB immune to hard disk reordering? My HD config changes quite a bit – drives come in, drives go out – and part of the reason I haven’t dived into Linux more is because having to rebuild GRUB each time is a pain in the ass.

Any solutions for this?

Thanks,

Tom

What exactly do you mean?
Why do you shuffle your HDs around?
How many OSes do you use?
Can the HD boot order be changed in the BIOS?
dobby9

How are you handling this in Windows? That will give us an idea of what works for you, and we may be able to suggest a counterpart or a different method to replicate the process.

Well, by shuffling the drives around I mean that I clumsily disconnect and then reconnect my hard drives in various, assorted orders.

I have five hard drives in my computer. (Yes, I have my reasons.) Every so often I will do some repairs that require me to disconnect the SATA & IDE cables and then reinstall them. SATA being, well, SATA – that is, no master/slave or primary/secondary distinction – you are supposed to be able to unplug and replug SATA cables in pretty much any order you want, since neither SATA drives nor their cables have any distinguishing features. Worse, some of my hard drives are identical models, as I once used these for RAID 0… meaning that telling them apart via BIOS/software is difficult and risky.

So every time the BIOS order for my hard drives changes, GRUB freaks out. Most recently, it did so by spitting out an Error 22 on startup, without even giving me the option of a command prompt. This happened immediately after a fresh install of SuSE 11, since during the SuSE install I unplugged all hard drives except for the one drive I wanted to install to, simply because the graphical installer has a nasty habit of overwriting the MBR of a drive that I don’t want it to – another royal pain in the ass (not the point of this thread though).

I plug my other hard drives back in and BAM, GRUB error 22. No boot. Luckily, I can boot into my Vista drive and load Windows fine, but restoring Linux requires the SuSE disc and some time spent in recovery mode.

I used to have a lot of problems during the Linux boot process itself, because fstab would identify my drives based on their BIOS order (/dev/sda1 vs. /dev/sdd1, for example); lately, however, all the distros I play around with identify boot-critical mount points by drive serial numbers or some such. GRUB still identifies drives by their BIOS order – this is what's problematic.
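For reference, that order-independent identification looks something like this in /etc/fstab (the UUIDs and device IDs below are made-up placeholders; real values come from blkid or ls /dev/disk/by-id/):

```
# /etc/fstab – mount by filesystem UUID rather than by BIOS-order device name.
# The UUIDs here are placeholders, not real values.
UUID=0a1b2c3d-1111-2222-3333-444455556666  /      ext3  defaults  1 1
UUID=0a1b2c3d-7777-8888-9999-aaaabbbbcccc  /home  ext3  defaults  1 2
# openSUSE's installer tends to use persistent /dev/disk/by-id/ paths instead:
# /dev/disk/by-id/ata-SOME_MODEL_SERIAL-part1  /   ext3  defaults  1 1
```

Either way, the entry survives any reshuffling of the SATA cables, because it names the filesystem or the physical drive rather than its position in the BIOS scan order.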

For the record, I have never, EVER had issues with Windows not booting because drive order changed. As a PC technician with a once-prominent PC repair chain, I cloned, copied, and imaged customers’ drives all the time, in a variety of different manners, and the only time Windows ever gave me any crap from doing this was when the MBR and bootloader were not intact – which is easy enough to fix with two commands at the recovery console. AFAIK since 2000 or XP, Windows never skips a beat when you screw with your hard drives’ wiring: it always boots up fine, and it always shows the OS drive letter as the same drive it was installed as (usually C:). Therefore, I don’t have any of the issues with Windows.
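For anyone following along, the two XP recovery console commands in question are presumably these (they rewrite boot code only; the partition table and your data are untouched):

```
fixmbr     rewrites the MBR boot code (the IPL), leaving the partition table alone
fixboot    rewrites the boot sector of the system partition
```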

My current config is a triple-boot setup for Vista Ultimate 64-bit, XP Pro 32-bit, and openSUSE 11. I'm working on getting Mac OS X to run too, but that's proving to be more of a challenge. Each operating system is installed to its own hard drive, with the exception of the MS OSes, which live in separate partitions on the same drive. Whenever I install an OS, I generally unplug all hard drives except the target disk as a precaution, and then replug them once the OS is installed and running.

These other stats are likely irrelevant, but I include them just in case: Core 2 Duo 6400 stock, Intel D965WH, 8 GB DDR2, a variety of SATA and IDE hard drives (actually, I think I'm 100% SATA now…), and an Asus Radeon 4850 PCI-E hooked up to dual LCDs.

Thanks for the help,

tom

might it be easier – given your tendency to do what you do –
simply to use the BIOS to switch the boot order?

install SuSE to one drive and GRUB to its MBR – set it as the first boot HD

the only issue you will have, though, is if you have shifted around other drives that were present at the SuSE install, as SuSE will likely mount some of their partitions. But SuSE should still boot.

But if your system is in such a state of flux, it's going to be difficult. I don't expect Windows will do much for you in mounting your Linux partitions – however good you say it is.

my point of view:

use an HDD tray, like EZ-Swap… and your problems will vanish…

but… it's up to you
good luck

Hmmm . . . Of course you won't typically have a problem in Windows moving drives – you are using a shared system volume, with each Windows instance's boot volume on the same drive. What matters with all versions of Windows is the boot disk's system volume and, if using ntldr, the pointers in boot.ini – which you can fix with the RE. It is only the system volume that matters: copying and cloning is not a factor unless it affects this volume, and "drive" (i.e., volume) letter assignment is irrelevant.

Several possibilities . . .

Give Vista a try at controlling all the boot loading; it can handle chainloading nicely. To boot Linux, you install grub to the boot sector of the root partition (or the /boot partition, if it's separate). Then you put a boot object in bootmgr's bcd registry hive which points to that partition. From my read of the spec, for a real-mode boot sector object there is flexibility in how the key can be constructed; that is, the key could be relative (drive sequence) or absolute (drive serial number). I don't know if Vista's bcd editor (bcdedit) can construct such an object; it might be that it can only be done programmatically. But there are a couple of very good bcd GUIs (EasyBCD is the most popular) that easily handle adding such an object – I just don't know whether the key is relative or absolute. If the latter, it won't matter how you change the drives around. You'll probably have to just try it to find out – very easy to do.
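For what it's worth, adding such a real-mode boot-sector object by hand with bcdedit looks roughly like this (the description string, drive letter, and file name are placeholders; the boot-sector file would first be copied off the Linux partition, e.g. with dd, and bcdedit prints the new {guid} when the object is created):

```
rem Create a boot-sector application object; note the {guid} bcdedit returns.
bcdedit /create /d "openSUSE 11" /application bootsector
rem Point it at the partition holding the saved boot-sector file.
bcdedit /set {guid} device partition=C:
bcdedit /set {guid} path \suse.bin
rem Add the new entry to the boot menu.
bcdedit /displayorder {guid} /addlast
```

EasyBCD wraps this same sequence behind a GUI, which is why it's usually the easier route.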

Another possibility is setting up grub somewhat like Windows works, using a system volume – that is, a single /boot partition – to control booting for all instances. How this is implemented depends on the particular setup and personal preferences. One approach is to use a common shared kernel (different kernels require a bit more work) along with a shared device.map and menu.lst. Neither the grub "root" statement nor device.map's drive alignment needs to change when drives move. The "root=" statement in each instance's boot stanza uses either the Device-ID (which is unique to a drive) or the Volume Label (which might help you considerably with identifying which drive is which; you will see the label in Windows, too); regardless of where the drive moves, as long as it is in the bios map, the kernel will find it. You could set up this boot partition to be chainloaded to by ntldr (easy if on the same drive, iffy on another drive) or by Vista's bootmgr (very easy, as long as Vista can find the drive). It's also good to know, for on-the-fly or test situations, that you can escape out of the grub menu and edit the pointers there to boot any instance without needing to change device.map or menu.lst.
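A boot stanza along those lines might look like this in menu.lst (the disk ID is a made-up placeholder; a real one comes from ls /dev/disk/by-id/). The grub "root" line still names a BIOS position – which is why it should point at the fixed /boot partition on the boot disk – but the kernel's root= uses a persistent device ID, so the kernel finds the SuSE drive wherever it lands:

```
# menu.lst stanza – placeholder device ID, stable across cable reshuffles
title openSUSE 11
    root (hd0,2)
    kernel /boot/vmlinuz root=/dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL-part1
    initrd /boot/initrd
```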

There are of course other boot loaders; some of the free ones work better than the commercial ones. Whichever route you take, it's necessary to understand the boot theory of operation and how the loaders actually work, as that varies. An excellent write-up is here: Multibooters - Dual/Multi Booting With XP & Vista

Well, I don’t plan on keeping multiple Linux installs, so I’m not sure what a shared /boot partition would do for me.

I’d like to keep GRUB as my bootloader because I already have it chainloading Vista correctly (under the auto-generated configuration, it didn’t), and, frankly, I like the pretty graphics and plan on customizing my background screen as soon as everything else is finalized.

Also, I have yet to get Vista's chainloader working right – and I'm still having trouble wrapping my mind around its myriad workings. EasyBCD seems to promote its own version of GRUB for Linux booting, too… not sure what to make of that.

So, in short, there isn’t an easy fix (by editing menu.lst or some other file) to make GRUB identify hard drives based on some other characteristic? How do the versions for pen and floppy drives work then?

You said that grub freaks out when a drive is moved - that will only happen if (a) grub is installed in the boot drive’s MBR and (b) it is pointed to boot from linux on another drive which (c) you move. This is essentially what happens with XP as well, except that with XP the system volume (the equiv of /boot) must always be on the boot disk, while with grub that is not required.

But if you put the /boot partition on the boot disk (like Windows) and SuSE is on another disk, by using the Device-ID in menu.lst the SuSE disk can be moved wherever.

In any case, on an Intel machine the boot hard disk must have an IPL which is able to either directly find a boot loader (grub’s IPL stage1 finds stage2 based on the pointer) or it chains to a PBR boot sector on the active primary (the Windows method, also supported by grub).

There are boot loaders which will scan the disks looking for all installed boot sectors and present that in a menu for the user to choose from. The assumption is that the user knows which is which (IIRC some loaders display the Volume Label, which can be used for identification).

In your setup, I would put /boot on a 50MB partition on the same drive as Windows. It can even be a FAT32 partition created with Windows. If you can put it on one of the first four primaries, then you don’t need to do anything with the MBR; set the active flag on that partition and the IPL will find it. Or you can put /boot on a logical partition and install grub’s IPL to the MBR pointing to the logical. Either way, grub can boot SuSE wherever you have moved it, and will chainload all Windows instances on that drive, too.
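Installing GRUB that way, from the grub shell, would look roughly like this (the partition numbers are placeholders for wherever the /boot partition actually ends up):

```
# grub legacy shell, with /boot assumed on the third primary of the first disk
grub> root (hd0,2)      # partition holding stage2 and menu.lst
grub> setup (hd0,2)     # install to that partition's boot sector (active-flag method)
# ...or, for the MBR / logical-partition variant:
grub> setup (hd0)       # install stage1 to the MBR, pointing at the root set above
grub> quit
```

With stage1 in the MBR (or the active flag pointing at the /boot primary), the boot disk is the only drive whose position matters; everything else is found via menu.lst.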

As far as Vista is concerned, EasyBCD is only a GUI alternative to MS's bcdedit. It writes standard boot objects to the bcd hive, that's all. It is more powerful than bcdedit, because by design MS only exposes all the bcd capabilities through the API. The Linux boot objects that EasyBCD can write are supported bcd objects which MS provided precisely for the purpose of chainloading foreign OSes. There is complexity in the registry structure of the various object types that bcd supports, but that really is only a programming concern, as the interface tools easily do everything the end-user requires.

As far as booting from pen or USB drives, how that works is entirely dependent on the BIOS (assuming it's supported at all), and the implementations vary. Generally, the only method that works is booting that disk autonomously, like any other disk selected from the BIOS. A loader on an external typically cannot chainload to a loader on another disk. Booting from an internal and then chainloading to the external depends upon the boot loader; ntldr cannot do it, while Vista's bootmgr and grub can.

On this link is an example of chainloading more than 100 OSes on a box without GRUB in the MBR. I hate to be facile about it, but get some colored tape or something. Colored stick-on dots. I mean – only 5 drives?

If you are doing that much “racking”, you should take the advice of Roger_M since you will soon break a connector or wear one out.

grub page