I’ve got a system with quite a few hard disks which ran fine for a very long time. It runs openSUSE 13.1 and was using GRUB to boot. (I have already installed GRUB 2 to try to fix the problem described below.) To be able to connect all the hard drives I use two PCIe SATA cards, whose RAID capabilities I don’t use, since I rely on the kernel’s md-raid only.

Recently the mainboard died and I replaced it with an ASUS P9X79. This worked “out of the box”, except that booting is very unreliable. I often get just a blinking dash after the BIOS screen. Of course I checked that all drives are detected by the BIOS and that the correct one is set to boot. Even if I open the boot menu and select the correct boot drive, I get the blinking dash. The strangest thing is: if I insert an openSUSE live USB disk and boot, I get the boot menu from the hard drive and can boot without a problem (often, but not always).
One thing that could explain why my previous mainboard didn’t have this boot issue is that it could only detect part of my hard drives. So it may be that one of them carries interfering boot code that simply was never executed before.
I found a nice tool here which I ran, but I don’t immediately see the reason for my issues. The output of the script is here. To me it looks like /dev/sdh and /dev/sdf have some conflicting boot code, but I really don’t know much about this. I wonder if it’s possible/wise to simply install the boot code to the MBR of every drive. Note that /dev/sdi is my live USB drive, which is not the system being booted. Incidentally, I didn’t manage to boot from the USB drive at all; I had issues doing that, but since my main system booted I didn’t try very hard.
Are the drives all SATA, or all IDE? A mix can confuse the boot order. Only the boot disk (whichever one is set in the BIOS) actually needs MBR boot code (assuming you use MBR booting and not EFI booting).
It is a little unclear what your setup is. Best to show rather than explain. fdisk -l please.
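Something like this is what I’m asking for (run it as root; the single-disk line just uses an example device name):

```
# Print the partition tables of all detected disks
fdisk -l
# Or limit the output to one drive, e.g. the suspected boot disk
fdisk -l /dev/sdd
```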
Please use the version from GitHub - arvidjaar/bootinfoscript. It is a continuation of the SourceForge version, which is too old and has issues parsing current grub2 (among others). Make the output available.
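If it helps, this is roughly how to fetch and run it (the clone URL and output file name are assumed from the usual bootinfoscript conventions):

```
git clone https://github.com/arvidjaar/bootinfoscript
cd bootinfoscript
sudo bash bootinfoscript
# the report should end up in RESULTS.txt in the current directory
```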
Thank you all so far. I’ve run the newer bootinfoscript and the output can be found here. This one seems to detect much more.
The hard drives are all SATA (some of them are connected via one of the two PCIe expansion cards, which have a boot option ROM). There’s no IDE in this computer anymore. Note that /dev/sdi is a USB drive with a network installer for openSUSE 13.1; with this drive inserted I’m magically able to boot from the hard disk. I’m indeed MBR booting, although the motherboard supports EFI. The previous (dead) motherboard didn’t have EFI yet. The output of fdisk -l is here.
What boot mode are you using - legacy MBR (sometimes called CSM) or EFI? You have both on your system.
P.S. Sorry, I see your last post already answers this.
OK, so you are using MBR. In this case only /dev/sdd is bootable. The other disks have either garbage in the MBR or libparted code but no active partition, so an attempt to boot from any of them results in exactly your symptom: a hung system.
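If you want to verify this yourself, you can peek at each disk’s first sector and at the active-partition flag; a rough sketch (the device name is just an example):

```
# Identify what kind of boot code, if any, sits in the MBR of a disk
dd if=/dev/sdd bs=512 count=1 2>/dev/null | file -
# Check which partition carries the active/boot flag (the '*' in the Boot column)
fdisk -l /dev/sdd
```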
So it appears that your motherboard reorders the disks. To work around it, install grub2 in the MBR of each disk and point it to the same /boot/grub2. You can also add these disks to /etc/default/grub_installdevice so that on an update grub2 is correctly reinstalled on all of them. Unfortunately I’m not sure YaST supports that.
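The manual steps would look roughly like this (a sketch; the device names are placeholders for your actual disks):

```
# Install the grub2 boot code into the MBR of every disk the BIOS might pick;
# they all point at the same /boot/grub2 of the running system.
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    grub2-install "$disk"
done

# List the same devices, one per line, in /etc/default/grub_installdevice so a
# grub2 update reinstalls the boot code on all of them.
printf '%s\n' /dev/sda /dev/sdb /dev/sdc /dev/sdd >> /etc/default/grub_installdevice
```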
Thanks for the help. But it didn’t work. (This is my setup now: link).
I guess it’s because some disks are 4 TB, so they have a GPT disk label. To install GRUB on such a disk I needed to create a BIOS boot partition, but I only had some free space at the end. If I disconnect the 4 TB disks at boot, it works.
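For the record, this is roughly what the BIOS boot partition step looks like on a GPT disk (a sketch, assuming sgdisk from the gptfdisk package is available and the free space at the end is at least 1 MiB; the device name is an example):

```
# Create a ~1 MiB partition of type EF02 ("BIOS boot partition") in the largest
# free block, then reinstall grub2 so it can embed its core image there.
sgdisk --new=0:0:+1M --typecode=0:EF02 /dev/sdf
grub2-install /dev/sdf
```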
I guess I will try a radically new approach: UEFI. My boot drive is over 7 years old anyway, so it probably should be preventively replaced. I’ll use the opportunity to add a RAID 1 boot setup at the same time, but for that I’ll need to do some experimenting first.
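A minimal sketch of the direction I have in mind (all untested planning on my side; the device name and size are examples):

```
# Create an EFI System Partition (type EF00) on the new boot disk and format it
# FAT32; the installer or grub2's EFI target would then put the bootloader there.
sgdisk --new=1:0:+512M --typecode=1:EF00 /dev/sdj
mkfs.vfat -F 32 /dev/sdj1
```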