Hardware: Intel NUC D54250WYK (i5 Haswell), 16GB RAM, 240GB Intel 540 SSD, Centrino 6235 WiFi/BT card.
Using a SanDisk Cruzer 16GB drive imaged with the 4.7GB DVD install.
Searched the forum and didn’t find anything specific on how to straighten it out. It’s probably a UEFI problem. I tried disabling UEFI boot; I get a different install splash screen, but the same results.
Another thing of note - my USB drive with the install set on it shows up in both the Legacy and UEFI tabs in the BIOS, but my SSD only shows up in the Legacy tab.
Failure mode: everything seems to install just fine, but the boot system is never set up correctly. I am getting back into building a lot of different systems with different distros, and this is a learning process for me. I expect myself to eventually understand, to a reasonably decent degree, how the new and old boot systems work; enough to debug them myself for the most part, at the least.
I am sure someone here understands enough to advise - any assistance is gratefully appreciated.
Seriously, I’m still wrapping my head around why eth0 isn’t good enough anymore…
Thanks for your time and patience.
Update - the option isn’t present on what seems to be the EFI version of the installer, but once I error out (using “check installation media” - it bombs out looking for the optical drive) and it drops to text mode, I can select “start installation”. From the “start installation” menu, I can select “boot installed system”.
I get four partitions under “choose root partition”.
sda1 (156 MB, vfat)
sda3 (221 GB, ext4)
sdb1 (4.0 MB, vfat, BOOT)
sdb2 (4.3 GB, iso9660, openSUSE-13. (this appears truncated, presumably due to the fixed width of the partition menu)
If I select sda3 - I am taken immediately to the login screen and the user I configured during the last install is functional along with correct password, and I can boot right into desktop. It’s actually really **** fast, just a few seconds to go from choosing partition to full desktop. I’m sort of impressed.
What’s odd is that there are three boot modes observed, two of them GUI and distinctly different from each other. But that’s probably less important to understand than figuring out how to use the rescue system to set up the boot properly. Once the system is booted, I don’t think it matters how I got there, performance-wise (AFAIK), so I’m not as concerned with which mode it boots in as with getting it to boot consistently.
Update: Did some reading in a couple of places, and after checking what’s been fixed in the BIOS updates (also noting that there’s been roughly an update a month, fixing significant boot/BIOS compatibility issues), I came to the conclusion that my SSD not being seen in the Visual BIOS under the UEFI tab was significant.
I updated from v21 to v25 (latest) BIOS.
Now it’s seeing the SSD in the UEFI tab as well as booting to a GRUB error.
Welcome to GRUB!
error: file `/boot/grub2/x86_64-efi/normal.mod' not found.
Entering rescue mode…
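(For the record, GRUB’s rescue shell can sometimes be talked past this by hand. A sketch of such a session - the partition name here is an assumption; `ls` in the rescue shell shows what GRUB actually detects:)

```
grub rescue> ls                                # list detected disks/partitions
grub rescue> set prefix=(hd0,gpt3)/boot/grub2  # assumed partition holding /boot/grub2
grub rescue> set root=(hd0,gpt3)
grub rescue> insmod normal                     # load the module it failed to find
grub rescue> normal                            # start the normal boot menu
```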
So, on to more reading. I think I might now be able to use the install USB drive to boot into the desktop and actually configure the bootloader with the GUI utilities (I have always loved that about SuSE - I don’t have to hunt for system tools, and they mostly work pretty well; browsing the system logs in particular). Before, I could look, but the right options just weren’t being offered, or some inconsistency in the system was blocking them. There is still an issue, and a full reinstall might automatically yield the right boot config now that the BIOS is updated (one of the fixes in the last couple of updates addressed a Linux Mint boot issue), but I want to try a little more to do it manually too, so that I might better understand how it works, at least a little more than when I started.
I guess I can probably do it using efibootmgr and other stuff too. We’ll see.
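For reference, the efibootmgr route would look something like this (run as root on an EFI-booted system; the disk, partition number, and loader path below are assumptions based on my layout, not verified):

```
efibootmgr -v                        # list current NVRAM boot entries, verbose
efibootmgr -c -d /dev/sda -p 1 \
           -L "opensuse" -l '\EFI\opensuse\grubx64.efi'
                                     # create an entry for the installed loader
efibootmgr -o 0000,0001              # optionally set the boot order
```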
Thanks for looking in on me.
There is an unmounted partition, /dev/sda1; the partition tool lists it as FS ID: 0x103 EFI boot. I temporarily mounted it to see what’s in there. It contains one file, at a single path: <tempmount>/EFI/opensuse/grubx64.efi
/dev/sda2 was automatically created as swap.
/dev/sda3 is mounted and has a directory /boot/grub2/i386-pc/<dozens of .mod files>
So it looks like I need to somehow munge a bunch of paths and configs to massage all the bits into their proper places, and I am still uncertain of the long-term stability of the result (will updates break something, or put a file in a different place, etc.), given the variety of options/configs and the tips I’ve seen on some pages. At this point I’m not even sure whether a directory named i386 vs x86_64 indicates anything, given the aforementioned variety of solutions, fixes, workarounds, and patches.
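(If it turns out to matter: as far as I can tell, the directory name does track the GRUB target platform - i386-pc modules are for legacy BIOS boot, x86_64-efi modules for 64-bit UEFI boot. A sketch of the two install invocations, with the device and paths assumed, and noting that openSUSE normally drives this through YaST rather than by hand:)

```
grub2-install --target=i386-pc /dev/sda    # legacy BIOS: modules go to /boot/grub2/i386-pc
grub2-install --target=x86_64-efi \
              --efi-directory=/boot/efi    # UEFI: modules go to /boot/grub2/x86_64-efi
```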
I’m reinstalling from scratch in the hopes that I can go back and see the difference in a proper config, should it work this time, with the updated BIOS and SSD showing in the UEFI side of the BIOS configuration.
Should it not, I don’t expect to be any worse off than I am already. But my experience with sophisticated distributions (SuSE versus, say, Linux From Scratch) has me less willing to manually wrangle configs: the distribution usually has its own mind about how it wants to configure things, and it is less likely to break during maintenance if given its head for the most part.
Thanks for staying with me.
Root cause - BIOS needed patching to get UEFI configured on hardware properly. A reinstall worked like a dream, after.
Now /dev/sda1 is mounted on /boot/efi and /boot/grub2 has the x86_64-efi directory with all the .mod files in it.
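A few read-only commands to sanity-check the working layout (paths as above; efibootmgr needs the system booted in EFI mode):

```
findmnt /boot/efi                # should show /dev/sda1, vfat
ls /boot/efi/EFI/opensuse/       # grubx64.efi lives here
ls /boot/grub2/x86_64-efi/       # the EFI-target GRUB modules
efibootmgr                       # the NVRAM entry the firmware boots
```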
I would have been wrestling with this for a while to get it into this sort of shape, especially with my unfamiliarity with wrangling EFI and bootloaders. In fact, I don’t think it would have looked as clean as it does - and now I have seen what a broken setup looks like in comparison with one that works properly.
I’m glad to be back with SuSE. I need a Bluetooth Low Energy peer, and only bluez 5.x does that. Getting bluez 5.x working on Ubuntu 12.04 would have meant major surgery (big changes between 4.x and 5.x, apparently) that was probably going to break the system pretty badly, and none of the others that people are using as standard - including Debian unstable - had bluez 5.x either. So it came down to finding a distro that did and using it for some work with Bluetooth; only SuSE 13.1 had 5.x included.
Thanks for the patience and have a great weekend (what’s left of it).
On 2014-04-07 01:16, rmzauner wrote:
> Update: Fixed.
> Root cause - BIOS needed patching to get UEFI configured on hardware
> properly. A reinstall worked like a dream, after.
Cheers / Saludos,
Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)
I’m glad you have it working.
Yes, that’s standard with EFI. You are now using the 64-bit version of grub2-efi, which puts its modules in the x86_64-efi directory. For non-EFI systems, the 32-bit version of grub2 is used, since it mostly has to deal with 16-bit legacy (BIOS) booting.
In your opening post:
That initially puzzled me, which is why I did not reply at that time.
That was my confusion. All it means is that either there is no EFI partition on the SSD, or, if there is one, it contains no file at the path “\EFI\Boot\bootx64.efi”. When booting a device, UEFI looks for that path. Booting a named operating system instead looks in NVRAM (non-volatile memory managed by the EFI firmware).
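You can see the distinction in the directory layout itself. A minimal sketch, using a throwaway directory to stand in for the ESP (the paths below are illustrative, not read from your machine):

```shell
#!/bin/sh
# Simulate an ESP and check for the removable-media fallback loader.
# Firmware booting a *device* looks for \EFI\Boot\bootx64.efi;
# booting a *named OS* follows the loader path stored in NVRAM instead.
ESP=/tmp/esp-demo                        # stand-in for /boot/efi
mkdir -p "$ESP/EFI/Boot" "$ESP/EFI/opensuse"
: > "$ESP/EFI/opensuse/grubx64.efi"      # distro loader (NVRAM-booted)
: > "$ESP/EFI/Boot/bootx64.efi"          # fallback loader (device-booted)

if [ -f "$ESP/EFI/Boot/bootx64.efi" ]; then
    echo "fallback loader present"
else
    echo "fallback loader missing"
fi
```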
Yes. As soon as I updated the BIOS to latest, my SSD “magically” appeared on the EFI tab in the Visual BIOS. Then everything just started working.
The BIOS (or are we using NVRAM and BIOS interchangeably? I am still unsure) is itself a GUI, and the home screen shows bootable devices in a tabbed format - one tab for EFI and the other for Legacy. Since the hardware wouldn’t configure properly, I think it confused the installer, which attempted to put together something that would work for legacy (but for some reason still selected the grub2-EFI bootloader mechanism). After the BIOS update, the system was able to attempt an EFI boot from what it thought looked like the right path, but since the EFI configuration had never completed properly, it was still broken - though I got further than before, when it booted right back into install mode as if the installation weren’t even there.
On Fri, 04 Apr 2014 19:46:02 GMT, rmzauner
>Hardware: Intel NUC D54250WYK (i5 Haswell), 16GB RAM, 240GB Intel 540
>SSD, Centrino 6235 WiFi/BT card.
>Using Sandisk Cruiser 16GB drive imaged with 4.7GB DVD install.
>Searched forum and didn’t find specific how to straighten it out. It’s
>probably a UEFI problem. I tried disabling UEFI boot, I get a different
>install splash screen, but same results.
>Another thing of note - my USB drive with the install set on it shows up
>in both the Legacy and UEFI tabs in the BIOS, but my SSD only shows up
>in the Legacy tab.
>Failure mode is that everything seems to install just fine but it never
>sets up the boot system correctly. I am getting back into building a
>lot of different systems with different distros and this is a learning
>process for me, I have an expectation of myself to understand to a
>relatively decent degree how the new/old boot systems work at some
>point; enough to debug them myself for the most part, at the least.
>I am sure someone here understands enough to advise - any assistance is
>gratefully appreciated.
>Seriously, I’m still getting my head wrapped around why eth0 isn’t good
>enough anymore…
>Thanks for your time and patience.
Just guessing here, but it sounds like your SSD has MBR-only partitioning.
Try using gdisk to redo the SSD’s disk label as GPT.
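A sketch of the session (gdisk converts an MBR label to GPT in memory and only writes it out on ‘w’, but back up first anyway; /dev/sda is an assumption):

```
gdisk /dev/sda         # run as root
# At gdisk's interactive prompt:
#   p - print the table; the "Partition table scan" header reports the
#       current label type (e.g. MBR-only vs. GPT)
#   w - write: commits the in-memory GPT conversion to disk
#   q - quit without saving anything
```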