It’s been sitting here for 10 minutes now. Doing a fresh install of 15.4.
Another issue: while installing, there was no real progress display. I saw a gigabytes-to-go display, but the packages-remaining count stayed at 6330 throughout. I liked the old way, which showed the estimated time remaining and a decrementing package count, too. This isn’t an improvement.
After another 15 minutes, I tried the reboot without the installation medium. I just get the “GRUB _” prompt and it still hangs. I guess I can kick off another install, but it takes a long time and it took me two tries to get this far. (The install-reinstall-rereinstall cycle takes sooooo long.)
I started an “upgrade” install and I think I may have discovered the problem. I’ve labeled my partitions and selected “mount by label”. The upgrader is unable to find some of these labeled partitions. I don’t think your mounting system likes by-label mounting very much. I’ll be back in an hour after a full UUID install.
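For reference, the labels and UUIDs that actually exist can be checked from a rescue shell before committing to another full install; a rough sketch (nothing here modifies anything):

    # List every block device with its filesystem LABEL and UUID,
    # to compare against what the upgrader says it cannot find.
    lsblk -o NAME,FSTYPE,LABEL,UUID,MOUNTPOINT
    # The same information read straight from the filesystem signatures:
    blkid
    # The by-label and by-uuid symlinks udev actually created:
    ls -l /dev/disk/by-label/ /dev/disk/by-uuid/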
It’s a UEFI-based MoBo. I have no idea where it chooses to boot from. I only know that I am required to do my own partitioning, because the automated and auto-assisted versions of disk configuration think it’s okay to reformat the two RAID-0 partitions, where I have terabytes of (backed-up) photos. It would take a long time to restore. I do wish it would faithfully honor the current layout. I’ll do a 5th install and take a picture of the disk layout.
I do not have any personal experience with RAID. However, using UEFI booting should not be a problem as far as I know. But I guess RAID issues could matter as grub tries to access the boot menu.
I know about RAID problems. One install decided to reformat the RAID array; it took a week of effort to fix it. So, if you notice, the RAID stuff does not get formatted or mounted. I’ll be doing that once I have an install. 780MB to go yet.
So, maybe this is the issue, maybe not: with BTRFS subvolumes, how do you configure them to mount at boot time? When the installer fills out a BTRFS root file system, the subvolumes are named “@/srv”, for example, and mounted on /srv. For other disk partitions, there’s no option to set a mount point. My 6th install has the subvolume device names with an “@” as the first character, but it still doesn’t show them being mounted. Is it an oversight that the installer won’t allow me to set a mount point for these subvolumes?
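For what it’s worth, the subvolumes themselves can at least be inspected from a rescue shell; a rough sketch, where /dev/sda2 is only a stand-in for whatever the btrfs root partition really is:

    # Mount the btrfs root and list its subvolumes (@, @/srv, @/var, ...).
    mount /dev/sda2 /mnt
    btrfs subvolume list /mnt
    # Show which subvolume gets mounted when none is named explicitly.
    btrfs subvolume get-default /mnt
    umount /mnt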
I’m mostly using “ext4”, though I have experimented with “btrfs”.
What I normally see is all subvolumes part of the same partition, but you have them spread around. I think that’s a feature of “btrfs” (distributing over several devices), but not one I am familiar with.
The details for mounting subvolumes should be in “/etc/fstab”. Presumably the “@/srv” subvolume should be mounted at “/srv”.
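Something along these lines, as a sketch of what I would expect those entries to look like (the UUID is a placeholder, and the subvolume names just follow the “@/...” scheme you described):

    # /etc/fstab - hypothetical btrfs entries; the real UUID comes from blkid.
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /     btrfs  defaults               0  0
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /srv  btrfs  defaults,subvol=@/srv  0  0
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var  btrfs  defaults,subvol=@/var  0  0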
You are not using UEFI. The “GRUB” text is printed by stage 1, the legacy BIOS boot block loaded by the BIOS. Next, it reads stage 1.5 (core.img) from a disk address embedded in that boot block and jumps to it. If it hangs there, it read garbage.
What to do next depends on your system settings and on which bootloader was chosen during installation, neither of which you have described.
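If the boot block really is stale, one way forward is to rewrite it from the rescue system; this is only a sketch, assuming legacy BIOS booting with the root filesystem on /dev/sda2 (adjust the device names to your actual layout):

    # Mount the installed system and chroot into it.
    mount /dev/sda2 /mnt
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    chroot /mnt
    # Inside the chroot: rewrite the MBR boot block plus core.img,
    # then regenerate the menu.
    grub2-install /dev/sda
    grub2-mkconfig -o /boot/grub2/grub.cfg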
Given the size of sda, I’m having a hard time imagining why you are involving sdb or sdc in getting Leap installed. Given the trouble so far, I’d disconnect both those drives and install entirely on sda, after partitioning to your liking in advance of starting the Leap installer. If disconnecting is too much bother, format the non-RAID partitions on sdb & sdc as NTFS or FAT, to induce the installer to ignore those disks. Leap installations don’t need triple-digit-gigabyte filesystems for /var or /usr, nor triple digits for / even without separate /var & /usr. My largest OS partitions are 18G, including /var, /usr & /boot as ordinary directories rather than mount points, and with 50% or more free space even though 5 or more kernels remain installed.
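If you take the formatting route, it is only a couple of commands from a rescue shell; a sketch with made-up partition numbers, and of course only for partitions whose contents you do not need:

    # Double-check which partitions are which before touching anything.
    lsblk /dev/sdb /dev/sdc
    # WARNING: destroys whatever is on the named partitions.
    mkfs.vfat /dev/sdb1     # or: mkfs.ntfs -Q /dev/sdb1
    mkfs.vfat /dev/sdc1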
This is a good time to switch to UEFI booting from a GPT disk. It’s a nice, competent replacement for MBR/legacy booting. Grub2 works well, so there’s no need for an embedded-systems bootloader that I can see.
This makes some sense. I’ve been “rolling forward” for well over a decade now. I’ll have to get back to this tomorrow. Too many pressing issues today. Thank you!
OK. Well, I finished bucking up the tree that came down in my yard and got back to this. Still no go. I booted up in rescue mode, copied off all the data in ROOT1, then re-partitioned my drive using GPT. I reformatted everything except the RAID stuff. I’m hung at the GRUB prompt again. I can NOT use all defaults, because the default disk scheme is to reformat my RAID disks. Surely, SURELY there is some way to force a UEFI boot without giving your installer free rein to scribble all over any and all of my disks. I guess I could pull the box out of its cubby hole and perform surgery to unplug the drives, then re-do that operation once I have a working install. I’d rather not.
During installation you selected a GRUB location that is different from the one your BIOS uses. There is some old GRUB boot block that gets loaded by the BIOS, and it is not overwritten by the installation.
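You can check what boot code is actually sitting in that boot block; a read-only sketch, assuming the BIOS boots from /dev/sda:

    # Dump the first 512 bytes and look for GRUB's signature strings;
    # nothing is written to the disk.
    dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep -i grub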
Your screenshot shows an EFI System Partition, but you are apparently doing the installation in Legacy BIOS mode. You need to decide for yourself whether you want EFI or Legacy BIOS.
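A quick, generic way to see which mode the running installer (or rescue system) was booted in, before committing to a partitioning scheme:

    # The directory only exists when the kernel was started by UEFI firmware.
    [ -d /sys/firmware/efi ] && echo "booted in UEFI mode" || echo "booted in legacy BIOS mode"
    # In UEFI mode the firmware boot entries are visible as well:
    efibootmgr -v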