Installer fails at the disk partitioning step (fails to detect M.2 RAID 0? maybe)

So I got a new Clevo the other day (the new 775-TM-G). When I go to install OpenSUSE from a live USB, everything works until it gets to the point of recommending partitions for the disk, at which point it throws an error: yast/wfm.rb:253 Client /mounts/mp_0001/usr/share/YaST2/clients/inst_disk_proposal.rb failed with ‘undefined method `name’ for nil:NilClass’ (NoMethodError). If told to continue anyway, it skips the entire partition setup section, but then can’t install at the end.

Looking through the log (/var/log/YaST2/y2log), I suspect this is because it’s detecting my RAID1 drive array (the ~4TB one) correctly, scanning it and partitioning it, but something is going wrong with detecting the RAID0 drive (which should be 2x1 TB for 2 TB total and is intended to be my system disk; I can’t see it in the logs at all). But I’m far from an expert at this.
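
For anyone hitting the same thing, a quick way to double-check which disks the kernel actually sees from the live environment is to look under /sys/block. Here’s a rough Python sketch of that check (the nvme0n1/nvme1n1 names in the comment are just what I’d expect on my setup, not something from the logs):

```python
#!/usr/bin/env python3
"""Rough sketch: list the block devices the running kernel can see.

Run from the live environment. If the NVMe RAID0 members were detected,
you'd expect something like nvme0n1 / nvme1n1 (assumed names) to show up
alongside sda / sdb.
"""
import os

SYS_BLOCK = "/sys/block"

for dev in sorted(os.listdir(SYS_BLOCK)):
    # Skip loop/RAM/optical devices that come from the live image itself
    if dev.startswith(("loop", "ram", "sr")):
        continue
    try:
        with open(os.path.join(SYS_BLOCK, dev, "size")) as f:
            sectors = int(f.read().strip())
    except OSError:
        continue
    # /sys/block/<dev>/size is always in 512-byte sectors
    print(f"{dev}: {sectors * 512 / 1e12:.2f} TB")
```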

(The RAID1 array is SSDs on a standard SATA interface, but the RAID0 is M.2s on PCIe.)

Is there any way to solve this and install OpenSUSE, or do I need to report this somewhere to get some kind of extra support upstream?

The logs are here: https://www.dropbox.com/s/bi9dbl2rark652q/y2logs.tgz?dl=0

I’ve done some more poking and this does appear to be an upstream issue, possibly related to the RAID controller.

I booted from a Mint live USB, and both GParted and gdisk are reporting weird stuff.

Neither of them appears to see the RAID0 set at all. The mapped disk reports a physical size of ~1.6TB but carries the GPT for the 2x4TB RAID1 array (which the tools think is corrupt), and both RAID1 disks are detected as /dev/sda and /dev/sdb while the RAID0 disks aren’t detected at all.
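
That “corrupt” verdict would make sense if the GPT was written for a bigger disk than the device it now shows up on: the primary GPT header records where the backup header should sit at the end of the disk, and if that LBA is past the end of the ~1.6TB mapped device, the table looks damaged. A rough sketch of how one might check that (assumes 512-byte logical sectors and root; /dev/sda is only an example device):

```python
#!/usr/bin/env python3
"""Rough sketch: compare a disk's primary GPT header against the size the
kernel reports for the device. If the backup-header LBA recorded in the
GPT lies beyond the end of the device, partitioning tools will usually
flag the table as damaged.

Assumes 512-byte logical sectors and needs root.
"""
import os
import struct
import sys

DEV = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"  # example device
SECTOR = 512

fd = os.open(DEV, os.O_RDONLY)
try:
    dev_sectors = os.lseek(fd, 0, os.SEEK_END) // SECTOR
    os.lseek(fd, 1 * SECTOR, os.SEEK_SET)   # primary GPT header lives at LBA 1
    header = os.read(fd, SECTOR)
finally:
    os.close(fd)

if header[:8] != b"EFI PART":
    sys.exit(f"{DEV}: no GPT signature at LBA 1")

# Backup-header LBA is the 8-byte little-endian field at offset 32
backup_lba = struct.unpack_from("<Q", header, 32)[0]
print(f"{DEV}: {dev_sectors} sectors, GPT expects backup header at LBA {backup_lba}")
if backup_lba >= dev_sectors:
    print("Backup header would be past the end of the device -> GPT looks truncated/corrupt")
```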

Windows 10, meanwhile, sees everything correctly and thinks the GPT is fine too.

But this is just me guessing from the absolutely bizarre results I’m getting.

It still seems to be an issue that the installer doesn’t handle this situation more gracefully, however.

Did even more digging. Part of this appears to be because Linux has no real support for IRST fake-RAID0 on NVMe, which is why the RAID0 doesn’t appear.

The solutions all seem to revolve around using manual mdadm RAID, but that’s not really compatible with dual-booting Windows, which does support IRST.
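
For what it’s worth, mdadm does understand Intel’s IMSM metadata format (the format IRST uses), so where the member disks are actually visible to the kernel, the firmware RAID can show up as an mdadm container. A quick sketch of the two mdadm queries I used to check this (assumes mdadm is installed in the live environment and you run as root):

```python
#!/usr/bin/env python3
"""Rough sketch: ask mdadm whether it can see the Intel (IMSM) firmware
RAID metadata at all. If the member disks are visible to the kernel this
should list a container; in my case the NVMe members never appear in the
first place.
"""
import subprocess

def run(*cmd):
    print(f"$ {' '.join(cmd)}")
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)

# What platform RAID (IMSM) support does mdadm think this machine has?
run("mdadm", "--detail-platform")

# Scan for any arrays/containers described by on-disk metadata
run("mdadm", "--examine", "--scan")
```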

It looks like this has been an issue for a while now.

But the installer should recognise the situation and handle it more gracefully (e.g. explain the problem and possible solutions: mdadm, running under Hyper-V, etc.).

Okay after some more testing and reading I’ve identified the problem.

The PCIe SSDs are NVMe M.2 drives in RAID0 using IRST. The SATA SSDs are M.2s in RAID1, also using IRST. Linux apparently has rather poor support for drive controllers in this configuration at the kernel level. In OpenSUSE this manifests as seeing only the SATA RAID1 array and not seeing the NVMe drives at all.

OpenSUSE was actually handling things better than Mint, where I was poking at the disks with GParted/gdisk (Mint appears to have problems either with 4TB drives, or it was attempting to do a software RAID1 over the hardware RAID1 and seeing only a fraction of the disk), which explains the inconsistent information I got from those tools.

The problem with the OpenSUSE installer was that, since it wasn’t seeing the RAID0 NVMe drive, it saw only the RAID1 4TB drive, which was formatted as NTFS. So the OpenSUSE installer looked at the disk, identified it (incorrectly) as the Windows boot drive, and tried to make a partition recommendation. Unfortunately for the installer, the drive does not have an EFI partition (since it’s not actually the Windows boot drive), at which point the installer freaks out.

Given this, it would probably be wise to add some additional logic to the installer’s detection of Windows disks so it doesn’t assume a disk is a Windows boot drive with an EFI partition just because it’s the only disk present with a Windows file system.
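
To illustrate the kind of check I mean, here’s a rough sketch in Python (not the installer’s actual code, which is Ruby); the `Partition` type and fields are made-up stand-ins for whatever the storage layer really exposes:

```python
#!/usr/bin/env python3
"""Rough sketch: don't treat a disk as "the Windows boot drive" just
because it's the only disk with an NTFS filesystem -- also require an
EFI system partition before assuming Windows boots from it.
"""
from dataclasses import dataclass

@dataclass
class Partition:           # hypothetical stand-in type
    fs_type: str           # e.g. "ntfs", "vfat"
    gpt_type: str          # GPT partition type GUID

ESP_GUID = "c12a7328-f81f-11d2-ba4b-00a0c93ec93b"  # EFI system partition

def looks_like_windows_boot_disk(partitions: list[Partition]) -> bool:
    has_windows_fs = any(p.fs_type == "ntfs" for p in partitions)
    has_esp = any(p.gpt_type.lower() == ESP_GUID for p in partitions)
    # A lone NTFS data disk (like my RAID1 storage array) has the first
    # but not the second, and shouldn't be proposed as the boot disk.
    return has_windows_fs and has_esp

# Example: a 4 TB NTFS-only storage disk (basic data partition, no ESP)
storage_disk = [Partition("ntfs", "ebd0a0a2-b9e5-4433-87c0-68b6b72699c7")]
print(looks_like_windows_boot_disk(storage_disk))   # False
```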

At an upstream level it’d be nice to properly recognise setups with IRST NVMe RAID0 alongside SATA RAID1, even if only to cover the case of people trying to dual-boot Windows.

For myself, once I recognized the issue I nuked the empty storage partition and removed the GPT header, allowing OpenSUSE to install to my storage disk for now. That’s not ideal (I really don’t need RAID1ing of OpenSUSE’s swap and / partitions), but it works well enough for what I do with it.
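
In case it’s useful to anyone, the “removed the GPT header” step amounted to wiping the partition-table/filesystem signatures so the installer sees a blank disk. A minimal sketch of that, wrapping wipefs; the device name is a placeholder and this is obviously destructive, so double-check it before running anything like it:

```python
#!/usr/bin/env python3
"""Rough sketch: wipe all detected signatures (GPT, NTFS, ...) from a
disk so the installer treats it as blank. DESTRUCTIVE."""
import subprocess

DISK = "/dev/sdX"  # placeholder -- the RAID1 storage disk in my case

# wipefs --all removes every signature it recognises on the device
subprocess.run(["wipefs", "--all", DISK], check=True)
```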

I’ll probably nuke my OS disk later and see if the NVMe drives are visible outside of the IRST RAID0. Once I’ve verified that, I’ll put it back to RAID0 but reserve some space on it in case the issue I suspect is causing this gets fixed.

So I tried taking the NVMe drives out of RAID and restarting, but they remain undetected.

It seems that OpenSUSE can’t see NVMe devices when there are also SATA drives in RAID mode on the same controller (at least for Intel’s consumer-level IRST stuff).
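
If anyone wants to verify the same thing on their hardware, here’s the quick check I ended up using: see whether the kernel has bound the nvme driver to anything at all. This is a rough sketch based on my understanding that with the controller in RST/RAID mode the NVMe drives get remapped behind it, so nothing appears under /sys/class/nvme even though Windows (with Intel’s driver) sees them fine:

```python
#!/usr/bin/env python3
"""Rough sketch: check whether the kernel has bound the nvme driver to
any controller. An empty /sys/class/nvme is consistent with the drives
being hidden behind the SATA controller's RAID/RST mode."""
import os

nvme_ctrls = []
if os.path.isdir("/sys/class/nvme"):
    nvme_ctrls = sorted(os.listdir("/sys/class/nvme"))

if nvme_ctrls:
    print("NVMe controllers visible to the kernel:", ", ".join(nvme_ctrls))
else:
    print("No NVMe controllers visible -- consistent with the drives "
          "being hidden behind the controller's RAID/RST mode")
```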