There's Intel RST support in Leap?

Hi!

In order to improve the Spanish installation documentation, I’d like to know the current state of Intel RST support. From what I can read on the web, both SUSE and Red Hat support it, but the last time I tried (I think it was Leap 15.2) I had to switch from RST to AHCI in the BIOS before I could access the drive (and if I understand correctly, that works fine with Btrfs). It would be better to be able to tell a user, “just plug in a pendrive with an openSUSE Leap image and boot it”, or at least to warn them, “if you have an NVMe disk you will certainly need to change this BIOS setting”, so I’m very curious about the current situation.

Thanks for your help and have fun!!

Intel RST is so badly written that no Linux vendor will support it. Its job is to use fake RAID so that a small high-speed drive can mirror a larger, slower drive. There is no advantage to it with NVMe. It lets hardware vendors pair substandard drives with a small SSD to “look” faster; many times it does look faster, but NVMe is always faster. The BIOS has to be set to RAID for Intel RST to work, because the small SSD does not have all the sectors, so the “mirror” drive is used for RAID faults (sector not found). That’s why Samsung NVMe drives blow away Intel Optane setups: 100% of the Samsung is fast NVMe, while only about 5% of the Optane setup is fast NVMe. Four years ago this was a fair idea; today it makes no sense.
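The throughput argument above can be sketched with a toy model. This is a back-of-the-envelope calculation with made-up numbers (the function name, speeds, and hit rate are all illustrative assumptions, not benchmarks): requests hitting the small fast tier are served fast, the rest fall through to the slow drive, and the effective speed is the time-weighted (harmonic) mean.

```python
# Toy model: effective read throughput of a small fast cache in front of a
# slow drive, versus a single large NVMe. Numbers are illustrative only.

def effective_throughput(hit_rate, fast_mbps, slow_mbps):
    """Time-weighted (harmonic) mean: a cache hit is served at the fast
    tier's speed, a miss at the slow drive's speed."""
    return 1.0 / (hit_rate / fast_mbps + (1.0 - hit_rate) / slow_mbps)

# Small Optane-style cache in front of a spinning disk, 80% hit rate:
cached = effective_throughput(0.80, 2000, 150)
# A plain NVMe drive serves everything at full speed:
nvme = effective_throughput(1.0, 3000, 3000)

print(f"cached tier: {cached:.0f} MB/s, plain NVMe: {nvme:.0f} MB/s")
```

Even with a generous hit rate, the misses against the slow drive dominate, which is why a drive that is 100% fast NVMe wins.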

My 2 cents YMMV.

mdadm has supported it for a long time. Not all possible configurations are implemented; in particular, caching mode (using the fast drive as a cache for the slow drive) is not supported.

I needed to change RST to AHCI in order to access the drive … NVME disk

It has absolutely nothing to do with Intel RST. RAID mode in the BIOS hides the NVMe device behind the standard AHCI controller, so the normal NVMe driver simply does not see it. The Intel RST driver knows how and where to probe for the NVMe device in this case. There were patches for the Linux kernel to support this, but they were rejected by the NVMe maintainers. Recently I stumbled upon a second attempt; I am not sure what the result was.

Again: the name of the option in the BIOS is unrelated to actually using RAID, but because this option was introduced to let the IRST driver in Windows take over control of the NVMe device, the name stuck.

I am eager to see links to your perfect software. I am also curious where you got Intel RST source code to assess its quality.

Its job is to use a fake raid to allow a small high speed drive to mirror a larger slower drive.

And what exactly is so “bad” about it? Maybe you should also explain to the developers of Linux bcache that they are idiots wasting their time?

Hey, I am quoting the Linux kernel development team, since they have seen the Intel RST Linux code. They say it is garbage; who am I to argue with them?

mdadm does real RAID, not the pseudo-RAID that Intel RST uses to simulate two RAID drives of different sizes. I have used LVM to create two partitions of the same size on different-sized drives: on one drive the whole disk was a single partition, and the other drive had two additional partitions.

For 40 years I installed thousands of systems with every flavor of RAID, starting with Veritas and almost every add-on drive card that PC hardware supported. LVM is the easiest to support, in my opinion. mdadm is good unless the primary boot drive is lost; for some reason it does not boot off the secondary unless you change the boot order. (I have not used it in 10 years, so I may be out of date on that.)

bcache does not use the BIOS’s RAID code to do its caching; Intel RST does. bcache is well written: for all practical purposes it does what Intel RST and Optane do in Windows with BIOS RAID, and bcache is smart enough not to cache large file transfers, so it offers better performance than Optane. I have retired all my rotating disks except backup drives, so I no longer use bcache. Eight years ago, when I had 2 TB rotating disks and a 256 GB SSD, I used it. All my new machines have NVMe, so no bcache is needed on my systems.
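The “don’t cache large transfers” behavior mentioned above is bcache’s sequential cutoff: long sequential streams bypass the cache device and go straight to the backing device. Here is a minimal sketch of that idea in Python; the class, threshold, and routing logic are my own illustration, not bcache’s actual implementation (bcache’s real knob is the `sequential_cutoff` sysfs attribute).

```python
# Sketch of a bcache-style "sequential cutoff" policy: streaming I/O that
# would just churn the cache is routed straight to the backing device.
# The 4 MiB threshold mirrors bcache's default sequential_cutoff; the rest
# is an invented toy.

SEQUENTIAL_CUTOFF = 4 * 1024 * 1024  # bypass streams longer than 4 MiB

class TieredReader:
    def __init__(self):
        self.last_end = None   # end offset of the previous request
        self.run_bytes = 0     # length of the current sequential run

    def route(self, offset, length):
        """Return 'cache' or 'backing' for one read request."""
        if self.last_end == offset:      # contiguous with the previous I/O
            self.run_bytes += length
        else:                            # new stream: reset the run length
            self.run_bytes = length
        self.last_end = offset + length
        return "backing" if self.run_bytes > SEQUENTIAL_CUTOFF else "cache"

r = TieredReader()
print(r.route(0, 1024 * 1024))                 # short request -> "cache"
print(r.route(1024 * 1024, 8 * 1024 * 1024))   # long sequential run -> "backing"
```

The design point is that a big sequential copy would evict all the hot random-access data from the cache while gaining almost nothing, since rotating disks handle sequential reads reasonably well.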

Because the intent log written via Intel RST can be compromised, disk corruption can occur under the right conditions. Until Intel RST can guarantee intent-log integrity, the NVMe team will probably keep rejecting the code. NTFS keeps three intent logs, so two of the three being good is OK; most Linux file systems do not.
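The “two of three logs” idea can be sketched as replicated, checksummed copies where recovery accepts any copy whose checksum still verifies. This is purely illustrative (the function names and CRC32 trailer layout are my invention, not how NTFS actually stores its logs):

```python
# Toy sketch of replicated intent logs: each copy carries a CRC32 trailer,
# and recovery takes the first copy whose checksum still verifies.
# Illustrative only; not NTFS's real on-disk format.

import zlib

def write_copies(record: bytes, n=3):
    """Store n copies of the record, each with a 4-byte CRC32 trailer."""
    crc = zlib.crc32(record).to_bytes(4, "big")
    return [record + crc for _ in range(n)]

def recover(copies):
    """Return the record from the first copy whose checksum verifies."""
    for blob in copies:
        record, crc = blob[:-4], blob[-4:]
        if zlib.crc32(record).to_bytes(4, "big") == crc:
            return record
    raise IOError("all intent-log copies corrupt")

copies = write_copies(b"commit txn 42")
copies[0] = b"\x00" * len(copies[0])   # simulate one corrupted copy
print(recover(copies))                  # the surviving copies still yield it
```

With a single unverifiable log (the situation described for RST), a torn or stale write is indistinguishable from a valid record, which is exactly the corruption window the maintainers objected to.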

In the days of 3330 disk drives, I built a file system for Unix (1975) to attach four IBM 3330-type disks to Unix. I too did not foresee drives bigger than 256 MB, the limit of my file system. I think my code was the only one that used the count, key, and data fields on the drive; IBM abandoned that format a few years later. I used a per-track bitmap for allocations. The version of fsck for my drives took 40 minutes to run, so a UPS was needed to keep the system sane and allow a clean shutdown.
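A per-track allocation bitmap like the one described above can be sketched in a few lines: one bitmap per track, one bit per block, with allocation scanning for the first clear bit. The class name, track/block sizes, and API are invented for illustration; the original 1975 code is obviously not available.

```python
# Toy per-track allocation bitmap: one integer bitmap per track, one bit
# per block. Sizes and names are invented for illustration.

BLOCKS_PER_TRACK = 32

class TrackBitmap:
    def __init__(self, tracks):
        # bitmaps[t] has bit b set when block b of track t is in use
        self.bitmaps = [0] * tracks

    def alloc(self):
        """Claim the first free block; return its (track, block) address."""
        for t, bm in enumerate(self.bitmaps):
            for b in range(BLOCKS_PER_TRACK):
                if not bm & (1 << b):
                    self.bitmaps[t] |= 1 << b
                    return (t, b)
        raise OSError("disk full")

    def free(self, track, block):
        """Clear the bit so the block can be reused."""
        self.bitmaps[track] &= ~(1 << block)

fs = TrackBitmap(tracks=4)
print(fs.alloc())   # (0, 0)
print(fs.alloc())   # (0, 1)
```

Keeping the bitmap per track means an allocator can prefer blocks on the same track as related data, which mattered a great deal when seek time dominated.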

Hi all!!

Thanks for your comments!

Have fun!!