Does Leap 15.4 support M.2 SSDs?

I can’t find the info on my own, so I’m asking. Is the driver for M.2 already there in 15.4? Would it have to be a fresh install? Thanks in advance.

NVMe has been supported since kernel version 3.3; M.2 is a form factor for SSDs, not a driver interface.

The Linux kernel has supported NVMe drives since version 3.3 – Leap 15.4 ships kernel 5.14.
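A quick way to confirm this on a running system (a sketch; device names will differ per machine):

```shell
# Show the running kernel version (NVMe support landed in 3.3)
uname -r

# Confirm the nvme driver is available to this kernel
modinfo -d nvme 2>/dev/null || echo "nvme module info not found"

# List any NVMe block devices the kernel has detected
ls /dev/nvme* 2>/dev/null || echo "no NVMe devices present"
```

If `/dev/nvme*` nodes appear, both the form factor and the protocol are already handled; nothing extra needs to be installed.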

Hello: Of course, SSDs with 3D NAND and other memory types are supported, even in RAID (I would like all the Intel functions and utilities for this to be supported as well). M.2 SSDs in PCIe configuration, with more bandwidth, are supported, and the same goes for the SATA type (only the form factor changes there). As for VROC and the rest, Intel may not release the code, which is a shame; it would be a great help, especially on Linux and Unix servers.



mikrios-15:~ # systemd-analyze
Startup finished in 7.146s (kernel) + 3.882s (initrd) + 12.275s (userspace) = 23.304s
graphical.target reached after 12.261s in userspace 

mikrios:~ # inxi -SMCz -xxxDPI 
System: 
  Kernel: 5.14.21-150400.24.11-default arch: x86_64 bits: 64 compiler: gcc 
    v: 7.5.0 Console: pty pts/1 wm: kwin_x11 DM: SDDM Distro: openSUSE Leap 
    15.4 
Machine: 
  Type: Desktop Mobo: ASUSTeK model: PRIME X299-DELUXE II v: Rev 1.xx 
    serial: <filter> UEFI: American Megatrends v: 3601 date: 09/24/2021 
CPU: 
  Info: 12-core model: Intel Core i9-10920X bits: 64 type: MT MCP 
    smt: enabled arch: Cascade Lake rev: 7 cache: L1: 768 KiB L2: 12 MiB 
    L3: 19.2 MiB 
  Speed (MHz): avg: 1201 high: 1204 min/max: 1200/4800 volts: 1.6 V 
    ext-clock: 100 MHz cores: 1: 1200 2: 1200 3: 1202 4: 1204 5: 1202 6: 1200 
    7: 1203 8: 1202 9: 1203 10: 1202 11: 1203 12: 1200 13: 1202 14: 1199 
    15: 1202 16: 1203 17: 1202 18: 1199 19: 1202 20: 1202 21: 1203 22: 1202 
    23: 1202 24: 1202 bogomips: 167995 
  Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx 
Drives: 
  Local Storage: total: 46.39 TiB used: 37.52 GiB (0.1%) 
  ID-1: /dev/nvme0n1 vendor: Western Digital model: WD BLACK SN850 Heatsink 
    1TB size: 931.51 GiB speed: 63.2 Gb/s lanes: 4 type: SSD serial: <filter> 
    rev: 614600WD temp: 39.9 C scheme: GPT 
  ID-2: /dev/nvme1n1 vendor: Western Digital model: WDS100T3X0C-00SJG0 
    size: 931.51 GiB speed: 31.6 Gb/s lanes: 4 type: SSD serial: <filter> 
    rev: 111110WD temp: 42.9 C scheme: GPT 
  ID-3: /dev/nvme2n1 vendor: Western Digital model: WD BLACK SN850 Heatsink 
    1TB size: 931.51 GiB speed: 63.2 Gb/s lanes: 4 type: SSD serial: <filter> 
    rev: 614600WD temp: 35.9 C scheme: GPT 
  ID-4: /dev/nvme3n1 vendor: Western Digital model: WDS100T1X0E-00AFY0 
    size: 931.51 GiB speed: 63.2 Gb/s lanes: 4 type: SSD serial: <filter> 
    rev: 614600WD temp: 35.9 C scheme: GPT 
  ID-5: /dev/nvme4n1 vendor: Western Digital model: WDS100T3X0C-00SJG0 
    size: 931.51 GiB speed: 31.6 Gb/s lanes: 4 type: SSD serial: <filter> 
    rev: 102000WD temp: 36.9 C scheme: GPT 
  ID-6: /dev/nvme5n1 vendor: Western Digital model: WDS100T1X0E-00AFY0 
    size: 931.51 GiB speed: 63.2 Gb/s lanes: 4 type: SSD serial: <filter> 
    rev: 614600WD temp: 33.9 C scheme: GPT 
  ID-7: /dev/nvme6n1 vendor: Western Digital model: WD BLACK AN1500 
    size: 1.82 TiB type: SSD serial: <filter> rev: 10271043 temp: 59° (332 
    Kelvin) C scheme: GPT 
  ID-8: /dev/nvme7n1 vendor: Western Digital model: WDS100T3X0C-00SJG0 
    size: 931.51 GiB speed: 31.6 Gb/s lanes: 4 type: SSD serial: <filter> 
    rev: 111110WD temp: 42.9 C scheme: GPT 
  ID-9: /dev/sda vendor: Western Digital model: WD2005FBYZ-01YCBB3 
    size: 1.82 TiB speed: 6.0 Gb/s type: HDD rpm: 7200 serial: <filter> 
    rev: RR09 scheme: GPT 
  ID-10: /dev/sdb vendor: Western Digital model: WD2005FBYZ-01YCBB3 
    size: 1.82 TiB speed: 6.0 Gb/s type: HDD rpm: 7200 serial: <filter> 
    rev: RR09 scheme: GPT 
  ID-11: /dev/sdc vendor: Western Digital model: WD6003FRYZ-01F0DB0 
    size: 5.46 TiB speed: 6.0 Gb/s type: HDD rpm: 7200 serial: <filter> 
    rev: 1H01 scheme: GPT 
  ID-12: /dev/sdd vendor: Western Digital model: WD2003FZEX-00SRLA0 
    size: 1.82 TiB speed: 6.0 Gb/s type: HDD rpm: 7200 serial: <filter> 
    rev: 1A01 scheme: GPT 
  ID-13: /dev/sde vendor: Western Digital model: WD140EFGX-68B0GN0 
    size: 12.73 TiB speed: 6.0 Gb/s type: HDD rpm: 7200 serial: <filter> 
    rev: 0A85 scheme: GPT 
  ID-14: /dev/sdf vendor: Western Digital model: WD161KRYZ-01AGBB0 
    size: 14.55 TiB speed: 6.0 Gb/s type: HDD rpm: 7200 serial: <filter> 
    rev: 1H01 scheme: GPT 
Partition: 
  ID-1: / size: 100 GiB used: 11.61 GiB (11.6%) fs: btrfs dev: /dev/nvme6n1p2 
  ID-2: /boot/efi size: 249.7 MiB used: 320 KiB (0.1%) fs: vfat 
    dev: /dev/nvme6n1p1 
  ID-3: /home size: 1.4 TiB used: 25.91 GiB (1.8%) fs: btrfs dev: /dev/sdd1 
  ID-4: /opt size: 100 GiB used: 11.61 GiB (11.6%) fs: btrfs 
    dev: /dev/nvme6n1p2 
  ID-5: /tmp size: 100 GiB used: 11.61 GiB (11.6%) fs: btrfs 
    dev: /dev/nvme6n1p2 
  ID-6: /var size: 100 GiB used: 11.61 GiB (11.6%) fs: btrfs 
    dev: /dev/nvme6n1p2 
  ID-7: swap-1 size: 175 GiB used: 0 KiB (0.0%) fs: swap priority: -2 
    dev: /dev/sdd2 
Info: 
  Processes: 404 Uptime: 0h 23m wakeups: 0 Memory: 125.48 GiB used: 3.31 GiB 
  (2.6%) Init: systemd v: 249 target: graphical (5) default: graphical 
  Compilers: gcc: 7.5.0 alt: 7 Packages: N/A note: see --pkg Shell: Bash (su) 
  v: 4.4.23 running-in: konsole inxi: 3.3.20



mikrios:~ # lsblk -fm 
NAME        FSTYPE FSVER LABEL       UUID                                 FSAVAIL FSUSE% MOUNTPOINTS              SIZE OWNER GROUP MODE 
sda                                                                                                               1.8T root  disk  brw-rw---- 
├─sda1      vfat   FAT32             37A9-911A                                                                    512M root  disk  brw-rw---- 
├─sda2      btrfs                    f668bb92-e549-419b-b801-86d8257d6cac                                          60G root  disk  brw-rw---- 
├─sda3      swap   1                 3b1336dc-985d-4c92-8f35-797e20ec48d4                                         283M root  disk  brw-rw---- 
└─sda4      btrfs        home        241b6625-98d2-4167-9f6e-985476a84c61                                         1.8T root  disk  brw-rw---- 
sdb                                                                                                               1.8T root  disk  brw-rw---- 
├─sdb1      vfat   FAT16             A0FF-36B7                                                                    300M root  disk  brw-rw---- 
├─sdb2      btrfs                    7c0c5fe6-91a9-4cb8-bf3e-adb5238f952f                                          87G root  disk  brw-rw---- 
├─sdb3      btrfs                    52784a91-09dd-4cec-aeaf-ab62b792ce57                                         1.7T root  disk  brw-rw---- 
└─sdb4      swap   1                 1abac62c-373d-4208-a7fa-57025024c4bf                                        65.6G root  disk  brw-rw---- 
sdc                                                                                                               5.5T root  disk  brw-rw---- 
└─sdc1      btrfs        6-T-WD-GOLD 9be936ea-d5cb-4fa5-811c-dde7bad8a296                                         5.5T root  disk  brw-rw---- 
sdd                                                                                                               1.8T root  disk  brw-rw---- 
├─sdd1      btrfs                    7bba6e67-716b-46fc-9bba-83d21a8ab184    1.4T     2% /home                    1.4T root  disk  brw-rw---- 
└─sdd2      swap   1                 f3d19df3-e590-4acd-b39c-ade8c8fb2fa3                [SWAP]                   175G root  disk  brw-rw---- 
sde                                                                                                              12.7T root  disk  brw-rw---- 
└─sde1      btrfs        14T_RED+    e0732760-95e6-4e29-a5b5-093acdb50f17                                        12.7T root  disk  brw-rw---- 
sdf                                                                                                              14.6T root  disk  brw-rw---- 
└─sdf1      btrfs        16T         6265d71f-8715-428d-ad93-d600fe843500                                        14.6T root  disk  brw-rw---- 
sr0                                                                                                              1024M root  cdrom brw-rw---- 
nvme6n1                                                                                                           1.8T root  disk  brw-rw---- 
├─nvme6n1p1 vfat   FAT16             3BC7-D6EA                             249.4M     0% /boot/efi                250M root  disk  brw-rw---- 
└─nvme6n1p2 btrfs                    901087e2-77f2-4684-a096-a2fcfe4d8c5b   87.9G    12% /var                     100G root  disk  brw-rw---- 
                                                                                         /root                                      
                                                                                         /boot/grub2/x86_64-efi                     
                                                                                         /usr/local                                 
                                                                                         /tmp                                       
                                                                                         /srv                                       
                                                                                         /opt                                       
                                                                                         /boot/grub2/i386-pc                        
                                                                                         /.snapshots                                
                                                                                         /                                          
nvme1n1                                                                                                         931.5G root  disk  brw-rw---- 
└─nvme1n1p1 btrfs        nvme1       2301ea55-48aa-459d-8342-6455cceaea39                                       931.5G root  disk  brw-rw---- 
nvme4n1                                                                                                         931.5G root  disk  brw-rw---- 
└─nvme4n1p1 btrfs        nvme4       39580307-090a-462a-8f0e-1610ec633f5a                                       931.5G root  disk  brw-rw---- 
nvme7n1                                                                                                         931.5G root  disk  brw-rw---- 
└─nvme7n1p1 btrfs        nvme6       f0c1f1d3-84ee-448a-b81a-3f07e89ad2cd                                       931.5G root  disk  brw-rw---- 
nvme5n1                                                                                                         931.5G root  disk  brw-rw---- 
└─nvme5n1p1 btrfs        nvme5       00e24071-a0b5-4a11-ab5e-e0c82bb14553                                       931.5G root  disk  brw-rw---- 
nvme0n1                                                                                                         931.5G root  disk  brw-rw---- 
└─nvme0n1p1 btrfs        nvme0       3550898b-d4a4-4268-abae-400c72eaaac1                                       931.5G root  disk  brw-rw---- 
nvme3n1                                                                                                         931.5G root  disk  brw-rw---- 
└─nvme3n1p1 btrfs        nvme3       8791e3f8-1381-4cab-bae6-33d7266f10bf                                       931.5G root  disk  brw-rw---- 
nvme2n1                                                                                                         931.5G root  disk  brw-rw---- 
└─nvme2n1p1 btrfs        nvme2       f08235b2-569b-43a9-9503-b3e53e6ebbb8                                       931.5G root  disk  brw-rw----


On this Leap 15.4 machine, the system is installed on a RAID 0 of two M.2 NVMe drives; and many of today’s laptops ship with built-in M.2 drives instead of SATA drives.

Best regards.

A translator is being used; sorry if there are errors. Thanks.

In btrfs, SSD and NVMe devices are recognized and configured accordingly,
even adding the `ssd` mount option without you specifying it.

On mine I did not set anything, and the result is this:


mikrios:~ # mount |grep "dev/nvme"
/dev/nvme6n1p2 on / type btrfs (rw,relatime,ssd,space_cache,subvolid=266,subvol=/@/.snapshots/1/snapshot)
/dev/nvme6n1p2 on /.snapshots type btrfs (rw,relatime,ssd,space_cache,subvolid=265,subvol=/@/.snapshots)
/dev/nvme6n1p2 on /boot/grub2/i386-pc type btrfs (rw,relatime,ssd,space_cache,subvolid=264,subvol=/@/boot/grub2/i386-pc)
/dev/nvme6n1p2 on /opt type btrfs (rw,relatime,ssd,space_cache,subvolid=262,subvol=/@/opt)
/dev/nvme6n1p2 on /srv type btrfs (rw,relatime,ssd,space_cache,subvolid=260,subvol=/@/srv)
/dev/nvme6n1p2 on /tmp type btrfs (rw,relatime,ssd,space_cache,subvolid=259,subvol=/@/tmp)
/dev/nvme6n1p2 on /usr/local type btrfs (rw,relatime,ssd,space_cache,subvolid=258,subvol=/@/usr/local)
/dev/nvme6n1p2 on /boot/grub2/x86_64-efi type btrfs (rw,relatime,ssd,space_cache,subvolid=263,subvol=/@/boot/grub2/x86_64-efi)
/dev/nvme6n1p2 on /root type btrfs (rw,relatime,ssd,space_cache,subvolid=261,subvol=/@/root)
/dev/nvme6n1p2 on /var type btrfs (rw,relatime,ssd,space_cache,subvolid=257,subvol=/@/var)
/dev/nvme6n1p1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro)

Kind regards.

If your reason for asking relates to migrating an HDD or SSD to NVMe by cloning, with the expectation of it working the same, the answer is: not without prior preparation. At least, that’s how it was three years ago, the last time I tried it. The preparation required is a rebuild of the initrd(s) with NVMe support specifically added prior to cloning. It is not there by default, but needs to be, because the driver is modular:

# lsinitrd initrd | grep nvme
drwxr-xr-x   3 root     root            0 May 16 21:26 lib/modules/5.14.21-150400.22-default/kernel/drivers/nvme
drwxr-xr-x   2 root     root            0 May 16 21:26 lib/modules/5.14.21-150400.22-default/kernel/drivers/nvme/host
-rw-r--r--   1 root     root        87313 May 11 22:00 lib/modules/5.14.21-150400.22-default/kernel/drivers/nvme/host/nvme-core.ko.zst
-rw-r--r--   1 root     root        28943 May 11 22:00 lib/modules/5.14.21-150400.22-default/kernel/drivers/nvme/host/nvme.ko.zst
# lsmod | grep nvme
nvme                   53248  9
nvme_core             172032  10 nvme
t10_pi                 16384  1 nvme_core
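One way to force the module into the initrd before cloning (a sketch assuming dracut, which Leap uses; the drop-in file name `99-nvme.conf` is my own choice):

```shell
# Tell dracut to always include the nvme driver (syntax per dracut.conf(5))
conf='add_drivers+=" nvme "'
echo "$conf" | sudo tee /etc/dracut.conf.d/99-nvme.conf

# Rebuild the initrd for the running kernel
sudo dracut --force

# Verify the module made it in (should list nvme-core.ko and nvme.ko entries)
lsinitrd | grep nvme
```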

@HealingMindNOS
Hi
I see no mention of the hardware you’re planning to use? There is support for sure, for example if using it for additional storage.

Booting may be a different issue depending on your hardware. Also, what M.2 device are you planning to use? For example, the Intel board running Leap 15.4 that I have doesn’t support NVMe boot (I boot from a USB device), and the WD SN750 NVMe in this system needs some additional GRUB boot options (it has a funky controller).

Perhaps you could clarify the hardware in use, or that you plan to use?

Actually, I don’t know yet; I have an ASUS B450M Pro-S micro-ATX motherboard. I am also having misunderstandings with ASUS tech support, because their Intel QVL listings for M.2 are vague, and there is not much to choose from among what they tested.

Thanks to ALL of you for your replies.

Hello: Intel is a case apart with their Optane NVMe; I also think, though I’m not sure, that it has another function (besides the small size).

On the other hand, there are NVMe drives that are not Optane; if not, how would the Windows and Linux installs that ship on new laptops boot?

On ASUS boards (in my case I have an X299), some NVMe drives can be used for booting and others for other applications (IRST, etc.), but those are proprietary Intel features and Linux sometimes does not have them; Linux can use bcache as a substitute for what Intel has not released for Linux.

The initrd/initramfs is refreshed every time you enter the YaST bootloader module and accept everything without making changes; another way is to run: dracut --force.

Best regards.

Be careful:

1 x M.2_2 socket 3, with M key, type 2242/2260/2280/22110 storage devices support (PCIE 2.0 x4)*2

*2: M.2_2 shares bandwidth with PCIEX16_2. When M.2_2 runs, PCIEX16_2 will be disabled.

<https://www.asus.com/Motherboards-Components/Motherboards/TUF-Gaming/TUF-GAMING-B450M-PRO-S/techspec/>
Putting the ASUS text another way:

  • The Mainboard socket 3 supports M.2 NVMe – <https://en.wikipedia.org/wiki/M.2>.
  • But, when an M.2 NVMe device is plugged into socket 3, the PCIe 2.0 x16 (x4 mode) expansion slot will be disabled.
  • And, vice versa …

libnvme is not needed for using NVMe drives, only for management: https://pkgs.org/download/libnvme.so.1()(64bit)
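For those management tasks, the usual front end is `nvme-cli` (newer versions link against libnvme). A sketch of typical commands, assuming the first controller is `/dev/nvme0`:

```shell
# List all NVMe controllers and namespaces (requires root)
sudo nvme list

# Read the SMART/health log of the first controller
sudo nvme smart-log /dev/nvme0
```

Namespaces show up as block devices named `/dev/nvmeXnY`, as seen in the `lsblk` output above.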

Remember that the Disk/SATA option in the BIOS has to be set to AHCI, not RAID, for openSUSE to see the M.2 drive.
Most computers with M.2 drives ship with Windows and RAID enabled in the BIOS; all Dell Latitudes do.
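From Linux you can check which mode the firmware presents (a sketch; the exact PCI class strings vary by chipset):

```shell
# In RAID/RST mode the controller shows up as "RAID bus controller" and can
# hide NVMe drives; in AHCI mode you see "SATA controller [AHCI]" and NVMe
# drives appear separately as "Non-Volatile memory controller".
lspci -nn 2>/dev/null | grep -Ei 'sata|raid|non-volatile' || echo "lspci not available"
```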

Hello:
Mine shares bandwidth with SATA7: if I connect a disk there, it disables a PCIe port.

In addition, some of the NVMe slots have other functions, activated when RAID is used; they are usually Intel functions not available for Linux.
The PCIe slots that do not share a hub with USB or other devices, and go directly to the CPU, are the ones that can be used in a VROC (Virtual RAID On CPU), but for this the motherboard needs to be compatible, and then you have to buy a separate hardware key, which costs a lot for what it is (I think the one for RAID 0/1/5/10 is about $200).
That is for the X299 Deluxe II. This HP Omen uses a PCIe drive for one operating system and the HDD for another (it originally came with Windows 10 on a 256 GB NVMe; I talked to HP and changed it to 1 TB. I left Windows there, since I don’t use it, keeping part of a partition for Windows/Linux, and I put openSUSE on the HDD).
RAM was expanded to 2x16 GB to enable Hyper-Threading.
And of course the OS, once loaded into RAM, is fast; only I/O to the HDD is a bit slow.


HP-OMEN:~ # inxi -SMBCIz 
System: 
  Kernel: 5.3.18-150300.59.87-default arch: x86_64 bits: 64 
    Console: pty pts/1 Distro: openSUSE Leap 15.3 
Machine: 
  Type: Laptop System: HP product: OMEN by HP Laptop 15-dc0xxx v: N/A 
    serial: <filter> 
  Mobo: HP model: 84DA v: 93.24 serial: <filter> UEFI: AMI v: F.12 
    date: 03/23/2020 
Battery: 
  ID-1: BAT0 charge: 61.1 Wh (100.0%) condition: 61.1/61.1 Wh (100.0%) 
CPU: 
  Info: 6-core model: Intel Core i7-8750H bits: 64 type: MT MCP cache: 
    L2: 1.5 MiB 
  Speed (MHz): avg: 800 min/max: 800/4100 cores: 1: 800 2: 800 3: 800 
    4: 800 5: 800 6: 800 7: 800 8: 800 9: 800 10: 800 11: 801 12: 799 
Info: 
  Processes: 412 Uptime: 15d 16h 47m Memory: 31.17 GiB used: 4.67 GiB (15.0%) 
  Shell: Bash inxi: 3.3.20


It is important to read the motherboard manual; not all PCs behave in the same way. From that you have an example of how the ACPI tables differ between computers.

Best regards.

Please explain what this is supposed to mean. I had never heard of hypertreading before. If you meant [hyper-threading](https://en.wikipedia.org/wiki/Hyper-threading), that’s a CPU function, not a RAM function. RAM sticks in pairs are about enabling multi-channel operation, usually dual-channel, sometimes triple-channel (sticks in triplets), or more (e.g. sticks in quads).

Hello:

What you say is more or less correct.
The PC came with a single 32 GB module; for Hyper-Threading you have to use 2 memory channels, and on the PC it then appears as if there were 2 CPUs for each core. The question is not very clear to me, but until I installed that configuration (2 modules of 16 GB), it was not enabled or not available.
In itself it is an Intel technology related to multi-threading, simulating 2 CPUs in a single core.
If you ask me what RAM has to do with this, well, I have no idea, but until I put the modules in pairs, Hyper-Threading was not available. (The same since the P5 era; the last one I tried was an i5 without it, which had 4 cores and appeared as 4; if it had it, it would appear as 8.)

In architecture and systems I did not study Intel, only CPUs up to 16 bits (with multiplexed addresses), and some rare micros, the ones that were programmed in NRZ and later modulated in QAM; that is, in computer science and programming I am a super novice.
A technical service made that change on this equipment, I have no idea why; on my X299, on the other hand, the number of module pairs follows the number of channels (8 modules of 16 GB each). That machine is home-built, and what I can say is that it feels as if I had more bandwidth. (Maybe I don’t know how to explain it well, sorry.)

Best regards.

Hello:

Thanks for the link; it also made me curious why technical services, with CPUs of that type, have given me 2 modules (for example, when I bought the HP they swapped the 32 GB for 2 of 16 GB), and on P5 equipment they did the same (in that case I do the same myself, without knowing why).
I will study it, although someone may know the answer better.

As for the M.2 drives on the X299, the cache accesses vary; some, I think, go directly to the CPU (VROC?) and others do not.
The hardware key is not available from the distributor in my city, so I have changed the configuration in the BIOS so that they behave like plain disks.
The 15.4 boot is done separately on a RAID 0 of 2 M.2 WD Black SN850 of 1 TB (I swapped the M.2s of the AN1500 for 2 of 1 TB, and from there I use the EFI/boot and a 100 GB root).

Thanks and kind regards.


mikrios:~ # inxi -SMCIz 
System: 
  Kernel: 5.14.21-150400.24.18-default arch: x86_64 bits: 64 
    Console: pty pts/1 Distro: openSUSE Leap 15.4 
Machine: 
  Type: Desktop Mobo: ASUSTeK model: PRIME X299-DELUXE II v: Rev 1.xx 
    serial: <filter> UEFI: American Megatrends v: 3601 date: 09/24/2021 
CPU: 
  Info: 12-core model: Intel Core i9-10920X bits: 64 type: MT MCP cache: 
    L2: 12 MiB 
  Speed (MHz): avg: 1201 min/max: 1200/4800 cores: 1: 1200 2: 1200 3: 1202 
    4: 1203 5: 1204 6: 1203 7: 1203 8: 1202 9: 1202 10: 1202 11: 1200 12: 1200 
    13: 1200 14: 1202 15: 1202 16: 1201 17: 1202 18: 1203 19: 1203 20: 1202 
    21: 1200 22: 1204 23: 1202 24: 1202 
Info: 
  Processes: 395 Uptime: 0h 25m Memory: 125.48 GiB used: 2.92 GiB (2.3%) 
  Shell: Bash inxi: 3.3.20


Hyper-threading CPUs and multi-channel RAM are independent of each other. Multi-channel RAM is much faster than single-channel RAM, up to nearly twice as fast, but it has nothing directly to do with CPU “core” count.

Improved RAM speed from switching to dual 16G sticks from a single 32G stick could, probably in some cases should, make a PC feel as though CPU cores had been doubled, or hyper-threading switched from off to on in BIOS.

Hello:

It may be as you say.
It could also be that it was simply enabled once there were 2 modules.
I will search the web; until now I had not been curious about this, and I have been reading the links that you left me.

Thanks for the clarifications and best regards.

Using dual-channel memory provides a speedup for the whole system from 0% up to 20%. Read this article for details: AMD Ryzen Threadripper PRO 5965WX Memory Scaling Benchmarks On Linux Review - Phoronix
Using HT on Intel/AMD CPUs gives a speedup from minus 5% to plus 30%.
Different programs have different requirements and different speedups; don’t lump them under a general umbrella.

Use CODE tags, not PHP.
Your system supports 8 memory slots and up to 4 memory channels: Cascade Lake - Wikipedia
RTFM before posting.