Windows 2016 failover cluster won't accept virtio disks -- "not suitable"

I am testing a configuration (kind of a lab exercise). I have openSUSE running on an HP DL360. I created two Windows 2016 Standard Server VMs, and they run fine. I then tried to present an additional storage volume that the two Windows 2016 VMs will share for Failover Clustering. The volume is just a file on the SUSE host that I attach with virsh attach-disk using target bus virtio. The VMs can see the volume, no problem, but Windows Failover Clustering refuses to see it as a “suitable disk.”
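For reference, the attach was roughly like this on the host (VM names, path, and size are just my lab values):

    # create the backing file and attach it to both VMs as a shareable virtio disk
    qemu-img create -f raw /var/lib/libvirt/images/shared.img 20G
    virsh attach-disk win2016-a /var/lib/libvirt/images/shared.img vdb \
        --targetbus virtio --mode shareable --cache none --persistent
    virsh attach-disk win2016-b /var/lib/libvirt/images/shared.img vdb \
        --targetbus virtio --mode shareable --cache none --persistent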

First, you should identify what virtualization you’re using… KVM?

There doesn’t seem to be published documentation for MS Windows Server 2016 and Cluster Shared Volumes,
so I’d assume that the popular document describing the requirements for Server 2012 is still applicable:

https://docs.microsoft.com/en-us/windows-server/failover-clustering/failover-cluster-csvs

I recommend you step through that document carefully; it’s possible that your problem isn’t related to virtio but to something else.
You should also try setting up network-based storage connections; I don’t know how well direct-attached storage connections (i.e. virtual SATA, IDE, or SAS) would work for clustering.
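One more thought: running just the storage portion of cluster validation from PowerShell may pinpoint exactly which test the disks fail. Something like the following on one of the nodes (node names are placeholders):

    # run only the storage tests of cluster validation and save the report
    Import-Module FailoverClusters
    Test-Cluster -Node node1, node2 -Include "Storage" -ReportName C:\Temp\StorageValidation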

TSU

WSFC on shared disks requires SCSI-3 persistent reservation support, which is not possible with emulated virtual disks. QEMU supports SCSI-3 PR on pass-through disks (when you attach real physical disks, not files); it is also possible to use iSCSI storage inside the guests.
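For illustration, a minimal libvirt sketch of such a pass-through disk, assuming a virtio-scsi controller and a real block device on the host (the device path is an example):

    <!-- guest XML: device='lun' passes SCSI commands through to the real disk -->
    <controller type='scsi' model='virtio-scsi'/>
    <disk type='block' device='lun'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/sdb'/>
      <target dev='sda' bus='scsi'/>
      <shareable/>
    </disk>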

Your comment sent me on a little bit of Internet research (which is only as good as whatever I can muster on any given day, and subject to the whims of key words I use).

First, I was able to come up with additional documents, more recent than but still including the WinServer 2012 article I referenced. Several contained iSCSI references, but upon closer inspection:

  • I couldn’t find any that referenced a specific SCSI version, although SCSI-3 (the spec behind the persistent reservations arvidjaar mentioned) might be considered the current state of the art
  • The only articles that made even a passing comment about persistence (though not necessarily “persistent reservation support”) concerned deploying a SQL cluster on the shared storage. That is a special type of load with requirements more demanding than most: a running database is one very large file rather than many small files, the file is modified incrementally as the database is written to, and sustaining the integrity of that file is essential.
  • The following reference cautions against iSCSI use in general because of the nature of iSCSI as a block-level device. But there is no requirement that iSCSI be used for failover clustering, and I don’t see similar warnings for, say, SMB or direct-attached storage in the same reference (a host-side iSCSI sketch follows after this list):
    Storage Spaces Direct overview - Azure Stack HCI | Microsoft Learn
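If one did want to try arvidjaar’s suggestion of iSCSI inside the guests, the LIO target on the SUSE host does implement SCSI-3 persistent reservations, and a rough sketch with targetcli might look like the following (every name and IQN below is made up for illustration):

    # export a backing file as an iSCSI LUN that both Windows nodes log into
    targetcli /backstores/fileio create csv0 /var/lib/libvirt/images/csv0.img 20G
    targetcli /iscsi create iqn.2017-01.local.lab:csv0
    targetcli /iscsi/iqn.2017-01.local.lab:csv0/tpg1/luns create /backstores/fileio/csv0
    # one ACL per Windows initiator (IQN shown in each guest's iSCSI Initiator panel)
    targetcli /iscsi/iqn.2017-01.local.lab:csv0/tpg1/acls create iqn.1991-05.com.microsoft:node1
    targetcli /iscsi/iqn.2017-01.local.lab:csv0/tpg1/acls create iqn.1991-05.com.microsoft:node2
    targetcli saveconfig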

So, although further testing might be needed for verification,
I agree that if iSCSI is used, specifically for a load similar to (or actually supporting) an RDBMS like SQL Server, special considerations are required.
But for less demanding scenarios, and in particular if iSCSI is avoided in favor of either Storage Spaces Direct or a mounted network file system like SMB, I don’t know that there should be a problem.
I’ve also been musing a bit about configuring the iSCSI device as raw rather than emulated… but it’s only a thought. No one should configure this without careful consideration of the implications, including the increased possibility of data corruption.

But,
before ending this post altogether…
Here is what I found to be possibly the best reference describing the virtio-scsi device: that it is, indeed, designed to support iSCSI target version 3.3, along with its features and objectives. From its description, I don’t know that virtio-scsi is all that different from a physical machine’s access to the iSCSI object, and it does have a pass-through option, but the reader can draw their own conclusions…

And, just noting that many articles make the fairly obvious point that fully emulated QEMU devices should be avoided because of the extremely high overhead and latency involved.

TSU

Arvidjaar – Thanks so much for the response. I suspected the problem was what you suggested. My final production version will use two 3PAR FC-connected arrays, joined in a software RAID 1 configuration in the underlying host. I was just trying to play around with WSFC to learn about it, using emulated disks.

My final design is this:

(3PAR array A via FC) + (3PAR array B via FC) ==> (one volume via openSUSE RAID 1 in the host) ==> (six Windows 2016 VMs on six physical servers, sharing via WSFC). Since it is not SCSI pass-through I might still have a problem, but I did notice that Windows has an AllowBusTypeRAID registry value. It’s worth a shot!!
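If it pans out, the tweak would be something like this on each node (the registry path is as commonly reported in forum posts, not something I have verified, and it is an unsupported setting, so lab-only):

    # lab-only, unsupported tweak: let WSFC consider disks reporting BusTypeRAID
    New-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters' -Force | Out-Null
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters' `
        -Name AllowBusTypeRAID -PropertyType DWord -Value 1 -Force
    # reboot the node afterward so the driver picks the value up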