Well, THAT actually could be an idea I'll have to investigate: from my tests it's fairly easy to add SSH to a Windows 10 guest using the "Windows Subsystem for Linux" and mount it from the openSUSE host via a simple mount command. I should also be able to get at least some rudimentary scripts working to cover the auto-mount/-unmount.
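Something like this is what I have in mind - a minimal sketch, with host/user names as placeholders, assuming an sshd is running inside WSL and is reachable from the openSUSE box:

```
# placeholders throughout; sshfs comes from the sshfs package
sudo zypper install sshfs
mkdir -p ~/win-games

# mount the guest's C:\games (visible as /mnt/c/games inside WSL):
sshfs gamer@win10box:/mnt/c/games ~/win-games -o reconnect,ServerAliveInterval=15

# and to unmount again:
fusermount -u ~/win-games
```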
The only thing I'm a bit worried about is that ZFSonWindows seems to lag a bit behind current ZFSonLinux - at least in my tests it refuses to import a pool that was initially created on Linux. The other way around works fine, though: if the pool is created on Windows and then imported on Linux, all you get is a notice that some features aren't enabled and that the pool can be upgraded.
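If that stays a problem, a possible workaround might be to create the pool on Linux with all feature flags disabled and only whitelist what both implementations support - an untested sketch, with device and pool names as placeholders:

```
# -d disables every feature flag; re-enable only conservative ones:
zpool create -d \
    -o feature@async_destroy=enabled \
    -o feature@lz4_compress=enabled \
    tank mirror /dev/sdb /dev/sdc

# double-check which features actually ended up enabled:
zpool get all tank | grep feature@
```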
This also fits my plan to actually use a 2nd physical controller.
As for hardware support: well, at least my current board/CPU combo is not able to correctly initialize the current AMD-Vi IOMMUv2 - but, as mentioned, a friend of mine has some spare parts lying around, so I may get a chance to test them the upcoming weekend. If his parts aren't compatible either, I'll have to invest in some newer hardware anyway.
Well, I only tested with Windows 7 as the guest OS - not yet with Windows 10 - so I don't know whether that makes a difference. I also wasn't able to figure out how to emulate the 2nd HBA (which doesn't physically exist yet) and the drives attached to it - maybe someone has some hints on that?
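My best guess so far would be to fake the drives with sparse images hanging off their own virtual SCSI controller - roughly like this (domain name, paths and sizes are placeholders), though I'm not sure that behaves close enough to a real HBA:

```
# sparse images standing in for the not-yet-bought drives:
qemu-img create -f qcow2 /var/lib/libvirt/images/fake-disk1.qcow2 1T
qemu-img create -f qcow2 /var/lib/libvirt/images/fake-disk2.qcow2 1T

# attach one of them to the 'win10' guest on a SCSI bus:
virsh attach-device win10 /dev/stdin --config <<'EOF'
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/fake-disk1.qcow2'/>
  <target dev='sdb' bus='scsi'/>
</disk>
EOF
```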
@tsu2 (didn't quote as it would be too long for a post):
Well, the main issue isn't that Windows maps a network share to some drive letter - it's that it does so in a way that lets said anti-piracy DRM detect that this is a mounted network share rather than locally attached storage. From an enthusiast's point of view, externalizing your stuff to a maybe already existing NAS doesn't sound that far off to me - but it seems some of the big players in the games industry either just don't give a **** about such "more advanced" setups - or try to "prevent" them on purpose. So it's not a question of using a UNC path (\\server\share) vs. a mapped drive letter, but the basic fact that Windows somehow tags it as remote instead of local. The only solution I was able to get working was iSCSI - which is mounted as if it were a locally attached drive - but I didn't have in mind to play the recursive loop yet, as mentioned by malcolm. I also don't have any experience yet with the differences between using Win7 vs. Win10 as the guest - that could actually make some differences I'm not yet aware of.
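For reference, the iSCSI part on the Linux side boiled down to roughly the following (a sketch from memory - the IQNs and the backing device are just examples, and targetcli usually creates the default portal on port 3260 by itself):

```
# export a zvol (or a plain file) as an iSCSI LUN via targetcli:
sudo targetcli <<'EOF'
/backstores/block create name=games dev=/dev/zvol/tank/games
/iscsi create iqn.2020-01.local.host:games
/iscsi/iqn.2020-01.local.host:games/tpg1/luns create /backstores/block/games
/iscsi/iqn.2020-01.local.host:games/tpg1/acls create iqn.1991-05.com.microsoft:win10box
saveconfig
EOF
```

Windows then connects via its built-in iSCSI Initiator, and the disk shows up as locally attached storage.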
I guess testing a Win10 guest instead of Win7 is the next step I should give a try, and I'll work my way on from there. Maybe there actually is a way of "abusing" this Windows Subsystem for Linux to somehow mount a host path like it would work on Linux guests - which maybe could be a solution.
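What I'd want to test there is whether WSL's drvfs can mount the share in a way the games don't notice - something like this inside the WSL shell (untested, the share name is an example), though whether normal Windows programs can even see such a mount is exactly what I'd have to find out:

```
# inside WSL on the Windows guest; drvfs also accepts UNC paths:
sudo mkdir -p /mnt/nas
sudo mount -t drvfs '\\nas\games' /mnt/nas
```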
Have to test that … but as it's already getting late here and I have to be up early for work, I guess that's something I'll try the upcoming weekend, together with the additional hardware from my friend …
I'll report back with the results.
Hi
Yes, hardware selection is important. My Intel DQ77MK MB is 8 years old but works perfectly; I upgraded the CPU from a 4-core i5 to a 4-core/8-thread Xeon. I started working on my setup last June… but have had it how I like it for a while now (5 qemu machines - 2 each on their own drive, a third drive for WinX, and a fourth as a backup drive)… (https://forums.opensuse.org/showthread.php/524942-GPU-passthrough-Various-virtualization-technologies).
My second controller is an IO Crest 4-port SATA III Mini PCI-E controller card. I have an NVMe device in the x16 slot, which also holds an M.2 SSD; the board needs to boot from a disk, as there is no support for booting from the NVMe…
I would look at upgrading to WinX, it's still free… download the iso image, upgrade, let it validate, and you should be good to go…
So, after a lot of tinkering I got it working, at least somewhat. It turns out the message "iommuv2 is not supported" is only some sort of information that some additional v2 features are not supported on my platform, but all the other stuff is working. I encountered some new errors, but I guess they fit better into a new topic than in here, so I should probably pause this thread and keep it to the original question until I get back to it.
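In case someone hits the same message, this is roughly how I checked that the base IOMMU is active despite the warning:

```
# kernel messages mentioning the (AMD) IOMMU:
dmesg | grep -i -e iommu -e amd-vi

# the kernel command line should contain amd_iommu=on (optionally iommu=pt):
cat /proc/cmdline

# if groups show up here, passthrough is basically possible:
find /sys/kernel/iommu_groups/ -type l | sort
```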
Although I've been an AMD user for at least the past decade and would like to stick with it, I guess for all that VM stuff I may have to change to Intel, maybe even to a Xeon.
I did use Win10 this time, and although it took quite some additional time - even with KVM there is still quite a performance impact - the passed-through GPU seems to get almost its native performance, at least as far as I was able to test it.
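Something I still want to try against that KVM overhead is pinning the guest's vCPUs to dedicated host cores - a quick sketch (the domain name and core numbers are just examples):

```
# pin the four vCPUs of the 'win10' domain to host cores 4-7:
for vcpu in 0 1 2 3; do
    virsh vcpupin win10 "$vcpu" "$((vcpu + 4))"
done
```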
The next thing I'll try is to see what the Windows Subsystem for Linux is capable of in terms of ZFS, and whether it fits my needs a bit better than the Windows implementation - but this is still a task for the upcoming weekend, as my current day-to-day time between work and sleep is very short.
That's it so far; I'll report back when I have new updates. Thanks anyway for all the good ideas so far.
There will likely be a substantial difference for your purposes between running WSL vs. WSLv2.
The former shims Linux commands to the Windows kernel, while WSLv2 runs a true, independent Linux kernel.
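You can check and switch versions from a Windows command prompt - assuming your Win10 build is recent enough to offer WSLv2 at all, and the distro name is whatever yours is called:

```
wsl --list --verbose          # shows which version each distro runs
wsl --set-version <distro> 2  # convert an existing distro to WSLv2
wsl --set-default-version 2   # make WSLv2 the default for new installs
```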
Am both surprised and unsurprised by your AMD virtualization experience.
I don’t run on an AMD APU, but it’s my understanding that virtualization support has been a high priority feature from the beginning with the Zen architecture.
Maybe it’s incomplete…
Well, the architecture I use is far older than Zen: I'm using an FX-8350 on AM3+, which is part of the Piledriver sub-architecture of Bulldozer - according to Wikipedia released around mid-2012. I bought the hardware back around the mid-2013 Central Europe floods (which actually rose to just short of overflowing the 7.50 m flood walls of my hometown) - so aside from the architecture being over 8 years old, my hardware itself is also about 7 years old - and, luckily aside from 3 HDD crashes, it still runs just like the first time I booted the completed system. On top of that I only have prosumer-grade hardware - so I guess it's to be expected that it's not fully compatible with modern virtualization techniques, and it took me quite some tinkering to get it running at all after I had already written my hardware off as incompatible.
Although Zen may have shifted AMD's priorities on feature compatibility, I guess Intel's professional Xeon line is the better way to go for building a VM host - so I may have to switch to that platform and use modern parts to take advantage of modern features, not to mention the vastly increased performance compared to my current setup. Who knows - maybe I'll just convert my current system into a big, power-hungry NAS supplement instead of trying to build everything into one case. But from what I've read, an actual remote NAS has its own disadvantages - so local storage still seems to be the preferred option.
But as said: the next task I may try to figure out this upcoming weekend is how to correctly set up the VM - as I encountered quite a few new, interesting issues - but that should become its own thread.
For now I'll keep in mind the idea of setting up ZFS inside the VM with a passed-through HBA - and will figure out whether using the ZFSonWindows driver fits my needs, or whether I should consider using ZFSonLinux within WSL(v2) …
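The passthrough part itself should then essentially boil down to this (the PCI address is just an example - lspci shows the real one):

```
# find the controller's PCI address on the host:
lspci -nn | grep -i sata

# hand the whole controller to the 'win10' guest:
virsh attach-device win10 /dev/stdin --config <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
```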
Thanks again so far …