Page 1 of 3
Results 1 to 10 of 25

Thread: access host zfs pool from within windows guest

  1. #1

    Default access host zfs pool from within windows guest

So - as I got a bit further with my VM plans, I've come to the point where I have to solve how to access the storage managed by the Leap host from within my Windows VM.
    I planned to use Btrfs - and I was able to mount a Btrfs volume within Windows using WinBtrFS - but as I plan to use 8 drives in a RAID-6-like setup, after a long argument with myself I ended up using ZFS with its RAID-Z2, which seems to fit my needs better. As I'm using ZFS on Linux, there's also a Windows implementation, ZFS on Windows (but it seems to be a lower version, so it doesn't work when the pool is created on Leap - though when the pool is created on Windows it also works on Leap). So far I've only tested this using two different VMs accessing the same VHD files - not with Leap on bare metal and qemu running a Windows guest.
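    For reference, an 8-drive RAID-Z2 pool of the kind described above would be created on the host roughly like this (a sketch only - pool name "tank", dataset name and the sdb..sdi device names are placeholders; in practice stable /dev/disk/by-id paths are preferable):

    ```shell
    # RAID-Z2 over 8 drives: usable capacity of 6, survives any 2 failures.
    # ashift=12 assumes 4K-sector drives.
    zpool create -o ashift=12 tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi

    # A dataset to hold the data that should later be reachable from the guest.
    zfs create tank/share

    # Verify the vdev layout and pool health.
    zpool status tank
    ```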

    As I searched for this topic on Google, it seems that many use SMB shares - as qemu's filesystem "passthrough" only works with Linux guests, not Windows. I also thought about passing through the drives or even the whole controller to the guest - but then I remembered why I want to switch to Linux on bare metal in the first place: to finally get rid of this Windows-only proprietary **** ...

    So, the simple question: is SMB the "best" option for accessing a zpool on the host from a Windows guest - or is it just the most commonly used one?
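    For the SMB route, the host-side setup would be a Samba share on top of the pool's mountpoint, along these lines (a sketch - share name, path and user are made-up examples):

    ```shell
    # Append a hypothetical share definition to smb.conf (adjust path/user).
    cat >> /etc/samba/smb.conf <<'EOF'
    [tank]
       path = /tank/share
       read only = no
       valid users = cryptearth
    EOF

    systemctl enable --now smb              # start Samba on the Leap host
    smbclient -L localhost -U cryptearth    # verify the share is listed
    ```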

    Why do I ask this: as I've been a Windows kid since the late 90s, I know there's lots of software which just doesn't play well with a mounted network share instead of a locally attached drive - for whatever reason. Just as an example: modern games still use rather questionable DRM which still has some kernel-level **** in it (anyone remember SecuROM?), and such games just don't work from a network share, without any useful logs.
    iSCSI seems to sit in the middle, as it's mounted as a local drive although it runs over the network - but iSCSI also requires exclusive access, so I wouldn't be able to mount the same LUN twice at the same time, although the same is true for the ZFS pool.
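    The host side of that iSCSI approach would export a ZFS zvol as a LUN, e.g. with targetcli (a sketch - the size, names and IQN are made up, and the portal/ACL setup for the initiator is omitted):

    ```shell
    # Create a 500 GiB block volume inside the pool.
    zfs create -V 500G tank/winlun

    # Export it as an iSCSI LUN via the LIO target.
    targetcli /backstores/block create name=winlun dev=/dev/zvol/tank/winlun
    targetcli /iscsi create iqn.2020-01.local.leap:winlun
    targetcli /iscsi/iqn.2020-01.local.leap:winlun/tpg1/luns create /backstores/block/winlun

    # Caveat: a plain filesystem (NTFS, ext4, ...) on the LUN must only be
    # mounted by one machine at a time - which matches the exclusive-access
    # behaviour described above.
    ```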

    The goal is to have access to the filesystem from both the host and the VM at the same time.
    Does anyone have any suggestions?

  2. #2

    Default Re: access host zfs pool from within windows guest

    Hi
    What about sshfs access? Or, of course, just run it as a cloud server, e.g. FreeNAS (well, now TrueNAS), Nextcloud etc.?
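    On the Windows side, the sshfs suggestion maps to SSHFS-Win (with WinFsp), which exposes an ssh server through a UNC path - a sketch, where the user name is an example and 192.168.122.1 is the default libvirt NAT gateway address:

    ```shell
    # Run in the Windows guest (cmd.exe) after installing SSHFS-Win + WinFsp.
    # Maps the host user's home directory to drive Z: over ssh.
    net use Z: \\sshfs\user@192.168.122.1

    # To reach an absolute path such as a pool mountpoint, SSHFS-Win offers
    # the sshfs.r (root-relative) prefix:
    net use Y: \\sshfs.r\user@192.168.122.1\tank\share
    ```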
    Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
    SUSE SLE, openSUSE Leap/Tumbleweed (x86_64) | GNOME DE
    If you find this post helpful and are logged into the web interface,
    please show your appreciation and click on the star below... Thanks!

  3. #3

    Default Re: access host zfs pool from within windows guest

    Quote Originally Posted by malcolmlewis View Post
    Hi
    What about sshfs access? Then of course just run as a cloud server, eg FreeNAS, (well now TrueNAS), nextcloud etc?
    I do run qemu with its own SATA controller (PCIe mini) for my VMs

  4. #4

    Default Re: access host zfs pool from within windows guest

    VirtualBox - create a VirtualBox shared folder on the host and mount it in Windows using the VirtualBox Guest Additions for Windows - no SMB needed.
    Opensuse 15.2 with VirtualBox VM's (XP, 10 & OpenSUSE 15.0)
    Pi4 with Ubuntu MATE 20.04
    Unix since 1974 (pdp-11 in "B" , Interdata 7/32 in "C") (AT&T, Tandy, Convergent, IBM, NCR, HP flavors)
    Linux since 1995 (mandrake, redhat, fedora, centos, now OpenSUSE)

  5. #5

    Default Re: access host zfs pool from within windows guest

    Quote Originally Posted by malcolmlewis View Post
    Hi
    What about sshfs access? Then of course just run as a cloud server, eg FreeNAS, (well now TrueNAS), nextcloud etc?

    I do run qemu with its own SATA controller (PCIe mini) for my VMs
    Quote Originally Posted by larryr View Post
    VirtualBox - create a VirtualBox shared folder on the host and mount it in Windows using the VirtualBox Guest Additions for Windows - no SMB needed.
    I think you both missed the point:

    a) it doesn't matter whether I use sshfs-win or the vbox "internal" share - they both end up as a network share, exactly what I'd get by just using SMB in the first place - so they have the exact same issue with applications that don't play well with network shares, in addition to the extra overhead they both bring
    b) about vbox: the whole point of using KVM/qemu is to pass through my main GPU - VirtualBox doesn't support that kind of passthrough - so vbox is out of the game
    c) why use a VM in the first place: to run Linux on bare metal and use its capabilities for combining more than one physical drive into one logical volume - yes, Windows 10 does support this with "Storage Spaces", which, with some PowerShell fiddling, offers something RAID-6-ish - but read up on all the issues with that ****. As with vbox: out of the game as well
    c.2) using ZFSonWindows - yeah, maybe an option, but it's a few versions behind master - and it's also pretty ugly to use, as it somewhat "emulates" a local drive but uses the network share API (when you open the properties dialog it's like working on a network share - which is even worse, as one can't set ANY file attributes at all). Also: whenever you mount a ZFS volume, Windows keeps telling me "the system config changed - please reboot" - and there's no way to configure ZoW so it auto-mounts pools and volumes at boot.

    So, yeah, thanks for the input I guess - but these don't fit my needs.

  6. #6

    Default Re: access host zfs pool from within windows guest

    Hi
    No, there could be some additional work for the SMB share, but if that's what you want to run, it should be fine.

    I use sshfs here: on the Windows guest (qemu with GPU and SATA passthrough), install the apps, done; on the host there's no work to do, as ssh is already running. On Linux machines there's no work to do either: sftp, scp or ssh...

  7. #7

    Default Re: access host zfs pool from within windows guest

    Your options...
    Network Share - Uses network share protocols like SMB.
    Mount remote network filesystem - NFS, SSHFS

    You also have block-level distributed storage like iSCSI.
    And, if you are either very large or have plans to be very large, Ceph.

    The above are network based solutions which are flexible and work in a variety of situations both virtual and physical.
    They may be affected by network conditions if you're on a busy network. On a home network, you'll probably not see any performance issues.

    Then you have "Shared Folders", which are implemented by a non-network protocol restricted to guests and the HostOS they run on. Don't be misled by the fact that you usually access a Shared Folder in Windows as a networked object - that's just a convenient metaphor that Users understand... If you sniff your network wire, you won't see any traffic associated with Shared Folders.

    If you use kvm-qemu: I wrote the following a while back for setting up Shared Folders, though I've recently heard it may need some adjustment:

    https://en.opensuse.org/User:Tsu2/virtfs#Overview
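    The linked write-up covers the 9p-style virtfs, which indeed lacks a Windows guest driver; the newer virtiofs route, however, does have a Windows driver (virtio-win plus WinFsp), so host-local sharing to a Windows guest is possible in principle. A sketch of the host-side pieces - the socket path, share path and tag are made up, and the virtiofsd binary location may vary by distro:

    ```shell
    # Start the virtiofs daemon for the directory to be shared.
    /usr/libexec/virtiofsd --socket-path=/tmp/vfs.sock -o source=/tank/share &

    # Flags appended to the existing qemu-system-x86_64 command line:
    #   -chardev socket,id=char0,path=/tmp/vfs.sock
    #   -device vhost-user-fs-pci,chardev=char0,tag=hostshare
    #   -object memory-backend-memfd,id=mem,size=4G,share=on
    #   -numa node,memdev=mem
    # In the guest, install virtio-win + WinFsp and start the VirtioFsSvc
    # service; the share then appears as a local drive, not a network share.
    ```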

    Misc comments
    SMB is known to be a chatty, less performant protocol. If you like SMB features like discovery and organization, go ahead; otherwise other protocols might be considered.
    If you prefer or are fluent in ZFS, go ahead; otherwise BTRFS probably has a lower bar for learning... but be aware that there is a BTRFS bug affecting RAID levels that use parity (like RAID 5 and 6) in large disk arrays (>4 disks). Instead, you are advised to deploy BTRFS RAID based on mirroring for large disk arrays:
    https://en.opensuse.org/User:Tsu2/systemd-1#BTRFS_RAID
    I must be misunderstanding something - your reference to using ZFSonWindows is confusing... Isn't Windows your guest and openSUSE your HostOS? I wasn't aware that ZFS had any native remote access that is anything more than common remote access like ssh, sftp, etc.

    TSU
    Beginner Wiki Quickstart - https://en.opensuse.org/User:Tsu2/Quickstart_Wiki
    Solved a problem recently? Create a wiki page for future personal reference!
    Learn something new?
    Attended a computing event?
    Post and Share!

  8. #8

    Default Re: access host zfs pool from within windows guest

    Thank you for your additional input. Let me address your questions:

    My final plan is to run openSUSE as the host OS on bare metal and run a Windows VM using KVM/qemu. The reason: use the Linux host to handle my multiple physical drives and provide them as one big volume, while my powerful main GPU is passed through to the Windows VM so I can use it for gaming. Why I don't just use Linux for gaming: because some of my favourite games use DRM technologies which only work on Windows - along with DirectX-only renderers, so no OpenGL or Vulkan which would run natively on Linux. As Windows just doesn't offer any reliable RAID solution itself, and even implementations like WinBtrFS or ZFSonWindows are limited by Windows itself, it's just not an option for me.
    My current setup, using the fakeraid of the 990FX/SB950 on my ASUS Crosshair V Formula-Z, limits me to Win7 only, as the RAID driver is only available for Win7. Also: when the board dies, I would need another compatible one. A software-based solution would get me away from that bottleneck.
    In addition to the mentioned limitations, I've learned that typical hardware RAID can't handle "silent" bit errors, which can lead to data corruption or loss: when the controller thinks the parity is wrong while the actual error comes from a corrupted data block, the correct parity which could be used to restore the data gets overwritten with a corrupted one. Btrfs and ZFS checksum the data and so provide additional protection against such errors.
    Why I would like to use ZFS rather than Btrfs is a personal choice based on tests I did, as I wasn't able to find objective information which would advise one over the other based on my needs.
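    The self-healing described here relies on checksums plus regular scrubs; on the ZFS side that boils down to two commands (pool name "tank" is a placeholder):

    ```shell
    zpool scrub tank      # re-read every block, verify checksums, and
                          # rewrite bad copies from RAID-Z2 parity
    zpool status -v tank  # shows scrub progress and lists any files
                          # ZFS could not repair
    ```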

    So, as I plan to use Linux to manage the storage, I somehow have to make it accessible to the VM so I can access it from within Windows. My first idea was to use iSCSI, but in my tests I was only able to mount the LUN once at a time, not on both the host and the VM simultaneously. Hence I'm looking for a way to make the data accessible to both systems at the same time.
    One of the reasons is to be able to access files from both systems - e.g. some work I have to do can be done on openSUSE, so I don't have to boot up the Windows VM.

    SMB sure is an option, but there are some applications and games which, for whatever reason, don't play well with network shares, only with locally attached drives. To avoid such issues, I thought there was a way to make a folder on the host available to the guest, but this seems to be limited to Linux guests.
    VirtualBox has a way to create "pointer" files which can be added like disk images but actually access physical drives - but as vbox doesn't provide GPU passthrough, it's not an option.

  9. #9

    Default Re: access host zfs pool from within windows guest

    Quote Originally Posted by cryptearth View Post
    Thank you for your additional input. May let me address your questions: [...]
    Hi
    So my qemu setup is similar; I also use physical drives for the qemu machines, as well as a GPU. In the WinX Pro system I just use sshfs (or net use); it then shows up as a network drive and is accessible on both the host and the guest. Zero configuration required to connect, since ssh is already running on the host.

    The other thing I do, if no qemu system is running, is unbind the second controller; I then have access to the qemu disks if needed, but mainly to the backup drive I have on that controller. Partitions I don't want to see are hidden with a udev rule.
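    Hiding partitions with a udev rule, as mentioned here, can be done by setting the udisks ignore flag - a sketch, where the serial string is a made-up example (match your own with `udevadm info /dev/sdX1`):

    ```shell
    # Hide a specific partition from udisks-based file managers.
    cat > /etc/udev/rules.d/99-hide-partitions.rules <<'EOF'
    ENV{ID_SERIAL}=="WDC_WD40EFRX_EXAMPLE", ENV{UDISKS_IGNORE}="1"
    EOF

    # Reload rules and re-trigger so the change takes effect.
    udevadm control --reload && udevadm trigger
    ```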

  10. #10

    Default Re: access host zfs pool from within windows guest

    Comments...
    No matter what method you use to grant multiple machines access to a filesystem, there will always be a possible contention problem (i.e. two machines access the same file, make different changes and attempt to save them), but each method may address the issue more or less efficiently, automatically or transparently to the User.

    Your iSCSI issue doesn't make sense. iSCSI is fundamentally distributed network storage, so multiple machines can have access to the same storage pool. It sounds to me like you might have set up two iSCSI targets instead of one target and any number of initiators. Once set up, all hosts should be able to read/write the storage.
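    The target-vs-initiator distinction, in commands: one machine runs the target, and each consumer logs in as an initiator with open-iscsi (a sketch - the portal address and IQN are examples):

    ```shell
    # Discover LUNs offered by the target host.
    iscsiadm -m discovery -t sendtargets -p 192.168.122.1

    # Log in to a discovered target as an initiator.
    iscsiadm -m node -T iqn.2020-01.local.leap:winlun -p 192.168.122.1 --login

    # Caveat: several initiators may attach simultaneously, but a non-cluster
    # filesystem on the LUN must still be mounted by only one of them at a time.
    ```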

    Be sure you have the right hardware setup before you attempt a GPU passthrough... In particular, you have to have multiple GPUs, because any kind of hardware passthrough grants the guest exclusive access to that device, removing access from the HostOS.

    Regarding BTRFS, I'm not aware of any special parity-bit auto-repair, but there is a feature similar to what you describe for ordinary data storage.

    The Shared Folders I describe can also work and IMO support your stated requirements, and they're not limited to Linux guests... As I described, shared folders are typically accessed in Windows guests as networking objects, but they aren't actually accessed over the network wire... it all happens internally on the HostOS.

    TSU

