PXE remote install on remote cifs

I’m sorry for the title, I couldn’t come up with something better. If anyone has a suggestion on how to improve it, please feel free to throw it in …

So, as I have come to a conclusion on how to upgrade my local storage (currently: RAID 5 using 5 drives - planned: RAID 6 using 8 drives (it’s just the best option for my use case and what I’m able to afford)), I want to test it before doing it for real. I’m very familiar with using PXE to set up a remote install environment (I’ve been using it for several machines for several years now), and I also have some experience with iSCSI (used it a couple of times now, even twice with Windows 7 - it was a bit slow, but it worked), so I guess it shouldn’t be that hard for me to pull this off if my plan is possible.

What I’d like to try: As far as I remember there once was a tutorial on the SDB with a title like “How to (remote) install openSUSE using a network share / cifs / smb / iSCSI”, but I’m unable to find it. So, if my memory serves correctly, it should be possible to install to and boot from some remote network storage while the local system doesn’t have any physical drive. I know how to do it with iPXE using SANBOOT (requires setting up iSCSI) with Windows (the iPXE sanboot driver just emulates a drive for the Windows installer / boot loader), but isn’t there a more direct option if both systems are running openSUSE Leap 15.x? iSCSI would be a possibility for the testing, but isn’t there a way to set up some share on my one openSUSE machine (which also hosts the DHCP and TFTP for PXE boot) and just access it via some kernel option in GRUB?

Why? I’d like to use the network infrastructure I already have in place to take advantage of the available space in the network (about 3 TB of free space available to anything on my network) before I trash my Windows 7 install (quite honestly: I’m just not aware of a PXE-bootable solution that is able to back up the system drive so I can trash it - and later restore it like nothing happened - if anyone has an idea, please share it with me).

This is not urgent - so if anyone needs some time to come up with a solution or needs further information, please take your time and ask me anything you’d like to know. It’s just an idea for upgrading my current storage to something bigger and more fault tolerant.

Thanks to anyone in advance …


install= option. See SDB:Linuxrc - openSUSE Wiki

It seems there’s a misunderstanding here: I’m not looking for a way to change FROM where to install the system, which obviously will be the online repo, but rather TO where to install it.

As an example: when I use iPXE SANBOOT, its persistent stack kind of “emulates” a local drive (at least it worked that way for Windows), and I can only assume the same is true for Linux (I haven’t tested it yet), but I’m looking for a possibility to not use iSCSI but rather a “simple” CIFS share.

I am not aware of any support for installation on a network share, be it NFS or CIFS. Usually diskless systems are prepared on a server and resources are made available to clients. You do not really install or update a client individually, you do it on the server.

Otherwise, dracut has a cifs module that supports root on CIFS, so it looks doable.
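To illustrate, a minimal sketch of what that could look like (the server address, share name, user and password below are placeholders, not tested values): dracut’s cifs module understands a root=cifs:// kernel parameter, per dracut.cmdline(7).

```shell
# Build an initrd that includes dracut's cifs and network modules
# (would be run on the installed system or in a chroot of it; sketch only):
#   dracut --add "cifs network" --force /boot/initrd-cifs "$(uname -r)"

# Kernel command line GRUB would then pass (dracut.cmdline(7) syntax;
# server, path and credentials are made-up placeholders):
CMDLINE='root=cifs://192.168.1.10:/leaproot cifsuser=pxe cifspass=secret ip=dhcp'
echo "$CMDLINE"
```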

As I’ve written in my initial post: I recall possibly having seen some SDB tutorial explaining how to install openSUSE on remote storage and boot from it using PXE - but I’m not able to find it - so it’s possible that my memory is wrong.

Maybe someone can think of another idea if I try to re-phrase why I’d like to do it:

Currently I’m running an ASUS Crosshair V Formula-Z and using the fakeRAID provided by the AMD SB950 chipset. From what I was able to dig up, it seems that at one point in time AMD did provide a Linux driver, or at least some material to build a kernel module - but it seems to be no longer available; at least I’m not able to find it. So the RAID I’m running right now is based on a Windows-only driver and only works with the AMD SB950. If at any point my motherboard dies, I won’t be able to recover my data, at least not without paying a lot of money to someone able to restore such an array.
To get around this issue a friend of mine suggested I switch over to a software RAID. As I also play games I’m kind of “stuck with Windows” (mostly for two reasons: a) not many games have native Linux ports, and b) that Windows-only DRM **** which just doesn’t work with Wine at all (already tested this)), so completely switching to Linux isn’t an option for me. But as Windows only supports RAID levels 0 and 1, and level 5 only on server versions, I’m out of luck with a software RAID on Windows. I tried other options like WinBtrfs (failed even with the help of its dev) and Storage Spaces (again: Windows-only), and although I was able to set up something matching my plan (an 8-drive RAID level 6), it’s far from a reliable solution (there are reports that Storage Spaces is broken on the current Windows 10 v2004 update and has already caused data loss).
So, the next idea I came up with is to just throw a second cheap GPU into my rig, run a rather simple setup of openSUSE (yes, it could be any distribution - but SUSE is the one I know best) on the bare metal, and have Windows run in a VM using passthrough for devices like my main GPU and the USB root ports. As I have several systems in my network with lots of free storage, this idea came to me: why not just use the available storage in my network and use PXE to boot the rig, instead of trashing my current Windows install just for testing?
I used iPXE and its SANBOOT capability in the past to replace a broken main system drive in my roommate’s computer, and I’m sure it would also work the same for Linux - but as Linux seems to have the ability to boot from remote storage like an NFS/CIFS share after GRUB is booted via PXE, there has to be a way to initially install the system onto that share. The installer only offers to install on local storage, which I could provide via iPXE and its SANBOOT, but not on a remote storage share.
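For context, the iPXE side of that looked roughly like this (the target IP and IQN below are made-up placeholders; sanboot takes an iSCSI URI of the form iscsi:server::::iqn):

```
#!ipxe
dhcp
sanboot iscsi:192.168.1.10::::iqn.2019-05.local.nas:sysdisk
```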
Yes, maybe I could just back up my Windows 7 setup and restore it after my test - but that’s a step I’d like to avoid; instead I’d like to use the available storage in my network directly - while also avoiding iSCSI, as, for some odd reason, it has extremely low performance on my network (as iSCSI is an industry standard, I guess it’s some issue in my network causing it).

It’s not like “this has to be possible” - if it’s not possible the way I’d like to do it - OK, fine, then it’s not possible, but that would require either a long, time-consuming backup and restore of my Windows setup or using low-performance iSCSI.

Does the following clarify what you’re asking for?
You’re not asking for a PXE install, which is where you typically run a minimal client on a target machine that installs onto the local machine using sources from a remote installation source…

You instead want to run a thin client on a local machine which runs a “diskless” system, ie the OS is actually running on the remote server while the thin client merely displays the UI.


Why not simply use a live image with persistent storage for testing?

there has to be a way to initially install the system into that share.

Nobody installs “into this share”. You prepare an image (which can be done anywhere, e.g. using Kiwi), transfer it to the server and export it. I do not think you can use NTFS exported via CIFS for a Linux root directly (the filesystem semantics are too different), but you certainly can loopback-mount an image off a CIFS share.
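A rough sketch of that loopback approach (all server, share and path names below are placeholders; needs root, cifs-utils, and an ext4 image file created and populated beforehand, e.g. with dd + mkfs.ext4):

```shell
# Mount the CIFS share, then loop-mount an ext4 image file stored on it.
# The image then behaves like a local block device with proper Linux
# filesystem semantics, even though it lives on the share.
mount -t cifs //fileserver/share /mnt/share -o credentials=/etc/cifs.cred
mount -o loop /mnt/share/leap-root.img /mnt/root
```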

A follow-up to my post,
Just making it clear that if a thin client deployment is what you’re after, that’s not the same as PXE.
And, more often than not, it involves a special type of server application called a terminal server… although there are other solutions, like VMware Horizon, that deliver a similar experience.


I’ve re-arranged the order so my reply makes a bit more sense …

I guess it’s on me for expressing myself badly, but let me try to clear it up for you:
This is not about setting up a thin client but rather a full runlevel 5 install - just on remote storage rather than on local storage.
Reason: as described, I currently have 6 drives in my system, 5 of them not usable as they make up the RAID array, and the 6th one as the current main system drive which Windows is installed on. As this is all just about a test to check whether what I’d like to do is possible, I’d rather not trash my Windows install but preserve it - or, from a logical point of view: just see it as an unusable/not connected drive. As I also can’t use any of the other drives (which would degrade the RAID, which in turn would require a rebuild - which took about 14 hours the last time a drive failed), I just don’t have any local drive to install Linux on. Hence my question: how, if possible at all, to install openSUSE on remote storage?
Why PXE? Well, because all PXE is supposed to do is load GRUB via the network - and GRUB itself is then able to load the kernel and initrd via the network as well. The question is: if I somehow manage to do an install on remote storage, is GRUB, the kernel, or the initrd able to mount it (like an SMB share) and use this as the root of the FS?
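As a sketch, the PXE-served grub.cfg for that could look something like this (paths and the NFS root below are placeholders; note it’s the initrd/kernel, not GRUB, that actually mounts the network root, using dracut’s root=nfs: syntax):

```
menuentry 'openSUSE Leap, root over NFS' {
    linux /boot/vmlinuz root=nfs:192.168.1.10:/exports/leaproot ip=dhcp
    initrd /boot/initrd
}
```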

Well, a live image would be a possible solution, as it’s easy for me to boot any type of image via PXE. The important question would be: how do I tell it to use the remote network storage on boot to load all the modifications I have to make in order to enable virtualization? I guess it surely is possible to mount /boot in a way so that mkinitrd is able to modify the boot menu and initrd so that it will be able to mount a network share - but this has to happen before the kernel loads, as setting up a system for passthrough virtualization requires not just a KVM kernel but also some special boot settings so that the hardware I’d like to pass through to the VM is ignored and thus actually available to pass through (I have already read about that topic).

So, to summarize: I’d like to do a normal install - but instead of writing the files onto a local disk I’d like to use storage available in my network.
Sure, I could use iPXE and its SAN capability - but for whatever reason this has very poor performance on my network (only about 5-10 MB/s read/write, even though I have gigabit Ethernet and SATA-3 drives able to perform faster than 100 MB/s).
Again: this is just a test to see whether my plan works - and other ideas, like trashing my current Windows install and restoring a backup afterwards, or using one of the RAID drives and doing a rebuild afterwards, are not an option - mostly due to the time they require (a RAID rebuild takes about 14 hours) - which I consider way too long just for a simple test, as even reinstalling Windows (just the OS and the basic drivers) takes about 3 hours - let alone reinstalling all my software and applications.
TL;DR: I have enough free space available within my network that I’d like to use as a simple drop-in replacement for a local drive. Using iPXE + SAN to “emulate” a drive does work - but has underwhelming performance, even slower than an old P-ATA/33 drive. And as I’m sure Linux is able to mount remote storage (no matter how: CIFS/NFS, SMB, iSCSI, FCoE, etc.) as root - there just has to be a way to also install onto such storage, like “mount //host/share /mnt” and install into there …
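One way to approximate that “mount and install into there” idea, sketched under some assumptions: zypper’s --root switch can bootstrap a system into an arbitrary directory tree. The repo URL, share and mount points below are placeholders; and since CIFS does not provide full POSIX semantics, a loopback-mounted image on the share is a safer install target than the share itself.

```shell
# Mount the share, loop-mount a pre-created ext4 image on it, then
# bootstrap openSUSE into that tree instead of a local disk (needs root):
mount -t cifs //fileserver/share /mnt/share -o credentials=/etc/cifs.cred
mount -o loop /mnt/share/leap-root.img /mnt/target

zypper --root /mnt/target ar http://download.opensuse.org/distribution/leap/15.2/repo/oss/ oss
zypper --root /mnt/target refresh
zypper --root /mnt/target in -t pattern base
```

This only installs the packages; you would still need to build an initrd and a boot entry that can reach the network root, so treat it as a starting point, not a full recipe.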

// EDIT:
I’m referring to something similar to this: SDB:Booting from the Network with GRUB - openSUSE Wiki
But just using PXE instead of a floppy for the initial load of GRUB.

The documentation states this:

NFS as a Root File System

On (diskless) systems, where the root partition is mounted via network as an NFS share, you need to be careful when configuring the network device with which the NFS share is accessible.

When shutting down or rebooting the system, the default processing order is to turn off network connections, then unmount the root partition. With NFS root, this order causes problems as the root partition cannot be cleanly unmounted because the network connection to the NFS share has already been deactivated. To prevent the system from deactivating the relevant network device, open the network device configuration tab as described in the section “Activating the Network Device” and choose On NFSroot in the Device Activation pane.
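The setting that quote refers to ends up in the interface configuration file, roughly like this (the interface name eth0 is just an example):

```
# /etc/sysconfig/network/ifcfg-eth0
STARTMODE='nfsroot'
BOOTPROTO='dhcp'
```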

So, it’s clearly possible - question is: How to perform the setup/install?

First, NFS works differently from CIFS; what might be possible on NFS might not be possible using a CIFS share.

I’m trying to imagine the architecture you’re trying to set up deploying the system files from a remote share.
That generally describes what is known as a “diskless workstation”, where a thin client which supports initial boot downloads an image and runs it in RAM.
Note that an image is not the same as a filesystem tree of individual directories and files.
If an image is writable, or is stored in multiple parts of which at least one is writable, then state may even be saved.

And, note that because of this distinction between an image and a normal installation, your question about installing into a network share does not make sense.

Images can be created in a number of different ways… some are pre-built and downloadable, like the LiveCD ISO files; you can also use or modify images built with Kiwi.