A little background on how I got here... I was running down an issue where the weekly automated FSTRIM would basically blow up when trying to process several SATA SSDs (containing guest VMs) connected to an LSI SAS/SATA HBA. After LSI confirmed that NONE of their SAS/SATA HBAs support FSTRIM on ANY SATA SSD, I grabbed a couple of Areca 1330-8i HBAs (we needed to be able to service a total of 16 SSDs/guests on our VM servers). While the Areca cards appear to be working great (as they usually do), particularly in their support of FSTRIM, I noticed that none of my "autostart" guests were being autostarted at boot time as expected. They did autostart with the LSI card, but not with the Areca card(s)...

So, basically, I needed to find out why my "autostart" guest VMs weren't getting autostarted at boot time.

"cat /var/log/messages | grep libvirt" showed this:

2021-05-06T10:50:31.061690-05:00 sundance libvirtd[1988]: Cannot access storage file '/VMDisks/Mail/Mail.qcow2' (as uid:442, gid:443): No such file or directory
2021-05-06T10:50:31.136053-05:00 sundance libvirtd[1988]: internal error: Failed to autostart VM 'Mail': Cannot access storage file '/VMDisks/Mail/Mail.qcow2' (as uid:442, gid:443): No such file or directory
2021-05-06T10:50:31.164148-05:00 sundance libvirtd[1988]: Cannot access storage file '/VMDisks/Web/Web.qcow2' (as uid:442, gid:443): No such file or directory
2021-05-06T10:50:31.178212-05:00 sundance libvirtd[1988]: internal error: Failed to autostart VM 'Web': Cannot access storage file '/VMDisks/Web/Web.qcow2' (as uid:442, gid:443): No such file or directory
2021-05-06T10:50:31.208746-05:00 sundance libvirtd[1988]: Cannot access storage file '/VMDisks/Dimension/Dimension_Data.qcow2' (as uid:442, gid:443): No such file or directory
2021-05-06T10:50:31.225231-05:00 sundance libvirtd[1988]: internal error: Failed to autostart VM 'Dimension': Cannot access storage file '/VMDisks/Dimension/Dimension_Data.qcow2' (as uid:442, gid:443): No such file or directory

(Note that all disks on the 1330 controllers are mounted under "/VMDisks".)

But the filesystems WERE mounted (...at least by the time I could log in and look). So I checked the libvirtd.service unit file to see whether it was set to wait for "local-fs.target" ("After=local-fs.target"), and it was.

After doing a bunch of research I found this: https://bugzilla.redhat.com/show_bug.cgi?id=1725389 ...which seemed pretty relevant!

Then I found another posting which led me to the man page for systemd.mount, which says this:

nofail

With nofail, this mount will be only wanted, not required, by
local-fs.target or remote-fs.target. Moreover the mount unit is not
ordered before these target units. This means that the boot will
continue without waiting for the mount unit and regardless whether
the mount point can be mounted successfully.

Now, we have always set the "nofail" option in fstab for those "secondary" filesystems which are not required for the system to boot. We don't want the O/S to fail to come up just because one of the less critical filesystems has a problem and/or cannot be mounted. I'd rather resolve that type of issue with a fully booted system and all its tools at my disposal...
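For reference, this is the shape of fstab entry in question (the device label and filesystem type here are made up for illustration):

```text
# /etc/fstab - secondary VM filesystem; boot continues even if it can't mount
/dev/disk/by-label/VMDISK01  /VMDisks  xfs  defaults,nofail  0  0
```

The systemd.mount man page also documents x-systemd.* mount options (for example "x-systemd.before=" and "x-systemd.wanted-by=") that can order a mount relative to a specific unit without giving up "nofail", though I have not tested that approach here.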

Anyway, it looks like the O/S is operating as designed - even if I don't agree with it. (It seems to me it would be better for systemd to WAIT for "nofail" mounts to complete, or fail, before moving past local-fs.target.) It's also curious that the LSI driver (mpt3sas) must have completed its mounts faster than the Areca driver (arcsas) did.

Anyway, I expect I'm not the only one subject to this less-than-optimal (IMHO) behavior. So I worked up a secondary "AutoStartVMs.sh" service to identify all "autostart" guests, wait for their filesystems (image files) to appear, and then start them (if they have not already started). There is an optional "Delay" to force a little wait (to give the mounts time to complete) before the whole process begins. There is also a configurable number of "Attempts" which will be made for each guest (looking for its image files); note the "sleep 10" delay between attempts if you want to change it. So far, the simple 60-second delay I am using seems to be more than enough to allow all the associated filesystems to mount: I've never seen it have to wait or retry, and once the first guest filesystem becomes visible I would expect all of them to be visible.
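The heart of it is just a bounded wait-and-retry loop on each image file. In isolation (with made-up short timings, and a temp file standing in for a guest image), the idea looks like this:

```shell
#!/bin/bash
# Minimal sketch of the wait-and-retry idea (timings shortened for
# illustration; the real script below sleeps 10 seconds between attempts).
WaitForFile() {
    local path=$1
    local attempts=$2
    local i
    for (( i=0; i<attempts; i++ )); do
        if [ -f "$path" ]; then
            echo "$path is online..."
            return 0
        fi
        sleep 1
    done
    echo "$path not found! (FAIL)"
    return 1
}

tmpfile=$(mktemp)                   # stands in for a guest image file
WaitForFile "$tmpfile" 3            # found on the first attempt
rm -f "$tmpfile"
WaitForFile "$tmpfile" 2 || true    # retries, then reports failure
```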

I have this running on my systems and it appears to mitigate the issue described above. Maybe it will help someone else as well. I also think it's important to document this behavior, since other folks (like me) may be surprised at how it is implemented!


AutoStartVMs.sh:

Code:
#!/bin/bash
#
# AutoStartVMs.sh - start "autostart" guests whose image files were not
# yet visible when libvirtd tried to autostart them.

function main() {

    local Delay=60

    # Get some runtime info...
    ThisDir=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
    ThisScript=$(basename "$0")
    FullScript="${ThisDir}/${ThisScript}"

    # Log file: same path/name as the script, with a .log extension
    LogFile="${FullScript%.*}.log"

    # Get list of AutoStarted DOMAINS (tail skips the two header lines)
    DOMAINS=$(virsh list --all --autostart | tail -n +3 | awk '{print $2}')

    TeeLog "${LogFile}" "Beginning VM AutoStart Process --------------"
    TeeLog "${LogFile}" ""

    if [ "$Delay" -gt 0 ]; then
        TeeLog "${LogFile}" "(Delay ${Delay} seconds...)"
        TeeLog "${LogFile}" ""
        sleep "$Delay"
    fi

    for DOMAIN in $DOMAINS; do
        StartDomain "$DOMAIN"
    done

    TeeLog "${LogFile}" "VM AutoStart Process Complete ---------------"
    TeeLog "${LogFile}" ""

    exit 0
}

function StartDomain() {

    local DOMAIN=$1
    local Attempts=10
    local Success

    # Use domstate rather than grepping the list (avoids substring matches
    # against similarly named domains)
    local State=$(virsh domstate "$DOMAIN" 2>/dev/null)
    if [ "$State" = "running" ]; then
        TeeLog "${LogFile}" "$DOMAIN is already RUNNING!"
        TeeLog "${LogFile}" ""
        return
    fi

    TeeLog "${LogFile}" "Starting $DOMAIN..."

    # Source paths of this domain's "disk" devices (skip cdroms etc.)
    SOURCES=$(virsh domblklist "$DOMAIN" --details | awk '$2 == "disk" {print $4}')

    for SOURCE in $SOURCES; do
        Success=false
        for (( i=0; i<Attempts; i++)); do
            if [ -f "${SOURCE}" ]; then
                TeeLog "${LogFile}" "$SOURCE is online..."
                Success=true
                break
            else
                TeeLog "${LogFile}" "Waiting for $SOURCE..."
                sleep 10
            fi
        done

        if [ "$Success" = false ]; then
            TeeLog "${LogFile}" "$SOURCE not found! (FAIL)"
            TeeLog "${LogFile}" ""
            return
        fi
    done

    virsh start "$DOMAIN"

    TeeLog "${LogFile}" "$DOMAIN Started!"
    TeeLog "${LogFile}" ""
}

function TeeLog() {
    local logfile=$1
    local message=$2

    if [ -z "${message}" ]; then
        echo "" | tee -a "${logfile}"
    else
        echo "$(date +'%Y-%m-%d %H:%M:%S') ${message}" | tee -a "${logfile}"
    fi
}

main "$@"
...and here is the service definition file:

AutoMountVMs.service:

Code:
[Unit]
Description=Start VMs which may not have AutoStarted as their file systems were not yet mounted
Requires=local-fs.target
Wants=libvirtd.service
After=local-fs.target libvirtd.service

[Service]
Type=oneshot
ExecStart=/VMDisks/AutoStartVMs.sh

[Install]
WantedBy=default.target
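To put it in place on a system like mine (paths as used above; run as root):

```text
install -m 755 AutoStartVMs.sh /VMDisks/AutoStartVMs.sh
install -m 644 AutoMountVMs.service /etc/systemd/system/AutoMountVMs.service
systemctl daemon-reload
systemctl enable AutoMountVMs.service
```

Note that I enable it without "--now", since it only matters at the next boot anyway.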