Wait for script to complete during boot

Greetings All,

I am attempting to get a script running at boot time, and I am having a bit of difficulty.

The script conditionally mounts an encrypted disk based on availability.
The script will prompt the user for a password and mount the appropriate partition.

The script works fine when called from a fully booted system.
However, I need this to execute during boot, BEFORE the desktop GUI is started.

I have attempted to run this from /etc/rc.d/boot.local.
However, it appears that boot.local is run concurrently with other processes.
When the script runs, the password prompt appears briefly, but the rest of the
boot process continues before I can finish entering the password.

I need this script to finish BEFORE KDE is started.
Is there some way to get boot.local to block other processes until completion?
Or is there some other magic available to run this script in a serial fashion at boot time?

The computer is running SUSE v42.2.

Thanx in Advance.

Richard Rosa

Script that is run:

#!/bin/sh
# Prefer the optional disk when present; otherwise use the internal one.
if [ -e "/dev/disk/by-uuid/963525f9-73e3-4b28-9edb-c3e44db2e4d8" ]; then
    cryptsetup luksOpen /dev/disk/by-uuid/963525f9-73e3-4b28-9edb-c3e44db2e4d8 ehome
else
    cryptsetup luksOpen /dev/disk/by-uuid/d1039572-21f9-4bac-8f0d-5ab2567adbd2 ehome
fi
mount /dev/mapper/ehome /ehome


#! /bin/sh
# Copyright (c) 2002 SuSE Linux AG Nuernberg, Germany.  All rights reserved.
# Author: Werner Fink, 1996
#         Burchard Steinbild, 1996
# /etc/init.d/boot.local
# script with local commands to be executed from init on system startup
# Here you should add things, that should happen directly after booting
# before we're going to the first run level.
openvt -s -w /etc/rc.d/t420s.sh
exit 0

You are still working with old SysVinit constructs. openSUSE has been using systemd for some time now. While the old rc.d scripts etc. are still honoured by systemd as well as possible, you are better off going for the new way of doing things.

Maybe start with

man systemd.unit

Wow! This is a little beyond my current knowledge of the internals of Linux.

If I am reading the man pages right, what I need to do is create a configuration file (‘unit’) that points to my script,
place it in one of the default paths and it will be executed. If this is so, can you recommend a ‘unit’ file that
I can copy as a template that would do the job?



Curious. Have you tried adding it to /etc/fstab with the nofail option? Never tried it with encrypted disks, but that should work.

This answer is not really to the OP’s direct question, but it tries to answer the probable goal of the OP.

Not a bad approach. But please, richardrosa, can’t we rather try to help you with your goal than with this step? See also: http://www.catb.org/~esr/faqs/smart-questions.html#goal

The unit file goes in /etc/systemd/system/ and the file name has to end in .system.

A short example I found:

[Unit]
Description=My cool daemon
After=network.target

[Service]
ExecStart=/usr/local/daemon

[Install]
WantedBy=multi-user.target

Where the /usr/local/daemon of this example should of course be replaced by the path to your script. And the After= should most probably be different. I am not sure how to guarantee that it will finish before some (which) other action.

I only wanted to point you to the fact that you are walking the wrong path. Not that I have much experience with writing systemd units. But maybe others with more hands-on knowledge will notice this thread.

The above apart from the fact that we do not really know if the whole approach is correct, because we do not know what the main question is (see posts above).

The problem with this method is that I want one disk or the other. One of the partitions is usually always present. The other will be on a disk that may or may not be attached. If the optional disk is present, I need that one mounted instead of the always present one. Either disk is mounted at the same mount-point.

I can’t see any way of specifying this process via FSTAB. Add to this, the password would be prompted twice when both disks are present.
If you know of some magic to conditionally process in FSTAB (or CRYPTTAB), I can give it a shot.



Well, when the one is always present, just put it in fstab (not FSTAB).

When the other one is sometimes present, put it in fstab after the first one with the nofail option.

Then, when the second one is not present, the mount at boot will be skipped.
When the second one is present, it will be mounted at the same mount point the first one is mounted, hiding it in the process.

No need to mount instead of the first one, just mount it over the first one.

I assume that the second one will then indeed ask for the password twice. I do not know if that is a showstopper. At least it would inform you what the situation is.
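A sketch of this suggestion (the crypttab names and the ext4 type are assumptions, not taken from the thread; with LUKS volumes the unlocking would go through /etc/crypttab and the mounts through the resulting /dev/mapper devices):

```
# /etc/crypttab (hypothetical entries)
ehome_int  UUID=d1039572-21f9-4bac-8f0d-5ab2567adbd2
ehome_ext  UUID=963525f9-73e3-4b28-9edb-c3e44db2e4d8  none  nofail

# /etc/fstab: the optional disk is listed second, so when present it
# would be mounted over the internal one at the same mount point
/dev/mapper/ehome_int  /ehome  ext4  defaults        0 2
/dev/mapper/ehome_ext  /ehome  ext4  defaults,nofail 0 2
```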

I wish it was this easy…

However, fstab processing seems to be fussy about having two devices specified for the same mount point. It emits an error message and ignores the second entry regardless of whether the second device is present or not.

Likewise, if the same name is specified in crypttab for two different devices, neither one is processed.

That is why I figured running a simple script should do the trick. But getting that script to work seems to be anything but simple…


Hm, you did not tell that you tried those obvious solutions already.

I can understand that mentioning the same mount point two times in the configuration is seen as an error. After all, one would hide the other, and why “would one want that”?

That brings us again one step further to the root problem: why would one want such a situation? Apparently what you want is not something that is foreseen by the designers ;).

BTW going for a systemd unit is of course still possible. I guess.

It is a bit of searching and maybe trial and error.

It may be that you need a .service and not a .system.

man systemd.service

Have a look at what I assume is the mount service:


It has some useful (IMHO) hints, like the Type=oneshot, etc.

I would start with e.g.

[Unit]
Description=Mount extra file system over existing when available

[Service]
Type=oneshot
ExecStart=/etc/rc.d/t420s.sh

[Install]
WantedBy=multi-user.target

Maybe it needs some more dependencies.

That brings us again one step further to the root problem: why would one want such a situation?

This is needed because of a deficiency in the NVIDIA drivers.

I have two machines: A desk machine with an NVIDIA display and a T420s Laptop with an Intel display chip.
When I am at home, my Wife uses the Laptop occasionally for a bit of web surfing.
However, when I travel, I remove my main drive (an SSD) out of the desk machine and use it as the boot drive for the laptop.
When I ran Suse 13.2, this was NEVER a problem. However, with 42.2, the NVIDIA drivers MUST be removed BEFORE
the disk is inserted and booted; otherwise Plasma will NOT start. Conversely, if the NVIDIA drivers are NOT installed
on the desk machine, the system has a tendency to lock.

So rather than constantly trying to remember to install/uninstall these drivers when the disk is swapped (which is also very time-consuming),
I figured I would set up the (small) laptop disk as the permanent boot, with some “simple” logic to mount the SSD data & home partitions in place of the
laptop’s. This saves me having to swap drivers, or trying to sync data, settings, etc.

I CANNOT believe that I am the only one in the world who needs this…

I would start with e.g.

Thanx for the starting point. I also DID try playing around with the file that launches boot.local to see if I could get it to stop until complete
(TimeoutSec=infinity) but that didn’t seem to do much of anything.

I’ve got the new unit file in place and the system log says it timed-out (no password prompt, either).
I’ve got some debugging (and some self-education) to do.

However, the Wife is calling me for lunch, and wants me to do some non-computer based chores :frowning:
I’ll probably get back to this later today or early tomorrow.

Thanx for all the assistance.


Please take into account that I have no experience with the encrypting, thus I have no idea where the prompt for the encryption password goes. But when it is simply asked on the console when the mount is done, that should work, I guess.

I do not know how you installed the nVidia drivers, but if you used the openSUSE RPM, it puts the libraries in separate directories and adds an ld.so.conf snippet to override the default ones. So if the only reason is to remove the nVidia drivers, a simple script that adds/removes the ld.so.conf snippet on boot, depending on whether nVidia hardware is present, looks like enough.
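A minimal sketch of that idea (the snippet path and file name are assumptions; check which file the nVidia RPMs actually dropped in /etc/ld.so.conf.d/ on your system):

```shell
# Move the nVidia ld.so.conf snippet in or out of place depending on
# whether NVIDIA hardware is present.
# Arguments: $1 = snippet path, $2 = "parked" path, $3 = 1 if GPU present
toggle_nvidia_snippet() {
    snippet=$1; parked=$2; present=$3
    if [ "$present" -eq 1 ]; then
        # GPU present: restore the snippet so the nVidia libraries win
        [ -e "$parked" ] && mv "$parked" "$snippet"
    else
        # No GPU: park the snippet so the stock libraries are used
        [ -e "$snippet" ] && mv "$snippet" "$parked"
    fi
    return 0
}

# On a real system, early in boot, something like (requires root;
# the .conf name below is hypothetical):
#   lspci | grep -qi nvidia && present=1 || present=0
#   toggle_nvidia_snippet /etc/ld.so.conf.d/nvidia.conf /etc/ld.so.conf.d/nvidia.conf.off "$present"
#   ldconfig   # rebuild the ld.so cache after changing the configuration
```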

Unfortunately, that was tried, without much success. It got Plasma started, but as soon as ANY graphical app (including some Web pages) started,
Plasma would crash. The only thing that reliably works (at least for me) is to COMPLETELY remove the proprietary drivers.

In any event, after MUCH trial & error, and a LOT of re-boots, I have the magic needed:
Two keywords in the UNIT section get the script executed at the right time:




in the Service section
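The configuration lines themselves did not survive in this copy of the thread. As a purely hypothetical reconstruction from the surrounding description (run after the fstab mounts, before anything graphical, blocking, with a console password prompt), a unit along these lines could behave as described; every directive below is an assumption, not the poster's actual file:

```
[Unit]
Description=Conditionally mount the encrypted /ehome
After=local-fs.target              # assumed: wait until fstab mounts are done
Before=display-manager.service     # assumed: finish before KDE/Plasma starts

[Service]
Type=oneshot                       # assumed: systemd waits for the script to exit
StandardInput=tty                  # assumed: lets the password prompt reach a console
ExecStart=/etc/rc.d/t420s.sh

[Install]
WantedBy=multi-user.target
```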

The trick is finding where the system has finished with the /etc/fstab mounts but has not yet started spinning off
multiple, parallel processes.

There doesn’t seem to be any easy way to figure out the order in which systemd starts various services.
The best that can be done is to look at the log AFTER the system is up and make a guess.
What also makes the sequence hard to figure out is that log entries from various services are interspersed.
It would seem to me that there SHOULD be a command to display the order of service start,
but if there is, I did not discover it.
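For what it's worth, systemd does ship introspection tools for this, and they should already be present on Leap 42.2 (run them on the booted target machine; the guard below only avoids errors on systems without systemd):

```shell
# Inspect the order in which systemd started things.
if command -v systemd-analyze >/dev/null 2>&1; then
    systemd-analyze critical-chain                  # chain of units the boot waited on
    systemd-analyze plot > /tmp/bootchart.svg       # SVG timeline of every unit's start
    systemctl list-dependencies multi-user.target   # tree of units pulled in at boot
fi
```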

After MUCH effort I have restored some functionality to LEAP that was lost from 13.2.
Hopefully, the next release of Suse will NOT drastically change the service start order,
or another day of reboots will be on tap :frowning:

I’ve learned quite a bit in the past 24 hours…

Thanx for your help!



And thanks for posting the configuration lines that did the trick. They may be very helpful to others.

I do not think that openSUSE will switch to yet another SysVinit/systemd replacement any time soon ;).

But note that systemd was already used in 13.2 (and even earlier). And systemd, certainly in the beginning, was (and at least partly still is) able to handle SysVinit constructs (in init.d and rcN.d, etc.) in a backwards-compatible way. This was to make the transition easier. But when one keeps passing up the opportunity, there comes a moment where one is better off having done the transition.

Did you also update ld.so cache after changing configuration?