Docker: Cannot start any container because of missing entry point

Hello all,

I have a problem running any container at the moment; every attempt leads to an error, even with containers that were running fine before.

I had a container (mysql:latest) running for two weeks with no problem. Then I restarted the server PC, and afterwards the same container wouldn’t start any more, giving the message:

Error response from daemon: Container command 'docker-entrypoint.sh' not found or does not exist.

Also, new containers created from current images on Docker Hub fail to run with the same error message. For example:

docker run -d --name owncloud_test-container \
-p 34567:80 \
owncloud:9-apache

Leads to:

e9ca24834ba2ad76c185033e267ce42d5722330d2a688141bf362d9a3c2f052f
docker: Error response from daemon: Container command '/entrypoint.sh' not found or does not exist..
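
As far as I understand, ‘/entrypoint.sh’ is supposed to come from the image itself; a way to check what the image actually declares (using the image name from above) would be:

docker inspect --format '{{.Config.Entrypoint}}' owncloud:9-apache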

I am using:
OpenSuse Leap 42.1 (64 bit)
Docker: 1.11.1-106.1

The first thing I look at when I see strange behavior is AppArmor, but that should not be the problem here:

# /etc/init.d/boot.apparmor status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
apparmor.service - LSB: AppArmor initialization
   Loaded: loaded (/etc/init.d/boot.apparmor)
   Active: inactive (dead)

Any ideas?

Best
Torsten

If you’re accessing a docker container from the host OS (not from a remote machine),

You might use nsenter
https://en.opensuse.org/User:Tsu2/docker-enter

It’s reliable and does not depend on networking, in case that has somehow become non-functional… which could be any of a number of reasons why your SSH no longer works.

Once you have gained access to your running container, it should be much easier to troubleshoot why your SSH isn’t working… and who knows, as I noted in my nsenter guide, it may become your primary way to access your containers.
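
Roughly speaking, what docker-enter wraps up for you comes down to something like this (the container name is only a placeholder):

PID=$(docker inspect --format '{{.State.Pid}}' my-container)    # PID of the container’s main process
sudo nsenter --target "$PID" --mount --uts --ipc --net --pid -- /bin/bash    # join its namespaces and get a shell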

TSU

BTW
You should know that whenever a docker container is restarted, it’s not the same as before… it’s a brand new instance with hardly anything in common with the previous container; perhaps most importantly, it will have brand new IDs. So anything that relied on a previous ID, or on a configuration that was passed on the command line, will not persist.

To see what persists, you need to inspect the Dockerfile for that particular container; it’s a foundational “build” file that describes what was originally used as the base image and anything that has been permanently added to it, resulting in the final configuration.

This is why, for instance, although you might download and use a base openSUSE container at first, you <must> proceed immediately to creating a custom Dockerfile to configure and retain all your modifications. I created a simple tutorial covering the basic first steps (based on the official documentation) using openSUSE methods, which btw also introduces the concept of defining the ENTRYPOINT:
https://en.opensuse.org/User:Tsu2/docker-build-tutorial-1
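
Just to illustrate the idea (the names here are made up, and the entrypoint script would have to exist in your build directory), a minimal Dockerfile with an ENTRYPOINT might look like:

FROM opensuse
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]

and would be built with something like:

docker build -t my-custom-image .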

You should also consider whether your “docker-entrypoint.sh” is actually being invoked correctly from wherever it is supposed to be called.

You might also find it interesting to take a few minutes to read the GitHub project page for nsenter; it describes why repeatable access to the inside of a container is so complex, and the individual things nsenter has to do so that a single command can do it all.

TSU

Hello TSU,

thanks for the detailed answers.

First of all, yes, everything is running on one host; nothing remote in here yet (besides Docker Hub, maybe).
From my understanding, docker-enter doesn’t help at the moment, because my problem is not an access issue; the problem is that the containers are not starting at all.

Concerning the point that a docker container is not the same after a restart: I understand that is the case if I call something like “docker run …”. But if I do:

docker stop my-container
docker start my-container

It is still the same container, isn’t it?

And that is what I did, just with a server restart in between, and then I get the described problems.

Just to verify the problem, I also created new containers from images directly from Docker Hub, e.g. the following, but it is the same for all images that I tested:

docker run -d --name owncloud_test-container \ 
-p 34567:80 \
owncloud:9-apache

Without my changing anything about the images on Docker Hub, they should at least start running, shouldn’t they? But they don’t; they fail with an error message similar to the one I wrote above.

Yes,
If you are using the “stop/start” commands, you are able to persist the container.
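
A quick way to convince yourself of that, with a throwaway container (the names here are only examples):

docker run -d --name idtest busybox sleep 3600
docker inspect -f '{{.Id}}' idtest    # note the ID
docker stop idtest
docker start idtest
docker inspect -f '{{.Id}}' idtest    # same ID, still the same container
docker rm -f idtest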

Assuming your container should still be instantiated with the original command

docker run -d --name owncloud_test-container \ 
-p 34567:80 \
owncloud:9-apache

Then I’d probably next investigate whether your command “owncloud:9-apache” is still valid.
I’d be curious what that actually is, since it looks like some kind of custom command that starts up an ownCloud/Apache service. I’m also curious that your command doesn’t include a full, explicit path to the command.

Compare, for instance, to the standard command in my articles (taken from the official documentation), where even bash isn’t assumed to be found and a full path is specified:

docker run -i -t --net=host opensuse /bin/bash

In theory, the way you’ve invoked your command, you should be able to use nsenter to enter your container and run “owncloud:9-apache” from any location, and the command would execute. (Did you add the command to your system path?)

I wouldn’t suspect anything related to AppArmor unless you are invoking an app which does something somewhat unusual, or you previously created some kind of custom AppArmor rule. I have found that if you set up and install apps in the ordinary ways described in the SUSE/openSUSE documentation, or use YaST applets, your app won’t conflict with the default AppArmor settings.

TSU

Thinking a bit further about your situation:
IMO, if you’re willing, you should post your Dockerfile, partly to verify you’ve constructed your ENTRYPOINT correctly.

You might also be able to get some essential info if you use the “docker inspect” command, as follows:

docker inspect <image_or_container_name_or_id>
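
For example, to pull out just the entrypoint and command the daemon thinks it should run (using your container and image names from above):

docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' owncloud_test-container
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' owncloud:9-apache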

TSU

Thanks for the help.
I restarted the service several times over the last few days without any effect, but with the latest restart the problem is suddenly gone. I don’t know what I changed that was relevant. I will come back here if the problem returns or if I find out what I did to solve it.

Best