Podman compose fails to start with rootless netns /etc/resolv.conf error

So this morning I got this ugly surprise:

podman compose up -d
>>>> Executing external compose provider "/usr/local/bin/docker-compose". Please refer to the documentation for details. <<<<

[+] Running 0/6
 ⠼ Container zookeeper  Starting                                                                                                    0.4s 
 ⠼ Container datadog    Starting                                                                                                    0.4s 
 ⠼ Container kafka-ui   Starting                                                                                                    0.4s 
 ⠼ Container postgres   Starting                                                                                                    0.4s 
 ⠼ Container redis      Starting                                                                                                    0.4s 
 ⠸ Container kafka      Created                                                                                                     0.2s 
Error response from daemon: rootless netns: mount resolv.conf to "/etc/resolv.conf": no such file or directory
Error: executing /usr/local/bin/docker-compose up -d: exit status 1

I kinda need this for work, and the fact that it suddenly won't start anymore is a bit of a problem (I may have to go back to using docker).

Has anything changed around this recently?
Or is it my fault? Any advice is really appreciated!

Thanks :slight_smile:

My first thought is to look at the compose file: the error suggests resolv.conf is being mounted into the container at /etc/resolv.conf, so whatever the docker-compose.yml says about that path could be the cause of the error itself, unless I'm mistaken.

Can you share the contents of the compose file?

Hi, here's the docker-compose file; I've just removed some environment vars.

version: "3.8"
services:
  postgres:
    image: postgres
    container_name: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    volumes:
      - ./docker_postgres_init.sql:/docker-entrypoint-initdb.d/docker_postgres_init.sql
  zookeeper:
    image: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: confluentinc/cp-kafka
    container_name: kafka
    ports:
      - "9092:9092"
      - "9094:9094"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LISTENERS: INTERNAL://0.0.0.0:9094,OUTSIDE://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9094,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      ADV_HOST: kafka
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:master
    ports:
      - "10101:8080"
    environment:
    # ...
  redis:
    image: redis:alpine
    container_name: redis
    ports:
      - "6379:6379"
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
  mongo:
    image: mongo:5.0.6
    container_name: mongo
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: localUser
      MONGO_INITDB_ROOT_PASSWORD: localPwd
  datadog:
    image: gcr.io/datadoghq/agent:7
    container_name: datadog
    pid: host
    ports:
      - "4317:4317"
      - "4318:4318"
      - "8126:8126"
      - "8125:8125/udp"
    environment:
      # ...
    volumes:
      # - /var/run/docker.sock:/var/run/docker.sock
      - /proc/:/host/proc/:ro
      - /sys/fs/cgroup:/host/sys/fs/cgroup:ro

volumes:
  pgadmin-data:

There is no mention of /etc/resolv.conf, and this problem only appeared very recently.

To be fair, even on my home server, which I just reinstalled a couple of days ago because it was running a vulnerable snapshot of Tumbleweed (xz…), I wasn't able to run my containers with podman and had to go back to docker. And that's despite using the reinstall as a chance to move over to Slowroll.

The docker compose file there looks very different (jellyfin, nginx, etc.), but it also doesn't contain any mention of resolv.conf, and yet the same problem happens. So IMO some recent update must have broken podman, and it must affect both Tumbleweed and Slowroll.

Weird.

Thanks in advance

Could you provide the outputs of:

ll /etc/resolv.conf
cat /etc/resolv.conf

I wonder if this might be of use - seems like a similar (or the same) issue, and it’s recent:


Thanks, that looks like it, although it's very unclear what I'm supposed to do to get back to a working state. A fix was merged last week but hasn't been released yet… I guess I'll ask there.

EDIT: actually 5.0.1 has been released last week, so maybe I misunderstood…


Sure!

 🐧 andrea 20:12:41 09/04/24  🏠  ✅  ll /etc/resolv.conf
-rw-r--r-- 1 root root 68 Apr  9 20:00 /etc/resolv.conf
 🐧 andrea 20:12:42 09/04/24  🏠  ✅  cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4

Great, I was checking if it was a symlink or some other weird config.

Could you try mounting resolv.conf in the container explicitly as read-only:

    volumes:
      - /etc/resolv.conf:/etc/resolv.conf:ro
      ...

I simplified my docker-compose file but still getting the same error:

version: "3.8"
services:
  mongo:
    image: mongo:5.0.6
    container_name: mongo
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: localUser
      MONGO_INITDB_ROOT_PASSWORD: localPwd
    volumes:
      - /etc/resolv.conf:/etc/resolv.conf:ro
 🐧 andrea 20:56:18 09/04/24   feature/si-5073 U:1  🏠/git/work  ❌1  podman compose up -d
>>>> Executing external compose provider "/usr/local/bin/docker-compose". Please refer to the documentation for details. <<<<

[+] Running 0/0
 ⠋ Container mongo  Starting                                                                                                       0.0s 
Error response from daemon: rootless netns: mount resolv.conf to "/etc/resolv.conf": no such file or directory
Error: executing /usr/local/bin/docker-compose up -d: exit status 1

It seems to be bind mounting resolv.conf to a non-existent path /etc/resolv.conf.
Could you try the dns option as per the docs:
https://docs.podman.io/en/latest/markdown/podman-create.1.html#dns-ipaddr
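
In compose terms that would look something like this (the addresses below are just the Google resolvers from your resolv.conf; swap in whatever you actually use):

services:
  mongo:
    image: mongo:5.0.6
    dns:
      - 8.8.8.8
      - 8.8.4.4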

As I understand it, it might be an AppArmor issue - that it’s not letting podman read the file.
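
If it is AppArmor, any denials should show up in the kernel log; something along these lines should surface them (assuming AppArmor is actually enabled on your system):

sudo dmesg | grep -i 'apparmor.*denied'
# or, via the journal:
sudo journalctl -k | grep -i apparmor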

I did more testing on this by creating a plain podman MongoDB container, which also failed, by the way. There were multiple issues causing this:

  1. slirp4netns was not installed
  2. IPv6 was disabled or not accessible for podman

This worked and the container could be started:

podman create \
--name mongo \
--network=slirp4netns:enable_ipv6=false \
--env 'MONGO_INITDB_ROOT_USERNAME=testUser' \
--env 'MONGO_INITDB_ROOT_PASSWORD=testPassword' \
docker.io/library/mongo:5.0.6

This did not work:

podman create \
--name mongo \
--env 'MONGO_INITDB_ROOT_USERNAME=testUser' \
--env 'MONGO_INITDB_ROOT_PASSWORD=testPassword' \
docker.io/library/mongo:5.0.6

Using a pod to replicate podman compose:

podman pod create \
--publish 27017:27017 \
--network=slirp4netns:enable_ipv6=false \
test

podman create \
--name mongo \
--pod test \
--env 'MONGO_INITDB_ROOT_USERNAME=testUser' \
--env 'MONGO_INITDB_ROOT_PASSWORD=testPassword' \
docker.io/library/mongo:5.0.6
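
With both created, starting the pod should bring the mongo container up along with it:

podman pod start test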

Even podman hello world fails for me (without disabling IPv6) on podman v5.0.1.
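For reference, the hello world test here is just the stock image (assuming the default quay.io location):

podman run --rm quay.io/podman/hello
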
I've created a GitHub issue:

It's an AppArmor bug in openSUSE that's being fixed:

In newer versions, Podman uses pasta for rootless networking instead of slirp4netns. Apparently the default openSUSE AppArmor profile does not give pasta the access it needs, so an update to the rules is required and is in the works.

In the meantime, you can explicitly specify slirp4netns as the network driver.
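
If you'd rather not touch every compose file or command line, a minimal sketch (assuming Podman 4.7+, where this containers.conf key exists, and that slirp4netns is installed, e.g. via sudo zypper install slirp4netns) is to make it the default for rootless networking:

# ~/.config/containers/containers.conf
[network]
# use slirp4netns instead of pasta until the AppArmor profile is fixed
default_rootless_network_cmd = "slirp4netns"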

Thanks everyone, the problem is solved with snapshot 20240409 :tada:

