So this morning I got this ugly surprise:
podman compose up -d
>>>> Executing external compose provider "/usr/local/bin/docker-compose". Please refer to the documentation for details. <<<<
[+] Running 0/6
⠼ Container zookeeper Starting 0.4s
⠼ Container datadog Starting 0.4s
⠼ Container kafka-ui Starting 0.4s
⠼ Container postgres Starting 0.4s
⠼ Container redis Starting 0.4s
⠸ Container kafka Created 0.2s
Error response from daemon: rootless netns: mount resolv.conf to "/etc/resolv.conf": no such file or directory
Error: executing /usr/local/bin/docker-compose up -d: exit status 1
I kinda need this for work, and the fact that it suddenly stopped starting is a bit of a problem (I may have to go back to Docker).
Has anything changed around this recently?
Or is it my fault? Any advice is really appreciated!
Thanks
My first thought is to look at the compose file: it seems that resolv.conf is being mounted into the container at /etc/resolv.conf, so wherever the docker-compose.yml says it should come from may be the cause of the error itself, unless I'm mistaken.
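A quick way to check both hypotheses from the shell (a sketch; run it from the directory containing the compose file):

```shell
# Does the compose file itself mention resolv.conf anywhere?
grep -n 'resolv.conf' docker-compose.yml || echo "no mention in the compose file"

# Is the host's /etc/resolv.conf a regular file, a symlink, or missing?
ls -l /etc/resolv.conf 2>/dev/null || echo "/etc/resolv.conf is missing"
```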
Can you share the contents of the compose file?
Hi, here’s the docker-compose file; I’ve just removed some environment vars.
version: "3.8"
services:
  postgres:
    image: postgres
    container_name: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    volumes:
      - ./docker_postgres_init.sql:/docker-entrypoint-initdb.d/docker_postgres_init.sql
  zookeeper:
    image: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: confluentinc/cp-kafka
    container_name: kafka
    ports:
      - "9092:9092"
      - "9094:9094"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LISTENERS: INTERNAL://0.0.0.0:9094,OUTSIDE://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9094,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      ADV_HOST: kafka
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:master
    ports:
      - "10101:8080"
    environment:
      # ...
  redis:
    image: redis:alpine
    container_name: redis
    ports:
      - "6379:6379"
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
  mongo:
    image: mongo:5.0.6
    container_name: mongo
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: localUser
      MONGO_INITDB_ROOT_PASSWORD: localPwd
  datadog:
    image: gcr.io/datadoghq/agent:7
    container_name: datadog
    pid: host
    ports:
      - "4317:4317"
      - "4318:4318"
      - "8126:8126"
      - "8125:8125/udp"
    environment:
      # ...
    volumes:
      # - /var/run/docker.sock:/var/run/docker.sock
      - /proc/:/host/proc/:ro
      - /sys/fs/cgroup:/host/sys/fs/cgroup:ro
volumes:
  pgadmin-data:
There is no mention of /etc/resolv.conf, and this problem only appeared very recently.
For what it’s worth, even on my home server, which I reinstalled a couple of days ago because it was running a vulnerable snapshot of Tumbleweed (xz…), I wasn’t able to run my containers with podman and had to go back to docker. And that’s even though I took the chance of the reinstall to move over to Slowroll.
The compose file there looks very different (jellyfin, nginx, etc.), yet it also contains no mention of resolv.conf, and the same problem happens. So IMO some recent update must have broken podman, and it affects both Tumbleweed and Slowroll.
Weird.
Thanks in advance
Could you provide the outputs of:
ll /etc/resolv.conf
cat /etc/resolv.conf
I wonder if this might be of use - seems like a similar (or the same) issue, and it’s recent:
GitHub issue (opened 26 Mar 24, closed 03 Apr 24; labels: kind/bug, network, pasta):
### Issue Description
Since updating to podman 5, I am receiving the following … error upon container startup.
```
Error: rootless netns: mount resolv.conf to "/run/user/10001/containers/networks/rootless-netns/run/systemd/resolve/stub-resolv.conf": no such file or directory
```
These are rootless containers running as systemd services via quadlet
### Steps to reproduce the issue
1. create hello-world.container
```
[Unit]
Description=Hello World
[Container]
AutoUpdate=registry
ContainerName=hello-world
Image=quay.io/podman/hello:latest
LogDriver=journald
Network=hello-world.network
[Install]
WantedBy=default.target
```
2. create hello-world.network
```
[Network]
```
3. systemctl --user daemon-reload
4. systemctl --user start hello-world
### Describe the results you received
```
Mar 25 20:58:40 nas systemd[711]: Starting Hello World...
Mar 25 20:58:40 nas podman[135892]: 2024-03-25 20:58:40.545778613 -0400 EDT m=+0.014761399 container create bfde8a8e86e9ffd86e5de627a10f4b3baf5a70bccc5f71b09666f628e46ad29b (image=quay.io/podman/hello:latest, name=hello-world, org.opencontainers.image.revision=76b262056eae09851d0a952d0f42b5bbeedde471, org.opencontainers.image.source=https://raw.githubusercontent.com/containers/PodmanHello/76b262056eae09851d0a952d0f42b5bbeedde471/Containerfile, io.containers.autoupdate=registry, maintainer=Podman Maintainers, org.opencontainers.image.description=Hello world image with ascii art, org.opencontainers.image.documentation=https://github.com/containers/PodmanHello/blob/76b262056eae09851d0a952d0f42b5bbeedde471/README.md, io.buildah.version=1.23.1, org.opencontainers.image.title=hello image, artist=Máirín Ní Ḋuḃṫaiġ, X/Twitter:@mairin, PODMAN_SYSTEMD_UNIT=hello-world.service, io.containers.capabilities=sys_chroot, org.opencontainers.image.url=https://github.com/containers/PodmanHello/actions/runs/8406198111)
Mar 25 20:58:40 nas podman[135892]: 2024-03-25 20:58:40.551110562 -0400 EDT m=+0.020093348 container remove bfde8a8e86e9ffd86e5de627a10f4b3baf5a70bccc5f71b09666f628e46ad29b (image=quay.io/podman/hello:latest, name=hello-world, org.opencontainers.image.url=https://github.com/containers/PodmanHello/actions/runs/8406198111, PODMAN_SYSTEMD_UNIT=hello-world.service, artist=Máirín Ní Ḋuḃṫaiġ, X/Twitter:@mairin, org.opencontainers.image.source=https://raw.githubusercontent.com/containers/PodmanHello/76b262056eae09851d0a952d0f42b5bbeedde471/Containerfile, org.opencontainers.image.title=hello image, io.buildah.version=1.23.1, io.containers.capabilities=sys_chroot, maintainer=Podman Maintainers, org.opencontainers.image.documentation=https://github.com/containers/PodmanHello/blob/76b262056eae09851d0a952d0f42b5bbeedde471/README.md, io.containers.autoupdate=registry, org.opencontainers.image.description=Hello world image with ascii art, org.opencontainers.image.revision=76b262056eae09851d0a952d0f42b5bbeedde471)
Mar 25 20:58:40 nas podman[135892]: 2024-03-25 20:58:40.541326914 -0400 EDT m=+0.010309705 image pull 338f8d8caa62e120293bac50496b88d73298047bfe4789a5a0621fe5ceb09860 quay.io/podman/hello:latest
Mar 25 20:58:40 nas hello-world[135892]: Error: rootless netns: mount resolv.conf to "/run/user/4005/containers/networks/rootless-netns/run/systemd/resolve/stub-resolv.conf": no such file or directory
Mar 25 20:58:40 nas systemd[711]: hello-world.service: Main process exited, code=exited, status=127/n/a
Mar 25 20:58:40 nas systemd[711]: hello-world.service: Failed with result 'exit-code'.
Mar 25 20:58:40 nas systemd[711]: Failed to start Hello World.
```
### Describe the results you expected
successful container creation
### podman info output
```yaml
host:
  arch: amd64
  buildahVersion: 1.35.1
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.10-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: 2dcd736e46ded79a53339462bc251694b150f870'
  cpuUtilization:
    idlePercent: 99.51
    systemPercent: 0.14
    userPercent: 0.34
  cpus: 24
  databaseBackend: boltdb
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  freeLocks: 1815
  hostname: nas
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 4005
      size: 1
    - container_id: 1
      host_id: 427680
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 4005
      size: 1
    - container_id: 1
      host_id: 427680
      size: 65536
  kernel: 6.8.1-arch1-1
  linkmode: dynamic
  logDriver: journald
  memFree: 20794699776
  memTotal: 33447477248
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: /usr/lib/podman/aardvark-dns is owned by aardvark-dns 1.10.0-1
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: /usr/lib/podman/netavark is owned by netavark 1.10.3-1
    path: /usr/lib/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.14.4-1
    path: /usr/bin/crun
    version: |-
      crun version 1.14.4
      commit: a220ca661ce078f2c37b38c92e66cf66c012d9c1
      rundir: /run/user/4005/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: /usr/bin/pasta is owned by passt 2024_03_20.71dd405-1
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU General Public License, version 2 or later
      <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user/4005/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.3-1
    version: |-
      slirp4netns version 1.2.3
      commit: c22fde291bb35b354e6ca44d13be181c76a0a432
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 0
  swapTotal: 0
  uptime: 3h 39m 55.00s (Approximately 0.12 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/container-runner/.config/containers/storage.conf
  containerStore:
    number: 26
    paused: 0
    running: 18
    stopped: 8
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/container-runner/.local/share/containers/storage
  graphRootAllocated: 490304405504
  graphRootUsed: 180877135872
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 45
  runRoot: /run/user/4005/containers
  transientStore: false
  volumePath: /home/container-runner/.local/share/containers/storage/volumes
version:
  APIVersion: 5.0.0
  Built: 1711060217
  BuiltTime: Thu Mar 21 18:30:17 2024
  GitCommit: e71ec6f1d94d2d97fb3afe08aae0d8adaf8bddf0-dirty
  GoVersion: go1.22.1
  Os: linux
  OsArch: linux/amd64
  Version: 5.0.0
```
### Podman in a container
No
### Privileged Or Rootless
Rootless
### Upstream Latest Release
Yes
### Additional environment details
- archlinux
- netplan to manage systemd networking
### Additional information
_No response_
Thanks, that does look like it, although it’s very unclear what I’m supposed to do to get back to a working state. A fix was merged last week but hasn’t been released yet… I guess I’ll ask there.
EDIT: actually, 5.0.1 was released last week, so maybe I misunderstood…
Sure!
🐧 andrea 20:12:41 09/04/24 🏠 ✅ ll /etc/resolv.conf
-rw-r--r-- 1 root root 68 Apr 9 20:00 /etc/resolv.conf
🐧 andrea 20:12:42 09/04/24 🏠 ✅ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
Great, I was checking whether it was a symlink or some other unusual config.
Could you try mounting resolv.conf into the container explicitly as read-only:
volumes:
  - /etc/resolv.conf:/etc/resolv.conf:ro
...
I simplified my docker-compose file, but I’m still getting the same error:
version: "3.8"
services:
  mongo:
    image: mongo:5.0.6
    container_name: mongo
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: localUser
      MONGO_INITDB_ROOT_PASSWORD: localPwd
    volumes:
      - /etc/resolv.conf:/etc/resolv.conf:ro
🐧 andrea 20:56:18 09/04/24 feature/si-5073 U:1 🏠/git/work ❌1 podman compose up -d
>>>> Executing external compose provider "/usr/local/bin/docker-compose". Please refer to the documentation for details. <<<<
[+] Running 0/0
⠋ Container mongo Starting 0.0s
Error response from daemon: rootless netns: mount resolv.conf to "/etc/resolv.conf": no such file or directory
Error: executing /usr/local/bin/docker-compose up -d: exit status 1
It seems to be bind-mounting resolv.conf to /etc/resolv.conf, a path it then claims does not exist.
Could you try the dns option as per the docs:
https://docs.podman.io/en/latest/markdown/podman-create.1.html#dns-ipaddr
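For reference, the flag in those docs belongs to podman create/run; in a compose file the rough equivalent is the service-level dns key (a sketch; the resolver addresses are placeholders):

```yaml
services:
  mongo:
    image: mongo:5.0.6
    dns:
      - 8.8.8.8
      - 8.8.4.4
```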
As I understand it, this might be an AppArmor issue: AppArmor not letting podman read the file.
I did some more testing on this by creating a plain podman mongodb container, which also failed, by the way. There were multiple issues causing this:
- slirp4netns was not installed
- IPv6 was disabled or not accessible for podman
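The first of those causes can be checked quickly (a hypothetical sketch; it only looks for the two rootless networking helpers on PATH):

```shell
# Report whether the rootless networking helpers are installed.
for tool in slirp4netns pasta; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed ($(command -v "$tool"))"
  else
    echo "$tool: MISSING"
  fi
done
```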
This worked and the container could be started:
podman create \
  --name mongo \
  --network=slirp4netns:enable_ipv6=false \
  --env 'MONGO_INITDB_ROOT_USERNAME=testUser' \
  --env 'MONGO_INITDB_ROOT_PASSWORD=testPassword' \
  docker.io/library/mongo:5.0.6
This did not work:
podman create \
  --name mongo \
  --env 'MONGO_INITDB_ROOT_USERNAME=testUser' \
  --env 'MONGO_INITDB_ROOT_PASSWORD=testPassword' \
  docker.io/library/mongo:5.0.6
Using a pod to replicate podman compose:
podman pod create \
  --publish 27017:27017 \
  --network=slirp4netns:enable_ipv6=false \
  test
podman create \
  --name mongo \
  --pod test \
  --env 'MONGO_INITDB_ROOT_USERNAME=testUser' \
  --env 'MONGO_INITDB_ROOT_PASSWORD=testPassword' \
  docker.io/library/mongo:5.0.6
Even the podman hello-world image fails for me (without disabling IPv6) on podman v5.0.1.
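For what it's worth, the host-side IPv6 state that pasta complains about can be checked directly; this sketch assumes the standard Linux sysctl path:

```shell
# 1 means IPv6 is disabled globally, 0 means it is enabled.
cat /proc/sys/net/ipv6/conf/all/disable_ipv6 2>/dev/null || echo "sysctl path not available"
```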
I’ve created a GitHub issue:
GitHub issue (opened 10 Apr 24; label: kind/bug):
### Issue Description
Normal hello world fails:
```
pavin@suse-pc:~> podman… run quay.io/podman/hello
Error: pasta failed with exit code 1:
No routable interface for IPv6: IPv6 is disabled
Couldn't open network namespace /run/user/1000/netns/netns-6df6c2e3-f8e0-5fd7-d3f6-0d878ce45476: Permission denied
```
Disabling IPv6 succeeds:
```
pavin@suse-pc:~> podman run --network=slirp4netns:enable_ipv6=false quay.io/podman/hello
!... Hello Podman World ...!
.--"--.
/ - - \
/ (O) (O) \
~~~| -=(,Y,)=- |
.---. /` \ |~~
~/ o o \~~~~.----. ~~
| =(X)= |~ / (O (O) \
~~~~~~~ ~| =(Y_)=- |
~~~~ ~~~| U |~~
Project: https://github.com/containers/podman
Website: https://podman.io
Desktop: https://podman-desktop.io
Documents: https://docs.podman.io
YouTube: https://youtube.com/@Podman
X/Twitter: @Podman_io
Mastodon: @Podman_io@fosstodon.org
```
OS: `openSUSE Tumbleweed-Slowroll 20240405`
### Steps to reproduce the issue
1. podman --log-level debug run quay.io/podman/hello
### Describe the results you received
Debug output:
```
pavin@suse-pc:~> podman --log-level debug run quay.io/podman/hello
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --log-level debug run quay.io/podman/hello)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/pavin/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/pavin/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/pavin/.local/share/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=btrfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Setting parallel job count to 37
DEBU[0000] Pulling image quay.io/podman/hello (policy: missing)
DEBU[0000] Looking up image "quay.io/podman/hello" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "quay.io/podman/hello:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/pavin/.local/share/containers/storage+/run/user/1000/containers]@faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543"
DEBU[0000] Found image "quay.io/podman/hello" as "quay.io/podman/hello:latest" in local containers storage
DEBU[0000] Found image "quay.io/podman/hello" as "quay.io/podman/hello:latest" in local containers storage ([overlay@/home/pavin/.local/share/containers/storage+/run/user/1000/containers]@faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543)
DEBU[0000] exporting opaque data as blob "sha256:faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543"
DEBU[0000] Looking up image "quay.io/podman/hello:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "quay.io/podman/hello:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/pavin/.local/share/containers/storage+/run/user/1000/containers]@faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543"
DEBU[0000] Found image "quay.io/podman/hello:latest" as "quay.io/podman/hello:latest" in local containers storage
DEBU[0000] Found image "quay.io/podman/hello:latest" as "quay.io/podman/hello:latest" in local containers storage ([overlay@/home/pavin/.local/share/containers/storage+/run/user/1000/containers]@faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543)
DEBU[0000] exporting opaque data as blob "sha256:faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543"
DEBU[0000] Looking up image "quay.io/podman/hello" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "quay.io/podman/hello:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/pavin/.local/share/containers/storage+/run/user/1000/containers]@faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543"
DEBU[0000] Found image "quay.io/podman/hello" as "quay.io/podman/hello:latest" in local containers storage
DEBU[0000] Found image "quay.io/podman/hello" as "quay.io/podman/hello:latest" in local containers storage ([overlay@/home/pavin/.local/share/containers/storage+/run/user/1000/containers]@faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543)
DEBU[0000] exporting opaque data as blob "sha256:faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543"
DEBU[0000] Inspecting image faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543
DEBU[0000] exporting opaque data as blob "sha256:faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543"
DEBU[0000] exporting opaque data as blob "sha256:faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543"
DEBU[0000] Inspecting image faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543
DEBU[0000] Inspecting image faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543
DEBU[0000] Inspecting image faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543
DEBU[0000] using systemd mode: false
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/etc/containers/seccomp.json"
DEBU[0000] Allocated lock 39 for container 11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5
DEBU[0000] exporting opaque data as blob "sha256:faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543"
DEBU[0000] Cached value indicated that idmapped mounts for overlay are not supported
DEBU[0000] Check for idmapped mounts support
DEBU[0000] Created container "11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5"
DEBU[0000] Container "11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5" has work directory "/home/pavin/.local/share/containers/storage/overlay-containers/11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5/userdata"
DEBU[0000] Container "11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5" has run directory "/run/user/1000/containers/overlay-containers/11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5/userdata"
DEBU[0000] Not attaching to stdin
INFO[0000] Received shutdown.Stop(), terminating! PID=1675
DEBU[0000] Enabling signal proxying
DEBU[0000] Made network namespace at /run/user/1000/netns/netns-3f6e137c-4103-cdee-74f7-4cbc130c6a45 for container 11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5
DEBU[0000] pasta arguments: --config-net --dns-forward 169.254.0.1 -t none -u none -T none -U none --no-map-gw --quiet --netns /run/user/1000/netns/netns-3f6e137c-4103-cdee-74f7-4cbc130c6a45
DEBU[0000] overlay: mount_data=lowerdir=/home/pavin/.local/share/containers/storage/overlay/l/UYTWG2RM4JNGPSWWTPPFRVH7VQ,upperdir=/home/pavin/.local/share/containers/storage/overlay/5f2ee976d09e8cba6b184b35e091e8f91a86d8c6600b4b6bb117f5f45fec218a/diff,workdir=/home/pavin/.local/share/containers/storage/overlay/5f2ee976d09e8cba6b184b35e091e8f91a86d8c6600b4b6bb117f5f45fec218a/work,userxattr
DEBU[0000] Mounted container "11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5" at "/home/pavin/.local/share/containers/storage/overlay/5f2ee976d09e8cba6b184b35e091e8f91a86d8c6600b4b6bb117f5f45fec218a/merged"
DEBU[0000] Created root filesystem for container 11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5 at /home/pavin/.local/share/containers/storage/overlay/5f2ee976d09e8cba6b184b35e091e8f91a86d8c6600b4b6bb117f5f45fec218a/merged
DEBU[0000] Unmounted container "11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5"
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Cleaning up container 11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Container 11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5 storage is already unmounted, skipping...
DEBU[0000] ExitCode msg: "pasta failed with exit code 1:\nno routable interface for ipv6: ipv6 is disabled\ncouldn't open network namespace /run/user/1000/netns/netns-3f6e137c-4103-cdee-74f7-4cbc130c6a45: permission denied\n"
Error: pasta failed with exit code 1:
No routable interface for IPv6: IPv6 is disabled
Couldn't open network namespace /run/user/1000/netns/netns-3f6e137c-4103-cdee-74f7-4cbc130c6a45: Permission denied
DEBU[0000] Shutting down engines
```
### Describe the results you expected
Expected hello world to run fine! 🥲
### podman info output
```yaml
pavin@suse-pc:~> podman info
host:
  arch: amd64
  buildahVersion: 1.35.3
  cgroupControllers:
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.3.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: unknown'
  cpuUtilization:
    idlePercent: 98.6
    systemPercent: 0.56
    userPercent: 0.83
  cpus: 12
  databaseBackend: sqlite
  distribution:
    distribution: opensuse-tumbleweed
    version: "20240405"
  eventLogger: journald
  freeLocks: 2002
  hostname: suse-pc
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.7.4-1-default
  linkmode: dynamic
  logDriver: journald
  memFree: 3634827264
  memTotal: 13975113728
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-1.3.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: netavark-1.10.3-1.2.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: crun-1.14.4-1.2.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.14.4
      commit: a220ca661ce078f2c37b38c92e66cf66c012d9c1
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-20240220.1e6f92b-1.2.x86_64
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU General Public License, version 2 or later
      <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.3-1.2.x86_64
    version: |-
      slirp4netns version 1.2.3
      commit: unknown
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 7042494464
  swapTotal: 8589930496
  uptime: 19h 34m 26.00s (Approximately 0.79 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.opensuse.org
  - registry.suse.com
  - docker.io
store:
  configFile: /home/pavin/.config/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 2
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/pavin/.local/share/containers/storage
  graphRootAllocated: 498681774080
  graphRootUsed: 75340529664
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 4
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/pavin/.local/share/containers/storage/volumes
version:
  APIVersion: 5.0.1
  Built: 1712166221
  BuiltTime: Wed Apr 3 23:13:41 2024
  GitCommit: ""
  GoVersion: go1.21.9
  Os: linux
  OsArch: linux/amd64
  Version: 5.0.1
```
### Podman in a container
No
### Privileged Or Rootless
Rootless
### Upstream Latest Release
Yes
### Additional environment details
Tested on two identical physical machines running openSUSE Tumbleweed-Slowroll and a not so identical VM running openSUSE Tumbleweed.
### Additional information
Settings don't seem to matter. VM network interface has IPv6 enabled but physical machines both have IPv6 disabled for the interface. Issue is reproducible on all 3 machines.
It’s an AppArmor bug in openSUSE that’s being fixed:
GitHub issue (opened 10 Apr 24, closed 10 Apr 24; labels: kind/bug, pasta): the same issue quoted above.
DEBU[0000] Inspecting image faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543
DEBU[0000] Inspecting image faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543
DEBU[0000] using systemd mode: false
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/etc/containers/seccomp.json"
DEBU[0000] Allocated lock 39 for container 11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5
DEBU[0000] exporting opaque data as blob "sha256:faee43598994ffee776563b1ec48b69148777d0c909651d02b5d38dfff041543"
DEBU[0000] Cached value indicated that idmapped mounts for overlay are not supported
DEBU[0000] Check for idmapped mounts support
DEBU[0000] Created container "11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5"
DEBU[0000] Container "11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5" has work directory "/home/pavin/.local/share/containers/storage/overlay-containers/11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5/userdata"
DEBU[0000] Container "11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5" has run directory "/run/user/1000/containers/overlay-containers/11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5/userdata"
DEBU[0000] Not attaching to stdin
INFO[0000] Received shutdown.Stop(), terminating! PID=1675
DEBU[0000] Enabling signal proxying
DEBU[0000] Made network namespace at /run/user/1000/netns/netns-3f6e137c-4103-cdee-74f7-4cbc130c6a45 for container 11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5
DEBU[0000] pasta arguments: --config-net --dns-forward 169.254.0.1 -t none -u none -T none -U none --no-map-gw --quiet --netns /run/user/1000/netns/netns-3f6e137c-4103-cdee-74f7-4cbc130c6a45
DEBU[0000] overlay: mount_data=lowerdir=/home/pavin/.local/share/containers/storage/overlay/l/UYTWG2RM4JNGPSWWTPPFRVH7VQ,upperdir=/home/pavin/.local/share/containers/storage/overlay/5f2ee976d09e8cba6b184b35e091e8f91a86d8c6600b4b6bb117f5f45fec218a/diff,workdir=/home/pavin/.local/share/containers/storage/overlay/5f2ee976d09e8cba6b184b35e091e8f91a86d8c6600b4b6bb117f5f45fec218a/work,userxattr
DEBU[0000] Mounted container "11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5" at "/home/pavin/.local/share/containers/storage/overlay/5f2ee976d09e8cba6b184b35e091e8f91a86d8c6600b4b6bb117f5f45fec218a/merged"
DEBU[0000] Created root filesystem for container 11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5 at /home/pavin/.local/share/containers/storage/overlay/5f2ee976d09e8cba6b184b35e091e8f91a86d8c6600b4b6bb117f5f45fec218a/merged
DEBU[0000] Unmounted container "11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5"
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Cleaning up container 11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Container 11d3a38db676307a3deff4e09df32ecf3f4a12e1cf4605b98d1ea1f516435cb5 storage is already unmounted, skipping...
DEBU[0000] ExitCode msg: "pasta failed with exit code 1:\nno routable interface for ipv6: ipv6 is disabled\ncouldn't open network namespace /run/user/1000/netns/netns-3f6e137c-4103-cdee-74f7-4cbc130c6a45: permission denied\n"
Error: pasta failed with exit code 1:
No routable interface for IPv6: IPv6 is disabled
Couldn't open network namespace /run/user/1000/netns/netns-3f6e137c-4103-cdee-74f7-4cbc130c6a45: Permission denied
DEBU[0000] Shutting down engines
```
### Describe the results you expected
Expected hello world to run fine! 🥲
### podman info output
```yaml
pavin@suse-pc:~> podman info
host:
  arch: amd64
  buildahVersion: 1.35.3
  cgroupControllers:
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.3.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: unknown'
  cpuUtilization:
    idlePercent: 98.6
    systemPercent: 0.56
    userPercent: 0.83
  cpus: 12
  databaseBackend: sqlite
  distribution:
    distribution: opensuse-tumbleweed
    version: "20240405"
  eventLogger: journald
  freeLocks: 2002
  hostname: suse-pc
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.7.4-1-default
  linkmode: dynamic
  logDriver: journald
  memFree: 3634827264
  memTotal: 13975113728
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-1.3.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: netavark-1.10.3-1.2.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: crun-1.14.4-1.2.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.14.4
      commit: a220ca661ce078f2c37b38c92e66cf66c012d9c1
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-20240220.1e6f92b-1.2.x86_64
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU General Public License, version 2 or later
      <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.3-1.2.x86_64
    version: |-
      slirp4netns version 1.2.3
      commit: unknown
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 7042494464
  swapTotal: 8589930496
  uptime: 19h 34m 26.00s (Approximately 0.79 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.opensuse.org
  - registry.suse.com
  - docker.io
store:
  configFile: /home/pavin/.config/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 2
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/pavin/.local/share/containers/storage
  graphRootAllocated: 498681774080
  graphRootUsed: 75340529664
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 4
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/pavin/.local/share/containers/storage/volumes
version:
  APIVersion: 5.0.1
  Built: 1712166221
  BuiltTime: Wed Apr 3 23:13:41 2024
  GitCommit: ""
  GoVersion: go1.21.9
  Os: linux
  OsArch: linux/amd64
  Version: 5.0.1
```
### Podman in a container
No
### Privileged Or Rootless
Rootless
### Upstream Latest Release
Yes
### Additional environment details
Tested on two identical physical machines running openSUSE Tumbleweed-Slowroll and a not-so-identical VM running openSUSE Tumbleweed.
### Additional information
Network settings don't seem to matter: the VM's network interface has IPv6 enabled while both physical machines have IPv6 disabled for the interface, yet the issue is reproducible on all 3 machines.
Newer Podman versions use pasta instead of slirp4netns for rootless networking by default; apparently the default openSUSE AppArmor profile does not allow pasta to do its thing, so an update to the rules is required and is in the works.
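If you want to confirm that AppArmor is what's blocking pasta on your machine, a quick sketch (the profile name passed to `aa-complain` is an assumption — check `sudo aa-status` for what your distro actually ships):

```shell
# Search the kernel log for AppArmor denials that mention pasta
# (run shortly after reproducing the failure)
sudo journalctl -k --grep 'apparmor' | grep -i pasta

# Temporarily switch the pasta profile to complain mode, so denials
# are only logged instead of enforced. The path below is an
# assumption about how the profile is attached on your system.
sudo aa-complain /usr/bin/pasta
```

If the first command shows `DENIED` entries for pasta, the AppArmor profile is the culprit; complain mode or the pending rules update should unblock it.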
In the meantime, you can explicitly specify slirp4netns as the network driver.
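For example, either per container or as the rootless default via containers.conf (a sketch; `default_rootless_network_cmd` is the Podman 5 containers.conf knob for choosing the rootless network backend):

```shell
# One-off: force slirp4netns for a single container
podman run --rm --network slirp4netns quay.io/podman/hello

# Or make it the default for all rootless containers (including
# podman compose) by adding this to
# ~/.config/containers/containers.conf:
#
#   [network]
#   default_rootless_network_cmd = "slirp4netns"
```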
Thanks everyone, the problem is solved with snapshot 20240409.
system closed this topic on April 17, 2024, 4:07pm.
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.