Podman "Temporary failure resolving" DNS problem while connected to VPN

Hi everyone,

I’m facing a weird issue that only happens while I’m connected to my corporate VPN, and only from within podman containers: DNS resolution fails.

I open a shell in an ubuntu container:

podman run -it ubuntu bash

Then I run apt-get update and the repos refresh just fine.

The relevant DNS configs are:

(container)

root@ef71e8cae5df:/# cat /etc/resolv.conf 
nameserver 169.254.0.1
nameserver 109.0.66.10
nameserver 109.0.66.20
nameserver 2a02:842a:8697:5001:b6e2:65ff:fed5:e33

(host)

 🐧 andrea 15:53:21 17/04/24  🏠  ✅  cat /etc/resolv.conf 
# Generated by NetworkManager
nameserver 109.0.66.10
nameserver 109.0.66.20
nameserver 2a02:842a:8697:5001:b6e2:65ff:fed5:e33

(container)

root@ef71e8cae5df:/# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain
::1     localhost localhost.localdomain ipv6-localhost ipv6-loopback
fe00::0 ipv6-localnet
ff00::0 ipv6-mcastprefix
ff02::1 ipv6-allnodes
ff02::2 ipv6-allrouters
ff02::3 ipv6-allhosts
192.168.1.88    ef71e8cae5df upbeat_feistel -> this is my WiFi adapter's IP address

(host)

 🐧 andrea 15:53:24 17/04/24  🏠  ✅  cat /etc/hosts
#
# hosts         This file describes a number of hostname-to-address
#               mappings for the TCP/IP subsystem.  It is mostly
#               used at boot time, when no name servers are running.
#               On small systems, this file can be used instead of a
#               "named" name server.
# Syntax:
#    
# IP-Address  Full-Qualified-Hostname  Short-Hostname
#

127.0.0.1       localhost localhost.localdomain
::1             localhost localhost.localdomain ipv6-localhost ipv6-loopback

# special IPv6 addresses
fe00::0         ipv6-localnet

ff00::0         ipv6-mcastprefix
ff02::1         ipv6-allnodes
ff02::2         ipv6-allrouters
ff02::3         ipv6-allhosts

Now I leave the container, and connect to my corporate VPN.

I again create an ubuntu container on the fly and try to refresh the repos via apt-get update, as before.

But this time I get:

root@3fb4cf455737:/# apt-get update
Ign:1 http://archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 http://archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:3 http://archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:4 http://security.ubuntu.com/ubuntu jammy-security InRelease
Ign:1 http://archive.ubuntu.com/ubuntu jammy InRelease
Ign:4 http://security.ubuntu.com/ubuntu jammy-security InRelease
Ign:2 http://archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:3 http://archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:1 http://archive.ubuntu.com/ubuntu jammy InRelease
Ign:4 http://security.ubuntu.com/ubuntu jammy-security InRelease
Ign:2 http://archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:3 http://archive.ubuntu.com/ubuntu jammy-backports InRelease
Err:1 http://archive.ubuntu.com/ubuntu jammy InRelease
  Temporary failure resolving 'archive.ubuntu.com'
Err:4 http://security.ubuntu.com/ubuntu jammy-security InRelease
  Temporary failure resolving 'security.ubuntu.com'
Err:2 http://archive.ubuntu.com/ubuntu jammy-updates InRelease
  Temporary failure resolving 'archive.ubuntu.com'
Err:3 http://archive.ubuntu.com/ubuntu jammy-backports InRelease
  Temporary failure resolving 'archive.ubuntu.com'
Reading package lists... Done
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/jammy/InRelease  Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/jammy-updates/InRelease  Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/jammy-backports/InRelease  Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/jammy-security/InRelease  Temporary failure resolving 'security.ubuntu.com'
W: Some index files failed to download. They have been ignored, or old ones used instead.

The relevant DNS configs are:

(container)

root@3fb4cf455737:/# cat /etc/resolv.conf 
search tuiad.net
nameserver 10.85.38.15
nameserver 10.85.39.15

(host)

 🐧 andrea 15:33:32 17/04/24  🏠  ✅  cat /etc/resolv.conf 
#@VPNC_GENERATED@ -- this file is generated by vpnc
# and will be overwritten by vpnc
# as long as the above mark is intact
# Generated by NetworkManager
nameserver 10.85.38.15
nameserver 10.85.39.15
search tuiad.net

(container)

root@3fb4cf455737:/# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain
::1     localhost localhost.localdomain ipv6-localhost ipv6-loopback
fe00::0 ipv6-localnet
ff00::0 ipv6-mcastprefix
ff02::1 ipv6-allnodes
ff02::2 ipv6-allrouters
ff02::3 ipv6-allhosts
192.168.1.88    host.containers.internal host.docker.internal
2a02:842a:8697:5001:aeed:f77f:14d3:1596 3fb4cf455737 elated_black

(host)

 🐧 andrea 15:57:13 17/04/24  🏠  ✅  cat /etc/hosts
#
# hosts         This file describes a number of hostname-to-address
#               mappings for the TCP/IP subsystem.  It is mostly
#               used at boot time, when no name servers are running.
#               On small systems, this file can be used instead of a
#               "named" name server.
# Syntax:
#    
# IP-Address  Full-Qualified-Hostname  Short-Hostname
#

127.0.0.1       localhost localhost.localdomain
::1             localhost localhost.localdomain ipv6-localhost ipv6-loopback

# special IPv6 addresses
fe00::0         ipv6-localnet

ff00::0         ipv6-mcastprefix
ff02::1         ipv6-allnodes
ff02::2         ipv6-allrouters
ff02::3         ipv6-allhosts

I have also tried systemctl disable --now firewalld.service, to no avail.

Which makes me suspect that the firewall has nothing to do with this, and maybe it’s more of a podman problem…

Any help is greatly appreciated.

Thanks!

EDIT: I also tried sniffing packets with Wireshark (root mode) on all adapters, with the filter dns.qry.name == "archive.ubuntu.com". While not connected to the VPN I can see the request/response recorded, but as soon as I connect, nothing appears anymore. It’s probably a useless test, since the failure clearly happens before any request even leaves my machine, but I still wanted to point it out.

I have a hunch what might be going on here…

The container has its own network namespace by default which is not shared with the host network namespace.

When creating a new container it would pull the resolv config from the host, but when the host is on a private network (VPN) the container can’t connect to the private DNS servers in the host’s resolv config.

To test the theory, could you manually change the container resolv.conf nameservers to 1.1.1.1 or something?

Edit:

Several files will be automatically created within the container. These include /etc/hosts, /etc/hostname, and /etc/resolv.conf to manage networking. These will be based on the host’s version of the files, though they can be customized with options (for example, --dns will override the host’s DNS servers in the created resolv.conf).

Source: podman-run — Podman documentation
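
Just as a quick way to test the theory (1.1.1.1 here is only an example public resolver), something like this should be enough:

podman run --dns 1.1.1.1 -it ubuntu bash
# then, inside the container:
apt-get update    # should resolve archive.ubuntu.com if plain connectivity from the namespace is fine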

Thanks, but I need the DNS servers provided by my corporate VPN because of some infrastructure I need to reach whose names are not resolvable by public DNS (e.g. our Apache Kafka clusters in various TEST/QA environments).

Therefore using 1.1.1.1 is not a solution.

What I don’t understand is why, with the same resolv.conf, I’m able to resolve any name (within my company’s intranet as well as out on the internet) as long as I stay in a shell on the host, but as soon as the shell is inside podman, it fails. Yet the resolv.conf is seemingly identical.
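
To make the comparison concrete, this is the kind of check I mean (getent is already in the ubuntu base image; the intranet hostname below is just a placeholder):

# on the host: both resolve fine
getent hosts archive.ubuntu.com
getent hosts some-internal-host.tuiad.net

# in a fresh container with the same resolv.conf: resolution fails
podman run --rm ubuntu getent hosts archive.ubuntu.com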

This is because the container’s network namespace is different and isolated from the host.
You can explicitly have it use the host’s network namespace:
https://docs.podman.io/en/latest/markdown/podman-run.1.html#network-mode-net
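
In its simplest form that would be something like:

podman run --network=host -it ubuntu bash
# the container now shares the host's network namespace,
# so it reaches the VPN DNS servers exactly like a shell on the host does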

Ok, I see, thanks

I’ll try setting that to host then

I’m actually facing this problem with a docker compose file; I just reproduced it with a simpler scenario to narrow the problem down.

I guess the equivalent there would be to set the network mode in the compose file… I hope it won’t affect resolution of container names; right now they can talk to one another by their container_name.
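
Something along these lines is what I have in mind for the compose file (an untested sketch for now):

version: "3.8"
services:
  nginx1:
    image: nginx:latest
    container_name: nginx1
    network_mode: host    # ports: mappings are dropped with host networking
  nginx2:
    image: nginx:latest
    container_name: nginx2
    network_mode: host    # both services would need to listen on different ports to avoid clashing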

Anyway, I’ll give it a try tomorrow, meanwhile thanks!


So, podman run works fine when using --net=host

Now I have to fix my actual problem, which involves a docker compose file, and here it gets a bit trickier.

While connected to VPN

version: "3.8"
services:
  nginx1:
    image: nginx:latest
    container_name: nginx1
    network_mode: slirp4netns:port_handler=slirp4netns
    ports:
      - "8080:8080"
  nginx2:
    image: nginx:latest
    container_name: nginx2
    network_mode: slirp4netns:port_handler=slirp4netns
    ports:
      - "8081:8081"

Internal name resolution fails (curl nginx2 from nginx1, for example)
External name resolution works (apt-get update resolves deb.debian.org correctly)

Now if I comment out the network_mode property in both containers, the opposite happens:

version: "3.8"
services:
  nginx1:
    image: nginx:latest
    container_name: nginx1
    #network_mode: slirp4netns:port_handler=slirp4netns
    ports:
      - "8080:8080"
  nginx2:
    image: nginx:latest
    container_name: nginx2
    #network_mode: slirp4netns:port_handler=slirp4netns
    ports:
      - "8081:8081"

Internal name resolution works (curl nginx2 from nginx1, for example)
External name resolution fails (apt-get update can’t resolve deb.debian.org)

While disconnected from VPN

version: "3.8"
services:
  nginx1:
    image: nginx:latest
    container_name: nginx1
    network_mode: slirp4netns:port_handler=slirp4netns
    ports:
      - "8080:8080"
  nginx2:
    image: nginx:latest
    container_name: nginx2
    network_mode: slirp4netns:port_handler=slirp4netns
    ports:
      - "8081:8081"

Internal name resolution fails (curl nginx2 from nginx1, for example)
External name resolution works (apt-get update resolves deb.debian.org correctly)

version: "3.8"
services:
  nginx1:
    image: nginx:latest
    container_name: nginx1
    #network_mode: slirp4netns:port_handler=slirp4netns
    ports:
      - "8080:8080"
  nginx2:
    image: nginx:latest
    container_name: nginx2
    #network_mode: slirp4netns:port_handler=slirp4netns
    ports:
      - "8081:8081"

Internal name resolution works (curl nginx2 from nginx1, for example)
External name resolution works (apt-get update resolves deb.debian.org correctly)

How can I get both to work while connected to the VPN?

Anything specific I can change in my compose file?

Thanks

Well, I think I got fascinated by the whole daemon-less and rootless aspect of podman and jumped ship from docker mostly for that, trying to make the migration as frictionless as possible, but I guess I have some reading to do:

Ah yes, pods are the way :100:
I don’t have much experience with docker :melting_face:
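
Roughly (names and ports are just placeholders), the idea is:

podman pod create --name web -p 8080:80
podman run -d --pod web --name nginx1 nginx:latest
podman run --rm -it --pod web ubuntu bash
# containers in the same pod share a single network namespace,
# so from the ubuntu container nginx1 is reachable at http://localhost:80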

So, even after going the pod way I am facing the same problem :smiley:

I then went back to docker, and guess what, the problem was still there.

This convinced me that it must be some kind of restriction on my company’s firewall.

I went back once more to podman and tried to repeat the experiment while connected to another VPN (a Surfshark one I use to get around geoblocking), and my containers kept their ability to resolve names, both their siblings’ names and names on the Internet.

So I came to terms with the fact that this issue must be caused by my company VPN. I dunno, for example, if it may refuse queries originating more than 1 hop away, which might be the case for those coming from a container.

Thanks anyway and have a good one :slight_smile:
