I have had a 13.1 server running for a few years with no problems. Yesterday my web app was unable to access any external web service; I couldn’t even ping an IP address. I didn’t change anything.
I turned off the firewall to allow the services to be accessed. For some reason the network interface was not assigned a zone, so I reassigned it to the external zone, where I allow incoming connections on ports 22, 80, 443 and 9292. But when I turn the firewall back on, every port shows as filtered in an nmap scan and all outgoing traffic is blocked.
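For reference, if this is openSUSE 13.1 the firewall is most likely SuSEfirewall2, and the zone and port assignments would live in /etc/sysconfig/SuSEfirewall2. A minimal sketch of the relevant settings (the interface name eth0 is an assumption):

```
# /etc/sysconfig/SuSEfirewall2 (excerpt, sketch only)
# Assign the interface to the external zone (eth0 is an assumption):
FW_DEV_EXT="eth0"
# Open the incoming TCP ports mentioned above:
FW_SERVICES_EXT_TCP="22 80 443 9292"
```

After editing, the firewall would be restarted with `rcSuSEfirewall2 restart` or via YaST.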
I am not sure where to go from here. Here is the ifconfig output:
Then it’s probably time to investigate the history of this machine. Where was the original image created from: something provided to you, or something you created?
You provided ifconfig results, but what might be in your network configuration files?
If your machine’s image originally came from your Provider, you may be able to request a fresh copy of that image.
How complex is your machine? Would it be difficult to migrate your apps and data to a newly provided image?
If you can launch a newly provided image, even without your apps, you should have a working network configuration to compare against.
How difficult would it be to roll back to before you noticed problems? Virtual machines can often be backed up quickly and easily, simply by cloning the machine periodically.
Also, I’d be curious what interface configuration files are currently listed.
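On a SUSE-family system the per-interface configuration files would typically be inspected like this (the paths assume the sysconfig layout):

```
# List the per-interface configuration files (sysconfig layout assumed):
ls -l /etc/sysconfig/network/ifcfg-*
# Show the static routes, including the default gateway:
cat /etc/sysconfig/network/routes
```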
Also, is this a private OpenVZ deployment or one hosted with a commercial Provider?
So, it appears that venet does not support bridge devices of any type, which is consistent with your brctl result…
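That matches how venet is usually described: a point-to-point device managed entirely by the host, so there is nothing for brctl to show. A quick way to check what is actually present inside the container (interface names here are assumptions):

```
# venet0 inside the container suggests the venet model;
# an eth0 inside the container usually suggests the veth model:
ip addr show
# Bridges, if any (expected to be empty with venet):
brctl show
```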
Do you have control over the Host or only inside the container?
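If you are not sure which side you are on, a rough heuristic (assuming the classic OpenVZ /proc layout, where /proc/bc exists only on the Host while /proc/vz exists on both) is:

```shell
# Distinguish OpenVZ host / container / neither (heuristic, layout assumed):
if [ -d /proc/bc ]; then
  echo "OpenVZ host"
elif [ -d /proc/vz ]; then
  echo "OpenVZ container"
else
  echo "not OpenVZ"
fi
```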
I’m suspecting at this point that the proper solution might be to remove and re-create the network interfaces in your OpenVZ container altogether. Before doing something like that, though, one would have to closely read and understand the current OpenVZ networking architecture, for instance whether it has any similarities with Docker containers (either could have provided inspiration for the other). Docker requires configuring the “outside” interface on the Host to match the “inside” interface in the container. At least in Docker, the name of an interface does not seem to matter; what matters is understanding and correctly detailing what is exposed (typically all the network properties, including the network ID and port).
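If it does come to re-creating the interfaces, vzctl on the Host is the usual tool. A hedged sketch, where the container ID 101 and the interface name are assumptions:

```
# veth model: add an eth0 device inside container 101 (CTID assumed):
vzctl set 101 --netif_add eth0 --save
# venet model: assign an IP address to the container instead
# (192.0.2.10 is a documentation address, not a real suggestion):
vzctl set 101 --ipadd 192.0.2.10 --save
```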
If OpenVZ networking is indeed similar to Docker networking, then you might also consider that really basic iptables filtering is relatively useless; the pin-holes you create to enable network connections do essentially the same thing.
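For completeness, the kind of pin-holing being discussed looks roughly like this in raw iptables (a sketch assuming a default-drop INPUT policy, run as root on whichever side filters the traffic):

```
# Default-deny incoming, then punch holes for the published ports:
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
for p in 22 80 443 9292; do
  iptables -A INPUT -p tcp --dport "$p" -j ACCEPT
done
```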