Trouble with bond + bridges + VLANs

I have a host that has been working for many years on one network. I recently replaced all my switches with managed ones and set up VLANs to segregate the traffic. The host's ports are configured correctly on the switch as trunk ports carrying the necessary VLANs.

I have had 2 NICs bonded and attached to a bridge for several LXC containers. Following what documentation I could find for this kind of setup, I introduced VLANs: I created 3 VLAN interfaces whose real interface is the bond, and enslaved each one in its own bridge. I also created another bridge for untagged mgmt traffic and currently have the bond itself enslaved in it; that was the only way I could reach the host at all.

From each of the containers I can reach the outside network, although I haven't yet verified whether that traffic is actually tagged. However, I cannot reach any of the containers from outside the host. This is all configured through YaST. Am I required to use ebtables rules? I haven't found any documentation saying so.
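When I do check the tagging, I expect something like this on the bond to show any 802.1Q headers (the "vlan" filter matches tagged frames):

# -e prints the link-level header, which includes the VLAN tag if present
tcpdump -e -n -i bond0 vlan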

ifcfg-bond0


BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=802.3ad miimon=100'
BONDING_SLAVE1='eth1'
BOOTPROTO='none'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR=''
MTU=''
NAME=''
NETMASK=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
USERCONTROL='no'
PREFIXLEN=''
BONDING_SLAVE0='eth0'
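For what it's worth, mode=802.3ad requires LACP to be configured on the switch side as well; since the switches are new, the negotiation state can be checked with:

# shows the aggregator info and per-slave LACP state
cat /proc/net/bonding/bond0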

ifcfg-br1


BOOTPROTO='static'
BRIDGE='yes'
BRIDGE_FORWARDDELAY='0'
BRIDGE_PORTS=''
BRIDGE_STP='off'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='192.168.0.4/24'
MTU=''
NAME=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'

ifcfg-br10


BOOTPROTO='static'
BRIDGE='yes'
BRIDGE_FORWARDDELAY='0'
BRIDGE_PORTS='vlan10'
BRIDGE_STP='off'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='192.168.10.2/24'
MTU=''
NAME=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
PREFIXLEN='24'

ifcfg-br50


BOOTPROTO='static'
BRIDGE='yes'
BRIDGE_FORWARDDELAY='0'
BRIDGE_PORTS='vlan50'
BRIDGE_STP='off'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='192.168.50.2/24'
MTU=''
NAME=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
PREFIXLEN='24'

ifcfg-br70


BOOTPROTO='static'
BRIDGE='yes'
BRIDGE_FORWARDDELAY='0'
BRIDGE_PORTS='vlan70'
BRIDGE_STP='off'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='192.168.70.2/24'
MTU=''
NAME=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
PREFIXLEN='24'

ifcfg-eth0


BOOTPROTO='none'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR=''
MTU=''
NAME='82574L Gigabit Network Connection'
NETMASK=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='hotplug'

ifcfg-eth1


BOOTPROTO='none'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR=''
MTU=''
NAME='82574L Gigabit Network Connection'
NETMASK=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='hotplug'

ifcfg-vlan10


BOOTPROTO='none'
BROADCAST=''
ETHERDEVICE='bond0'
ETHTOOL_OPTIONS=''
IPADDR=''
MTU=''
NAME=''
NETMASK=''
NETWORK=''
PREFIXLEN=''
REMOTE_IPADDR=''
STARTMODE='auto'
VLAN_ID='10'

ifcfg-vlan50


BOOTPROTO='none'
BROADCAST=''
ETHERDEVICE='bond0'
ETHTOOL_OPTIONS=''
IPADDR=''
MTU=''
NAME=''
NETMASK=''
NETWORK=''
PREFIXLEN=''
REMOTE_IPADDR=''
STARTMODE='auto'
VLAN_ID='50'

ifcfg-vlan70


BOOTPROTO='none'
BROADCAST=''
ETHERDEVICE='bond0'
ETHTOOL_OPTIONS=''
IPADDR=''
MTU=''
NAME=''
NETMASK=''
NETWORK=''
PREFIXLEN=''
REMOTE_IPADDR=''
STARTMODE='auto'
VLAN_ID='70'
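To confirm how the VLAN devices and bridges ended up wired together:

# shows the 802.1Q id and the real device under each VLAN interface
ip -d link show vlan10

# lists every bridge and its enslaved ports
brctl show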


Bridge netfilter settings (from /proc/sys/net/bridge/):

bridge-nf-call-arptables=1
bridge-nf-call-ip6tables=1
bridge-nf-call-iptables=1
bridge-nf-filter-pppoe-tagged=0
bridge-nf-filter-vlan-tagged=0
bridge-nf-pass-vlan-input-dev=0
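From what I've read, with bridge-nf-call-iptables=1 bridged frames also traverse the host's iptables FORWARD chain, so a host firewall could silently drop traffic to the containers. A temporary way to rule that out would be:

# stop netfilter from seeing bridged frames (test only; revert afterwards)
sysctl -w net.bridge.bridge-nf-call-iptables=0
sysctl -w net.bridge.bridge-nf-call-ip6tables=0
sysctl -w net.bridge.bridge-nf-call-arptables=0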

One correction: I do not actually have the bond enslaved in any bridge (note that BRIDGE_PORTS is empty on br1). I tried that, but it wasn't needed, and I had forgotten that I removed it.

I'm not clear on one thing:
if you're configuring VLANs on a managed switch, then the switch ports are assigned to your VLANs, so I don't see the purpose of also configuring VLAN tags on the device attached to each port (and that may be the source of your problem).

I suspect you should simply remove the VLAN tagging from your openSUSE host, leave the bonding in place, and make sure your cables connect to switch ports configured for the correct VLAN.

I'm assuming a "flat" VLAN architecture.
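As a rough sketch (the bridge name and address are only examples, reusing your br1 address), the flat setup in /etc/sysconfig/network would be a single bridge directly over the bond, with no vlanNN interfaces:

ifcfg-br0:

BOOTPROTO='static'
BRIDGE='yes'
BRIDGE_PORTS='bond0'
BRIDGE_STP='off'
IPADDR='192.168.0.4/24'
STARTMODE='auto'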

TSU

I ended up figuring it out. I had configured the network for each of my containers via the container's config file. Once I removed that and configured networking within each container itself, they were all able to communicate on the network. I'm not sure why it didn't work when configured via the config file.
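For reference, the kind of stanza I mean is the network section in the container's config file (key names depend on the LXC version; newer releases use lxc.net.0.*, older ones lxc.network.*), something like:

# /var/lib/lxc/<container>/config -- illustrative values only
lxc.net.0.type = veth
lxc.net.0.link = br10
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 192.168.10.20/24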

No container should be configured to access the network directly… or at least that's not a recommended way.

If you're using LXC containers,
your networking is similar to any other libvirt-managed virtualization: you create and configure bridge devices (e.g. br0) for a networking type (e.g. NAT, bridged, host-only) and attach them to a physical network interface.
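With libvirt, for instance, attaching a guest to such a bridge can be done like this (the domain name here is just a placeholder):

# attach a bridged NIC to an existing guest definition
virsh attach-interface --domain mycontainer --type bridge --source br0 --config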

If you're using Docker containers,
you need to configure a forwarding rule in your container configuration. It's a common mistake for beginning Docker users to attach a container with its own IP address to a network interface and then find that networking doesn't work automatically. Personally, when I do presentations on Docker, I configure containers to "share" the HostOS network interface to avoid this issue, though that isn't practical if you're running a multitude of containers on the same machine. I describe this, and link to the necessary Docker articles, in my wiki page for first-time Docker users.
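As a sketch, the two approaches look roughly like this (the image and ports are placeholders):

# publish a container port via a forwarding rule on the host
docker run -d -p 8080:80 nginx

# or share the HostOS network stack outright
docker run -d --network host nginx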

HTH,
TSU