11.1 Xen bridging breaks jumbo frames

The new Xen network bridging in openSUSE 11.1 is crippling my ability to relay jumbo frames to my Xen virtual machines, preventing me from migrating them to hosts running 11.1 as I had planned.

Although the host bridges (the br1 device in my case) accept an MTU value of 8500 in YaST, they limit the value to 1500 bytes in practice.

This was not a problem in 11.0. Is there any way I can adjust this, and continue using DRBD replication and iSCSI at decent speeds?

Apologies, this is an update rather than a reply, with more information.

To reproduce the problem: on a system with two NICs, install a default 64-bit openSUSE with the Xfce desktop, adding the Xen server pattern to the software selection. Choose the recommended Xen bridging network configuration during the installation (or unconfigure the NICs and configure the bridges br0->eth0 and br1->eth1 manually afterwards).

Using YaST Network Devices during or after the install, enter an MTU value of 8500 under the General tab for br1.

No error message is shown when exiting YaST, and there is no sign of problems in /var/log/messages:
Mar 5 13:37:29 crowley SuSEfirewall2: /var/lock/SuSEfirewall2.booting exists which means system boot in progress, exit.
Mar 5 13:37:30 crowley ifup: br1
Mar 5 13:37:30 crowley kernel: igb 0000:04:00.1: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Mar 5 13:37:30 crowley kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Mar 5 13:37:30 crowley kernel: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Mar 5 13:37:30 crowley kernel: device eth1 entered promiscuous mode
Mar 5 13:37:30 crowley kernel: br1: topology change detected, propagating
Mar 5 13:37:30 crowley kernel: br1: port 1(eth1) entering forwarding state
Mar 5 13:37:30 crowley ifup-bridge: br1 forwarddelay (see man ifcfg-bridge)
Mar 5 13:37:30 crowley ifup-bridge: … ready
Mar 5 13:37:30 crowley ifup: br1
Mar 5 13:37:30 crowley ifup: IP address:
Mar 5 13:37:30 crowley ifup:
Mar 5 13:37:30 crowley avahi-daemon[4413]: Joining mDNS multicast group on interface br1.IPv4 with address
However, ifconfig shows an MTU of 1500 for both br1 and the bridged eth1 device, and when booting on the console, or during an rcnetwork restart, you can see:
br1
br1 Ports: [eth1]
br1 forwarddelay (see man ifcfg-bridge) … ready
RTNETLINK answers: Invalid argument
RTNETLINK answers: Invalid argument
Cannot set mtu of 8500 to interface br1.
br1 IP address:
/etc/sysconfig/network/ifcfg-br1 contains the MTU value as entered.
The problem does not occur when the MTU value is set to less than 1500; in that case the reduced MTU is applied, again according to ifconfig.
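For reference, a bridge configuration of this shape typically looks roughly like the following; the address is a dummy and the exact values are illustrative, not the actual file from this setup:

```
# illustrative /etc/sysconfig/network/ifcfg-br1
BOOTPROTO='static'
IPADDR='192.168.1.10/24'
STARTMODE='auto'
BRIDGE='yes'
BRIDGE_PORTS='eth1'
BRIDGE_STP='off'
BRIDGE_FORWARDDELAY='0'
MTU='8500'
```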

I have the same problem.
What I have found so far:

The bridge automatically applies the rule mtu = min(MTU of all member interfaces), so setting the MTU in the config file does nothing…
I think that was the easiest way to implement the bridge while still giving good performance. It would be better if the bridge tried to set each member interface to the given MTU, stepped down until the interface accepted a value, and then set all the other members to the same MTU. Not the best, but it would be a better solution…
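The rule above can be sketched as a tiny shell function. This is only an illustration of the kernel's behaviour (take the minimum over all enslaved ports), not code from the bridge itself:

```shell
# Mirror of the kernel bridge rule: the bridge MTU is the minimum MTU
# of all member interfaces, so a single 1500-byte port caps the bridge.
bridge_mtu() {
    min=$1
    for mtu in "$@"; do
        if [ "$mtu" -lt "$min" ]; then
            min=$mtu
        fi
    done
    echo "$min"
}

bridge_mtu 9000 8500   # prints 8500: both ports are jumbo-capable
bridge_mtu 9000 1500   # prints 1500: a single 1500-byte port drags the bridge down
```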

So the main problem is:
I modified the xm command files to enable an mtu= setting in the VM config file…
It can now be used in the config file and in the /etc/xen/scripts.

I do not know why, but in that state the virtual interface only accepts a maximum MTU of 1500!

But once the domain has started, I can set it higher (to 9000 in my case)…

So I have used vifname in the config file, to make it easy for a script to change the setting afterwards.

The problem is the time between the xm create command and the ifconfig command: the bridge goes down to MTU 1500 for a few seconds. If I have other domains running, this affects them too.

So, does anybody know why I am not able to set the MTU higher than 1500 in vif-common or vif-bridge, and why I can set it afterwards?

OK, I could create another, unused bridge, set it in the config file, set the vif MTU, remove the vif from that bridge and add it to the right one… but I do not think that is the best way. Or I could hack the script so that if no bridge is set in the config file, the vif is not added to any bridge… but these are all dirty workarounds…
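The vifname approach described above could be scripted roughly like this. The interface name vifdb1, the target MTU and the 2-second delay are assumptions taken from this thread, not a tested recipe; with DRYRUN=1 the script only prints the command instead of running it:

```shell
# Raise the MTU of a named vif shortly after `xm create` has run.
# Assumes the domU config contains vifname=vifdb1 (illustrative name).
run() {
    if [ "${DRYRUN:-0}" = "1" ]; then
        echo "$@"          # dry run: print the command only
    else
        "$@"               # real run: needs root
    fi
}

raise_vif_mtu() {
    vif=$1
    mtu=$2
    sleep 2                # the domU seems to need ~2 s before this works
    run ip link set dev "$vif" mtu "$mtu"
}

DRYRUN=1
raise_vif_mtu vifdb1 9000  # prints: ip link set dev vifdb1 mtu 9000
```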

So the main question is:

Why does the virtual interface not allow a higher MTU setting in the initial phase, i.e. in the vif-common or vif-bridge scripts?

Additional info: I have to wait about 2 seconds after the domU has started, and only then can I set the MTU higher…
So maybe the domU has to communicate with the Xen hypervisor first, or something like that?

I am reading this thread and working on the jumbo frames problem.

In YaST you need to set the Ethernet devices to Static and give them a dummy address so that they can start correctly.

Or you can change it directly in /etc/sysconfig/network/ifcfg-ethX.
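An illustrative static configuration with a dummy address (the values here are placeholders, not from this setup):

```
# illustrative /etc/sysconfig/network/ifcfg-eth1
BOOTPROTO='static'
IPADDR='10.0.0.1/24'    # dummy address so the interface comes up
STARTMODE='auto'
MTU='9000'
```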


With this, they work fine in dom0.

Another great tool to check whether the MTU is OK is tracepath:

hqxen2s:/etc/xen/scripts # tracepath mortadelo
1: hqxen2si.iscsi.promosoft.lcl ( 0.100ms pmtu 9000
1: mortadelo.iscsi.promosoft.lcl ( 1.424ms reached
Resume: pmtu 9000 hops 1 back 1

You can see the MTU of 9000.

Well, with this you have MTU 9000 on the Xen server's network.

Now the problem is with the domU. In a domU virtual machine, like a SLES guest, you can set the MTU in YaST -> Network Devices, on the General tab, but the main problem is that when the virtual machine is created, a vif network card is created for it:

vif2.0 Link encap:Ethernet HWaddr FE:FF:FF:FF:FF:FF
RX packets:27550653 errors:0 dropped:0 overruns:0 frame:0
TX packets:77210665 errors:0 dropped:4 overruns:0 carrier:0
collisions:0 txqueuelen:32
RX bytes:1593314429 (1519.5 Mb) TX bytes:116132778178 (110752.8 Mb)

The main problem is that I do not know how to add the MTU parameter when the network is created by the network-bridge script.

Does anyone have a solution? Because if the vif interface stays at the default MTU of 1500 and you configure the domU with 9000, it does not work correctly.

Thanks in advance