The server is in a datacenter and the Remote Console/KVM isn’t always available. I do have a rescue system where I can mount the disk and restore a network config that does work, but without the bridge.
Since it happened with the latest Tumbleweed update, I don’t think it’s hardware-related.
What did change with the update is that the interface enp0s31f6 was previously named eth0.
Is there perhaps another file somewhere that still points to eth0 and could be causing this?
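From the rescue system I can at least sweep the usual config locations for leftovers, something like this (the paths are just the usual suspects, not an exhaustive list):

```
# Look for stale "eth0" references in the usual places:
grep -r eth0 /etc/sysconfig/network/ 2>/dev/null
grep -r eth0 /etc/udev/rules.d/ 2>/dev/null
grep -r eth0 /etc/wicked/ 2>/dev/null
# libvirt network definitions can also pin an interface name:
grep -r eth0 /etc/libvirt/ 2>/dev/null
```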
Simply reverting the naming convention to what it was before may not be enough…
You have to inspect the whole dependency chain to verify that each piece hands off to the next correctly.
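For example, assuming the default Tumbleweed setup with wicked, you can at least see how it views each interface and its state:

```
# Show the state wicked has for every interface, bridges included:
wicked ifstatus all
# Show the effective configuration wicked would apply to one device
# (br0 is just a placeholder name):
wicked show-config br0
```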
Short of that…
Just create a new bridge configured however you wish (it must have a different name, like br5) and then configure your VMs to use the new bridge device. There is no system penalty for having unused bridge devices. Or, once the working device has been created, delete the old one so that its name can be re-used (although I wouldn’t re-use the name unless I was totally convinced the old device had been fully removed).
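A minimal sketch of such a bridge, assuming wicked and an ifcfg file (br5, the address, and enp0s31f6 are only examples; adjust to your setup):

```
# /etc/sysconfig/network/ifcfg-br5
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.0.2.10/24'
BRIDGE='yes'
BRIDGE_PORTS='enp0s31f6'
BRIDGE_STP='off'
BRIDGE_FORWARDDELAY='0'
```

followed by `wicked ifup br5` to bring it up without a reboot.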
I don’t know whether any official statement has been made deprecating brctl (unlike, for instance, ifconfig).
There should not be any problem using brctl, or expecting brctl to remain available (through the bridge-utils package), for the foreseeable future. It’s only if you’re scripting the creation or management of bridge devices that you might want to use ip or bridge instead, so your scripts can live on without any concern about maintenance and obsolescence. To me, brctl is useful because its command options are simple, intuitive, and easy to remember.
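For comparison, the same bridge with the legacy and the modern tools (using the example names from above):

```
# Legacy bridge-utils:
brctl addbr br5
brctl addif br5 enp0s31f6

# iproute2 equivalent, the safer long-term choice for scripts:
ip link add name br5 type bridge
ip link set enp0s31f6 master br5
ip link set br5 up
```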
I also don’t know that there is any concept of a bridge device “starting.”
It’s a static configuration bound to an ordinary network interface, so it either works or it doesn’t, and it’s “instant” once the underlying network interface is functional.
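That state is directly visible; a bridge is just another link entry:

```
# -d adds bridge-specific details (STP, forward delay, and so on):
ip -d link show type bridge
```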
I didn’t figure it was actually starting something, yet it’s odd that it fails like that, and a couple of times the machine even rebooted. The service provider offered to swap the hardware, but I’m not quite there yet. I can’t imagine how this would be a hardware failure.
I’ll create a new bridge with a new name and see how that goes, thanks.
It’s just a (SUSE-specific) systemd-controlled framework used for ‘automated’ network configuration at boot. Alternative management frameworks are systemd-networkd.service and NetworkManager (each with its own strengths).
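On openSUSE, network.service is usually an alias for whichever framework is active, so you can check which one currently owns it:

```
# Which framework is handling the network?
systemctl status network.service
# or resolve the alias directly:
readlink /etc/systemd/system/network.service
```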
Although I wouldn’t necessarily discourage any idea, in general bridge devices are pretty universal…
It should not matter how a bridge device is created, with whichever utility or virtualization technology;
the bridge device should be recognizable and usable by any application or virtualization.
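That is easy to verify, since every tool is looking at the same kernel object:

```
# The same bridge, seen through different tools:
brctl show          # bridge-utils view
bridge link show    # iproute2 view of the attached ports
```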
Which reminds me…
I don’t think my earlier question was answered: is libvirt installed and being used?
By the way, I’m beginning to suspect that virtualization was not installed using the YaST “Install hypervisor and tools” module, and that this is another case where virtualization was installed using the Software Manager.
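Something like this would answer it (assuming the standard openSUSE package and service names):

```
# Is libvirt installed and running?
rpm -q libvirt-daemon
systemctl status libvirtd.service
# If it is, what networks does it manage?
virsh net-list --all
```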