I’ve been running openSUSE in lots of environments for many years, and IPV4 and IPV6 have always worked well. A few of my servers are at OVH locations, and although OVH has a rather strange way of dealing with networking, I’ve still had stable IPV4 and IPV6 service there.
Two weeks ago I leased a new server at a new OVH data center in Toronto. Their way of dealing with networking there (what they refer to as their “version 3”) is different even from the other OVH data centers. At this location, on this server, I am running into a problem: IPV6 stops working a few hours after a reboot, and nothing short of a full reboot can bring it back.
Server Hardware: OVH “Scale-i1” server
Operating System: openSUSE Leap 15.6, freshly loaded, server configuration
Network Engine: wicked
IPV4 SETUP (Works)
On the IPV4 side, they basically use /32 netmasks for everything. I’ll use example addresses here, but the server has:
IPV4 address (sample): 1.2.3.4/32
IPV4 gateway (sample): 5.6.7.8/32
So to make this work, I use /etc/sysconfig/network files as shown:
ifcfg-br0:
IPADDR='1.2.3.4/32'
ifroute-br0:
5.6.7.0/24 - - br0
5.6.7.8/32 - - br0
routes:
default 5.6.7.8 - br0
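For clarity, those files should end up doing roughly the equivalent of the following one-off commands (a sketch using the sample addresses, not a transcript of what wicked actually runs):
ip addr add 1.2.3.4/32 dev br0
ip route add 5.6.7.0/24 dev br0    # gateway's block reachable on-link
ip route add 5.6.7.8/32 dev br0    # host route to the gateway itself
ip route add default via 5.6.7.8 dev br0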
Although this feels ugly to me, it works, and IPV4 is online and fine.
With IPV6, however, things are even more problematic.
IPV6 SETUP (Fails)
On the IPV6 side, they issue a /56 netblock, but the gateway is just bizarre:
IPV6 netblock (sample): 2607:5000:1:1::/56
IPV6 address (sample): 2607:5000:1:1::5/56
IPV6 gateway (ACTUAL): fe80::1/128
Yes, that is the actual default gateway I’m being told to use.
Some of you may say that using a link-local address, fe80::1, as a default gateway goes against the RFCs, or is otherwise technically problematic. I thought so as well. However, there are articles posted online claiming that using fe80::1 as a default route is perfectly legitimate. I have no idea. I initially thought that my IPV6 block had simply not been provisioned correctly, and pushed back on OVH, but they insisted that it had been.
Some of you may point out that this is not what the posted OVH documentation says. You would be correct. In their response to my ticket, OVH support wrote, in part:
–snip–
IPV6 gateway: fe80:0000:0000:0000:0000:0000:0000:0001
The information you see on your control panel is not a mistake, and it is in fact the same for all of our 3rd generation Advance, scales and High grade servers.
–snip–
They have also acknowledged that there is, as of yet, no published documentation on this configuration anywhere. But this is how they are rolling now, at least at their new data center, so it’s what I’m forced into.
They have essentially stated that the following three commands should get IPV6 working:
ip addr add 2607:5000:1:1::5/56 dev br0
ip -6 route add fe80:0000:0000:0000:0000:0000:0000:0001 dev br0
ip -6 route add default via fe80:0000:0000:0000:0000:0000:0000:0001 dev br0
So, to make this work, I disabled autoconf and accept_ra in sysctl as per OVH recommendations (the exact sysctl lines are shown after the config below), and went with the following in my /etc/sysconfig/network files:
ifcfg-br0:
IPADDR_1='2607:5000:1:1::5/56'
ifroute-br0:
fe80::1/128 - - br0
routes:
default fe80::1 - br0
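For completeness, the accept_ra / autoconf settings I mentioned above are roughly these (from memory, applied for “all” and for br0; the file name is just my own choice):
/etc/sysctl.d/90-ipv6-static.conf:
net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.br0.autoconf = 0
net.ipv6.conf.br0.accept_ra = 0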
Although this also feels crazy to me, it does actually work, and IPV6 service does come up and is functional.
However, and this is the issue, after about 4 hours, IPV6 service halts. Nothing short of a physical server reboot will restore it. I have tried:
systemctl restart network
systemctl restart wickedd
flushing the routes and rebuilding
removing the address and re-adding
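(To be concrete, the route flushing/rebuilding and the remove/re-add were roughly along these lines, using the sample addresses:)
ip -6 route flush dev br0
ip -6 route add fe80::1 dev br0
ip -6 route add default via fe80::1 dev br0
ip addr del 2607:5000:1:1::5/56 dev br0
ip addr add 2607:5000:1:1::5/56 dev br0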
None of it works. The routing tables are unchanged and look correct; however, running the mtr command OVH asked for:
mtr -6 -r -c 10 some:outside:ipv6::address
produces only the headers and no further output. That same command, run from the outside world against my address, shows a loss somewhere close to my server:
mtr -6 -r -c 10 2607:5000:1:1::5
Start: 2025-04-15T19:52:58-0700
HOST: ovh Loss% Snt Last Avg Best Wrst StDev
1.|-- 2603:5000:2:2bff:ff:ff: 0.0% 10 0.7 0.9 0.7 1.2 0.1
2.|-- 2001:41d0:0:50::2:5348 0.0% 10 1.0 1.1 1.0 1.2 0.1
3.|-- 2001:41d0:0:50::6:892 0.0% 10 0.3 0.3 0.3 0.4 0.0
4.|-- be100-100.bhs-g2-nc5.qc.c 0.0% 10 1.1 0.9 0.8 1.1 0.1
5.|-- be101.yto-tr1-sbb2-8k.on. 0.0% 10 8.5 9.6 8.5 11.4 0.9
6.|-- 2607:5300:50::4 0.0% 10 10.8 10.9 9.9 12.1 0.6
7.|-- fdff:f003:400::17 0.0% 10 8.7 8.8 8.6 8.8 0.0
8.|-- ??? 100.0 10 0.0 0.0 0.0 0.0 0.0
It’s not clear what that hop 8 is, but when IPV6 is working just after an initial reboot, that hop 8 is still there, still showing a 100% loss… but my server shows up as hop 9:
9.|-- 2607:5000:1:1::5 0.0% 10 8.4 8.4 8.4 8.4 0.0
The only difference is that, after a few hours, when IPV6 fails, hop 9 vanishes, and the mtr stops at hop 8.
As you can probably guess from the use of “br0”, my machine is running as a Xen host. When IPV6 fails after about 4 hours, the guests are also impacted. However, IPV6 locally on the server still works: guests can ping each other and the host, and the host can ping the guests. Critically (I think this is the critical part, anyway), both the host and the guests CAN PING THE GATEWAY:
ping6 fe80::1%br0 (from the host)
ping6 fe80::1%eth0 (from the guests)
all show packet traffic and normal responses. This makes me inclined to believe that the server itself is doing just fine, and that the problem lies somewhere within OVH, outside of my server.
But I don’t want to get into a “contest” with them, so I am hoping that experts here might see something I’m missing, suggest things I can try, or offer an opinion on what might be happening here. Any insights would be greatly appreciated! Thank you!
Glen