Are you using any IPv6 at all? If not, then I suggest disabling it (YaST -> Network Settings -> General tab); if you are, you might want to set inet_protocols in Postfix's main.cf to ipv4 only rather than all.
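If you do want Postfix to stop trying IPv6, the inet_protocols parameter can be changed with postconf. A minimal sketch, assuming a systemd-based host (as on openSUSE) and root privileges:

```shell
# Show the current setting (modern Postfix defaults this to "all")
postconf inet_protocols
# Restrict Postfix to IPv4 only
postconf -e 'inet_protocols = ipv4'
# Reload so the smtp/smtpd processes pick up the change
systemctl reload postfix
```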
IPv6 is enabled for the hosts in our network. We have an assigned IPv6 block (fd2f:4760:521f:3f3c::/64) and have given each host an address. That is the extent of our explicit use of IPv6.
I discovered the IPv6 command for listing the routes:
$ ip -6 route list
::1 dev lo proto kernel metric 256 pref medium
fd2f:4760:521f:3f3c::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fd2f:4760:521f:3f3c::c0a8:4501 dev eth0 metric 1024 pref medium
default via fe80::2eb8:edff:fe5a:9d44 dev eth0 proto ra metric 1024 expires 1717sec hoplimit 64 pref medium
I do not understand why you suggested changing the Postfix configuration.
This is a cache of per-destination entries. When the Linux kernel resolves a route to a destination, it puts it in the cache for future use. AFAIK it is used only for IPv6 today, no longer for IPv4. This case is unrelated to “routing between interfaces”. The output of

$ ip -6 route show cache

is probably a red herring. As far as I can tell, those cached entries are created in only two cases: on an ICMP route redirect, and to override the Path MTU value. Neither should happen very often.

The third entry is how many times the function that allocates a dst_entry has been called since boot (the error message comes from this function). This happens for every remote destination after the kernel has determined a route to it. 0x1c2ffe2 == 29556706, which is quite a lot; of course it depends on uptime. When was the system booted (who -b)?

The second-to-last entry is the actual number of dst_entries currently allocated. It is small (0xe == 14), but it is possible that you had a burst of activity at some point.

Pragmatic answer: if you hit this limit, you may want to increase it. But of course it would be useful to (at least try to) understand why you have so many entries. What does this host do? Is it a web/mail or similar server? Are you using torrents (or some other peer-to-peer software)? How many concurrent connections do you usually have?

The fact is that when this error message is output, the kernel has hit the limit on the number of destination entries it maintains. If you do not see it anymore, it was probably caused by a sudden burst of activity from many different remote systems. We will likely never know.
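For reference, the limit in question can be inspected and raised via sysctl. I am assuming the hex counters discussed above were read from /proc/net/rt6_stats, and 16384 below is only an example value, not a recommendation:

```shell
# Counters for the IPv6 FIB / destination cache (hex, space-separated)
cat /proc/net/rt6_stats
# Current ceiling on IPv6 destination-cache entries
sysctl net.ipv6.route.max_size
# Raise it for the running kernel (run as root; 16384 is an example)
sysctl -w net.ipv6.route.max_size=16384
# Persist the change across reboots
echo 'net.ipv6.route.max_size = 16384' > /etc/sysctl.d/90-ipv6-route.conf
```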
Web services, including a public server.
It has only one network interface.
If that message is appearing, then the physical network connection is not the bottleneck.
Nothing; the routing being performed by the Kernel is hitting a limit, which can only be somewhere in the system's queues.
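To confirm which message the Kernel is actually emitting, grep the kernel log; the sample line below is what the dst_entry allocator prints on recent kernels, but the exact wording may vary by kernel version:

```shell
# Search the kernel log for the warning
journalctl -k | grep -i 'route cache is full'
# A matching line looks roughly like:
#   IPv6: Route cache is full: consider increasing sysctl net.ipv6.route.max_size.
```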
The figures provided by Vincent Bernat show that the Kernel's IPv4 routing lookup time is somewhat less than 40 ns and its IPv6 routing lookup time is less than 500 ns – given enough hardware …
In other words, the Kernel can handle about 25 million IPv4 routing lookups per second and about 2 million IPv6 routing lookups per second.
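Those rates are simply the reciprocals of the lookup times; a quick check (the 40 ns and 500 ns figures are the ones quoted above):

```shell
# lookups per second = 1 / lookup time; printed in millions
awk 'BEGIN {
    printf "IPv4: %.0f million lookups/s\n", 1 / 40e-9  / 1e6
    printf "IPv6: %.0f million lookups/s\n", 1 / 500e-9 / 1e6
}'
# IPv4: 25 million lookups/s
# IPv6: 2 million lookups/s
```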
Given a standard Gigabit Ethernet cable, the maximum number of routing requests per second is:
1 Gb/s == 125 million bytes per second == (given 84 bytes per minimum-size Ethernet packet) 1.488095 million Ethernet packets per second
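The 84-byte figure is the minimum 64-byte frame plus the 8-byte preamble and the 12-byte inter-frame gap, so the wire-rate calculation works out as:

```shell
# 1 Gb/s = 125,000,000 B/s; each minimum-size slot on the wire is
# 64 B frame + 8 B preamble + 12 B inter-frame gap = 84 B
awk 'BEGIN { printf "%.6f million packets/s\n", 125e6 / (64 + 8 + 12) / 1e6 }'
# 1.488095 million packets/s
```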
If the CPU is weak and/or there is not enough memory for the Kernel's buffers, it will not be able to serve the number of Ethernet packets per second that a 1 Gb/s Ethernet cable is capable of delivering …