High LAN ping latencies?

Hi, I tried checking my network performance and was shocked to find high latency. I connected two systems in different ways to see how quickly they could ping each other: direct connection, store-and-forward switching, and cut-through switching. All cases used gigabit networking, and in every configuration the latency was essentially the same. I tried different packet sizes and intervals. Both machines used default network adapter configurations with the standard 1500-byte MTU.

64B/128B packets
pinging the machine’s network adapter: around 0.040ms
pinging another machine: around 0.400-0.500ms
pinging my router (routerboard): around 0.200ms

1470B packets
pinging the machine’s network adapter: around 0.040ms
pinging another machine: around 0.600-0.900ms
pinging my router (routerboard): around 0.450ms

65500B packets
pinging the machine’s network adapter: around 0.150ms
pinging another machine: around 6ms (73% packet loss)
pinging my router (routerboard): around 2.2ms
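
The tests were run with plain ping, something along these lines on the Linux side (addresses are just examples; -s sets the ICMP payload, so the exact on-wire sizes differ slightly from the figures above):

  ping -c 100 -s 56    -i 0.2 192.168.1.20   # small packets
  ping -c 100 -s 1442  -i 0.2 192.168.1.20   # ~1470-byte packets, still within a 1500 MTU
  ping -c 100 -s 65472 -i 0.2 192.168.1.20   # ~65500-byte datagrams, fragmented on the wire
  # intervals shorter than 0.2s (e.g. -i 0.01) need root with iputils ping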

For testing I had a Marvell adapter in a Core 2 Duo machine, a Realtek adapter in an AMD Bulldozer machine, a RouterBOARD 450G and a GS724T switch. The results seem a lot more disturbing than I expected. When I changed the ping interval to 0.01s the latencies dropped by 0.1-0.2ms. Strangely, the RB450G had very good latencies compared to much newer systems. Are these results normal, or is there any way to improve them? Using a store-and-forward switch didn't affect the latencies even for larger packets, yet jumbo frames significantly increased them.

I tried maxing out the gigabit connection using MikroTik's btest software on Windows and managed to get full bidirectional gigabit bandwidth (Task Manager showed 99% network utilisation), so the network adapters can handle the load. I remember that back in college the network server (a Core 2 Duo running CentOS with an integrated Intel NIC) had pings of 0.100ms or lower.
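
For the Linux side, a similar saturation test could be run with iperf instead (the address is a placeholder); this is just a sketch of the idea, not what I actually used:

  # on the first machine
  iperf -s
  # on the second machine: 30-second dual test, -d adds a simultaneous reverse stream
  iperf -c 192.168.1.10 -d -t 30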

First,

Did you test using a dedicated crossover cable? You should, to establish your baseline. In that configuration, if you still have high latencies then your problem is likely bad connections or bad network cards. Are you sure your cards are configured properly, e.g. full duplex, auto-negotiation or forced full gigabit?
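
On Linux, ethtool will show you what the link actually negotiated; eth0 below is just a placeholder for your interface, and forcing the mode is only an example:

  # report link speed, duplex and whether auto-negotiation is enabled
  ethtool eth0
  # as root: restrict advertisement to gigabit full duplex
  # (gigabit over copper still requires auto-negotiation to stay on)
  ethtool -s eth0 speed 1000 duplex full autoneg on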

Once you eliminate hardware issues, then you can take a closer look at optimizing settings.

Since you're using Gigabit and might be interested in moving large amounts of data, you should take a look at my Guide on setting an appropriate TCP/IP Congestion Control algorithm and enlarging the TCP/IP buffers. In it, I try to cover all the ways you can optimize at Layer 3.
https://sites.google.com/site/4techsecrets/optimize-and-fix-your-network-connection
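
As a rough sketch of the kind of tuning the guide talks about (the algorithm and buffer values below are only illustrations, not recommendations):

  # see which congestion control algorithms are available and which one is active
  sysctl net.ipv4.tcp_available_congestion_control net.ipv4.tcp_congestion_control
  # as root: pick an algorithm and enlarge the TCP buffers (min/default/max in bytes)
  sysctl -w net.ipv4.tcp_congestion_control=cubic
  sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

Note that these settings affect TCP throughput on loaded links; they won't change the round-trip time of an idle ping.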

But once you start introducing devices like switches and other hosts on the network, you may need to look at other issues… and that's where the Congestion Control algorithm can try to mitigate (but not eliminate) problems. By default, TCP/IP tuning is optimized for reliable, high-quality connections transferring tiny files, like typical web browsing or reading individual email messages (read my Guide for scenarios which aren't considered "default").

HTH,
TSU

I could test using more systems if I had time. I'm trying to achieve a latency of around 0.2ms or lower. My network cards are properly configured, although I think one laptop with a Marvell adapter has slightly higher latency because it's older and uses the PCI bus. Gigabit only runs at full duplex anyway. I can easily force full gigabit on the switch/router, but I don't think it would make a difference to the latency. I am using auto-negotiation with full duplex, and both the switch and the machines reported gigabit connectivity.

I've also deduced that neither my switch nor my router is the problem. As my readings show, if I use an MTU bigger than 1500 the latency increases a lot more because of the store-and-forward switch. On Windows, transmitting 9000 bytes with an MTU of 1500 took less than 1ms, while transmitting 9000 bytes with an MTU of 9000 took 2+ms. The switch is fast enough that its forwarding latency at MTU 1500 is equivalent to cut-through switching latency and adds nothing compared to connecting the systems directly. Both IPv4 and IPv6 showed the same latencies at MTU 1500.
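
For reference, the minimum a store-and-forward hop can add is the serialization time of one frame, since the whole frame has to arrive before it is forwarded:

  # per-hop serialization delay = frame bytes * 8 / link rate (1 Gbit/s here)
  echo "1500 * 8 / 1000000000" | bc -l   # ~0.000012 s, about 12 microseconds per 1500-byte frame
  echo "9000 * 8 / 1000000000" | bc -l   # ~0.000072 s, about 72 microseconds per 9000-byte frame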

I recently tested using an i7 laptop with a Broadcom NIC and got 0.3ms against the AMD desktop with the Realtek NIC.

If you're using relatively recent hardware (e.g. less than 5 years old), I have never heard of MTU being a factor.

IMO, testing with more systems isn't likely to help.

I'm not clear on whether you actually tested using a crossover cable. If you haven't, that's fundamental to establishing a baseline for at least those two machines.

Don't overlook the possibility of poorly made cables, e.g. swap out cables if you don't have a test meter (a good investment if latency and network health are critically important).

If you're testing only tiny packet sizes (ping) over a direct connection, then Layer 2 frame sizes and Layer 3 window sizes likely won't be factors.
If you're running a load-testing app (even a flood of pings), then that could be a different story.
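
For example, a flood ping puts a very different load on the path than one echo per second (needs root; the address is a placeholder):

  # send the next request as soon as a reply arrives (or 100 per second, whichever is more)
  ping -f -c 10000 192.168.1.20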

TSU

tsu2 wrote:
> If you’re using relatively recent hardware (eg < 5 yrs old) I have never
> heard that MTU is a factor.

> I’m not clear whether you actually tested using a cross-over cable. If
> you haven’t, that should be fundamental to verify a baseline for at
> least those two machines.

I agree a test with a direct cable connection is an excellent idea.

If the hardware is relatively recent, it probably doesn't need to be a crossover cable, since newer NICs are auto-sensing.

As I said in the first post, I already tested with a direct/crossover cable and it made no difference compared to using a switch. I managed to reduce the latency a little by messing with some settings, but it's still sitting around 300 microseconds from one machine to another when it should be below 200 microseconds. I don't think pings are affected by the firewall. It might be openSUSE's default network settings. I found an article stating that Linux has a low_latency option, but I am not sure whether openSUSE has the same thing. I have also tried different cables and got the same result. I am pretty sure the problem is software, not hardware. All cables are below 10 meters in total length (from one machine to another).
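
If the option the article meant is the net.ipv4.tcp_low_latency sysctl (that's a guess on my part), it should be present on openSUSE as well, although as far as I can tell it only influences TCP, not ICMP ping:

  # check whether the knob exists and its current value (0 = off)
  sysctl net.ipv4.tcp_low_latency
  # as root: enable it for the running system; it does not change ICMP echo behaviour
  sysctl -w net.ipv4.tcp_low_latency=1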

I made some changes and managed to lower the latencies, but they still aren't satisfactory. When I ping the machine's own adapter I get around 0.05ms when it should be 0.02ms. What other configuration would speed it up? Would any BIOS settings help?

My switch supports flow control. Would it work if I enabled flow control on the switch only, instead of setting it on the machines?
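
For what it's worth, on the machine side flow control is the pause-frame setting that ethtool can show and change (eth0 is a placeholder; the second command is only an example):

  # show the pause-frame (flow control) parameters on the NIC
  ethtool -a eth0
  # as root: enable receive/transmit pause frames on the NIC side
  ethtool -A eth0 rx on tx on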

IMO
if you have the ability to test without going through a switch, that should be your permanent starting point. Don’t introduce any other device like a switch until you achieve a satisfactory baseline.

If you’re simply testing a small number of pings, eg <10 at a time then you should be taking a look at your NICs… their settings and drivers.
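
A quick way to see exactly what you're running and how it's configured (eth0 is a placeholder for your interface):

  # which driver, driver version and firmware the NIC uses
  ethtool -i eth0
  # current offload features; some of these trade latency for CPU efficiency
  ethtool -k eth0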

IMO it’s very unlikely your issue is higher on the OS stack.

And, of course make sure the cable(s) you’re using are high quality or well made.

TSU

I've tested with a crossover already. The pings are now between 0.2 and 0.4ms. It's still not satisfying because I need to get the average down to 0.2ms. It seems the extra latency from my Marvell NIC is because it's a Core 2 Duo-era model, and Sony doesn't seem to use Intel or Realtek adapters.

What worries me is that pinging the machine's own adapter averages 0.05ms when it should average 0.02ms. I had someone test with the exact same network adapter and he got 0.1-0.2ms when pinging another machine and 0.02ms for his own adapter over a gigabit connection. I have properly shielded cables with gold-plated connectors. I use one or two pings per second, and I don't think cables or switches would change the NIC's local response time.

Which is why I said:

you should be taking a look at your NICs… their settings and drivers.

Depending on the capabilities of your NICs, there are probably optimum settings which are not the defaults.
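
Interrupt coalescing is the classic example: many drivers default to holding interrupts back briefly to save CPU, which costs latency. Assuming your driver exposes it, it looks something like this (eth0 and the values are only illustrative, and not every driver accepts every field):

  # show the current interrupt coalescing settings
  ethtool -c eth0
  # as root: make receive/transmit interrupts fire immediately
  ethtool -C eth0 rx-usecs 0 tx-usecs 0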

TSU