Slow upload rate

Howdy,
I’m very close (probably a few days to a week away) to posting a detailed, likely solution to your problem.

The first step is to verify that your network issues are packet-loss related. I’ve been looking around for tools; so far the best I’ve found is ntop. Run it from a command line, then open a web page:

http://hostname:3000

You can use “localhost” in the above URL if monitoring from the same machine. You don’t have to install a full-blown web server; ntop launches its own embedded HTTP server.
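As a quick sketch (assuming ntop is installed and you keep its default web port of 3000):

```shell
# Start ntop with its embedded web server listening on port 3000
# (packet capture requires root)
sudo ntop -w 3000
```

Then point a browser at http://localhost:3000 on the same machine, or http://hostname:3000 from another one.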

If you’ve verified packet losses (or maybe you just want to go ahead and make changes), here’s where to start.

I’m preparing a more complete update with comments and analysis, but the first step is to increase the TCP socket buffers, particularly the send buffers in your case. Older info can be found here:

Linux Tune Network Stack (Buffers Size) To Increase Networking Performance

As noted in that article and elsewhere, the default buffer sizes in today’s Linux kernels are quite small, appropriate only for “typical” use. Your attempts to transfer a very large file, especially over a very long distance, are not “typical” use.
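For example, the limits can be raised at runtime like this (the 16 MB figures below are illustrative, not tuned to your link; size the buffers to your bandwidth-delay product):

```shell
# Raise the hard caps on socket buffer sizes (bytes)
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216

# TCP per-socket min/default/max buffers; the send side (tcp_wmem)
# matters most for a slow upload
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
```

As a rough sizing check: a 10 Mbit/s uplink with a 400 ms round trip needs about 10,000,000 / 8 × 0.4 ≈ 500 KB in flight at once, so the maximums above leave plenty of headroom.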

The next step is to make your use of the buffers more efficient. The default congestion control algorithm in today’s Linux kernels is cubic, which by reputation is “very conservative” and assumes transfers of relatively small files over excellent network links.

For now, until I can post updated info, you can change the congestion control algorithm to one that’s more appropriate to the size of file, your available theoretical bandwidth, and link quality:

TCP and Linux’ Pluggable Congestion Control Algorithms LG #135

If you run the command to list available algorithms, you’ll find the list on that page is mostly but not completely current. I’ve also found that choosing based on the general description alone may not work as expected… Unless you dig into exactly what each algorithm does, trial and error is probably best.
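To see what your kernel offers right now (both reads work unprivileged):

```shell
# Algorithms currently available to the kernel
sysctl -n net.ipv4.tcp_available_congestion_control

# The algorithm in use for new connections
sysctl -n net.ipv4.tcp_congestion_control
```

Switching is a single command, e.g. `sudo sysctl -w net.ipv4.tcp_congestion_control=veno`; note it affects new connections only.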

Also, you’ll find that while modifying the TCP buffer sizes persists across reboots, the congestion control algorithm change does not. You may want to follow my current thread in this newsgroup on my attempt to find a persistent solution:

How to Persist TCP Congestion Control algorithm change?
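One common approach (assuming your distribution reads /etc/sysctl.conf at boot, as SuSE does) is:

```shell
# Record the setting so it is re-applied at every boot
echo "net.ipv4.tcp_congestion_control = veno" | sudo tee -a /etc/sysctl.conf

# Re-read the file immediately, without rebooting
sudo sysctl -p
```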

Stay tuned for my updated info, which will attempt to be more complete than what’s scattered across the web (much of which is a bit dated)…

HTH,
Tony

Wow, thanks for all that info! Unfortunately I can’t say it’s been a great improvement. I’ll give you an example: I go to MediaFire and try to upload something. The upload begins, it shows it’s uploading at, say, less than 100 kbps, then it says “recalculating” (surely because the connection is delayed), and then the rate starts gradually going down. After modifying the TCP buffers the speed is above 200 kbps, but the “calculating” thing keeps going; it doesn’t appear the connection is established all the way, so after a minute or so I’m at 2% of a file of less than 8 MB.

Also, I’m using the “veno” algorithm and it doesn’t appear to have made much of a difference. I think I’m having a different problem :confused:

In my upcoming, more detailed work I point out that the congestion control algorithm isn’t a total magic bullet, and I describe in more detail when and where I’ve found that veno makes a difference… In a nutshell, it’s mainly when dealing with a degraded wireless signal. If your wireless is line of sight and strong over a short distance with no interference, veno won’t yield benefits. I also point out that the generalized descriptions of each algorithm aren’t completely reliable; you may simply have to use trial and error to see if any one of them improves throughput.

Instead, I’d recommend you try one of the other algorithms that focus on the sending side, and maybe test with larger files. Depending on what you see in ntop, hybla might be worth a try; it’s designed for high-latency links, which your long round-trip times suggest.

Tony

Sorry it has taken me so long to get back to you.
Looking at your traceroutes, I can see that a lot of your slow speeds come from your ISP. If you look at the 2nd, 3rd, and 4th hops, those are timing out. The 1st hop is your ISP and the 5th hop is still your ISP, so that means some routes in their network are broken or bogged down.
Next we see that what isn’t timing out is very slow from the 5th hop on; we’re looking at over 400 milliseconds. That’s a long time.
We can see in the traceroute that when it’s someone else’s network, like speedy.com.ar, the times really improve.

There are a couple of scenarios that cause this. Different servers are maintained differently, and sometimes slow speeds are due to high server load. Another cause is a poor signal-to-noise ratio: if the noise on the line is high relative to the signal, then you’ll see slow speeds, latency, and packet loss. These are the things to look for.

From looking at this, I would suspect a large amount of noise on your line, and lots of packet loss.
To test for packet loss, run a long ping test. On Linux, a plain ping runs until you stop it with Ctrl-C and then prints loss statistics (the -t flag is the Windows way to do this; on Linux, -t sets the TTL instead). You want at least 20 pings to get a meaningful packet-loss figure. This is why I was suggesting pathping; unfortunately, pathping is only available on Windows.
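On Linux the equivalent checks would look something like this (mtr may need installing; example.com stands in for the host you’re testing):

```shell
# Send 25 echo requests; the summary line reports % packet loss
ping -c 25 example.com

# Per-hop loss and latency -- the closest Linux analogue to pathping
mtr --report --report-cycles 25 example.com
```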

http://www.sterlingcampbell.com/kalicoremcsekit/2153/project3/pathping.gif

I appreciate your effort, but I re-installed SuSE today. I installed the wl driver and for whatever reason it’s working great. My connection is now stable and I can upload at a constant speed. We shall never know what the problem was. I’m sure it wasn’t an ISP problem because the connection worked fine on Windows, so I guess there was some sort of driver conflict, perhaps.