Thursday, December 9, 2010

Jim Gettys on TCP/IP and network buffering

If you're at all interested in TCP/IP, network programming, or web performance issues, you'll want to run, not walk, to this fantastic series of posts by the venerable Jim Gettys.

Here's a little taste to wet your whistle, and get you hankering for more:

You see various behavior going on as TCP tries to find out how much bandwidth is available, and (maybe) different kinds of packet drop (e.g. head drop, or tail drop; you can choose which end of the queue to drop from when it fills). Note that any packet drop, whether due to congestion or random packet loss (e.g. due to wireless interference) is interpreted as possible congestion, and TCP will then back off how fast it will transmit data.

... and ...

The buffers are confusing TCP’s RTT estimator; the delay caused by the buffers is many times the actual RTT on the path. Remember, TCP is a servo system, which is constantly trying to “fill” the pipe. So by not signalling congestion in a timely fashion, there is *no possible way* that TCP’s algorithms can possibly determine the correct bandwidth it can send data at (it needs to compute the delay/bandwidth product, and the delay becomes hideously large). TCP increasingly sends data a bit faster (the usual slow start rules apply), reestimates the RTT from that, and sends data faster. Of course, this means that even in slow start, TCP ends up trying to run too fast. Therefore the buffers fill (and the latency rises).
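The arithmetic behind that second excerpt is easy to work through yourself. Here's a back-of-the-envelope sketch in Python, using made-up example numbers (a 10 Mbit/s link, 100 ms path RTT, and a 1 MB drop-tail buffer; these are my own illustrative figures, not ones from Gettys' posts), showing how a big buffer at the bottleneck dwarfs the real bandwidth-delay product:

```python
# Hypothetical numbers for illustration (not taken from Gettys' posts):
link_bps = 10_000_000      # 10 Mbit/s bottleneck link
path_rtt_s = 0.100         # true round-trip time on the path: 100 ms
buffer_bytes = 1_000_000   # 1 MB of drop-tail buffering at the bottleneck

# Bandwidth-delay product: how much in-flight data the pipe itself holds.
bdp_bytes = link_bps / 8 * path_rtt_s

# Once TCP fills the buffer, every packet queues behind a full megabyte,
# so the buffer's drain time is added to the RTT that TCP measures.
queue_delay_s = buffer_bytes * 8 / link_bps
observed_rtt_s = path_rtt_s + queue_delay_s

print(f"BDP:          {bdp_bytes / 1000:.0f} kB")       # 125 kB
print(f"queue delay:  {queue_delay_s * 1000:.0f} ms")   # 800 ms
print(f"observed RTT: {observed_rtt_s * 1000:.0f} ms")  # 900 ms
```

With these numbers the pipe only holds 125 kB, but the full buffer adds 800 ms of queueing delay, so TCP's RTT estimator sees 900 ms instead of 100 ms. That 9x-inflated delay is exactly why, as Gettys says, TCP cannot compute the correct sending rate: congestion is never signalled until the oversized buffer finally overflows.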

Be sure to read not only the posts, but also the detailed discussions and commentary in the comment threads, as there has been lots of back-and-forth on the topics that Gettys raises, and the follow-up discussions are just as fascinating as the posts.

Be prepared: it's going to take you a while to read all this material, and I don't think that Gettys is done yet! There is a lot of information here, and it takes time to digest.

At my day job, we spend an enormous amount of energy worrying about network performance, so Gettys's articles have been getting a lot of attention. They've provoked a number of hallway discussions, a lot of analysis, and some new experimentation and ideas. We think that we've done an extremely good job building an ultra-high-performance network architecture, but there's always more to learn, and so I'll be continuing to follow these posts to see where the discussion goes.