It is a connection-based, streaming, reliable, full-duplex protocol.
- Connection-based means that you have a definite partner. You aren't just sending your data out randomly, you're sending it to the program on the other end of the connection.
- Streaming means that your data is delivered in-order. Byte 1 will be delivered before byte 2, which in turn is delivered before byte 3.
- Reliable, because the TCP/IP implementation takes over a number of incredibly important details that each application would otherwise have to build for itself: your bytes arrive intact and in full, or you find out that the connection has failed.
- Full-duplex, meaning that both ends of the connection can be sending and/or receiving at the same time; the network does not mandate the rules about who can send or receive when.
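To make the full-duplex point concrete, here is a minimal C sketch in which a single connected TCP socket carries traffic in both directions. It assumes an echo server listening on localhost port 7 (an assumption for illustration), and error handling is trimmed for brevity:

```c
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(7);   /* assumed: a local echo server */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
        return 1;

    /* The same descriptor serves both directions of the connection. */
    char buf[64];
    write(fd, "ping", 4);                   /* outbound half */
    ssize_t n = read(fd, buf, sizeof buf);  /* inbound half */
    if (n > 0)
        printf("got %.*s\n", (int)n, buf);
    close(fd);
    return 0;
}
```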
Under the covers, TCP/IP takes the data that you give it, chops it up into packets, checksums each packet, addresses and routes it to its destination, acknowledges delivery, and automatically retransmits lost or damaged packets.
It also automatically and adaptively detects congestion on the network, and controls the flow of data to ensure fair use of the network resources.
However, there is one thing that TCP/IP doesn't, and can't, provide: infinite capacity. Only so much sent-but-not-yet-received data can be outstanding at once; when you hit that limit, no more data may be sent until the receiver drains some of it.
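The local half of that limit is the kernel's socket send buffer (the peer's advertised receive window and TCP's congestion control bound it further). As a rough illustration, the sketch below queries SO_SNDBUF for a socket; the exact value reported is system-dependent:

```c
#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int sndbuf = 0;
    socklen_t len = sizeof sndbuf;

    /* SO_SNDBUF reports the kernel's send-buffer allowance for this socket. */
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
    printf("send buffer: %d bytes\n", sndbuf);

    /* A blocking write() returns as soon as the kernel has copied the data
     * into this buffer; once the buffer (plus the peer's receive window) is
     * full, write() blocks until the receiver drains some data. */
    return 0;
}
```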
Therefore, if you are not careful when you are designing your application-level protocol, it is possible to fall into a simple trap:
- On my end, I start sending data to you. I keep sending and sending, and at some point all my buffers fill up and so my write() call blocks, waiting for you to receive some of that data.
- However, on your end, you are doing the same thing: you keep sending and sending, and at some point all of your buffers fill up and so your write() call blocks, waiting for me to receive some of that data.
I'm never going to receive your data, because I'm blocked writing.
And you're never going to receive my data, because you're blocked writing.
This is commonly called a "write-write deadlock".
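Here is a minimal demonstration of the trap: two processes that each insist on writing a large message before reading anything. It uses a Unix-domain stream socketpair purely for brevity; a TCP connection stalls the same way once the in-flight limit is reached. The 16 MB size is an assumption chosen to exceed typical default kernel buffering:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>

#define BIG (16 * 1024 * 1024)   /* assumed larger than the kernel buffers */

static void talk(int fd, const char *who)
{
    char *out = calloc(1, BIG);
    char in[4096];

    printf("%s: sending %d bytes...\n", who, BIG);
    write(fd, out, BIG);          /* blocks once the buffers fill up... */
    printf("%s: now reading\n", who);   /* ...so we never get here */
    read(fd, in, sizeof in);
    free(out);
}

int main(void)
{
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

    if (fork() == 0) {            /* child is one peer */
        close(sv[0]);
        talk(sv[1], "child");
    } else {                      /* parent is the other */
        close(sv[1]);
        talk(sv[0], "parent");
    }
    /* Both processes now hang forever in write(): a write-write deadlock. */
    return 0;
}
```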
Unfortunately, this is the sort of error that you may not notice until your application has been in production for months or years, particularly if your local testing is performed on directly connected machines whose local-area network and large internal buffers make the limit hard to hit.
There are various ways to avoid this problem. One common convention is that each partner always reads in preference to writing: whenever input is available, a program reads as much of it as it can before it blocks or writes more data of its own; a sketch of such a loop appears below.
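Concretely, here is a minimal poll()-based service routine illustrating that convention: each pass drains all available input before attempting any output, so this side never sits blocked in write() while the peer's data piles up. The file descriptor is assumed to be non-blocking, and handle_input() is a hypothetical application hook, not a real API:

```c
#include <errno.h>
#include <poll.h>
#include <unistd.h>

/* Hypothetical application consumer of inbound bytes (stubbed here). */
static void handle_input(const char *buf, ssize_t n) { (void)buf; (void)n; }

/* Returns the number of bytes of `out` accepted by the kernel, or -1. */
ssize_t service(int fd, const char *out, size_t out_len)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    if (out_len > 0)
        pfd.events |= POLLOUT;   /* only ask to write if we have output */

    if (poll(&pfd, 1, -1) < 0)
        return -1;

    /* Reads come first: drain everything available before writing. */
    if (pfd.revents & POLLIN) {
        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            handle_input(buf, n);
        if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;
    }

    /* Only then write, accepting a partial write gracefully. */
    if (pfd.revents & POLLOUT)
        return write(fd, out, out_len);
    return 0;
}
```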
But the best way to avoid these sorts of problems is to be aware of them while designing and analyzing your application-level protocols, so that you are careful not to build programs which fall into this trap.