TCP Send buffer constraints destroy datarate


Geoffrey H. Cooper (GEOF@XX.LCS.MIT.EDU)
Mon 13 Jan 86 21:08:49-EST


The other day I was playing with Imagen's TCP with a colleague, and we
discovered an interesting thing. If we decreased the printer box's offered
window from 6KB to 2KB, the source-to-sink datarate of a transfer from a
4.2BSD Vax to the printer increased dramatically (from 30 Kbit/s to 300
Kbit/s).

The reason for this behavior is apparently that the Vax allocates only 2K of
buffer space to an outgoing TCP connection, so it can send only 2K before
receiving an ACK. In the printer box, ACKs are deliberately delayed for half
a second to increase piggybacking, unless a window update is due; the window
is re-opened when half of it has been consumed (i.e., after about 3KB of the
6KB window has been filled). Since the Vax stops sending after 2KB, that
trigger never fires, and every ACK waits out the full half-second dally.
This allows the Vax 2KB of data every half second, or 32 Kbit/s. This was
just the figure I had been getting.
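
Just to spell out that arithmetic, here it is as a tiny C program (the
buffer size and dally interval are the values described above):

    #include <stdio.h>

    int main(void)
    {
        double send_buffer = 2048.0;   /* Vax's per-connection send buffer, bytes */
        double ack_delay   = 0.5;      /* printer's ACK dally interval, seconds   */

        double bps = send_buffer * 8.0 / ack_delay;
        printf("ceiling = %.0f bit/s (%.0f Kbit/s)\n", bps, bps / 1024.0);
        return 0;
    }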

The general problem is that the window flow control reflects the receiver's buffer
constraints, and there is no way for the sender's constraints to be transmitted
across the connection. In the XNS Sequenced Packet Protocol, an ACK bit in a
packet performs this function; it allows the sender to explicitly request an
immediate acknowledgement for the last packet in the "send window."
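
For illustration only, here is a rough C sketch of that idea -- the sender
marks the last packet it can emit before its send buffer runs dry, and the
receiver skips the dally for such packets. The struct and field names are
invented for this sketch; they are not SPP's actual header layout:

    #include <stdio.h>

    struct packet {
        int seq;
        int len;
        int ack_requested;   /* sender: "please acknowledge this one now" */
    };

    /* Sender side: set the bit when no more buffered data will follow. */
    void mark_if_last(struct packet *p, int bytes_left_in_send_buffer)
    {
        p->ack_requested = (bytes_left_in_send_buffer == 0);
    }

    /* Receiver side: ACK at once if asked to, otherwise dally as usual. */
    void on_receive(const struct packet *p)
    {
        if (p->ack_requested)
            printf("seq %d: ACK immediately\n", p->seq);
        else
            printf("seq %d: hold ACK for the dally timer\n", p->seq);
    }

    int main(void)
    {
        struct packet a = { 1, 1024, 0 }, b = { 2, 1024, 0 };
        mark_if_last(&a, 1024);   /* more data still buffered after a */
        mark_if_last(&b, 0);      /* send buffer empty after b        */
        on_receive(&a);
        on_receive(&b);
        return 0;
    }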

I've been trying to figure out ways to fix the problem. One algorithm would
be to send the ACK immediately if the connection's packet input queue is
empty after an input packet has been processed, and otherwise only after a
dally. This would allow an entire string of packets to be received before
acknowledging and would not tend to cause excessive ACKs. It works well if
the TCP process dequeues packets at a lower priority than the internet
process enqueues them.
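
A minimal sketch of that heuristic in C, assuming the TCP can see a count of
input packets still queued for the connection (all names here are invented,
and the real bookkeeping would of course live inside the TCP itself):

    #include <stdio.h>

    /* Minimal stand-in for the real connection state. */
    struct tcp_conn {
        int queued_packets;   /* input segments handed to us by the internet
                                 layer that we have not yet processed       */
    };

    static void send_ack_now(struct tcp_conn *c)
    {
        (void)c;
        printf("ACK sent immediately\n");
    }

    static void start_dally_timer(struct tcp_conn *c)
    {
        (void)c;
        printf("ACK held for the dally timer\n");
    }

    /* Called once per input packet, after the segment has been processed. */
    static void after_input(struct tcp_conn *c)
    {
        if (c->queued_packets == 0)
            send_ack_now(c);        /* whole burst consumed: ACK at once  */
        else
            start_dally_timer(c);   /* more input waiting: keep dallying  */
    }

    int main(void)
    {
        struct tcp_conn c = { 2 };          /* two more packets behind this one */
        while (c.queued_packets >= 0) {     /* drain a three-packet burst       */
            after_input(&c);
            c.queued_packets--;
        }
        return 0;
    }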

This doesn't work in systems like mine, which do not have separate processes
with queues between them for TCP connections. In my case, the TCP module is
upcalled with each packet as a pseudo-interrupt, and all processing takes
place either during the upcall, during the client's downcall to get data, or
during a timer interrupt. I can check the interface to see whether additional
packets have arrived, but there is no assurance that they belong to the right
connection. Another possibility is to notice that an ACK is being sent after
a dally, and to decrease the offered window. This would work (especially
since the printer's only application is bulk data input), but I am concerned
about the lack of generality in the scheme. For one thing, there is no way to
increase the offered window once it has been decreased -- which sounds like a
perfect way to develop silly window syndrome.
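
For what it's worth, the shrink-on-dally idea looks roughly like this (again
just an invented sketch, and it shows the weakness directly: nothing in it
ever grows the window back):

    #include <stdio.h>

    #define MIN_WINDOW 1024    /* never advertise less than one segment */

    struct tcp_conn {
        unsigned offered_window;    /* window we advertise, in bytes         */
        int ack_fired_from_dally;   /* did the last ACK wait out the dally?  */
    };

    /* If an ACK had to wait for the dally timer, the peer's send buffer is
       evidently smaller than our offered window, so offer less next time.
       Note the one-way ratchet: the window is never grown back, which is
       the silly-window worry mentioned above.                              */
    static void adjust_offered_window(struct tcp_conn *c)
    {
        if (c->ack_fired_from_dally && c->offered_window > MIN_WINDOW)
            c->offered_window /= 2;
    }

    int main(void)
    {
        struct tcp_conn c = { 6 * 1024, 1 };
        adjust_offered_window(&c);
        printf("offered window now %u bytes\n", c.offered_window);
        return 0;
    }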

I'd like to see some other thoughts on this problem. How do other implementations
of TCP deal with this situation? Do they set tight timers? Do they get trapped
by it (be honest...)?

Please respond to the list and sign all messages (otherwise I can't tell where
they come from on usenet-news); my incoming mail is not working well.

- Geof Cooper
  IMAGEN
-------


