Re^2: TCP Send buffer constraints destroy datarate

Geoffrey H. Cooper (geof@BORAX.LCS.MIT.EDU)
Wed, 15 Jan 86 16:53:10 est

The idea of giving the application one chance to send data before transmitting
the ack is a good one, but it doesn't solve my problem entirely. Under
that scheme, TCP would still send an ack for every packet received, whether
or not the ack carries client-level data. I need a scheme that allows the
receiver to send the minimum number of acks that still lets the sender run
at full speed.
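To make the distinction concrete, here is a rough sketch of the kind of receiver-side ack policy I have in mind (all names are invented for illustration, and the fallback timer is only noted in a comment, not modeled): rather than acking every segment, the receiver acks only when enough window has opened to keep the sender busy.

```python
class AckPolicy:
    """Hypothetical receiver-side ack policy: ack only when enough
    data has arrived to re-open the sender's window, instead of
    acking every segment received."""

    def __init__(self, mss, ack_timeout=0.2):
        self.mss = mss
        # Fallback timer so the sender never stalls indefinitely;
        # timer expiry is not modeled in this sketch.
        self.ack_timeout = ack_timeout
        self.unacked_bytes = 0  # bytes received but not yet acked

    def on_segment(self, seg_len):
        """Called for each arriving segment; returns True if an
        ack should be sent now."""
        self.unacked_bytes += seg_len
        # Ack once two full segments' worth has arrived -- enough to
        # open the sender's window -- otherwise hold off.
        if self.unacked_bytes >= 2 * self.mss:
            self.unacked_bytes = 0
            return True
        return False
```

Under a policy like this the receiver sends roughly one ack per two segments, which is the sort of "minimum number of acks" I mean.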

The idea of using the push bit is also a good one, but I fear that it would
have the same result. This is because some TCPs almost always set the push
bit, and some other TCPs never forward data to the client unless they see a
push bit (which partly explains the sender's behavior). For
example, I think that 4.2 unix sets the push bit on every write call, since
it has no way of knowing whether there will be more client-level data or not.
This translates into the push bit always being on for most connections (even
bulk data connections like FTP or sending to the printer).

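The 4.2-style behavior I am describing amounts to something like the following sketch (names and structure invented for illustration; this is not the actual kernel code): because the write call is the only boundary the kernel sees, PSH ends up set on the last segment of every write.

```python
def tcp_write(conn, data):
    """Hypothetical sender-side write path: since the kernel cannot
    know whether the application will write again, it sets PSH on
    the last segment of every write call."""
    # Split the write into MSS-sized segments.
    segments = [data[i:i + conn.mss] for i in range(0, len(data), conn.mss)]
    for i, seg in enumerate(segments):
        push = (i == len(segments) - 1)  # PSH on the final segment only
        conn.send_segment(seg, psh=push)
```

So for the common case of one write per application-level record, every record goes out with PSH set, and the bit carries no useful information for the receiver's ack strategy.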
Any other ideas?

- Geof
(PS, please continue to post replies to the net. I still don't receive
incoming mail well, and rely on a usenet news feed.)
