Re: TCP RTT woes revisited


Craig Partridge (craig@loki.bbn.com)
Mon, 15 Dec 86 13:47:40 -0500


> Have you thought of using a separate variable to measure the RTT of each
> packet so that you can update your smoothed RTT using the EACKs?

    That's precisely what I'm doing. Then the out-of-order rule is used
to discard RTTs that seem likely to cause SRTT explosion.
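
    A rough sketch, in C, of the kind of per-packet timing and smoothing
I mean (the structure, the 7/8 smoothing gain, and the exact form of the
out-of-order test are illustrative assumptions, not my actual code):

#define ALPHA_NUM 7           /* smoothing gain alpha = 7/8 */
#define ALPHA_DEN 8

struct pkt_timer {
    unsigned long seq;        /* sequence number of this packet         */
    long send_time;           /* time (ms) at which it was sent         */
    int  retransmitted;       /* samples for retransmissions are unsafe */
};

static long srtt;             /* smoothed round-trip time, in ms */

/* Called when an EACK (or a cumulative ack) covers packet p at time now.
 * highest_acked is the highest sequence number acked before this one.  */
void rtt_sample(struct pkt_timer *p, long now, unsigned long highest_acked)
{
    long sample = now - p->send_time;

    /* Out-of-order rule (assumed form): if some later packet has already
     * been acknowledged, this sample is suspect and is simply dropped
     * rather than being allowed to blow up SRTT.                        */
    if (p->retransmitted || p->seq < highest_acked)
        return;

    if (srtt == 0)
        srtt = sample;                            /* first measurement */
    else
        srtt = (ALPHA_NUM * srtt + sample) / ALPHA_DEN;
}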

> When I last did RDP work, RDP and TCP were roughly the same speed. Maybe
> RDP was a bit quicker even in the LAN environment. The reason RDP did
> not dominate TCP was that the machines I was using were VAXes and the
> RDP checksumming algorithm did not run as fast as it would on a machine
> with a different byte ordering (like the 68K based workstations).

    Certainly the RDP checksum on the VAX is a real problem. On the
SUN the checksum I use is 40% faster than the TCP checksum; on the
VAX the checksum is about 3 times *slower* than the TCP checksum. (You
probably wrote a better one; I haven't compared them.) And over a
perfect network, checksum performance seems to dictate speed.
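
    For reference, a minimal sketch of the standard one's-complement
Internet checksum that TCP uses (the RDP checksum being compared above
is not shown here, and the function name and interface are just
illustrative):

/* A straightforward version of the Internet (TCP/IP) one's-complement
 * checksum: sum the data as 16-bit words in network byte order, fold
 * the carries back in, and take the one's complement of the result.  */
unsigned short in_cksum(const unsigned char *buf, unsigned long len)
{
    unsigned long sum = 0;

    while (len > 1) {                 /* sum 16-bit words               */
        sum += ((unsigned long)buf[0] << 8) | buf[1];
        buf += 2;
        len -= 2;
    }
    if (len)                          /* odd trailing byte, zero-padded */
        sum += (unsigned long)buf[0] << 8;

    while (sum >> 16)                 /* fold carries into low 16 bits  */
        sum = (sum & 0xffff) + (sum >> 16);

    return (unsigned short)~sum;      /* one's complement of the sum    */
}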

    But once there is any packet loss on the network, the data handling
costs seem to become rather insignificant, and the big issue (I believe)
is retransmission mechanisms. Unfortunately, once the network drops
packets, there seems to be a very wide variation in throughput from
test to test, and it gets hard to say anything definitive. There's also
the problem that, when you do get a definitive answer, you can't tell
whether it reflects a real difference or merely an odd quirk of the
particular RDP or TCP implementation. (I.e., am I asking the right
question?) One quickly develops a healthy respect for TCP.

Craig


