Craig Partridge (email@example.com)
Sat, 06 Dec 86 18:19:44 -0500
I'm in the midst of doing comparisons between an RDP implementation and
the 4.2/4.3 TCP implementations and have run into a problem which I'm hoping
someone else can shed light on.
I'm running tests on two machines, a VAX 750 running 4.3 and a SUN
workstation running 4.2. The two machines are on the same Ethernet
and use the same gateway. If I set up an experiment to test behaviour over
paths with long network delays (for example, bouncing packets off
Goonhilly), the TCP connections are established and then typically
fail part way through the transfer. I don't understand this because
the RDP connections work just fine, and typically complete in about a
quarter of the time it takes a TCP connection to send 20% of the data
and fail. The experiment generally involves passing 50-100 segments of anywhere
from 64 to 1024 bytes to the protocols to send. This is on weekends so
the delays aren't that long.
The question I'm trying to answer is whether the problem is in the
RDP implementation (what anti-social things could it be doing to maintain
that connection?), or the TCP implementation (what might it be doing wrong
to die where another implementation succeeds?). If I can, I'd like to
discourage invective. I'm simply trying to figure out why this is happening
so I can identify and fix the problem and do a comparison between the two
implementations/protocols. (And soon -- hair pulling over this problem
is beginning to threaten the health of my scalp and beard).
General information on the RDP implementation: it will retransmit a
segment up to 10 times, and it calculates the round-trip time from the
first packet sent, with the caveat that it ignores round-trip samples
from segments whose sequence numbers are lower than that of any segment
already measured (this feature is an experiment which may not
stay). The maximum RTT is 2 minutes; the minimum is 2 seconds.
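To make the sampling rule above concrete, here is a minimal sketch of
that policy: accept a round-trip sample only for a segment with a
sequence number higher than any segment already measured, and clamp the
timeout between the stated 2-second and 2-minute bounds. All names are
invented for illustration, and the smoothing step is an assumption (the
post does not say how samples are combined); this is not the actual RDP
implementation.

```python
MAX_RETRANSMITS = 10   # the implementation gives up after 10 retransmissions
MIN_RTO = 2.0          # seconds -- minimum stated in the post
MAX_RTO = 120.0        # seconds -- maximum stated in the post


class RttEstimator:
    """Hypothetical RTT sampler following the rule described above."""

    def __init__(self):
        self.highest_sampled_seq = -1  # highest seq already measured
        self.rto = MIN_RTO

    def sample(self, seq, measured_rtt):
        """Record a round-trip measurement for segment `seq`.

        Samples from segments with sequence numbers at or below one
        already measured are discarded (the experimental rule in the
        post). Returns True if the sample was accepted.
        """
        if seq <= self.highest_sampled_seq:
            return False  # ignore sample for an older segment
        self.highest_sampled_seq = seq
        # Doubling the measured RTT as the timeout is an assumption,
        # not something the post specifies; the clamp is what matters.
        self.rto = min(MAX_RTO, max(MIN_RTO, measured_rtt * 2))
        return True
```

Under this rule a retransmitted (lower-sequence) segment can never
poison the estimate with an ambiguous round-trip sample, which is one
plausible motivation for the experiment.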
This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:37:00 GMT