Re: RING vs. ETHER - Theory and practice.


J. Noel Chiappa (JNC@XX.LCS.MIT.EDU)
Tue 5 Aug 86 14:04:12-EDT


        Right, I was referring to selective ACK's; i.e. a bit vector or an
array of ack ranges or something which allows you to say 'I did get this
stuff but not that' and describe holes, etc. (Just out of interest, protocol
archaeologists and old fogies may remember that the Xerox PUP BSP had such
ACK's!)
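(For those who haven't seen one, here's a minimal sketch of the idea as a list of received byte ranges; the names and representation are my own invention, not PUP BSP's actual format.)

```python
# Hypothetical sketch of selective ACKs as a list of received byte ranges.
# The receiver records what it got; the gaps between ranges are exactly the
# "I did get this stuff but not that" holes the sender needs to know about.

def add_range(ranges, start, end):
    """Record receipt of bytes [start, end) and coalesce adjacent/overlapping ranges."""
    ranges = sorted(ranges + [(start, end)])
    merged = [ranges[0]]
    for s, e in ranges[1:]:
        ls, le = merged[-1]
        if s <= le:                          # overlaps or abuts: coalesce
            merged[-1] = (ls, max(le, e))
        else:
            merged.append((s, e))
    return merged

def holes(ranges):
    """The gaps a selective ACK lets the sender see and retransmit."""
    return [(e1, s2) for (_, e1), (s2, _) in zip(ranges, ranges[1:])]

# Receiver got two segments but lost the one in between:
r = add_range([], 0, 100)
r = add_range(r, 200, 300)
print(holes(r))   # -> [(100, 200)]: only the missing span need be resent
```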

        As far as the whole question of engineering tradeoffs on ACK's goes,
there are a lot of different interacting factors and criteria. The two big
questions seem to be whether to ack packets or bytes, and whether to have
single or multiple ack's. (Following is expansion for those who aren't
familiar with the tradeoffs.)
        The correct answer seems to be conditioned by a couple of design
criteria. The first is what effective data rates you expect to see, and the
second is what packet loss rate the system has. If you want high data rates,
either a) the net has to have an extremely low packet loss rate, or b) you
need a smarter acknowledgement strategy. In case b), it would seem that
since the overhead of processing ack's on a per byte basis is too high, the
thing to do is to do ack's on a per packet basis. It seems that in a lossy
system, ack'ing on a per byte basis (which allows retransmissions to be
coalesced) is the right thing for slow connections.
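(A toy illustration of the coalescing point, under my own assumed names: with byte-based acks the sender is free to repacketize the unacked byte stream, so several small lost segments can go out again as one packet; per-packet acks tie retransmission to the original packet boundaries.)

```python
# Hypothetical sketch: byte-based acks let retransmissions be coalesced
# because the sender may repacketize the unacked byte stream freely.

def repacketize(unacked_bytes, mss):
    """Split the unacked byte stream into packets of at most `mss` bytes."""
    return [unacked_bytes[i:i + mss] for i in range(0, len(unacked_bytes), mss)]

# Three small lost segments of 40 bytes each...
lost = b"a" * 40 + b"b" * 40 + b"c" * 40
# ...can be retransmitted as a single 120-byte packet (assuming a 536-byte MSS)
print(len(repacketize(lost, 536)))   # -> 1 packet instead of 3
```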

        I'm not sure what the right answer is. I really don't go back far
enough to know what the discussions in the early days of TCP ('76 or so, I
would imagine) made of all the issues and tradeoffs. I talked to Dave Clark,
who does remember, and in retrospect the problem was fairly fully
understood; the impact of packet losses on high data rate transfers was
clear (although perhaps the degree to which a single loss could affect very
high speed transfers was not appreciated). Apparently, the system was
assumed to have a low loss rate, modulo congestion, which was supposed to be
handled via a separate mechanism. (The fact that the original design of this
mechanism didn't work and a new one has yet to be created is the cause of a
lot of our current problems.) The per-byte acks were part of the flow
control, which wanted to be on a per byte basis.
        I guess we won't really know if the right decision was made until
the system as a whole either is made to obey the design criterion that is
currently being violated (low loss rates) or it proves impossible to meet
that constraint. In the latter case, a different mechanism would be
indicated.

        It seems to be another case of the 'simple safety net' philosophy;
as long as some mechanism is not used much, it doesn't matter whether its
design is optimal, since it's exercised so rarely. Ack's are in precisely
this boat: if you don't lose many packets, you don't need a sophisticated
ack strategy.

        Noel
-------


