Phil R. Karn (ulysses!thumper!karn@ucbvax.Berkeley.EDU)
1 Apr 88 05:26:06 GMT

We agree. Error detection and correction on an end-to-end basis is
essential for most data applications. Doing it at the link level as well
is almost always redundant and can only be justified as a performance
enhancement under rare circumstances.

If you're running a "heavy" backbone link you've probably already spent
the money to run synchronous HDLC framing, since this shaves off 2 bits
of overhead per byte sent. In this case I don't really object to simple
link error detection (without retransmission) mainly because it comes
"for free" in the HDLC chips, not because it's really necessary.
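
(A back-of-the-envelope sketch, not from the original post: the "2 bits
per byte" saving is the start and stop bits that asynchronous framing
wraps around every 8-bit character, which synchronous HDLC does not
need. Per-frame flag and CRC costs are ignored here.)

```python
# Asynchronous serial framing: 1 start bit + 8 data bits + 1 stop bit.
# Synchronous HDLC: just the 8 data bits (per-frame flags/CRC ignored).
ASYNC_BITS_PER_BYTE = 8 + 2   # the 2 extra bits the post refers to
SYNC_BITS_PER_BYTE = 8

def async_overhead_fraction():
    """Fraction of an async line's capacity spent on start/stop bits."""
    return (ASYNC_BITS_PER_BYTE - SYNC_BITS_PER_BYTE) / ASYNC_BITS_PER_BYTE

print(f"{async_overhead_fraction():.0%} of an async line is framing overhead")
# 2 of every 10 bits, i.e. 20%
```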

But full ARQ protocols like LAPB at the link layer are a waste of
bandwidth and cycles. Our ARPANET gateway is probably as active as any.
It speaks 1822/HDH/LAPB over a 56kbps line to Columbia. Timers, RRs,
keepalive polling at both the HDH and LAPB layers, the whole bit. In the
28+ hours since it was last booted, it received 4.3 million HDLC frames
from the IMP. Only 48.7% contained IP datagrams; the rest were overhead
frames. (The number of CRC errors on input frames was ZERO). Of the 6.3
million packets sent, a mere 33.3% were IP datagrams. This is silly.
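
(A quick sanity check of the figures above, taking the quoted counts and
rounded percentages at face value; the exact numbers are assumptions for
illustration.)

```python
# Figures quoted in the post (percentages are rounded in the original).
rx_frames = 4_300_000        # HDLC frames received from the IMP in 28+ hours
rx_ip_fraction = 0.487       # fraction carrying IP datagrams
tx_packets = 6_300_000       # packets sent over the same period
tx_ip_fraction = 0.333       # fraction that were IP datagrams

# Everything else is protocol overhead: RRs, polls, keepalives, etc.
rx_overhead = rx_frames * (1 - rx_ip_fraction)
tx_overhead = tx_packets * (1 - tx_ip_fraction)

print(f"~{rx_overhead / 1e6:.1f}M of {rx_frames / 1e6:.1f}M received frames were overhead")
print(f"~{tx_overhead / 1e6:.1f}M of {tx_packets / 1e6:.1f}M transmitted packets were overhead")
```

So on these numbers, roughly 2.2 million received frames and 4.2 million
transmitted packets carried no user data at all.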

The only time LAPB tries to do recovery is when the link has been cut
somewhere. It can retransmit all it wants, but it won't get anywhere
until I call up AT&T and complain. DDS circuits either work perfectly
or not at all.

Fortunately, we've got plenty of excess link bandwidth and the CPU in
the dedicated Sun gateway has nothing else to do anyway, so all of this
is merely amusing and not a real problem. (I can only thank the Deities
that we don't have to run X.25!) But as links and switches get faster,
even the link overhead in HDH has got to go.

Mr. Beattie really ought to read the classic paper by Saltzer, Reed
and Clark, "End-to-End Arguments in System Design".