Thomas Narten (email@example.com)
Sat, 03 Oct 87 11:45:11 EST
>Speaking of checksums, it seems to me that the IP header checksum
>could be replaced with a "packet" level CRC at the link level and
>done by hardware. Most (all?) HDLC type chips provide this
>without any extra hardware (or effort).
One should be careful not to underestimate the need for end-to-end
checksums between the various protocol layers talking to one another.
I am reminded once again of the times (note the plural) that our
Ethernet with 16-bit CRCs at the link level disintegrated. The
symptoms were the scrambling of random bits of data in the Ethernet
frame, apparently before the frame was sent out on the wire. Hence, no
checksum errors. Because any of the bits, including those in the
header, could be trashed, all hosts on the cable were receiving
bogus packets. The protocol most affected by this was ARP, which
relies on the link level checksum for error detection. Needless to
say, when ARP gets confused nothing works.
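That failure mode can be sketched in a few lines of Python. The frame
contents, the flipped bit, and the CRC-16 CCITT polynomial below are all
illustrative choices, not what our hardware actually used:

```python
# Sketch of the failure described above: a bit is scrambled in host
# memory BEFORE the interface computes the frame CRC, so the
# link-level check can never notice the damage.

def crc16(data: bytes) -> int:
    """Bitwise CRC-16 (CCITT polynomial 0x1021, init 0xFFFF); illustrative only."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = bytearray(b"ARP who-has 10.0.0.5")  # hypothetical frame contents
frame[3] ^= 0x40            # a bit is trashed in memory, before transmission
fcs = crc16(bytes(frame))   # interface appends a CRC over the damaged frame

# The receiver recomputes the CRC over exactly what was sent, so it
# matches, and the link layer reports no error at all:
assert crc16(bytes(frame)) == fcs
```

Only a checksum computed end-to-end, before the data ever reached the
interface, could have caught this.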
Another example of the danger of relying on link level checksums is
given in Clark/Reed/Saltzer's "End-to-End Arguments in System Design".
There, a transient error in copying data within the gateway was not
detected because link level checksums were relied upon. The real
kicker in their example was the lack of end-to-end transport level
checksums (e.g. TCP checksum).
A few factors need to be carefully weighed.
1) Where are datagrams corrupted? Historically, it has been in the
transmission on the "wire". Perhaps now, corruption takes place
primarily within gateways and at the boundaries between the machine
and the communications device. Can we ignore those sources of corruption?
2) How serious are the effects of undetected datagram corruption? One
can argue that in IP's case, higher layer protocols will detect errors
and that will be sufficient. However, changing a few bits in the IP
header changes the semantics of the datagram, which might dramatically
affect the subnet. Consider the destination address of a directed
packet changing into a multicast address.
3) How often do the errors occur? As we rely more and more on network
protocols, it may be the case that we need more, not less, error
detection. Granted, it may be necessary for performance reasons to do
away with some types of error detection.
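Point (2) can be made concrete with a small Python sketch of the IP
header checksum (a ones'-complement sum of 16-bit words, per RFC 791).
The header contents and addresses below are made up:

```python
# Sketch: the IP header checksum catches a single flipped bit in the
# destination address, where a link CRC computed after the flip would not.

def inet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit big-endian words, complemented."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return (~total) & 0xFFFF

header = bytearray(20)                # minimal 20-byte IPv4-style header
header[0] = 0x45                      # version 4, header length 5 words
header[16:20] = bytes([10, 0, 0, 5])  # destination address (hypothetical)
header[10:12] = inet_checksum(bytes(header)).to_bytes(2, "big")

assert inet_checksum(bytes(header)) == 0  # an intact header sums to zero

corrupted = bytearray(header)
corrupted[16] ^= 0x80                 # one bit flips in the destination
assert inet_checksum(bytes(corrupted)) != 0   # verification now fails
```

Note that a link-level CRC computed after such a flip occurred in memory
would, as in the Ethernet incident above, verify cleanly.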
Another possible solution involves encapsulating IP datagrams within
another protocol for packet switching within a specific network. IP
checksumming would be done only at the gateways, and within the
network packets would be processed using only the optimized protocol.
By careful design of the protocol, one is better able to minimize the
effects of undetected errors on the subnet, and get the benefits of
fast (e.g. no checksums) packet switching in point-to-point networks.
This approach is used in Cypress and Blazenet.
Of course, this does not solve all problems either. For one thing, it
only works within the logical network. This is great for large
backbone networks multiplexing traffic from many connections,
but might not help much for a single TCP connection, since throughput
on any one connection is still limited by gateways (the weakest links
in the chain).
This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:39:34 GMT