Mon, 26 Oct 87 08:33:29 -0500
Hmm, a recent note explained how, with 802.5, TCP may have to
request that the lower levels occasionally "re-ARP," and claimed that
this is somehow inherently evil. I would like to present an
alternative view.
Another way to consider the problem is that at some point,
for reasons it cannot know about, TCP correctly decides that
the path to its destination is failing. There needs to be
a way for TCP to register a complaint with the lower levels that
it isn't happy with the level of service it's getting and would
like the lower levels to "try harder." One could argue that
the lower levels should always "try their hardest," but their
connectionless nature often precludes them from getting enough
feedback to really evaluate the effectiveness of their efforts.
So, if TCP could say - "The path to host XY.Z.Z.Y seems to be
screwed - please do anything you can to remedy the situation,"
several useful scenarios become possible. Among them are
redundantly reliable local cables.
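To make the idea concrete, here is a minimal sketch of what such an advisory interface might look like. The names (`LinkAdvisor`, `try_harder`) are invented for illustration; no real stack exposes exactly this API.

```python
# Hypothetical sketch: a registry where TCP can lodge complaints
# about a destination, so the lower levels can decide to "try harder."

class LinkAdvisor:
    """Collects per-destination complaints from transport protocols."""

    def __init__(self):
        self.complaints = {}  # destination -> complaint count

    def try_harder(self, destination):
        """Called by TCP when the path to `destination` seems to be failing.

        The link layer can react however it likes: re-ARP, switch
        cables, redial a PBX connection, and so on.  Here we just
        count the complaints and return the running total."""
        self.complaints[destination] = self.complaints.get(destination, 0) + 1
        return self.complaints[destination]

advisor = LinkAdvisor()
advisor.try_harder("XY.Z.Z.Y")
advisor.try_harder("XY.Z.Z.Y")
```

The point is only the shape of the call: TCP names the destination and the lower levels decide what "trying harder" means for their medium.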
The current IP and localnet architectures make it very difficult
to get improved local reliability by the simple procedure of
laying two cables (whatever that means) and installing two
interfaces in each machine. In the simple case,
the two cables essentially MUST have separate IP network numbers
(or at least, separate subnetwork numbers)
and if one cable fails, all the TCP connections will die because
the Interfaces, not the hosts, have the Internet addresses
and there is no cleverness in the middle to reroute traffic.
The next approach is to introduce a "virtual local cable driver"
which sits atop multiple interfaces which you want to consider
the same Internet Network. The idea is that the indirect driver
can then consider which interface to use based on delivery
success. In Ring networks with back-channel non-delivery
information, this can work well. With Ethernets, this is
very difficult. One simple approach is to just send
the packet on BOTH wires!! This is a tremendous test of
your hosts' reassembly and redundant segment discard code.
It also causes the network to use twice as much CPU time as
it would otherwise.
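A sketch of the virtual-driver idea, with both strategies: pick the interface with the best delivery record, or (the brute-force Ethernet approach) flood the packet on both wires. All names here are hypothetical.

```python
# Hypothetical "virtual local cable driver": one logical interface
# sitting atop two physical ones, choosing based on delivery success.

class VirtualCableDriver:
    def __init__(self, interfaces):
        self.interfaces = list(interfaces)
        self.failures = {ifc: 0 for ifc in interfaces}

    def report_failure(self, ifc):
        # Back-channel non-delivery information (easy on rings with
        # frame-status bits, hard on Ethernets) feeds back in here.
        self.failures[ifc] += 1

    def best_interface(self):
        # Prefer the interface with the fewest recorded failures.
        return min(self.interfaces, key=lambda ifc: self.failures[ifc])

    def send(self, packet, flood=False):
        """Return a list of (interface, packet) transmissions."""
        if flood:
            # Send on BOTH wires and rely on the receiver's
            # duplicate-discard code -- at twice the cost.
            return [(ifc, packet) for ifc in self.interfaces]
        return [(self.best_interface(), packet)]

driver = VirtualCableDriver(["cable0", "cable1"])
driver.report_failure("cable0")
```

After one failure on cable0, `driver.send("pkt")` goes out on cable1 alone, while `driver.send("pkt", flood=True)` duplicates it on both.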
If, on the other hand, we could get some feedback from above
that indicated we are having path problems, then we can re-ARP
on alternate cables (assuming the cache keeps wire affinity
information) and pick up before TCP starts dropping connections.
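The re-ARP-on-alternate-cables scheme might be sketched like this: an ARP cache that remembers which wire each entry was learned on, plus a hook for the path-problem hint from above. Everything here is an invented illustration, not a real driver.

```python
# Hypothetical ARP cache keeping "wire affinity" information, so a
# path-problem hint from TCP can trigger a re-ARP on the other cable.

class AffinityArpCache:
    def __init__(self, cables):
        self.cables = list(cables)
        self.entries = {}  # ip -> (hw_addr, cable it was learned on)

    def learn(self, ip, hw_addr, cable):
        self.entries[ip] = (hw_addr, cable)

    def path_problem(self, ip):
        """Hint from above that the path to `ip` seems broken.

        Switch the entry to an alternate cable rather than waiting
        for TCP to drop the connection.  A real driver would transmit
        an ARP request on that cable; here we just record the move."""
        if ip not in self.entries:
            return None
        hw_addr, cable = self.entries[ip]
        alternates = [c for c in self.cables if c != cable]
        if not alternates:
            return None
        new_cable = alternates[0]
        self.entries[ip] = (hw_addr, new_cable)
        return new_cable

cache = AffinityArpCache(["cable0", "cable1"])
cache.learn("XY.Z.Z.Y", "08:00:2b:aa:bb:cc", "cable0")
```

With this in place, one `path_problem("XY.Z.Z.Y")` call moves the entry to cable1 before any connection is lost.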
This scenario generalises to other link media like "dialup"
connections through digital PBX's and ISDN networks as well.
Maybe the real point is that error recovery and control is
link-specific, and the procedures can often keep things going
in the face of serious problems. But currently in most
implementations, the low-level link drivers do not get enough
information on link quality from the modules which are in the
best position to know about it on a global scale. Link drivers
clearly know something about the link, but the global information
may be crucial for some kinds of error recovery, particularly
for purely datagram links.
Currently, this kind of feedback is considered a "layering
violation" by some. I suggest that either this notion of layering
is wrong, or people have a very stilted view of the interaction between
the layers.