Re: PSN 7 End-to-End question.


Phil R. Karn (thumper!karn@faline.bellcore.com)
2 Feb 88 01:53:34 GMT


> 1) Best effort systems rely totally on hosts for congestion
> management. That is, transport protocols are responsible for
> congestion control and congestion avoidance.

The problem with the existing flow control mechanisms in the ARPANET is
that they add considerable overhead even when the network is otherwise
lightly loaded. As I understand it, the ARPANET links all run at 56kbps;
so in theory they could carry 7 kilobytes/sec. However I've *never* seen
a file transfer throughput of more than 2.5-3.0 kilobytes/sec, even over
a short East Coast path in the middle of the night. My timings are
consistent enough that I can only attribute the difference to the
ARPANET's internal packetizing and flow control overhead. (If there are
other factors at work I'd like to know about them).
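To put numbers on that gap (the 2.5-3.0 kilobyte/sec figures are my own
timings quoted above; the rest is just arithmetic):

/* Back-of-the-envelope check: how much of a 56 kbps trunk the
 * observed file transfer throughput actually uses. */
#include <stdio.h>

int main(void)
{
    double link_bps   = 56000.0;            /* ARPANET trunk speed */
    double link_Bps   = link_bps / 8.0;     /* = 7000 bytes/sec */
    double observed[] = { 2500.0, 3000.0 }; /* measured FTP throughput */
    int i;

    for (i = 0; i < 2; i++)
        printf("%.0f bytes/sec is %.0f%% of the %.0f byte/sec link rate\n",
               observed[i], 100.0 * observed[i] / link_Bps, link_Bps);
    return 0;
}

That works out to roughly 36-43% of the raw link rate, which is the gap
I'm trying to account for.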

Yes, transport protocol behavior is important, but there's no reason why
the "best effort" network can't have defense mechanisms that activate
only when the network is congested. For example, the network might
normally run in a pure datagram mode, with no network bandwidth wasted
on edge-to-edge acknowledgements. However when the network becomes
congested, the internal equivalent of "source quench" packets are sent
to the entry node, telling it to stop injecting so much traffic into the
network. The entry node might translate that into an access protocol
message telling the host (or gateway) to slow down, but more importantly
the entry node could simply delay or discard additional traffic before
it enters the network. Discarding packets is certainly one way to get
TCP's attention, but the delaying tactic would be more efficient.
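To make that concrete, here is a rough sketch of the entry-node defense
I have in mind. None of these names or data structures correspond to any
actual PSN code; it is purely illustrative:

/* Entry-node admission sketch.  In the normal case packets pass
 * straight through as datagrams, with no edge-to-edge overhead.
 * Once an internal "source quench" has marked the node congested,
 * traffic is first delayed by queueing it (stretching the sender's
 * round trip time) and only dropped when the queue fills. */
#include <stdio.h>

#define QLEN 16

struct packet { int id; };

struct entry_node {
    int congested;              /* set by internal congestion signal */
    struct packet queue[QLEN];  /* holding area, used only when congested */
    int qcount;
};

enum action { FORWARD, DELAY, DROP };

enum action admit(struct entry_node *node, struct packet p)
{
    if (!node->congested)
        return FORWARD;                  /* pure datagram mode */
    if (node->qcount < QLEN) {
        node->queue[node->qcount++] = p; /* delay rather than discard */
        return DELAY;
    }
    return DROP;                         /* last resort at the edge */
}

int main(void)
{
    struct entry_node node = { 1, {{0}}, 0 };  /* pretend we're congested */
    struct packet p = { 42 };
    printf("action = %d\n", admit(&node, p));
    return 0;
}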

There have been theoretical assertions that even infinite buffering is
insufficient to prevent datagram network congestion. However, this
assumes no interaction between the network and the transport protocol,
and this is certainly not true with TCP. TCP cannot transfer more than
one window's worth of data per round trip time, so to slow it down you
either reduce the window size or increase the round trip time. If you
can't get it to reduce its window size voluntarily (e.g., with a source
quench) then you can certainly increase its round trip time (i.e., with
additional buffering). Given a finite number of TCP connections, enough
buffer space will eventually reduce the offered datagram load to the
capacity of the network, although everyone would be better served if the
TCPs could instead cut their window sizes. NFS and ND are not completely
uncontrolled; rather, they are basically stop-and-wait protocols. They
would behave well too if only they did retransmission backoff like TCP.
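The relation behind this argument, with made-up numbers (a fixed 4
kilobyte window against a range of round trip times):

/* A TCP can move at most one window per round trip, so its offered
 * load is window / rtt.  Inflating the rtt with extra buffering cuts
 * the load even if the sender never shrinks its window. */
#include <stdio.h>

int main(void)
{
    double window = 4096.0;     /* bytes outstanding */
    double rtt;

    for (rtt = 0.5; rtt <= 4.0; rtt *= 2.0)
        printf("rtt = %.1f sec  ->  at most %.0f bytes/sec\n",
               rtt, window / rtt);
    return 0;
}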

A lot of effort has gone into tuning TCP round trip time estimates.
Question: has anyone looked into techniques for tuning TCP window sizes?
Intuitively, I expect that increasing the window size for a TCP-based
file transfer would increase throughput until the slowest link in the
path saturates. Beyond this point, throughput would remain constant but
round trip delay would go up, unnecessarily increasing delay for other
users of the same path. It would be nice to have an algorithm that found
this optimal operating point and stayed there automatically. Any ideas?
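To make the question concrete, here is one naive rule I can imagine,
run against a toy single-bottleneck model. Every number, name, and
threshold below is made up; it's a sketch, not a proposal:

/* Toy model of a single bottleneck link: throughput saturates at the
 * link capacity, and any window beyond capacity * rtt_base just sits
 * in queues and inflates the round trip time. */
#include <stdio.h>

static const double capacity = 7000.0;   /* bytes/sec (a 56 kbps link) */
static const double rtt_base = 0.5;      /* sec, unloaded round trip */

static double model_rtt(double window)
{
    double r = window / capacity;        /* time to drain this window */
    return r > rtt_base ? r : rtt_base;
}

static double model_throughput(double window)
{
    return window / model_rtt(window);   /* one window per round trip */
}

/* The tuning rule: grow the window while throughput still improves,
 * shrink it once extra window only adds delay.  The 5% thresholds
 * are arbitrary. */
static double tune_window(double window, double mss)
{
    if (model_throughput(window + mss) > 1.05 * model_throughput(window))
        return window + mss;
    if (model_rtt(window + mss) > 1.05 * model_rtt(window))
        return window - mss;
    return window;
}

int main(void)
{
    double w = 1024.0;
    int i;

    for (i = 0; i < 20; i++) {
        w = tune_window(w, 512.0);
        printf("window %5.0f  throughput %5.0f bytes/sec  rtt %.2f sec\n",
               w, model_throughput(w), model_rtt(w));
    }
    return 0;
}

In this toy model the window settles around the bandwidth-delay product
(about 3.5 kilobytes here), but I have no idea how something like this
would behave against real, varying cross-traffic on a shared path.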

Phil


