[Ken Pogran <pogran@ccq.bbn.com>: Recent ARPANET Performance Improvements]

Dennis G. Perry (PERRY@vax.darpa.mil)
Mon 17 Nov 86 23:56:03-EST

Although I don't think we have conquered the things lying beneath the
swamp, I do believe that we have cut through some of the water lilies. Please
let me know how we are doing, both good and bad, so we can make this
place a little more bearable. Below is a report of the latest improvements
to the ARPANET tangle.


Received: from ccq.bbn.com by vax.darpa.mil (4.12/4.7)
        id AA12250; Wed, 12 Nov 86 16:16:16 est
Message-Id: <8611122116.AA12250@vax.darpa.mil>
Date: Wed, 12 Nov 86 15:23:01 EST
From: Ken Pogran <pogran@ccq.bbn.com>
Subject: Recent ARPANET Performance Improvements
To: Prishivalko@ddn2.arpa, Perry@vax.darpa.mil, Grindle@ddn1.arpa,
        Leonard@ddn1.arpa, ARPANETMGR@ddn1.arpa
Cc: JBurke@cc7.bbn.com, MLevandowski@ccm.bbn.com, RGrenier@ccm.bbn.com,
        Blumenthal@vax.bbn.com, McKenzie@j.bbn.com, Mayersohn@cc5.bbn.com,
        JWiggins@cc5.bbn.com, CGreenleaf@cc5.bbn.com, MPrimak@cc5.bbn.com,
        FSerr@cc5.bbn.com, SCohn@cc5.bbn.com, Hinden@ccv.bbn.com,
        Bartlett@cct.bbn.com, BDlugos@ccy.bbn.com, CStein@ccq.bbn.com,
        STaylor@ccb.bbn.com, pogran@ccq.bbn.com

As you know, over the past few weeks a number of steps have
been taken to alleviate the extreme congestion that has plagued
the ARPANET in recent months. BBNCC is pleased to be able to
report that, as a result of these steps, there has been a
significant improvement in network performance.

In a message dated 30 September, Jeff Mayersohn recommended eight
actions to improve ARPANET performance. Of those, five have been
implemented at this time. They are:

1. TAC 113 has been installed on ARPANET TACs, reducing
    character-at-a-time traffic.

2. Network parameters have been adjusted to provide more even
    sharing of cross-country bandwidth.

3. Wideband Network gateways have been modified to favor the
    Wideband Net over the ARPANET for cross-country traffic
    between some LANs.

4. Additional network performance statistics were collected in
    early October (as reported earlier).

5. A link has been restored between the Purdue and Wisconsin nodes.

In addition, and most significantly, a 56kb line was put into
service last week between USC and CIT. This line effectively
bypasses a 19.2kb line that had created a bottleneck in one of
the ARPANET's three cross-country paths, as reported in Jeff's
message of 8 October.

Finally, the ARPANET was upgraded to PSN 6, and Mailbridges were
upgraded to MB1008.1.

On Friday, 7 November, all of these changes were in place
together for the first time. Performance measurement data taken
on Friday and on Monday, 10 November indicate significant
improvement in three key measures: mean round trip delay, mean
number of hops taken by data packets, and number of
performance-related traps received by the ARPANET Monitoring Center.

Mean round trip delay and mean number of hops have returned
approximately to their June 1986 levels, and are down
significantly from levels measured in early October (Mean round
trip delay, in particular, has dropped from 1215 ms on 2 October
and 625 ms on 3 October to 298 ms on 7 November). "Traps"
reported by network nodes to the Monitoring Center indicating
congestion and other performance problems have decreased by more
than an order of magnitude -- from 80K-150K per day in October to
5K-10K on November 7 and 10.

From these data we can conclude that the ARPANET is past its most
immediate crisis. However, the network has little reserve
capacity, and performance is still critical. Some congestion
still occurs, and loss of a single trunk or node could still
bring the network into a very congested state. Thus, further
steps to improve network performance, such as the provision of
additional cross-country bandwidth and additional processing
capacity at several key nodes, must still be taken in order to
restore the ARPANET to long-term good health.

Attached to this message is a report from John Wiggins of our
Network Analysis staff detailing the results of our latest
performance measurements.

 Ken Pogran

From: John Wiggins (BBN 5/134 617-497-3390) <jwiggins@cc5.bbn.com>
Date: 11 Nov 86 18:37:00 EST (Tue)
Subject: Arpanet Congestion Analysis (PART THREE)

To: mayersohn@cc5.bbn.com
cc: jwiggins@cc5.bbn.com, pyle@cc5.bbn.com, cvenkate@cc5.bbn.com,
         cgreenleaf@cc5.bbn.com, fserr@cc5.bbn.com, scohn@cc5.bbn.com,

                                                November 11, 1986

I am very pleased to inform you of significant improvement in network
performance as a result of the changes that have been made during the
last few weeks. These changes have included: (1) removal of the 19.2
kb link {between the two USC nodes} from a major trans-continental
path, (2) increasing the propagation delays on crucial links, (3)
keeping the giveback timer set at two slow ticks, instead of 8 ticks,
(4) installation of TAC 113 to take advantage of "word-at-a-time"
optimizations, (5) reconnection of link 94 (between WISC94 and
PURDU37), and (6) installation of PSN 6 to remove faulty microcode
from the network.

The Arpanet is still in a critical state, with little or no reserve
capacity. So, by no means should this good news be construed to imply
a lessening of the urgency of deploying our longer term
recommendations. In particular, additional trans-continental trunking
bandwidth is dearly needed.

Here is a comparison of some important network-wide cumstats data.
Note that the major topological difference between October and June is
that the 19.2kb link was connected to a stub in June, but was part of
one of the three trans-continental paths during the October
collections. The November data include our recommended by-pass of the
19.2kb link with a new 56kb link connecting CIT54 and US121. At this
time, the 19.2kb link remains in "backup" with a very high configured
propagation delay; this was done in an attempt to force traffic away
from the 19.2kb link unless the new 56kb link is down.

The data from each day are averaged over the 6-hour period from 8:00
to 14:00 EST. On October 3, the propagation delays for three links
were increased so that routing would report the maximum delay for a
single hop, approximately 1.6 seconds. This is the major difference
between the two October collections.

                              6-hour Periods from 9:00 to 15:00 EDT

                                June 11   Oct. 2   Oct. 3   Nov. 7

 msgs/sec                           218      152      181      185
 pkts/sec                           293      209      262      254

 mean round trip delay (ms)         312     1215      625      298

 data pkts/sec                     1044     1009     1189     1000
 ctl. pkts/sec                      983      907     1136      898
 total pkts/sec                    2028     1916     2325     1898
 internode throughput (kb/s)        206      182      228      208
 utilization (data only)           .094     .094     .113     .090
 utilization (incl. overhead)      .213     .208     .245     .173

 routing updates/sec               1.67     2.40     2.44     1.45

 'min-hop' msg_weighted
      mean path                    2.75     3.51     3.54     3.35

 data pkts/sec out trunks
      divided by pkts/sec
      from hosts                   3.56     4.83     4.54     3.93

This last quantity would be the mean number of hops for data packets
from hosts in the absence of retransmissions. It would increase with
retransmissions, as well as with increased real path lengths. The
"min-hop msg_weighted mean path" would increase from a real
re-distribution of offered load, and also includes some contribution
from the additional hop on the southern trans-continental route for
the October and November data.
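The ratio described above is simple arithmetic on the table's figures. As an illustrative check (this script is not BBN tooling, just a sketch of the calculation, with the per-day figures taken from the table above):

```python
# Estimate mean hops per data packet as
#   (data pkts/sec on outgoing trunks) / (pkts/sec offered by hosts),
# using the table's figures. Retransmissions would inflate this ratio,
# since each retransmitted copy crosses the trunks again.
days = {
    "June 11": (1044, 293),
    "Oct. 2":  (1009, 209),
    "Oct. 3":  (1189, 262),
    "Nov. 7":  (1000, 254),
}

for day, (trunk_pkts, host_pkts) in days.items():
    mean_hops = trunk_pkts / host_pkts
    print(f"{day}: {mean_hops:.2f} hops per data packet")
```

Run as-is, this reproduces the last row of the table (3.56, 4.83, 4.54, 3.93) and makes the October jump in effective path length easy to see.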

Our daily monitoring of PSN traps processed by the NOC has also led
us to conclude that our recommendations have improved network performance.
Before the correct giveback timer setting and the new 56kb links were
in the network, we would observe between 80,000-150,000 performance
related traps on a typical weekday. On Friday, 7 Nov 86, all of our
short-term recommendations were in place for the first time. On that
day, we observed less than 5000 performance related traps. Yesterday,
Mon 10 Nov 86, we processed just 10,000 of these PSN traps.

We still need the recommended additional trunking. Although the two
days of data we have so far look much better, they still indicate problems
in network performance. The number of traps is still fairly high. The
loss of a single trunk would bring the network into a very congested
state. In conclusion, there is little reserve capacity. We hope to
see our other recommendations implemented ASAP.

                                                   _John Wiggins_
