[Ken Pogran <pogran@ccq.bbn.com>: Recent ARPANET Performance]

Sun 14 Dec 86 23:00:14-EST

The message below confirms what I am sure some of you have suspected:
the ARPANET has had a relapse.


Received: from ccq.bbn.com by vax.darpa.mil (4.12/4.7)
        id AA15979; Sun, 14 Dec 86 12:44:18 est
Message-Id: <8612141744.AA15979@vax.darpa.mil>
Date: Sun, 14 Dec 86 12:32:30 EST
From: Ken Pogran <pogran@ccq.bbn.com>
Subject: Recent ARPANET Performance
To: Perry@vax.darpa.mil
Cc: CStein@ccw.bbn.com, McKenzie@j.bbn.com, Mayersohn@alexander.bbn.com,
        FSerr@alexander.bbn.com, BDlugos@ccy.bbn.com, Bartlett@cct.bbn.com,


Over the past few weeks, users may have noticed some degradation
of ARPANET performance over the level attained in early November.
BBN Communications Corporation believes that this change in
network performance correlates with outages in certain key
network lines; in particular, lines between CMU and DCEC and,
most recently and most significantly, TEXAS and BRAGG. The
effect of these line outages, which are not uncommon events,
demonstrates that the ARPANET is still "on the edge", particularly
where cross-country bandwidth is concerned.

The primary measure that we have of the degradation in network
performance is the increase in congestion- or performance-related
"traps" or exception reports made by the C/30 packet switches in
the network to the Network Operations Center. We reported on
November 12 that, following several changes made in the network,
the number of traps diminished by an order of magnitude.
Recently, the number of traps has increased substantially,
indicating degraded performance, although the number of traps has
not risen to the extremely high levels seen earlier this fall.

The DDN PMO and BBNCC are taking action to correct the present
line outage problems.

 Ken Pogran
 BBN Communications Corporation

The following data is provided by Bob Pyle of the BBNCC
Network Analysis Department:


From: rpyle@cc5.bbn.com
Date: 10 Dec 86 13:49:26 EST (Wed)
Subject: Recent ARPANET performance

We can divide time since mid-October into four periods to see the
effect of various trunking problems on ARPANET performance:

  Oct 27 - Nov  6   19.2 kb bottleneck at USC still in effect
  Nov  7 - Nov 24   "Good" period
  Nov 25 - Dec  5   CMU-DCEC line out of service
  Dec  8 - Dec  9   CMU-DCEC and TEXAS-BRAGG lines out of service

Each of these periods is characterized by a more-or-less constant
production of performance-related traps with fairly abrupt transitions
between them (the last period is of course too short as yet to talk
about in statistical terms). The most numerous trap is the 63 trap,
long wait for all8. The table below shows the average number of 63
traps per day and the average total number of traps per day on the
ARPANET for the four periods (workdays only, no weekends, no
holidays):

  Time Period  | 63 traps | total traps
  -------------|----------|------------
  10/27 - 11/6 |   72190  |   101115
  11/7 - 11/24 |    4186  |     9954
  11/25 - 12/5 |   11990  |    23681
  12/8 - 12/9  |   34602  |    68352


This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:37:00 GMT