Some Internet gateway performance data


Mills@UDEL.EDU
Wed, 23 Mar 88 23:36:06 EST


        Node    Mpkt  UpHr    PPS  Drop  Quench
        ---------------------------------------
            1   3.49   100   9.75  0.17    0.04
            2  18.48   260  19.72  0.38    0.39
            3   6.33   102  17.33  0.16    0.17
            4  15.08   262  15.99  0.31    0.45
            5  19.36   266  20.20  0.85    0.18
            6   2.97    48  17.35  0.17    0.14
            7   7.02   262   7.44  0.71    0.04
        ---------------------------------------
        Total  72.74  1299  15.55  0.34    0.18

The "Mpkt" column shows the aggregate throughput in megapackets for all output
queues, including serial lines and Ethernets. The "UpHr" column shows the
aggregation interval in hours. The "PPS" column shows the resulting throughput
in packets per second, which is the "Mpkt" column divided by the "UpHr"
column, adjusted to the proper units. The "Drop" and "Quench" columns show the
percentage of packets dropped and quenched, respectively; in the "Total" row
these two columns show the average of the column itself. The existing NSFNET
Backbone clearly meets the performance objective of a drop rate under one
percent.
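The "PPS" figures follow directly from the other two columns, as a quick
sanity check will show. A minimal sketch in Python, using the table values
above (the per-node results agree with the published figures to within the
rounding of the inputs):

```python
# Per-node (Mpkt, UpHr) figures from the table above.
data = {
    1: (3.49, 100), 2: (18.48, 260), 3: (6.33, 102), 4: (15.08, 262),
    5: (19.36, 266), 6: (2.97, 48),  7: (7.02, 262),
}

def pps(mpkt, up_hr):
    """Convert megapackets per up-hour to packets per second."""
    return mpkt * 1e6 / (up_hr * 3600)

for node, (mpkt, up_hr) in data.items():
    print(f"node {node}: {pps(mpkt, up_hr):.2f} PPS")

# The Total row is the aggregate, not the average of the rows.
print(f"total: {pps(72.74, 1299):.2f} PPS")
```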

For comparison, the following table shows the performance of the
ARPANET/MILNET gateways for the week ending 21 March. So far as can be
determined, each gateway is connected to two 56-kbps data paths.

                ID     Mpkt  UpHr    PPS   Drop
                -------------------------------
                 1     4.83   144   9.32   7.26
                 2     6.15   144  11.86   8.18
                 3     7.06   146  13.48   7.40
                 4     7.03   139  14.08  12.87
                 5     3.14   145   6.00   0.83
                 6     3.75   109   9.54   3.23
                 7     5.07   146   9.66   2.85
                 8     2.76   129   5.95   3.65
                -------------------------------
                Total 39.79  1101  10.04   5.78

As is evident from these figures, the NSFNET Backbone Fuzzballs carry over
fifty percent greater throughput per node than the ARPANET/MILNET gateways,
with a drop rate over ninety percent lower. Note that this comparison may be
unfair in two ways: first, the ARPANET/MILNET gateways are connected to
networks, not trunks, which can have large dispersive delays; second, the
NSFNET Backbone Fuzzballs are connected to Ethernets, which provide no
insulation against unruly traffic generators.
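The two comparison figures quoted above can be checked from the "Total" rows
of the two tables; a quick sketch:

```python
# Totals from the two tables above.
nsfnet_pps, nsfnet_drop = 15.55, 0.34
arpanet_pps, arpanet_drop = 10.04, 5.78

throughput_gain = (nsfnet_pps / arpanet_pps - 1) * 100   # percent greater
drop_reduction = (1 - nsfnet_drop / arpanet_drop) * 100  # percent less

print(f"throughput per node: {throughput_gain:.0f}% greater")  # ~55%
print(f"drop rate: {drop_reduction:.0f}% less")                # ~94%
```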

From measurements made last July and reported in last year's SIGCOMM paper,
the selective-preemption policy made a whale of a difference. The case for the
recently installed source-quench policy is less clear, although there is
recent evidence that it is in fact effective for those hosts that respond to
quench messages. However, even if the policies as crafted and the Fuzzball
implementations are suboptimal and change next Monday, the data above should
leave no doubt that fairness policies and queue disciplines similar to these
will be necessary for future generations of connectionless packet switches and
gateways.

Dave
