Re: maximum Ethernet throughput

Ken Pogran
Wed, 9 Mar 88 17:56:36 EST


Getting "maximum throughput" from a LAN has been a challenge
since the inception of Local Area Network technology back in the
'70s. Often there's a disparity between what the LAN specs
permit (at the "MAC" layer) and what practical controller
implementations are capable of delivering. The challenge is to
implement a controller that can deliver something approaching the
maximum throughput without being prohibitively expensive (i.e.,
from a vendor's perspective: "Well, we could build one that would
go that fast, but it would price us out of the market."). So one
sees compromises in what a SINGLE flow can obtain from a LAN.

In ETHERNET implementations, the compromise usually centers on
how fast one can "turn around" a controller following one
transaction (packet sent or received) to begin another one. On
transmit, this limits how quickly you can send on an unloaded
net. There are (or at least there were five years ago) some
controller products that did a lot of "housekeeping" between
outgoing packets, lowering throughput (I used one; I was
disappointed). On receive, the effect is much more
pernicious: If your controller takes "a long time" to get ready
to receive the next incoming packet, and the controller sending
to you has a faster turn-around than you do, your controller
might be "blind" to the net and miss a packet, requiring
retransmission by a higher-level protocol.
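To make the receive-side problem concrete, here's a little sketch of a fixed-cadence sender against a receiver that needs housekeeping time between packets. All the timing numbers are made up purely for illustration; no real controller is being modeled.

```python
# Sketch: why a slow-turnaround receiver "goes blind" to the net.
# All timings are hypothetical illustrative values.

def packets_received(n_packets, sender_gap_us, receiver_turnaround_us):
    """Count packets caught when the receiver needs
    `receiver_turnaround_us` microseconds of housekeeping after
    each packet before it can listen again."""
    received = 0
    ready_at = 0.0  # time at which the receiver can next accept a packet
    for i in range(n_packets):
        arrival = i * sender_gap_us  # sender transmits at a fixed cadence
        if arrival >= ready_at:
            received += 1
            ready_at = arrival + receiver_turnaround_us
        # else: receiver still busy -- packet missed, and a
        # higher-level protocol must retransmit it
    return received

# A sender with faster turnaround (100 us between packets) than the
# receiver (150 us of housekeeping) costs you every other packet:
print(packets_received(10, sender_gap_us=100, receiver_turnaround_us=150))  # -> 5
print(packets_received(10, sender_gap_us=100, receiver_turnaround_us=50))   # -> 10
```

The point of the sketch: it isn't the absolute speed of either controller that matters, but the RELATIVE turnaround times of sender and receiver.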

Applications can't bank on getting a significant fraction of a
LAN; after all, what would happen when two (or ten) of 'em happen
to run simultaneously? On the other hand, software implemented
in an environment that only contains "slow" controllers might
break in embarrassing ways when employed on systems with quick
turn-around controllers and a lightly loaded net! Which brings
up a good point for developers of software that runs right above
the MAC layer: make sure it'll run on hardware "quicker" than
what it was developed on!

> When designers were thinking of Ethernet, I rather suspect
> they might not have considered the possibility that one host
> could actually use all 10Mb of bandwidth.

The mechanisms of the ETHERNET care not a whit whether the next
packet on the line is from the same guy as the last, or from
someone else. I think it's clear that there were (are?) a number
of CONTROLLER designs that assumed that no individual host would
want to TRY to use all 10 Mb/s. I haven't looked at the insides
of any recent controllers (or the timing specs of recent controller
chips) but I'd be willing to bet that the choke point is more
likely to be either the controller or its interface to the host,
rather than the specs of the ETHERNET itself.
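For reference, here's what the specs of the ETHERNET itself permit on the wire, before any controller gets in the way. The constants (8-byte preamble, 96-bit-time inter-frame gap, 64- and 1518-byte frame limits) are from the 10 Mb/s Ethernet spec; the little helper function is just my arithmetic.

```python
# Back-to-back frame rates allowed by the 10 Mb/s ETHERNET medium
# itself, independent of any controller implementation.

BIT_RATE = 10_000_000  # bits per second
PREAMBLE = 8           # bytes of preamble + start-of-frame delimiter
IFG_BITS = 96          # inter-frame gap: 9.6 microseconds at 10 Mb/s

def max_frames_per_second(frame_bytes):
    """Maximum frame rate the medium allows for a given frame size."""
    bits_on_wire = (PREAMBLE + frame_bytes) * 8 + IFG_BITS
    return BIT_RATE / bits_on_wire

print(round(max_frames_per_second(64)))    # minimum-size frames -> 14881
print(round(max_frames_per_second(1518)))  # maximum-size frames -> 813
```

Any controller that can't turn around in the 9.6-microsecond inter-frame gap is, by definition, giving up some of those numbers.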

This all applies to token-ring LANs, too, of course. And the
faster the aggregate data rate on the LAN, the more these
arguments apply. (So I might, for example, be able to get a
greater PERCENTAGE of a 4 Mb/s Token Ring than I can of a 10 Mb/s
ETHERNET.)
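The arithmetic behind that parenthetical: a controller's turnaround time is fixed, but the time a frame spends on the wire shrinks as the LAN gets faster, so the turnaround eats a larger share of each cycle. The 500-microsecond turnaround below is a made-up figure, chosen only to make the effect visible.

```python
# Why a fixed controller turnaround costs you a smaller PERCENTAGE
# of a slower LAN.  Turnaround figure is hypothetical.

def fraction_of_lan(bit_rate, frame_bytes, turnaround_s):
    """Fraction of the raw data rate one station can obtain when it
    needs `turnaround_s` seconds of housekeeping between frames."""
    tx_time = frame_bytes * 8 / bit_rate
    return tx_time / (tx_time + turnaround_s)

# Same controller (500 us turnaround), same 1500-byte frames:
print(round(fraction_of_lan(4_000_000, 1500, 500e-6), 2))   # 4 Mb/s ring -> 0.86
print(round(fraction_of_lan(10_000_000, 1500, 500e-6), 2))  # 10 Mb/s net -> 0.71
```

The same station gets a bigger slice of the slower ring, even though the absolute throughput is of course lower.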

Any comments from designers of LAN controllers?

Ken Pogran

This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:41:30 GMT