Barry Shein (bzs%bu-cs.bu.edu@bu-it.BU.EDU)
Thu, 10 Mar 88 22:06:59 EST
Breathtaking, in a word -- who made that comment about Nobel Prizes?
At any rate, there is another interesting issue here:
>(but ask yourself: do you really want workstations
>that routinely use 100% of the ethernet bandwidth? I'm pretty
>sure we don't and we're not running this tcp on any of our
My temptation is to ask the converse: do I really want to rely on merely
mediocre algorithms to protect me from that problem?
It seems that, given what we might call near-capacity algorithms (for
ethernets; of course new wire technologies such as direct fiber
hookups will make this interesting again), we need to think about
rational ways to administer such networks.
In the trivial case we could isolate many of these workstations on
their own, barely shared ethernets, as we already do here, so it is
less of a problem. Perhaps this would spur vendors to provide hardware
to make that even easier and more economical. This would certainly be
useful in the case of networked file systems using a client/server
model (e.g. diskless or disk-poor clients.)
Beyond that, I have often thought about the idea of a network
"throttle": a settable parameter that would cap the maximum throughput
(packets output per second, for example) a machine allows itself.
Obviously that requires voluntary compliance, although it could be an
extension of window advertising -- that is, making the window behavior
tunable by the system administrator rather than always calculated for
maximum throughput based upon blind assumptions about resources.
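The throttle idea above could be sketched as a simple token-bucket
rate limiter -- a minimal, hypothetical illustration (the class and
parameter names are my own, not anything from the original
discussion), assuming the administrator sets a packets-per-second cap
and the sending host checks it voluntarily before each transmission:

```python
import time

class PacketThrottle:
    """Token-bucket sketch of an administrator-settable cap on
    packets sent per second. Hypothetical illustration only."""

    def __init__(self, max_pps):
        self.max_pps = max_pps        # administrator-settable limit
        self.tokens = float(max_pps)  # start with one second's allowance
        self.last = time.monotonic()

    def try_send(self):
        """Return True if a packet may be sent now, else False."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at one
        # second's worth so bursts stay bounded.
        self.tokens = min(float(self.max_pps),
                          self.tokens + (now - self.last) * self.max_pps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

if __name__ == "__main__":
    throttle = PacketThrottle(max_pps=100)
    sent = sum(1 for _ in range(1000) if throttle.try_send())
    print(sent)  # roughly the per-second cap in a tight loop
```

Tying the same knob to window advertising instead -- shrinking the
advertised receive window under administrative policy -- would achieve
compliance at the protocol level rather than relying on each sender's
good behavior.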
Interesting, at any rate...
-Barry Shein, Boston University
This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:41:31 GMT