Re: offloading the protocols


Frank Kastenholz (KASTEN@MITVMA.MIT.EDU)
Sat, 19 Mar 88 09:38:16 EST


More important than devising protocols with OPs (outboard processors)
in mind is moving the data directly from the user's space to the OP -
it should not pass through some central network application.
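
As a sketch of what "directly" means here - the interface and every
name in it are hypothetical, made up for illustration rather than
taken from any real product - the host would hand the OP a descriptor
pointing at the user's buffer and let the adaptor pull the data
itself:

    /* Hypothetical host-to-OP hand-off.  The host queues a pointer to
     * the user's buffer; the OP DMAs the data and runs TCP (checksum
     * included) on its own processor.  One copy instead of two. */
    #include <stdio.h>

    struct op_send_desc {
        int          conn;      /* OP's handle for the open TCP connection */
        const char  *user_buf;  /* data, still sitting in the user's space */
        unsigned     len;       /* byte count */
    };

    /* Stub standing in for the real "post to adaptor" primitive. */
    int op_post_send(const struct op_send_desc *d)
    {
        printf("OP pulls %u bytes for conn %d straight from user space\n",
               d->len, d->conn);
        return 0;
    }

    int main(void)
    {
        static const char block[] = "one disk block of file data";
        struct op_send_desc d = { 1, block, (unsigned)sizeof block - 1 };
        return op_post_send(&d);
    }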

A second (equally important) issue is trusting your local I/O
channel.

The things that really kill protocol processing are checksums
and "administrative" I/O (separate acks, etc.).

By trusting the local I/O channel, you do not need to checksum packets
going between the OP and the host, ack them, and so on.
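
For scale, the work being skipped is the standard Internet
ones-complement checksum (RFC 793) - the minimal, textbook form of
the loop is sketched below (the odd trailing byte is taken as the low
byte of a zero-padded word; real implementations are more careful
about byte order):

    /* Ones-complement Internet checksum - the O(n) pass over the data
     * that trusting the host-OP channel lets you run once, on the OP,
     * instead of again on the host. */
    #include <stdio.h>

    unsigned short in_cksum(const unsigned short *w, int len)
    {
        unsigned long sum = 0;

        while (len > 1) {                    /* sum the 16-bit words */
            sum += *w++;
            len -= 2;
        }
        if (len == 1)                        /* trailing odd byte */
            sum += *(const unsigned char *)w;

        sum = (sum >> 16) + (sum & 0xffff);  /* fold the carries back in */
        sum += (sum >> 16);
        return (unsigned short)~sum;
    }

    int main(void)
    {
        unsigned short buf[4] = { 0x0001, 0x0203, 0x0405, 0x0607 };
        printf("checksum: 0x%04x\n", in_cksum(buf, (int)sizeof buf));
        return 0;
    }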

A very rough, empirical model that I have dreamed up for a TCP file
transfer under a non-kernel TCP implementation (e.g. Wollongong, KNET,
etc.) is:

    The cost of moving the data from disk to user space is X, from
    user space to the network application is X, running the TCP
    checksum is X, and moving the data from the network application
    to the I/O adaptor is X. The total cost is 4X, and X appears to
    be O(n) in the amount of data moved.

This model is not proven, but it seems to be borne out by some
empirical testing of file transfers through a VAX using TCP and UDP:
both achieved about the same throughput, but TCP took 100% of the CPU
and UDP about 75%. The transfers were done via FTP/TCP and
NFS/RPC/UDP, so the only effective difference was the TCP checksum -
one X out of four, which matches the 25% gap.

Moral of the story: if you cannot move the data from the user's space
to the OP directly (i.e., it must go through an application-level
network process first), you only save about 25% - the checksum's one
X out of four. A direct path would eliminate one of the remaining
copies as well, as the tally below shows.
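
Tallying the X's from the model above makes the arithmetic concrete.
This throwaway sketch just counts passes over the data; the
direct-path case is my extrapolation from the model, not something
the measurements covered:

    /* Count X's per the 4X model; each X is one O(n) pass over the data. */
    #include <stdio.h>

    int main(void)
    {
        int host_only = 4;  /* disk->user, user->netapp, checksum, netapp->adaptor */
        int op_cksum  = 3;  /* checksum moves to the OP; both copies remain */
        int op_direct = 2;  /* disk->user, user->OP; the OP checksums the data */

        printf("offload checksum only: save %d%%\n",
               100 * (host_only - op_cksum) / host_only);    /* 25% */
        printf("offload + direct path: save %d%%\n",
               100 * (host_only - op_direct) / host_only);   /* 50% */
        return 0;
    }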

Remember, this is all empirical! Real testing needs to be done.

Frank Kastenholz
Atex Inc.

All opinions are mine - not my employer's.....


