29 Aug 1988 11:18-EDT
As John pointed out, the call queueing is used to reduce the
probability that a SYN arrives during a window where there is no listening
connection to bind it to, thus causing a RESET to be returned. Queueing
can only reduce it, since it is always possible for there to be more
arriving SYNs than there are remaining free queue slots. However, I regard
specifying the size to the kernel as being more for the purpose of "limiting"
resources (such as processes [from the forks] and cpu cycles) than for
reducing that probability.

As with all implementations, tradeoffs have been made. I feel that
one of the "minuses" with the BSD implementation is that not only does the
listen cause the arriving SYN to be accepted and bound to a new TCP connection
block, but that the protocol machine is also started. Thus a connection may
proceed to the ESTABLISHED state and accept/ACKnowledge (usually 4K bytes of)
data before the application-level peer (process) is even created. This
prevents the process from:
	1) examining the IP-address identity of the caller before deciding
	   whether to ACKnowledge the SYN or to send a RESET,
What if X were to place a "collect" call to such an implementation
and send 4K data; then the receiver process start up and decides
it doesn't want to accept the call. Who pays for the 4K bytes?
	   (The receiver COULD make the 4K available to the application.)
2) checking its routing tables and applying administratively specified
per IP-address source routing, etc.,
	3) selecting initialization parameters based on IP-level parameters such as
TOS and options, or
	   Maybe the local system has a method for setting the TCP MSS (which
	   the spec says has to be in the SYN packet).
4) specifying initial TCP-level options, etc.
This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:43:14 GMT