Re: Graziano's Streams Query


Dave Crocker (dcrocker@TWG.COM)
21 Sep 88 17:12:00 PDT


Periodically, a query hits this list about Unix Streams and about the
vendors offering TCP implementations for it. The latest, from
Marco Graziano of Olivetti, asked a broad range of pointed questions and
so, it seems to me, forms a good base for a response that might be of
general interest.

Virtual Device Drivers

Streams uses modules and message-passing, in the kernel, to implement
protocol layers. (Unless streams is implemented in user space, there is
no such thing as a "streams program" such as FTP or Telnet. Streams refers
only to the kernel-level protocol core, usually limited to data link,
network and transport levels, although some higher-level protocols are done
inside streams.)

Communication between modules is STRICTLY message-based. The only other
access to information, by a module, is to call kernel subroutines. Modules
may either be single queue in and out (called a "module") or multiple
queues in or out (called a "driver").

1. Advantages: Streams imposes a discipline which forces highly modular
protocol coding. That can, of course, create terrible performance, but does
not need to. The other side of this discipline is that it can allow the
construction of functions which can be mixed and matched. It also means that
the task of creating protocols (i.e., in separate modules) can be partitioned
among different people easily.

One of the emerging major benefits of Streams is its utility in a
multi-processor environment. Properly implemented, streams modules may
operate in DIFFERENT address spaces. The only shared memory that is needed
is for message-passing and data-buffers.

One of the subtle benefits is portability. Highly integrated code has a
tendency to bury many of its operating-system-dependent assumptions. Streams
makes the assumptions clearly defined. To port streams code, you need only
to create a Streams environment. (This may turn out to be roughly the same
effort as porting/hacking another implementation, but the process is far
more predictable, and you are left with all of the protocol code being shared
between the original and new implementations.)

2. Are modules truly independent? Yes! Except, of course, that they must
agree on the format and rules of the messages that are passed. That is, there
must be a well-defined semantic interface between modules.

3. Out-of-band access to modules: Modules are accessible as "devices".
For example, we supply /dev/arp. You can open it and then do IOCTLs with it.

4. Support of alternate transports: You betcha! TCP and UDP are supported
as co-equals. Further, we have an OSI TP4.

5. NFS integration: NFS is an example of code above transport level which
resides in the kernel and is part of the Streams environment. It accesses
UDP via the standard Transport Provider Interface (TPI).

Access from user programs is via the Transport Library Interface (TLI). It
gets you across the user/kernel boundary and is competitive with Berkeley
sockets in terms of its role in life. TLI is user level; TPI is kernel level.
In effect, TPI does for kernel processes what TLI does for user processes.

(By the way, there is at least one NFS that does not use TPI, but I do not
know any other details of its implementation.)

Are System V implementations able to use BSD kernel functions? Well, mostly;
there is a pretty good mapping of SVR3/Streams calls to the kernel onto
BSD kernel functions.

1. Device drivers are, in fact, kept alive by a daemon, typically. In the
TCP case, the daemon sets up the multiplexed set of stream modules and holds
the file descriptor.

2. Global data structures: Basically, these are a no-no. The only
shared data that is allowed, other than what you would access through standard
kernel functions, is via message-passing. Keeping stray shared data structures
around leads to tough questions about how the structure will be shared in
a multi-processor environment. The safest way is to have a query-response
discipline between the module that owns the structure and any that need to
"read" it.

We have recently been embarrassed to find a couple of places where we commit
this sin of sharing. In both cases, the solutions are conceptually simple,
involve small programming effort, and will not have any performance impact.

3. Message buffer passing: This is done via pointer (message block) passing.
Actual data are not passed. For example, TCP has a header message block that
points to the user data. IP adds its header message block and points to the
TCP mb. ARP sets up the ethernet header mb and points to the IP mb. When ARP
or the device driver is done with this chain -- note that this is a
scatter/gather model, which can be quite nice for some devices -- it frees
it. However, TCP holds on to the data block until it gets an ACK back for
the relevant sequence number.

4. Kernel modifications required: Should not be any! You can hide quite
a bit of convenient non-discipline by adding things to the kernel, but it
is not in the spirit of Streams. Sockets, in particular, can be emulated on
top of TLI (i.e., within each user's application) with a fair degree of
faithfulness and no meaningful performance impact. The one exception to this
rosy picture is select(), which currently does not map to poll(). This
required a select() driver (in Streams, rather than as part of the generic
operating-system kernel).

5. Performance measurement: Probably the best answer is to ask for the
next question. There really are not any serious performance measurement or
instrumentation tools. You can get information about buffer exhaustion and
can use the strace logging facility to record useful information, but that
seems to be about it.

Dave Crocker
VP, Engineering
The Wollongong Group


