Sun, 30 Oct 88 03:05:52 -0600
A few years back, I, along with Steve Dyer, Charles Lynn, and Dan
Tappan, set out to rationalize output flushing in the various BBN
implementations of telnet over TCP/IP. The systems involved were the
C/30 TAC (Terminal Access Controller), the C/70, the DECSystem-20 (running
a BBN version of TOPS-20), and a home-grown terminal server called
a Fibrenet TC. Dave Plummer and Jon Postel were also valuable sources
of information.
Because of the variety of implementations involved, we decided the
only practical approach was to use (what recent TCP-IP messages have termed)
an "open loop" scheme. The telnet client does not attempt to identify
interrupt or flush characters; it simply passes them to the telnet
server. It is the responsibility of the server to initiate an output
flush, if appropriate.
Many operating systems have some character (or string of characters)
which, when typed at a hardwired terminal, will cause output to be
flushed (i.e. discarded). Presumably, receipt of this character (or
characters) causes a flush signal to be sent to the terminal driver,
which discards the contents of its output buffer. The particular
character (e.g. ^C, ^O, HX, etc.) is irrelevant.
The job of the telnet server, upon receipt of a flush signal, is to
emulate the action of a hardwired terminal driver as closely as possible.
Herein lies the first caveat:
The telnet output stream contains embedded
option negotiations. Output cannot simply be
discarded, or option negotiations may be lost.
Thus, even while a flush is in progress, the
telnet client still must scan the incoming stream
for option negotiations; normal characters can be
discarded.
Upon receipt of a flush signal, the telnet server must as quickly as
possible notify the client that a flush is in progress. This implies two
things:
a) The notification should be attached to the very next TCP
segment sent to the client, even if there are several
thousand bytes of TCP data buffered at the server's machine.
b) As soon as a TCP segment with a flush indication arrives at
the client's machine, the client should be signaled. This
is true EVEN IF the TCP segment arrives out of sequence or
is a retransmission. N.B.: The data contained in the segment
should NOT be presented out-of-sequence; the client simply
needs to know to begin flushing.
Not surprisingly, this requires careful coding of the TCP
implementations used by both the server and client; I'll address this
issue again toward the end of the message.
So, how is this accomplished? If the "URGENT" bit is set in a TCP
segment, the segment contains the identification of an "interesting" byte
in the data stream. The sequence number of the "interesting" byte is
calculated as follows:
sequence number = Segment Sequence Number + Urgent Pointer
WARNING WARNING WARNING: Certain pages of the TCP specification (both
RFC 793 and MIL-STD 1778), as well as the Stallings
Handbook, state that this calculation yields the
sequence number of the byte FOLLOWING the
"interesting" byte. This is WRONG. Page 8 of the
latest "Official Internet Protocols" (RFC 1011)
provides a clarification. Please check your
implementations!
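The corrected calculation can be shown with a small worked example; the helper name `interesting_seq` is purely illustrative:

```python
# Per the RFC 1011 clarification: segment sequence number plus urgent
# pointer yields the sequence number of the "interesting" byte itself,
# NOT the byte following it (as RFC 793 / MIL-STD 1778 mistakenly say).

def interesting_seq(segment_seq, urgent_pointer):
    """Sequence number of the "interesting" (urgent) byte."""
    return (segment_seq + urgent_pointer) & 0xFFFFFFFF  # 32-bit wraparound

# A segment starting at sequence 1000 with urgent pointer 25 marks
# byte 1025 as "interesting".
assert interesting_seq(1000, 25) == 1025
```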
Whenever the telnet server receives a flush signal, it inserts the two-
character sequence IAC DM in the outgoing data stream, and instructs its
TCP to mark the second character (DM) as "interesting". Note that multiple
flushes are handled correctly; outgoing TCP segments will always identify
the latest (i.e. highest-sequence-numbered) "interesting" byte.
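A minimal sketch of that server-side step, assuming an invented `OutgoingStream` buffer model (a real server would additionally ask its TCP to set URGENT on outgoing segments, e.g. via MSG_OOB in the Berkeley sockets API):

```python
# Hypothetical sketch: on a flush signal, append IAC DM to the outgoing
# stream and record the DM's sequence number as the "interesting" byte.

IAC, DM = 0xFF, 0xF2

class OutgoingStream:
    def __init__(self, initial_seq=0):
        self.buffer = bytearray()       # data queued for transmission
        self.next_seq = initial_seq     # seq number of the next byte appended
        self.interesting_seq = None     # latest urgent mark, if any

    def send(self, data):
        self.buffer += data
        self.next_seq += len(data)

    def flush_signal(self):
        # The DM itself is the "interesting" byte.  A second flush just
        # overwrites the mark with a higher sequence number, which is why
        # multiple flushes collapse into the latest one.
        self.send(bytes([IAC, DM]))
        self.interesting_seq = self.next_seq - 1

stream = OutgoingStream()
stream.send(b"some pending output")   # 19 bytes, seqs 0-18
stream.flush_signal()                 # IAC at seq 19, DM at seq 20
```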
Action by the telnet client is fairly straightforward. When an
"interesting" byte identification arrives at the telnet client's machine, the
client will be in one of two states:
1) Processing incoming data normally. The TCP implementation will
immediately notify the client that there is an "interesting"
byte somewhere ahead in the data stream, and the client will
enter state 2.
2) Processing a flush (handling option negotiations, but discarding
normal data) until the "interesting" byte has been read and an
IAC DM sequence has been found. The TCP implementation will
simply update its record of the "interesting" byte's sequence
number. Thus two flushes in quick succession will be handled as
a single longer flush: the client will find the first IAC DM,
but discover that the "interesting" byte hasn't yet been read.
Note that the preceding discussion assumes that:
a) An "interesting" byte is read in-sequence. If byte 101 is
"interesting", it will be read after bytes 99 and 100. For
telnet, it's useless to be able to read the "interesting" byte
out-of-sequence. Instead, the TCP implementation informs the
telnet client asynchronously that an "interesting" byte lies
somewhere ahead in the data stream.
b) After a TCP read, the telnet client is able to inquire whether
the "interesting" byte has been read. This is necessary so
that the client can determine when to stop flushing.
HINT: Because of the specification error regarding the calculation of
the sequence number of the "interesting" byte, some TCP implementations
may incorrectly mark the byte FOLLOWING the DM as "interesting". A
strict interpretation of the telnet specification will result in the
client flushing output forever, because the "interesting" byte will
not yet have been read at the time the IAC DM is found, and the client
will search forever for another IAC DM sequence. Thus, I recommend
unconditionally terminating the flush after the "interesting" byte
has been read, even if an IAC DM has not been located. Your users
will be a lot happier!
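The client-side behavior above, including the recommended safeguard, can be sketched as a small state machine. All names here (`TelnetClient`, `urgent_notification`, `receive`) are invented for illustration, and real option-negotiation parsing is reduced to recognizing the IAC DM marker:

```python
IAC, DM = 0xFF, 0xF2

class TelnetClient:
    def __init__(self, initial_seq=0):
        self.flushing = False         # True while in state 2
        self.interesting_seq = None   # set asynchronously by the TCP layer
        self.read_seq = initial_seq   # seq number of the next byte to read
        self.prev = None              # previous byte (IAC may straddle segments)
        self.output = bytearray()     # what actually reaches the terminal

    def urgent_notification(self, seq):
        # TCP reports an "interesting" byte ahead in the stream; a later
        # notification simply raises the mark, so flushes coalesce.
        self.interesting_seq = seq
        self.flushing = True

    def receive(self, data):
        for byte in data:
            cur_seq = self.read_seq
            self.read_seq += 1
            urgent_read = (self.interesting_seq is not None
                           and cur_seq >= self.interesting_seq)
            if self.flushing:
                if self.prev == IAC and byte == DM:
                    if urgent_read:
                        # Normal end of flush: IAC DM found AND the
                        # "interesting" byte has been read.
                        self.flushing = False
                        self.interesting_seq = None
                    # else: marker from an earlier flush; keep flushing.
                elif urgent_read:
                    # Safeguard: "interesting" byte read but no IAC DM
                    # found (e.g. the peer marked the byte AFTER the DM).
                    # Stop flushing rather than discard output forever.
                    self.flushing = False
                    self.interesting_seq = None
                # Bytes seen during a flush are discarded.
            else:
                self.output.append(byte)
            self.prev = byte
```

For example, if the bytes "junk" IAC DM "ok" arrive during a flush whose urgent mark points at the DM, the junk and the marker are discarded and normal output resumes with "ok"; if a broken peer marks the byte after the DM instead, the safeguard still ends the flush one byte later.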
I previously mentioned that careful coding of the server and client
TCPs is necessary for optimal performance. In particular, every TCP segment
sent by the server's machine should contain the latest "interesting"
sequence number. If possible, this should even apply to retransmissions.
The client machine's TCP should immediately check all incoming segments for
updated "interesting" byte information. The goal in both cases is to ensure
that the telnet client is notified of the flush absolutely as soon as possible.
These implementation guidelines become more critical as the TCP receive
window size of the client is increased; the amount of TCP data buffered in the
server and client machines can become very large indeed!
A LOT of work was involved, but our telnet project at BBN was largely
successful. The improvement was dramatic for 300-baud terminals, but even
at 9600 baud the reduction in output (after sending a character which
resulted in a flush) was quite noticeable when displaying short lines.
I've been planning to write an RFC on the subject of TCP URGENT
and its use with telnet; given the sudden flurry of interest, perhaps
now would be a propitious time. An expanded (and more coherent!) version
of this message may someday appear.
Best of luck.
University of Wisconsin-Madison
and BBN Communications Corporation