William Westfield (BILLW@MATHOM.CISCO.COM)
Wed 13 Jul 88 00:50:19-PDT
Well, the conversation has died down somewhat, so it's time to
respond to several people's comments.
There are Telnet controls (eg IP, AO, AYT...) that can appear at any
time in the stream, and a correct implementation must check for them.
Yes, Yes, I understand all that. What I am proposing is that a telnet
server/client be able to say "I promise that I will never send any more
option negotiations OR telnet controls. In effect, I am turning off the
telnet protocol, and from now on will be sending you just data." Most
servers certainly never send IP, AYT, etc, and my experience says that
AO is not very effective. Most clients do not trap signals locally
and send the equivalent telnet control (though this may change a la
the Cray local edit option). The actual options would look something like:
DO SUPPRESS-TELNET requests that the receiver no longer use the telnet protocol.
WILL SUPPRESS-TELNET offers to no longer use the telnet protocol.
DON'T SUPPRESS-TELNET demands that the receiver continue to use telnet.
WONT SUPPRESS-TELNET demands to continue using the telnet protocol.
The only strangeness is that once the option is successfully negotiated,
there is no way for it to be turned off.
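As a sketch, the byte sequences involved would look something like the
following. Note that the option number (200 here) is invented purely for
illustration - no number was ever assigned for SUPPRESS-TELNET:

```python
# Standard telnet command bytes (RFC 854); the option code is hypothetical.
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254
SUPPRESS_TELNET = 200  # invented for illustration only

def offer(option):
    """WILL: offer to stop sending telnet negotiations and controls."""
    return bytes([IAC, WILL, option])

def request(option):
    """DO: ask the peer to stop sending telnet negotiations and controls."""
    return bytes([IAC, DO, option])

def refuse(option):
    """DONT: demand that the peer continue to use the telnet protocol."""
    return bytes([IAC, DONT, option])

# A successful exchange is IAC WILL SUPPRESS-TELNET answered by
# IAC DO SUPPRESS-TELNET; after that, the agreeing direction carries
# raw data only, and (the one strangeness) cannot be switched back.
```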
mcc@ETN-WLV.EATON.COM (Merton Campbell Crockett) asks:
Are you suggesting that one eliminate line mode and only use binary
mode? Or are you suggesting that there be an integration of TELNET
and the local systems terminal driver?
No, line mode would still be there. If line mode was in effect before
the SUPPRESS-TELNET option was negotiated, it would remain in effect.
I was suggesting that SUPPRESS-TELNET would provide the biggest gain
in binary mode, since in addition to no longer scanning the datastream
for IACs, you would also no longer have to handle special End of line
conditions (eg CR-NULL --> CR).
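To make the per-byte cost concrete, here is a minimal sketch (not a full
telnet engine - it ignores commands and option negotiation entirely) of
the scan a receiver must do today, against the plain copy that would
suffice once SUPPRESS-TELNET were in effect:

```python
IAC, CR, NUL = 255, 13, 0

def descan(segment: bytes) -> bytes:
    """Per-byte scan: un-double IAC IAC and map CR NUL back to bare CR."""
    out = bytearray()
    i = 0
    while i < len(segment):
        b = segment[i]
        if b == IAC and i + 1 < len(segment) and segment[i + 1] == IAC:
            out.append(IAC)   # IAC IAC -> one literal 255 data byte
            i += 2
        elif b == CR and i + 1 < len(segment) and segment[i + 1] == NUL:
            out.append(CR)    # CR NUL -> bare CR (NVT end-of-line rule)
            i += 2
        else:
            out.append(b)     # every other byte still gets looked at
            i += 1
    return bytes(out)

def passthrough(segment: bytes) -> bytes:
    """After SUPPRESS-TELNET: hand the whole segment over untouched."""
    return segment
```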
The suggestion has nothing to do with integrating telnet into the
local terminal driver. However, one of the spinoff features of the
option would be that this would become much easier. (A program could
do the initial option negotiation, and then just "attach" the tcp
connection to the terminal driver. The terminal driver would no
longer have to know anything about the telnet protocol.)
From: Doug Nelson <08071TCP%MSU.BITNET@cunyvm.cuny.edu>
Why use Telnet if you don't want it? It certainly isn't that difficult
to keep scanning for IAC, and Telnet is certainly not a very complex
protocol. And you always have the option of rejecting any changes in
options if you don't want to deal with them.
The reason for using telnet is that it is a standard protocol, and has the
capability of negotiating things (eg local echo) that I need to
negotiate. The reason for getting rid of the telnet protocol after I am
done with negotiating these options is that scanning for IACs is more
difficult than you think. The telnet protocol requires that the telnet
process look at every character. While this may not cost much on a telnet
server whose operating system eventually has to look at every character
anyway, it can make a big difference to something like a terminal server.
One of the motivations for my original message was that we have recently
been working on improving performance in the cisco Terminal Server. After
this work, I noticed that our TTY-daemon process (this is the process that
would feed a printer from a tcp connection, for example) used a factor of
50 (fifty!) less CPU time on a TCP stream than it did on a telnet connection
(using straightforward implementations of each - the stream dealt with
things a segment at a time, and the telnet dealt with them a character at
a time). Admittedly, there are obvious improvements we can make to the
telnet process, but the fact that the obvious implementation is so bad
points strongly to a place in the protocol where improvements can be made.
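For illustration, the shapes of the two implementations compared above look
roughly like this (a Python sketch, not the actual cisco code; handle_iac
stands in for whatever command processing a real telnet must do, and the
factor of 50 is the measurement reported above, not something this sketch
claims to reproduce):

```python
IAC = 255

def stream_pump(segments, write):
    """TCP stream path: one operation per segment."""
    for seg in segments:
        write(seg)

def telnet_pump(segments, write, handle_iac):
    """Telnet path: one decision per byte, even when no IACs ever arrive."""
    for seg in segments:
        for b in seg:
            if b == IAC:
                handle_iac()          # command processing (elided here)
            else:
                write(bytes([b]))     # per-character output
```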
In what circumstances would you want to use this feature? In a general
interactive environment, it would seem like you'd want to keep your
At cisco, we have half a dozen or so 38.4kbps graphics terminals that spend
their time connected to a CAD package running on a DEC20. Once the connection
has been set up, no telnet options are negotiated, and no telnet escapes are
sent. We also have two laserwriters running at 38.4kbps. Although one
usually talks tcp streams to these, telnet can be used for interactive
purposes. Many of our customers use our terminal servers in "milking
machine" mode, where telnet sequences are never sent after the initial
negotiation.
What concerns me, though, is that some Telnet implementations apparently
assume that no more options will be negotiated after the startup, and then
stop working when they encounter software that sends them, such as echo
toggles for password suppression.
Really? I didn't know of any vendors that did this. Care to name names?
The SUPPRESS-TELNET option is clearly more valuable to systems that operate
in remote-echo, character-at-a-time mode. This is most systems, however.
It would be an OPTION that could be refused, and it would operate
independently in each direction.
From: Mark Crispin <MRC@PANDA.PANDA.COM>
The performance problem you refer to (2 process switches/character) is
an artifact of the design of the Telnet server and operating system and
not a problem in the Telnet protocol itself.
In WAITS, Tenex, and TOPS-20, the Telnet server is in the same context
as the terminal driver (that is, it is part of the operating system).
Which is not to say that TOPS-20 virtual terminals could not be made much
more efficient if they didn't have to look for IACs and such. The TOPS-20
terminal driver is not exactly an example of efficiency - every character
is carefully examined and massaged several times on both input and output.
If it had to do extra process wakeups in addition to this, it would be
even worse.
This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:42:51 GMT