Digested messages


Vivian Neou (Vivian@SRI-NIC.ARPA)
Mon 31 Mar 86 18:09:16-PST


Due to a problem with the NIC mailer, it has come to my attention that
some of the recent TCP-IP messages may not have made it out to most of
the list. I've taken those messages and digestified them. My apologies
to those of you who already got the messages, but to insure that everyone
does receive them, I think the safest course is just to remail the whole
bundle.

Vivian
----------
Date: 30 Mar 1986 15:02-PST
Subject: Re: The NIC and FTPing host tables
From: STJOHNS@SRI-NIC.ARPA
To: dennis@CSNET-SH.ARPA
In-Reply-To: The message of Fri, 28 Mar 86 08:57:27 -0500 from Dennis Rockwell <dennis@SH.CS.NET>

In answer to the second question "Does the hostname server...",
No. They are both TCP connections and they both cause a job to
be started to serve them.

Mike
-------
Date: Sun, 30 Mar 86 23:48:20 pst
From: Earl Craighill <From: Earl Craighill <ejc%sri-gemini@sri-tsc>
Subject: Re: major changes to 1822

There is some data on ARPAnet performance for real-time traffic using
type-3 packets. We conducted exercises (experiments on the internet
are VERY difficult to set up or interpret) using packet speech over a
variety of networks (ARPAnet, WB SATnet, Atlantic SATnet, PRnet,
RINGnets, LEXnet) in June of 1983. The sites were NTA in Norway,
Lincoln Laboratory in Lexington, Mass. and SRI in California. We
estimated the likely range of delays in the ARPAnet for two segments--
SRI to LL, and LL to CSS. The maximum time for SRI-LL was 2200 ms.
The minimum route at that time was 11 hops, so the minimum time
(calculated, not measured) was about 300 ms. The LL-CSS route (min.
path 3 hops) was 90 to 550 ms. I won't mention the delays on the other
nets, even the satellite nets look better than the DARPAnet. With
increased traffic, these numbers have to increase. Also, high thruput
wasn't the difficulty since we used low-rate (2400 baud) coded speech.

This data indicates that type-3 wasn't great for real-time traffic. Of
course, the ARPAnet wasn't designed for real-time traffic. But the new
design shouldn't assume that datagrams are the "right" method for low
delay service. Further, flow-control algorithms may protect the
network as a whole, but won't help low-delay service, especially with
the anticipated growth in traffic (however, we did make effective use
of feedback from the PRnet to throttle back our offered rate). What is
needed is flow-enhancement, say, some sensible use of priority in PSN
queue management. Our experiments on the PRnet indicate that some
portion of "preferred" traffic can be supported without bringing the
network to its knees.

I would encourage more experimentation, especially at the subnet level.
Our measures were very coarse and could not identify reasons for the
high delays (imps throwing away packets, long holding times in queues,
rerouting because of congestion, ??). The current picture does not
look good. Hopefully, some experimental data may lead to a different
transport mechanism that will provide low-delay service.

Earl
-------
Date: Mon, 31 Mar 86 8:35:20 EST
From: Andrew Malis <malis@bbnccs.ARPA>
Subject: Re: major changes to 1822
In-Reply-To: Your message of Sun, 30 Mar 86 23:48:20 pst

Earl,

Thanks for your informative message. When we started working on
the new End-to-End, providing a low-delay service was not one of
our explicit goals. However, improving the priority queuing
characteristics inside the PSN itself was one our goals, and one
that we have paid much attention to. As a result, provided that
the hosts use the different priority levels intelligently,
high-priority traffic should experience a lower delay through the
new EE than low-priority traffic.

Of course, the upcoming release of the PSN only has these changes
in the EE section of the code. Upgrading the store-and-forward
to also use the updated priority queuing is under way, and
scheduled for a later release of the PSN.

Andy
-------
Date: Mon, 31 Mar 86 09:46 EST
From: David C. Plummer <DCP@SCRC-QUABBIN.ARPA>
Subject: Re: The NIC and FTPing host tables
In-Reply-To: The message of 28 Mar 86 15:03-EST from Ra <root%bostonu.csnet@CSNET-RELAY.ARPA>

    Date: Fri, 28 Mar 86 15:03:04 EST
    From: Ra <root%bostonu.csnet@CSNET-RELAY.ARPA>

    I'm new to some of this but wouldn't the obvious thing to do would be
    to put up a difference file (perhaps with a hook in the name so the
    difference from version 701 (eg) to the current were called something
    like HDIFF701.TXT, that is, the string to FTP could be built on the
    fly if you knew your current version.) The difference could then be
    patched in at the local host and away you go, quickly.

    Obviously that implies a few applications but I think they would be
    basically trivial, analogues already exist (eg. UNIX' diff, patch.)
    And for those who slavishly connect anyhow I suppose a format for
    a null patch file could be created. Or is this all a moot issue?

It isn't moot, but it doesn't address a larger issue: the general need
for secondary servers. I agree with the initial message that spawned
this conversation, and I think network databases that have hundreds to
thousands of potential clients should have secondary servers. For that
matter, it would be nice if some secondary server mechanism were in
place. The host table is but one database that needs secondary servers.
The domain system needs them as well. [It may already for all I know;
our current implementation tries to connect to sri-nic in order to find
out that BBNA is the resolver for .BBN.COM. Maybe our implementation
isn't mature enough.] There are at least TWO big wins to secondary
servers: (1) off loading the primary, (2) data availability when the
primary is down (or more generally, multiple data availability).

-------
Date: Mon, 31 Mar 86 07:19 EST
From: JOHNSON%northeastern.csnet@CSNET-RELAY.ARPA
Subject: Concerning unused options fields in IP packet headers.

     I'm just learning tcp/ip myself. In reading rfc 791, (which
may have been replace by now, I don't know yet) I found the following
general statement in section 3.2:

     "In general an [IP] implementation must be conservative in its
sending behavior, and liberal in its receiving behavior."

     Concerning assumption about IP packets this seems like a VERY good
idea. To protect one's own system, one should hope for good input but
expect problems from incoming packets. In general I've found this to be
the best way with most software. Yes it's more work and on some systems
it can be very difficult. However, covering your tail pointer is
usually a good idea. It can be very embarrassing to have your wonderful,
totally cosmic piece of software crash in public. It's even more
embarrassing when a real-time system like tcp/ip goes down because you
drop packets on the floor and then have to get a broom and a bucket to
clean things up. At the speeds of some parts of the network, the mess
can get really big really fast.

     I don't know about anyone else but I don't think I'd assume
anything about what options look like in a packet. We all know what
happens when assumptions are made.

Chris Johnson
Sr System Programmer
Northeastern University
-------
Date: Mon 31 Mar 86 12:37:50-EST
From: "J. Noel Chiappa" <JNC@XX.LCS.MIT.EDU>
Subject: Re: Anybody normally send packets with unused options?

        I guess I have a range of reactions to this. First, if you are
really that concerned about performance, there are tons of ways to
lose, many of them in how you structure the software and the
interchange of data between the network layer and the application.
Space does not permit expanding on this, but see RFC817, in the IP
Implementors Guide (*everyone* building anything should read the set
of memos written by Dave Clark in that volume). I would think that
copying around a few bytes of options is not a big concern.
        Second, although it may be true now that there usually aren't
options, it may not be true in the future, so I wouldn't plan your
performance around that. Third, I wouldn't make the data part of a
record anyway, but consider the data and options as separate
(optional) blocks. Use the pointers and grin.

        Noel
-------
Date: Mon 31 Mar 86 12:47:24-EST
From: "J. Noel Chiappa" <JNC@XX.LCS.MIT.EDU>
Subject: Re: Death to IEN 116
In-Reply-To: Message from "POSTEL@USC-ISIB.ARPA" of Fri 28 Mar 86 11:56:22-EST

        Jon, I think what he was trying to get at was that the full
blown Domain Name thing is a big mechanism, and perhaps what would be
better for small workstations such as PC's is a thing where the PC can
say to some sort of local translation server 'what's the IP address of
XX.LCS.MIT.EDU', instead of doing it all itself. Such an approach
would have the added benefit of less traffic, since there'd be one big
cache instead of lots of little ones. For talking to such a translation
server, a protocol such as IEN116 is perfect. I think some people at
MIT do this already.

        Noel
-------
Date: 31 Mar 1986 12:49:52 PST
From: POSTEL@USC-ISIB.ARPA
Subject: host.txt vs. domain servers
To: tcp-ip@SRI-NIC.ARPA

Rick Adams:

I agree that it is nice to have a way to figure out the real address
when all you've got to go on is a nickname or a partial name. But,
one of these days the host.txt file is going to die. So one of the
things that needs to be done is to educate users to realize that when
they tell other people to send them mail the have to give the full
official name of their host. This is one motivation for using the
full official name of the host on everything the host does. It gets
the users used to seeing it, and using it. Local nicknames can be
nice but they do have some disadvantages. So while using hosts.txt
for figuring out what names should have been used when you get stuck
with a nickname is a neat trick, also realize that it won't work
forever.

--jon.

-------
-------



This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:36:05 GMT