Re: Networks & vendor upgrades/fixes


James B. VanBokkelen (JBVB@AI.AI.MIT.EDU)
Fri, 20 Nov 87 23:03:52 EST


    ...
    I offer my speculations below and would like you to confirm/revise.

    Source Code: So that an organization can fix/improve the code
    themselves, or by a third party, and not have to depend on the original
    vendor. Does your recommendation change if software maintenance is in
    effect?

Depends on the situation. A university I know of has a couple hundred PCs
off-line, waiting for software maintainers at one of two vendors to solve a
TCP problem. Everybody is acting in good faith, and both sides are partly at
fault. One vendor has delivered what they think is a fix, but it took four
months. The other vendor had a new release, and that had to be tested to
see if the problem went away, etc., etc.

There are dozens of Unix vendors who are shipping 4.2's buggy, non-standard
TFTP. How long will it be till they supply nameservers? Consider the vendors
who spent years getting subnets working. Some of these long-standing problems
may be helped by a host version of the "how to be a good gateway" RFC (1009?),
and by the Non-Interoperability questionnaire which is supposed to get some
attention at the conference in December, but I can't predict anything.
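
For a concrete taste of the breakage (hedged: this is one widely-cited
TFTP flaw from the 4.2 era, not necessarily the specific bug meant above),
consider the "Sorcerer's Apprentice" problem: a sender that answers every
ACK, duplicates included, with DATA lets a single delayed packet breed an
unbounded storm of duplicates. A sketch of the sender-side cure, with
illustrative names of my own:

/*
 * Sketch of the sender-side fix for TFTP's "Sorcerer's Apprentice"
 * bug.  The rule: only a *new* ACK advances the transfer and
 * elicits DATA; duplicates are ignored, and retransmission happens
 * solely from the timeout path.  Answering duplicate ACKs with
 * DATA is exactly the bug.
 */
#include <stdio.h>

static unsigned short last_acked;       /* highest block ACKed so far */

/* Illustrative stand-in for the UDP send of a DATA packet. */
static void
send_data_block(unsigned short block)
{
    printf("send DATA block %u\n", (unsigned)block);
}

static void
handle_ack(unsigned short block)
{
    if (block != (unsigned short)(last_acked + 1))
        return;                         /* duplicate or stray: ignore */
    last_acked = block;
    send_data_block(last_acked + 1);    /* only new ACKs elicit DATA */
}

int
main(void)
{
    send_data_block(1);                 /* start the transfer */
    handle_ack(1);                      /* new ACK: block 2 goes out */
    handle_ack(1);                      /* duplicate ACK: silence */
    return 0;
}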

    Rigid Version Control: This has more to do with the organization's
    network than with vendors. The organization doesn't want two different
    versions of the same process to be in use.

Correct. There is also an element of "we don't want anyone to modify it".
Identifying which version is in use is simply one more step in identifying
a problem, whether or not it has been encountered before.

    Homogeneous Hardware: This one troubles me - I would like to think that
    through the use of standard protocols an organization could achieve
    interoperability ...

It is generally true that, via TCP/IP, two different hosts can communicate.
Significant TCP bugs are getting pretty rare, although interpretations
of things like the urgent pointer differ. Except for the Unix XPWD incompatibility,
FTP does pretty well. SMTP has one widespread problem, discussed last week.
Most Telnets interoperate well, until you start playing with options, or
wondering why you are receiving parity (excuse my hobbyhorse).
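
To make the parity hobbyhorse concrete, here is a hypothetical C sketch
(names and structure are my own, not anyone's shipping Telnet) of what a
tolerant client can do when 8th-bit parity arrives on a connection that
has not negotiated TRANSMIT-BINARY:

/*
 * Until TRANSMIT-BINARY is negotiated, NVT data is 7-bit ASCII, so
 * the high (parity) bit can be masked off.  IAC escapes must be
 * recognized before masking; a real client parses the whole command
 * sequence rather than skipping a single byte as done here.
 */
#include <stdio.h>
#include <stddef.h>

#define IAC 255                 /* Telnet "interpret as command" */

static int binary_mode = 0;     /* 1 once TRANSMIT-BINARY is agreed */

static void
strip_parity(unsigned char *buf, size_t len)
{
    size_t i;

    if (binary_mode)
        return;                 /* 8-bit path: leave data alone */
    for (i = 0; i < len; i++) {
        if (buf[i] == IAC) {
            i++;                /* leave the command byte intact */
            continue;
        }
        buf[i] &= 0x7f;         /* drop the parity bit */
    }
}

int
main(void)
{
    unsigned char data[] = { 'H' | 0x80, 'i' | 0x80, '\r', '\n' };

    strip_parity(data, sizeof data);
    fwrite(data, 1, sizeof data, stdout);   /* prints "Hi" + CRLF */
    return 0;
}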

However, this is fairly nuts-and-bolts compared to what you can do between
homogeneous systems. There are a number of essentially Unix-Unix protocols
that add functionality to the basic ARPA suite, but they aren't well
documented, and non-Unix implementations are extremely rare. Network
printing, to take a prosaic example.
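
To make the printing example concrete: the BSD lpd convention (TCP port
515, circulating in BSD sources rather than in any spec) is a one-byte
command code plus a queue name. A hedged sketch follows; the queue name
"lp" is an assumption, and real lpds usually also insist the client bind
a privileged source port, which this omits:

/*
 * Ask a BSD-style lpd for a short queue listing: command byte 3,
 * then the queue name, then LF.  Error handling kept minimal.
 */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int
main(void)
{
    struct sockaddr_in sin;
    char buf[512];
    ssize_t n;
    int s = socket(AF_INET, SOCK_STREAM, 0);

    if (s < 0)
        return 1;
    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(515);                  /* lpd's port */
    sin.sin_addr.s_addr = inet_addr("127.0.0.1");
    if (connect(s, (struct sockaddr *)&sin, sizeof sin) < 0)
        return 1;
    write(s, "\3lp\n", 4);                      /* short queue state */
    while ((n = read(s, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(s);
    return 0;
}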

Furthermore, most organizations have finite numbers of maintainers. For
any number of maintainers, it appears that the tinkerers employed by
the vendors can tinker at least as fast as the maintainers can
learn/code/upgrade. The more vendors, the more high-level people you
need, and high-level techies are the non-technical manager's nightmare.
Whole sectors of the software industry are founded on the absolute
unwillingness of many companies to permanently employ any techies more
qualified than a computer operator or an application-program maintainer.
DEC has been moving VMS in their direction for years but few vendors
have even begun on "Unix for the masses", and that represents more than
half (vigorous hand-waving) of all TCP/IP hosts.

Anyone who's ever hacked on sendmail.cf knows what a vale of tears a
complex software system whose implementer is unavailable can be. So,
the managers have mostly learned to keep things simple, generic where
possible, but ultimately maintainable. The net I described in my first
posting is still usable, so the maintainers and their managers are
sticking to the problems they can solve, and relying on the influx of
new, uniform machines to eventually eliminate the off-maintenance bad
actors. I'm not saying I prefer this, but it looks pretty likely.

    Regards - Craig

jbvb


