David C. Plummer (DCP@SCRC-QUABBIN.ARPA)
Mon, 31 Mar 86 09:46 EST
Date: Fri, 28 Mar 86 15:03:04 EST
From: Ra <root%bostonu.csnet@CSNET-RELAY.ARPA>
I'm new to some of this, but wouldn't the obvious thing be to put up a
difference file? Perhaps with a hook in the name, so that the
difference from version 701 (e.g.) to the current version is called
something like HDIFF701.TXT; that is, the string to FTP could be built
on the fly if you knew your current version. The difference could then
be patched in at the local host and away you go, quickly.
Obviously that implies a few applications, but I think they would be
basically trivial; analogues already exist (e.g., UNIX's diff and patch).
And for those who slavishly connect anyhow, I suppose a format for
a null patch file could be created. Or is this all a moot issue?
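The scheme above can be sketched in a few lines: build the FTP filename
on the fly from the client's current version, and ship only the
difference between two versions of the host table. This is a minimal
sketch, assuming the HDIFF&lt;version&gt;.TXT naming from the message; the
host-table entries are hypothetical, and a unified diff stands in for
whatever format the servers would actually publish.

```python
import difflib

def diff_name(version: int) -> str:
    # Build the FTP filename "on the fly" from the client's current
    # version, per the naming scheme suggested in the message.
    return f"HDIFF{version}.TXT"

# Hypothetical host-table excerpts; real HOSTS.TXT entries differ.
old_table = [
    "HOST : 10.0.0.1 : ALPHA :\n",
    "HOST : 10.0.0.2 : BETA :\n",
]
new_table = [
    "HOST : 10.0.0.1 : ALPHA :\n",
    "HOST : 10.0.0.2 : BETA :\n",
    "HOST : 10.0.0.3 : GAMMA :\n",
]

# The server publishes this small difference file instead of the full
# table; the client patches it in locally.
patch = list(difflib.unified_diff(old_table, new_table,
                                  fromfile="HOSTS.701",
                                  tofile="HOSTS.702"))

print(diff_name(701))
print("".join(patch), end="")
```

For a large table that changes a little between versions, the patch is a
small fraction of the full file, which is the whole point of the proposal.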
It isn't moot, but it doesn't address a larger issue: the general need
for secondary servers. I agree with the initial message that spawned
this conversation, and I think network databases that have hundreds to
thousands of potential clients should have secondary servers. For that
matter, it would be nice if some secondary server mechanism were in
place. The host table is but one database that needs secondary servers.
The domain system needs them as well. [It may already have them, for
all I know; our current implementation tries to connect to sri-nic in
order to find out that BBNA is the resolver for .BBN.COM. Maybe our
implementation isn't mature enough.] There are at least TWO big wins to
secondary servers: (1) offloading the primary, and (2) data availability
when the primary is down (or, more generally, multiple data availability).
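The second win, availability, amounts to a simple fallback loop on the
client side: try the primary, and only if it is unreachable move on to a
secondary. A minimal sketch, assuming a generic `get` transport and a
hypothetical secondary host name; the stub below simulates the primary
being down.

```python
def fetch(servers, get):
    """Try each server in order; the primary is listed first, and
    secondaries take over when it is unreachable."""
    last_error = None
    for host in servers:
        try:
            return get(host)
        except ConnectionError as err:
            last_error = err
    raise ConnectionError("no server reachable") from last_error

# Usage sketch with a stub transport: the primary (sri-nic here) is
# "down", so the client falls back to a hypothetical secondary.
def stub_get(host):
    if host == "sri-nic":
        raise ConnectionError("primary down")
    return f"host table from {host}"

print(fetch(["sri-nic", "secondary-1"], stub_get))
```

The same loop also gives win (1) for free if clients randomize the order
of the server list, spreading load off the primary.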
This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:36:05 GMT