Barry Shein (email@example.com)
Sun, 29 Mar 87 18:31:15 EST
Your questions mostly focus on design trade-offs.
Trying to get the data as close as possible to the destination node and then
spooling may not be definable in many cases. What does "closer" mean
when routing can change? The very notion implies that a route
calculated at transmission time remains static (perhaps for days) as
the file moves from host to host through the network. That is not
always a reasonable assumption.
I do think a Batch FTP is a fine idea; many people accomplish this
anyhow by backgrounding transfer jobs, others by sending files through
the mail networks.
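The "backgrounding" trick generalizes to a small user-level spool: queue the requests, retry while the remote host is down, and only give up after a few attempts. A minimal sketch of that idea follows; the transfer function, job format, and retry limit are all my own invention for illustration, not any existing protocol's.

```python
# Sketch of a user-level batch transfer queue. The transfer() callable
# is hypothetical; assume it raises ConnectionError while the remote
# host is unreachable.
import collections

def run_queue(jobs, transfer, max_attempts=3):
    """Retry each queued (host, path) job until it succeeds or gives up."""
    queue = collections.deque(jobs)
    done, failed = [], []
    attempts = collections.Counter()
    while queue:
        job = queue.popleft()
        try:
            transfer(*job)           # e.g. open a session and send the file
            done.append(job)
        except ConnectionError:
            attempts[job] += 1
            if attempts[job] < max_attempts:
                queue.append(job)    # host down: requeue instead of failing
            else:
                failed.append(job)   # report to the user, not silently drop
    return done, failed
```

The point of the sketch is that the human walks away after enqueueing; the queue, not the person, waits out the slow link or the down host.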
One major issue, of course, is security. FTP is designed to demand a
password before a file transfer begins. I'm not sure how this would be
accomplished in batch mode without sending passwords around the net
(store and forward exacerbates the problem, as passwords may sit on
intermediate hosts for long periods of time). Bitnet basically takes
the attitude that file transfer is similar to mail: files are simply
put on a spool for the target user, and no password is required to
send a file. How *does* a user fetch a file within the Bitnet sendfile
paradigm, other than from specialized servers for certain files?
Another problem with S&F networks at the higher (that is, file or mail
message) level is the failure modes they present. Clogging of the
disks of intermediary hosts receiving these files becomes a problem (I
believe WISCVM fell as much as three weeks behind in file transfers
recently due to this type of clogging? Correct me if I'm wrong; I mean
no malice). Packets are much more ephemeral things. There is also, of
course, the problem of getting an error or success indication back to
the originator from an intermediate node that decides the file's fate
has been sealed.
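The status-reporting problem above can be made concrete with a small sketch of the record an intermediate host might send back to the originator; every field name and the message format here are invented for illustration, not drawn from any real protocol.

```python
# Hypothetical delivery-status record that a store-and-forward
# intermediary could return to the originating host. All fields
# are invented for this sketch.
from dataclasses import dataclass

@dataclass
class TransferStatus:
    file_id: str          # originator's identifier for the queued file
    reporting_host: str   # the intermediate node sealing the file's fate
    delivered: bool
    detail: str

    def notice(self):
        """One-line notice suitable for mailing back to the originator."""
        verdict = "delivered" if self.delivered else "failed"
        return f"{self.file_id}: {verdict} at {self.reporting_host} ({self.detail})"
```

Even this trivial record exposes the design question: the originator must be reachable later, so the status report itself needs store-and-forward delivery.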
For another extreme, try MAIL-11 on VMS in a DECNET network. You can't
send a message unless it is immediately deliverable; it is very
annoying to finish a message (typically a reply) and have it
immediately rejected because the destination host happens to be down
at the moment. This is a place where some kind of store and forward
seems essential; I hate "busy signals" in mail networks.
I think my conclusion is that it would be interesting to see someone
propose and design a store-and-forward file transfer protocol and get
it accepted by the community. There are several design problems, but I
don't think the concept is flawed: why have a human sit and wait on a
slow link? Or be unable to initiate a transfer just because a host is
down at the moment? I don't think, however, that Bitnet is a great
model for this at all layers, if for no other reason than that
Bitnet's routing is so well defined (where to store and where to
forward), while in the Internet things get a bit more nebulous. But,
hey, that's what the designer will have to solve.
-Barry Shein, Boston University
This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:37:46 GMT