Jeffrey Mogul (email@example.com)
Tue, 26 Apr 88 11:15:11 PDT
Someone asked why one should pay "network charges" if 14 of 15
megabytes are transferred and then the FTP connection aborts.
If we are still asking this question when usage-based charges are
common, we deserve what we get. There is (almost) no reason why this
should be a problem, given some thought to protocol design; in other
words, the problem is in FTP, not in whatever charging scheme.
Consider what happens today when you've just lost an FTP connection
after waiting 3 hours for a large file to ooze across the ARPAnet. Do
you think "wow, what I really want to do is retransfer all those bytes"
or do you think "darn, all I really want is the last 37 bytes"? FTP is
a good protocol for efficient transfers over stable networks, but it's
not particularly useful if the probability that the transfer will never
complete approaches 1.
If our bulk-transfer protocol allowed us to ask for a piece of a file,
rather than the whole thing, life today would be a lot easier (and life
in the pay-per-packet future would be a lot cheaper). A robust FTP
user program might then be able to automatically retry failed transfers
without duplicating data already transferred. I'm not saying that NFS
is perfect, but at least it makes this kind of thing possible.
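[FTP did eventually grow a restart mechanism along these lines: the REST command, later refined for stream-mode transfers in RFC 3659. A minimal sketch of the "retry without re-sending what you already have" idea, using Python's ftplib; the host and file names are made up for illustration:]

```python
# Sketch of a resumable download: use the bytes already on disk
# as the restart point, so a failed transfer only costs the tail.
import os
from ftplib import FTP

def resume_offset(local_path):
    """Bytes already safely on disk; 0 if we have nothing yet."""
    try:
        return os.path.getsize(local_path)
    except OSError:
        return 0

def fetch(host, remote_path, local_path):
    offset = resume_offset(local_path)
    with FTP(host) as ftp, open(local_path, "ab") as out:
        ftp.login()               # anonymous login
        ftp.voidcmd("TYPE I")     # binary mode, required before REST
        # rest=offset makes ftplib send "REST <offset>", so the
        # server skips the part we already transferred.
        ftp.retrbinary("RETR " + remote_path, out.write, rest=offset)
```

So after a dropped connection you just call fetch() again, and only the missing bytes cross the wire.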
The reason why I used the word "almost" in the 2nd paragraph is that
one would have to worry more about locking (in case the file changed
between two partial transfers), and that opens up a hornets' nest ... but
many current systems don't guarantee consistency anyway.
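[One cheap approximation of that locking worry: remember the remote file's size and modification time from the first attempt, and refuse to resume if either has changed. The metadata tuple and helper below are inventions for illustration, not any particular protocol's API:]

```python
# Sketch: resume a partial transfer only if the remote file looks
# unchanged. "Metadata" is whatever the server can report --
# e.g. size and timestamp (FTP's SIZE and MDTM replies).
from typing import NamedTuple, Optional

class RemoteMeta(NamedTuple):
    size: int
    mtime: str   # e.g. an MDTM-style timestamp string

def can_resume(saved: Optional[RemoteMeta], current: RemoteMeta) -> bool:
    """True only if the file appears untouched since the first attempt."""
    return saved is not None and saved == current
```

This is weaker than real locking, of course, but it matches the point above: many systems offer no stronger guarantee anyway.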