Re: NFS comments


Charles Hedrick (hedrick@topaz.rutgers.edu)
Sat, 20 Dec 86 21:43:28 est


Do you know how random access is done in FTAM? The big problem
seems to be specifying locations in the file. Unix does this by
byte number. That can't work for ISAM files. But if you do it by
record number, you are going to have to count records from the
beginning of the file to make that work on Unix. So at first glance
it would seem that system-independent random access is impossible
unless you force people to conform to one file model. The folks at
Johns Hopkins hospital have a very large multi-vendor distributed
database system. They decided to forget about network file systems
and did it directly on top of RPC. It seems to have worked very well
for them. The idea is that it isn't all that useful to do
cross-system random access anyway. Let the database be managed by a
local process, and have people on other machines make somewhat
high-level requests via RPC. They made RPC work on an incredible
variety of machines, including ones that only understood BASIC, and
only talked on serial lines.

If you restrict your network file system to sequential I/O, and if you
are willing to specify whether you want a file treated as text or
binary, then it is possible to do things across a variety of systems.
The Xerox D-machines implement a transparent network file system using
normal Internet FTP under these constraints. NFS didn't do this
because there is no obvious way to tell in Unix whether a file is
binary or text. There would seem to be basic design issues here, and
I am sceptical about claims that FTAM somehow gets around them. If
you think of the network file system as something external, i.e. if
you don't store all your system files on it, but use it only for
things that the user knows are special, then of course all these
problems go away. You can demand some clue as to whether the file is
binary or text, and you can impose restrictions on what operations are
allowed (e.g. no random access or locations specified only by a "magic
cookie"). But NFS was designed to allow you to completely replace
your disk drive with it, in which case such restrictions are not
acceptable. I'm open to the possibility that one needs two different
kinds of network file system, one which is completely transparent in
terms of the host system's semantics, and the other which makes
whatever compromises are needed to support every possible operating
system. NFS is a compromise between these two, and like all
compromises runs the danger of satisfying no one.

Can anybody tell me how FTAM handles these issues? I don't need the
whole standard, just a brief description of its file model and the
kinds of operations allowed.
