Re: RFS vs NFS


Martin McKendry (amdcad!amd!intelca!oliveb!felix!martin@ucbvax.Berkeley.EDU)
14 Aug 87 21:54:22 GMT


I wrote
>
> Actually, I like to think of it as RFS using volatile state, and NFS
> using permanent state (i.e., disk). In order to permit the simple
> recovery that NFS aspires to, you have to store data on disk that
> will survive a failure so that a client can do correct cache
> invalidation. This NFS does, and it completes all writes prior to
> acknowledging a write to a client. In view of these requirements,
> I feel that the term "stateless" without qualification is incorrect.
>

Jim Rees (rees@apollo.uucp) responded:

>Well... I guess so, but if you take that interpretation, you could also
>argue that no server is ever stateless, because the data in the files
>represents state, too.

My comments were probably less than precise. I'm actually getting at
terminology and commonly drawn inferences. I would be happy to go with
the position that no server is ever stateless. After all, state is what
consistency is all about, and consistency is a (if not _the_) major
problem in a distributed file system. I just object to the terminology:
a "stateless" file system sounds like one that doesn't store anything!
I also object to the common perception that "stateless" is somehow "better".
Consider what you can do if you are not "stateless":

Our FileNet filesystem is "stateful"; we maintain a list with each inode of
who is using it. This allowed us to implement a fairly sophisticated
caching mechanism. The mechanism allows us to cache both data blocks
and inodes -- opens, closes, reads, and some writes can happen without
any communication between a client and the server storing a file. Basically,
the site that stores a file notifies clients if it changes. If a file
changes a lot (relative to the number of read accesses), we switch
modes: all sites using the file are notified, and thereafter
clients take responsibility for asking the server if there have been changes.
Our experience so far (we are currently in quality assurance) is that
this mode switch is rarely needed. On our basic applications
we are seeing hit ratios on the inode cache of 75-85% for opens. Only
closes that change a file have to contact the server -- over 98% don't.
Total message traffic due to the file system is down over 50% in many cases.
I'd rather use a standard filesystem protocol and mechanism (I have spent
considerable time trying to do this). However, we are building a tightly-coupled
system with a very centralized view. Sophisticated caching is
essential to supporting large numbers of workstations. You can't do that
if you don't maintain some kind of "connection state" that tells a server
who has a file cached. So "statelessness" is not a universal panacea.
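A minimal sketch of the scheme described above (not FileNet's actual code; all names are illustrative): the server keeps, per inode, the set of clients caching it, pushes invalidations to them on change, and can switch a hot file into a polling mode where clients ask instead.

```python
# Sketch of a "stateful" server: per-inode connection state used for
# cache invalidation.  Illustrative only; not the FileNet protocol.

class StatefulServer:
    def __init__(self):
        self.cachers = {}    # inode -> set of client ids caching it
        self.push_mode = {}  # inode -> True: server notifies; False: clients poll
        self.notified = []   # record of invalidation messages "sent"

    def open(self, inode, client):
        # Connection state: remember who is using this inode.
        self.cachers.setdefault(inode, set()).add(client)
        self.push_mode.setdefault(inode, True)

    def write(self, inode, writer):
        if self.push_mode[inode]:
            # Push mode: the site storing the file notifies the
            # other caching clients that it changed.
            for c in self.cachers.get(inode, set()) - {writer}:
                self.notified.append((c, inode))

    def switch_to_polling(self, inode):
        # File changes a lot: notify all sites using it, then make
        # clients responsible for asking whether it has changed.
        for c in self.cachers.get(inode, set()):
            self.notified.append((c, inode))
        self.push_mode[inode] = False
```

In push mode a read-mostly file costs no messages at all on opens and reads; the switch caps the notification traffic when writes dominate.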

Nonetheless, there are many people around (FileNet included) who believe
that there is something inherently 'better' about being stateless
(as defined by Sun). But there is no free lunch. The price of Sun's
stateless approach is that preservation of consistency of the file system
requires that clients interrogate the server to confirm currency of cached
entries, *whether or not those entries have changed*. This slows down
the client and it loads the server, thus contributing to the limits on its
capacity. It may not worry you now, but when you have much larger
workstation memories, and you have all the data you need locally, you are
not going to be pleased at constantly having to interrogate a server
before you can use the data. I'd like to see more understanding of these
issues. There are many advantages to Sun's approach. There are also
limitations. Like not being able to implement exclusive locking in the
file system. (Of course, the v-node mechanism reduces this need, but that
moves work to the server, which is already overloaded...etc...)
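The cost described above can be sketched as follows, under NFS-like assumptions: since the server keeps no record of who caches what, the client must revalidate its cache (here by comparing modify times) on every use, paying a server round trip whether or not the data changed. Class and method names are hypothetical.

```python
# Sketch of the stateless-server cost: the client interrogates the
# server before every use of cached data.  Names are illustrative.

class StubServer:
    """Trivial stand-in for a stateless file server."""
    def __init__(self):
        self.files = {}   # path -> (mtime, data)

    def getattr(self, path):
        return self.files[path][0]

    def read(self, path):
        return self.files[path][1]

class StatelessClient:
    def __init__(self, server):
        self.server = server
        self.cache = {}   # path -> (mtime, data)
        self.server_round_trips = 0

    def read(self, path):
        mtime = self.server.getattr(path)      # one round trip, every time
        self.server_round_trips += 1
        if path in self.cache and self.cache[path][0] == mtime:
            return self.cache[path][1]         # cache still current
        data = self.server.read(path)          # refetch changed data
        self.server_round_trips += 1
        self.cache[path] = (mtime, data)
        return data
```

Even a 100% hit rate still costs one getattr round trip per access; with server-kept state, those round trips disappear entirely for unchanged files.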

Jim Rees (rees@apollo.uucp) also said:
> In this context, when I say "server state," I
>mean some information held by the server about connections with the
>clients. The server "knows" that node X has file "foo" open for reading.
>NFS does not keep this kind of information around, not even on disk,
>except maybe as an optimization.

The main reasons we keep this data at the server are a) for cache
invalidation, and b) to manage exclusive locking. Sun let the
client worry about invalidation, and they don't do exclusive locking.
Given that the semantics of NFS don't require it to be "stateful",
I don't see why being "stateless" gets such a billing as a wonderful
feature. I don't know if RFS supports exclusive locking. Their
caching is pretty disgusting, but it does require that the server
contact clients.
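Point (b) falls out almost for free once the server keeps per-file state. A sketch, with hypothetical names (not any real RFS or NFS locking API): grant an exclusive lock only when no other client holds one, using the same kind of per-inode record the server already maintains.

```python
# Sketch of exclusive locking built on per-file server state.
# Illustrative only; a stateless server cannot do this, because it
# would have no record of who holds the lock.

class LockServer:
    def __init__(self):
        self.holder = {}   # inode -> client currently holding the lock

    def lock(self, inode, client):
        if self.holder.get(inode) not in (None, client):
            return False           # some other client holds it
        self.holder[inode] = client
        return True

    def unlock(self, inode, client):
        if self.holder.get(inode) == client:
            del self.holder[inode]
```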

Monkey Face (monkey@unixprt.UUCP) also responded:

>The "stateless" term does not really refer to the 'write through'
>characteristic of NFS, but to the fact that the 'server' does not
>maintain 'state' related to currently opened files by 'clients'.

From the Usenix (86?) paper on NFS:
    "Because the NFS is stateless, when servicing an NFS request
     it must commit any modified data to stable storage before
     returning results."

Here, they are including as one of the properties of statelessness
the ease with which a client can recover from a server failure.
If you don't feel that this is a required property (e.g., the
client could detect it and do clean up), then your comment is
correct. This property and the lack of per-client data stored
at the server do seem to be separable issues. We don't do
write-through on our system, but conceivably we could do it
to simplify client recovery even though we are not "stateless".
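The write-through behaviour the quoted paper describes can be sketched like this, assuming a POSIX server: the server forces modified data to stable storage (fsync) before returning, so a server crash can never lose an acknowledged write and the client needs no recovery state. The function name and framing are illustrative.

```python
# Sketch of commit-before-acknowledge: the reply to a write goes out
# only after the data is on stable storage.  POSIX assumed.

import os
import tempfile

def write_through(path, offset, data):
    """Handle one write request; return (acknowledge) only after
    the data has been committed to disk."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.pwrite(fd, data, offset)
        os.fsync(fd)      # commit to stable storage *before* replying
    finally:
        os.close(fd)
    return len(data)      # the acknowledgement, sent only after fsync
```

The fsync on every write is exactly the price: it simplifies client recovery at the cost of synchronous disk traffic, which is why a "stateful" design can choose to skip it.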

These issues are complex, and the tradeoffs are too. I'm not
claiming that there are any universal answers. But I'm interested
in discussing the tradeoffs.

--
Martin S. McKendry;	FileNet	Corp	{hplabs,trwrb}!felix!martin
Get in on the ground floor: Buy	FileNet	stock now!



This archive was generated by hypermail 2.0b3 on Thu Mar 09 2000 - 14:38:49 GMT