# Shadow password support
shadow: compat
If you use shadow passwords along with NIS, you must try to maintain some security by restricting access to your NIS database. See "NIS Server Security" earlier in this chapter.
# Chapter 14. The Network File System
The Network File System (NFS) is probably the most prominent network service using RPC. It allows you to access files on remote hosts in exactly the same way you would access local files. A mixture of kernel support and user-space daemons on the client side, along with an NFS server on the server side, makes this possible. This file access is completely transparent to the client and works across a variety of server and host architectures.
NFS offers a number of useful features:
· Data accessed by all users can be kept on a central host, with clients mounting this directory at boot time. For example, you can keep all user accounts on one host and have all hosts on your network mount /home from that host. If NFS is installed beside NIS, users can log into any system and still work on one set of files.
· Data consuming large amounts of disk space can be kept on a single host. For example, all files and programs relating to LaTeX and METAFONT can be kept and maintained in one place.
· Administrative data can be kept on a single host. There is no need to use rcp to install the same stupid file on 20 different machines.
It's not too hard to set up basic NFS operation on both the client and server; this chapter tells you how.
Linux NFS is largely the work of Rick Sladkey, who wrote the NFS kernel code and large parts of the NFS server (Rick can be reached at jrs@world.std.com). The server is derived from the unfsd user-space NFS server originally written by Mark Shand, and from the hnfs Harris NFS server written by Donald Becker.
Let's have a look at how NFS works. First, a client tries to mount a directory from a remote host on a local directory just the same way it does a physical device. However, the syntax used to specify the remote directory is different. For example, to mount /home from host vlager to /users on vale, the administrator issues the following command on vale (you can actually omit the -t nfs argument, because mount sees from the colon that this specifies an NFS volume):
# mount -t nfs vlager:/home /users
mount will try to connect to the rpc.mountd mount daemon on vlager via RPC. The server will check if vale is permitted to mount the directory in question, and if so, return it a file handle. This file handle will be used in all subsequent requests to files below /users.
When someone accesses a file over NFS, the kernel places an RPC call to rpc.nfsd (the NFS daemon) on the server machine. This call takes the file handle, the name of the file to be accessed, and the user and group IDs of the user as parameters. These are used in determining access rights to the specified file. In order to prevent unauthorized users from reading or modifying files, user and group IDs must be the same on both hosts.
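You can check that both services are registered with the server's portmapper by querying it with rpcinfo. The output below is only a sketch; version numbers and port assignments will differ on your system, but entries for mountd and nfs should be present:
$ rpcinfo -p vlager
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp    635  mountd
    100003    2   udp   2049  nfs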
On most Unix implementations, the NFS functionality of both client and server is implemented as kernel-level daemons that are started from user space at system boot. These are the NFS Daemon (rpc.nfsd) on the server host, and the Block I/O Daemon (biod) on the client host. To improve throughput, biod performs asynchronous I/O using read-ahead and write-behind; also, several rpc.nfsd daemons are usually run concurrently.
The current Linux NFS implementation is a little different from classic NFS in that the server code runs entirely in user space, so running multiple copies simultaneously is more complicated. The current rpc.nfsd implementation offers an experimental feature providing limited support for multiple servers. Olaf Kirch developed kernel-based NFS server support that is featured in Version 2.2 Linux kernels, and its performance is significantly better than that of the user-space implementation. We'll describe it later in this chapter.
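With the kernel-based server, running several server instances amounts to starting several threads, and the thread count is simply given as an argument to rpc.nfsd. This is a sketch assuming the nfs-utils version of the tool; consult the rpc.nfsd(8) manual page shipped with your system:
# rpc.nfsd 8
This starts eight NFS server threads, a common starting point that you can tune up or down depending on load.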
Before you can use NFS, be it as server or client, you must make sure your kernel has NFS support compiled in. Newer kernels have a simple interface on the proc filesystem for this, the /proc/filesystems file, which you can display using cat:
$ cat /proc/filesystems
minix
ext2
msdos
nodev proc
nodev nfs
If nfs is missing from this list, you have to compile your own kernel with NFS enabled, or load the NFS kernel module if your NFS support was built as a module. Configuring the kernel network options is explained in the "Kernel Configuration" section of Chapter 3, Configuring the Networking Hardware.
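If your NFS support was built as a module, it may simply not be loaded yet. You can try loading it by hand and then check /proc/filesystems again; nfs is the usual module name, but this is an assumption that may not hold for every distribution:
# modprobe nfs
# grep nfs /proc/filesystems
nodev nfs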
The mounting of NFS volumes (one doesn't say filesystems, because these are not proper filesystems) closely resembles that of regular filesystems. Invoke mount using the following syntax:
# mount -t nfs nfs_volume local_dir -o options
nfs_volume is given as remote_host:remote_dir. Since this notation is unique to NFS filesystems, you can leave out the -t nfs option.
There are a number of additional options that you can specify to mount upon mounting an NFS volume. These may be given either following the -o switch on the command line or in the options field of the /etc/fstab entry for the volume. In both cases, multiple options are separated by commas and must not contain any whitespace characters. Options specified on the command line always override those given in the fstab file.
Here is a sample entry from /etc/fstab:
# volume mount point type options
news:/var/spool/news /var/spool/news nfs timeo=14,intr
This volume can then be mounted using this command:
# mount news:/var/spool/news
In the absence of an fstab entry, NFS mount invocations look a lot uglier. For instance, suppose you mount your users' home directories from a machine named moonshot, which uses a default block size of 4 K for read/write operations. You might increase the block size to 8 K to obtain better performance by issuing the command:
# mount moonshot:/home /home -o rsize=8192,wsize=8192
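If you want this mount performed automatically at boot time, the equivalent /etc/fstab entry would look something like the following sketch, reusing the hypothetical moonshot host from above:
# volume mount point type options
moonshot:/home /home nfs rsize=8192,wsize=8192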
The complete list of valid options is described in the nfs(5) manual page. The following is a partial list of options you would probably want to use; an example combining several of them follows the list.
rsize=n and wsize=n
These specify the datagram size used by the NFS clients on read and write requests, respectively. The default depends on the kernel version, but is normally 1,024 bytes.
timeo=n
This sets the time (in tenths of a second) the NFS client will wait for a request to complete. The default value is 7 (0.7 seconds). What happens after a timeout depends on whether you use the hard or soft option.
hard
Explicitly mark this volume as hard-mounted. This is on by default. When a major timeout occurs, a message is reported on the client's console and the client continues trying indefinitely.
soft
Soft-mount (as opposed to hard-mount) the volume. This option causes an I/O error to be reported to the process attempting a file operation when a major timeout occurs.
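Putting several of these options together, a sensible fstab entry for an interactively used volume might combine larger block sizes with a hard, interruptible mount. The values below are illustrative, not tuned recommendations; intr, which allows signals to interrupt a stuck NFS request, is covered in nfs(5):
# volume mount point type options
moonshot:/home /home nfs rsize=8192,wsize=8192,hard,intr,timeo=14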