Installing NFS and configuring shares on Debian Wheezy.
Software
Software used in this article:
- Debian Wheezy (for both NFS server and NFS client)
- nfs-kernel-server 1.2.6-4
- rpcbind 0.2.0-8
Before We Begin
The NFS server has the IP address 10.10.1.2 and sits on the 10.10.1.0/24 subnet; the NFS client is on the same subnet. Both machines run Debian Wheezy.
NFS Server Installation
# apt-get install nfs-kernel-server nfs-common rpcbind
To run an NFS server, the rpcbind service must be running first:
# service rpcbind status
[ ok ] rpcbind is running.
If the rpcbind service is running, then the nfs-kernel-server service can be started:
# service nfs-kernel-server start
[ ok ] Exporting directories for NFS kernel daemon....
[ ok ] Starting NFS kernel daemon: nfsd mountd.
If you try to start the NFS service while rpcbind is down, you get a warning:
# service nfs-kernel-server start
[ ok ] Exporting directories for NFS kernel daemon....
[....] Starting NFS kernel daemon: nfsd
[warn] Not starting: portmapper is not running ... (warning).
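The Debian packages register their System V init scripts automatically, so both services should also come up after a reboot. If they ever need to be re-registered, something along these lines should work (a sketch, not part of the original walkthrough):

# update-rc.d rpcbind defaults
# update-rc.d nfs-kernel-server defaults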
Basic NFS Server Configuration
We will create three directories (shares) on the NFS server for different purposes:
- /data/nfs-public – for read-only access.
- /data/nfs-root – for read-write access, preserving root permissions (no_root_squash).
- /data/nfs-no-root – for read-write access, revoking root permissions (root_squash).
Create shares:
# mkdir -p /data/nfs-public /data/nfs-root /data/nfs-no-root
# chown nobody:nogroup /data/nfs-*
# chmod 0755 /data/nfs-*
We will now open the /etc/exports file and add the following lines:
/data/nfs-public 10.10.1.0/24(ro,no_subtree_check,root_squash,all_squash)
/data/nfs-root 10.10.1.0/24(rw,sync,no_subtree_check,no_all_squash,no_root_squash)
/data/nfs-no-root 10.10.1.0/24(rw,sync,no_subtree_check,root_squash,all_squash)
By default, any file request made by user root on a client machine is treated as if it were made by user nobody on the server. This is called squashing, and the “root_squash” option is the default.
If the “no_root_squash” option is selected, root on a client machine will have the same level of access to the files on the system as root on the server.
User/group squashing summary (thanks to http://lpic2.unix.nl/ch10s02.html):
- root_squash (default): all requests by user root on the client will be done as user nobody on the server. This implies, for instance, that user root on the client can only read files on the server that are world readable.
- no_root_squash: all requests as root on the client will be done as root on the server. This is necessary when, for instance, backups are to be made over NFS. This implies that root on the server completely trusts user root on the client.
- all_squash: requests of any user other than root on the client are performed as user nobody on the server. Use this if you cannot map usernames and UID’s easily.
- no_all_squash (default): all requests of a non-root user on the client are attempted as the same user on the server.
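Related to squashing: if squashed requests should map to a specific local account rather than nobody, exports(5) also offers the anonuid and anongid options. A minimal sketch, assuming a hypothetical local user and group with UID/GID 1500 that own the share (not part of the setup below):

/data/nfs-public 10.10.1.0/24(ro,no_subtree_check,all_squash,anonuid=1500,anongid=1500)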
Now if we go back to our configuration, the first line in /etc/exports means that the NFS system allows clients from the 10.10.1.0/24 subnet read-only access to /data/nfs-public, and that reads made by user root on 10.10.1.0/24 (the client) will be done as user nobody. Requests of any user other than root on 10.10.1.0/24 are also performed as user nobody.
The second line in /etc/exports means that the NFS system allows clients from the 10.10.1.0/24 subnet read-write access to /data/nfs-root, and that requests made as root on the client will be done as root on the server. All requests of a non-root user on the client are attempted as the same user on the server (user mapping).
The third line in /etc/exports is basically the same as the first line, but allows read-write access rather than read-only access.
The “sync” option makes the server reply to an NFS request only after all data has been written to disk, and is therefore supposed to prevent data corruption if the server reboots. See http://www.tldp.org/HOWTO/NFS-HOWTO/performance.html#SYNC-ASYNC for a complete discussion of sync and async behaviour.
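For comparison, the opposite of “sync” is “async”: the server may acknowledge writes before they reach the disk, which is faster but risks silent data loss if the server crashes. A sketch of what such an export might look like, using a hypothetical /data/nfs-scratch share (we do not use async in this article):

/data/nfs-scratch 10.10.1.0/24(rw,async,no_subtree_check,root_squash,all_squash)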
The “no_subtree_check” option disables subtree checking, which can cause problems when a requested file is renamed while the client has the file open.
Restart NFS service:
# service nfs-kernel-server restart
[ ok ] Stopping NFS kernel daemon: mountd nfsd.
[ ok ] Unexporting directories for NFS kernel daemon....
[ ok ] Exporting directories for NFS kernel daemon....
[ ok ] Starting NFS kernel daemon: nfsd mountd.
Re-export all directories:
# exportfs -r
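To double-check which options actually ended up applied to each export, the current export table can also be printed verbosely (the exact option list in the output will vary):

# exportfs -v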
Show the NFS server’s export list:
# showmount -e --no-headers
/data/nfs-no-root 10.10.1.0/24
/data/nfs-root    10.10.1.0/24
/data/nfs-public  10.10.1.0/24
Verify that NFS is running:
# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  53926  status
    100024    1   tcp  37301  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  51294  nlockmgr
    100021    3   udp  51294  nlockmgr
    100021    4   udp  51294  nlockmgr
    100021    1   tcp  44429  nlockmgr
    100021    3   tcp  44429  nlockmgr
    100021    4   tcp  44429  nlockmgr
    100005    1   udp  54435  mountd
    100005    1   tcp  42426  mountd
    100005    2   udp  44366  mountd
    100005    2   tcp  52098  mountd
    100005    3   udp  40581  mountd
    100005    3   tcp  47698  mountd
Secure NFS Server
There are at least two ways I can think of to secure NFS access: by using a firewall, or by implementing TCP wrappers (/etc/hosts.allow and /etc/hosts.deny).
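For reference, the TCP wrappers route might look roughly like the following (a sketch only, not used further in this article; note that only the user-space helpers such as rpcbind, mountd and statd consult these files, not the in-kernel nfsd):

/etc/hosts.deny:
rpcbind mountd statd: ALL

/etc/hosts.allow:
rpcbind mountd statd: 10.10.1.0/24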
We’ll secure the NFS server by configuring an iptables firewall and limiting access to the LAN only.
Allow NFS and rpcbind (portmapper) access from 10.10.1.0/24 LAN:
# iptables -A INPUT -s 10.10.1.0/24 -p tcp -m multiport --dports 111,2049 -j ACCEPT
# iptables -A INPUT -s 10.10.1.0/24 -p udp -m multiport --dports 111,2049 -j ACCEPT
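Rules added this way are lost on reboot, so they should be persisted somehow, for example by dumping them to a file and restoring them from a network hook or via the iptables-persistent package (a sketch, assuming /etc/iptables.rules as the file name):

# iptables-save > /etc/iptables.rules
# iptables-restore < /etc/iptables.rules

Also note that, as the rpcinfo output above shows, the NFSv3 helper daemons (mountd, status, nlockmgr) listen on random ports, so they would have to be pinned to fixed ports before a stricter ruleset could cover them as well.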
Configure NFS Client and Mount Shares
Install NFS:
# apt-get install nfs-common rpcbind
Create mount points for NFS shares:
# mkdir /mnt/nfs1 /mnt/nfs2 /mnt/nfs3
Mount NFS shares:
# mount.nfs 10.10.1.2:/data/nfs-public /mnt/nfs1
# mount.nfs 10.10.1.2:/data/nfs-root /mnt/nfs2
# mount.nfs 10.10.1.2:/data/nfs-no-root /mnt/nfs3
Print information about each of the mounted NFS file systems:
# nfsstat -m
/mnt/nfs1 from 10.10.1.2:/data/nfs-public
 Flags: rw,relatime,vers=4,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.10.1.10,minorversion=0,local_lock=none,addr=10.10.1.2

/mnt/nfs2 from 10.10.1.2:/data/nfs-root
 Flags: rw,relatime,vers=4,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.10.1.10,minorversion=0,local_lock=none,addr=10.10.1.2

/mnt/nfs3 from 10.10.1.2:/data/nfs-no-root
 Flags: rw,relatime,vers=4,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.10.1.10,minorversion=0,local_lock=none,addr=10.10.1.2
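The mounts above will not survive a reboot. To make them persistent, entries along these lines could be added to the client’s /etc/fstab (a sketch using the same server and paths as above; adjust the mount options to taste):

10.10.1.2:/data/nfs-public   /mnt/nfs1   nfs   defaults   0   0
10.10.1.2:/data/nfs-root     /mnt/nfs2   nfs   defaults   0   0
10.10.1.2:/data/nfs-no-root  /mnt/nfs3   nfs   defaults   0   0

After editing /etc/fstab, “mount -a” (or mounting each mount point individually) can be used to verify the entries.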
Let us try to create a test file on each share.
# touch /mnt/nfs1/test
touch: cannot touch `/mnt/nfs1/test': Read-only file system
This is an expected behavior, as the share is configured as read-only on the NFS server.
# touch /mnt/nfs2/test
# touch /mnt/nfs3/test
These two went well because the NFS server is configured for read-write access. Now, let us check the file and group ownership on the NFS server:
# ls -l /data/nfs-*
/data/nfs-no-root:
total 0
-rw-r--r-- 1 nobody nogroup 0 Nov 29 19:05 test

/data/nfs-public:
total 0

/data/nfs-root:
total 0
-rw-r--r-- 1 root root 0 Nov 29 19:05 test
As we can see above, the test file in the /data/nfs-no-root folder is owned by nobody, even though we created the file as the root user. This is due to “root_squash”. The test file in the /data/nfs-root folder is owned by root due to “no_root_squash”.
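To see the default “no_all_squash” user mapping in action, the same test can be repeated as an unprivileged user. Here “alice” is a hypothetical account that exists with the same UID on both the client and the server:

On the client:
# su - alice -c 'touch /mnt/nfs2/test-alice'

On the server:
# ls -l /data/nfs-root/test-alice

If the UIDs really do match, the file will show up as owned by alice rather than nobody or root.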
References
http://www.howtoforge.com/install_nfs_server_and_client_on_debian_wheezy