NFS (Network File System) is a distributed file system protocol developed by Sun Microsystems Inc. (now Oracle Corporation) that allows files to be shared across a network with connected subscribers (clients). It was developed initially on UNIX platforms and has since spread to Linux operating environments. In the Microsoft Windows world, the closest equivalent is a shared folder, but NFS arguably has more security options to offer.
Most NFS implementations today are on the Linux platform, and NFS is commonly found on shareable storage devices such as NAS appliances. Since Linux is open source, volunteer development to support the various versions of NFS is ongoing.
On selecting the NFS communication protocol, you have two choices:
NFS on UDP
If you have a highly reliable network, then this lightweight "connectionless" protocol is sufficient for NFS. Since UDP does not require a handshake to connect to clients, delivery of network packets is much faster. I would probably use this protocol only if the NFS server and its clients are on the same network switch, to minimize the risk of data loss.
Suggested tuning setup:
Set your network interface MTU to at least the value of rsize and wsize (usually 8 KB). Use the ifconfig utility to reset the MTU.
Often, the default MTU size is 1500 bytes. Since most filesystem setups use 8 KB blocks, the minimum optimal rsize and wsize exceed the default network packet size and thus introduce IP packet fragmentation. Adjusting the MTU to at least match your rsize and wsize is therefore recommended for optimal packet transmission.
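For example, a minimal sketch on a Linux client (the interface name eth0, the 8 KB transfer size, and the mount point /mnt/nfs are assumptions; your NIC and switch must support frames this large):
# ifconfig eth0 mtu 8192
# mount -t nfs -o proto=udp,rsize=8192,wsize=8192 atlanticlinux:/dsk0/nfs /mnt/nfs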
NFS on TCP
TCP is a protocol that focuses on reliability of data transmission. It is capable of doing retransmission of dropped packets to ensure 100% delivery of data. It also employs a connection handshake with the client before a session begins. There is some performance overhead in doing all these actions, but it is less noticeable when both the client and the server are not too far apart.
I recommend using TCP for your NFS implementation to avoid data integrity issues. But since TCP is not a "stateless" protocol like UDP, if the NFS server crashes in the midst of packet transmission, the connected clients may hang and the shared file system may need to be remounted.
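A minimal sketch of the corresponding TCP mount on a Linux client (/mnt/nfs is again an assumption; hard and intr are common companion options, not requirements):
# mount -t nfs -o proto=tcp,hard,intr atlanticlinux:/dsk0/nfs /mnt/nfs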
Below is my quick reference guide for setting up an NFS server on Linux with client subscribers from various operating environments. My first heterogeneous example is a Solaris UNIX client connecting to a Linux NFS server.
@Atlanticlinux server (my designated NFS server),
Create a group (fshare) that will own the NFS shared files.
# groupadd -g 1000 fshare
Create your directory to be shared across network.
# mkdir -p /dsk0/nfs
# chgrp fshare /dsk0/nfs
# chmod -R 777 /dsk0/nfs
# chmod g+s /dsk0/nfs
Edit /etc/exports to add your shared directory:
/dsk0/nfs *(rw,sync)
where * = applies to all client IP addresses
You can optionally enter explicit hostnames or IPs instead of the wildcard '*', like:
/usr/local 192.168.0.1(ro) 192.168.0.2(ro)
/dsk0/nfs 172.16.33.126(rw,sync) 172.16.33.127(rw,sync)
where ro = read only
rw = read write
sync = ensure i/o is committed to disk before sending the ack to the client (older nfs-utils releases defaulted to async; current releases default to sync)
Or, for a large-scale implementation on a local area network, use a subnet to cover a range of IPs:
/usr/local 192.168.0.0/255.255.255.0(ro)
/dsk0/nfs 172.16.0.0/255.255.255.0(rw,sync)
Or, use wildcards with partial names:
/usr/local *.mydomain.com(ro)
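However you scope the entries, once the NFS service is running you can verify the effective export options (a quick check; output format varies by nfs-utils version):
exportfs -v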
In Linux, the NFS-related ports are dynamically allocated by default. To ensure they use specific ports, edit "/etc/sysconfig/nfs" to uncomment:
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
Also, for multi-version control, the same file lets you disable specific NFS protocol versions served by nfsd; for example, uncomment the following to turn off v4:
# Turn off v4 protocol support
RPCNFSDARGS="-N 4"
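After the NFS service is (re)started, you can optionally confirm which protocol versions nfsd is serving; this path assumes a stock Red Hat-style system with the nfsd filesystem mounted:
# cat /proc/fs/nfsd/versions
+2 +3 -4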
Modify the Linux firewall to include the above fixed ports, plus the nfs port 2049 and the rpcbind (portmap) port 111, in the INPUT chain:
iptables -A INPUT -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -p udp --dport 2049 -j ACCEPT
iptables -A INPUT -p tcp --dport 875 -j ACCEPT
iptables -A INPUT -p udp --dport 875 -j ACCEPT
iptables -A INPUT -p tcp --dport 892 -j ACCEPT
iptables -A INPUT -p udp --dport 892 -j ACCEPT
iptables -A INPUT -p tcp --dport 662 -j ACCEPT
iptables -A INPUT -p udp --dport 662 -j ACCEPT
iptables -A INPUT -p tcp --dport 32803 -j ACCEPT
iptables -A INPUT -p udp --dport 32769 -j ACCEPT
NOTE: Rule and policy definitions take effect immediately. To persist across reboots, the modified policies must be saved to the "/etc/sysconfig/iptables" file.
To save your changes:
[root@atlanticlinux sysconfig]# service iptables save
Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
[root@atlanticlinux sysconfig]#
Alternatively, you can edit the iptables file directly (it will also include FORWARD definitions if your box acts as a router). You may enter the set values like this:
[root@atlanticlinux sysconfig]# vi iptables
# Generated by iptables-save on Fri May 24 17:30:10 2013
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [8502:8898400]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 662 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 662 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 32803 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 32769 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Fri May 24 17:30:10 2013
[root@atlanticlinux sysconfig]#
If NFS is already running, restart it:
[root@atlanticlinux sysconfig]# /etc/init.d/portmap restart
Stopping portmap: [ OK ]
Starting portmap: [ OK ]
[root@atlanticlinux sysconfig]# /sbin/service nfs restart
Shutting down NFS mountd: [ OK ]
Shutting down NFS daemon: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]
[root@atlanticlinux sysconfig]#
If NFS is not yet running, enable and start it:
[root@atlanticlinux etc]# /sbin/chkconfig nfs on
[root@atlanticlinux etc]# /sbin/service nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]
[root@atlanticlinux etc]#
To reload the exports (if NFS is already running):
exportfs -ra
To list the exported (shared) file systems:
[root@atlanticlinux etc]# showmount -e
Export list for atlanticlinux:
/dsk0/nfs 172.16.33.127,172.16.33.126
[root@atlanticlinux etc]#
To verify that the NFS RPC-based services are registered with the portmapper, run:
# rpcinfo -p
The output should list the processes for both tcp and udp (at least 1 process for each protocol):
portmapper
status
rquotad
mountd
nfs
nlockmgr
[root@atlanticlinux ~]# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 662 status
100024 1 tcp 662 status
100011 1 udp 875 rquotad
100011 2 udp 875 rquotad
100011 1 tcp 875 rquotad
100011 2 tcp 875 rquotad
100021 1 udp 32769 nlockmgr
100021 3 udp 32769 nlockmgr
100021 4 udp 32769 nlockmgr
100021 1 tcp 32803 nlockmgr
100021 3 tcp 32803 nlockmgr
100021 4 tcp 32803 nlockmgr
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100005 1 udp 892 mountd
100005 1 tcp 892 mountd
100005 2 udp 892 mountd
100005 2 tcp 892 mountd
100005 3 udp 892 mountd
100005 3 tcp 892 mountd
[root@atlanticlinux ~]#
If one of the NFS services does not start up correctly, portmap will be unable to map RPC requests from clients for that service to the correct port.
In many cases, restarting NFS as root (/sbin/service nfs restart) will cause those services to correctly register with portmap and begin working.
If you come back and change your /etc/exports file, the changes you make may not take effect immediately.
You should run the command "exportfs -ra" to force nfsd to re-read the /etc/exports file.
If you can't find the exportfs command, then you can kill nfsd with the -HUP signal (see the man pages for kill for details).
If that still doesn't work, don't forget to check hosts.allow to make sure you haven't forgotten to list any new client machines there.
To verify that nfs and mountd are responding over udp and tcp:
[root@atlanticlinux etc]# rpcinfo -u localhost nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
[root@atlanticlinux etc]#
[root@atlanticlinux etc]# rpcinfo -u localhost mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
[root@atlanticlinux etc]#
[root@atlanticlinux etc]# rpcinfo -t localhost nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
[root@atlanticlinux etc]# rpcinfo -t localhost mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
[root@atlanticlinux etc]#
SECURING THE SHARED FILE SYSTEM
In general, it is a good idea with NFS (as with most Internet services) to explicitly deny access to any IP addresses that do not need access.
/etc/hosts.allow and /etc/hosts.deny
These two files specify which computers on the network can use services on your machine.
Each line of the file contains a single entry listing a service and a set of machines.
When the server gets a request from a machine, it does the following:
1. It first checks hosts.allow to see if the machine matches a rule listed here. If it does, then the machine is allowed access.
2. If the machine does not match an entry in hosts.allow the server then checks hosts.deny to see if the client matches a rule listed there. If it does then the machine is denied access.
3. If the client matches no listings in either file, then it is allowed access.
Edit /etc/hosts.deny to add:
portmap:ALL
mountd:ALL
rquotad:ALL
lockd:ALL
statd:ALL
Edit /etc/hosts.allow to add:
lockd: 192.168.0.1 , 192.168.0.2
rquotad: 192.168.0.1 , 192.168.0.2
mountd: 192.168.0.1 , 192.168.0.2
statd: 192.168.0.1 , 192.168.0.2
where 192.168.0.1 = pacific client host
192.168.0.2 = atlantic client host
If you intend to run NFS on a large number of machines in a local network, /etc/hosts.allow also allows for network/netmask style entries in the same manner as /etc/exports.
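For example, a sketch using a hypothetical 192.168.0.0/24 network (substitute your own subnet):
lockd: 192.168.0.0/255.255.255.0
rquotad: 192.168.0.0/255.255.255.0
mountd: 192.168.0.0/255.255.255.0
statd: 192.168.0.0/255.255.255.0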
CONFIGURE CLIENT NFS IN SOLARIS
@Solaris10 x86 client (pacific):
Some optional Firewall checks:
To Enable Solaris Firewall:
svcadm enable svc:/network/ipfilter:default
To Disable Solaris Firewall:
svcadm disable svc:/network/ipfilter:default
To Check Firewall IP filter list:
ipfstat -io
Check connectivity to the nfs server's nfs and mountd services over tcp and udp (nfs listens on port 2049):
# rpcinfo -t atlanticlinux nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
#
# rpcinfo -u atlanticlinux nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
#
# rpcinfo -t atlanticlinux mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
#
# rpcinfo -u atlanticlinux mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
#
# showmount -e atlanticlinux
export list for atlanticlinux:
/dsk0/nfs 172.16.33.126
#
It can be good to disable the NFSv4 callback daemon (since we will use v3):
svcadm disable svc:/network/nfs/cbd
To make NFSv3 the default maximum version, edit /etc/default/nfs to add:
NFS_CLIENT_VERSMAX=3
Mount the shared FS:
# mkdir -p /disk/share
# mount -F nfs atlanticlinux:/dsk0/nfs /disk/share
Let's check the mount point:
pacific:oradb> df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t0d0s0 9.7G 5.0G 4.6G 52% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 14G 812K 14G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
9.7G 5.0G 4.6G 52% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 14G 36K 14G 1% /tmp
swap 14G 32K 14G 1% /var/run
/dev/dsk/c1t0d0s6 65G 25G 40G 39% /dsk0
/dev/dsk/c1t0d0s7 42M 2.9M 35M 8% /export/home
atlanticlinux:/dsk0/nfs
46G 10G 34G 23% /disk/share
pacific:oradb>
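Optionally, to confirm the NFS version and transport that were actually negotiated for the mount (look for the vers= and proto= flags in the output), run:
pacific:oradb> nfsstat -m /disk/share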
Let's test a file:
In this case I am using a non-root user (oradb) to create a test file:
pacific:oradb> echo "Hello" >> /disk/share/test.txt
pacific:oradb> cd /disk/share
pacific:oradb> ls -l
total 16
-rw-r--r--   1 oradb    1000           6 May 24 18:02 test.txt
pacific:oradb>
Let's check the file at the nfs server:
[root@atlanticlinux sysconfig]# cd /dsk0/nfs
[root@atlanticlinux nfs]# ls -l
total 8
-rw-r--r-- 1 1001 fshare 6 May 24 18:02 test.txt
[root@atlanticlinux nfs]#
OK, now we have a UID mismatch; the OS user accounts need to be aligned so the UIDs match on both hosts.
At the Solaris client host, add a group fshare having the same GID as on the NFS server.
# groupadd -g 1000 fshare
Modify /etc/group to add the oradb user (and any other users that use /disk/share) as a member of fshare:
fshare::1000:oradb
At the NFS server, create the user oradb with the same UID it has on the NFS client hosts, along with its groups (the group names and IDs below must match your client's; these are examples):
# groupadd -g 100 oinstall
# groupadd -g 101 dba
# groupadd -g 102 oper
# useradd -u 1001 -g oinstall -G dba,oper,fshare \
  -d /home/oradb -s /bin/bash oradb
In order for the mount point to persist on the NFS client upon reboot, add in /etc/vfstab:
atlanticlinux:/dsk0/nfs - /disk/share nfs - yes rw,hard,intr
NOTE: above parameters are tab separated.
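You can verify the vfstab entry without rebooting: unmount the share, then mount it by mount point alone, which forces Solaris to look the entry up in /etc/vfstab:
# umount /disk/share
# mount /disk/share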
For Solaris 11, you must enable the NFS client service with the -r (recursive) flag so that its dependent services are enabled as well:
svcadm enable -r svc:/network/nfs/client
reboot
********** TROUBLESHOOTING **********
To monitor NFS traffic, capturing it to a file (the capture path /tmp/nfs.cap is just an example):
# snoop -o /tmp/nfs.cap atlanticlinux
To read the capture:
snoop -i /tmp/nfs.cap
On the Linux NFS client:
Check the available NFS shared file systems:
# showmount -e atlanticlinux
export list for atlanticlinux:
/dsk0/nfs 172.16.33.126
#
Edit the /etc/fstab file to have mount entries that references the NFS shared FS:
atlanticlinux:/dsk0/nfs /disk/share nfs rw,rsize=32768,wsize=32768,sync,nointr,hard 0 0
The above entry ensures that, upon reboot of the Linux client host, the shared file system will be mounted automatically.
Since we haven't rebooted the client host, let's mount the shared FS manually:
# mkdir /disk/share
# mount /disk/share
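To confirm the share came up with the intended options, a quick check (output varies by distribution):
# mount | grep /disk/share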
The RDBMS may have its own requirements for the options in the /etc/fstab entry, so the rw,rsize example may need to be adjusted accordingly.