©2015 - 2022 Chad’s Technoworks.


NFS (Network File System) is a type of file system developed by Sun Microsystems Inc. (now Oracle Corporation) that allows files to be shared across a network with connected subscribers (clients). It was developed initially on UNIX platforms and has since spread to Linux operating environments. In the Microsoft Windows world, its equivalent is a shared folder, but NFS arguably has more security options to offer.

Most NFS implementations today are on the Linux platform, and NFS is commonly found on sharable storage devices such as NAS appliances. Since Linux is open source, voluntary development to support the various NFS versions is ongoing. As of this writing, the most stable version it supports is NFS v3; NFS v4 may already exist in some Linux kernel versions, but it has limited support for the v4 specifications and development is still ongoing at the University of Michigan. Thus, it is advisable to use v3 when implementing mixed operating system client-server configurations. If there is a compelling requirement to implement NFS v4 or later, consider using a commercial UNIX OS such as Solaris, AIX, or HP-UX. My personal preference would be Sun Solaris UNIX, since they are the ones who invented the technology.


On selecting the NFS communication protocol, you have two choices: UDP or TCP.


NFS on UDP

If you have an ever-reliable network, then the lightweight “connectionless” protocol is sufficient for NFS. Since UDP does not require a handshake to connect to clients, delivery of network packets is much faster. I would only implement this protocol if the NFS server and its clients are on the same network switch, to minimize the chance of data loss.

Suggested tuning setup:

Set your network interface MTU equal to your rsize and wsize (usually 8K). Use the ifconfig utility to reset the MTU.

Often, the default MTU size is 1500 bytes. Since most filesystem setups use 8 KB blocks, the minimum optimal rsize and wsize exceed the default network packet size and thus introduce IP packet fragmentation. Adjusting the MTU to at least match your rsize and wsize is therefore recommended for optimal packet transmission.
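As a sketch, the MTU and mount adjustments might look like this (assuming the interface is eth0 and an 8 KB block size; an MTU above 1500 requires jumbo-frame support end-to-end, and the hostname and paths are just the examples used in this guide):

```shell
# Raise the interface MTU to match the NFS block size (requires root,
# and a NIC/switch that supports jumbo frames)
ifconfig eth0 mtu 8192

# Mount over UDP with matching rsize/wsize
mount -t nfs -o proto=udp,vers=3,rsize=8192,wsize=8192 atlanticlinux:/dsk0/nfs /mnt/nfs
```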


NFS on TCP

TCP is a protocol that focuses on reliability of data transmission. It is capable of doing retransmission of dropped packets to ensure 100% delivery of data. It also employs a connection handshake with the client before a session begins. There is some performance overhead in doing all these actions, but it is less noticeable when both the client and the server are not too far apart.

I recommend using TCP for your NFS implementation to avoid data integrity issues. But since TCP is not a “stateless” protocol like UDP, if the NFS server crashes in the midst of packet transmission, the connected clients will hang and the shared file system will need to be remounted.
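On a Linux client, a TCP mount might look like this (a sketch reusing the hostnames and paths from this guide):

```shell
# NFS v3 over TCP; "hard" retries indefinitely on server outages,
# which protects data integrity at the cost of possible client hangs
mount -t nfs -o proto=tcp,vers=3,hard,rsize=32768,wsize=32768 atlanticlinux:/dsk0/nfs /mnt/nfs
```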


Below is my quick reference guide for setting up an NFS server on Linux with client subscribers running various operating environments. My initial sample of a heterogeneous subscriber is a Solaris UNIX client configuration connecting to a Linux NFS server.


Configure Linux NFS Server

Configure Linux NFS Client

Configure Solaris NFS Client


CONFIGURE LINUX NFS SERVER


@Atlanticlinux server (my designated NFS server),


Create a group (fshare) that will own the NFS shared files.


# groupadd -g 1000 fshare


Create your directory to be shared across network.


# mkdir -p /dsk0/nfs

# chgrp fshare /dsk0/nfs

# chmod -R 777 /dsk0/nfs

# chmod g+s /dsk0/nfs
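You can confirm the permissions and the setgid bit (the 's' in the group triad) afterwards:

```shell
# The mode should read drwxrwsrwx with group fshare
ls -ld /dsk0/nfs
```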


Edit /etc/exports to add your shared directory:

/dsk0/nfs  *(rw,sync)

  where * = applies to all available network IPs


You can optionally enter explicit hostnames or IPs instead of the wildcard '*', like:

/usr/local   192.168.0.1(ro) 192.168.0.2(ro)

/dsk0/nfs    172.16.33.126(rw,sync) 172.16.33.127(rw,sync)

  where ro = read only

        rw = read write

        sync = ensure I/O is successful before sending an acknowledgment to the client (default is async)

      

Or, for a large-scale implementation on a local area network, use a subnet mask to specify a range of IPs:

/usr/local 192.168.0.0/255.255.255.0(ro)

/dsk0/nfs  172.16.0.0/255.255.255.0(rw,sync)


Or, use wildcards with partial names:

/usr/local *.mydomain.com(ro)


In Linux, several NFS-related service ports are dynamically allocated. To ensure they use specific ports, edit "/etc/sysconfig/nfs" to uncomment:

RQUOTAD_PORT=875

LOCKD_TCPPORT=32803

LOCKD_UDPPORT=32769

MOUNTD_PORT=892

STATD_PORT=662


Also, for a multi-OS client implementation (e.g. Solaris, AIX), you must turn off NFS v4 since Linux still does not fully support it:


# Turn off v4 protocol support

RPCNFSDARGS="-N 4"
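After restarting the NFS services, you can confirm that v4 is no longer registered (only versions 2 and 3 should appear):

```shell
# No line with version 4 should be present for the nfs program
rpcinfo -p | grep nfs
```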


FIREWALL SETUP FOR NFS

Modify the Linux firewall to include the above fixed ports, plus the NFS port 2049 and the rpcbind port 111, in the INPUT chain.


iptables -A INPUT -p tcp --dport 875 -j ACCEPT

iptables -A INPUT -p udp --dport 875 -j ACCEPT

iptables -A INPUT -p tcp --dport 2049 -j ACCEPT

iptables -A INPUT -p udp --dport 2049 -j ACCEPT

iptables -A INPUT -p tcp --dport 111 -j ACCEPT

iptables -A INPUT -p udp --dport 111 -j ACCEPT

iptables -A INPUT -p tcp --dport 32803 -j ACCEPT

iptables -A INPUT -p udp --dport 32769 -j ACCEPT

iptables -A INPUT -p tcp --dport 892 -j ACCEPT

iptables -A INPUT -p udp --dport 892 -j ACCEPT

iptables -A INPUT -p tcp --dport 662 -j ACCEPT

iptables -A INPUT -p udp --dport 662 -j ACCEPT
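The repetitive rules above can be condensed into a small loop (an equivalent sketch; note that lockd uses different ports for TCP and UDP, so those two rules stay explicit):

```shell
# Open the ports that are shared by TCP and UDP
for port in 875 2049 111 892 662; do
    for proto in tcp udp; do
        iptables -A INPUT -p "$proto" --dport "$port" -j ACCEPT
    done
done
# lockd listens on different ports per protocol
iptables -A INPUT -p tcp --dport 32803 -j ACCEPT
iptables -A INPUT -p udp --dport 32769 -j ACCEPT
```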


NOTE: Rule and policy definitions take effect immediately. To persist upon reboot, modified policies must be saved to the "/etc/sysconfig/iptables" file.

To save your changes,

[root@atlanticlinux sysconfig]# service iptables save

Saving firewall rules to /etc/sysconfig/iptables: [  OK  ]

[root@atlanticlinux sysconfig]#


Or, you can edit the iptables file directly. If your box acts as a router and the file includes FORWARD definitions, you may enter the set values like this:


[root@atlanticlinux sysconfig]# vi iptables

# Generated by iptables-save v1.3.5 on Fri May 24 17:30:10 2013

*filter

:INPUT ACCEPT [0:0]

:FORWARD ACCEPT [0:0]

:OUTPUT ACCEPT [8502:8898400]

:RH-Firewall-1-INPUT - [0:0]

-A INPUT -j RH-Firewall-1-INPUT

-A FORWARD -j RH-Firewall-1-INPUT

-A RH-Firewall-1-INPUT -i lo -j ACCEPT

-A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT

-A RH-Firewall-1-INPUT -p esp -j ACCEPT

-A RH-Firewall-1-INPUT -p ah -j ACCEPT

-A RH-Firewall-1-INPUT -d 224.0.0.251 -p udp -m udp --dport 5353 -j ACCEPT

-A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT

-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 21 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 25 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT

-A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 137 -j ACCEPT

-A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 138 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 139 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 445 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 23 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 8888 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 2049 -j ACCEPT

-A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 2049 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT

-A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 32803 -j ACCEPT

-A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 32769 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 892 -j ACCEPT

-A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 892 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 662 -j ACCEPT

-A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 662 -j ACCEPT

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 875 -j ACCEPT

-A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 875 -j ACCEPT

-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited

COMMIT

# Completed on Fri May 24 17:30:10 2013

[root@atlanticlinux sysconfig]#


STARTUP NFS SERVICES

If NFS is already running, restart it:


[root@atlanticlinux sysconfig]# /etc/init.d/portmap restart

Stopping portmap: [  OK  ]

Starting portmap: [  OK  ]


[root@atlanticlinux sysconfig]# /sbin/service nfs restart

Shutting down NFS mountd: [  OK  ]

Shutting down NFS daemon: [  OK  ]

Shutting down NFS quotas: [  OK  ]

Shutting down NFS services:  [  OK  ]

Starting NFS services:  [  OK  ]

Starting NFS quotas: [  OK  ]

Starting NFS daemon: [  OK  ]

Starting NFS mountd: [  OK  ]

[root@atlanticlinux sysconfig]#


If NFS is not yet running, enable and start it:


[root@atlanticlinux etc]# /sbin/chkconfig nfs on

[root@atlanticlinux etc]# /sbin/service nfs start

Starting NFS services:  [  OK  ]

Starting NFS quotas: [  OK  ]

Starting NFS daemon: [  OK  ]

Starting NFS mountd: [  OK  ]

[root@atlanticlinux etc]#


To reload the exports (if NFS is already running):

exportfs -ra


To list the exported (NFS) file systems:


[root@atlanticlinux etc]# showmount -e

Export list for atlanticlinux:

/dsk0/nfs 172.16.33.127,172.16.33.126

[root@atlanticlinux etc]#



To verify that the NFS RPC-based services are registered with portmap:


# rpcinfo -p


The output should list entries for both TCP and UDP (at least one entry for each protocol):

portmapper

status

rquotad

mountd

nfs

nlockmgr


 

[root@atlanticlinux ~]# rpcinfo -p

   program vers proto   port

    100000    2   tcp    111  portmapper

    100000    2   udp    111  portmapper

    100024    1   udp    662  status

    100024    1   tcp    662  status

    100011    1   udp    875  rquotad

    100011    2   udp    875  rquotad

    100011    1   tcp    875  rquotad

    100011    2   tcp    875  rquotad

    100021    1   udp  32769  nlockmgr

    100021    3   udp  32769  nlockmgr

    100021    4   udp  32769  nlockmgr

    100021    1   tcp  32803  nlockmgr

    100021    3   tcp  32803  nlockmgr

    100021    4   tcp  32803  nlockmgr

    100003    2   udp   2049  nfs

    100003    3   udp   2049  nfs

    100003    2   tcp   2049  nfs

    100003    3   tcp   2049  nfs

    100005    1   udp    892  mountd

    100005    1   tcp    892  mountd

    100005    2   udp    892  mountd

    100005    2   tcp    892  mountd

    100005    3   udp    892  mountd

    100005    3   tcp    892  mountd

[root@atlanticlinux ~]#


If one of the NFS services does not start up correctly, portmap will be unable to map RPC requests from clients for that service to the correct port.

In many cases, restarting NFS as root (/sbin/service nfs restart) will cause those services to correctly register with portmap and begin working.


If you come back and change your /etc/exports file, the changes you make may not take effect immediately.

You should run the command exportfs -ra to force nfsd to re-read the /etc/exports file.

If you can't find the exportfs command, then you can kill nfsd with the -HUP flag (see the man pages for kill for details).

If that still doesn't work, check hosts.allow to make sure any new client machines are listed there.


To verify that nfs and mountd are responding over UDP and TCP:


[root@atlanticlinux etc]# rpcinfo -u atlanticlinux nfs

program 100003 version 2 ready and waiting

program 100003 version 3 ready and waiting

[root@atlanticlinux etc]#

[root@atlanticlinux etc]# rpcinfo -u atlanticlinux mountd

program 100005 version 1 ready and waiting

program 100005 version 2 ready and waiting

program 100005 version 3 ready and waiting

[root@atlanticlinux etc]#

[root@atlanticlinux etc]# rpcinfo -t atlanticlinux nfs

program 100003 version 2 ready and waiting

program 100003 version 3 ready and waiting

[root@atlanticlinux etc]# rpcinfo -t atlanticlinux mountd

program 100005 version 1 ready and waiting

program 100005 version 2 ready and waiting

program 100005 version 3 ready and waiting

[root@atlanticlinux etc]#




SECURING THE SHARED FILE SYSTEM

In general, it is a good idea with NFS (as with most Internet services) to explicitly deny access to any IP addresses that don't need it.


/etc/hosts.allow and /etc/hosts.deny

These two files specify which computers on the network can use services on your machine.

Each line of the file contains a single entry listing a service and a set of machines.

When the server gets a request from a machine, it does the following:

1. It first checks hosts.allow to see if the machine matches a rule listed here. If it does, then the machine is allowed access.

2. If the machine does not match an entry in hosts.allow the server then checks hosts.deny to see if the client matches a rule listed there. If it does then the machine is denied access.

3. If the client matches no listings in either file, then it is allowed access.
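The three-step order above can be sketched as a tiny shell function (purely illustrative; this is not how tcp_wrappers is actually implemented, and the hypothetical helper takes space-separated host lists rather than the real files):

```shell
# check_access HOST ALLOW_LIST DENY_LIST
# Emulates the hosts.allow / hosts.deny decision order:
# 1) match in allow list -> allowed
# 2) match in deny list  -> denied
# 3) no match anywhere   -> allowed
check_access() {
    host=$1; allow_list=$2; deny_list=$3
    for h in $allow_list; do
        [ "$h" = "$host" ] && { echo allowed; return; }
    done
    for h in $deny_list; do
        [ "$h" = "$host" ] && { echo denied; return; }
    done
    echo allowed
}
```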


Edit /etc/hosts.deny to add:


portmap:ALL

mountd:ALL

rquotad:ALL

lockd:ALL

statd:ALL


Edit /etc/hosts.allow to add:


lockd: 192.168.0.1 , 192.168.0.2

rquotad: 192.168.0.1 , 192.168.0.2

mountd: 192.168.0.1 , 192.168.0.2

statd: 192.168.0.1 , 192.168.0.2


where 192.168.0.1 = pacific client host

      192.168.0.2 = atlantic client host

 

If you intend to run NFS on a large number of machines in a local network, /etc/hosts.allow also allows for network/netmask style entries in the same manner as /etc/exports.
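For example, a netmask-style hosts.allow entry might look like this (a sketch reusing the subnet shown earlier):

```
mountd: 192.168.0.0/255.255.255.0
lockd: 192.168.0.0/255.255.255.0
```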




CONFIGURE CLIENT NFS IN SOLARIS


@Solaris10 x86 - pacific (my client host),


Some optional Firewall checks:

  To Enable Solaris Firewall:

    svcadm enable svc:/network/ipfilter:default


  To Disable Solaris Firewall:

    svcadm disable svc:/network/ipfilter:default


  To Check Firewall IP filter list:

    ipfstat -io


Check connectivity to the NFS server using the TCP and UDP protocols at port 2049:

# rpcinfo -t -n 2049 atlanticlinux nfs

program 100003 version 2 ready and waiting

program 100003 version 3 ready and waiting

#

# rpcinfo -u -n 2049 atlanticlinux nfs

program 100003 version 2 ready and waiting

program 100003 version 3 ready and waiting

#

# rpcinfo -t atlanticlinux mountd

program 100005 version 1 ready and waiting

program 100005 version 2 ready and waiting

program 100005 version 3 ready and waiting

#

# rpcinfo -u atlanticlinux mountd

program 100005 version 1 ready and waiting

program 100005 version 2 ready and waiting

program 100005 version 3 ready and waiting

#

# showmount -e atlanticlinux

export list for atlanticlinux:

/dsk0/nfs 172.16.33.126

#



Since we will be using NFS v3, it can be good to disable the NFSv4 callback daemon:


svcadm disable  svc:/network/nfs/cbd



To make NFS v3 the default maximum version, edit /etc/default/nfs to add:


NFS_CLIENT_VERSMAX=3



Mount the shared FS:


# mkdir -p /disk/share

# mount -F nfs -o rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768 atlanticlinux:/dsk0/nfs /disk/share
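You can then confirm the negotiated version and transport from the Solaris client (a quick check; output details vary by release):

```shell
# The Flags line should include vers=3 and proto=tcp
nfsstat -m /disk/share
```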


Let's check the mount point:


pacific:oradb> df -h

Filesystem             size   used  avail capacity  Mounted on

/dev/dsk/c1t0d0s0      9.7G   5.0G   4.6G    52%    /

/devices                 0K     0K     0K     0%    /devices

ctfs                     0K     0K     0K     0%    /system/contract

proc                     0K     0K     0K     0%    /proc

mnttab                   0K     0K     0K     0%    /etc/mnttab

swap                    14G   812K    14G     1%    /etc/svc/volatile

objfs                    0K     0K     0K     0%    /system/object

sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab

/usr/lib/libc/libc_hwcap1.so.1

                       9.7G   5.0G   4.6G    52%    /lib/libc.so.1

fd                       0K     0K     0K     0%    /dev/fd

swap                    14G    36K    14G     1%    /tmp

swap                    14G    32K    14G     1%    /var/run

/dev/dsk/c1t0d0s6       65G    25G    40G    39%    /dsk0

/dev/dsk/c1t0d0s7       42M   2.9M    35M     8%    /export/home

atlanticlinux:/dsk0/nfs

                        46G    10G    34G    23%    /disk/share

pacific:oradb>


Let's test a file:

In this case I am using a non-root user oradb.


pacific:oradb> echo "Hello" >> /disk/share/test.txt

pacific:oradb> cd /disk/share

pacific:oradb> ls -l

total 16

-rw-r--r--   1 oradb    fshare         7 May 25 13:28 test.txt

pacific:oradb>


Let's check the file at the nfs server:


[root@atlanticlinux sysconfig]# cd /dsk0/nfs

[root@atlanticlinux nfs]# ls -l

total 8

-rw-r--r-- 1 cimsrvr fshare 7 May 25 13:28 test.txt

[root@atlanticlinux nfs]#


OK, now we have a UID issue: the file created by oradb shows up as owned by cimsrvr on the server because the UIDs don't match. We need to align the OS users so the UIDs match.


At the solaris client host, add a group fshare having the same GID of the NFS server.

# groupadd -g 1000 fshare


Modify /etc/group to add the oradb user (and any other users that use /disk/share) as a member of fshare.


fshare::1000:oradb


At the NFS server, create the user oradb having the same UID as on the NFS client hosts.

# groupadd -g 2001 orainst

# groupadd -g 2002 oradba

# groupadd -g 2003 oraoper

# useradd -g orainst -G oradba,oraoper,fshare -m \

     -s /bin/bash -d /home/oradb -c "Oracle Account" -u 501 oradb




For the mount point to persist on the NFS client upon reboot, add to /etc/vfstab:


atlanticlinux:/dsk0/nfs   -   /disk/share   nfs   -   yes    rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768


NOTE: the above fields are tab-separated.
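Once the vfstab entry is in place, you can test it without rebooting, since mount picks up the options from /etc/vfstab when given only the mount point (a sketch):

```shell
umount /disk/share   # if it is currently mounted
mount /disk/share    # options are taken from the /etc/vfstab entry
```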


For Solaris 11, you must enable the nfs/client service with the -r option (which also enables its dependencies) for a successful mount of the NFS directory:


svcadm enable -r nfs/client

reboot




********** TROUBLESHOOTING **********


To monitor NFS traffic with host atlanticlinux on interface e1000g0 and write it to a capture file:

# snoop -d e1000g0 -o /dsk0/snoopy.txt atlanticlinux


To read the capture file:

snoop -i snoopy.txt | more




CONFIGURE CLIENT NFS IN LINUX

On the NFS client,


Check the available NFS shared file systems:

# showmount -e atlanticlinux

export list for atlanticlinux:

/dsk0/nfs 172.16.33.126

#


Edit the /etc/fstab file to add a mount entry that references the NFS shared FS:


atlanticlinux:/dsk0/nfs /disk/share nfs rw,rsize=32768,wsize=32768,sync,nointr,hard 0 0


The above entry ensures that upon reboot of the Linux client host the shared file system will be mounted automatically.


Now, since we haven’t rebooted the client host yet, let’s mount the shared FS manually:


# mkdir /disk/share

# mount /disk/share


An RDBMS may have its own requirements for the options in the /etc/fstab entry, so the rw,rsize example above may need to be adjusted accordingly.



