©2015
Solaris NFS Server And Client Setup
NFS (Network File System) is a file-sharing technology originally developed by Sun Microsystems back in the '80s for the UNIX operating system. The concept is similar to the Windows shared folder that most users are familiar with. The majority of NFS implementations today are found on NAS devices, most of which run Linux. Since Linux is quite popular nowadays, most of my NFS server setups have been hosted on Linux. But then came a point when I hit a snag: the Linux server would either crash or hang. I found that the problem always lay in memory allocation and management when the Linux NFS server was subjected to stress with hundreds of file updates. Each incident was a major setback in my implementation and would often require a lot of cleanup and chores to restore the data integrity of the application. Out of frustration, I decided to use the original NFS server engine available in the Solaris UNIX of Sun Microsystems (now Oracle). Nothing beats the original; it is just so reliable!
There are many reference documents available on the Internet on how to build a Solaris NFS server, such as the Solaris NFS FAQ. Below is my implementation for my lab setup, which will be a useful reference for my future needs.
SETUP SOLARIS NFS SERVER
Create the directories to be shared on your NFS server:
pacific:oragrid> su -
Password:
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# cd /dsk0/share
# mkdir crsdata1 crsdata2 crsdata3
# ls -l
total 8
drwxr-xr-x   2 root     root     ... crsdata1
drwxr-xr-x   2 root     root     ... crsdata2
drwxr-xr-x   2 root     root     ... crsdata3
drwxr-xr-x   2 root     root     ... oms
Optionally, you may change the ownership of the directories to a user of the client. This requires the same OS user account, with the same UID and GID, to exist on both the NFS server host and the NFS client host. In my case below, I had previously created the oragrid and oraem users on the NFS server host, matching the UIDs and GIDs of the client.
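As a sketch of how matching accounts might be created on the server side (the UID/GID values 1000, 1100, and 1101 below are hypothetical; use whatever "id oragrid" and "id oraem" report on your client hosts):

```shell
# Hypothetical UIDs/GIDs -- replace with the values reported by
# "id oragrid" and "id oraem" on the NFS client hosts.
groupadd -g 1000 orainst
useradd -u 1100 -g orainst -d /export/home/oragrid -m oragrid
useradd -u 1101 -g orainst -d /export/home/oraem -m oraem
```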
# chown oragrid:orainst crs*
# ls -l
total 8
drwxr-xr-x   2 oragrid  orainst  ... crsdata1
drwxr-xr-x   2 oragrid  orainst  ... crsdata2
drwxr-xr-x   2 oragrid  orainst  ... crsdata3
drwxr-xr-x   2 root     root     ... oms
#
Add the share entries to /etc/dfs/dfstab:
share -F nfs -o rw=s11node1:s11node2 /dsk0/share/crsdata1
share -F nfs -o rw=s11node1:s11node2 /dsk0/share/crsdata2
share -F nfs -o rw=s11node1:s11node2 /dsk0/share/crsdata3
share -F nfs -o rw /dsk0/share/oms
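Once the entries are in /etc/dfs/dfstab, the shares can be published without a reboot (assuming the NFS server daemons are already running):

```shell
# Publish every entry listed in /etc/dfs/dfstab
shareall
# List what is currently being shared
share
```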
Manually start the NFS server service:
# /etc/init.d/nfs.server start
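On Solaris 10, the init script and SMF control the same daemons, so an equivalent way to start (and persistently enable) the NFS server is through SMF:

```shell
# Enable the NFS server service and its dependencies via SMF
svcadm enable -r svc:/network/nfs/server
# Check its status
svcs -l svc:/network/nfs/server
```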
Create the run-control link so the NFS server autostarts at boot:
# cd /etc/rc3.d
# ls
README S16boot.server S50apache S80mipagent
# ln -s /etc/init.d/nfs.server S18nfs.server
# ls -l
total 16
-rw-r--r--   1 root     sys      ... README
-rwxr--r--   1 root     sys      ... S16boot.server
lrwxrwxrwx   1 root     root          22 Feb  3 11:17 S18nfs.server -> /etc/init.d/nfs.server
-rwxr--r--   1 root     sys      ... S50apache
-rwxr--r--   1 root     sys      ... S80mipagent
#
SETUP SOLARIS NFS CLIENT
From the client, verify what is being shared by the server:
chad@s11node1:~$ showmount -e pacific
export list for pacific:
/dsk0/share/crsdata1 s11node1,s11node2
/dsk0/share/crsdata2 s11node1,s11node2
/dsk0/share/crsdata3 s11node1,s11node2
/dsk0/share/oms (everyone)
chad@s11node1:~$
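If showmount cannot reach the server, it can help to confirm that the server's nfsd and mountd are registered with rpcbind:

```shell
# Query the server's registered RPC services from the client
rpcinfo -p pacific | egrep 'nfs|mountd'
```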
As root, create the mount-point directories:
# mkdir -p /oem/app
# mkdir -p /ogrid/clusterdata1 /ogrid/clusterdata2 /ogrid/clusterdata3
Assign user ownership of the newly created directories that will serve as mount points.
# chown -R oraem:orainst /oem/app
# chown -R oragrid:orainst /ogrid
Mount the shared directories:
mount -F nfs pacific:/dsk0/share/crsdata1 /ogrid/clusterdata1
mount -F nfs pacific:/dsk0/share/crsdata2 /ogrid/clusterdata2
mount -F nfs pacific:/dsk0/share/crsdata3 /ogrid/clusterdata3
mount -F nfs pacific:/dsk0/share/oms /oem/app
root@s11node1:~# df -h
Filesystem Size Used Available Capacity Mounted on
rpool/ROOT/solaris 34G 4.2G 4.0G 52% /
/devices 0K 0K 0K 0% /devices
/dev 0K 0K 0K 0% /dev
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 3.0G 1.6M 3.0G 1% /system/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
8.2G 4.2G 4.0G 52% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
rpool/ROOT/solaris/var
34G 230M 4.0G 6% /var
swap 3.2G 128M 3.0G 4% /tmp
rpool/VARSHARE 34G 97K 4.0G 1% /var/share
rpool/export 34G 32K 4.0G 1% /export
rpool/export/home 34G 35K 4.0G 1% /export/home
rpool/export/home/chad
34G 813K 4.0G 1% /export/home/chad
rpool/export/home/oraem
34G 13G 4.0G 77% /export/home/oraem
rpool/export/home/oragrid
34G 1.6G 4.0G 29% /export/home/oragrid
rpool 34G 5.0M 4.0G 1% /rpool
rpool/VARSHARE/zones 34G 31K 4.0G 1% /system/zones
rpool/VARSHARE/pkg 34G 32K 4.0G 1% /var/share/pkg
rpool/VARSHARE/pkg/repositories
34G 31K 4.0G 1% /var/share/pkg/repositories
/hgfs 16G 4.0M 16G 1% /hgfs
/tmp/VMwareDnD 0K 0K 0K 0% /system/volatile/vmblock
pacific:/dsk0/share/oms
63G 18G 44G 30% /oem/app
pacific:/dsk0/share/crsdata1
63G 18G 44G 30% /ogrid/clusterdata1
pacific:/dsk0/share/crsdata2
63G 18G 44G 30% /ogrid/clusterdata2
pacific:/dsk0/share/crsdata3
63G 18G 44G 30% /ogrid/clusterdata3
root@s11node1:~#
For permanent NFS client mounts, edit /etc/vfstab to add the following entries:
pacific:/dsk0/share/crsdata1 - /ogrid/clusterdata1 nfs - yes rw
pacific:/dsk0/share/crsdata2 - /ogrid/clusterdata2 nfs - yes rw
pacific:/dsk0/share/crsdata3 - /ogrid/clusterdata3 nfs - yes rw
pacific:/dsk0/share/oms - /oem/app nfs - yes rw
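With the vfstab entries in place, a file system can be mounted by its mount point alone, or all remote entries can be mounted at once:

```shell
# Mount a single vfstab entry by its mount point
mount /ogrid/clusterdata1
# Or mount all remote (e.g., NFS) file systems listed in /etc/vfstab
mountall -r
```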
To verify current mount options and type:
mount -v
Special NFS Mount Options for Oracle Grid Clusterware
If you are planning to use NFS for Oracle Grid installation and clusterware configuration, the following are the suggested options for the NFS client mounts.
Reference doc: Oracle Grid Infrastructure Installation Guide For Solaris
On the cluster member nodes, you must set the values for the NFS buffer size parameters rsize and wsize to 32768.
The NFS client-side mount options for the Oracle software binaries are:
rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,suid 0 0
If you have Oracle Grid Infrastructure binaries on an NFS mount, then you must include the suid option.
The NFS client-side mount options for Oracle Clusterware files (OCR and voting disk files) are:
rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio
Update the /etc/vfstab file on each node with an entry containing the NFS mount options for your platform.
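Putting the pieces together, a /etc/vfstab entry for one of the clusterware-file mounts in this lab might look like the following (one line per entry; the host and paths are from the setup above, the options from the Oracle guide):

```
pacific:/dsk0/share/crsdata1 - /ogrid/clusterdata1 nfs - yes rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio
```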