Oracle Grid Cluster Installation With NFS - page 2


OPERATING ENVIRONMENT SETUP


Install Required OS Packages


Ensure that you have the following required packages installed:


SUNWbtool

* SUNWhea

SUNWlibm

SUNWlibms

SUNWsprot

SUNWtoo

* SUNWxwplt (This is needed for the X Window System)

SUNWfont-xorg-core (This package is required only for GUI-based interactive installation, and not for silent installation)

SUNWlibC

SUNWcsl


* = Denotes packages that are missing from the Solaris 11 default installation. You must install these packages.



To verify the above packages:


pkginfo <package_name>


To install missing packages:


pkg install <package_name>


Note: pkg install downloads the package from Oracle's repository, so an internet connection is required.
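
If you'd like to check all of the required packages in one pass, a small shell loop helps (a convenience sketch; the package names are taken from the list above):


for p in SUNWbtool SUNWhea SUNWlibm SUNWlibms SUNWsprot SUNWtoo \
         SUNWxwplt SUNWfont-xorg-core SUNWlibC SUNWcsl
do
    # pkginfo exits non-zero when a package is not installed
    pkginfo "$p" > /dev/null 2>&1 && echo "$p ... installed" || echo "$p ... MISSING"
done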


Example:


root@s11node1:~# pkginfo SUNWhea

ERROR: information for "SUNWhea" was not found

root@s11node1:~# pkg install SUNWhea

           Packages to install:  1

       Create boot environment: No

Create backup boot environment: No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED

Completed                                1/1     1584/1584      3.2/3.2  354k/s


PHASE                                          ITEMS

Installing new actions                     1702/1702

Updating package state database                 Done

Updating package cache                           0/0

Updating image state                            Done

Creating fast lookup database                   Done

Updating package cache                           1/1

root@s11node1:~#

root@s11node1:~# pkginfo SUNWhea

system      SUNWhea SunOS Header Files

root@s11node1:~#


root@s11node1:~# pkginfo SUNWxwplt

ERROR: information for "SUNWxwplt" was not found

root@s11node1:~# pkg install SUNWxwplt


pkg install: 'SUNWxwplt' matches multiple packages

 pkg://solaris/compatibility/packages/SUNWxwplt

 pkg://solaris/SUNWxwplt


Please provide one of the package FMRIs listed above to the install command.

root@s11node1:~# pkg install //solaris/SUNWxwplt

           Packages to install:  6

       Create boot environment: No

Create backup boot environment: No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED

Completed                                6/6         51/51      0.7/0.7  143k/s


PHASE                                          ITEMS

Installing new actions                       229/229

Updating package state database                 Done

Updating package cache                           0/0

Updating image state                            Done

Creating fast lookup database                   Done

Updating package cache                           1/1

root@s11node1:~#

root@s11node1:~# pkginfo SUNWxwplt

system      SUNWxwplt X Window System platform software

root@s11node1:~#


On some Oracle software product installations, you might get an error related to the X11 code set packages similar to the following:


pacific:oradb> cat prereq2015-02-07_09-55-00PM.err

/etc/inittab does not seem to contain default runlevel information.

pacific:oradb> cat prereq2015-02-07_09-55-00PM.log

...

Checking for SUNWi1cs; Not found.       Failed <<<<

Checking for SUNWi15cs; Not found.      Failed <<<<


It is always good to have these packages installed to avoid running into these issues when using X11 for your software installations.


As root,


pkgadd -d /cdrom/sol_10_113_x86/Solaris_10/Product SUNWi1cs SUNWi15cs


With the initial public release of Solaris 11.2, the assembler package is missing from the installed development packages; it was somehow left out during the development package install. To check, do:


oraem@s11node1:~$ ls /usr/ccs/bin/as

/usr/ccs/bin/as: No such file or directory

oraem@s11node1:~$


In this case, it is missing. To avoid encountering makefile errors during the install, install the assembler package as demonstrated below:


root@s11node1:~# pkg search assembler

INDEX       ACTION VALUE                                          PACKAGE

pkg.fmri    set    solaris/developer/assembler                    pkg:/developer/assembler@0.5.11-0.175.2.1.0.4.0

pkg.summary set    Converts assembler source code to object code. pkg:/developer/assembler@0.5.11-0.175.2.1.0.4.0


root@s11node1:~# pkg install assembler

           Packages to install:  1

       Create boot environment: No

Create backup boot environment: No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED

Completed                                1/1           6/6      0.2/0.2  127k/s


PHASE                                          ITEMS

Installing new actions                         14/14

Updating package state database                 Done

Updating package cache                           0/0

Updating image state                            Done

Creating fast lookup database                   Done

Updating package cache                           1/1

root@s11node1:~#


root@s11node1:~# ls -l /usr/ccs/bin/as

-rwxr-xr-x   1 root     bin       626028 Feb  5 10:22 /usr/ccs/bin/as

root@s11node1:~#




Create OS Groups And OS User


You need to create the OS groups that the grid user will be part of. This needs to be done on both the NFS server (pacific) and the cluster nodes (s11node1, s11node2). It is important that the assigned GID (Group ID) is consistent across all servers.


At the servers - pacific, s11node1, s11node2 - I created the following:


    groupadd -g 2001 orainst   # OINSTALL

    groupadd -g 2002 oradba    # OSDBA

    groupadd -g 2003 oraoper   # OSOPER

    groupadd -g 2004 gridadm   # OSASM

    groupadd -g 2005 griddba   # ASM OSDBA

    groupadd -g 2006 gridoper  # ASM OSOPER
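
To confirm that the GIDs line up across the servers, you can dump the group entries on each one and compare (a quick check; getent is standard on Solaris 11):


for g in orainst oradba oraoper gridadm griddba gridoper
do
    getent group "$g"   # prints the group entry, including the GID
done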


For resource management, I created the following Solaris project on each server:


projadd -p 201 -c "Oracle Grid Project" \

       -K "project.max-shm-memory=(priv,4g,deny)" \

       -K "process.max-sem-nsems=(priv,1024,deny)" \

       -K "process.max-file-descriptor=(basic,4096,deny)" oraclegrid


I created the Grid user on each server as:


useradd -g orainst -G gridadm,griddba,gridoper,oradba,oraoper -p oraclegrid \

    -K "project=oraclegrid" -m -s /bin/bash -d /export/home/oragrid \

    -c "Oracle Grid Owner" -u 1003 oragrid


Note that oragrid's membership in the oradba and oraoper groups is optional, in case you'd like oragrid to manage or execute scripts belonging to those groups.
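
To confirm the group memberships and the project assignment, a quick check (on Solaris, id -p reports the project; run it as the oragrid user):


id -a oragrid                  # lists all group memberships
su - oragrid -c "id -p"        # shows the project the user lands in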




Environment Variables And Required Directories


As always with any Oracle product, you need to define the directory paths for the OS environment variables ORACLE_BASE and ORACLE_HOME.

To ensure that we define the correct directory trees for the environment variables, Oracle has a guideline called OFA (Optimal Flexible Architecture) that describes the standard naming convention and the levels of sub-directories. You may want to read the OFA details found in Oracle’s documentation - Oracle Grid Infrastructure Cluster Installation Concepts - for a better understanding of the environment setup.


Environment script: oraenv_grid.sh

ORACLE_BASE=/opt/app/ogrid ; export ORACLE_BASE

ORACLE_HOME=/opt/app/11.2.0.4/ogrid ; export ORACLE_HOME

ORAINST=/opt/app/oraInventory/oraInst.loc ; export ORAINST

PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:. ; export PATH

echo ------- DATABASE ENV -------

echo ORACLE_BASE=$ORACLE_BASE

echo ORACLE_HOME=$ORACLE_HOME

echo ORAINST=$ORAINST   
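
To load these settings, source the script (note the leading dot) so that the exported variables persist in the current shell rather than in a subshell:


oragrid@s11node1:~$ . ./oraenv_grid.sh


You may also want to call it from the grid user’s profile so it runs at every login.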


Build the ORACLE_BASE and ORACLE_HOME directories on the cluster nodes - s11node1, s11node2.


# mkdir -p /opt/app/ogrid

# mkdir -p /opt/app/11.2.0.4/ogrid

# mkdir -p /opt/app/oraInventory

# chown -R oragrid:orainst /opt/app

# chmod -R 775 /opt/app
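
A quick sanity check of the ownership and permissions afterwards:


# ls -ld /opt/app /opt/app/ogrid /opt/app/11.2.0.4/ogrid /opt/app/oraInventory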


If an Oracle database is to be installed on the same mount point, you may want to set the ownership of its ORACLE_BASE to the appropriate owner.

Example:


# chown oradb:orainst /opt/app/oracle


For more details on the ORACLE_BASE and ORACLE_HOME for a database, read the OFA guidelines found in -

Oracle Database Installation Guide For Oracle Solaris



CONFIGURE NFS CLIENT MOUNT


We need a shared repository to store the OCR (Oracle Cluster Registry) data and the voting disk data. The NFS server needs to be configured to share two directories to store these data. If you are not familiar with NFS setup, you may read my article - Solaris NFS Server and Client Setup.


At the NFS server (pacific), I have created two directories to be shared. Each directory is owned by the oragrid user.


/dsk0/share/crsdata1

/dsk0/share/crsdata2
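
For reference, the shares on pacific could look like the following (a minimal sketch using the legacy share command; the rw/root access lists are one reasonable choice - see my NFS article above for the full server-side setup):


share -F nfs -o rw=s11node1:s11node2,root=s11node1:s11node2 /dsk0/share/crsdata1
share -F nfs -o rw=s11node1:s11node2,root=s11node1:s11node2 /dsk0/share/crsdata2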


The above directories need to be presented as NFS mount points on each client node (s11node1, s11node2) as follows:


/ogrid/clusterdata1

/ogrid/clusterdata2


Thus the entries in your /etc/vfstab should look like:


pacific:/dsk0/share/crsdata1 - /ogrid/clusterdata1 nfs - yes rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768

pacific:/dsk0/share/crsdata2 - /ogrid/clusterdata2 nfs - yes rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768
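
On each client node, create the mount points and mount the shares; the mount command picks up the options from the vfstab entries above:


# mkdir -p /ogrid/clusterdata1 /ogrid/clusterdata2
# mount /ogrid/clusterdata1
# mount /ogrid/clusterdata2
# df -h | grep clusterdata    # confirm that both NFS mounts are up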



ORACLE GRID INSTALLATION


Download Grid Software

If you don’t have a copy of the Oracle Database 11g Release 2 installation software, you may download the 11g R2 Grid software from Oracle’s website.


In my case, the installer file for the grid software is p13390677_112040_Solaris86-64_3of6.zip


Proceed to unzip the grid installer:


oragrid@s11node1:~/install$ unzip p13390677_112040_Solaris86-64_3of6.zip



Setup SSH Key Authentication For Grid User


The Oracle Grid installation for a cluster configuration requires the grid user to log in across cluster node members without being prompted for a password. You can do this either with an rlogin setup or, better, with SSH key authentication. In my case, I prefer SSH key authentication for security reasons.


There are many ways to manually configure SSH key authentication between nodes, as I discussed in my article - Solaris SSH User Key Authentication.


Oracle has provided a convenient way for DBAs to set up SSH key authentication for the grid user through a script - sshUserSetup.sh - found in the Grid installer. To keep things simple, we’ll run this script.


./sshUserSetup.sh -user oragrid -hosts "s11node1 s11node2" -noPromptPassphrase -confirm -advanced    


Sample output:


oragrid@s11node1:~/install/grid/sshsetup$ ./sshUserSetup.sh

Please specify a valid and existing cluster configuration file.

Either user name or host information is missing

Usage ./sshUserSetup.sh -user <user name> [ -hosts "<space separated hostlist>" | -hostfile <absolute path of cluster configuration file> ] [ -advanced ]  [ -verify] [ -exverify ] [ -logfile <desired absolute path of logfile> ] [-confirm] [-shared] [-help] [-usePassphrase] [-noPromptPassphrase]

oragrid@s11node1:~/install/grid/sshsetup$ ./sshUserSetup.sh -user oragrid -hosts "s11node1 s11node2" -noPromptPassphrase -confirm -advanced

The output of this script is also logged into /tmp/sshUserSetup_2015-03-05-10-10-52.log

Hosts are s11node1 s11node2

user is oragrid

Platform:- SunOS

Checking if the remote hosts are reachable

PING s11node1: 5 data bytes

13 bytes from s11node1 (::1): icmp_seq=0.

13 bytes from s11node1 (::1): icmp_seq=1.

13 bytes from s11node1 (::1): icmp_seq=2.

13 bytes from s11node1 (::1): icmp_seq=3.

13 bytes from s11node1 (::1): icmp_seq=4.


----s11node1 PING Statistics----

5 packets transmitted, 5 packets received, 0% packet loss

PING s11node2: 5 data bytes

13 bytes from s11node2 (172.16.33.121): icmp_seq=0.

13 bytes from s11node2 (172.16.33.121): icmp_seq=1.

13 bytes from s11node2 (172.16.33.121): icmp_seq=2.

13 bytes from s11node2 (172.16.33.121): icmp_seq=3.

13 bytes from s11node2 (172.16.33.121): icmp_seq=4.


----s11node2 PING Statistics----

5 packets transmitted, 5 packets received, 0% packet loss

Remote host reachability check succeeded.

The following hosts are reachable: s11node1 s11node2.

The following hosts are not reachable: .

All hosts are reachable. Proceeding further...

firsthost s11node1

numhosts 0

The script will setup SSH connectivity from the host s11node1 to all

the remote hosts. After the script is executed, the user can use SSH to run

commands on the remote hosts or copy files between this host s11node1

and the remote hosts without being prompted for passwords or confirmations.


NOTE 1:

As part of the setup procedure, this script will use ssh and scp to copy

files between the local host and the remote hosts. Since the script does not

store passwords, you may be prompted for the passwords during the execution of

the script whenever ssh or scp is invoked.


NOTE 2:

AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY

AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE

directories.


Do you want to continue and let the script make the above mentioned changes (yes/no)?

Confirmation provided on the command line


The user chose yes

User chose to skip passphrase related questions.

Creating .ssh directory on local host, if not present already

Creating authorized_keys file on local host

Changing permissions on authorized_keys to 644 on local host

Creating known_hosts file on local host

Changing permissions on known_hosts to 644 on local host

Creating config file on local host

If a config file exists already at /export/home/oragrid/.ssh/config, it would be backed up to /export/home/oragrid/.ssh/config.backup.

Creating .ssh directory and setting permissions on remote host s11node1

THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oragrid. THIS IS AN SSH REQUIREMENT.

The script would create /export/home/oragrid/.ssh/config file on remote host s11node1. If a config file exists already at /export/home/oragrid/.ssh/config, it would be backed up to /export/home/oragrid/.ssh/config.backup.

The user may be prompted for a password here since the script would be running SSH on host s11node1.

Warning: Permanently added 's11node1' (RSA) to the list of known hosts.

Password:

Done with creating .ssh directory and setting permissions on remote host s11node1.

Creating .ssh directory and setting permissions on remote host s11node2

THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oragrid. THIS IS AN SSH REQUIREMENT.

The script would create /export/home/oragrid/.ssh/config file on remote host s11node2. If a config file exists already at /export/home/oragrid/.ssh/config, it would be backed up to /export/home/oragrid/.ssh/config.backup.

The user may be prompted for a password here since the script would be running SSH on host s11node2.

Warning: Permanently added 's11node2,172.16.33.121' (RSA) to the list of known hosts.

Done with creating .ssh directory and setting permissions on remote host s11node2.

Copying local host public key to the remote host s11node1

The user may be prompted for a password or passphrase here since the script would be using SCP for host s11node1.

Password:

Done copying local host public key to the remote host s11node1

Copying local host public key to the remote host s11node2

The user may be prompted for a password or passphrase here since the script would be using SCP for host s11node2.

Done copying local host public key to the remote host s11node2

Creating keys on remote host s11node1 if they do not exist already. This is required to setup SSH on host s11node1.


Creating keys on remote host s11node2 if they do not exist already. This is required to setup SSH on host s11node2.


Updating authorized_keys file on remote host s11node1

Updating known_hosts file on remote host s11node1

Updating authorized_keys file on remote host s11node2

Updating known_hosts file on remote host s11node2

cat: cannot open /export/home/oragrid/.ssh/known_hosts.tmp: No such file or directory

cat: cannot open /export/home/oragrid/.ssh/authorized_keys.tmp: No such file or directory

SSH setup is complete.


------------------------------------------------------------------------

Verifying SSH setup

===================

The script will now run the date command on the remote nodes using ssh

to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,

THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR

PASSWORDS. If you see any output other than date or are prompted for the

password, ssh is not setup correctly and you will need to resolve the

issue and set up ssh again.

The possible causes for failure could be:

1. The server settings in /etc/ssh/sshd_config file do not allow ssh

for user oragrid.

2. The server may have disabled public key based authentication.

3. The client public key on the server may be outdated.

4. /export/home/oragrid or /export/home/oragrid/.ssh on the remote host may not be owned by oragrid.

5. User may not have passed -shared option for shared remote users or

may be passing the -shared option for non-shared remote users.

6. If there is output in addition to the date, but no password is asked,

it may be a security alert shown as part of company policy. Append the

additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.

------------------------------------------------------------------------

--s11node1:--

Running /usr/bin/ssh -x -l oragrid s11node1 date to verify SSH connectivity has been setup from local host to s11node1.

IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.

Thursday, March  5, 2015 10:11:16 AM CST

------------------------------------------------------------------------

--s11node2:--

Running /usr/bin/ssh -x -l oragrid s11node2 date to verify SSH connectivity has been setup from local host to s11node2.

IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.

Thursday, March  5, 2015 10:11:16 AM CST

------------------------------------------------------------------------

------------------------------------------------------------------------

Verifying SSH connectivity has been setup from s11node1 to s11node1

IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.

bash: -c: line 0: unexpected EOF while looking for matching `"'

bash: -c: line 1: syntax error: unexpected end of file

------------------------------------------------------------------------

------------------------------------------------------------------------

Verifying SSH connectivity has been setup from s11node1 to s11node2

IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.

bash: -c: line 0: unexpected EOF while looking for matching `"'

bash: -c: line 1: syntax error: unexpected end of file

------------------------------------------------------------------------

-Verification from complete-

SSH verification complete.

oragrid@s11node1:~/install/grid/sshsetup$



Let’s verify that the authorized_keys files were created on each node.


oragrid@s11node1:~$ ls -l $HOME/.ssh

total 11

-rw-r--r--   1 oragrid  orainst      398 Jan 27 17:27 authorized_keys

-rw-------   1 oragrid  orainst     1675 Jan 27 17:21 id_rsa

-rw-r--r--   1 oragrid  orainst      398 Jan 27 17:21 id_rsa.pub

-rw-r--r--   1 oragrid  orainst      404 Jan 27 17:26 known_hosts

oragrid@s11node1:~$


oragrid@s11node2:~$ ls -l $HOME/.ssh

total 11

-rw-r--r--   1 oragrid  orainst      398 Jan 27 17:26 authorized_keys

-rw-------   1 oragrid  orainst     1679 Jan 27 17:23 id_rsa

-rw-r--r--   1 oragrid  orainst      398 Jan 27 17:23 id_rsa.pub

-rw-r--r--   1 oragrid  orainst      404 Jan 27 17:27 known_hosts

oragrid@s11node2:~$



Let’s test passwordless SSH authentication across the cluster node members.


At node1, connect to node2; you should not be prompted for a password.


oragrid@s11node1:~$ ssh s11node2

Last login: Tue Jan 27 17:26:24 2015 from s11node1

Oracle Corporation      SunOS 5.11      11.2    June 2014

oragrid@s11node2:~$


At node2, connect to node1; you should not be prompted for a password.


oragrid@s11node2:~$ ssh s11node1

Last login: Tue Jan 27 17:27:39 2015 from s11node2

Oracle Corporation      SunOS 5.11      11.2    June 2014

oragrid@s11node1:~$



At this point, we have confirmed that our SSH setup is good.


We can now proceed to the next step, which is to verify all the cluster requirements before running the installer.