
Oracle 12c Enterprise Manager Cloud Control Installation - page 5

Oracle Enterprise Manager High Availability Configuration


NOTE: This article is an extension of my previous article on the Oracle Enterprise Manager (OEM) stand-alone installation. I advise that you read that article first if you are not familiar with the basic installation procedures.


High Availability in Oracle Enterprise Manager (OEM), as of this writing, is mostly based on the ability to fail over the OMS instance to another node. You may further extend this failover capability to the repository database (OMR) by configuring it with either Oracle Data Guard Physical Standby or Oracle RAC. For this exercise, however, we'll focus only on OMS failover, just to examine how it's done. If you want to explore the Data Guard database setup, check my article - Data Guard Setup For Physical Standby.


With today's powerful x86-64 computers, it is more cost effective to deploy monitoring solutions in a virtualized environment than on traditional dedicated hardware. In this exercise, I am using VMware Fusion as my main infrastructure, so some of my hardware configuration tips are specific to virtualization.


I. INFRASTRUCTURE AND OPERATING ENVIRONMENT OVERVIEW



Host: s11node1, s11node1.vlabs.net

OS: Solaris 11.2 x86-64

RAM: 6 GB

Disk: 35 GB

Additional Net Interface:

 eth1 (for the interconnect)

 eth2 (for the OEM vip)

Description: This host is the primary node of the OEM


Host: s11node2, s11node2.vlabs.net

OS: Solaris 11.2 x86-64

RAM: 6 GB

Disk: 35 GB

Additional Net Interface:

 eth1 (for the interconnect)

 eth2 (for the OEM vip)

Description: This host is the failover node of the OEM


Host: pacific.vlabs.net

OS: Solaris 10 x86-64

Database: OMRDB

Description: This host runs the OMR repository database. It is also the NFS server used by the Oracle Grid installation and hosting the shared file system for the OMS installation.

Shared File System:

/disk0/nfs/ocr   (128 MB)

/disk0/nfs/oms   (min 24 GB)



To describe the above architecture: there are two nodes (s11node1, s11node2) for the OMS, but only one is active at a time.

An OEM VIP will be the main IP/hostname for the OEM, and failover of this IP will be controlled by Oracle Grid Clusterware.

The NFS filesystems are mounted on both nodes for use with the Oracle Clusterware OCR and the installation of the OMS.

The OMR database, represented by the instance OMRDB, will be our repository database for 12c OEM. The database setup will not be covered in this article; it is assumed to be up and running already. If you want to understand how to create such a database, start with my article at Oracle EM Cloud Control Installation page 1 - Install Database Software, then proceed to the topics on creating a database.



II. OPERATING ENVIRONMENT SETUP OF PRIMARY NODE AND FAILOVER NODE


BUILD A SOLARIS 11 VM FOR THE PRIMARY AND FAILOVER NODE

Building virtualized instances in depth is a different topic entirely, and I will cover it in future articles when I have the time. For those familiar with VMware: I used the Solaris 11.2 x86-64 ISO image from the oracle.com download page and created s11node1 as my primary node and s11node2 as my failover node. Since the VM instances are configured to use DHCP by default, which assigns a random IP every time a VM starts up, we need to assign a more permanent IP to the primary network interface. This is important for the clusterware configuration. If you don't know how to set this up in VMware Fusion, read my tips in this article - How to assign DHCP static IP for Guest OS.
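As a minimal sketch of what that static DHCP assignment looks like (assuming the VMs sit on Fusion's NAT network vmnet8; the MAC address below is a placeholder for your VM's actual address), the entry on the Mac host goes into /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf:


host s11node1 {
    hardware ethernet 00:0c:29:1b:7a:7b;   # replace with your VM's MAC address
    fixed-address 172.16.33.120;           # the IP net0 should always receive
}


Restart Fusion's networking (or the application itself) for the change to take effect.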

In my case, the net0 interface will have the following IPs:


s11node1 net0 172.16.33.120

s11node2 net0 172.16.33.121


Then my /etc/hosts file will have the initial entries of:


172.16.33.120 s11node1 s11node1.vlabs.net

172.16.33.121 s11node2 s11node2.vlabs.net


plus the pre-existing entry for pacific, which serves as both the NFS server and the database server:


172.16.33.126  pacific pacific.vlabs.net


Thus, my /etc/hosts for both nodes s11node1 and s11node2 would look like:


root@s11node1:/etc# cat /etc/hosts

#

# Copyright 2009 Sun Microsystems, Inc.  All rights reserved.

# Use is subject to license terms.

#

# Internet host table

#

::1 s11node1 localhost

127.0.0.1 localhost loghost

172.16.33.120   s11node1 s11node1.vlabs.net

172.16.33.121   s11node2 s11node2.vlabs.net

172.16.33.126   pacific pacific.vlabs.net

root@s11node1:/etc#



REQUIRED PACKAGES

Ensure that the following required packages are installed on Solaris 11 on your cluster nodes:


SUNWbtool

* SUNWhea

SUNWlibm

SUNWlibms

SUNWsprot

SUNWtoo

* SUNWxwplt (needed for the X Window environment)

SUNWfont-xorg-core (This package is required only for GUI-based interactive installation, and not for silent installation)

SUNWlibC

SUNWcsl


* = Denotes packages missing from the Solaris 11 default installation. You must install these packages.



To verify above packages:


pkginfo <package_name>


To install missing packages:


pkg install <package name>


Note: pkg install downloads the package from Oracle's repository, so an internet connection is required.


Example:


root@s11node1:~# pkginfo SUNWhea

ERROR: information for "SUNWhea" was not found

root@s11node1:~# pkg install SUNWhea

           Packages to install:  1

       Create boot environment: No

Create backup boot environment: No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED

Completed                                1/1     1584/1584      3.2/3.2  354k/s


PHASE                                          ITEMS

Installing new actions                     1702/1702

Updating package state database                 Done

Updating package cache                           0/0

Updating image state                            Done

Creating fast lookup database                   Done

Updating package cache                           1/1

root@s11node1:~#

root@s11node1:~# pkginfo SUNWhea

system      SUNWhea SunOS Header Files

root@s11node1:~#


root@s11node1:~# pkginfo SUNWxwplt

ERROR: information for "SUNWxwplt" was not found

root@s11node1:~# pkg install SUNWxwplt


pkg install: 'SUNWxwplt' matches multiple packages

 pkg://solaris/compatibility/packages/SUNWxwplt

 pkg://solaris/SUNWxwplt


Please provide one of the package FMRIs listed above to the install command.

root@s11node1:~# pkg install //solaris/SUNWxwplt

           Packages to install:  6

       Create boot environment: No

Create backup boot environment: No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED

Completed                                6/6         51/51      0.7/0.7  143k/s


PHASE                                          ITEMS

Installing new actions                       229/229

Updating package state database                 Done

Updating package cache                           0/0

Updating image state                            Done

Creating fast lookup database                   Done

Updating package cache                           1/1

root@s11node1:~#

root@s11node1:~# pkginfo SUNWxwplt

system      SUNWxwplt X Window System platform software

root@s11node1:~#



CREATE A VIP FOR USE WITH OEM AT THE PRIMARY SERVER


In my case, I configured an available IP, 172.16.33.99, on my public interface and assigned it the hostname oem-vip.vlabs.net in /etc/hosts.

If you don't know how to do this, read my instructions on configuring a VIP on an existing network interface in my article - Solaris 11 Network IP Configuration.
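For a quick sketch of what that amounts to on Solaris 11 (assuming the VIP goes on the public interface net0 with the address object name net0/oemvip, which matches the verification output below):


# as root on s11node1
ipadm create-addr -T static -a 172.16.33.99/24 net0/oemvip


Then add the oem-vip entry to /etc/hosts on both nodes:


172.16.33.99   oem-vip oem-vip.vlabs.net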


Let's verify the VIP setup:


oraem@s11node1:~$ ifconfig -a

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1

 inet 127.0.0.1 netmask ff000000

net0: flags=100001004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4,PHYSRUNNING> mtu 1500 index 2

 inet 172.16.33.120 netmask ffffff00 broadcast 172.16.33.255

net0:1: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2

 inet 172.16.33.99 netmask ffffff00 broadcast 172.16.33.255

net1: flags=100001004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4,PHYSRUNNING> mtu 1500 index 3

 inet 192.168.65.111 netmask ffffff00 broadcast 192.168.65.255

lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1

 inet6 ::1/128

net0: flags=120002004841<UP,RUNNING,MULTICAST,DHCP,IPv6,PHYSRUNNING> mtu 1500 index 2

 inet6 fe80::20c:29ff:fe1b:7a7b/10

net1: flags=120002004841<UP,RUNNING,MULTICAST,DHCP,IPv6,PHYSRUNNING> mtu 1500 index 3

 inet6 fe80::250:56ff:fe21:4912/10

oraem@s11node1:~$ ipadm show-addr

ADDROBJ           TYPE     STATE        ADDR

lo0/v4            static   ok           127.0.0.1/8

net0/v4           dhcp     ok           172.16.33.120/24

net0/oemvip       static   ok           172.16.33.99/24

net1/v4           dhcp     ok           192.168.65.111/24

lo0/v6            static   ok           ::1/128

net0/v6           addrconf ok           fe80::20c:29ff:fe1b:7a7b/10

net1/v6           addrconf ok           fe80::250:56ff:fe21:4912/10

oraem@s11node1:~$


The /etc/hosts file on both nodes s11node1 and s11node2 now has the following entries:


root@s11node1:~# cat /etc/hosts

#

# Copyright 2009 Sun Microsystems, Inc.  All rights reserved.

# Use is subject to license terms.

#

# Internet host table

#

::1 s11node1 localhost

127.0.0.1 localhost loghost

172.16.33.120   s11node1 s11node1.vlabs.net

172.16.33.121   s11node2 s11node2.vlabs.net

172.16.33.126   pacific pacific.vlabs.net

192.168.65.111  s11grid1 s11grid1.private.net

192.168.65.112  s11grid2 s11grid2.private.net

172.16.33.99   oem-vip oem-vip.vlabs.net

root@s11node1:~#




CREATE USER AND GROUPS


Ensure that the UIDs and GIDs are consistent between the NFS server and the OEM nodes.
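Once the users and groups below have been created, a quick sanity check is to compare the numeric IDs on each host; they must be identical on s11node1, s11node2, and pacific:


id oraem
id oragrid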


For Solaris only, create the project resource for the users.


projadd -p 300 -c "Oracle EM Project" \

       -K "project.max-shm-memory=(priv,6g,deny)" \

       -K "process.max-sem-nsems=(priv,1024,deny)" \

       -K "process.max-file-descriptor=(basic,4096,deny)" oracleEM

       

projadd -p 201 -c "Oracle Grid Project" \

       -K "project.max-shm-memory=(priv,4g,deny)" \

       -K "process.max-sem-nsems=(priv,1024,deny)" \

       -K "process.max-file-descriptor=(basic,4096,deny)" oraclegrid

                  

Create the OS groups (on s11node1, s11node2, and pacific):

      

    groupadd -g 2001 orainst   # OINSTALL

    groupadd -g 2002 oradba    # OSDBA

    groupadd -g 2003 oraoper   # OSOPER

    groupadd -g 2004 gridadm   # OSASM

    groupadd -g 2005 griddba   # ASM OSDBA

    groupadd -g 2006 gridoper  # ASM OSOPER



Create the OS Users for Solaris:


useradd -g orainst -G oradba,oraoper -p oracleEM -K "project=oracleEM" -m \

    -s /bin/bash -d /export/home/oraem -c "Oracle Enterprise Manager" -u 2001 oraem


#useradd -g orainst -G oradba,oraoper -p oracleEM -K "project=oracleEM" -m \

#    -s /bin/bash -d /export/home/oragrid -c "Oracle Grid" -u 1003 oragrid


useradd -g orainst -G gridadm,griddba,gridoper,oradba,oraoper -p oraclegrid \

    -K "project=oraclegrid" -m -s /bin/bash -d /export/home/oragrid \

    -c "Oracle Grid Owner" -u 1003 oragrid



In the environment script (.profile) for user oraem, add:

ORACLE_HOSTNAME=<your virtual host (vip)>


example:

ORACLE_HOSTNAME=oem-vip.vlabs.net



CONFIGURE NFS SHARED FILE SYSTEM FOR OMS

The 12c EM Cloud Control software has to be available on both nodes s11node1 and s11node2 so that the software can be started after any failover event. We will use NFS as the shared file system where the installed software will reside; on both nodes it will be mounted as /oem/app. At this point we'll configure the NFS server (pacific.vlabs.net) and the client nodes (s11node1, s11node2) to create the /oem/app mount point. Detailed steps and examples are discussed in my article - Solaris NFS Server and Client Setup.
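As a rough sketch of the NFS pieces involved (the share path follows the overview in section I; Oracle also documents specific NFS mount options for software binaries, so cross-check the install guide for your release):


# on the NFS server pacific (Solaris 10); persist the share in /etc/dfs/dfstab
share -F nfs -o rw /disk0/nfs/oms

# on each client node (s11node1, s11node2), as root
mkdir -p /oem/app
mount -F nfs -o rw,hard,nointr,rsize=32768,wsize=32768 pacific:/disk0/nfs/oms /oem/app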


Once the NFS configuration is done, ensure that the ownership of the /oem/app mount point belongs to the Oracle OEM user oraem.


# chown oraem:orainst /oem/app



III. INSTALL ORACLE 12c CLOUD CONTROL SOFTWARE (also known as OEM)


The installation of OEM has already been covered in the previous sections of this article - Stand Alone OEM Setup - Install Enterprise Manager.

The only difference is that the hostname should be the VIP hostname instead of the node hostname. To ensure that the correct hostname is used during the install, set the following environment variable for the OEM user oraem:

ORACLE_HOSTNAME=oem-vip.vlabs.net


Also, ensure that the OMR database has the correct init parameter settings as discussed here - init.ora Parameter Adjustments.
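Before launching the installer you can spot-check the current settings from sqlplus; the required values depend on your EM release, so treat this only as a quick review against the install guide:


SQL> show parameter processes
SQL> show parameter session_cached_cursors
SQL> show parameter shared_pool_size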


My environment parameter script for use with oraem user:

>>>File: oraenv_oem.sh

MIDDLEWARE_HOME=/oem/app/oraem/middleware ; export MIDDLEWARE_HOME

AGENT_BASE=/oem/app/oraem/agent12c ; export AGENT_BASE

INSTANCE_BASE=/oem/app/oraem/gc_inst ; export INSTANCE_BASE

ORACLE_HOSTNAME=oem-vip.vlabs.net ; export ORACLE_HOSTNAME

ORACLE_HOME=$MIDDLEWARE_HOME/oms ; export ORACLE_HOME

PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:. ; export PATH

EDITOR=vi ; export EDITOR

TMPDIR=/oem/app/oraem/tmp ; export TMPDIR

TEMP=/oem/app/oraem/tmp ; export TEMP

TMP=/oem/app/oraem/tmp ; export TMP

  if [ ! -d $TMPDIR ];

  then

    mkdir -p $TMPDIR

  fi

echo ------ ORACLE 12C EM CLOUD CONTROL -------

echo MIDDLEWARE_HOME=$MIDDLEWARE_HOME

echo AGENT_BASE=$AGENT_BASE

echo INSTANCE_BASE=$INSTANCE_BASE

echo ORACLE_HOME=$ORACLE_HOME

echo TMPDIR=$TMPDIR


Pre-create the directories before running the installer.


mkdir -p $MIDDLEWARE_HOME

mkdir -p $AGENT_BASE

mkdir -p $INSTANCE_BASE

mkdir -p /oem/app/oraInventory



Start up the OMRDB11 database first:


pacific:oradb> . ./oraenv_multihome_v2 db 11.2 OMRDB11

------- DATABASE ENV -------

ORACLE_SID=OMRDB11

ORACLE_BASE=/dsk0/orabin/11gR2

ORACLE_HOME=/dsk0/orabin/11gR2/product/11.2.0.4/db

ORAINST=/dsk0/orabin/11gR2/product/11.2.0.4/db/oraInst.loc

TNS_ADMIN=/dsk0/orabin/11gR2/product/11.2.0.4/db/network/admin

NLS_LANG=AMERICAN_AMERICA.UTF8


pacific:oradb> ls

afiedt.buf                 oraenv_ORA10DB1_gg_db.sh*

oraenv_multihome_v2*

pacific:oradb> lsnrctl start OMRDB11


LSNRCTL for Solaris: Version 11.2.0.4.0 - Production on 05-FEB-2015 09:39:47


Copyright (c) 1991, 2013, Oracle.  All rights reserved.


Starting /dsk0/orabin/11gR2/product/11.2.0.4/db/bin/tnslsnr: please wait...


TNSLSNR for Solaris: Version 11.2.0.4.0 - Production

System parameter file is /dsk0/orabin/11gR2/product/11.2.0.4/db/network/admin/listener.ora

Log messages written to /dsk0/orabin/11gR2/diag/tnslsnr/pacific/omrdb11/alert/log.xml

Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=pacific)(PORT=1537)))


Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=pacific)(PORT=1537)))

STATUS of the LISTENER

------------------------

Alias                     OMRDB11

Version                   TNSLSNR for Solaris: Version 11.2.0.4.0 - Production

Start Date                05-FEB-2015 09:39:47

Uptime                    0 days 0 hr. 0 min. 0 sec

Trace Level               off

Security                  ON: Local OS Authentication

SNMP                      OFF

Listener Parameter File   /dsk0/orabin/11gR2/product/11.2.0.4/db/network/admin/listener.ora

Listener Log File         /dsk0/orabin/11gR2/diag/tnslsnr/pacific/omrdb11/alert/log.xml

Listening Endpoints Summary...

  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=pacific)(PORT=1537)))

Services Summary...

Service "PLSExtProc" has 1 instance(s).

  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...

The command completed successfully

pacific:oradb> sqlplus /nolog


SQL*Plus: Release 11.2.0.4.0 Production on Thu Feb 5 09:39:54 2015


Copyright (c) 1982, 2013, Oracle.  All rights reserved.


SQL> connect / as sysdba

Connected to an idle instance.

SQL> startup

ORACLE instance started.


Total System Global Area 2137886720 bytes

Fixed Size                  2252576 bytes

Variable Size             805306592 bytes

Database Buffers         1325400064 bytes

Redo Buffers                4927488 bytes

Database mounted.

Database opened.

SQL> exit

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

pacific:oradb>


Back on your cluster nodes: if you are on the initial public release of Solaris 11.2, ensure that an assembler exists; it was somehow left out of the developer package installation. To check:


oraem@s11node1:~$ ls /usr/ccs/bin/as

/usr/ccs/bin/as: No such file or directory

oraem@s11node1:~$


In this case, it is missing. To avoid makefile errors during the install, install the assembler package as shown below:


root@s11node1:~# pkg search assembler

INDEX       ACTION VALUE                                          PACKAGE

pkg.fmri    set    solaris/developer/assembler                    pkg:/developer/assembler@0.5.11-0.175.2.1.0.4.0

pkg.summary set    Converts assembler source code to object code. pkg:/developer/assembler@0.5.11-0.175.2.1.0.4.0

root@s11node1:~# pkg install assembler

           Packages to install:  1

       Create boot environment: No

Create backup boot environment: No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED

Completed                                1/1           6/6      0.2/0.2  127k/s


PHASE                                          ITEMS

Installing new actions                         14/14

Updating package state database                 Done

Updating package cache                           0/0

Updating image state                            Done

Creating fast lookup database                   Done

Updating package cache                           1/1

root@s11node1:~#

root@s11node1:~# ls -l /usr/ccs/bin/as

-rwxr-xr-x   1 root     bin       626028 Feb  5 10:22 /usr/ccs/bin/as

root@s11node1:~#


Then run the 12c OEM installer with the following parameters:


runInstaller -invPtrLoc /oem/app/oraInventory/oraInst.loc ORACLE_HOSTNAME=oem-vip.vlabs.net


During the install, set your Oracle inventory path to /oem/app/oraInventory and leave the rest of the options at their defaults.

Take note of the passwords you supplied for the following users:

Middleware users:

weblogic

nodemanager

The agent registration password: oracle123


Database user:

sysman


At the last part of the installation, you’ll be prompted to run the root script.


root@s11node1:/oem/app/oraem/middleware/oms# ./allroot.sh


Once the installation has finished, a window will display the following message:


This information is also available at:


 /oem/app/oraem/middleware/oms/install/setupinfo.txt


See below for information pertaining to your Enterprise Manager installation:



Use the following URL to access:


 1. Enterprise Manager Cloud Control URL: https://oem-vip.vlabs.net:7799/em

 2. Admin Server URL: https://oem-vip.vlabs.net:7101/console


The following details need to be provided during the additional OMS install:


 1. Admin Server Hostname: oem-vip.vlabs.net

 2. Admin Server Port: 7101


You can find the details on ports used by this deployment at : /oem/app/oraem/middleware/oms/install/portlist.ini



 NOTE:

 An encryption key has been generated to encrypt sensitive data in the Management Repository. If this key is lost, all encrypted data in the Repository becomes unusable.


 A backup of the OMS configuration is available in /oem/app/oraem/gc_inst/em/EMGC_OMS1/sysman/backup on host oem-vip.vlabs.net. See Cloud Control Administrators Guide for details on how to back up and recover an OMS.


 NOTE: This backup is valid only for the initial OMS configuration. For example, it will not reflect plug-ins installed later, topology changes like the addition of a load balancer, or changes to other properties made using emctl or emcli. Backups should be created on a regular basis to ensure they capture the current OMS configuration. Use the following command to backup the OMS configuration:

/oem/app/oraem/middleware/oms/bin/emctl exportconfig oms -dir <backup dir>
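For example (the backup directory here is arbitrary; any path writable by the oraem user will do):


emctl exportconfig oms -dir /oem/app/oraem/backup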


At this point, the installation of the 12c Cloud Control Software has completed.

We can now proceed to next steps to enable High Availability so that the OEM instance can failover to the next node.




IV. ORACLE GRID CLUSTER SETUP


So far, what we have accomplished is the installation of OEM on a shared file system via the NFS mount, using a VIP as the OEM host. The VIP (oem-vip.vlabs.net) is currently configured directly on the network interface of host s11node1. Even though the OEM software is available on both nodes (s11node1, s11node2), we cannot start OEM on s11node2 because the OEM VIP is absent there. What we need is the ability to fail the VIP over from s11node1 to s11node2 and vice versa. Oracle Grid Clusterware, which comes as part of the Oracle RAC product, is capable of doing this job. But since it is a product intended for RAC databases, the Grid setup has “extras” we don't need, such as the SCAN IP meant for the Oracle database listener. As of this writing, the current version of Oracle Grid does not allow you to skip the SCAN configuration, so it is something we cannot avoid. Thus, we need additional IPs to satisfy the SCAN requirement when we install Oracle Grid.
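In a lab without DNS, satisfying the SCAN requirement can be as simple as parking an unused address in /etc/hosts under a SCAN name on both nodes; the name and IP below are hypothetical placeholders:


172.16.33.98   oem-scan oem-scan.vlabs.net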

 

The setup of Oracle Grid Cluster is covered in my article - Oracle Grid Cluster Installation With NFS. The grid cluster has to be installed on s11node1 and s11node2. Once that's done, the next step is to configure an application VIP (oem-vip.vlabs.net) for cluster failover, which I discussed in this section - Application Resource Configuration.
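For reference, creating the application VIP itself is a short exercise with the appvipcfg utility shipped in the Grid home. A sketch, assuming the public network is network number 1 and the resource name omsvip seen in the crs_stat output later:


# as root, with the Grid home bin directory in PATH
appvipcfg create -network=1 -ip=172.16.33.99 -vipname=omsvip -user=root

# allow the grid owner to manage the resource, then bring it online
crsctl setperm resource omsvip -u user:oragrid:r-x
crsctl start resource omsvip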




APPLICATION FAILOVER PREP TASKS FOR OEM


Setup Oracle Inventory Location

Since we installed OEM only on the first node s11node1, the Oracle inventory location file oraInst.loc exists only on that node. This file is important because it records where the various Oracle software installations are located, which matters during patching and upgrades of OEM. It therefore has to exist on the second node s11node2 as well. At the moment, because we installed Oracle Grid, the inventory file points to the locally installed directories of the grid software. We need to back up the existing inventory location file - /var/opt/oracle/oraInst.loc - then create a new one that points to the OEM software.


As root (do this on both nodes - s11node1, s11node2), backup the file /var/opt/oracle/oraInst.loc of Oracle Grid.


# cd /var/opt/oracle

# cp oraInst.loc oraInst.loc.grid


Next, locate the oraInst.loc of the OEM.

It usually resides two directory levels above the MIDDLEWARE_HOME. In my case, the MIDDLEWARE_HOME is /oem/app/oraem/middleware, which means the inventory file is located at /oem/app/oraInventory/oraInst.loc.

 

# ls -l /oem/app/oraInventory/oraInst.loc


Then, create a symbolic link in /var/opt/oracle pointing to that file.


# cd /var/opt/oracle

# ln -s /oem/app/oraInventory/oraInst.loc oraInst.loc


Verify that the link points to the correct location (/oem/app/oraInventory/oraInst.loc):

# ls -l oraInst.loc



Setup OEM User Environment Script

Create/modify a user environment script to include ORACLE_HOSTNAME set to the OEM VIP. Do this on both nodes.


>>>>oraenv_oem.sh


MIDDLEWARE_HOME=/oem/app/oraem/middleware ; export MIDDLEWARE_HOME

AGENT_BASE=/oem/app/oraem/agent12c ; export AGENT_BASE

INSTANCE_BASE=/oem/app/oraem/gc_inst ; export INSTANCE_BASE

ORACLE_HOSTNAME=oem-vip.vlabs.net ; export ORACLE_HOSTNAME

ORACLE_HOME=$MIDDLEWARE_HOME/oms ; export ORACLE_HOME

PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:. ; export PATH

EDITOR=vi ; export EDITOR

TMPDIR=/oem/app/oraem/tmp ; export TMPDIR

TEMP=/oem/app/oraem/tmp ; export TEMP

TMP=/oem/app/oraem/tmp ; export TMP

  if [ ! -d $TMPDIR ];

  then

    mkdir -p $TMPDIR

  fi

echo ------ ORACLE 12C EM CLOUD CONTROL -------

echo MIDDLEWARE_HOME=$MIDDLEWARE_HOME

echo AGENT_BASE=$AGENT_BASE

echo INSTANCE_BASE=$INSTANCE_BASE

echo ORACLE_HOSTNAME=$ORACLE_HOSTNAME

echo ORACLE_HOME=$ORACLE_HOME

echo TMPDIR=$TMPDIR



Test OEM Startup In The Failover Node

The objective is to test whether OEM can successfully start up on the failover node.


1. If OEM is active, shut down the running OMS on s11node1.


As OEM user, execute -


emctl stop oms -all



2. Fail the OEM VIP over from the primary node (s11node1) to the second node (s11node2).


As grid user, execute -


crsctl relocate resource omsvip -n s11node2


Example:


Check that the resource omsvip is online on the primary node s11node1.


oragrid@s11node1:~/.env$ crs_stat -t

Name           Type           Target    State     Host

------------------------------------------------------------

omsvip         app....t1.type ONLINE    ONLINE    s11node1

ora....ER.lsnr ora....er.type ONLINE    ONLINE    s11node1

ora....N1.lsnr ora....er.type ONLINE    ONLINE    s11node1

ora.asm        ora.asm.type   OFFLINE   OFFLINE

ora.cvu        ora.cvu.type   ONLINE    ONLINE    s11node1

ora.gsd        ora.gsd.type   OFFLINE   OFFLINE

ora....network ora....rk.type ONLINE    ONLINE    s11node1

ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    s11node1

ora.ons        ora.ons.type   ONLINE    ONLINE    s11node1

ora....ry.acfs ora....fs.type OFFLINE   OFFLINE

ora....SM1.asm application    OFFLINE   OFFLINE

ora....E1.lsnr application    ONLINE    ONLINE    s11node1

ora....de1.gsd application    OFFLINE   OFFLINE

ora....de1.ons application    ONLINE    ONLINE    s11node1

ora....de1.vip ora....t1.type ONLINE    ONLINE    s11node1

ora....SM2.asm application    OFFLINE   OFFLINE

ora....E2.lsnr application    ONLINE    ONLINE    s11node2

ora....de2.gsd application    OFFLINE   OFFLINE

ora....de2.ons application    ONLINE    ONLINE    s11node2

ora....de2.vip ora....t1.type ONLINE    ONLINE    s11node2

ora.scan1.vip  ora....ip.type ONLINE    ONLINE    s11node1

oragrid@s11node1:~/.env$



Relocate the resource omsvip to the second node s11node2.


oragrid@s11node1:~/.env$ crsctl relocate resource omsvip -n s11node2

CRS-2673: Attempting to stop 'omsvip' on 's11node1'

CRS-2677: Stop of 'omsvip' on 's11node1' succeeded

CRS-2672: Attempting to start 'omsvip' on 's11node2'

CRS-2676: Start of 'omsvip' on 's11node2' succeeded

oragrid@s11node1:~/.env$


oragrid@s11node1:~/.env$ crs_stat -t

Name           Type           Target    State     Host

------------------------------------------------------------

omsvip         app....t1.type ONLINE    ONLINE    s11node2

ora....ER.lsnr ora....er.type ONLINE    ONLINE    s11node1

ora....N1.lsnr ora....er.type ONLINE    ONLINE    s11node1

ora.asm        ora.asm.type   OFFLINE   OFFLINE

ora.cvu        ora.cvu.type   ONLINE    ONLINE    s11node1

ora.gsd        ora.gsd.type   OFFLINE   OFFLINE

ora....network ora....rk.type ONLINE    ONLINE    s11node1

ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    s11node1

ora.ons        ora.ons.type   ONLINE    ONLINE    s11node1

ora....ry.acfs ora....fs.type OFFLINE   OFFLINE

ora....SM1.asm application    OFFLINE   OFFLINE

ora....E1.lsnr application    ONLINE    ONLINE    s11node1

ora....de1.gsd application    OFFLINE   OFFLINE

ora....de1.ons application    ONLINE    ONLINE    s11node1

ora....de1.vip ora....t1.type ONLINE    ONLINE    s11node1

ora....SM2.asm application    OFFLINE   OFFLINE

ora....E2.lsnr application    ONLINE    ONLINE    s11node2

ora....de2.gsd application    OFFLINE   OFFLINE

ora....de2.ons application    ONLINE    ONLINE    s11node2

ora....de2.vip ora....t1.type ONLINE    ONLINE    s11node2

ora.scan1.vip  ora....ip.type ONLINE    ONLINE    s11node1

oragrid@s11node1:~/.env$



Verify that the VIP is now configured on a network interface on s11node2.


oraem@s11node2:~/.env$ cat /etc/hosts | grep oem-vip

172.16.33.99   oem-vip oem-vip.vlabs.net

oraem@s11node2:~/.env$ ifconfig -a | grep 172.16.33.99

        inet 172.16.33.99 netmask ffffff00 broadcast 172.16.33.255

oraem@s11node2:~/.env$



3. Manually start up the OEM on the second node (s11node2).


As OEM user, execute -


emctl start oms


Example:


Verify that the NFS mount of the OEM software exists on s11node2:


oraem@s11node2:~/.env$ df -h

Filesystem             Size   Used  Available Capacity  Mounted on

rpool/ROOT/solaris      34G   6.9G        16G    31%    /

/devices                 0K     0K         0K     0%    /devices

/dev                     0K     0K         0K     0%    /dev

ctfs                     0K     0K         0K     0%    /system/contract

proc                     0K     0K         0K     0%    /proc

mnttab                   0K     0K         0K     0%    /etc/mnttab

swap                   3.6G   1.6M       3.6G     1%    /system/volatile

objfs                    0K     0K         0K     0%    /system/object

sharefs                  0K     0K         0K     0%    /etc/dfs/sharetab

/usr/lib/libc/libc_hwcap1.so.1

                        23G   6.9G        16G    31%    /lib/libc.so.1

fd                       0K     0K         0K     0%    /dev/fd

rpool/ROOT/solaris/var

                        34G   249M        16G     2%    /var

swap                   3.6G   104K       3.6G     1%    /tmp

rpool/VARSHARE          34G   449K        16G     1%    /var/share

rpool/export            34G    32K        16G     1%    /export

rpool/export/home       34G    35K        16G     1%    /export/home

rpool/export/home/chad

                        34G   867K        16G     1%    /export/home/chad

rpool/export/home/oraem

                        34G   320K        16G     1%    /export/home/oraem

rpool/export/home/oragrid

                        34G   9.0M        16G     1%    /export/home/oragrid

rpool                   34G   5.0M        16G     1%    /rpool

rpool/VARSHARE/zones    34G    31K        16G     1%    /system/zones

rpool/VARSHARE/pkg      34G    32K        16G     1%    /var/share/pkg

rpool/VARSHARE/pkg/repositories

                        34G    31K        16G     1%    /var/share/pkg/repositories

pacific:/dsk0/share/crsdata1

                        63G    47G        16G    75%    /ogrid/crsdata1

pacific:/dsk0/share/crsdata2

                        63G    47G        16G    75%    /ogrid/crsdata2

/hgfs                   16G   4.0M        16G     1%    /hgfs

/tmp/VMwareDnD           0K     0K         0K     0%    /system/volatile/vmblock

pacific:/dsk0/share/oms

                        63G    47G        16G    75%    /oem/app

oraem@s11node2:~/.env$



Start up the OMS on the failover node s11node2:


oraem@s11node2:~/.env$ . ./oraenv_oem.sh

------ ORACLE 12C EM CLOUD CONTROL -------

MIDDLEWARE_HOME=/oem/app/oraem/middleware

AGENT_BASE=/oem/app/oraem/agent12c

INSTANCE_BASE=/oem/app/oraem/gc_inst

ORACLE_HOSTNAME=oem-vip.vlabs.net

ORACLE_HOME=/oem/app/oraem/middleware/oms

TMPDIR=/oem/app/oraem/tmp

oraem@s11node2:~/.env$


oraem@s11node2:~/.env$ which emctl

/oem/app/oraem/middleware/oms/bin/emctl

oraem@s11node2:~/.env$


oraem@s11node2:~/.env$ emctl start oms

Oracle Enterprise Manager Cloud Control 12c Release 4

Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.

Starting Oracle Management Server...

Starting WebTier...

WebTier Successfully Started

Oracle Management Server Successfully Started

Oracle Management Server is Up

oraem@s11node2:~/.env$



At this point, we have proven that the OMS can be started successfully on the second node after failing over oem-vip.

The next step is to automate the startup of the OMS as part of the failover procedure, as sketched below.
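As a preview, Clusterware can manage the OMS itself through a small action script implementing start/stop/check/clean. The sketch below is illustrative only; the script path and the environment file location are assumptions based on this article's layout, not the final configuration.

>>>File: oms_action.sh

#!/bin/sh
# Hypothetical Clusterware action script for the OMS.
# Clusterware invokes it with start|stop|check|clean as $1.
ORAENV=/export/home/oraem/.env/oraenv_oem.sh

case "$1" in
  start) su - oraem -c ". $ORAENV >/dev/null; emctl start oms" ;;
  stop)  su - oraem -c ". $ORAENV >/dev/null; emctl stop oms -all" ;;
  check) su - oraem -c ". $ORAENV >/dev/null; emctl status oms" | grep -q "is Up"
         exit $? ;;
  clean) su - oraem -c ". $ORAENV >/dev/null; emctl stop oms -all -force" ;;
esac
exit 0


The script would then be registered as a cluster resource with hard dependencies on omsvip, so that relocating the VIP also stops and starts the OMS along with it:


crsctl add resource oms -type cluster_resource \
  -attr "ACTION_SCRIPT=/oem/app/oraem/scripts/oms_action.sh,CHECK_INTERVAL=60,START_DEPENDENCIES=hard(omsvip),STOP_DEPENDENCIES=hard(omsvip)"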