Multiple Standby Databases Dataguard Oracle

We will create two standby databases from a single RAC primary database; 11gR2 allows up to 30 standby databases. The most important part is getting the Data Guard parameters right, the rest is a normal Data Guard creation. The important parameters are:

db_file_name_convert

log_archive_config

log_archive_dest_1

log_archive_dest_2

log_archive_dest_3

LOG_ARCHIVE_DEST_STATE_1

LOG_ARCHIVE_DEST_STATE_2

LOG_ARCHIVE_DEST_STATE_3

log_file_name_convert

remote_login_passwordfile

STANDBY_FILE_MANAGEMENT

fal_client

fal_server

sec_case_sensitive_logon    (if on 11gR2)

All databases are running on Grid + ASM 11.2.0.3.4 with a RAC primary and RAC standbys. Note that the procedure is identical for a single-instance setup; the only RAC-specific step is registering the standby database and instances in the OCR via the srvctl command.

We will call our primary database SWX and our PHYSICAL STANDBY databases SWXSTANDBY and SWXSTANDBY2. We will use the RMAN Duplicate from Active Database feature of 11gR2. If your standby databases are in different data centers and the bandwidth isn't sufficient, you can use the traditional backup-based RMAN duplicate instead.

1. Create the tnsnames.ora entries on the primary database server. Use the RDBMS owner's tnsnames.ora, not the Grid Infrastructure owner's.

SWX =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = *.*.*.213)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = swx1)(UR=A)
)
)

SWXSTANDBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = *.*.*.219)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = swxstandby)(UR=A)
)
)

SWXSTANDBY2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST =  *.*.*.224)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = swxstandby2)(UR=A)
)
)

2. Add a static SID entry to the listener.ora as the Grid Infrastructure user

SID_LIST_EASYORADBA_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=swx1)(GLOBAL_DBNAME=swx)(ORACLE_HOME=/grid/app/11.2.0/grid/)))

3. Create a pfile and add the below Data Guard parameters to it

#########################STANDBY PARAMETERS########################
*.DB_UNIQUE_NAME='swx'
*.db_file_name_convert='+DATA/SWXSTANDBY/','+DATA/SWX/','+DATA/SWXSTANDBY2/','+DATA/SWX/','+FRA/SWXSTANDBY/','+FRA/SWX/','+FRA/SWXSTANDBY2/','+FRA/SWX/'
*.log_archive_config='dg_config=(swxstandby2,swxstandby,swx)'
*.log_archive_dest_1='LOCATION=+FRA valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=swx'
*.log_archive_dest_2='service=swxstandby LGWR ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=swxstandby'
*.log_archive_dest_3='service=swxstandby2 LGWR ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=swxstandby2'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_3='ENABLE'
*.LOG_ARCHIVE_FORMAT='log%t_%s_%r.arc'
*.log_file_name_convert='+DATA/SWXSTANDBY/','+DATA/SWX/','+DATA/SWXSTANDBY2/','+DATA/SWX/','+FRA/SWXSTANDBY/','+FRA/SWX/','+FRA/SWXSTANDBY2/','+FRA/SWX/'
*.remote_login_passwordfile='exclusive'
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.fal_client='SWX'
*.fal_server='SWXSTANDBY2','SWXSTANDBY'
*.sec_case_sensitive_logon=FALSE
########################################
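The LOG_ARCHIVE_FORMAT placeholders expand to the redo thread (%t), the log sequence number (%s), and the resetlogs ID (%r). As an illustration only (the values below are made up), the generated file names look like this:

```shell
# Illustrative only: how the LOG_ARCHIVE_FORMAT 'log%t_%s_%r.arc' expands.
# %t = redo thread, %s = log sequence, %r = resetlogs ID.
thread=1; sequence=2045; resetlogs_id=789123456
printf 'log%s_%s_%s.arc\n' "$thread" "$sequence" "$resetlogs_id"
```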

4. Restart the database with the new pfile, create an spfile from it, and bounce the database again
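As a sketch, the restart sequence might look like this in SQL*Plus (the pfile path below is an assumption for illustration; on RAC you would typically place the spfile in ASM and bounce each instance):

```shell
# Sketch only: restart the primary with the new pfile, then persist it as an spfile.
# /home/oracle/initswx1_dg.ora is an assumed path for the pfile created in step 3.
sqlplus / as sysdba <<'EOF'
shutdown immediate;
startup pfile='/home/oracle/initswx1_dg.ora';
create spfile from pfile='/home/oracle/initswx1_dg.ora';
shutdown immediate;
startup;
EOF
```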

5. Login to the First DR database server as Oracle user and create tnsnames.ora entry

SWX =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = *.*.*.213)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = swx)(UR=A)
)
)

SWXSTANDBY =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = *.*.*.218)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = swxstandby)(UR=A)
)
)

SWXSTANDBY2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = *.*.*.224)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = swxstandby2)(UR=A)
)
)

6. Add a SID listener entry to the listener.ora file as the Grid Infrastructure user.

SID_LIST_EASYORADBA_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=swx1)(GLOBAL_DBNAME=swx)(ORACLE_HOME=/grid/app/11.2.0/grid)))

7. Create a pfile from the one created earlier on the primary and adjust the Data Guard parameters as below.

#########################STANDBY DATABASE PARAMETERS########################
*.DB_UNIQUE_NAME='SWXSTANDBY'
*.db_file_name_convert='+DATA/SWX/','+DATA/SWXSTANDBY/','+DATA/SWXSTANDBY2/','+DATA/SWXSTANDBY/','+FRA/SWX/','+FRA/SWXSTANDBY/','+FRA/SWXSTANDBY2/','+FRA/SWXSTANDBY/'
*.log_archive_config='dg_config=(swxstandby2,swxstandby,swx)'
*.log_archive_dest_1='LOCATION=+FRA valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=swxstandby'
*.log_archive_dest_2='service=swx LGWR ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=swx'
*.log_archive_dest_3='service=swxstandby2 LGWR ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=swxstandby2'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_3='ENABLE'
*.LOG_ARCHIVE_FORMAT='log%t_%s_%r.arc'
*.log_file_name_convert='+DATA/SWX/','+DATA/SWXSTANDBY/','+DATA/SWXSTANDBY2/','+DATA/SWXSTANDBY/','+FRA/SWX/','+FRA/SWXSTANDBY/','+FRA/SWXSTANDBY2/','+FRA/SWXSTANDBY/'
*.remote_login_passwordfile='exclusive'
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.fal_client='SWXSTANDBY'
*.fal_server='SWX','SWXSTANDBY2'
*.sec_case_sensitive_logon=FALSE
########################################

8. Start the instance in NOMOUNT with the pfile and create an spfile from it

9. Clone the database from RMAN using active option

rman target sys/***@swx auxiliary sys/***@swxstandby

RMAN> duplicate target database for standby from active database nofilenamecheck dorecover;

10. Register the database and instance with srvctl command

srvctl add database -d SWXSTANDBY -o /oracle/app/oracle/product/11.2.0/dbhome_1 -c RAC -r PHYSICAL_STANDBY -a DATA,FRA
srvctl add instance -d SWXSTANDBY -i swx1 -n csftest-2
srvctl stop database -d SWXSTANDBY
sqlplus "/ as sysdba"
srvctl start database -d swxstandby
srvctl modify database -d swxstandby -s mount
srvctl config database -d swxstandby

## TO ENABLE ACTIVE DATA GUARD (OPEN THE STANDBY READ-ONLY) ##
srvctl start database -d swxstandby -o open
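Once the standby is mounted and managed recovery is running, a few quick queries confirm it is receiving and applying redo (a sketch; run on the standby):

```shell
# Sketch: sanity checks on the new standby.
sqlplus -s / as sysdba <<'EOF'
select database_role, open_mode from v$database;
select process, status, thread#, sequence# from v$managed_standby;
select max(sequence#) last_applied from v$archived_log where applied='YES';
EOF
```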

11. Login to the Second DR database server as Oracle user and create tnsnames.ora entry

SWX =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = *.*.*.213)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = swx)(UR=A)
)
)

SWXSTANDBY2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = *.*.*.15)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = SWXSTANDBY2)(UR=A)
)
)

SWXSTANDBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = *.*.*.218)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = SWXSTANDBY)(UR=A)
)
)

12. Create a listener.ora entry as grid user

SID_LIST_EASYORADBA_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=swx1)(GLOBAL_DBNAME=swx)(ORACLE_HOME=/grid/app/11.2.0/grid)))

13. Create a pfile from the one created earlier on the primary and adjust the Data Guard parameters as below.

#########################STANDBY DATABASE PARAMETERS########################
*.DB_UNIQUE_NAME='SWXSTANDBY2'
*.db_file_name_convert='+DATA/SWX/','+DATA/SWXSTANDBY2/','+DATA/SWXSTANDBY/','+DATA/SWXSTANDBY2/','+FRA/SWX/','+FRA/SWXSTANDBY2/','+FRA/SWXSTANDBY/','+FRA/SWXSTANDBY2/'
*.log_archive_config='dg_config=(swxstandby2,swxstandby,swx)'
*.log_archive_dest_1='LOCATION=+FRA valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=swxstandby2'
*.log_archive_dest_2='service=swx LGWR ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=swx'
*.log_archive_dest_3='service=swxstandby LGWR ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=swxstandby'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_3='ENABLE'
*.LOG_ARCHIVE_FORMAT='log%t_%s_%r.arc'
*.log_file_name_convert='+DATA/SWX/','+DATA/SWXSTANDBY2/','+DATA/SWXSTANDBY/','+DATA/SWXSTANDBY2/','+FRA/SWX/','+FRA/SWXSTANDBY2/','+FRA/SWXSTANDBY/','+FRA/SWXSTANDBY2/'
*.remote_login_passwordfile='exclusive'
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.fal_client='SWXSTANDBY2'
*.fal_server='SWX','SWXSTANDBY'
*.sec_case_sensitive_logon=FALSE
########################################

14. Start the instance in NOMOUNT with the pfile and create an spfile from it

15. Clone the database from RMAN using active option

rman target sys/***@swx auxiliary sys/***@swxstandby2

RMAN> duplicate target database for standby from active database nofilenamecheck dorecover;

16. Register the database and instance with srvctl command

srvctl add database -d swxstandby2 -o /oracle/app/oracle/product/11.2.0/dbhome_1 -c RAC -r PHYSICAL_STANDBY -n swx -a DATA,FRA
srvctl add instance -d swxstandby2 -i swx1 -n suncsftst
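With both standbys registered, you can confirm from the primary that redo is shipping to destinations 2 and 3 (a sketch; run on the primary):

```shell
# Sketch: verify both remote destinations are VALID and current on the primary.
sqlplus -s / as sysdba <<'EOF'
select dest_id, status, error from v$archive_dest_status where dest_id in (2,3);
select dest_id, max(sequence#) last_shipped
from v$archived_log where dest_id in (2,3) group by dest_id;
EOF
```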

Now that we have a fully functional RAC primary with two physical standbys, let us consider the two most important operations in Data Guard: switchover and failover.

————–

Switchover

—————-

No extra steps are necessary when performing a switchover. All bystander physical standby databases automatically apply redo data received from the new primary database.
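A minimal switchover to SWXSTANDBY might be sketched as below (11gR2 SQL*Plus syntax; on RAC, shut down all but one instance of each database first):

```shell
# Sketch: switchover from SWX to SWXSTANDBY.
# On the current primary (SWX):
sqlplus / as sysdba <<'EOF'
select switchover_status from v$database;
alter database commit to switchover to physical standby with session shutdown;
shutdown immediate;
startup mount;
EOF

# On the target standby (SWXSTANDBY):
sqlplus / as sysdba <<'EOF'
alter database commit to switchover to primary with session shutdown;
alter database open;
EOF
```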

————–

Failover

—————-

The steps for performing a failover to a physical standby database depend on the Redo Apply progress of the new primary database and any bystander physical standby databases at the time of the failover.

• If the new primary database has applied more redo than all of the bystander physical standby databases, no additional steps are required. Only the original primary database needs to be reinstated, using the steps documented in Oracle Data Guard Concepts and Administration [5], Section 12.4.1, "Flashing Back a Failed Primary Database into a Physical Standby Database."

• If any bystander physical standby database has applied more redo than the new primary database, then perform the following steps to reinstate the bystander physical standby:

SQL*Plus Physical/Physical Failover with Physical Bystander Ahead

1. Determine STANDBY_BECAME_PRIMARY_SCN on the new primary.

SQL> select standby_became_primary_scn from v$database;

2. On the bystander physical standby, flash back to the STANDBY_BECAME_PRIMARY_SCN obtained from the new primary database.

SQL> flashback database to scn <STANDBY_BECAME_PRIMARY_SCN>;

3. On the bystander physical standby, delete divergent archived redo logs created at the time of, or after, the failover.

RMAN> delete archivelog from scn <STANDBY_BECAME_PRIMARY_SCN>;

4. On the new primary database, enable the redo transport destination for this bystander physical standby and archive the current redo log.

SQL> alter system set log_archive_dest_state_2='ENABLE';
SQL> alter system archive log current;

5. After the logs have been received by the bystander physical standby, start Redo Apply on it.

SQL> alter database recover managed standby database using current logfile through all switchover disconnect;

The bystander standby database is now reinstated.

For more information refer to this excellent Oracle document.

http://www.oracle.com/technetwork/database/features/availability/maa10gr2multiplestandbybp-1-131937.pdf

An interesting case study on multiple standby databases; apparently Apple uses them for one of their large databases.

http://www.oracle.com/technetwork/database/features/availability/311400-134359.pdf


OPatch for 11gR2(11.2.0.3) RAC Database+Grid on Sparc 64-Bit Solaris10

1. Install Latest OPatch clients from support.oracle.com

– As Grid User —

unzip p6880880_112000_SOLARIS64.zip -d $ORACLE_HOME

##Replace All
– As Oracle User —

unzip p6880880_112000_SOLARIS64.zip -d $ORACLE_HOME
##Replace All

 
2. Set PATH variable for OPatch for both Grid and Oracle user profiles

–oracle–

vi .profile

PATH=/oracle/app/oracle/product/11.2.0/dbhome_1/bin:/oracle/app/oracle/product/11.2.0/dbhome_1/OPatch:$PATH

 

–grid–

vi .profile

PATH=/grid/app/11.2.0/grid/bin:/grid/app/11.2.0/grid/OPatch/:$PATH

 

3. Validate Oracle OPatch inventory

— grid

opatch lsinventory -detail -oh $ORACLE_HOME
— Oracle

opatch lsinventory -detail -oh $ORACLE_HOME

 

4. Unzip the patch files as grid user in a shared location between both nodes

unzip p14275572_112030_<platform>.zip
Determine One-off Patch Conflict Detection and Resolution

$ opatch prereq CheckConflictAgainstOHWithDetail -ph <UNZIPPED_PATCH_LOCATION>/14275605 –oh $GRID_HOME
$ opatch prereq CheckConflictAgainstOHWithDetail -ph <UNZIPPED_PATCH_LOCATION>/14275605 –oh $DB_HOME
$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/14275572 –oh $GRID_HOME
$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/14275572/custom/server/14275572 –oh $DB_HOME

 

5. OPatching where GI Home Is Not Shared and ACFS File System Is Not Configured and Database Homes Are Not Shared

As Grid user create ocm.rsp file

cd /grid/app/11.2.0/grid/OPatch/ocm/bin

./emocmrsp

 

As root user

export PATH=$PATH:/grid/app/11.2.0/grid/OPatch
opatch auto /var/tmp/ -ocmrf /grid/app/11.2.0/grid/OPatch/ocm/bin/ocm.rsp

 

Executing /grid/app/11.2.0/grid/perl/bin/perl /grid/app/11.2.0/grid/OPatch/crs/patch11203.pl -patchdir /var -patchn tmp -ocmrf /grid/app/11.2.0/grid/OPatch/ocm/bin/ocm.rsp -paramfile /grid/app/11.2.0/grid/crs/install/crsconfig_params
/grid/app/11.2.0/grid/crs/install/crsconfig_params
/grid/app/11.2.0/grid/crs/install/s_crsconfig_defs

This is the main log file: /grid/app/11.2.0/grid/cfgtoollogs/opatchauto2012-12-02_13-56-59.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system: /grid/app/11.2.0/grid/cfgtoollogs/opatchauto2012-12-02_13-56-59.report.log

2012-12-02 13:56:59: Starting Clusterware Patch Setup
Using configuration parameter file: /grid/app/11.2.0/grid/crs/install/crsconfig_params
patch /var/tmp/14275572/custom/server/14275572 apply successful for home /oracle/app/oracle/product/11.2.0/dbhome_1
patch /var/tmp/14275605 apply successful for home /oracle/app/oracle/product/11.2.0/dbhome_1
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.crsd’ on ‘csftest-1’
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.LISTENER_SCAN1.lsnr’ on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.cvu’ on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.OCR.dg’ on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.registry.acfs’ on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.FRA.dg’ on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.oc4j’ on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.LISTENER.lsnr’ on ‘csftest-1’
CRS-2677: Stop of ‘ora.cvu’ on ‘csftest-1’ succeeded
CRS-2677: Stop of ‘ora.LISTENER_SCAN1.lsnr’ on ‘csftest-1’ succeeded
CRS-2673: Attempting to stop ‘ora.scan1.vip’ on ‘csftest-1’
CRS-2677: Stop of ‘ora.scan1.vip’ on ‘csftest-1’ succeeded
CRS-2677: Stop of ‘ora.LISTENER.lsnr’ on ‘csftest-1’ succeeded
CRS-2673: Attempting to stop ‘ora.csftest-1.vip’ on ‘csftest-1’
CRS-2677: Stop of ‘ora.csftest-1.vip’ on ‘csftest-1’ succeeded
CRS-2677: Stop of ‘ora.DATA.dg’ on ‘csftest-1’ succeeded
CRS-2677: Stop of ‘ora.FRA.dg’ on ‘csftest-1’ succeeded
CRS-2677: Stop of ‘ora.registry.acfs’ on ‘csftest-1’ succeeded
CRS-2677: Stop of ‘ora.oc4j’ on ‘csftest-1’ succeeded
CRS-2677: Stop of ‘ora.OCR.dg’ on ‘csftest-1’ succeeded
CRS-2673: Attempting to stop ‘ora.asm’ on ‘csftest-1’
CRS-2677: Stop of ‘ora.asm’ on ‘csftest-1’ succeeded
CRS-2673: Attempting to stop ‘ora.ons’ on ‘csftest-1’
CRS-2677: Stop of ‘ora.ons’ on ‘csftest-1’ succeeded
CRS-2673: Attempting to stop ‘ora.net1.network’ on ‘csftest-1’
CRS-2677: Stop of ‘ora.net1.network’ on ‘csftest-1’ succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘csftest-1’ has completed
CRS-2677: Stop of ‘ora.crsd’ on ‘csftest-1’ succeeded
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.asm’ on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘csftest-1’
CRS-2673: Attempting to stop ‘ora.drivers.acfs’ on ‘csftest-1’
CRS-2677: Stop of ‘ora.evmd’ on ‘csftest-1’ succeeded
CRS-2677: Stop of ‘ora.mdnsd’ on ‘csftest-1’ succeeded
CRS-2677: Stop of ‘ora.drivers.acfs’ on ‘csftest-1’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘csftest-1’ succeeded
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘csftest-1’
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘csftest-1’ succeeded
CRS-2677: Stop of ‘ora.ctssd’ on ‘csftest-1’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘csftest-1’
CRS-2677: Stop of ‘ora.cssd’ on ‘csftest-1’ succeeded
CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘csftest-1’
CRS-2677: Stop of ‘ora.gipcd’ on ‘csftest-1’ succeeded
CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘csftest-1’
CRS-2677: Stop of ‘ora.gpnpd’ on ‘csftest-1’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘csftest-1’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
The path “/grid/app/11.2.0/grid/bin/acfsrepl_initializer” does not exist
The path “/grid/app/11.2.0/grid/bin/acfsrepl_apply” does not exist
The path “/grid/app/11.2.0/grid/bin/acfsrepl_apply.bin” does not exist
Successfully unlock /grid/app/11.2.0/grid

patch /var/tmp/14275572 apply successful for home /grid/app/11.2.0/grid
patch /var/tmp/14275605 apply successful for home /grid/app/11.2.0/grid
CRS-4123: Oracle High Availability Services has been started.

 

 
6. Patch Post-Installation Instructions
6.1 Apply conflict resolution patches if any

6.2 Load Modified SQL files into the database

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT
Check for Errors in Log –> /oracle/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/catbundle/catbundle_PSU_SWX_APPLY_2012Dec02_14_57_04.log
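You can also confirm the PSU was registered in the database (a sketch):

```shell
# Sketch: list the patch bundles recorded in the database after catbundle.sql.
sqlplus -s / as sysdba <<'EOF'
set linesize 200
select action_time, action, version, comments
from dba_registry_history order by action_time;
EOF
```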
6.3 If using RMAN catalog then Upgrade Oracle Recovery Manager Catalog

$ rman catalog username/password@alias
RMAN> UPGRADE CATALOG;
6.4 Patch Post-Installation Instructions for Databases Created or Upgraded after Installation of Patch in the Oracle Home

These instructions are for a database that is created or upgraded after the installation of the patch.

You must execute step 6.2 "Loading Modified SQL Files into the Database" for any new database only if it was created by any of the following methods:

Using DBCA (Database Configuration Assistant) to select a sample database (General, Data Warehouse, Transaction Processing)

Using a script that was created by DBCA that creates a database from a sample database

There are no actions required for databases that have been upgraded.

 

7. Check if patches applied and compare the output with the one you ran before
$ opatch lsinventory

 

8. Rollback OPatch for both Non-Shared GI and DBMS homes

As Grid user

opatch auto /var/tmp/ -rollback -ocmrf /grid/app/11.2.0/grid/OPatch/ocm/bin/ocm.rsp
For each database instance running out of the ORACLE_HOME, connect to the database using SQL*Plus as SYSDBA and run the rollback script as follows:
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle_PSU_<database SID PREFIX>_ROLLBACK.sql
SQL> QUIT
Check the log file for any errors. The log file is found in $ORACLE_BASE/cfgtoollogs/catbundle and is named catbundle_PSU_<database SID>_ROLLBACK_<TIMESTAMP>.log where TIMESTAMP is of the form YYYYMMMDD_HH_MM_SS.
Ensure that you verify the Oracle Inventory and compare the output with the one you ran before
$ opatch lsinventory

 

 

Create Virtual Interface Solaris 10

Occasionally you need to create a quick one-node RAC setup for testing, and one of the requirements is a private virtual interface. Virtual interfaces allow a single Ethernet interface to listen on additional IP addresses. You can set this up quickly in Solaris 10 so that the Grid Infrastructure setup can see the private interconnect, as shown below.

After adding an entry for the private interconnect in the /etc/hosts file you can proceed.

Given an Ethernet interface bge0 (use ifconfig -a to identify the names of your interfaces), you can create a subinterface called bge0:1 with the following command:


# dladm show-dev
bge0 link: up speed: 1000 Mbps duplex: full
bge1 link: unknown speed: 0 Mbps duplex: unknown
bge2 link: unknown speed: 0 Mbps duplex: unknown
bge3 link: unknown speed: 0 Mbps duplex: unknown

# dladm show-link
bge0 type: non-vlan mtu: 1500 device: bge0
bge1 type: non-vlan mtu: 1500 device: bge1
bge2 type: non-vlan mtu: 1500 device: bge2
bge3 type: non-vlan mtu: 1500 device: bge3

ifconfig bge0:1 plumb

You can set the IP address of the interface to 10.90.1.102 and turn on the interface with the following command:

ifconfig bge0:1 10.90.1.102 up

ifconfig -a

lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
bge0: flags=1000843 mtu 1500 index 2
inet 172.*.*.* netmask ffffff00 broadcast 172.*.*.*
ether 0:14:4f:95:e1:38

Unless you do some additional nonstandard things in your network, all of the subinterfaces on a physical interface need to be in the same subnet.

To make the virtual interface persist following a reboot, you can add the IP address or hostname from /etc/hosts to the file /etc/hostname.bge0:1

more /etc/hosts

#
# Internet host table
#
::1 localhost

10.90.1.102 hostname-priv

vi /etc/hostname.bge0:1

Add the entry hostname-priv from the hosts file to /etc/hostname.bge0:1. The virtual interface will now persist across reboots.

Setting up user profiles for Grid and Oracle user : Oracle RAC 11gR2 on Solaris 10

Below are the Solaris user profiles for the "grid" and "oracle" users, assuming you have followed Oracle's official documentation for installing Oracle RAC 11gR2. It is best practice to install Oracle Clusterware as the grid user and the Oracle database as the oracle user. It takes a little time to get used to this setup, especially for RAC users coming from 10g, but once you get the hang of it, it works like a charm.

FYI, the below user profiles have been set on Solaris 10.

___________________________________________

GRID USER PROFILE
___________________________________________

grid:x:100:100::/export/home/grid:/usr/bin/bash
oracle:x:101:100::/export/home/oracle:/usr/bin/bash

cd /export/home/grid

root@swxracnode1 # more .bash_profile

## If .bash_profile doesn't exist, create it with permissions 775 and owner grid:oinstall

ORACLE_HOME=/u02/11.2.0/grid
GRID_HOME=/u02/11.2.0/grid
ORACLE_SID=+ASM1
export ORACLE_HOME ORACLE_SID GRID_HOME
PATH=/u02/11.2.0/grid/bin:$PATH

. .bash_profile

___________________________________________

ORACLE USER PROFILE
___________________________________________

grid:x:100:100::/export/home/grid:/usr/bin/bash
oracle:x:101:100::/export/home/oracle:/usr/bin/bash

cd /export/home/oracle

root@swxracnode1 # more .bash_profile

## If .bash_profile doesn't exist, create it with permissions 775 and owner oracle:oinstall

ORACLE_HOME='/u01/app/oracle/product/11.2.0/db_1'
ORACLE_SID='swx1'
export ORACLE_HOME ORACLE_SID
PATH=$PATH:$ORACLE_HOME/bin
export PATH
ORACLE_UNQNAME='swx'
export ORACLE_UNQNAME

Installing 11gR2 Real Application Clusters on Oracle Enterprise Linux 4 x86-64 (64 bit)

Installing Oracle Grid Infrastructure

This document provides a step-by-step guide to installing Oracle 11gR2 Real Application Clusters on Oracle Enterprise Linux 4, x86-64 (64-bit).
Keep in mind this is a test installation and hence does not follow some of the best practices for installing Oracle Real Application Clusters (e.g. use a separate OS user for Oracle Grid Infrastructure, have at least 3 voting disks, etc.).

Important Documents

Grid infrastructure installation guide for Linux
Real Application Clusters Installation guide for Linux and Unix
Clusterware Administration and Deployment Guide
Real Application Clusters Administration and Deployment Guide

Some new concepts in 11gR2 Rac


Oracle Clusterware and ASM are now installed into the same Oracle home; this is called the Grid Infrastructure install.

Raw devices are no longer supported for use for anything (Read oracle cluster registry, voting disk, asm disks), for new installs.

OCR and Voting disk can now be stored in ASM, or a certified cluster file system.

The redundancy level of your ASM diskgroup (That you choose to place voting disk on) determines the number of voting disks you can have.
You can place

  • One voting disk on an ASM diskgroup configured as external redundancy
  • Three voting disks on an ASM diskgroup configured as normal redundancy
  • Five voting disks on an ASM diskgroup configured as high redundancy


The contents of the voting disks are automatically backed up into the OCR

ACFS (ASM Cluster File System) is only supported on Oracle Enterprise Linux 5 (and RHEL5), not on OEL4.
There is a new service called Cluster Time Synchronization Service that can keep the clocks on all the servers in the cluster synchronized (in case you don't have Network Time Protocol (NTP) configured).

Single Client Access Name (SCAN) is a hostname in the DNS server that resolves to 3 (or at least one) IP addresses in your public network. This hostname is to be used by client applications to connect to the database (as opposed to the VIP hostnames you were using in 10g and 11gR1). SCAN provides location independence for client connections to the database, and makes node additions and removals transparent to the client application (meaning you don't have to edit your tnsnames.ora entries every time you add or remove a node from the cluster).
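Before the install, it is worth checking that the SCAN actually resolves as expected (the SCAN name below is a placeholder; substitute your own):

```shell
# Sketch: the SCAN should resolve to up to three addresses in round-robin order.
nslookup cluster-scan.example.com
# Once the cluster is up, the configured SCAN can be shown with:
# srvctl config scan
```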

Oracle Grid Naming Service (GNS), provides a mechanism to make the allocation and removal of VIP addresses a dynamic process (Using dynamic Ip addresses).

Intelligent Platform Management Interface (IPMI) integration provides a new mechanism to fence servers in the cluster when a server is not responding.

The installer can now check the O/S requirements, report on the requirements that are not met, and give you fixup scripts to fix some of them (like setting kernel parameters).

The installer can also help you setup SSH between the cluster nodes.

There is a new deinstall utility that cleans up an existing or failed install.

There is a new Instantaneous Problem Detection OS tool; this tool is the next generation of OSWatcher.

And the list goes on and on.

Listed below are some of the top hardware and software requirements for the install. Please refer to the grid infrastructure install guide for all the pre-requisites.

Hardware requirements

Please refer to the install guide for all the pre-requisites.

One or more servers
Shared Disk storage (SAN, NAS)
GigE or higher network switch (For the private interconnect)
At least one GigE or higher network interface card for the private interconnect
At least one network interface card for the public network
IP address requirements

  • One SCAN name (That resolves to 3 ip addresses) for the cluster
  • For each Node
    • 1 public IP address
    • 1 private IP address
    • 1 VIP IP address
  • The SCAN, Public IP addresses and the VIP should be in the same subnet.


Operating System requirements

Among other requirements (please see the install documents for the full list):

RHEL4 (or OEL4), update 7 or higher
RHEL5 (or OEL5), update 2 or higher

Software requirements list for x86-64 linux platforms
Kernel parameters

If you are part of the Red Hat Network or the Oracle Unbreakable Linux Network, then you can get the oracle-validated rpm. This rpm sets up all (or, in my case, most) of the required RPMs and kernel parameters for you. You can use up2date to install this package. This really simplifies the setup steps you need to perform.

Set up SSH between the cluster nodes.

Set up the NTP daemon for clock synchronization on all the nodes. Remember to use the -x switch, or cluvfy will always call this out and declare failure.
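On OEL/RHEL the -x flag is typically added in /etc/sysconfig/ntpd (a sketch; the exact OPTIONS line on your system may differ):

```shell
# Sketch: enable slewing so cluvfy's NTP check passes.
# /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# then restart the daemon:
# service ntpd restart
```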

Configure ASMLIB

  • Install the asm packages oracleasm-support oracleasmlib oracleasm
  • Run oracleasm configure on each node to configure asmlib
  • Run oracleasm createdisk to create the needed ASM disks from one node
  • Run oracleasm scandisks to mount the disks on all the other nodes.
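The ASMLIB steps above can be sketched as follows (the device path and disk name are examples only; run as root):

```shell
# Sketch: configure ASMLIB and present a disk to all nodes.
oracleasm configure -i                   # answer the prompts; repeat on every node
oracleasm createdisk DATA1 /dev/sdb1     # run on one node only
oracleasm scandisks                      # run on the remaining nodes
oracleasm listdisks                      # verify DATA1 is visible
```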

Download Software for Installation

Download Oracle Grid Infrastructure Software
Download Oracle Database  Software – Part 1
Download Oracle Database  Software – Part 2

Grid Infrastructure Installation

Unzip the grid infrastructure software into a staging directory on one of the servers.
Log in as the oracle user.
cd <software stage home>/grid
./runInstaller


In the screen above, choose to “Install and configure grid infrastructure for a cluster”, click Next.


Choose a “Typical” installation, Click Next.

The above screen will be displayed with a default scan name, and just the details regarding the node from which you are running the installer.
Click on the Add button and add the hostname and virtual ip name for the rest of the nodes in the cluster.
The SCAN name has to be a valid hostname set up in DNS which resolves to 3 IP addresses (at least 1) in a round-robin fashion.
Replace the default shown in the screen above with a valid SCAN you will be using.

If you have not configured ssh connectivity, you can click on “SSH connectivity” and configure ssh.

You can click on "Identify network interfaces" to check and/or change the interfaces being used for the public and private networks.


Click Ok to return to the Cluster Configuration screen.

Click Next

In the screen above, choose the location for Oracle Base, the Home directory for the Grid infrastructure software, the passwords for the ASM instance and the group to be used as the OSASM group. Click Next to continue.


In the screen above, choose the disk group name to use for the first ASM diskgroup created, and which disks to use. The ASM instance needs to be created so that the voting disk and cluster registry can be placed on ASM. In my case I had configured the disks using ASMLIB before I started the grid infrastructure install. The disks are created as RAID 10 on the array, so I chose external redundancy. This means that there will be only one voting disk created. Click Next to continue.

In the "Prerequisite checks" screen above, the installer runs through and checks all the prerequisites (kernel parameters, ssh, ntp, packages). It displays the prerequisites that are not being met. Some of these it is able to help us fix (like kernel parameters). For the ones it can fix, it will give you a script to run from /tmp as root.

In my case, although I had used the oracle-validated package, there were some packages missing (i386 versions of lib-aio, unixODBC and unixODBC-devel) and some kernel parameters to be fixed.


I fixed them and ran the check again, which resulted in the above two remaining. Since I know I have enough swap space and I had not used the -x flag for the ntp setup, I chose to Ignore All and continue.


In the above screen, click finish to continue with the install.


Finally it will prompt you with the screen above to run the scripts as root.
Make sure that you run them one at a time, one node at a time.
Once the scripts are run to completion successfully, click on the Ok button.

In my case the cluster verification utility failed because of not using the -x switch for ntp. I chose to skip it; I'll fix it later.


In the screen above, click Close to exit the installer.

Your Oracle 11gR2 grid infrastructure installation is now complete.

Screen output from root.sh

First Node





Second Node