Category Archives: RAC

Cluster Name

/*How to display Oracle Cluster name*/

1. The command “cemutlo” provides the cluster name and version.

$GI_HOME/bin/cemutlo [-n] [-w]

[oracle@rac1 ~]$ cemutlo -n
rac-scan <----- This is the cluster name.

2. $CRS_HOME/cdata/<cluster_name> directory
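
For example, listing that directory shows a sub-directory named after the cluster (assuming $CRS_HOME points to the Grid Infrastructure home; in this environment the OCR backup paths shown in later posts, /u01/app/11.2.0/grid/cdata/rac-scan/..., confirm the cluster name rac-scan):

[oracle@rac1 ~]$ ls $CRS_HOME/cdata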

3. ocrdump
Running ocrdump creates a text file called OCRDUMPFILE in the current directory. Open that file and look for this entry:

[SYSTEM.css.clustername]
ORATEXT : crs_cluster

In this case, “crs_cluster” is the cluster name.
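
A minimal sketch of this approach (run as root; the dump file path here is just an example):

[root@rac1 ~]# ocrdump /tmp/ocrdump.txt
[root@rac1 ~]# grep -A 1 'SYSTEM.css.clustername' /tmp/ocrdump.txt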

4. gpnptool get
Search the output for the keyword “ClusterName”.
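
Since the full gpnp profile is verbose XML, it is easier to filter it (a sketch; assumes GNU grep on Linux):

[oracle@rac1 ~]$ gpnptool get 2>/dev/null | grep -o 'ClusterName="[^"]*"'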

5. ASM SPfile location (the SPfile path contains the cluster name)
[root@rac1 ]# gpnptool getpval -asm_spf (or) SQL> show parameter spfile 
+DATA/<clusterName>/asmparameterfile/registry.253.783619900
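
An alternative quick check (assuming the 11.2 ASM instance is up) is asmcmd, which prints the same SPfile path containing the cluster name:

[oracle@rac1 ~]$ asmcmd spget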

Note: We cannot change the cluster name. The only way to do that is to reinstall the Clusterware.

 

Move/Relocate OCR

How to Move/Relocate OCR from +DATA to +VOTE diskgroup

Contents
___________________________________________________________________________________________________________________________________

1. Verify Available DiskGroups
2. Verify Current OCR location
3. Add OCR to DiskGroup +VOTE
4. Delete the old OCR location
___________________________________________________________________________________________________________________________________


1. Verify Available DiskGroups

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576      2940     2110              980             565              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576      7295       13                0              13              0             N  DATA1/
MOUNTED  EXTERN  N         512   4096  1048576      1019      892                0             892              0             Y  VOTE/
ASMCMD>


2. Verify Current OCR location

[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4280
         Available space (kbytes) :     257840
         ID                       : 1037097601
         Device/File Name         :      +DATA <--------
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

[oracle@rac1 ~]$


3. Add OCR to DiskGroup +VOTE

As root user

[root@rac1 ~]# which ocrconfig
/u01/app/11.2.0/grid/bin/ocrconfig
[root@rac1 ~]# ocrconfig -add +VOTE
[root@rac1 ~]#
[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4280
         Available space (kbytes) :     257840
         ID                       : 1037097601
         Device/File Name         :      +DATA <-----------
                                    Device/File integrity check succeeded
         Device/File Name         :      +VOTE <-----------
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 ~]#


4. Delete the old OCR location

As root user

[root@rac1 ~]# ocrconfig -delete +DATA
[root@rac1 ~]#
[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4280
         Available space (kbytes) :     257840
         ID                       : 1037097601
         Device/File Name         :      +VOTE <--------
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 ~]#
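
Optionally, the Cluster Verification Utility can be used to cross-check OCR integrity on all nodes after the move (a sketch; run as the GI home owner):

[oracle@rac1 ~]$ cluvfy comp ocr -n all -verbose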

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

OSWatcher

How to Configure OSWatcher

Note: You have to follow the same steps to configure OSW in all remaining nodes in the cluster.

Contents
_________________________________________________________________________________________________________________________

1. Download OSWatcher
2. Install OSWatcher
3. Create file private.net
4. Start OS Watcher
5. Verify OSW Status
6. Stop OSWatcher
_________________________________________________________________________________________________________________________


Step 1: Download OSWatcher

OSWatcher (Includes: [Video]) (Doc ID 301137.1)

oswbb733.tar


Step 2: Install OSWatcher

Move the OSW tar file to the target location where you want to install OSW, then untar it:

[root@rac1 share]# tar xvf oswbb733.tar
oswbb/
oswbb/docs/
oswbb/docs/The_Analyzer/
oswbb/docs/The_Analyzer/OSWatcherAnalyzerOverview.pdf
oswbb/docs/The_Analyzer/oswbbaUserGuide.pdf
oswbb/docs/The_Analyzer/oswbba_README.txt
oswbb/docs/OSWatcher/
oswbb/docs/OSWatcher/oswbb_README.txt
oswbb/docs/OSWatcher/OSWatcherUserGuide.pdf
oswbb/Exampleprivate.net
oswbb/nfssub.sh
oswbb/stopOSWbb.sh
oswbb/call_du.sh
oswbb/iosub.sh
oswbb/OSWatcherFM.sh
oswbb/ifconfigsub.sh
oswbb/ltop.sh
oswbb/mpsub.sh
oswbb/call_uptime.sh
oswbb/psmemsub.sh
oswbb/tar_up_partial_archive.sh
oswbb/oswnet.sh
oswbb/vmsub.sh
oswbb/call_sar.sh
oswbb/oswib.sh
oswbb/startOSWbb.sh
oswbb/Example_extras.txt
oswbb/oswsub.sh
oswbb/oswbba.jar
oswbb/OSWatcher.sh
oswbb/tarupfiles.sh
oswbb/xtop.sh
oswbb/src/
oswbb/src/Thumbs.db
oswbb/src/OSW_profile.htm
oswbb/src/tombody.gif
oswbb/src/missing_graphic.gif
oswbb/src/coe_logo.gif
oswbb/src/watch.gif
oswbb/src/oswbba_input.txt
oswbb/oswrds.sh
[root@rac1 share]#


3. Create file private.net for monitoring the private interconnect

OS Watcher User Guide (Doc ID 301137.1)
Note: By default, private interconnect statistics are not collected by OSW; you have to set this up manually as described in the document. If you open the OSW user guide referenced in the note above, the subsection 'Setting up OSW' explains clearly how to set up the private.net statistics.

vi private.net     <-- add below entries and then save and exit

#Linux Example
###########################################
echo "zzz ***"`date`
traceroute -r -F rac1-priv.rajasekhar.com
traceroute -r -F rac2-priv.rajasekhar.com
############################################
rm locks/lock.file

[root@rac1 oswbb]# cat private.net
#Linux Example
###########################################
echo "zzz ***"`date`
traceroute -r -F rac1-priv.rajasekhar.com
traceroute -r -F rac2-priv.rajasekhar.com
############################################
rm locks/lock.file
[root@rac1 oswbb]#

[root@rac1 oswbb]# chown -R oracle:oinstall private.net
[root@rac1 oswbb]# chmod -R 755 private.net
[root@rac1 oswbb]# ls -ltr private.net
-rwxr-xr-x 1 oracle oinstall 228 Aug 12 02:04 private.net
[root@rac1 oswbb]#


4. Start OS Watcher

Example 1: This would start the tool and collect data at default 30 second intervals and log the last 48 hours of data to archive files.

./startOSWbb.sh 

Example 2: This would start the tool and collect data at 60 second intervals and log the last 10 hours of data to archive files and automatically compress the files.
./startOSWbb.sh 60 10 gzip

Example 3: This would start the tool and collect data at 60 second intervals and log the last 10 hours of data to archive files, compress the files and set the archive directory to a non-default location.

./startOSWbb.sh 60 10 gzip /u02/tools/oswbb/archive

Example 4: This would start the tool and collect data at 60 second intervals and log the last 48 hours of data to archive files, NOT compress the files and set the archive directory to a non-default location.

./startOSWbb.sh 60 48 NONE /u02/tools/oswbb/archive

Example 5: This would start the tool, put the process in the background, enable the tool to continue running after the session has been terminated, collect data at 60 second intervals, and log the last 10 hours of data to archive files.

nohup ./startOSWbb.sh 60 10 &
As root user

[root@rac1 ~]# cd /u01/share/oswbb
[root@rac1 oswbb]# ls -ltr startOSWbb.sh
-rwxr-xr-x 1 oracle oinstall 2574 Feb 26 23:50 startOSWbb.sh
[root@rac1 oswbb]#
[root@rac1 oswbb]# nohup ./startOSWbb.sh 30 72 gzip & <--- Hit ENTER twice
[1] 28446
[root@rac1 oswbb]# nohup: appending output to `nohup.out'

[1]+  Done                    nohup ./startOSWbb.sh 30 72 gzip
[root@rac1 oswbb]#

Note: OSW will keep running until it is stopped (or the server goes down) and it retains only the last 72 hours of data in the archive; since the gzip option was specified, the archive files are compressed automatically.


5. Verify OSW Running

[root@rac1 archive]# ps -elf | grep OSWatcher  | grep -v grep
0 S root     28450     1  0  80   0 -  2213 wait   02:48 pts/2    00:00:00 /bin/sh ./OSWatcher.sh 30 72 gzip   <-- 30 Sec, 72 Hours, output gzip format 
0 S root     28499 28450  0  80   0 -  2179 wait   02:49 pts/2    00:00:00 /bin/sh ./OSWatcherFM.sh 72 /u01/share/oswbb/archive  <-- OSW output location
[root@rac1 archive]#

[root@rac1 ~]# ls -ltr /u01/share/oswbb/archive
total 40
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswvmstat
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswtop
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswslabinfo
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswps
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswprvtnet
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswnetstat
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswmpstat
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswmeminfo
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswiostat
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswifconfig
[root@rac1 ~]#

[root@rac1 ~]# cd /u01/share/oswbb/archive/oswprvtnet
[root@rac1 oswprvtnet]# ls -ltr
total 8
-rw-r--r-- 1 root root 4272 Aug 12 02:55 rac1.rajasekhar.com_prvtnet_15.08.12.0200.dat
[root@rac1 oswprvtnet]# tail -10 rac1.rajasekhar.com_prvtnet_15.08.12.0200.dat
zzz ***Wed Aug 12 02:55:15 IST 2015
traceroute to rac1-priv.rajasekhar.com (192.168.0.101), 30 hops max, 40 byte packets
 1  rac1-priv.rajasekhar.com (192.168.0.101)  0.023 ms  0.012 ms  0.005 ms
traceroute to rac2-priv.rajasekhar.com (192.168.0.102), 30 hops max, 40 byte packets
 1  rac2-priv.rajasekhar.com (192.168.0.102)  0.278 ms  0.185 ms  0.124 ms
zzz ***Wed Aug 12 02:55:45 IST 2015
traceroute to rac1-priv.rajasekhar.com (192.168.0.101), 30 hops max, 40 byte packets
 1  rac1-priv.rajasekhar.com (192.168.0.101)  0.022 ms  0.007 ms  0.005 ms
traceroute to rac2-priv.rajasekhar.com (192.168.0.102), 30 hops max, 40 byte packets
 1  rac2-priv.rajasekhar.com (192.168.0.102)  0.396 ms  0.310 ms  0.226 ms
[root@rac1 oswprvtnet]#


6. Stop OSWatcher

[root@rac1 oswbb]# pwd
/u01/share/oswbb
[root@rac1 oswbb]# ls -ltr stopOSWbb.sh
-rwxr-xr-x 1 oracle oinstall 558 Apr 17  2014 stopOSWbb.sh
[root@rac1 oswbb]#
[root@rac1 oswbb]# ./stopOSWbb.sh
[root@rac1 oswbb]#
[root@rac1 oswbb]# ps -ef | grep OSW <-- now OSW is not running
root     30248  2602  0 02:57 pts/2    00:00:00 grep OSW
[root@rac1 oswbb]#
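
The data collected in the archive directory can later be analyzed with the bundled analyzer, oswbba.jar (a sketch based on the analyzer guide extracted above; requires Java, and -i points to the archive directory):

[root@rac1 oswbb]# java -jar oswbba.jar -i /u01/share/oswbb/archive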

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

Reference:
OSWatcher (Includes: [Video]) (Doc ID 301137.1)

Restore loss of all VOTE disks

Restore Loss of All Vote Disks

Contents:
_________________________________________________________________________________________________________________

0. Environment
1. Current Status of OCR/VOTE DISK
2. Backup OCR
3. Simulate VOTE DISK corruption
4. Reboot both nodes in order to see corruption << This step is not mandatory
5. Restore loss of all Voting disk
            A. Stop CRS on all the nodes
            B. Start CRS in exclusive mode only
            C. Create New Diskgroup
            D. Restore/Move/Replace Votedisk
            E. Stop CRS on Node 1
            F. Start CRS on both nodes
6. Check Cluster Status

_________________________________________________________________________________________________________________


0. Environment

Two Node RAC 11.2.0.3
OS : RHEL5


1. Current Status of OCR/VOTE DISK.

[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4156
         Available space (kbytes) :     257964
         ID                       : 1037097601
         Device/File Name         :      +DATA  <<< OCR located in ASM diskgroup DATA.
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

[oracle@rac1 ~]$

[oracle@rac1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   7a14418b50a54f9dbfda2a6b97b4f620 (/dev/oracleasm/disks/DISK5) [VOTE]  <<<  voting disk /dev/oracleasm/disks/DISK5
Located 1 voting disk(s). <<<<
[oracle@rac1 ~]$

Note: The OCR and the voting disk are currently in two different diskgroups.
      OCR is in the DATA diskgroup.
      Voting disk /dev/oracleasm/disks/DISK5 is in the VOTE diskgroup.


2. Backup OCR

[root@rac1 ~]# ocrconfig -manualbackup

rac2     2015/06/24 03:08:27     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150624_030827.ocr

rac1     2015/06/23 05:46:12     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150623_054612.ocr

rac1     2015/06/23 02:39:07     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150623_023907.ocr

rac1     2015/06/19 23:38:03     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150619_233803.ocr
[root@rac1 ~]#

Note: With an OCR backup we can recover the voting disk in case of voting disk loss.
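
Both automatic and manual OCR backups can be listed at any time (as root):

[root@rac1 ~]# ocrconfig -showbackup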


3. Simulate VOTE DISK corruption

DISCLAIMER: The dd command given below is just for learning purposes and should only be used on testing systems. I will not take any responsibility of any consequences or loss of data caused by this command.

Corrupt the voting disk /dev/oracleasm/disks/DISK5

dd if=/dev/zero of=/dev/oracleasm/disks/DISK5 bs=4096 count=1000000

Why a block size of 4096 bytes? Because the ASM disk header sits in the first block of the first AU, and the ASM metadata block size is 4096 bytes (verified with kfed below).

[oracle@rac1 ~]$ kfed read /dev/oracleasm/disks/DISK5 | grep kfdhdb.blksize
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
[oracle@rac1 ~]$

[oracle@rac1 ~]$ kfed read /dev/oracleasm/disks/DISK5  <<<< KFED confirms that disk got corrupted.
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
7F999709F400 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

[oracle@rac1 ~]$

[root@rac1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   7a14418b50a54f9dbfda2a6b97b4f620 (/dev/oracleasm/disks/DISK5) [VOTE]  <<< Not sure why the status still shows ONLINE
Located 1 voting disk(s).
[root@rac1 ~]# 

The kfed read command failed with KFED-00322, confirming that the voting disk header is corrupted. I waited around an hour but somehow the CLUSTER DID NOT GO DOWN. I don't know why; I must be missing something here.

Please correct me if I am wrong. Let's bring everything down in order to see the effect of the corruption.

Note: I tried to stop CRS on both nodes at the same time; on node 2 CRS stopped, but node 1 rebooted while CRS was shutting down.

In any case, I rebooted both nodes.


4. Reboot both nodes in order to see corruption

After reboot cluster status on both nodes.

From RAC1
=========
[root@rac1 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  OFFLINE
ora.cluster_interconnect.haip
      1        ONLINE  OFFLINE
ora.crf
      1        ONLINE  ONLINE       rac1
ora.crsd
      1        ONLINE  OFFLINE
ora.cssd
      1        ONLINE  OFFLINE  <<<<<<
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1
ora.ctssd
      1        ONLINE  OFFLINE
ora.diskmon
      1        OFFLINE OFFLINE
ora.drivers.acfs
      1        ONLINE  OFFLINE
ora.evmd
      1        ONLINE  OFFLINE
ora.gipcd
      1        ONLINE  ONLINE       rac1
ora.gpnpd
      1        ONLINE  ONLINE       rac1
ora.mdnsd
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#

From RAC2
===========
[root@rac2 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  OFFLINE                               Instance Shutdown
ora.cluster_interconnect.haip
      1        ONLINE  OFFLINE
ora.crf
      1        ONLINE  ONLINE       rac2
ora.crsd
      1        ONLINE  OFFLINE
ora.cssd
      1        ONLINE  OFFLINE                               STARTING   <<<<< It will not start because "No voting files found"
ora.cssdmonitor
      1        ONLINE  ONLINE       rac2
ora.ctssd
      1        ONLINE  OFFLINE
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  OFFLINE
ora.gipcd
      1        ONLINE  ONLINE       rac2
ora.gpnpd
      1        ONLINE  ONLINE       rac2
ora.mdnsd
      1        ONLINE  ONLINE       rac2
[root@rac2 ~]#


alertrac1.log
==============
2015-06-25 04:25:07.002
[cssd(6313)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/rac1/cssd/ocssd.log
2015-06-25 04:25:22.291
[cssd(6313)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/rac1/cssd/ocssd.log

ocssd.log from RAC1
====================
2015-06-25 04:25:06.961: [   SKGFD][1093830976]OSS discovery with :/dev/oracleasm/disks*:
2015-06-25 04:25:06.961: [   SKGFD][1093830976]Handle 0x7fbfd8002e50 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK1:
2015-06-25 04:25:06.962: [   SKGFD][1093830976]Handle 0x7fbfd80ead10 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK2:
2015-06-25 04:25:06.962: [   SKGFD][1093830976]Handle 0x7fbfd80eb540 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK3:
2015-06-25 04:25:06.962: [   SKGFD][1093830976]Handle 0x7fbfd80e6240 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK4:
                            <<<<<<<<< DISK5 is missing.
2015-06-25 04:25:06.962: [   SKGFD][1093830976]Handle 0x7fbfd80e6a70 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK6:
2015-06-25 04:25:06.963: [   SKGFD][1093830976]Handle 0x7fbfd80c7d10 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK7:
..
2015-06-25 04:25:07.001: [    CSSD][1093830976]clssnmvDiskVerify: Successful discovery of 0 disks
2015-06-25 04:25:07.002: [    CSSD][1093830976]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
2015-06-25 04:25:07.002: [    CSSD][1093830976]clssnmvFindInitialConfigs: No voting files found
2015-06-25 04:25:07.002: [    CSSD][1093830976](:CSSNM00070:)clssnmCompleteInitVFDiscovery: Voting file not found. Retrying discovery in 15 seconds

alertrac2.log
==============
2015-06-25 04:25:06.999
[cssd(6539)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/rac2/cssd/ocssd.log
2015-06-25 04:25:22.279
[cssd(6539)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/rac2/cssd/ocssd.log

ocssd.log from RAC2
=====================
2015-06-25 04:25:06.573: [   SKGFD][1087797568]OSS discovery with :/dev/oracleasm/disks*:
2015-06-25 04:25:06.573: [   SKGFD][1087797568]Handle 0x19e8640 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK1:
2015-06-25 04:25:06.573: [   SKGFD][1087797568]Handle 0x1993310 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK2:
2015-06-25 04:25:06.574: [   SKGFD][1087797568]Handle 0x1a49550 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK3:
2015-06-25 04:25:06.574: [   SKGFD][1087797568]Handle 0x18aaa40 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK4:
                                                                          <<<<<<<< DISK5 is missing.
2015-06-25 04:25:06.575: [   SKGFD][1087797568]Handle 0x19f6e90 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK6:
2015-06-25 04:25:06.575: [   SKGFD][1087797568]Handle 0x196cbf0 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK7:
..
2015-06-25 04:25:06.999: [    CSSD][1087797568]clssnmvDiskVerify: Successful discovery of 0 disks <<<
2015-06-25 04:25:06.999: [    CSSD][1087797568]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
2015-06-25 04:25:06.999: [    CSSD][1087797568]clssnmvFindInitialConfigs: No voting files found <<<
2015-06-25 04:25:07.000: [    CSSD][1087797568](:CSSNM00070:)clssnmCompleteInitVFDiscovery: Voting file not found. Retrying discovery in 15 seconds


5. Restore loss of all Voting disk.


A. Stop CRS on all the nodes

From RAC1
==========
[root@rac1 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac1 ~]#

From RAC2
===========
[root@rac2 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac2 ~]#


B. Start CRS in exclusive mode only

From RAC1 as root user

Note: From 11.2.0.2 onwards we should include the -nocrs flag when starting CRS in exclusive mode.

[root@rac1 ~]# crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac1'
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2674: Start of 'ora.drivers.acfs' on 'rac1' failed
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2679: Attempting to clean 'ora.asm' on 'rac1'
CRS-2681: Clean of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
[root@rac1 ~]#

[oracle@rac1 ~]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1                     Started  <<
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac1
ora.crf
      1        OFFLINE OFFLINE
ora.crsd
      1        OFFLINE OFFLINE  <<<< CRS was started in exclusive mode (-nocrs), so CSSD and ASM are up but CRSD does not start
ora.cssd
      1        ONLINE  ONLINE       rac1   <<<
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1
ora.ctssd
      1        ONLINE  ONLINE       rac1                     ACTIVE:0
ora.diskmon
      1        OFFLINE OFFLINE
ora.drivers.acfs
      1        ONLINE  OFFLINE
ora.evmd
      1        OFFLINE OFFLINE
ora.gipcd
      1        ONLINE  ONLINE       rac1
ora.gpnpd
      1        ONLINE  ONLINE       rac1
ora.mdnsd
      1        ONLINE  ONLINE       rac1
[oracle@rac1 ~]$


SQL> select NAME, STATE, VOTING_FILES from v$asm_diskgroup;

NAME                           STATE       V
------------------------------ ----------- -
DATA1                          MOUNTED     N
DATA                           MOUNTED     N
                               <<<<< VOTE Diskgroup is missing in this output. 
SQL>

SQL> select NAME, PATH, STATE, VOTING_FILE from v$asm_disk where PATH='/dev/oracleasm/disks/DISK5';

no rows selected  << no output
SQL>

[oracle@rac1 ~]$ crsctl query css votedisk
Located 0 voting disk(s). <<<
[oracle@rac1 ~]$


C. Create New Diskgroup

Note: If you don't have a new disk right now but still want to resolve the issue, you can restore the voting disk to an existing ASM diskgroup; in that case you can skip this “Create New Diskgroup” step.

SQL> create diskgroup DATA2 external redundancy disk '/dev/oracleasm/disks/DISK6' attribute 'COMPATIBLE.ASM' = '11.2';

Diskgroup created.

SQL>


D. Restore/Move/Replace Votedisk.

Note: The voting disk will be restored from the OCR backup.

From Node 1 as GI HOME owner

[oracle@rac1 ~]$ crsctl replace votedisk +DATA2
Successful addition of voting disk 7ebe19bb115e4f51bfd96935eb1b92b7.
Successfully replaced voting disk group with +DATA2.
CRS-4266: Voting file(s) successfully replaced <<<
[oracle@rac1 ~]$
[oracle@rac1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   7ebe19bb115e4f51bfd96935eb1b92b7 (/dev/oracleasm/disks/DISK6) [DATA2] <<<
Located 1 voting disk(s).
[oracle@rac1 ~]$
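
To confirm the same thing at the ASM level, the v$asm_diskgroup query used earlier should now report VOTING_FILES = 'Y' for DATA2 (output not shown here):

SQL> select NAME, STATE, VOTING_FILES from v$asm_diskgroup;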


E. Stop CRS on Node 1

From RAC1
As root user

[root@rac1 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac1 ~]#


F. Start CRS on both nodes.

From RAC1
As root

[root@rac1 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac1 ~]#
[root@rac1 ~]#

From RAC2
As root

[root@rac2 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac2 ~]#


6. Check Cluster Status

[root@rac1 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online  <<<
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@rac1 ~]#

[root@rac1 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1                     Started
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac1
ora.crf
      1        ONLINE  ONLINE       rac1
ora.crsd
      1        ONLINE  ONLINE       rac1  <<<
ora.cssd
      1        ONLINE  ONLINE       rac1
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1
ora.ctssd
      1        ONLINE  ONLINE       rac1                     ACTIVE:0
ora.diskmon
      1        OFFLINE OFFLINE
ora.drivers.acfs
      1        ONLINE  OFFLINE
ora.evmd
      1        ONLINE  ONLINE       rac1
ora.gipcd
      1        ONLINE  ONLINE       rac1
ora.gpnpd
      1        ONLINE  ONLINE       rac1
ora.mdnsd
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#

[root@rac2 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online  <<<
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@rac2 ~]#

[root@rac2 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac2                     Started
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac2
ora.crf
      1        ONLINE  ONLINE       rac2
ora.crsd
      1        ONLINE  ONLINE       rac2  <<<
ora.cssd
      1        ONLINE  ONLINE       rac2
ora.cssdmonitor
      1        ONLINE  ONLINE       rac2
ora.ctssd
      1        ONLINE  ONLINE       rac2                     ACTIVE:0
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  ONLINE       rac2
ora.gipcd
      1        ONLINE  ONLINE       rac2
ora.gpnpd
      1        ONLINE  ONLINE       rac2
ora.mdnsd
      1        ONLINE  ONLINE       rac2
[root@rac2 ~]#

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

Still page under construction !!! 🙂

Add Node Back which was DELETED without software

Add Node Back to Cluster which was deleted without removing GI and RDBMS binaries.

0. Environment
1. Backup OCR
2. Cluster Node Addition
3. Run root.sh on new node RAC2
4. Check cluster status
5. Pin Node
6. Add instance to OCR
7. Update Inventory


0. Environment

Single-node RAC 11.2.0.3 (not RAC One Node). Earlier it was a two-node RAC setup; recently I deleted the 2nd node from the cluster for testing.
Node name: RAC1
OS: RHEL 5
DATABASE: nike, Instance: nike1

Task: We are going to add node “RAC2” to our existing cluster.

Current Status

[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
ora.DATA1.dg
               ONLINE  ONLINE       rac1
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
ora.asm
               ONLINE  ONLINE       rac1                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
ora.net1.network
               ONLINE  ONLINE       rac1
ora.ons
               ONLINE  ONLINE       rac1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#


1. Backup OCR

[root@rac1 ~]# ocrconfig -manualbackup

rac1     2015/06/23 05:46:12     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150623_054612.ocr

rac1     2015/06/23 02:39:07     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150623_023907.ocr

rac1     2015/06/19 23:38:03     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150619_233803.ocr
[root@rac1 ~]#


2. Cluster Node Addition.

Note: The pre-node addition check failed here, hence we need to set the environment variable IGNORE_PREADDNODE_CHECKS=Y before running addNode.sh; otherwise the silent node addition will fail without showing any errors on the console.
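
For reference, the pre-addition check that failed here is normally run with cluvfy from an existing node (a sketch; -fixup is optional):

[oracle@rac1 ~]$ cluvfy stage -pre nodeadd -n rac2 -fixup -verbose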

As GI Home owner
From active node RAC1

[oracle@rac1 ~]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac1 bin]$ IGNORE_PREADDNODE_CHECKS=Y
[oracle@rac1 bin]$ export IGNORE_PREADDNODE_CHECKS
[oracle@rac1 bin]$ ./addNode.sh -silent -noCopy "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5671 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.


Performing tests to see whether nodes rac2 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      rac2
         /u01: Required 7.50GB : Available 5.54GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.3.0
      Sun JDK 1.5.0.30.03
      Installer SDK Component 11.2.0.3.0
      Oracle One-Off Patch Installer 11.2.0.1.7
      Oracle Universal Installer 11.2.0.3.0
      Oracle USM Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.4
      Oracle DBCA Deconfiguration 11.2.0.3.0
      Oracle RAC Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Server) 11.2.0.3.0
      Installation Plugin Files 11.2.0.3.0
      Universal Storage Manager Files 11.2.0.3.0
      Oracle Text Required Support Files 11.2.0.3.0
      Automatic Storage Management Assistant 11.2.0.3.0
      Oracle Database 11g Multimedia Files 11.2.0.3.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
      Oracle Core Required Support Files 11.2.0.3.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Client) 11.2.0.3.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.3.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.3.0
      Oracle JDBC/OCI Instant Client 11.2.0.3.0
      Oracle Multimedia Client Option 11.2.0.3.0
      LDAP Required Support Files 11.2.0.3.0
      Character Set Migration Utility 11.2.0.3.0
      Perl Interpreter 5.10.0.0.2
      PL/SQL Embedded Gateway 11.2.0.3.0
      OLAP SQL Scripts 11.2.0.3.0
      Database SQL Scripts 11.2.0.3.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.3.0
      SQL*Plus Files for Instant Client 11.2.0.3.0
      Oracle Net Required Support Files 11.2.0.3.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.3.0
      RDBMS Required Support Files Runtime 11.2.0.3.0
      XML Parser for Java 11.2.0.3.0
      Oracle Security Developer Tools 11.2.0.3.0
      Oracle Wallet Manager 11.2.0.3.0
      Enterprise Manager plugin Common Files 11.2.0.3.0
      Platform Required Support Files 11.2.0.3.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.3.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.3
      Deinstallation Tool 11.2.0.3.0
      Oracle Java Client 11.2.0.3.0
      Cluster Verification Utility Files 11.2.0.3.0
      Oracle Notification Service (eONS) 11.2.0.3.0
      Oracle LDAP administration 11.2.0.3.0
      Cluster Verification Utility Common Files 11.2.0.3.0
      Oracle Clusterware RDBMS Files 11.2.0.3.0
      Oracle Locale Builder 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Buildtools Common Files 11.2.0.3.0
      Oracle RAC Required Support Files-HAS 11.2.0.3.0
      SQL*Plus Required Support Files 11.2.0.3.0
      XDK Required Support Files 11.2.0.3.0
      Agent Required Support Files 10.2.0.4.3
      Parser Generator Required Support Files 11.2.0.3.0
      Precompiler Required Support Files 11.2.0.3.0
      Installation Common Files 11.2.0.3.0
      Required Support Files 11.2.0.3.0
      Oracle JDBC/THIN Interfaces 11.2.0.3.0
      Oracle Multimedia Locator 11.2.0.3.0
      Oracle Multimedia 11.2.0.3.0
      HAS Common Files 11.2.0.3.0
      Assistant Common Files 11.2.0.3.0
      PL/SQL 11.2.0.3.0
      HAS Files for DB 11.2.0.3.0
      Oracle Recovery Manager 11.2.0.3.0
      Oracle Database Utilities 11.2.0.3.0
      Oracle Notification Service 11.2.0.3.0
      SQL*Plus 11.2.0.3.0
      Oracle Netca Client 11.2.0.3.0
      Oracle Net 11.2.0.3.0
      Oracle JVM 11.2.0.3.0
      Oracle Internet Directory Client 11.2.0.3.0
      Oracle Net Listener 11.2.0.3.0
      Cluster Ready Services Files 11.2.0.3.0
      Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Tuesday, June 23, 2015 6:01:55 AM IST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Saving inventory on nodes (Tuesday, June 23, 2015 6:03:32 AM IST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/11.2.0/grid/root.sh #On nodes rac2
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
[oracle@rac1 bin]$

Note: Set the environment properly; it is better to pass the ORACLE_HOME explicitly to the above script, as below:

./addNode.sh -silent -noCopy ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"


3. Run root.sh on new node RAC2

From Node RAC2
As root

[root@rac2 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac2 ~]#


4. Check cluster status

[oracle@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA1.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[oracle@rac1 ~]$


5. Pin Node

As root from node RAC1

[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Unpinned
[root@rac1 ~]#
[root@rac1 ~]# crsctl pin css -n rac2
CRS-4664: Node rac2 successfully pinned.
[root@rac1 ~]#
[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Pinned <<<<
[root@rac1 ~]#


6. Add instance to OCR

As ORACLE HOME owner

[oracle@rac1 ~]$ which srvctl
/u01/app/oracle/product/11.2.0/db_1/bin/srvctl << you should run srvctl from the ORACLE_HOME
[oracle@rac1 ~]$ 
[oracle@rac1 ~]$ srvctl status database -d nike -v
Instance nike1 is running on node rac1 with online services nike_srv. Instance status: Open.
[oracle@rac1 ~]$ 
[oracle@rac1 ~]$ srvctl add instance -d nike -i nike2 -n rac2
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl status database -d nike -v
Instance nike1 is running on node rac1 with online services nike_srv. Instance status: Open.
Instance nike2 is not running on node rac2 <<<< Instance added; we need to start it manually
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl start instance -d nike -i nike2
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl status database -d nike -v
Instance nike1 is running on node rac1 with online services nike_srv. Instance status: Open.
Instance nike2 is running on node rac2. Instance status: Open. <<< Now it is running
[oracle@rac1 ~]$


7. Update Inventory

Note: The addNode script automatically updates the NODE_LIST for the GI home.

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>  <<< NODE_LIST updated automatically by the addNode script.
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rac1"/>
	                                      <<<< Need to update NODE_LIST manually for ORACLE_HOME
</NODE_LIST>           
</HOME>

[oracle@rac1 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/oui/bin
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac1,rac2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5671 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>  << updated node list.
   </NODE_LIST>

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.
Still page under construction !!! 🙂

Delete Node without remove software

Delete Node without remove GI and RDBMS binaries.

0. Environment
1. Backup OCR
2. Check status of service
3. Shutdown instance 2
4. Unpin Node
5. Disable Oracle Clusterware
6. Delete Node from Clusterware Configuration
7. Backup Inventory
8. Update Inventory for ORACLE_HOME
9. Update Inventory for GI_HOME


0. Environment

Two node RAC, version 11.2.0.3
Node name: RAC1, RAC2
Database name: nike, instances: nike1, nike2. Admin-managed database.
Service name: nike_srv
OS: RHEL 5.7

Task: We are going to delete node RAC2 from the cluster without removing the GI and RDBMS binaries, because I want to add the node back later.

Current Status
===============

[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA1.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  OFFLINE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#


1. Backup OCR

[root@rac1 ~]# ocrconfig -manualbackup

rac1     2015/06/23 02:39:07     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150623_023907.ocr

rac1     2015/06/19 23:38:03     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150619_233803.ocr
[root@rac1 ~]#


2. Check status of service

[oracle@rac2 ~]$ srvctl status service -d nike
Service nike_srv is running on instance(s) nike1
[oracle@rac2 ~]$

Note: Confirm where the service is currently running. If the service is running on instance 2, manually fail it over (relocate it) before proceeding:

srvctl relocate service -d <dbname> -s <service name> -i <old_inst> -t <new_inst>

Note that this does not disconnect any current sessions.
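
With the names used in this environment, the relocation would look like the following (a hypothetical illustration only; here the service is already running on nike1, so nothing actually needs to be relocated):

[oracle@rac1 ~]$ srvctl relocate service -d nike -s nike_srv -i nike2 -t nike1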


3. Shutdown instance 2

[oracle@rac2 ~]$ srvctl stop instance -d nike -i nike2
[oracle@rac2 ~]$ srvctl status database -d nike
Instance nike1 is running on node rac1
Instance nike2 is not running on node rac2
[oracle@rac2 ~]$


4. Unpin Node

As root from node RAC1

[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Pinned
[root@rac1 ~]# crsctl unpin css -n rac2
CRS-4667: Node rac2 successfully unpinned.
[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Unpinned <<<<
[root@rac1 ~]#

Note: If Cluster Synchronization Services (CSS) is not running on the node you are deleting, then the crsctl unpin css command in this step fails.


5. Disable Oracle Clusterware

From node RAC2, which you want to delete
As user root.

[root@rac2 ~]# cd /u01/app/11.2.0/grid/crs/install/
[root@rac2 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.2.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.2.103/192.168.2.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.2.104/192.168.2.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'rac2'
CRS-2677: Stop of 'ora.DATA1.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 install]#


6. Delete Node from Clusterware Configuration

From node RAC1
As root user

[root@rac1 ~]# crsctl delete node -n rac2
CRS-4661: Node rac2 successfully deleted.
[root@rac1 ~]#

[root@rac1 ~]# olsnodes -t -s
rac1    Active  Pinned
[root@rac1 ~]#

As ORACLE HOME owner, remove instance 2 from OCR

[oracle@rac1 ~]$ which srvctl
/u01/app/oracle/product/11.2.0/db_1/bin/srvctl << you should run srvctl from the ORACLE_HOME
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl remove instance -d nike -i nike2 <<<
Remove instance from the database nike? (y/[n]) y
[oracle@rac1 ~]$

[oracle@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
ora.DATA1.dg
               ONLINE  ONLINE       rac1
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
ora.asm
               ONLINE  ONLINE       rac1                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
ora.net1.network
               ONLINE  ONLINE       rac1
ora.ons
               ONLINE  ONLINE       rac1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[oracle@rac1 ~]$

From node RAC2, confirm that no clusterware processes are left running

[root@rac2 ~]# ps -ef | grep init
root         1     0  0 Jun22 ?        00:00:00 init [5]
root      9125  8531  0 03:30 pts/1    00:00:00 grep init
[root@rac2 ~]# ps -ef | grep d.bin
root      9127  8531  0 03:30 pts/1    00:00:00 grep d.bin
[root@rac2 ~]#


7. Backup Inventory

From node RAC1
As root.

[root@rac1 ~]# cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
[root@rac1 ~]#
[root@rac1 ~]# cp -rp /u01/app/oraInventory /u01/app/oraInventory_bkp
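
A quick sanity check of the backup copy (a minimal sketch using standard OS tools; paths as above):

# compare the backup against the live inventory; no output means the copy is complete
diff -rq /u01/app/oraInventory /u01/app/oraInventory_bkp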


8. Update Inventory for ORACLE_HOME

From node RAC1
As ORACLE_HOME owner

[oracle@rac1 bin]$ cd /u01/app/oracle/product/11.2.0/db_1/oui/bin
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac1}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5671 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$


9. Update Inventory for GI_HOME

From node RAC1
As GI_HOME owner

[oracle@rac1 bin]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac1}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5671 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$
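
To confirm that both homes now record only rac1, the node list can also be inspected in the central inventory (a minimal sketch; it assumes the standard ContentsXML/inventory.xml layout with NODE NAME entries):

# show the node entries recorded for each Oracle home in the central inventory
grep -i "node name" /u01/app/oraInventory/ContentsXML/inventory.xml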

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally; however, we do not guarantee that it will work for you. Test it in your own environment before using it.
This page is still under construction !!! 🙂

Add Node

Add Node to 11gR2 Oracle RAC Cluster (11.2.0.3)

0. Environment

1. Pre-installation tasks for GI for a cluster

0.i) Backup OCR
i) Install the cvuqdisk Package for Linux
ii) Verify New Node (HWOS)
iii) Verify Peer (REFNODE)
iv) Verify New Node (New Pre-Node)
v) Run fixup scripts

2. Cluster Node Addition for GI Home.

i) Run addnode.sh script
ii) Run orainstRoot.sh #On nodes rac2
iii) Run root.sh #On nodes rac2
iv) Check Clusterware Resources after ran root.sh
v) Run cluvfy post-addNode script
vi) Check Cluster Nodes
vii) Check TNS Listener
viii) Check ASM Status
ix) Check OCR
x) Check Vote disk

3. Cluster Node Addition for RDBMS Home.

i) Run addnode.sh script
ii) root.sh #On nodes rac2 from RDBMS home

4. Add Instance to Database through Command-Line or you can add via dbca.

i) Pre-task
ii) Add redo thread
iii) Add undo tablespace
iv) Add instance to OCR
v) Add service to new instance via srvctl or you can add via dbca
vi) Check the cluster stack

Let's start !!!


0. Environment

A single-node RAC 11.2.0.3 cluster (not RAC One Node). It was originally a two-node RAC setup; I recently deleted the second node from the cluster for testing.
Node name: RAC1
OS: RHEL 5
DATABASE: nike, instance: nike1

Task: We are going to add node “RAC2” to our existing cluster.

Current status

[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
ora.DATA1.dg
               ONLINE  ONLINE       rac1
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
ora.asm
               ONLINE  ONLINE       rac1                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
ora.net1.network
               ONLINE  ONLINE       rac1
ora.ons
               ONLINE  ONLINE       rac1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#


1. Pre-installation tasks for GI for a cluster

A) Install and Configure the Linux Operating System on the New Node     << This step was already done by the SA
B) Configure Access to the Shared Storage     << This step was already done by the SA
C) Install and Configure ASMLib     << This step was already done by the SA
D) SSH configure    << This step was already done by the SA (a quick equivalence check is sketched below)
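
If you want to re-verify the SSH setup before going further, cluvfy can check user equivalence between the two nodes. A minimal sketch, run as the GI Home owner from rac1 (the admprv component check with -o user_equiv is assumed to be available in this cluvfy release):

# verify passwordless SSH (user equivalence) between rac1 and the new node rac2
cluvfy comp admprv -n rac1,rac2 -o user_equiv -verbose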


0.i) Backup OCR

From: Node RAC1
As root

[root@rac1 ~]# ocrconfig -manualbackup
rac1     2015/06/19 23:38:03     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150619_233803.ocr
[root@rac1 ~]#
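
To confirm the backup is registered, the OCR backup locations can be listed as well (a minimal sketch, run as root):

# list the OCR backups (automatic and manual) known to the cluster
ocrconfig -showbackup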


i) Install the cvuqdisk Package for Linux

[root@rac2 oracle]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
Using default group oinstall to install package
   1:cvuqdisk               ########################################### [100%]
[root@rac2 oracle]#

Note: Without the cvuqdisk package, CVU cannot discover shared disks, and you will receive the error message "Package cvuqdisk not installed" when CVU is run.

Example below:
==============
Checking shared storage accessibility...

WARNING:
rac2:PRVF-7017 : Package cvuqdisk not installed
        rac2
No shared storage found
Shared storage check failed on nodes "rac2"
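
A quick way to confirm the package really landed on rac2 (a minimal sketch using a standard rpm query):

# verify that the cvuqdisk package is installed on the new node
rpm -q cvuqdisk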


ii) Verify New Node (HWOS)

As GI Home owner
From active node RAC1.

[oracle@rac1 ~]$ cluvfy stage -post hwos -n rac2

Performing post-checks for hardware and operating system setup

Checking node reachability...
Node reachability check passed from node "rac1"


Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "192.168.2.0" with node(s) rac2
TCP connectivity check passed for subnet "192.168.2.0"

Node connectivity passed for subnet "192.168.0.0" with node(s) rac2
TCP connectivity check passed for subnet "192.168.0.0"

Node connectivity passed for subnet "10.0.4.0" with node(s) rac2
TCP connectivity check passed for subnet "10.0.4.0"


Interfaces found on subnet "10.0.4.0" that are likely candidates for VIP are:
rac2 eth2:10.0.4.15

Interfaces found on subnet "192.168.2.0" that are likely candidates for a private interconnect are:
rac2 eth0:192.168.2.102

Interfaces found on subnet "192.168.0.0" that are likely candidates for a private interconnect are:
rac2 eth1:192.168.0.102

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.4.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.4.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Check for multiple users with UID value 0 passed
Time zone consistency check passed

Checking shared storage accessibility...

  Disk                                  Sharing Nodes (1 in count)
  ------------------------------------  ------------------------
  /dev/sda                              rac2

  Disk                                  Sharing Nodes (1 in count)
  ------------------------------------  ------------------------
  /dev/sdb                              rac2


Shared storage check was successful on nodes "rac2"

Post-check for hardware and operating system setup was successful.
[oracle@rac1 ~]$


iii) Verify Peer (REFNODE)

From active node RAC1
As GI Home owner

[oracle@rac1 ~]$ cluvfy comp peer -refnode rac1 -n rac2 -orainv oinstall -osdba dba -verbose

Verifying peer compatibility

Checking peer compatibility...

Compatibility check: Physical memory [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          3.8633GB (4050940.0KB)    3.8633GB (4050940.0KB)    matched
Physical memory check passed

Compatibility check: Available memory [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          3.6215GB (3797424.0KB)    2.6165GB (2743628.0KB)    mismatched
Available memory check failed

Compatibility check: Swap space [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          1.9994GB (2096472.0KB)    5.5385GB (5807488.0KB)    mismatched
Swap space check failed

Compatibility check: Free disk space for "/u01/app/11.2.0/grid" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          17.165GB (1.7998848E7KB)  21.2725GB (2.2305792E7KB)  mismatched
Free disk space check failed

Compatibility check: Free disk space for "/tmp" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          3.707GB (3887104.0KB)     4.4717GB (4688896.0KB)    mismatched
Free disk space check failed

Compatibility check: User existence for "oracle" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          oracle(1100)              oracle(1100)              matched
User existence for "oracle" check passed

Compatibility check: Group existence for "oinstall" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          oinstall(1000)            oinstall(1000)            matched
Group existence for "oinstall" check passed

Compatibility check: Group existence for "dba" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          dba(1200)                 dba(1200)                 matched
Group existence for "dba" check passed

Compatibility check: Group membership for "oracle" in "oinstall (Primary)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          yes                       yes                       matched
Group membership for "oracle" in "oinstall (Primary)" check passed

Compatibility check: Group membership for "oracle" in "dba" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          yes                       yes                       matched
Group membership for "oracle" in "dba" check passed

Compatibility check: Run level [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          5                         5                         matched
Run level check passed

Compatibility check: System architecture [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          x86_64                    x86_64                    matched
System architecture check passed

Compatibility check: Kernel version [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          2.6.32-200.13.1.el5uek    2.6.32-200.13.1.el5uek    matched
Kernel version check passed

Compatibility check: Kernel param "semmsl" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          250                       250                       matched
Kernel param "semmsl" check passed

Compatibility check: Kernel param "semmns" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          32000                     32000                     matched
Kernel param "semmns" check passed

Compatibility check: Kernel param "semopm" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          100                       100                       matched
Kernel param "semopm" check passed

Compatibility check: Kernel param "semmni" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          128                       128                       matched
Kernel param "semmni" check passed

Compatibility check: Kernel param "shmmax" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          2074081280                1054504960                mismatched
Kernel param "shmmax" check failed

Compatibility check: Kernel param "shmmni" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          4096                      4096                      matched
Kernel param "shmmni" check passed

Compatibility check: Kernel param "shmall" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          2097152                   2097152                   matched
Kernel param "shmall" check passed

Compatibility check: Kernel param "file-max" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          6815744                   6815744                   matched
Kernel param "file-max" check passed

Compatibility check: Kernel param "ip_local_port_range" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          between 9000.0 & 65500.0  between 9000.0 & 65500.0  matched
Kernel param "ip_local_port_range" check passed

Compatibility check: Kernel param "rmem_default" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          262144                    262144                    matched
Kernel param "rmem_default" check passed

Compatibility check: Kernel param "rmem_max" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          4194304                   4194304                   matched
Kernel param "rmem_max" check passed

Compatibility check: Kernel param "wmem_default" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          262144                    262144                    matched
Kernel param "wmem_default" check passed

Compatibility check: Kernel param "wmem_max" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          1048586                   1048586                   matched
Kernel param "wmem_max" check passed

Compatibility check: Kernel param "aio-max-nr" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          1048576                   1048576                   matched
Kernel param "aio-max-nr" check passed

Compatibility check: Package existence for "make" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          make-3.81-3.el5           make-3.81-3.el5           matched
Package existence for "make" check passed

Compatibility check: Package existence for "binutils" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          binutils-2.17.50.0.6-14.el5  binutils-2.17.50.0.6-14.el5  matched
Package existence for "binutils" check passed

Compatibility check: Package existence for "gcc (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          gcc-4.1.2-51.el5 (x86_64)  gcc-4.1.2-51.el5 (x86_64)  matched
Package existence for "gcc (x86_64)" check passed

Compatibility check: Package existence for "libaio (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          libaio-0.3.106-5 (x86_64),libaio-0.3.106-5 (i386)  libaio-0.3.106-5 (x86_64),libaio-0.3.106-5 (i386)  matched
Package existence for "libaio (x86_64)" check passed

Compatibility check: Package existence for "glibc (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-2.5-65 (x86_64),glibc-2.5-65 (i686)  glibc-2.5-65 (x86_64),glibc-2.5-65 (i686)  matched
Package existence for "glibc (x86_64)" check passed

Compatibility check: Package existence for "compat-libstdc++-33 (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          compat-libstdc++-33-3.2.3-61 (x86_64),compat-libstdc++-33-3.2.3-61 (i386)  compat-libstdc++-33-3.2.3-61 (x86_64),compat-libstdc++-33-3.2.3-61 (i386)  matched
Package existence for "compat-libstdc++-33 (x86_64)" check passed

Compatibility check: Package existence for "elfutils-libelf (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          elfutils-libelf-0.137-3.el5 (x86_64),elfutils-libelf-0.137-3.el5 (i386)  elfutils-libelf-0.137-3.el5 (x86_64),elfutils-libelf-0.137-3.el5 (i386)  matched
Package existence for "elfutils-libelf (x86_64)" check passed

Compatibility check: Package existence for "elfutils-libelf-devel" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.137-3.el5  matched
Package existence for "elfutils-libelf-devel" check passed

Compatibility check: Package existence for "glibc-common" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-common-2.5-65       glibc-common-2.5-65       matched
Package existence for "glibc-common" check passed

Compatibility check: Package existence for "glibc-devel (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-devel-2.5-65 (x86_64),glibc-devel-2.5-65 (i386)  glibc-devel-2.5-65 (x86_64),glibc-devel-2.5-65 (i386)  matched
Package existence for "glibc-devel (x86_64)" check passed

Compatibility check: Package existence for "glibc-headers" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-headers-2.5-65      glibc-headers-2.5-65      matched
Package existence for "glibc-headers" check passed

Compatibility check: Package existence for "gcc-c++ (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          gcc-c++-4.1.2-51.el5 (x86_64)  gcc-c++-4.1.2-51.el5 (x86_64)  matched
Package existence for "gcc-c++ (x86_64)" check passed

Compatibility check: Package existence for "libaio-devel (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          libaio-devel-0.3.106-5 (i386),libaio-devel-0.3.106-5 (x86_64)  libaio-devel-0.3.106-5 (i386),libaio-devel-0.3.106-5 (x86_64)  matched
Package existence for "libaio-devel (x86_64)" check passed

Compatibility check: Package existence for "libgcc (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          libgcc-4.1.2-51.el5 (x86_64),libgcc-4.1.2-51.el5 (i386)  libgcc-4.1.2-51.el5 (x86_64),libgcc-4.1.2-51.el5 (i386)  matched
Package existence for "libgcc (x86_64)" check passed

Compatibility check: Package existence for "libstdc++ (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          libstdc++-4.1.2-51.el5 (x86_64),libstdc++-4.1.2-51.el5 (i386)  libstdc++-4.1.2-51.el5 (x86_64),libstdc++-4.1.2-51.el5 (i386)  matched
Package existence for "libstdc++ (x86_64)" check passed

Compatibility check: Package existence for "libstdc++-devel (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          libstdc++-devel-4.1.2-51.el5 (x86_64)  libstdc++-devel-4.1.2-51.el5 (x86_64)  matched
Package existence for "libstdc++-devel (x86_64)" check passed

Compatibility check: Package existence for "sysstat" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          sysstat-7.0.2-11.el5      sysstat-7.0.2-11.el5      matched
Package existence for "sysstat" check passed

Compatibility check: Package existence for "ksh" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          ksh-20100202-1.el5_6.6    ksh-20100202-1.el5_6.6    matched
Package existence for "ksh" check passed

Verification of peer compatibility was unsuccessful.
Checks did not pass for the following node(s):
        rac2
[oracle@rac1 ~]$
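
The mismatches reported above (available memory, swap, free disk space, shmmax) are worth a quick manual look before moving on; for example, a kernel parameter can be compared directly across both nodes (a minimal sketch, assuming SSH equivalence is already in place):

# compare the current shmmax value on both nodes (flagged as mismatched by cluvfy above)
ssh rac1 /sbin/sysctl -n kernel.shmmax
ssh rac2 /sbin/sysctl -n kernel.shmmax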


iv) Verify New Node (New Pre-Node)

As GI Home owner
From Node RAC1

[oracle@rac1 ~]$ cluvfy stage -pre nodeadd -n rac2 -fixup -verbose

Performing pre-checks for node addition

Checking node reachability...

Check: Node reachability from node "rac1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac2                                  yes
Result: Node reachability check passed from node "rac1"


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  passed
Result: User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac1                                  passed
  rac2                                  passed

Verification of the hosts config file successful


Interface information for node "rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.101   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.106   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.107   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.103   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.105   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth1   192.168.0.101   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth1   169.254.6.127   169.254.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:D3:B8:9F 1500


Interface information for node "rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.102   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth1   192.168.0.102   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:63:80:76 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:4C:B3:01 1500


Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1[192.168.2.101]             rac1[192.168.2.106]             yes
  rac1[192.168.2.101]             rac1[192.168.2.107]             yes
  rac1[192.168.2.101]             rac1[192.168.2.103]             yes
  rac1[192.168.2.101]             rac1[192.168.2.105]             yes
  rac1[192.168.2.101]             rac2[192.168.2.102]             yes
  rac1[192.168.2.106]             rac1[192.168.2.107]             yes
  rac1[192.168.2.106]             rac1[192.168.2.103]             yes
  rac1[192.168.2.106]             rac1[192.168.2.105]             yes
  rac1[192.168.2.106]             rac2[192.168.2.102]             yes
  rac1[192.168.2.107]             rac1[192.168.2.103]             yes
  rac1[192.168.2.107]             rac1[192.168.2.105]             yes
  rac1[192.168.2.107]             rac2[192.168.2.102]             yes
  rac1[192.168.2.103]             rac1[192.168.2.105]             yes
  rac1[192.168.2.103]             rac2[192.168.2.102]             yes
  rac1[192.168.2.105]             rac2[192.168.2.102]             yes
Result: Node connectivity passed for interface "eth0"


Check: TCP connectivity of subnet "192.168.2.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.2.101              rac1:192.168.2.106              passed
  rac1:192.168.2.101              rac1:192.168.2.107              passed
  rac1:192.168.2.101              rac1:192.168.2.103              passed
  rac1:192.168.2.101              rac1:192.168.2.105              passed
  rac1:192.168.2.101              rac2:192.168.2.102              passed
Result: TCP connectivity check passed for subnet "192.168.2.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac1"

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/u01/app/11.2.0/grid" is shared
Result: Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac1                                  passed
  rac2                                  passed

Verification of the hosts config file successful


Interface information for node "rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.101   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.106   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.107   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.103   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.105   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth1   192.168.0.101   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth1   169.254.6.127   169.254.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:D3:B8:9F 1500


Interface information for node "rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.102   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth1   192.168.0.102   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:63:80:76 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:4C:B3:01 1500


Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1[192.168.2.101]             rac1[192.168.2.106]             yes
  rac1[192.168.2.101]             rac1[192.168.2.107]             yes
  rac1[192.168.2.101]             rac1[192.168.2.103]             yes
  rac1[192.168.2.101]             rac1[192.168.2.105]             yes
  rac1[192.168.2.101]             rac2[192.168.2.102]             yes
  rac1[192.168.2.106]             rac1[192.168.2.107]             yes
  rac1[192.168.2.106]             rac1[192.168.2.103]             yes
  rac1[192.168.2.106]             rac1[192.168.2.105]             yes
  rac1[192.168.2.106]             rac2[192.168.2.102]             yes
  rac1[192.168.2.107]             rac1[192.168.2.103]             yes
  rac1[192.168.2.107]             rac1[192.168.2.105]             yes
  rac1[192.168.2.107]             rac2[192.168.2.102]             yes
  rac1[192.168.2.103]             rac1[192.168.2.105]             yes
  rac1[192.168.2.103]             rac2[192.168.2.102]             yes
  rac1[192.168.2.105]             rac2[192.168.2.102]             yes
Result: Node connectivity passed for interface "eth0"


Check: TCP connectivity of subnet "192.168.2.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.2.101              rac1:192.168.2.106              passed
  rac1:192.168.2.101              rac1:192.168.2.107              passed
  rac1:192.168.2.101              rac1:192.168.2.103              passed
  rac1:192.168.2.101              rac1:192.168.2.105              passed
  rac1:192.168.2.101              rac2:192.168.2.102              passed
Result: TCP connectivity check passed for subnet "192.168.2.0"


Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1[192.168.0.101]             rac2[192.168.0.102]             yes
Result: Node connectivity passed for interface "eth1"


Check: TCP connectivity of subnet "192.168.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.0.101              rac2:192.168.0.102              passed
Result: TCP connectivity check passed for subnet "192.168.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Check: Total memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          3.8633GB (4050940.0KB)    1.5GB (1572864.0KB)       passed
  rac1          3.8633GB (4050940.0KB)    1.5GB (1572864.0KB)       passed
Result: Total memory check passed

Check: Available memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          3.6216GB (3797500.0KB)    50MB (51200.0KB)          passed
  rac1          2.4853GB (2606016.0KB)    50MB (51200.0KB)          passed
Result: Available memory check passed

Check: Swap space
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          1.9994GB (2096472.0KB)    3.8633GB (4050940.0KB)    failed  <<<<<
  rac1          5.5385GB (5807488.0KB)    3.8633GB (4050940.0KB)    passed
Result: Swap space check failed

Check: Free disk space for "rac2:/u01/app/11.2.0/grid"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/11.2.0/grid  rac2          /u01          17.165GB      5.5GB         passed
Result: Free disk space check passed for "rac2:/u01/app/11.2.0/grid"

Check: Free disk space for "rac1:/u01/app/11.2.0/grid"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/11.2.0/grid  rac1          /u01          21.2705GB     5.5GB         passed
Result: Free disk space check passed for "rac1:/u01/app/11.2.0/grid"

Check: Free disk space for "rac2:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /tmp              rac2          /             3.707GB       1GB           passed
Result: Free disk space check passed for "rac2:/tmp"

Check: Free disk space for "rac1:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /tmp              rac1          /             4.4717GB      1GB           passed
Result: Free disk space check passed for "rac1:/tmp"

Check: User existence for "oracle"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac2          passed                    exists(1100)
  rac1          passed                    exists(1100)

Checking for multiple users with UID value 1100
Result: Check for multiple users with UID value 1100 passed
Result: User existence check passed for "oracle"

Check: Run level
  Node Name     run level                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          5                         3,5                       passed
  rac1          5                         3,5                       passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac1              hard          65536         65536         passed
  rac2              hard          65536         65536         passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac1              soft          1024          1024          passed
  rac2              soft          1024          1024          passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac1              hard          16384         16384         passed
  rac2              hard          16384         16384         passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac1              soft          2047          2047          passed
  rac2              soft          2047          2047          passed
Result: Soft limits check passed for "maximum user processes"

Check: System architecture
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          x86_64                    x86_64                    passed
  rac1          x86_64                    x86_64                    passed
Result: System architecture check passed

Check: Kernel version
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          2.6.32-200.13.1.el5uek    2.6.18                    passed
  rac1          2.6.32-200.13.1.el5uek    2.6.18                    passed
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              250           250           250           passed
  rac2              250           250           250           passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              32000         32000         32000         passed
  rac2              32000         32000         32000         passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              100           100           100           passed
  rac2              100           100           100           passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              128           128           128           passed
  rac2              128           128           128           passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              1054504960    1054504960    2074081280    failed        Current value too low. Configured value too low.  <<< 
  rac2              2074081280    1054504960    2074081280    failed        Configured value too low.  <<<
Result: Kernel parameter check failed for "shmmax"

Check: Kernel parameter for "shmmni"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              4096          4096          4096          passed
  rac2              4096          4096          4096          passed
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              2097152       2097152       2097152       passed
  rac2              2097152       2097152       2097152       passed
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              6815744       6815744       6815744       passed
  rac2              6815744       6815744       6815744       passed
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
  rac2              between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              262144        262144        262144        passed
  rac2              262144        262144        262144        passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              4194304       4194304       4194304       passed
  rac2              4194304       4194304       4194304       passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              262144        262144        262144        passed
  rac2              262144        262144        262144        passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              1048586       1048586       1048576       passed
  rac2              1048586       1048586       1048576       passed
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              1048576       1048576       1048576       passed
  rac2              1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "make"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          make-3.81-3.el5           make-3.81                 passed
  rac1          make-3.81-3.el5           make-3.81                 passed
Result: Package existence check passed for "make"

Check: Package existence for "binutils"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          binutils-2.17.50.0.6-14.el5  binutils-2.17.50.0.6      passed
  rac1          binutils-2.17.50.0.6-14.el5  binutils-2.17.50.0.6      passed
Result: Package existence check passed for "binutils"

Check: Package existence for "gcc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          gcc(x86_64)-4.1.2-51.el5  gcc(x86_64)-4.1.2         passed
  rac1          gcc(x86_64)-4.1.2-51.el5  gcc(x86_64)-4.1.2         passed
Result: Package existence check passed for "gcc(x86_64)"

Check: Package existence for "libaio(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          libaio(x86_64)-0.3.106-5  libaio(x86_64)-0.3.106    passed
  rac1          libaio(x86_64)-0.3.106-5  libaio(x86_64)-0.3.106    passed
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "glibc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc(x86_64)-2.5-65      glibc(x86_64)-2.5-24      passed
  rac1          glibc(x86_64)-2.5-65      glibc(x86_64)-2.5-24      passed
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          compat-libstdc++-33(x86_64)-3.2.3-61  compat-libstdc++-33(x86_64)-3.2.3  passed
  rac1          compat-libstdc++-33(x86_64)-3.2.3-61  compat-libstdc++-33(x86_64)-3.2.3  passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "elfutils-libelf(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          elfutils-libelf(x86_64)-0.137-3.el5  elfutils-libelf(x86_64)-0.125  passed
  rac1          elfutils-libelf(x86_64)-0.137-3.el5  elfutils-libelf(x86_64)-0.125  passed
Result: Package existence check passed for "elfutils-libelf(x86_64)"

Check: Package existence for "elfutils-libelf-devel"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.125  passed
  rac1          elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.125  passed
Result: Package existence check passed for "elfutils-libelf-devel"

Check: Package existence for "glibc-common"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-common-2.5-65       glibc-common-2.5          passed
  rac1          glibc-common-2.5-65       glibc-common-2.5          passed
Result: Package existence check passed for "glibc-common"

Check: Package existence for "glibc-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-devel(x86_64)-2.5-65  glibc-devel(x86_64)-2.5   passed
  rac1          glibc-devel(x86_64)-2.5-65  glibc-devel(x86_64)-2.5   passed
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "glibc-headers"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-headers-2.5-65      glibc-headers-2.5         passed
  rac1          glibc-headers-2.5-65      glibc-headers-2.5         passed
Result: Package existence check passed for "glibc-headers"

Check: Package existence for "gcc-c++(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          gcc-c++(x86_64)-4.1.2-51.el5  gcc-c++(x86_64)-4.1.2     passed
  rac1          gcc-c++(x86_64)-4.1.2-51.el5  gcc-c++(x86_64)-4.1.2     passed
Result: Package existence check passed for "gcc-c++(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          libaio-devel(x86_64)-0.3.106-5  libaio-devel(x86_64)-0.3.106  passed
  rac1          libaio-devel(x86_64)-0.3.106-5  libaio-devel(x86_64)-0.3.106  passed
Result: Package existence check passed for "libaio-devel(x86_64)"

Check: Package existence for "libgcc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          libgcc(x86_64)-4.1.2-51.el5  libgcc(x86_64)-4.1.2      passed
  rac1          libgcc(x86_64)-4.1.2-51.el5  libgcc(x86_64)-4.1.2      passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          libstdc++(x86_64)-4.1.2-51.el5  libstdc++(x86_64)-4.1.2   passed
  rac1          libstdc++(x86_64)-4.1.2-51.el5  libstdc++(x86_64)-4.1.2   passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          libstdc++-devel(x86_64)-4.1.2-51.el5  libstdc++-devel(x86_64)-4.1.2  passed
  rac1          libstdc++-devel(x86_64)-4.1.2-51.el5  libstdc++-devel(x86_64)-4.1.2  passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          sysstat-7.0.2-11.el5      sysstat-7.0.2             passed
  rac1          sysstat-7.0.2-11.el5      sysstat-7.0.2             passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "ksh"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          ksh-20100202-1.el5_6.6    ksh-20060214              passed
  rac1          ksh-20100202-1.el5_6.6    ksh-20060214              passed
Result: Package existence check passed for "ksh"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  passed
  rac1                                  passed

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Check: Time zone consistency
Result: Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running

Result: Clock synchronization check using Network Time Protocol(NTP) passed


Checking to make sure user "oracle" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac2          passed                    does not exist
  rac1          passed                    does not exist
Result: User "oracle" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "rajasekhar.com" as found on node "rac1"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  rac1                                  failed
  rac2                                  failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac1,rac2

File "/etc/resolv.conf" is not consistent across nodes

Fixup information has been generated for following node(s):
rac2,rac1
Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.3.0_oracle/runfixup.sh'

Pre-check for node addition was unsuccessful on all the nodes.
[oracle@rac1 ~]$


v) Run fixup scripts

On RAC 1, as root user

[root@rac1 ~]# /tmp/CVU_11.2.0.3.0_oracle/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.enable
Log file location: /tmp/CVU_11.2.0.3.0_oracle/orarun.log
Setting Kernel Parameters...
kernel.shmmax = 68719476736
kernel.shmmax = 1054504960
/tmp/CVU_11.2.0.3.0_oracle/orarun.sh: line 230: [: 68719476736kernel.shmmax: integer expression expected
The value for shmmax in response file is not greater than value for shmmax in /etc/sysctl.conf file. Hence not changing it.
kernel.shmmax = 2074081280
[root@rac1 ~]#

On RAC2, as root user

[root@rac2 ~]# /tmp/CVU_11.2.0.3.0_oracle/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.enable
Log file location: /tmp/CVU_11.2.0.3.0_oracle/orarun.log
Setting Kernel Parameters...
kernel.shmmax = 68719476736
kernel.shmmax = 1054504960
/tmp/CVU_11.2.0.3.0_oracle/orarun.sh: line 230: [: 68719476736kernel.shmmax: integer expression expected
The value for shmmax in response file is not greater than value for shmmax in /etc/sysctl.conf file. Hence not changing it.
The value for shmmax in response file is not greater than value of shmmax for current session. Hence not changing it.
[root@rac2 ~]#


2. Cluster Node Addition for GI Home.

Note: The pre-node-addition check failed, so set the environment variable IGNORE_PREADDNODE_CHECKS=Y before running addNode.sh; otherwise the silent node addition will fail without reporting any errors to the console.
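
Equivalently, the variable can be set inline for just the addNode.sh invocation (a sketch; the quoting is the same as in the step below):

IGNORE_PREADDNODE_CHECKS=Y ./addNode.sh -silent "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"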


i) Run addnode.sh script

As GI Home owner
From active node RAC1

[oracle@rac1 ~]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac1 bin]$ IGNORE_PREADDNODE_CHECKS=Y
[oracle@rac1 bin]$ export IGNORE_PREADDNODE_CHECKS
[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5671 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.


Performing tests to see whether nodes rac2 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      rac2
         /u01: Required 6.99GB : Available 15.98GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.3.0
      Sun JDK 1.5.0.30.03
      Installer SDK Component 11.2.0.3.0
      Oracle One-Off Patch Installer 11.2.0.1.7
      Oracle Universal Installer 11.2.0.3.0
      Oracle USM Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.4
      Oracle DBCA Deconfiguration 11.2.0.3.0
      Oracle RAC Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Server) 11.2.0.3.0
      Installation Plugin Files 11.2.0.3.0
      Universal Storage Manager Files 11.2.0.3.0
      Oracle Text Required Support Files 11.2.0.3.0
      Automatic Storage Management Assistant 11.2.0.3.0
      Oracle Database 11g Multimedia Files 11.2.0.3.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
      Oracle Core Required Support Files 11.2.0.3.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Client) 11.2.0.3.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.3.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.3.0
      Oracle JDBC/OCI Instant Client 11.2.0.3.0
      Oracle Multimedia Client Option 11.2.0.3.0
      LDAP Required Support Files 11.2.0.3.0
      Character Set Migration Utility 11.2.0.3.0
      Perl Interpreter 5.10.0.0.2
      PL/SQL Embedded Gateway 11.2.0.3.0
      OLAP SQL Scripts 11.2.0.3.0
      Database SQL Scripts 11.2.0.3.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.3.0
      SQL*Plus Files for Instant Client 11.2.0.3.0
      Oracle Net Required Support Files 11.2.0.3.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.3.0
      RDBMS Required Support Files Runtime 11.2.0.3.0
      XML Parser for Java 11.2.0.3.0
      Oracle Security Developer Tools 11.2.0.3.0
      Oracle Wallet Manager 11.2.0.3.0
      Enterprise Manager plugin Common Files 11.2.0.3.0
      Platform Required Support Files 11.2.0.3.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.3.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.3
      Deinstallation Tool 11.2.0.3.0
      Oracle Java Client 11.2.0.3.0
      Cluster Verification Utility Files 11.2.0.3.0
      Oracle Notification Service (eONS) 11.2.0.3.0
      Oracle LDAP administration 11.2.0.3.0
      Cluster Verification Utility Common Files 11.2.0.3.0
      Oracle Clusterware RDBMS Files 11.2.0.3.0
      Oracle Locale Builder 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Buildtools Common Files 11.2.0.3.0
      Oracle RAC Required Support Files-HAS 11.2.0.3.0
      SQL*Plus Required Support Files 11.2.0.3.0
      XDK Required Support Files 11.2.0.3.0
      Agent Required Support Files 10.2.0.4.3
      Parser Generator Required Support Files 11.2.0.3.0
      Precompiler Required Support Files 11.2.0.3.0
      Installation Common Files 11.2.0.3.0
      Required Support Files 11.2.0.3.0
      Oracle JDBC/THIN Interfaces 11.2.0.3.0
      Oracle Multimedia Locator 11.2.0.3.0
      Oracle Multimedia 11.2.0.3.0
      HAS Common Files 11.2.0.3.0
      Assistant Common Files 11.2.0.3.0
      PL/SQL 11.2.0.3.0
      HAS Files for DB 11.2.0.3.0
      Oracle Recovery Manager 11.2.0.3.0
      Oracle Database Utilities 11.2.0.3.0
      Oracle Notification Service 11.2.0.3.0
      SQL*Plus 11.2.0.3.0
      Oracle Netca Client 11.2.0.3.0
      Oracle Net 11.2.0.3.0
      Oracle JVM 11.2.0.3.0
      Oracle Internet Directory Client 11.2.0.3.0
      Oracle Net Listener 11.2.0.3.0
      Cluster Ready Services Files 11.2.0.3.0
      Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Sunday, June 21, 2015 12:35:19 PM IST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Sunday, June 21, 2015 12:35:22 PM IST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Sunday, June 21, 2015 12:45:29 PM IST)
.                                                               100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'rac2'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes rac2
/u01/app/11.2.0/grid/root.sh #On nodes rac2
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
[oracle@rac1 bin]$

Note: If you are using GNS:
cd $GI_HOME/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={rac2}"


ii) Run orainstRoot.sh #On nodes rac2

On node RAC2
As root

[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 ~]#


iii) Run root.sh #On nodes rac2

On node RAC2
As root

[root@rac2 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac2 ~]#

Note: The root.sh script configures Grid Infrastructure on the new node, which includes adding the High Availability Services entry to /etc/inittab so that CRS starts when the machine boots.
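
A quick way to confirm this on the new node is to look for the OHASD entry in /etc/inittab (run as root on rac2; the exact entry text can vary by platform and version):

grep ohasd /etc/inittab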


iv) Check Clusterware Resources after running root.sh

[root@rac2 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA1.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac2 ~]#

[root@rac2 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@rac2 ~]#


v) Run cluvfy post-node add script

As GI Home owner
From node RAC1 (as a best practice, because the pre-node-addition cluvfy check was initially run from RAC1 only)

[oracle@rac1 ~]$ cluvfy stage -post nodeadd -n rac2 -verbose

Performing post-checks for node addition

Checking node reachability...

Check: Node reachability from node "rac1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac2                                  yes
Result: Node reachability check passed from node "rac1"


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  passed
Result: User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  passed
  rac1                                  passed

Verification of the hosts config file successful


Interface information for node "rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.102   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth0   192.168.2.107   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth0   192.168.2.104   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth1   192.168.0.102   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:63:80:76 1500
 eth1   169.254.215.111 169.254.0.0     0.0.0.0         10.0.4.2        08:00:27:63:80:76 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:4C:B3:01 1500


Interface information for node "rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.101   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.106   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.103   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.105   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth1   192.168.0.101   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth1   169.254.6.127   169.254.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:D3:B8:9F 1500


Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac2[192.168.2.102]             rac2[192.168.2.107]             yes
  rac2[192.168.2.102]             rac2[192.168.2.104]             yes
  rac2[192.168.2.102]             rac1[192.168.2.101]             yes
  rac2[192.168.2.102]             rac1[192.168.2.106]             yes
  rac2[192.168.2.102]             rac1[192.168.2.103]             yes
  rac2[192.168.2.102]             rac1[192.168.2.105]             yes
  rac2[192.168.2.107]             rac2[192.168.2.104]             yes
  rac2[192.168.2.107]             rac1[192.168.2.101]             yes
  rac2[192.168.2.107]             rac1[192.168.2.106]             yes
  rac2[192.168.2.107]             rac1[192.168.2.103]             yes
  rac2[192.168.2.107]             rac1[192.168.2.105]             yes
  rac2[192.168.2.104]             rac1[192.168.2.101]             yes
  rac2[192.168.2.104]             rac1[192.168.2.106]             yes
  rac2[192.168.2.104]             rac1[192.168.2.103]             yes
  rac2[192.168.2.104]             rac1[192.168.2.105]             yes
  rac1[192.168.2.101]             rac1[192.168.2.106]             yes
  rac1[192.168.2.101]             rac1[192.168.2.103]             yes
  rac1[192.168.2.101]             rac1[192.168.2.105]             yes
  rac1[192.168.2.106]             rac1[192.168.2.103]             yes
  rac1[192.168.2.106]             rac1[192.168.2.105]             yes
  rac1[192.168.2.103]             rac1[192.168.2.105]             yes
Result: Node connectivity passed for interface "eth0"


Check: TCP connectivity of subnet "192.168.2.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.2.101              rac2:192.168.2.102              passed
  rac1:192.168.2.101              rac2:192.168.2.107              passed
  rac1:192.168.2.101              rac2:192.168.2.104              passed
  rac1:192.168.2.101              rac1:192.168.2.106              passed
  rac1:192.168.2.101              rac1:192.168.2.103              passed
  rac1:192.168.2.101              rac1:192.168.2.105              passed
Result: TCP connectivity check passed for subnet "192.168.2.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking cluster integrity...

  Node Name
  ------------------------------------
  rac1
  rac2

Cluster integrity check passed


Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac2"
The Oracle Clusterware is healthy on node "rac1"

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/u01/app/11.2.0/grid" is not shared
Result: Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  passed
  rac1                                  passed

Verification of the hosts config file successful


Interface information for node "rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.102   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth0   192.168.2.107   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth0   192.168.2.104   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth1   192.168.0.102   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:63:80:76 1500
 eth1   169.254.215.111 169.254.0.0     0.0.0.0         10.0.4.2        08:00:27:63:80:76 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:4C:B3:01 1500


Interface information for node "rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.101   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.106   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.103   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.105   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth1   192.168.0.101   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth1   169.254.6.127   169.254.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:D3:B8:9F 1500


Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac2[192.168.2.102]             rac2[192.168.2.107]             yes
  rac2[192.168.2.102]             rac2[192.168.2.104]             yes
  rac2[192.168.2.102]             rac1[192.168.2.101]             yes
  rac2[192.168.2.102]             rac1[192.168.2.106]             yes
  rac2[192.168.2.102]             rac1[192.168.2.103]             yes
  rac2[192.168.2.102]             rac1[192.168.2.105]             yes
  rac2[192.168.2.107]             rac2[192.168.2.104]             yes
  rac2[192.168.2.107]             rac1[192.168.2.101]             yes
  rac2[192.168.2.107]             rac1[192.168.2.106]             yes
  rac2[192.168.2.107]             rac1[192.168.2.103]             yes
  rac2[192.168.2.107]             rac1[192.168.2.105]             yes
  rac2[192.168.2.104]             rac1[192.168.2.101]             yes
  rac2[192.168.2.104]             rac1[192.168.2.106]             yes
  rac2[192.168.2.104]             rac1[192.168.2.103]             yes
  rac2[192.168.2.104]             rac1[192.168.2.105]             yes
  rac1[192.168.2.101]             rac1[192.168.2.106]             yes
  rac1[192.168.2.101]             rac1[192.168.2.103]             yes
  rac1[192.168.2.101]             rac1[192.168.2.105]             yes
  rac1[192.168.2.106]             rac1[192.168.2.103]             yes
  rac1[192.168.2.106]             rac1[192.168.2.105]             yes
  rac1[192.168.2.103]             rac1[192.168.2.105]             yes
Result: Node connectivity passed for interface "eth0"


Check: TCP connectivity of subnet "192.168.2.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.2.101              rac2:192.168.2.102              passed
  rac1:192.168.2.101              rac2:192.168.2.107              passed
  rac1:192.168.2.101              rac2:192.168.2.104              passed
  rac1:192.168.2.101              rac1:192.168.2.106              passed
  rac1:192.168.2.101              rac1:192.168.2.103              passed
  rac1:192.168.2.101              rac1:192.168.2.105              passed
Result: TCP connectivity check passed for subnet "192.168.2.0"


Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac2[192.168.0.102]             rac1[192.168.0.101]             yes
Result: Node connectivity passed for interface "eth1"


Check: TCP connectivity of subnet "192.168.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.0.101              rac2:192.168.0.102              passed
Result: TCP connectivity check passed for subnet "192.168.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking node application existence...

Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          yes                       yes                       passed
  rac1          yes                       yes                       passed
VIP node application check passed

Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          yes                       yes                       passed
  rac1          yes                       yes                       passed
NETWORK node application check passed

Checking existence of GSD node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          no                        no                        exists
  rac1          no                        no                        exists
GSD node application is offline on nodes "rac2,rac1"

Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          no                        yes                       passed
  rac1          no                        yes                       passed
ONS node application check passed


Checking Single Client Access Name (SCAN)...
  SCAN Name         Node          Running?      ListenerName  Port          Running?
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac-scan.rajasekhar.com  rac2          true          LISTENER_SCAN1  1521          true
  rac-scan.rajasekhar.com  rac1          true          LISTENER_SCAN2  1521          true
  rac-scan.rajasekhar.com  rac1          true          LISTENER_SCAN3  1521          true

Checking TCP connectivity to SCAN Listeners...
  Node          ListenerName              TCP connectivity?
  ------------  ------------------------  ------------------------
  rac1          LISTENER_SCAN1            yes
  rac1          LISTENER_SCAN2            yes
  rac1          LISTENER_SCAN3            yes
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "rac-scan.rajasekhar.com"...
  SCAN Name     IP Address                Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  rac-scan.rajasekhar.com  192.168.2.107             passed
  rac-scan.rajasekhar.com  192.168.2.105             passed
  rac-scan.rajasekhar.com  192.168.2.106             passed

Verification of SCAN VIP and Listener setup passed

Checking to make sure user "oracle" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac2          passed                    does not exist
Result: User "oracle" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  passed
Result: CTSS resource check passed


Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  rac2                                  Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  rac2          0.0                       passed

Time offset is within the specified limits on the following set of nodes:
"[rac2]"
Result: Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was successful. <<<
[oracle@rac1 ~]$


vi) Check Cluster Nodes

[oracle@rac2 ~]$ olsnodes -n
rac1    1
rac2    2
[oracle@rac2 ~]$
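
olsnodes can also report node status and pin state, which is a handy way to confirm that the new node is active (flags per the 11.2 olsnodes utility):

olsnodes -n -s -t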


vii) Check TNS Listener

On node RAC2

[oracle@rac2 ~]$ ps -ef | grep tns
root        13     2  0 Jun20 ?        00:00:00 [netns]
oracle   24168     1  0 12:57 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle   24639     1  0 12:57 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
oracle   28355 28292  0 13:19 pts/1    00:00:00 grep tns
[oracle@rac2 ~]$
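
Assuming the default node listener name LISTENER, the same can also be verified through the clusterware resource:

srvctl status listener -l LISTENER -n rac2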


viii) Check ASM Status

On node RAC2

[oracle@rac2 ~]$ srvctl status asm -a
ASM is running on rac2,rac1
ASM is enabled.
[oracle@rac2 ~]$


ix) Check OCR

On node RAC2

[oracle@rac2 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4076
         Available space (kbytes) :     258044
         ID                       : 1037097601
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

[oracle@rac2 ~]$


x) Check Vote disk

On node RAC2

[oracle@rac2 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   a785661b81264f8ebfa8538128d4e1fe (/dev/oracleasm/disks/DISK1) [DATA]
 2. ONLINE   7a41922f13254f61bf4cee3f53b9aa74 (/dev/oracleasm/disks/DISK2) [DATA]
 3. ONLINE   492244b5021f4fc7bf7d75b74cfe841a (/dev/oracleasm/disks/DISK3) [DATA]
Located 3 voting disk(s).
[oracle@rac2 ~]$


3. Cluster Node Addition for RDBMS Home.


i) Run addnode.sh script

From Node 1 RAC1
As RDBMS HOME owner

[oracle@rac1 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/oui/bin/    <<< This is RDBMS home

[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5670 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.


Performing tests to see whether nodes rac2 are available
............................................................... 100% Done.

........
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/oracle/product/11.2.0/db_1
   New Nodes
Space Requirements
   New Nodes
      rac2
         /u01: Required 5.04GB : Available 10.57GB
Installed Products
   Product Names
      Oracle Database 11g 11.2.0.3.0
      Sun JDK 1.5.0.30.03
      Installer SDK Component 11.2.0.3.0
      Oracle One-Off Patch Installer 11.2.0.1.7
      Oracle Universal Installer 11.2.0.3.0
      Oracle USM Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Oracle DBCA Deconfiguration 11.2.0.3.0
      Oracle RAC Deconfiguration 11.2.0.3.0
      Oracle Database Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Client 10.3.2.1.0
      Oracle Configuration Manager 10.3.5.0.1
      Oracle ODBC Driverfor Instant Client 11.2.0.3.0
      LDAP Required Support Files 11.2.0.3.0
      SSL Required Support Files for InstantClient 11.2.0.3.0
      Bali Share 1.1.18.0.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      Oracle Real Application Testing 11.2.0.3.0
      Oracle Database Vault J2EE Application 11.2.0.3.0
      Oracle Label Security 11.2.0.3.0
      Oracle Data Mining RDBMS Files 11.2.0.3.0
      Oracle OLAP RDBMS Files 11.2.0.3.0
      Oracle OLAP API 11.2.0.3.0
      Platform Required Support Files 11.2.0.3.0
      Oracle Database Vault option 11.2.0.3.0
      Oracle RAC Required Support Files-HAS 11.2.0.3.0
      SQL*Plus Required Support Files 11.2.0.3.0
      Oracle Display Fonts 9.0.2.0.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle JDBC Server Support Package 11.2.0.3.0
      Oracle SQL Developer 11.2.0.3.0
      Oracle Application Express 11.2.0.3.0
      XDK Required Support Files 11.2.0.3.0
      RDBMS Required Support Files for Instant Client 11.2.0.3.0
      SQLJ Runtime 11.2.0.3.0
      Database Workspace Manager 11.2.0.3.0
      RDBMS Required Support Files Runtime 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Exadata Storage Server 11.2.0.1.0
      Provisioning Advisor Framework 10.2.0.4.3
      Enterprise Manager Database Plugin -- Repository Support 11.2.0.3.0
      Enterprise Manager Repository Core Files 10.2.0.4.4
      Enterprise Manager Database Plugin -- Agent Support 11.2.0.3.0
      Enterprise Manager Grid Control Core Files 10.2.0.4.4
      Enterprise Manager Common Core Files 10.2.0.4.4
      Enterprise Manager Agent Core Files 10.2.0.4.4
      RDBMS Required Support Files 11.2.0.3.0
      regexp 2.1.9.0.0
      Agent Required Support Files 10.2.0.4.3
      Oracle 11g Warehouse Builder Required Files 11.2.0.3.0
      Oracle Notification Service (eONS) 11.2.0.3.0
      Oracle Text Required Support Files 11.2.0.3.0
      Parser Generator Required Support Files 11.2.0.3.0
      Oracle Database 11g Multimedia Files 11.2.0.3.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
      Oracle Multimedia Annotator 11.2.0.3.0
      Oracle JDBC/OCI Instant Client 11.2.0.3.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
      Precompiler Required Support Files 11.2.0.3.0
      Oracle Core Required Support Files 11.2.0.3.0
      Sample Schema Data 11.2.0.3.0
      Oracle Starter Database 11.2.0.3.0
      Oracle Message Gateway Common Files 11.2.0.3.0
      Oracle XML Query 11.2.0.3.0
      XML Parser for Oracle JVM 11.2.0.3.0
      Oracle Help For Java 4.2.9.0.0
      Installation Plugin Files 11.2.0.3.0
      Enterprise Manager Common Files 10.2.0.4.3
      Expat libraries 2.0.1.0.1
      Deinstallation Tool 11.2.0.3.0
      Oracle Quality of Service Management (Client) 11.2.0.3.0
      Perl Modules 5.10.0.0.1
      JAccelerator (COMPANION) 11.2.0.3.0
      Oracle Containers for Java 11.2.0.3.0
      Perl Interpreter 5.10.0.0.2
      Oracle Net Required Support Files 11.2.0.3.0
      Secure Socket Layer 11.2.0.3.0
      Oracle Universal Connection Pool 11.2.0.3.0
      Oracle JDBC/THIN Interfaces 11.2.0.3.0
      Oracle Multimedia Client Option 11.2.0.3.0
      Oracle Java Client 11.2.0.3.0
      Character Set Migration Utility 11.2.0.3.0
      Oracle Code Editor 1.2.1.0.0I
      PL/SQL Embedded Gateway 11.2.0.3.0
      OLAP SQL Scripts 11.2.0.3.0
      Database SQL Scripts 11.2.0.3.0
      Oracle Locale Builder 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      SQL*Plus Files for Instant Client 11.2.0.3.0
      Required Support Files 11.2.0.3.0
      Oracle Database User Interface 2.2.13.0.0
      Oracle ODBC Driver 11.2.0.3.0
      Oracle Notification Service 11.2.0.3.0
      XML Parser for Java 11.2.0.3.0
      Oracle Security Developer Tools 11.2.0.3.0
      Oracle Wallet Manager 11.2.0.3.0
      Cluster Verification Utility Common Files 11.2.0.3.0
      Oracle Clusterware RDBMS Files 11.2.0.3.0
      Oracle UIX 2.2.24.6.0
      Enterprise Manager plugin Common Files 11.2.0.3.0
      HAS Common Files 11.2.0.3.0
      Precompiler Common Files 11.2.0.3.0
      Installation Common Files 11.2.0.3.0
      Oracle Help for the  Web 2.0.14.0.0
      Oracle LDAP administration 11.2.0.3.0
      Buildtools Common Files 11.2.0.3.0
      Assistant Common Files 11.2.0.3.0
      Oracle Recovery Manager 11.2.0.3.0
      PL/SQL 11.2.0.3.0
      Generic Connectivity Common Files 11.2.0.3.0
      Oracle Database Gateway for ODBC 11.2.0.3.0
      Oracle Programmer 11.2.0.3.0
      Oracle Database Utilities 11.2.0.3.0
      Enterprise Manager Agent 10.2.0.4.3
      SQL*Plus 11.2.0.3.0
      Oracle Netca Client 11.2.0.3.0
      Oracle Multimedia Locator 11.2.0.3.0
      Oracle Call Interface (OCI) 11.2.0.3.0
      Oracle Multimedia 11.2.0.3.0
      Oracle Net 11.2.0.3.0
      Oracle XML Development Kit 11.2.0.3.0
      Database Configuration and Upgrade Assistants 11.2.0.3.0
      Oracle JVM 11.2.0.3.0
      Oracle Advanced Security 11.2.0.3.0
      Oracle Internet Directory Client 11.2.0.3.0
      Oracle Enterprise Manager Console DB 11.2.0.3.0
      HAS Files for DB 11.2.0.3.0
      Oracle Net Listener 11.2.0.3.0
      Oracle Text 11.2.0.3.0
      Oracle Net Services 11.2.0.3.0
      Oracle Database 11g 11.2.0.3.0
      Oracle OLAP 11.2.0.3.0
      Oracle Spatial 11.2.0.3.0
      Oracle Partitioning 11.2.0.3.0
      Enterprise Edition Options 11.2.0.3.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Sunday, June 21, 2015 2:06:51 PM IST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Sunday, June 21, 2015 2:08:47 PM IST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Sunday, June 21, 2015 8:10:31 PM IST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes rac2
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
[oracle@rac1 bin]$


ii) Run root.sh #On nodes rac2 from RDBMS home

[root@rac2 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@rac2 ~]#


4. Add Instance to Database through Command Line (or via dbca).
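
For reference, the dbca silent-mode alternative looks roughly like the following (a sketch; <sys_password> is a placeholder, and the options mirror the -deleteInstance syntax used later in the delete-node procedure):

dbca -silent -addInstance -nodeList rac2 -gdbName nike -instanceName nike2 -sysDBAUserName sys -sysDBAPassword <sys_password>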


i) Pre-task

On RAC2
As RDBMS Home owner

[oracle@rac2 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/dbs
[oracle@rac2 dbs]$ ls -ltr
total 28
-rw-r----- 1 oracle oinstall 1536 Jun 21 19:56 orapwnike1 <<
-rw-r----- 1 oracle oinstall  161 Jun 21 19:56 initDBUA5216639.ora
-rw-r----- 1 oracle oinstall   36 Jun 21 19:56 initnike1.ora << 
-rw-rw---- 1 oracle oinstall 1544 Jun 21 19:56 hc_DBUA5216639.dat 
-rw-rw---- 1 oracle oinstall 1544 Jun 21 19:56 hc_nike1.dat 
-rw-r--r-- 1 oracle oinstall 2851 Jun 21 19:56 init.ora 
-rw-r----- 1 oracle oinstall 1536 Jun 21 19:56 orapwDBUA5216639
[oracle@rac2 dbs]$
[oracle@rac2 dbs]$ mv initnike1.ora initnike2.ora
[oracle@rac2 dbs]$ mv orapwnike1 orapwnike2 
[oracle@rac2 dbs]$ ls -ltr 
total 28 
-rw-r----- 1 oracle oinstall 1536 Jun 21 19:56 orapwnike2 
-rw-r----- 1 oracle oinstall  161 Jun 21 19:56 initDBUA5216639.ora 
-rw-r----- 1 oracle oinstall   36 Jun 21 19:56 initnike2.ora 
-rw-rw---- 1 oracle oinstall 1544 Jun 21 19:56 hc_DBUA5216639.dat 
-rw-rw---- 1 oracle oinstall 1544 Jun 21 19:56 hc_nike1.dat 
-rw-r--r-- 1 oracle oinstall 2851 Jun 21 19:56 init.ora 
-rw-r----- 1 oracle oinstall 1536 Jun 21 19:56 orapwDBUA5216639
[oracle@rac2 dbs]$
[oracle@rac2 dbs]$ cat initnike2.ora 
SPFILE='+DATA1/nike/spfilenike.ora' 
[oracle@rac2 dbs]$ 
[oracle@rac2 dbs]$ echo "nike2:/u01/app/oracle/product/11.2.0/db_1:N" >> /etc/oratab
[oracle@rac2 dbs]$ echo "nike:/u01/app/oracle/product/11.2.0/db_1:N" >> /etc/oratab

cat /etc/oratab
..
#
+ASM2:/u01/app/11.2.0/grid:N            # line added by Agent
nike2:/u01/app/oracle/product/11.2.0/db_1:N
nike:/u01/app/oracle/product/11.2.0/db_1:N


[oracle@rac2 ~]$ mkdir -p /u01/app/oracle/admin/nike/adump
[oracle@rac2 ~]$ mkdir -p /u01/app/oracle/admin/nike/dpdump
[oracle@rac2 ~]$ mkdir -p /u01/app/oracle/admin/nike/hdump
[oracle@rac2 ~]$ mkdir -p /u01/app/oracle/admin/nike/pfile


ii) Add redo thread

On RAC1, As ORACLE HOME owner

SQL> set lines 180
SQL> col MEMBER for a60
SQL> select b.thread#, a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;

   THREAD#     GROUP# MEMBER                                                            BYTES
---------- ---------- ------------------------------------------------------------ ----------
         1          2 +DATA1/nike/onlinelog/group_2.290.847761035                    52428800
         1          1 +DATA1/nike/onlinelog/group_1.289.847761031                    52428800

SQL>

SQL> alter database add logfile thread 2 group 3 ('+DATA1') size 52428800, group 4 ('+DATA1') size 52428800;

Database altered.

SQL>

SQL> select b.thread#, a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;

   THREAD#     GROUP# MEMBER                                                            BYTES
---------- ---------- ------------------------------------------------------------ ----------
         1          2 +DATA1/nike/onlinelog/group_2.290.847761035                    52428800
         1          1 +DATA1/nike/onlinelog/group_1.289.847761031                    52428800
         2          3 +DATA1/nike/onlinelog/group_3.271.883012175                    52428800
         2          4 +DATA1/nike/onlinelog/group_4.270.883012181                    52428800

SQL>

SQL> alter database enable public thread 2;

Database altered.

SQL>


iii) Add undo tablespace

On node RAC1

SQL> set pages 0
SQL> set long 9999999
SQL> select dbms_metadata.get_ddl('TABLESPACE','UNDOTBS1') from dual;

  CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
   ALTER DATABASE DATAFILE
  '+DATA1/nike/datafile/undotbs1.357.847760839' RESIZE 41943040


SQL> create undo tablespace undotbs2 datafile '+DATA1' size 25M autoextend on next 5m maxsize 40M;

Tablespace created.

SQL>

SQL> alter system set undo_tablespace=undotbs2 scope=spfile sid='nike2';

System altered.

SQL> alter system set instance_number=2 scope=spfile sid='nike2';

System altered.

SQL> alter system set thread=2 scope=spfile sid='nike2';

System altered.

SQL>

SQL> alter system set cluster_database_instances=2 scope=spfile sid='*';

System altered.

SQL>

SQL> select inst_id,name,value from gv$parameter where name like 'undo_table%';

INST_ID NAME                 VALUE
------- -------------------- ---------------
      2 undo_tablespace      UNDOTBS2
      1 undo_tablespace      UNDOTBS1

SQL>


iv) Add instance to OCR

From node RAC1 as ORACLE_HOME owner

[oracle@rac1 ~]$ which srvctl
/u01/app/oracle/product/11.2.0/db_1/bin/srvctl   <<< You should run from RDBMS Home
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl add instance -d nike -i nike2 -n rac2
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl status database -d nike -v
Instance nike1 is running on node rac1 with online services nike_srv. Instance status: Open.
Instance nike2 is not running on node rac2 <<<
[oracle@rac1 ~]$ 
[oracle@rac1 ~]$ srvctl start instance -d nike -i nike2
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl status database -d nike -v
Instance nike1 is running on node rac1 with online services nike_srv. Instance status: Open.
Instance nike2 is running on node rac2. Instance status: Open. <<<
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl config database -d nike
Database unique name: nike
Database name: nike
Oracle home: /u01/app/oracle/product/11.2.0/db_1 <<<
Oracle user: oracle
Spfile: +DATA1/nike/spfilenike.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: nike
Database instances: nike1,nike2  <<<<
Disk Groups: DATA1
Mount point paths:
Services: nike_srv <<
Type: RAC
Database is administrator managed <<<
[oracle@rac1 ~]$

SQL> col host_name format a22
SQL> set lines 180
SQL> select host_name, inst_id, instance_name, status, to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS') as "START_TIME" from gv$instance order by inst_id;

HOST_NAME                 INST_ID INSTANCE_NAME    STATUS       START_TIME
---------------------- ---------- ---------------- ------------ --------------------
rac1.rajasekhar.com             1 nike1            OPEN         21-JUN-2015 11:38:48
rac2.rajasekhar.com             2 nike2            OPEN         22-JUN-2015 01:25:13

SQL>


v) Add New Instance to Service via srvctl (or via dbca)

From node RAC1, as ORACLE_HOME owner

[oracle@rac1 ~]$ which srvctl
/u01/app/oracle/product/11.2.0/db_1/bin/srvctl   <<< You should run from RDBMS Home
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl add service -d nike -s nike_srv -a nike2 -u  <<<< -a means available instance.
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl config service -d nike -s nike_srv
Service name: nike_srv <<<
Service is enabled
Server pool: nike_nike_srv
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: nike1
Available instances: nike2 <<<<
[oracle@rac1 ~]$

Note: If you want to add the instance as a preferred instance instead, follow the steps below.

[oracle@rac1 ~]$ which srvctl
/u01/app/oracle/product/11.2.0/db_1/bin/srvctl   <<< You should run from RDBMS Home
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl add service -d nike -s nike_srv -r nike2 -u
[oracle@rac1 ~]$ srvctl config service -d nike -s nike_srv
Service name: nike_srv
Service is enabled
Server pool: nike_nike_srv
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: nike2,nike1 <<<
Available instances:
[oracle@rac1 ~]$ srvctl status service -d nike
Service nike_srv is running on instance(s) nike1 <<<< 
[oracle@rac1 ~]$ srvctl start service -d nike
[oracle@rac1 ~]$ srvctl status service -d nike -v
Service nike_srv is running on instance(s) nike1,nike2 <<<<
[oracle@rac1 ~]$

Note: Modify tnsnames.ora file if required.
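
For example, a minimal tnsnames.ora entry for the service (assuming the SCAN name and port shown earlier) might look like:

NIKE_SRV =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.rajasekhar.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = nike_srv)
    )
  )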


vi) Check the cluster stack

[oracle@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA1.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[oracle@rac1 ~]$

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

Page still under construction !!! 🙂

Delete Node

Delete Node from Cluster in 11gR2 (11.2.0.3)

0. Environment

1. Remove Oracle Instance

i) Remove Instance from OEM Database Control Monitoring
ii) Backup OCR
iii) Remove instance name from services
iv) Remove Instance from the Cluster Database

2. Remove Oracle Database Software

i) Verify Listener Not Running in Oracle Home
ii) Update Oracle Inventory – (Node Being Removed)
iii) Remove instance nike2 entry from /etc/oratab
iv) De-install Oracle Home (Non-shared Oracle Home)
v) Update Oracle Inventory – (All Remaining Nodes)

3. Remove Node from Clusterware

i) Unpin Node
ii) Disable Oracle Clusterware
iii) Delete Node from Clusterware Configuration
iv) Update Oracle Inventory – (Node Being Removed) for GI Home
v) De-install Oracle Grid Infrastructure Software (Non-shared GI Home)
vi) After the de-install completes, verify that the /etc/inittab file does not start Oracle Clusterware.
vii) Update Oracle Inventory – (All Remaining Nodes)
viii) Verify New Cluster Configuration


0. Environment:

– Two Node RAC version 11.2.0.3
– Node Name: RAC1, RAC2
– OS: RHEL 5
– Database name: nike and instances are nike1 and nike2
– The existing Oracle RAC database is administrator-managed (not policy-managed).
– The existing Oracle RAC does not use shared Oracle homes for the Grid Infrastructure or Database software.

Task: We are going to delete node RAC2 from the cluster.

Cluster status
===============
[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA1.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
      2        OFFLINE OFFLINE
ora.oc4j
      1        ONLINE  OFFLINE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]# 


1. Remove Oracle Instance


i) Remove Instance from OEM Database Control Monitoring

Note: OEM Database Control was not configured in this environment, so this step is skipped here.
From: Node RAC1
Note: Run the emca command from any node in the cluster except the node that is running the instance we want to remove from monitoring.

emctl status dbconsole
emctl status agent
emca -displayConfig dbcontrol -cluster
emca -deleteInst db


ii) Backup OCR
From: Node RAC1

[root@rac1 ~]# ocrconfig -manualbackup
rac1     2015/06/19 23:38:03     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150619_233803.ocr
[root@rac1 ~]#

Note: Voting disks are automatically backed up in OCR as part of any configuration change, so no separate voting disk backup is needed for the changes we will be making to the cluster.
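
A quick way to confirm the manual backup was recorded, and to see where the voting disks currently reside (optional checks, run as root):

ocrconfig -showbackup manual
crsctl query css votedisk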


iii) Remove instance name from services
From node RAC1
Note:
Before deleting an instance from an Oracle RAC database, use either SRVCTL or Oracle Enterprise Manager to do the following:
If you have services configured, then relocate the services
Modify the services so that each service can run on one remaining instance
Ensure that the instance to be removed from an administrator-managed database is neither a preferred nor an available instance of any service

[oracle@rac1 ~]$ srvctl status service -d nike -s nike_srv -v
Service nike_srv is running on instance(s) nike1  <<<< The service runs only on instance nike1, so no relocation is needed here. If it were running on nike2, it would have to be relocated before the instance is deleted (see the sketch below).
[oracle@rac1 ~]$
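
Had the service been running on nike2, it could be moved back to nike1 with a relocate along these lines (a sketch using this environment's database, service, and instance names):

srvctl relocate service -d nike -s nike_srv -i nike2 -t nike1    # relocate nike_srv from nike2 to nike1 (add -f to disconnect existing sessions)
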
[oracle@rac1 ~]$ srvctl config service -d nike -s nike_srv -v
Service name: nike_srv
Service is enabled
Server pool: nike_nike_srv
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: nike1  
Available instances: nike2 <<< nike2 is listed as an available instance for this service; it must be removed before the instance can be deleted.
[oracle@rac1 ~]$

[oracle@rac1 ~]$ srvctl modify service -d nike -s nike_srv -n -i nike1 <<< the -n option replaces the instance lists, leaving nike1 as the only preferred instance and clearing the available list
[oracle@rac1 ~]$ srvctl config service -d nike -s nike_srv -v
Service name: nike_srv
Service is enabled
Server pool: nike_nike_srv
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: nike1  
Available instances:  <<<< nike2 has been removed from the available instance list by the "srvctl modify service -d nike -s nike_srv -n -i nike1" command above
[oracle@rac1 ~]$

[oracle@rac1 ~]$ srvctl status service -d nike -s nike_srv -v
Service nike_srv is running on instance(s) nike1  <<< the service still runs only on nike1
[oracle@rac1 ~]$


iv) Remove Instance from the Cluster Database
From Node RAC1 as Oracle Home owner.

[oracle@rac1 ~]$ srvctl config database -d nike -v
Database unique name: nike
Database name: nike
Oracle home: /u01/app/oracle/product/11.2.0/db_1 <<<<<
Oracle user: oracle
Spfile: +DATA1/nike/spfilenike.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: nike
Database instances: nike1,nike2 <<<<
Disk Groups: DATA1
Mount point paths:
Services: nike_srv
Type: RAC
Database is administrator managed <<<< This is an administrator-managed database, so the instance can be removed with dbca -deleteInstance below.
[oracle@rac1 ~]$

[oracle@rac1 ~]$ dbca -silent -deleteInstance -nodeList rac2 -gdbName nike -instanceName nike2 -sysDBAUserName sys -sysDBAPassword sys
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/nike.log" for further details.
[oracle@rac1 ~]$

[oracle@rac1 ~]$ srvctl config database -d nike -v
Database unique name: nike
Database name: nike
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA1/nike/spfilenike.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: nike
Database instances: nike1 <<<<<< instance nike2 removed. 
Disk Groups: DATA1 
Mount point paths: 
Services: nike_srv 
Type: RAC
Database is administrator managed
[oracle@rac1 ~]$ 
SQL> select inst_id, instance_name, status, to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS') as "START_TIME" from gv$instance order by inst_id;
   INST_ID INSTANCE_NAME    STATUS       START_TIME
---------- ---------------- ------------ --------------------
         1 nike1            OPEN         19-JUN-2015 01:15:39  <<<< Only instance nike1 remains; nike2 has been removed from the cluster database.
SQL>


2. Remove Oracle Database Software

i) Verify Listener Not Running in Oracle Home >>> This step can be skipped here because no listener runs from the RDBMS home; the checks below confirm it.

From Node RAC2

[oracle@rac2 ~]$ ps -ef | grep tns
root         9     2  0 Jun19 ?        00:00:00 [netns]
oracle    4372     1  0 Jun19 ?        00:00:01 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit  <<< The listener is running from the GI home.
oracle    4408     1  0 Jun19 ?        00:00:01 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle   11983 11943  0 00:43 pts/1    00:00:00 grep tns
[oracle@rac2 ~]$

[oracle@rac2 ~]$ srvctl config listener -a    (the listener runs from the GI home, so no action is needed in this step)
Name: LISTENER
Network: 1, Owner: oracle
Home: 
  /u01/app/11.2.0/grid on node(s) rac1,rac2 
End points: TCP:1521
[oracle@rac2 ~]$

Note: If any listeners were explicitly created to run from the Oracle home being removed, they would need to be disabled and stopped first, for example:
srvctl disable listener -l listener_name -n node_name
srvctl stop listener -l listener_name -n node_name
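
For example, if a listener named LISTENER_DB (an illustrative name, not one configured in this environment) had been created from the database home on rac2:

srvctl disable listener -l LISTENER_DB -n rac2
srvctl stop listener -l LISTENER_DB -n rac2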


ii) Update Oracle Inventory – (Node Being Removed)

From node RAC2

[oracle@rac2 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac2}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac2 bin]$


iii) Remove instance nike2 entry from /etc/oratab

From node RAC2

+ASM2:/u01/app/11.2.0/grid:N            # line added by Agent >> Remove all database instance entries (here, the nike2 line) from /etc/oratab and keep the ASM entry; one way to do this is sketched below.
[oracle@rac2 ~]$
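
A minimal non-interactive way to make this edit is sketched below; editing with vi works just as well (the backup file name is illustrative):

cp /etc/oratab /etc/oratab.bak_nodedel     # keep a copy before editing
sed -i '/^nike2:/d' /etc/oratab            # remove the nike2 entry; the +ASM2 line stays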


iv) De-install Oracle Home (Non-shared Oracle Home)

From Node RAC2 as Oracle Home owner

[oracle@rac2 ~]$ cd $ORACLE_HOME/deinstall
[oracle@rac2 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: rac2
Checking for sufficient temp space availability on node(s) : 'rac2'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2015-06-20_01-56-25-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2015-06-20_01-56-28-AM.log

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2015-06-20_01-56-32-AM.log

Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check4882.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac2
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac2', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/db_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.

No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y <<<<<
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2015-06-20_01-56-02-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2015-06-20_01-56-02-AM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2015-06-20_01-56-32-AM.log

Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2015-06-20_02-02-14-AM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2015-06-20_02-02-14-AM.log

De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean4882.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/db_1' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/11.2.0/grid'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2015-06-20_01-50-11AM' on node 'rac2'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/11.2.0/db_1' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[oracle@rac2 deinstall]$

Note: If this were a shared home then instead of de-installing the Oracle Database software, you would simply detach the Oracle home from the inventory.
./runInstaller -detachHome ORACLE_HOME=Oracle_home_location


v) Update Oracle Inventory – (All Remaining Nodes)
From Node RAC1

[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ pwd
/u01/app/oracle/product/11.2.0/db_1/oui/bin
[oracle@rac1 bin]$
[oracle@rac1 bin]$
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac1}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$


3. Remove Node from Clusterware


i) Unpin Node
As root from node RAC1

[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Pinned
[root@rac1 ~]# crsctl unpin css -n rac2
CRS-4667: Node rac2 successfully unpinned.
[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Unpinned <<<<
[root@rac1 ~]#

Note: If Cluster Synchronization Services (CSS) is not running on the node you are deleting, then the crsctl unpin css command in this step fails.
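
A quick check that CSS is online on the node being removed, before unpinning (run on rac2):

crsctl check css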


ii) Disable Oracle Clusterware

From node RAC2, which you want to delete
As user root.

[root@rac2 ~]# cd /u01/app/11.2.0/grid/crs/install/
[root@rac2 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.2.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.2.103/192.168.2.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.2.104/192.168.2.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2613: Could not find resource 'ora.registry.acfs'.
CRS-4000: Command Stop failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'rac2'
CRS-2677: Stop of 'ora.DATA1.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
You have new mail in /var/spool/mail/root
[root@rac2 install]#


iii) Delete Node from Clusterware Configuration

From node RAC1
As root user

[root@rac1 ~]# crsctl delete node -n rac2
CRS-4661: Node rac2 successfully deleted.
[root@rac1 ~]#
[root@rac1 ~]# olsnodes -t -s
rac1    Active  Pinned
[root@rac1 ~]#
[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
ora.DATA1.dg
               ONLINE  ONLINE       rac1
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
ora.asm
               ONLINE  ONLINE       rac1                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
ora.net1.network
               ONLINE  ONLINE       rac1
ora.ons
               ONLINE  ONLINE       rac1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  OFFLINE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#


iv) Update Oracle Inventory – (Node Being Removed) for GI Home

From node RAC2, which we want to remove
As GI home owner

	  
[oracle@rac2 ~]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac2}" CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac2 bin]$


v) De-install Oracle Grid Infrastructure Software (Non-shared GI Home)

From node RAC2, which we want to delete
As GI Home owner

[oracle@rac2 deinstall]$ pwd
/u01/app/11.2.0/grid/deinstall
[oracle@rac2 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2015-06-20_05-14-18AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac2
Checking for sufficient temp space availability on node(s) : 'rac2'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2015-06-20_05-14-18AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac2"[rac2-vip]
 >
[ENTER]
The following information can be collected by running "/sbin/ifconfig -a" on node "rac2"
Enter the IP netmask of Virtual IP "192.168.2.104" on node "rac2"[255.255.255.0]
 >
[ENTER]
Enter the network interface name on which the virtual IP address "192.168.2.104" is active
 >
[ENTER]
Enter an address or the name of the virtual IP[]
 >
[ENTER]
Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2015-06-20_05-14-18AM/logs/netdc_check2015-06-20_05-43-09-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER_1,LISTENER,LISTENER_SCAN2,LISTENER_SCAN1]:LISTENER

At least one listener from the discovered listener list [LISTENER_1,LISTENER,LISTENER_SCAN2,LISTENER_SCAN1] is missing in the specified listener list [LISTENER]. The Oracle home will be cleaned up, so all the listeners will not be available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2015-06-20_05-14-18AM/logs/asmcadc_check2015-06-20_05-44-06-AM.log


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac2
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac2', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2015-06-20_05-14-18AM/logs/deinstall_deconfig2015-06-20_05-34-12-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2015-06-20_05-14-18AM/logs/deinstall_deconfig2015-06-20_05-34-12-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2015-06-20_05-14-18AM/logs/asmcadc_clean2015-06-20_05-44-25-AM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2015-06-20_05-14-18AM/logs/netdc_clean2015-06-20_05-44-25-AM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
    Stopping listener on node "rac2": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac2".

/tmp/deinstall2015-06-20_05-14-18AM/perl/bin/perl -I/tmp/deinstall2015-06-20_05-14-18AM/perl/lib -I/tmp/deinstall2015-06-20_05-14-18AM/crs/install /tmp/deinstall2015-06-20_05-14-18AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-06-20_05-14-18AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands


Run the above command as root on the specified node(s) from a different shell

[root@rac2 ~]# /tmp/deinstall2015-06-20_05-14-18AM/perl/bin/perl -I/tmp/deinstall2015-06-20_05-14-18AM/perl/lib -I/tmp/deinstall2015-06-20_05-14-18AM/crs/install /tmp/deinstall2015-06-20_05-14-18AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-06-20_05-14-18AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2015-06-20_05-14-18AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware          #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 ~]#

Once completed press [ENTER] on the first shell session

Remove the directory: /tmp/deinstall2015-06-20_05-14-18AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done


Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2015-06-20_05-14-18AM' on node 'rac2'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.


Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac2' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac2' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

Note: If this were a shared home then instead of de-installing the Grid Infrastructure software, you would simply detach the Grid home from the inventory.
./runInstaller -detachHome ORACLE_HOME=Grid_home_location

[root@rac2 ~]# rm -rf /etc/oraInst.loc
[root@rac2 ~]# rm -rf /opt/ORCLfmap
[root@rac2 ~]# rm -rf /u01/app/11.2.0
[root@rac2 ~]# rm -rf /u01/app/oracle


vi) After the de-install completes, verify that the /etc/inittab file does not start Oracle Clusterware.

[root@rac2 ~]# diff /etc/inittab /etc/inittab.no_crs
[root@rac2 ~]#
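
If no saved copy of inittab is available to diff against, a direct check is to look for the init.ohasd respawn entry that the Grid Infrastructure installation adds; after deconfiguration this should return nothing:

grep init.ohasd /etc/inittab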


vii) Update Oracle Inventory – (All Remaining Nodes)

From Node 1.
As GI Home owner

[oracle@rac1 ~]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={rac1}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 2036 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$


viii) Verify New Cluster Configuration

[oracle@rac1 ~]$ cluvfy stage -post nodedel -n rac2 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac1"

CRS integrity check passed
Result:
Node removal check passed

Post-check for node removal was successful.
[oracle@rac1 ~]$

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally; however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using it.

Page still under construction !!! 🙂

CLUVFY

Cluster Verification Utility Command Reference

1. Pre-check for CRS installation
2. Post-Check for CRS Installation
3. Post-check for hardware and operating system
4. Pre-check for ACFS Configuration
5. Post-check for ACFS Configuration
6. Pre-check for OCFS2 or OCFS
7. Post-check for OCFS2 or OCFS
8. Pre-check for database configuration
9. Pre-check for database installation
10. Pre-check for configuring Oracle Restart
11. Post-check for configuring Oracle Restart
12. Pre-check for add node
13. Post-check for add node
14. Post-check for node delete
15. Check ACFS integrity
16. Checks user accounts and administrative permissions
17. Check ASM integrity
18. Check CFS integrity
19. Check Clock Synchronization
20. Check cluster integrity
21. Check cluster manager integrity
22. Check CRS integrity
23. Check DHCP
24. Check DNS
25. Check HA integrity
26. Check space availability
27. Check GNS
28. Check GPNP
29. Check healthcheck
30. Checks node applications existence
31. Check node connectivity
32. Checks reachability between nodes
33. Check OCR integrity
34. Check OHASD integrity
35. Check OLR integrity
36. Check node comparison and verification
37. Checks SCAN configuration
38. Checks software component verification
39. Checks space availability
40. Checks shared storage accessibility
41. Check minimum system requirements
42. Check Voting Disk Udev settings
43. Run cluvfy before doing an upgrade
44. strace the command to get more details

~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ***** ~~~~~~~~~~~~~~~
cluvfy stage {-pre|-post} stage_name stage_specific_options [-verbose]
~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ***** ~~~~~~~~~~~~~~~

1. Pre-check for CRS installation


Use the cluvfy stage -pre crsinst command to check the specified nodes before installing Oracle Clusterware. CVU performs additional checks on OCR and voting disks if you specify the -c and -q options.
 
cluvfy stage -pre crsinst -n node1,node2 -verbose


2. Post-Check for CRS Installation


Use the cluvfy stage -post crsinst command to check the specified nodes after installing Oracle Clusterware.
 
cluvfy stage -post crsinst -n node1,node2 -verbose


3. Post-check for hardware and operating system


-- Use the cluvfy stage -post hwos stage verification command to perform network and storage verifications on the specified nodes in the cluster before installing 
   Oracle software. This command also checks for supported storage types and checks each one for sharing.
 
cluvfy stage -post hwos -n node_list [-s storageID_list] [-verbose]
cluvfy stage -post hwos -n node1,node2 -verbose


4. Pre-check for ACFS Configuration


-- Use the cluvfy stage -pre acfscfg command to verify that your cluster nodes are set up correctly before configuring Oracle ASM Cluster File System (Oracle ACFS).
 
cluvfy stage -pre acfscfg -n node_list [-asmdev asm_device_list] [-verbose]
cluvfy stage -pre acfscfg -n node1,node2 -verbose


5. Post-check for ACFS Configuration


-- Use the cluvfy stage -post acfscfg to check an existing cluster after you configure Oracle ACFS.
 
cluvfy stage -post acfscfg -n node_list [-verbose]
cluvfy stage -post acfscfg -n node1,node2 -verbose


6. Pre-check for OCFS2 or OCFS


-- Use the cluvfy stage -pre cfs stage verification command to verify your cluster nodes are set up correctly before setting up OCFS2 or OCFS for Windows.
 
cluvfy stage -pre cfs -n node_list -s storageID_list [-verbose]
cluvfy stage -pre cfs -n node1,node2 -verbose


7. Post-check for OCFS2 or OCFS


-- Use the cluvfy stage -post cfs stage verification command to perform the appropriate checks on the specified nodes after setting up OCFS2 or OCFS for Windows.
 
cluvfy stage -post cfs -n node_list -f file_system [-verbose]
cluvfy stage -post cfs -n node1,node2 -verbose


8. Pre-check for database configuration


-- Use the cluvfy stage -pre dbcfg command to check the specified nodes before configuring an Oracle RAC database to verify whether your system meets all of the
   criteria for creating a database or for making a database configuration change.
 
cluvfy stage -pre dbcfg -n node_list -d Oracle_home [-fixup [-fixupdir fixup_dir]] [-verbose]
cluvfy stage -pre dbcfg -n node1,node2 -d Oracle_home -verbose


9. Pre-check for database installation


-- Use the cluvfy stage -pre dbinst command to check the specified nodes before installing or creating an Oracle RAC database to verify that your system meets all of
   the criteria for installing or creating an Oracle RAC database.
 
cluvfy stage -pre dbinst -n node_list [-r {10gR1 | 10gR2 | 11gR1 | 11gR2}]  [-osdba osdba_group] [-d Oracle_home] [-fixup [-fixupdir fixup_dir]] [-verbose]
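
For example, checking two nodes before an 11gR2 database installation (node names illustrative):

cluvfy stage -pre dbinst -n node1,node2 -r 11gR2 -verbose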


10. Pre-check for configuring Oracle Restart


-- Use the cluvfy stage -pre hacfg command to check a local node before configuring Oracle Restart.
 
cluvfy stage -pre hacfg [-osdba osdba_group] [-orainv orainventory_group] [-fixup [-fixupdir fixup_dir]] [-verbose]
cluvfy stage -pre hacfg -verbose 


11. Post-check for configuring Oracle Restart


-- Use the cluvfy stage -post hacfg command to check the local node after configuring Oracle Restart.
 
cluvfy stage -post hacfg [-verbose]
cluvfy stage -post hacfg -verbose


12. Pre-check for add node.


/*Use the cluvfy stage -pre nodeadd command to verify the specified nodes are configured correctly before adding them to your existing cluster, and to verify the integrity of the cluster before you add the nodes.

This command verifies that the system configuration, such as the operating system version, software patches, packages, and kernel parameters, for the nodes that you want to add, is compatible with the existing cluster nodes, and that the clusterware is successfully operating on the existing nodes. Run this command on any node of the existing cluster.
*/
 
cluvfy stage -pre nodeadd -n node_list [-vip vip_list]  [-fixup [-fixupdir fixup_dir]] [-verbose]
cluvfy stage -pre nodeadd -n node1,node2 -verbose


13. Post-check for add node.


/*
Use the cluvfy stage -post nodeadd command to verify that the specified nodes have been successfully added to the cluster at the network, shared storage, and clusterware levels.
*/
 
cluvfy stage -post nodeadd -n node_list [-verbose]
cluvfy stage -post nodeadd -n node1,node2 -verbose


14. Post-check for node delete.


/*
Use the cluvfy stage -post nodedel command to verify that specific nodes have been successfully deleted from a cluster. Typically, this command verifies that the node-specific interface configuration details have been removed, the nodes are no longer a part of cluster configuration, and proper Oracle ASM cleanup has been performed.
*/
 
cluvfy stage -post nodedel -n node_list [-verbose]
cluvfy stage -post nodedel -n node1,node2 -verbose

~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ***** ~~~~~~~~~~~~~~~
cluvfy comp component_name component_specific_options [-verbose]
~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ***** ~~~~~~~~~~~~~~~


15. Check ACFS integrity


-- Use the cluvfy comp acfs component verification command to check the integrity of Oracle ASM Cluster File System on all nodes in a cluster.
 
cluvfy comp acfs [-n [node_list] | [all]] [-f file_system] [-verbose]
cluvfy comp acfs -n node1,node2 -f /acfs/share -verbose


16. Checks user accounts and administrative permissions


/*
Use the cluvfy comp admprv command to verify user accounts and administrative permissions for installing Oracle Clusterware and Oracle RAC software, and for creating an Oracle RAC database or modifying an Oracle RAC database configuration.
*/
 
cluvfy comp admprv [-n node_list]
{ -o user_equiv [-sshonly] |
 -o crs_inst [-orainv orainventory_group] |
 -o db_inst [-osdba osdba_group] [-fixup [-fixupdir fixup_dir]] | 
 -o db_config -d oracle_home [-fixup [-fixupdir fixup_dir]] }
 [-verbose]
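
For example, verifying SSH user equivalence between two nodes (node names illustrative):

cluvfy comp admprv -n node1,node2 -o user_equiv -sshonly -verbose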


17. Check ASM integrity


Use the cluvfy comp asm component verification command to check the integrity of Oracle Automatic Storage Management (Oracle ASM) on all nodes in the cluster. This check ensures that the ASM instances on the specified nodes are running from the same Oracle home and that asmlib, if it exists, has a valid version and ownership.
 
cluvfy comp asm [-n node_list | all ] [-verbose]
cluvfy comp asm -n node1,node2 -verbose


18. Check CFS integrity


Use the cluvfy comp cfs component verification command to check the integrity of the clustered file system (OCFS for Windows or OCFS2) you provide using the -f option. CVU checks the sharing of the file system from the nodes in the node list.
 
cluvfy comp cfs [-n node_list] -f file_system [-verbose]
cluvfy comp cfs -n node1,node2 -f /ocfs2/share -verbose


19. Check Clock Synchronization


Use the cluvfy comp clocksync component verification command to check clock synchronization across all the nodes in the node list. CVU verifies a time synchronization service is running (Oracle Cluster Time Synchronization Service (CTSS) or Network Time Protocol (NTP)), that each node is using the same reference server for clock synchronization, and that the time offset for each node is within permissible limits.
 
cluvfy comp clocksync [-noctss] [-n node_list [all]] [-verbose]
cluvfy comp clocksync -n node1,node2 -verbose

-noctss
If you specify this option, then CVU does not perform a check on CTSS. Instead, CVU checks the platform's native time synchronization service, such as NTP.


20. Check cluster integrity


Use the cluvfy comp clu component verification command to check the integrity of the cluster on all the nodes in the node list.
 
cluvfy comp clu [-n node_list] [-verbose]
cluvfy comp clu -n node1,node2 -verbose


21. Check cluster manager integrity


Use the cluvfy comp clumgr component verification command to check the integrity of cluster manager subcomponent, or Oracle Cluster Synchronization Services (CSS), on all the nodes in the node list.
 
cluvfy comp clumgr [-n node_list] [-verbose]
cluvfy comp clumgr -n node1,node2 -verbose


22. Check CRS integrity


Run the cluvfy comp crs component verification command to check the integrity of the Cluster Ready Services (CRS) daemon on the specified nodes.
 
cluvfy comp crs [-n node_list] [-verbose]
cluvfy comp crs -n node1,node2 -verbose


23. Check DHCP


Starting with Oracle Database 11g release 2 (11.2.0.2), use the cluvfy comp dhcp component verification command to verify that the DHCP server exists on the network and is capable of providing a required number of IP addresses. This verification also verifies the response time for the DHCP server. You must run this command as root.
 
# cluvfy comp dhcp -clustername cluster_name [-vipresname vip_resource_name] [-port dhcp_port] [-n node_list] [-verbose]

-clustername cluster_name
The name of the cluster of which you want to check the integrity of DHCP.

-vipresname vip_resource_name
The name of the VIP resource.

-port dhcp_port
The port on which DHCP listens. The default port is 67.


24. Check DNS


Starting with Oracle Database 11g release 2 (11.2.0.2), use the cluvfy comp dns component verification command to verify that the Grid Naming Service (GNS) subdomain delegation has been properly set up in the Domain Name Service (DNS) server.
 
Run cluvfy comp dns -server on one node of the cluster. On each node of the cluster run cluvfy comp dns -client to verify the DNS server setup for the cluster.


25. Check HA integrity


Use the cluvfy comp ha component verification command to check the integrity of Oracle Restart on the local node.
 
cluvfy comp ha [-verbose]
cluvfy comp ha -verbose


26. Check space availability


Use the cluvfy comp freespace component verification command to check the free space available in the Oracle Clusterware home storage and ensure that there is at least 5% of the total space available. For example, if the total storage is 10GB, then the check ensures that at least 500MB of it is free.
 
cluvfy comp freespace [-n node_list | all]
cluvfy comp freespace -n node1,node2


27. Check GNS


Use the cluvfy comp gns component verification command to verify the integrity of the Oracle Grid Naming Service (GNS) on the cluster.
 
cluvfy comp gns -precrsinst -domain gns_domain -vip gns_vip [-n node_list]  [-verbose]

cluvfy comp gns -postcrsinst [-verbose]


28. Check GPNP


Use the cluvfy comp gpnp component verification command to check the integrity of Grid Plug and Play on all of the nodes in a cluster.
 
cluvfy comp gpnp [-n node_list] [-verbose]
cluvfy comp gpnp -n node1,node2 -verbose


29. Check healthcheck


Use the cluvfy comp healthcheck component verification command to check your Oracle Clusterware and Oracle Database installations for their compliance with mandatory requirements and best practices guidelines, and to ensure that they are functioning properly.
 
cluvfy comp healthcheck [-collect {cluster|database}] [-db db_unique_name]
 [-bestpractice|-mandatory] [-deviations] [-html] [-save [-savedir directory_path]]
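
For example, collecting cluster-wide best practice findings into an HTML report (the save directory is illustrative):

cluvfy comp healthcheck -collect cluster -bestpractice -html -save -savedir /tmp/cvu_report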


30. Checks node applications existence


Use the cluvfy comp nodeapp component verification command to check for the existence of node applications, namely VIP, NETWORK, ONS, and GSD, on all of the specified nodes.
 
cluvfy comp nodeapp [-n node_list] [-verbose]
cluvfy comp nodeapp -n node1,node2 -verbose


31. Check node connectivity


Use the cluvfy comp nodecon component verification command to check the connectivity among the nodes specified in the node list. If you provide an interface list, then CVU checks the connectivity using only the specified interfaces.
 
cluvfy comp nodecon -n node_list [-i interface_list] [-verbose]
cluvfy comp nodecon -i eth2 -n node1,node2 -verbose
cluvfy comp nodecon -i eth3 -n node1,node2 -verbose


32. Checks reachability between nodes


Use the cluvfy comp nodereach component verification command to check the reachability of specified nodes from a source node.
 
cluvfy comp nodereach -n node_list [-srcnode node] [-verbose]

-srcnode node
The name of the source node from which CVU performs the reachability test. If you do not specify a source node, then the node on which you run the command is used as the source node.
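
For example (node names illustrative):

cluvfy comp nodereach -n node1,node2 -srcnode node1 -verbose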


33. Check OCR integrity


Use the cluvfy comp ocr component verification command to check the integrity of Oracle Cluster Registry (OCR) on all the specified nodes.
 
cluvfy comp ocr [-n node_list] [-verbose]
cluvfy comp ocr -n node1,node2 -verbose


34. Check OHASD integrity


Use the cluvfy comp ohasd component verification command to check the integrity of the Oracle High Availability Services daemon.
 
cluvfy comp ohasd [-n node_list] [-verbose]
cluvfy comp ohasd -n node1,node2 -verbose


35. Check OLR integrity


Use the cluvfy comp olr component verification command to check the integrity of Oracle Local Registry (OLR) on the local node.
 
cluvfy comp olr [-verbose]
cluvfy comp olr -verbose


36. Check node comparison and verification


Use the cluvfy comp peer component verification command to check the compatibility and properties of the specified nodes against a reference node. You can check compatibility for non-default user group names and for different releases of the Oracle software. This command compares physical attributes, such as memory and swap space, as well as user and group values, kernel settings, and installed operating system packages.
 
cluvfy comp peer -n node_list [-refnode node]  [-r {10gR1 | 10gR2 | 11gR1 | 11gR2}] [-orainv orainventory_group]  [-osdba osdba_group] [-verbose]

-refnode
The node that CVU uses as a reference for checking compatibility with other nodes. If you do not specify this option, then CVU reports values for all the nodes in the node list.
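
For example, comparing node2 against node1 as the reference node for an 11gR2 installation (node names illustrative):

cluvfy comp peer -n node2 -refnode node1 -r 11gR2 -verbose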


37. Checks SCAN configuration


Use the cluvfy comp scan component verification command to check the Single Client Access Name (SCAN) configuration.
 
cluvfy comp scan -verbose


38. Checks software component verification


Use the cluvfy comp software component verification command to check the files and attributes installed with the Oracle software.
 
cluvfy comp software [-n node_list] [-d oracle_home] [-r {10gR1 | 10gR2 | 11gR1 | 11gR2}] [-verbose]
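
For example (node names illustrative):

cluvfy comp software -n node1,node2 -verbose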


39. Checks space availability


Use the cluvfy comp space component verification command to check for free disk space at the location you specify in the -l option on all the specified nodes.
 
cluvfy comp space [-n node_list] -l storage_location -z disk_space {B | K | M | G} [-verbose]

cluvfy comp space -n all -l /u01/oracle -z 2g -verbose


40. Checks shared storage accessibility


Use the cluvfy comp ssa component verification command to discover and check the sharing of the specified storage locations. CVU checks sharing for nodes in the node list.
 
cluvfy comp ssa [-n node_list] [-s storageID_list] [-t {software | data | ocr_vdisk}] [-verbose]

cluvfy comp ssa -n node1,node2 -verbose
cluvfy comp ssa -n node1,node2 -s /dev/sdb


41. Check minimum system requirements


Use the cluvfy comp sys component verification command to check that the minimum system requirements are met for the specified product on all the specified nodes.
 
cluvfy comp sys [-n node_list] -p {crs | ha | database}  [-r {10gR1 | 10gR2 | 11gR1 | 11gR2}] [-osdba osdba_group]  [-orainv orainventory_group] [-fixup [-fixupdir fixup_dir]] [-verbose]

cluvfy comp sys -n node1,node2 -p crs -verbose
cluvfy comp sys -n node1,node2 -p ha -verbose
cluvfy comp sys -n node1,node2 -p database -verbose


42. Check Voting Disk Udev settings


Use the cluvfy comp vdisk component verification command to check the voting disks configuration and the udev settings for the voting disks on all the specified nodes.
 
cluvfy comp vdisk [-n node_list] [-verbose]
cluvfy comp vdisk -n node1,node2 -verbose


43. Run cluvfy before doing an upgrade

runcluvfy stage -pre crsinst -upgrade -n node_list -rolling -src_crshome src_crs_home -dest_crshome dest_crs_home -dest_version dest_version -verbose
runcluvfy stage -pre crsinst -upgrade -n rac1,rac2 -rolling -src_crshome /u01/app/grid/11.2.0.1 -dest_crshome /u01/app/grid/11.2.0.3 -dest_version 11.2.0.4.0 -verbose


44. Strace the command

Strace the command to get more details
eg: strace -t -f -o clu.trc cluvfy comp olr -verbose
/*
[oracle@rac1 ~]$ strace -t -f -o clu.trc cluvfy comp olr -verbose

Verifying OLR integrity

Checking OLR integrity...

Checking OLR config file...

OLR config file check successful


Checking OLR file attributes...

OLR file check successful


WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

OLR integrity check passed

Verification of OLR integrity was successful.
[oracle@rac1 ~]$ ls -ltr clu.trc
-rw-r--r-- 1 oracle oinstall 4206376 Jun 12 01:15 clu.trc
[oracle@rac1 ~]$

*/

Reference:
http://docs.oracle.com/cd/E11882_01/rac.112/e41959/cvu.htm#CWADD1100

Page still under construction !!!