
Configure UDEV Rules for Oracle ASM


Table of Contents
___________________________________________________________________________________________________

Pre-requisites:

a. Create OS user and groups for Oracle ASM disk owner
b. Add RAW disks to server (Check with sysadmin)

Configure UDEV Rules for Oracle ASM:

1. List the disks
2. Create partitions for the disks
3. Load updated block device partition tables
4. Find SCSI ID
5. Create udev rules
6. Reload the udev rules
7. List oracleasm disks

___________________________________________________________________________________________________


Pre-requisites


a. Create OS user and groups for Oracle ASM disk owner

http://www.br8dba.com/create-users-groups-and-paths-for-oracle-rac/


b. Add RAW disks to server

Operating System : Red Hat Enterprise Linux release 8.7

Storage:
/dev/sda   100G   for Linux and others
/dev/sdb   100G   for /u01
/dev/sdc   100G   for /orabackup
/dev/sdd   100G   for ASM DISK
/dev/sde   100G   for ASM DISK


Configure UDEV Rules for Oracle ASM

If the ASMLib kernel driver is not available, we have to use udev rules to present the disks to Oracle ASM.

Setting up udev rules for Oracle ASM is not complicated: all you need is the udevadm command and one rules file to edit.


1. List the disks

[root@testbox ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0  100G  0 disk
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0   99G  0 part
  ├─ol-root 252:0    0 61.2G  0 lvm  /
  ├─ol-swap 252:1    0  7.9G  0 lvm  [SWAP]
  └─ol-home 252:2    0 29.9G  0 lvm  /home
sdb           8:16   0  100G  0 disk
└─sdb1        8:17   0  100G  0 part /u01
sdc           8:32   0  100G  0 disk
└─sdc1        8:33   0  100G  0 part /orabackup
sdd           8:48   0  100G  0 disk
sde           8:64   0  100G  0 disk
sr0          11:0    1 47.3M  0 rom
[root@testbox ~]#


2. Create partitions for the disks

fdisk /dev/sdd
fdisk /dev/sde

[root@testbox ~]# fdisk /dev/sdd

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x8c782d71.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p):

Using default response p.
Partition number (1-4, default 1):
First sector (2048-209715199, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-209715199, default 209715199):

Created a new partition 1 of type 'Linux' and of size 100 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@testbox ~]#
[root@testbox ~]# fdisk /dev/sde

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x36462c72.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p):

Using default response p.
Partition number (1-4, default 1):
First sector (2048-209715199, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-209715199, default 209715199):

Created a new partition 1 of type 'Linux' and of size 100 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@testbox ~]#
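The interactive fdisk sessions above can also be scripted. A minimal non-interactive sketch using parted (assuming the same /dev/sdd and /dev/sde devices and a single primary partition spanning each disk):

# Hedged sketch: create one primary partition per disk, non-interactively
for dev in /dev/sdd /dev/sde; do
    parted -s $dev mklabel msdos mkpart primary 2048s 100%
done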


3. Load updated block device partition tables

# For Linux 5, 6 and 7

# /sbin/partprobe /dev/sdd1
# /sbin/partprobe /dev/sde1

# For Linux 8

[root@testbox ~]# /sbin/partx -u /dev/sdd1
[root@testbox ~]#
[root@testbox ~]# /sbin/partx -u /dev/sde1
[root@testbox ~]#


4. Find SCSI ID

[root@testbox ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdd
1ATA_VBOX_HARDDISK_VB0adc00d9-c5938e95
[root@testbox ~]#

[root@testbox ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sde
1ATA_VBOX_HARDDISK_VBdaa5e829-52e4b9b1
[root@testbox ~]#
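To capture the IDs of several candidate disks in one pass, a small loop works too (a sketch assuming the same sdd/sde devices):

# Print the SCSI ID of each ASM candidate disk
for dev in sdd sde; do
    echo -n "/dev/$dev : "
    /usr/lib/udev/scsi_id -g -u -d /dev/$dev
done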


5. Create udev rules

[root@testbox ~]# ls -ltr /etc/udev/rules.d
total 12
-rw-r--r--. 1 root root  67 Oct  2 18:03 69-vdo-start-by-dev.rules
-rw-r--r--. 1 root root 148 Nov  9 06:11 99-vmware-scsi-timeout.rules
-rw-r--r--. 1 root root 134 Apr  1 07:52 60-vboxadd.rules
[root@testbox ~]#

vi /etc/udev/rules.d/99-oracle-asmdevices.rules
Add the below lines and then save the file.

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB0adc00d9-c5938e95", SYMLINK+="oracleasm/disks/DISK01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBdaa5e829-52e4b9b1", SYMLINK+="oracleasm/disks/DISK02", OWNER="grid", GROUP="asmadmin", MODE="0660"


[root@testbox ~]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB0adc00d9-c5938e95", SYMLINK+="oracleasm/disks/DISK01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBdaa5e829-52e4b9b1", SYMLINK+="oracleasm/disks/DISK02", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@testbox ~]#

[root@testbox ~]# ls -ltr /etc/udev/rules.d/99-oracle-asmdevices.rules
-rw-r--r--. 1 root root 428 Apr  2 02:18 /etc/udev/rules.d/99-oracle-asmdevices.rules
[root@testbox ~]#
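Before reloading anything, the new rule can be dry-run against one partition with udevadm test; a hedged sanity check (sysfs path assumed for sdd1):

/sbin/udevadm test /sys/block/sdd/sdd1 2>&1 | grep -i oracleasm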


6. Reload the udev rules

The below commands reload the complete udev configuration and trigger all udev rules.
On a busy production system this could disrupt ongoing operations and the applications running on the server, so please run them only during a scheduled maintenance window.

[root@testbox ~]# /sbin/udevadm control --reload-rules
[root@testbox ~]#
[root@testbox ~]# ls -ld /dev/sd*1
brw-rw----. 1 root disk 8,  1 Apr  2 02:23 /dev/sda1
brw-rw----. 1 root disk 8, 17 Apr  2 02:23 /dev/sdb1
brw-rw----. 1 root disk 8, 33 Apr  2 02:23 /dev/sdc1
brw-rw----. 1 root disk 8, 49 Apr  2 02:23 /dev/sdd1
brw-rw----. 1 root disk 8, 65 Apr  2 02:23 /dev/sde1
[root@testbox ~]#
[root@testbox ~]# /sbin/udevadm trigger
[root@testbox ~]#
[root@testbox ~]# ls -ld /dev/sd*1
brw-rw----. 1 root disk     8,  1 Apr  2 02:34 /dev/sda1
brw-rw----. 1 root disk     8, 17 Apr  2 02:34 /dev/sdb1
brw-rw----. 1 root disk     8, 33 Apr  2 02:34 /dev/sdc1
brw-rw----. 1 grid asmadmin 8, 49 Apr  2 02:34 /dev/sdd1
brw-rw----. 1 grid asmadmin 8, 65 Apr  2 02:34 /dev/sde1
[root@testbox ~]#
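If triggering the complete rule set on a busy system is a concern, udevadm can also be scoped to just the ASM partitions; a hedged sketch (same device names as above):

/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --action=change --name-match=/dev/sdd1
/sbin/udevadm trigger --action=change --name-match=/dev/sde1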


7. List the oracleasm disks

[root@testbox ~]# ls -ltra /dev/oracleasm/disks/*
lrwxrwxrwx. 1 root root 10 Apr  2 02:34 /dev/oracleasm/disks/DISK01 -> ../../sdd1
lrwxrwxrwx. 1 root root 10 Apr  2 02:34 /dev/oracleasm/disks/DISK02 -> ../../sde1
[root@testbox ~]#

[root@testbox ~]# ls -ld /dev/sd*1
brw-rw----. 1 root disk     8,  1 Apr  2 02:34 /dev/sda1
brw-rw----. 1 root disk     8, 17 Apr  2 02:34 /dev/sdb1
brw-rw----. 1 root disk     8, 33 Apr  2 02:34 /dev/sdc1
brw-rw----. 1 grid asmadmin 8, 49 Apr  2 02:34 /dev/sdd1
brw-rw----. 1 grid asmadmin 8, 65 Apr  2 02:34 /dev/sde1
[root@testbox ~]#

Note: the symbolic links are owned by root, but the underlying block devices are owned by grid:asmadmin

 

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

Thank you,
Rajasekhar Amudala
Email: br8dba@gmail.com
Linkedin: https://www.linkedin.com/in/rajasekhar-amudala/

Create users, groups and Paths for Oracle RAC


Table of Contents
___________________________________________________________________________________________________

Note: When you create the users and groups, make sure that you specify user and group IDs that are not already in use.

Create the necessary Oracle groups and users. Be sure to assign the same group ID, user ID, and home directory for each user on every node.

Step 1: Create groups
Step 2: Create users
Step 3: Verify users and groups
Step 4: Create directory Paths for grid and oracle installation
Step 5: Verify ownership and permissions

___________________________________________________________________________________________________


Step 1: Create groups

[root@testbox ~]# /usr/sbin/groupadd -g 3001 oinstall
[root@testbox ~]# /usr/sbin/groupadd -g 3002 dba
[root@testbox ~]# /usr/sbin/groupadd -g 3003 asmadmin
[root@testbox ~]# /usr/sbin/groupadd -g 3004 asmdba
[root@testbox ~]# /usr/sbin/groupadd -g 3005 asmoper
[root@testbox ~]#


Step 2: Create users

[root@testbox ~]# /usr/sbin/useradd -u 3000 -g oinstall -G asmdba,dba,asmadmin,asmoper grid
[root@testbox ~]# /usr/sbin/useradd -u 3001 -g oinstall -G asmdba,dba,asmadmin oracle
[root@testbox ~]#


Step 3: Verify users and groups

[root@testbox ~]# id oracle
uid=3001(oracle) gid=3001(oinstall) groups=3001(oinstall),3002(dba),3003(asmadmin),3004(asmdba)
[root@testbox ~]#
[root@testbox ~]# id grid
uid=3000(grid) gid=3001(oinstall) groups=3001(oinstall),3002(dba),3003(asmadmin),3004(asmdba),3005(asmoper)
[root@testbox ~]#

[root@testbox ~]# grep oracle /etc/passwd
oracle:x:3001:3001::/home/oracle:/bin/bash
[root@testbox ~]#

[root@testbox ~]# grep grid /etc/passwd
grid:x:3000:3001::/home/grid:/bin/bash
[root@testbox ~]#

[root@testbox ~]# ls -ld /home/grid
drwx------. 3 grid oinstall 78 Apr  2 02:06 /home/grid
[root@testbox ~]#
[root@testbox ~]# ls -ld /home/oracle
drwx------. 3 oracle oinstall 78 Apr  2 02:06 /home/oracle
[root@testbox ~]#
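Since the note above requires identical IDs on every node, a quick cross-node check helps; a hedged sketch (node names rac1/rac2 are assumptions, this article's box is testbox):

# Verify that UIDs/GIDs match across all cluster nodes
for host in rac1 rac2; do
    echo "== $host =="
    ssh root@$host 'id grid; id oracle'
done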


Step 4: Create directory Paths for grid and oracle installation

# mkdir -p /u01/app/grid ( ORACLE_BASE for GRID HOME )
# mkdir -p /u01/app/19.0.0/grid         ( GRID_HOME )
# chown -R grid:oinstall /u01

# mkdir -p /u01/app/oracle              ( ORACLE_BASE for ORACLE HOME ) 
# mkdir -p /u01/app/oracle/product/19.0.0/dbhome_1 ( ORACLE HOME )
# chown -R oracle:oinstall /u01/app/oracle

# chmod -R 775 /u01/

[root@testbox ~]# mkdir -p /u01/app/19.0.0/grid
[root@testbox ~]# mkdir -p /u01/app/grid
[root@testbox ~]# chown -R grid:oinstall /u01
[root@testbox ~]#
[root@testbox ~]# mkdir -p /u01/app/oracle
[root@testbox ~]# mkdir -p /u01/app/oracle/product/19.0.0/dbhome_1
[root@testbox ~]# chown -R oracle:oinstall /u01/app/oracle
[root@testbox ~]# chmod -R 775 /u01/
[root@testbox ~]#


Step 5: Verify ownership and permissions

[root@testbox ~]# ls -ld /u01/app/19.0.0/grid
drwxrwxr-x. 2 grid oinstall 4096 Apr  2 03:08 /u01/app/19.0.0/grid
[root@testbox ~]# ls -ld /u01/app/grid
drwxrwxr-x. 2 grid oinstall 4096 Apr  2 03:08 /u01/app/grid
[root@testbox ~]#

[root@testbox ~]# ls -ld /u01/app/oracle
drwxrwxr-x. 3 oracle oinstall 4096 Apr  2 03:10 /u01/app/oracle
[root@testbox ~]# ls -ld /u01/app/oracle/product/19.0.0/dbhome_1
drwxrwxr-x. 2 oracle oinstall 4096 Apr  2 03:10 /u01/app/oracle/product/19.0.0/dbhome_1
[root@testbox ~]#

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

Thank you,
Rajasekhar Amudala
Email: br8dba@gmail.com
Linkedin: https://www.linkedin.com/in/rajasekhar-amudala/

Create ACFS File System on RAC


Table of Contents
___________________________________________________________________________________________________

1. Overview
2. Environment
3. Verify ACFS modules
4. Create ASM Disk group
5. Create Volume
6. Create File System
7. Register File System on OCR
8. Verify Mount Point on All Nodes
9. Find ACFS mountpoints
10. Unmount ACFS filesystem
11. Start/Stop ACFS filesystem
_________________________________________________________________________________________________


1. Overview

ACFS Support On OS Platforms (Certification Matrix). (Doc ID 1369107.1)

Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is a multi-platform, scalable file system, and storage management technology that extends Oracle Automatic Storage Management (Oracle ASM) functionality to support customer files maintained outside of Oracle Database. 

Oracle ACFS supports many database and application files, including executables, database trace files, database alert logs, application reports, BFILEs, and configuration files. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data.

Oracle ACFS does not support files for the Oracle Grid Infrastructure home.

Oracle ACFS does not support Oracle Cluster Registry (OCR) and voting files.

Oracle ACFS functionality requires that the disk group compatibility attributes for ASM and ADVM be set to 11.2 or greater.


2. Environment

Nodes         : RAC1, RAC2
GI Version    : 12.2
RDBMS Version : 12.2


3. Verify ACFS modules

[root@rac1 ~]# lsmod | grep ora
oracleacfs           4616192  0
oracleadvm            782336  0
oracleoks             655360  2 oracleacfs,oracleadvm
oracleasm              65536  1
[root@rac1 ~]#

[root@rac2 ~]# lsmod | grep ora
oracleacfs           4616192  0
oracleadvm            782336  0
oracleoks             655360  2 oracleacfs,oracleadvm
oracleasm              61440  1
[root@rac2 ~]#
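If the ACFS/ADVM modules do not show up in lsmod, they can usually be loaded manually as root with acfsload; a hedged sketch (Grid home path assumed from this environment):

/u01/app/grid/product/12.2/bin/acfsload start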


4. Create ASM Diskgroup

SQL> set lines 250
set pages 9999
column path format a20

select path, group_number group_#, disk_number disk_#, mount_status,header_status, state, total_mb, free_mb from v$asm_disk order by group_number;

PATH                                        GROUP_#     DISK_# MOUNT_S HEADER_STATU STATE      TOTAL_MB    FREE_MB
---------------------------------------- ---------- ---------- ------- ------------ -------- ---------- ----------
/dev/oracleasm/disks/DISK9                        0          0 CLOSED  PROVISIONED  NORMAL            0          0
/dev/oracleasm/disks/DISK8                        0          1 CLOSED  PROVISIONED  NORMAL            0          0
/dev/oracleasm/disks/DISK7                        1          1 CACHED  MEMBER       NORMAL         1020        948
/dev/oracleasm/disks/DISK6                        1          0 CACHED  MEMBER       NORMAL         1020        936
/dev/oracleasm/disks/DISK5                        2          4 CACHED  MEMBER       NORMAL         1020        512
/dev/oracleasm/disks/DISK3                        2          2 CACHED  MEMBER       NORMAL         1020        524
/dev/oracleasm/disks/DISK4                        2          3 CACHED  MEMBER       NORMAL         1020        508
/dev/oracleasm/disks/DISK2                        2          1 CACHED  MEMBER       NORMAL         1020        516
/dev/oracleasm/disks/DISK1                        2          0 CACHED  MEMBER       NORMAL         1020        528
/dev/oracleasm/disks/GIMR3                        3          3 CACHED  MEMBER       NORMAL        10236      10144
/dev/oracleasm/disks/GIMR4                        3          2 CACHED  MEMBER       NORMAL        10236      10160
/dev/oracleasm/disks/GIMR1                        3          1 CACHED  MEMBER       NORMAL        10236      10140
/dev/oracleasm/disks/GIMR2                        3          0 CACHED  MEMBER       NORMAL        10236      10116

13 rows selected.

SQL>

Find ASM physical disk mapping

[root@rac1 ~]# oracleasm querydisk -d DISK8
Disk "DISK8" is a valid ASM disk on device [8,129]
[root@rac1 ~]# oracleasm querydisk -d DISK9
Disk "DISK9" is a valid ASM disk on device [8,145]
[root@rac1 ~]# ls -l /dev | grep 8, | grep 129
brw-rw----. 1 root disk       8, 129 Sep 21 14:31 sdi1 <---
[root@rac1 ~]#
[root@rac1 ~]# ls -l /dev | grep 8, | grep 145
brw-rw----. 1 root disk       8, 145 Sep 21 14:31 sdj1 <---
[root@rac1 ~]#

[OR]

[root@rac1 ~]# oracleasm querydisk -p DISK8 | head -2 | grep /dev | awk -F: '{print $1}'
/dev/sdi1
[root@rac1 ~]#
[root@rac1 ~]# oracleasm querydisk -p DISK9 | head -2 | grep /dev | awk -F: '{print $1}'
/dev/sdj1
[root@rac1 ~]#

[OR]

#!/bin/bash
# Print the physical device path behind every ASMLib disk label
echo "ASM Disk Mappings"
echo "----------------------------------------------------"
for f in `oracleasm listdisks`
do
    dp=`oracleasm querydisk -p $f | head -2 | grep /dev | awk -F: '{print $1}'`
    echo "$f: $dp"
done

[OR]

[root@rac1 ~]# oracleasm querydisk -p DISK8
Disk "DISK8" is a valid ASM disk
/dev/sdi1: LABEL="DISK8" TYPE="oracleasm"
[root@rac1 ~]#
[root@rac1 ~]# oracleasm querydisk -p DISK9
Disk "DISK9" is a valid ASM disk
/dev/sdj1: LABEL="DISK9" TYPE="oracleasm"
[root@rac1 ~]#

[root@rac1 ~]# fdisk -l /dev/sdi1

Disk /dev/sdi1: 1072 MB, 1072693248 bytes, 2095104 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@rac1 ~]#
[root@rac1 ~]# fdisk -l /dev/sdj1

Disk /dev/sdj1: 1072 MB, 1072693248 bytes, 2095104 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@rac1 ~]#

SQL> SELECT NAME,VALUE,GROUP_NUMBER FROM v$asm_attribute where name like '%com%';

NAME                           VALUE      GROUP_NUMBER
------------------------------ ---------- ------------
compatible.asm                 12.2.0.1.0            1
compatible.rdbms               10.1.0.0.0            1
compatible.advm                12.2.0.1.0            1
compatible.asm                 12.2.0.1.0            2
compatible.rdbms               10.1.0.0.0            2
compatible.advm                12.2.0.1.0            2
compatible.asm                 12.2.0.1.0            3
compatible.rdbms               10.1.0.0.0            3
compatible.advm                12.2.0.1.0            3

9 rows selected.

SQL>

SQL> CREATE DISKGROUP ACFSDG EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/DISK8','/dev/oracleasm/disks/DISK9'
ATTRIBUTE 'compatible.asm' = '12.2.0.1.0',
'compatible.rdbms'='10.1.0.0.0' ,
'compatible.advm' = '12.2.0.1.0';

Diskgroup created.

SQL>

SQL> SELECT NAME,VALUE,GROUP_NUMBER FROM v$asm_attribute where name like '%com%';

NAME                           VALUE      GROUP_NUMBER
------------------------------ ---------- ------------
compatible.asm                 12.2.0.1.0            1
compatible.rdbms               10.1.0.0.0            1
compatible.advm                12.2.0.1.0            1

compatible.asm                 12.2.0.1.0            2
compatible.rdbms               10.1.0.0.0            2
compatible.advm                12.2.0.1.0            2

compatible.asm                 12.2.0.1.0            3
compatible.rdbms               10.1.0.0.0            3
compatible.advm                12.2.0.1.0            3

compatible.asm                 12.2.0.1.0            4
compatible.rdbms               10.1.0.0.0            4
compatible.advm                12.2.0.1.0            4

12 rows selected.

SQL>

SQL> COL % FORMAT 99.0
SQL> SELECT name, free_mb, total_mb, ((total_mb-free_mb)/total_mb)*100 as "USED %", free_mb/total_mb*100 "FREE%" from v$asm_diskgroup order by 1;

NAME                              FREE_MB   TOTAL_MB     USED %      FREE%
------------------------------ ---------- ---------- ---------- ----------
ACFSDG                               1989       2046 2.78592375 97.2140762
ARCH                                 1884       2040 7.64705882 92.3529412
DATA                                 2556       5100 49.8823529 50.1176471
GIMR                                40560      40944 .937866354 99.0621336

SQL>

SQL> select path, group_number group_#, disk_number disk_#, mount_status,header_status, state, total_mb, free_mb from v$asm_disk order by group_number;

PATH                              GROUP_#     DISK_# MOUNT_S HEADER_STATU STATE      TOTAL_MB    FREE_MB
------------------------------ ---------- ---------- ------- ------------ -------- ---------- ----------
/dev/oracleasm/disks/DISK7              1          1 CACHED  MEMBER       NORMAL         1020        760
/dev/oracleasm/disks/DISK6              1          0 CACHED  MEMBER       NORMAL         1020        748
/dev/oracleasm/disks/DISK2              2          1 CACHED  MEMBER       NORMAL         1020        500
/dev/oracleasm/disks/DISK3              2          2 CACHED  MEMBER       NORMAL         1020        504
/dev/oracleasm/disks/DISK4              2          3 CACHED  MEMBER       NORMAL         1020        496
/dev/oracleasm/disks/DISK5              2          4 CACHED  MEMBER       NORMAL         1020        496
/dev/oracleasm/disks/DISK1              2          0 CACHED  MEMBER       NORMAL         1020        512
/dev/oracleasm/disks/GIMR4              3          2 CACHED  MEMBER       NORMAL        10236      10160
/dev/oracleasm/disks/GIMR3              3          3 CACHED  MEMBER       NORMAL        10236      10144
/dev/oracleasm/disks/GIMR1              3          1 CACHED  MEMBER       NORMAL        10236      10136
/dev/oracleasm/disks/GIMR2              3          0 CACHED  MEMBER       NORMAL        10236      10116
/dev/oracleasm/disks/DISK8              4          0 CACHED  MEMBER       NORMAL         1023        459
/dev/oracleasm/disks/DISK9              4          1 CACHED  MEMBER       NORMAL         1023        459

13 rows selected.

SQL>

[root@rac1 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@rac1 ~]# crsctl stat res ora.ACFSDG.dg -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFSDG.dg
               ONLINE  ONLINE       rac1                     STABLE
               OFFLINE OFFLINE      rac2                     STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]#

--- Log on to Node 2

[oracle@rac2 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM2
The Oracle base has been set to /u01/app/oracle
[oracle@rac2 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Mon Sep 21 22:37:44 2020

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> ALTER DISKGROUP ACFSDG MOUNT;

Diskgroup altered.

SQL>

[root@rac1 ~]# crsctl stat res ora.ACFSDG.dg -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFSDG.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]#


5. Create Volume

[oracle@rac1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac1 ~]$ asmcmd  volcreate -G ACFSDG -s 1G acfs_test
[oracle@rac1 ~]$

[oracle@rac1 ~]$ asmcmd volinfo -G ACFSDG acfs_test
Diskgroup Name: ACFSDG

         Volume Name: ACFS_TEST
         Volume Device: /dev/asm/acfs_test-463
         State: ENABLED
         Size (MB): 1024
         Resize Unit (MB): 64
         Redundancy: UNPROT
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:

[oracle@rac1 ~]$

[root@rac1 ~]# crsctl stat res -t | grep -i "advm"
ora.ACFSDG.ACFS_TEST.advm
ora.proxy_advm
[root@rac1 ~]# crsctl stat res ora.ACFSDG.ACFS_TEST.advm -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFSDG.ACFS_TEST.advm
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]#

[root@rac1 ~]# fdisk -l /dev/asm/acfs_test-463

Disk /dev/asm/acfs_test-463: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@rac1 ~]#

[root@rac2 ~]# fdisk -l /dev/asm/acfs_test-463

Disk /dev/asm/acfs_test-463: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@rac2 ~]#


6. Create ACFS filesystem

[oracle@rac1 ~]$ mkfs -t acfs /dev/asm/acfs_test-463
mkfs.acfs: version                   = 12.2.0.1.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/acfs_test-463
mkfs.acfs: volume size               = 1073741824  (   1.00 GB )
mkfs.acfs: Format complete.
[oracle@rac1 ~]$


7. Register File System on OCR

[root@rac1 ~]# mkdir -p /acfs_test
[root@rac1 ~]# chown oracle:oinstall /acfs_test

[root@rac1 ~]# /sbin/acfsutil registry -a  /dev/asm/acfs_test-463 /acfs_test -u oracle
acfsutil registry: mount point /acfs_test successfully added to Oracle Registry
[root@rac1 ~]#

[OR] -- The above command is equivalent to the below srvctl command.

[root@rac1 ~]# . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base remains unchanged with value /u01/app/oracle
[root@rac1 ~]# which srvctl
/u01/app/grid/product/12.2/bin/srvctl
[root@rac1 ~]# srvctl add filesystem -d /dev/asm/acfs_test-463 -m /acfs_test -u oracle -fstype ACFS  -autostart ALWAYS


[root@rac1 ~]# crsctl stat res -t | grep -i "acfsdg"
ora.ACFSDG.ACFS_TEST.advm
ora.ACFSDG.dg
ora.acfsdg.acfs_test.acfs
[root@rac1 ~]#
[root@rac1 ~]# crsctl stat res ora.acfsdg.acfs_test.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.acfsdg.acfs_test.acfs
               ONLINE  ONLINE       rac1                     mounted on /acfs_tes
                                                             t,STABLE
               ONLINE  ONLINE       rac2                     mounted on /acfs_tes
                                                             t,STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]#


8. Verify Mount Point on All Nodes

[oracle@rac1 ~]$ df -h /acfs_test
Filesystem              Size  Used Avail Use% Mounted on
/dev/asm/acfs_test-463  1.0G  487M  538M  48% /acfs_test
[oracle@rac1 ~]$
[oracle@rac1 ~]$ touch /acfs_test/raj
[oracle@rac1 ~]$ ls -ltr /acfs_test/raj
-rw-r--r--. 1 oracle oinstall 0 Sep 21 21:20 /acfs_test/raj
[oracle@rac1 ~]$

[oracle@rac2 ~]$ df -h /acfs_test
Filesystem              Size  Used Avail Use% Mounted on
/dev/asm/acfs_test-463  1.0G  487M  538M  48% /acfs_test
[oracle@rac2 ~]$
[oracle@rac2 ~]$ ls -ltr /acfs_test/raj
-rw-r--r--. 1 oracle oinstall 0 Sep 21 21:20 /acfs_test/raj
[oracle@rac2 ~]$

[root@rac1 ~]# srvctl status filesystem -d /dev/asm/acfs_test-463
ACFS file system /acfs_test is mounted on nodes rac1,rac2
[root@rac1 ~]#

[root@rac1 ~]# srvctl config filesystem
Volume device: /dev/asm/acfs_test-463
Diskgroup name: acfsdg
Volume name: acfs_test
Canonical volume device: /dev/asm/acfs_test-463
Accelerator volume devices:
Mountpoint path: /acfs_test
Mount point owner: oracle
Mount users:
Type: ACFS
Mount options:
Description:
ACFS file system is enabled
ACFS file system is individually enabled on nodes:
ACFS file system is individually disabled on nodes:
[root@rac1 ~]#


9. Find ACFS mountpoints

[oracle@rac1 ~]$ /sbin/acfsutil registry -l
Device : /dev/asm/acfs_test-463 : Mount Point : /acfs_test : Options : none : Nodes : all : Disk Group: ACFSDG : Primary Volume : ACFS_TEST : Accelerator Volumes :
[oracle@rac1 ~]$

[oracle@rac1 ~]$ asmcmd volinfo -G ACFSDG ACFS_TEST
Diskgroup Name: ACFSDG

         Volume Name: ACFS_TEST
         Volume Device: /dev/asm/acfs_test-463
         State: ENABLED
         Size (MB): 1024
         Resize Unit (MB): 64
         Redundancy: UNPROT
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /acfs_test

[oracle@rac1 ~]$

[oracle@rac1 ~]$ mount -t acfs
/dev/asm/acfs_test-463 on /acfs_test type acfs (rw,relatime,device,rootsuid,ordered)
[oracle@rac1 ~]$


10. Unmount ACFS filesystem

[oracle@rac1 ~]$ /sbin/acfsutil registry -l
Device : /dev/asm/acfs_test-463 : Mount Point : /acfs_test : Options : none : Nodes : all : Disk Group: ACFSDG : Primary Volume : ACFS_TEST : Accelerator Volumes :
[oracle@rac1 ~]$

[root@rac1 ~]# umount /dev/asm/acfs_test-463
[root@rac1 ~]#

[root@rac2 ~]# umount /dev/asm/acfs_test-463
[root@rac2 ~]#

[root@rac1 ~]# crsctl stat res ora.acfsdg.acfs_test.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.acfsdg.acfs_test.acfs
               OFFLINE OFFLINE      rac1                     admin unmounted /acf
                                                             s_test,STABLE
               OFFLINE OFFLINE      rac2                     admin unmounted /acf
                                                             s_test,STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]#


More information:

[root@rac1 ~]#  umount /dev/asm/acfs_test-463
umount: /acfs_test: target is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
umount.acfs: CLSU-00100: operating system function: OfsWaitProc failed with error data: 32
umount.acfs: CLSU-00101: operating system error message: Broken pipe
umount.acfs: CLSU-00103: error location: OWPR_1
umount.acfs: ACFS-04151: unmount of mount point /acfs_test failed
[root@rac1 ~]#

[root@rac1 ~]# lsof | grep /acfs_test
bash       6022                 root  cwd       DIR         248,237057     32768                    2 /acfs_test
vi        30169                 root  cwd       DIR         248,237057     32768                    2 /acfs_test
vi        30169                 root    3u      REG         248,237057     12288                   77 /acfs_test/.test.swp
[root@rac1 ~]#
[root@rac1 ~]#

After stopping or killing these processes, the unmount should go through:

[root@rac1 ~]# kill -9 30169
[root@rac1 ~]# kill -9 6022
[root@rac1 ~]# lsof | grep /acfs_test
[root@rac1 ~]#
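Alternatively, fuser can list and kill the holders of the mount point in one step; a hedged sketch:

/sbin/fuser -km /acfs_test    # -m: processes using the mount, -k: send SIGKILL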


11. Start/Stop ACFS filesystem

[root@rac1 ~]# /sbin/acfsutil registry -l
Device : /dev/asm/acfs_test-463 : Mount Point : /acfs_test : Options : none : Nodes : all : Disk Group: ACFSDG : Primary Volume : ACFS_TEST : Accelerator Volumes :
[root@rac1 ~]#

[root@rac1 ~]# srvctl start filesystem -d /dev/asm/acfs_test-463
[root@rac1 ~]#

[root@rac1 ~]# srvctl status filesystem -d /dev/asm/acfs_test-463
ACFS file system /acfs_test is mounted on nodes rac1,rac2
[root@rac1 ~]#

[root@rac1 ~]# srvctl stop filesystem -d /dev/asm/acfs_test-463
[root@rac1 ~]# 
[root@rac1 ~]# srvctl status filesystem -d /dev/asm/acfs_test-463
ACFS file system /acfs_test is not mounted
[root@rac1 ~]#
[root@rac1 ~]# srvctl start filesystem -d /dev/asm/acfs_test-463
[root@rac1 ~]#
[root@rac1 ~]# srvctl status filesystem -d /dev/asm/acfs_test-463
ACFS file system /acfs_test is mounted on nodes rac1,rac2
[root@rac1 ~]#

[root@rac1 ~]# crsctl stat res ora.acfsdg.acfs_test.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.acfsdg.acfs_test.acfs
               ONLINE  ONLINE       rac1                     mounted on /acfs_tes
                                                             t,STABLE
               ONLINE  ONLINE       rac2                     mounted on /acfs_tes
                                                             t,STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]#

[root@rac1 ~]# df -h /acfs_test
Filesystem              Size  Used Avail Use% Mounted on
/dev/asm/acfs_test-463  1.0G  487M  538M  48% /acfs_test
[root@rac1 ~]#

[root@rac2 ~]# df -h /acfs_test
Filesystem              Size  Used Avail Use% Mounted on
/dev/asm/acfs_test-463  1.0G  487M  538M  48% /acfs_test
[root@rac2 ~]#

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

Thank you,
Rajasekhar Amudala
Email: br8dba@gmail.com
Linkedin: https://www.linkedin.com/in/rajasekhar-amudala/

Reference:
ACFS Support On OS Platforms (Certification Matrix). (Doc ID 1369107.1)

RAC Standby 12.2

Create RAC Physical Standby Database using RMAN Active Duplicate Command

Table of Contents

___________________________________________________________________________________________________

1. Overview

2. Environment

On Primary (Step 3 to Step 9)

3. Enable Forced Logging on Primary
4. Copy Password File from Primary to standby
5. Configure Standby Redo Log on Primary
6. Verify Archive Mode Enabled on Primary
7. Set Primary Database Initialization Parameters
8. Configure LISTENER Entries on Primary
9. Configure TNS Entries on Primary

On STANDBY (Step 10 to Step 22)

10. Set Standby Database Initialization Parameters
11. Create required directories on Standby
12. Add below entry in ORATAB on Standby
13. Startup nomount
14. Configure LISTENER Entries on Standby
15. Configure TNS Entries on Standby
16. Verify TNS connectivity
17. Run the duplicate command
18. Verify Standby redo logs
19. Create spfile
20. Add init parameters for Instance 2 (DELL_DG2)
21. Add database to OCR
22. Enable MRP on Standby

Verification

23. Verify Sync

___________________________________________________________________________________________________


1. Overview

AIM: Without shutting down the primary, create a physical standby database using the RMAN DUPLICATE FROM ACTIVE DATABASE command (no backup of the primary database is needed).

Active Data Guard is an option available from Oracle Database 11g Enterprise Edition onwards.

PLEASE NOTE: In 12c, Data Guard is configured at the container level, not at the individual pluggable database level, because the redo log files belong only to the container database; individual pluggable databases do not have their own online redo log files.

Definition of Active Data Guard:

Oracle Active Data Guard enables read-only access to a physical standby database for queries, sorting, reporting, web-based access, etc., while continuously applying changes received from the production/primary database.


2. Environment

Primary RAC cluster : rac-cluster

Platform         : Linux x86_64
Server Name      : RAC1.RAJASEKHAR.COM, RAC2.RAJASEKHAR.COM
DB Version       : Oracle 12.2.0.1
File system      : ASM
Disk Groups      : +DATA, +FRA
Database Name    : DELL
DB_UNIQUE_NAME   : DELL
INSTANCES        : DELL1, DELL2
Flashback        : Disabled
Oracle Home Path : /u01/app/oracle/product/12.2.0/dbhome_1

Primary Cluster Status: 

[oracle@rac1 ~]$ crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@rac1 ~]$


[oracle@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.DATA.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.FRA.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac1                     STABLE
               OFFLINE OFFLINE      rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.asm
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac1                     STABLE
ora.dell.db
      1        ONLINE  ONLINE       rac1                     Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /dbhome_1,STABLE
      2        ONLINE  ONLINE       rac2                     Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /dbhome_1,STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------
[oracle@rac1 ~]$ 

Standby RAC Cluster: racdg-cluster

Platform         : Linux x86_64
Server Name      : RACDG1.RAJASEKHAR.COM, RACDG2.RAJASEKHAR.COM
DB Version       : Oracle 12.2.0.1
File system      : ASM
Disk Groups      : +DATA, +DATA_DG
Database Name    : DELL
DB_UNIQUE_NAME   : DELL_DG
INSTANCES        : DELL_DG1, DELL_DG2
Flashback        : Disabled
Oracle Home Path : /u01/app/oracle/product/12.2.0/dbhome_1

Standby Cluster Status

[grid@racdg1 ~]$ crsctl check cluster -all
**************************************************************
racdg1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racdg2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[grid@racdg1 ~]$

[grid@racdg1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       racdg1                   STABLE
               ONLINE  ONLINE       racdg2                   STABLE
ora.DATA.dg
               ONLINE  ONLINE       racdg1                   STABLE
               ONLINE  ONLINE       racdg2                   STABLE
ora.DATA_DG.dg
               ONLINE  ONLINE       racdg1                   STABLE
               ONLINE  ONLINE       racdg2                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racdg1                   STABLE
               ONLINE  ONLINE       racdg2                   STABLE
ora.net1.network
               ONLINE  ONLINE       racdg1                   STABLE
               ONLINE  ONLINE       racdg2                   STABLE
ora.ons
               ONLINE  ONLINE       racdg1                   STABLE
               ONLINE  ONLINE       racdg2                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      racdg1                   STABLE
               OFFLINE OFFLINE      racdg2                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racdg2                   STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racdg1                   STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racdg1                   STABLE
ora.asm
      1        ONLINE  ONLINE       racdg1                   Started,STABLE
      2        ONLINE  ONLINE       racdg2                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       racdg1                   STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.racdg1.vip
      1        ONLINE  ONLINE       racdg1                   STABLE
ora.racdg2.vip
      1        ONLINE  ONLINE       racdg2                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racdg2                   STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racdg1                   STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racdg1                   STABLE
--------------------------------------------------------------------------------
[grid@racdg1 ~]$

On Primary (Step 3 to Step 9)


3. Enable Forced Logging on Primary

SQL> select name, open_mode,cdb from v$database;

NAME      OPEN_MODE            CDB
--------- -------------------- ---
DELL      READ WRITE           NO

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
NO   <---------

SQL> ALTER DATABASE FORCE LOGGING;

Database altered.

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
YES  <-------

SQL>


4. Copy Password File from Primary to standby

ASMCMD> pwd
+data/dell/password
ASMCMD> pwcopy pwddell.258.1000514183 /tmp
copying +data/dell/password/pwddell.258.1000514183 -> /tmp/pwddell.258.1000514183
ASMCMD>

[root@rac2 ~]# cd /tmp
[root@rac2 tmp]# ls -ltr pwddell.258.1000514183
-rw-r-----. 1 grid oinstall 2048 Feb 20 13:16 pwddell.258.1000514183
[root@rac2 tmp]#
[root@rac2 tmp]# chown oracle:oinstall pwddell.258.1000514183

[oracle@rac2 tmp]$ ls -ltr pwddell.258.1000514183
-rw-r-----. 1 oracle oinstall 2048 Feb 20 13:16 pwddell.258.1000514183
[oracle@rac2 tmp]$

[oracle@rac2 tmp]$ scp -p pwddell.258.1000514183 oracle@racdg1:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapwDELL_DG1
[oracle@rac2 tmp]$ scp -p pwddell.258.1000514183 oracle@racdg2:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapwDELL_DG2
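As a cross-check before copying, the password file location registered for the database can also be confirmed with srvctl; a hedged sketch (12.2 output includes a "Password file:" line):

srvctl config database -d DELL | grep -i 'password file'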


5. Configure Standby Redo Log on Primary

The standby redo logs must be the same size as the primary database online redo logs. The recommended number of standby redo logs is:

(maximum # of logfiles + 1) * maximum # of threads

This example uses two online log files for each thread, so the number of standby redo logs should be (2 + 1) * 2 = 6; that is, one more standby redo log group per thread. The query sketch below computes the same figure directly from v$log.

-- Standby redo logs are created on the primary, and RMAN will create them on the standby automatically while running the duplicate command.

-- Standby redo log files come into play only when the protection mode is Maximum Availability or Maximum Protection.
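A minimal sketch of that calculation as a query, run on the primary:

-- Recommended standby redo log count = (max logs per thread + 1) * thread count
SELECT (MAX(logs_per_thread) + 1) * COUNT(*) AS recommended_srls
  FROM (SELECT thread#, COUNT(*) AS logs_per_thread
          FROM v$log
         GROUP BY thread#);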


SQL> set lines 180
SQL> col MEMBER for a60
SQL> select b.thread#, a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;

   THREAD#     GROUP# MEMBER                                                            BYTES
---------- ---------- ------------------------------------------------------------ ----------
         1          2 +DATA/DELL/redo02.log                                         209715200
         1          1 +DATA/DELL/redo01.log                                         209715200
         2          3 +DATA/DELL/redo03.log                                         209715200
         2          4 +DATA/DELL/redo04.log                                         209715200

SQL>

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
GROUP 5 ('+DATA/DELL/redo05.log') SIZE 200M,
GROUP 6 ('+DATA/DELL/redo06.log') SIZE 200M,
GROUP 7 ('+DATA/DELL/redo07.log') SIZE 200M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
GROUP 8 ('+DATA/DELL/redo08.log') SIZE 200M,
GROUP 9 ('+DATA/DELL/redo09.log') SIZE 200M,
GROUP 10 ('+DATA/DELL/redo10.log') SIZE 200M;

Database altered.

SQL> 


SQL> select * from v$logfile;

    GROUP# STATUS  TYPE    MEMBER                     IS_     CON_ID
---------- ------- ------- -------------------------- --- ----------
         2         ONLINE  +DATA/DELL/redo02.log      NO           0
         1         ONLINE  +DATA/DELL/redo01.log      NO           0
         3         ONLINE  +DATA/DELL/redo03.log      NO           0
         4         ONLINE  +DATA/DELL/redo04.log      NO           0
         5         STANDBY +DATA/DELL/redo05.log      NO           0
         6         STANDBY +DATA/DELL/redo06.log      NO           0
         7         STANDBY +DATA/DELL/redo07.log      NO           0
         8         STANDBY +DATA/DELL/redo08.log      NO           0
         9         STANDBY +DATA/DELL/redo09.log      NO           0
        10         STANDBY +DATA/DELL/redo10.log      NO           0

10 rows selected.

SQL>

SQL> select b.thread#,a.group#, a.member, b.bytes FROM v$logfile a, v$standby_log b WHERE a.group# = b.group#;

   THREAD#     GROUP# MEMBER                                                            BYTES
---------- ---------- ------------------------------------------------------------ ----------
         1          5 +DATA/DELL/redo05.log                                         209715200
         1          6 +DATA/DELL/redo06.log                                         209715200
         1          7 +DATA/DELL/redo07.log                                         209715200
         2          8 +DATA/DELL/redo08.log                                         209715200
         2          9 +DATA/DELL/redo09.log                                         209715200
         2         10 +DATA/DELL/redo10.log                                         209715200

6 rows selected.

SQL>


6. Verify Archive Mode Enabled on Primary

SQL> archive log list
Database log mode              Archive Mode <------
Automatic archival             Enabled
Archive destination            +FRA
Oldest online log sequence     5
Next log sequence to archive   6
Current log sequence           6
SQL>
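This database is already in archivelog mode. For completeness, a hedged sketch of enabling it on a 12.2 RAC database (from 11gR2 onwards cluster_database no longer needs to be set to false; database name assumed from this setup):

srvctl stop database -d DELL
srvctl start database -d DELL -o mount
-- then, from any one instance:
-- SQL> ALTER DATABASE ARCHIVELOG;
srvctl stop database -d DELL
srvctl start database -d DELL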


7. Set Primary Database Initialization Parameters

SQL> create pfile='/home/oracle/initDELL.ora.bkp' from spfile;

File created.

SQL> alter system set db_unique_name='DELL' scope=spfile sid='*';

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(DELL,DELL_DG)' scope=both sid='*';

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=+FRA VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DELL' scope=both sid='*';

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DELL_DG LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DELL_DG' scope=both sid='*';

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE scope=both sid='*';

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE scope=both sid='*';

System altered.

SQL> ALTER SYSTEM SET fal_client=DELL scope=both sid='*';

System altered.

SQL>

Please note: The FAL_CLIENT database initialization parameter is no longer required from 11gR2 onwards.

SQL> ALTER SYSTEM SET fal_server=DELL_DG scope=both sid='*';

System altered.

SQL> ALTER SYSTEM SET DB_FILE_NAME_CONVERT='+DATA_DG','+DATA' SCOPE=SPFILE sid='*';

System altered.

SQL> ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='+DATA_DG','+DATA' SCOPE=SPFILE sid='*';

System altered.

SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO scope=both sid='*';

System altered.

SQL> create pfile='/home/oracle/initDELL.ora' from spfile;

File created.

SQL>

[oracle@rac1 ~]$ cat /home/oracle/initDELL.ora
DELL1.__data_transfer_cache_size=0
DELL2.__data_transfer_cache_size=0
DELL2.__db_cache_size=541065216
DELL1.__db_cache_size=520093696
DELL1.__inmemory_ext_roarea=0
DELL2.__inmemory_ext_roarea=0
DELL1.__inmemory_ext_rwarea=0
DELL2.__inmemory_ext_rwarea=0
DELL1.__java_pool_size=4194304
DELL2.__java_pool_size=4194304
DELL1.__large_pool_size=8388608
DELL2.__large_pool_size=8388608
DELL1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
DELL2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
DELL1.__pga_aggregate_target=301989888
DELL2.__pga_aggregate_target=301989888
DELL1.__sga_target=905969664
DELL2.__sga_target=905969664
DELL2.__shared_io_pool_size=37748736
DELL1.__shared_io_pool_size=37748736
DELL2.__shared_pool_size=301989888
DELL1.__shared_pool_size=322961408
DELL1.__streams_pool_size=0
DELL2.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/DELL/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.2.0'
*.control_files='+DATA/DELL/control01.ctl','+DATA/DELL/control02.ctl'
*.db_block_size=8192
*.db_file_name_convert='+DATA_DG','+DATA'
*.db_name='DELL'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=8016m
*.db_unique_name='DELL'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DELLXDB)'
*.fal_client='DELL'
*.fal_server='DELL_DG'
family:dw_helper.instance_mode='read-only'
DELL1.instance_number=1
DELL2.instance_number=2
*.local_listener='-oraagent-dummy-'
*.log_archive_config='DG_CONFIG=(DELL,DELL_DG)'
*.log_archive_dest_1='LOCATION=+FRA VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DELL'
*.log_archive_dest_2='SERVICE=DELL_DG LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DELL_DG'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.log_file_name_convert='+DATA_DG','+DATA'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=300
*.pga_aggregate_target=288m
*.processes=300
*.remote_listener='rac-scan:1622'
*.remote_login_passwordfile='exclusive'
*.sga_target=864m
*.standby_file_management='AUTO'
DELL2.thread=2
DELL1.thread=1
DELL2.undo_tablespace='UNDOTBS2'
DELL1.undo_tablespace='UNDOTBS1'
[oracle@rac1 ~]$


8. Configure LISTENER Entries on Primary

[oracle@rac1 ~]$ ps -ef | grep tns
root        15     2  0 11:31 ?        00:00:00 [netns]
grid      6429     1  0 11:33 ?        00:00:03 /u01/app/12.2.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid      6451     1  0 11:33 ?        00:00:00 /u01/app/12.2.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid      6453     1  0 11:33 ?        00:00:00 /u01/app/12.2.0/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
grid      6477     1  0 11:33 ?        00:00:00 /u01/app/12.2.0/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
oracle   31300 16939  0 15:07 pts/0    00:00:00 grep tns
[oracle@rac1 ~]$ lsnrctl status LISTENER

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 20-FEB-2019 15:07:47

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1622))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                20-FEB-2019 11:33:28
Uptime                    0 days 3 hr. 34 min. 19 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/rac1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.101)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.203)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_FRA" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "DELL" has 1 instance(s).
  Instance "DELL1", status READY, has 1 handler(s) for this service...
Service "DELLXDB" has 1 instance(s).
  Instance "DELL1", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@rac1 ~]$

[oracle@rac1 ~]$ lsnrctl status LISTENER_SCAN3

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 20-FEB-2019 15:37:28

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN3
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                20-FEB-2019 11:33:28
Uptime                    0 days 4 hr. 4 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/rac1/listener_scan3/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN3)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.207)(PORT=1622)))
Services Summary...
Service "DELL" has 2 instance(s).
  Instance "DELL1", status READY, has 1 handler(s) for this service...
  Instance "DELL2", status READY, has 1 handler(s) for this service...
Service "DELLXDB" has 2 instance(s).
  Instance "DELL1", status READY, has 1 handler(s) for this service...
  Instance "DELL2", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@rac1 ~]$
[oracle@rac1 ~]$ lsnrctl status LISTENER_SCAN2

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 20-FEB-2019 15:37:40

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN2
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                20-FEB-2019 11:33:29
Uptime                    0 days 4 hr. 4 min. 11 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/rac1/listener_scan2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.206)(PORT=1622)))
Services Summary...
Service "DELL" has 2 instance(s).
  Instance "DELL1", status READY, has 1 handler(s) for this service...
  Instance "DELL2", status READY, has 1 handler(s) for this service...
Service "DELLXDB" has 2 instance(s).
  Instance "DELL1", status READY, has 1 handler(s) for this service...
  Instance "DELL2", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@rac1 ~]$

[oracle@rac1 ~]$ cat /u01/app/12.2.0/grid/network/admin/listener.ora
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3))))                # line added by Agent
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2))))                # line added by Agent
# listener.ora Network Configuration File: /u01/app/12.2.0/grid/network/admin/listener.ora
# Generated by Oracle configuration tools.

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON

VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF

VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM = SUBNET

ASMNET1LSNR_ASM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = ASMNET1LSNR_ASM))
    )
  )

VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET

LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM = ON

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON

LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN2=OFF             # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN3=OFF             # line added by Agent
[oracle@rac1 ~]$


[grid@rac2 admin]$ cat listener.ora  <--- 2nd node of Primary 
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3))))                # line added by Agent
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2))))                # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))                # line added by Agent
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
ASMNET1LSNR_ASM=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=ASMNET1LSNR_ASM))))              # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM=ON               # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM=SUBNET         # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET                # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1=OFF             # line added by Agent
REGISTRATION_INVITED_NODES_LISTENER_SCAN1=()            # line added by Agent
REGISTRATION_INVITED_NODES_LISTENER=()          # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN2=OFF             # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN3=OFF             # line added by Agent
[grid@rac2 admin]$


9. Configure TNS Entries on Primary

[oracle@rac1 admin]$ cat tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

DELL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1622))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DELL)
    )
  )

DELL_DG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racdg-scan)(PORT = 1622))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DELL_DG)(UR=A)
    )
  )

[oracle@rac1 admin]$

[oracle@rac2 admin]$ cat tnsnames.ora  <--- 2nd node of Primary
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

DELL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1622))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DELL)
    )
  )

DELL_DG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racdg-scan)(PORT = 1622))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DELL_DG)(UR=A)
    )
  )

[oracle@rac2 admin]$

[oracle@rac1 ~]$ tnsping dell

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 20-FEB-2019 15:24:20

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1622)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = DELL)))
OK (10 msec)
[oracle@rac1 ~]$
[oracle@rac1 ~]$ tnsping dell_dg

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 20-FEB-2019 16:26:23

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = racdg-scan)(PORT = 1622)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = DELL_DG)(UR=A)))
OK (0 msec)
[oracle@rac1 ~]$

On STANDBY (Step 10 – Step 22)


10. Set Standby Database Initialization Parameters

[oracle@racdg1 dbs]$ cat initDELL_DG1.ora
DELL_DG1.__data_transfer_cache_size=0
DELL_DG1.__db_cache_size=520093696
DELL_DG1.__inmemory_ext_roarea=0
DELL_DG1.__inmemory_ext_rwarea=0
DELL_DG1.__java_pool_size=4194304
DELL_DG1.__large_pool_size=8388608
DELL_DG1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
DELL_DG1.__pga_aggregate_target=301989888
DELL_DG1.__sga_target=905969664
DELL_DG1.__shared_io_pool_size=37748736
DELL_DG1.__shared_pool_size=322961408
DELL_DG1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/DELL_DG/adump'
*.audit_trail='db'
*.cluster_database=false
*.compatible='12.2.0'
*.control_files='+DATA_DG/DELL_DG/control01.ctl','+DATA_DG/DELL_DG/control02.ctl'
*.db_block_size=8192
*.db_file_name_convert='+DATA/DELL','+DATA_DG/DELL_DG'
*.db_name='DELL'
*.db_recovery_file_dest='+DATA_DG'
*.db_recovery_file_dest_size=8016m
*.db_unique_name='DELL_DG'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DELL_DGXDB)'
*.fal_client='DELL_DG'
*.fal_server='DELL'
family:dw_helper.instance_mode='read-only'
*.instance_name='DELL_DG1'
DELL_DG1.instance_number=1
*.log_archive_config='DG_CONFIG=(DELL,DELL_DG)'
*.log_archive_dest_1='LOCATION=+DATA_DG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DELL_DG'
*.log_archive_dest_2='SERVICE=DELL LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DELL'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.log_file_name_convert='+DATA/DELL','+DATA_DG/DELL_DG'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=300
*.pga_aggregate_target=288m
*.processes=300
*.remote_listener='racdg-scan:1622'
*.remote_login_passwordfile='exclusive'
*.sga_target=864m
*.standby_file_management='AUTO'
DELL_DG1.thread=1
DELL_DG1.undo_tablespace='UNDOTBS1'
[oracle@racdg1 dbs]$


11. Create required directories on Standby

[oracle@racdg1 ~]$ mkdir -p /u01/app/oracle/admin/DELL_DG/adump
[oracle@racdg2 ~]$ mkdir -p /u01/app/oracle/admin/DELL_DG/adump


12. Add below entry in ORATAB on Standby

[oracle@racdg1 ~]$ echo "DELL:/u01/app/oracle/product/12.2.0/dbhome_1:N" >> /etc/oratab
[oracle@racdg1 ~]$ echo "DELL_DG1:/u01/app/oracle/product/12.2.0/dbhome_1:N" >> /etc/oratab

[oracle@racdg2 ~]$ echo "DELL:/u01/app/oracle/product/12.2.0/dbhome_1:N" >> /etc/oratab
[oracle@racdg2 ~]$ echo "DELL_DG2:/u01/app/oracle/product/12.2.0/dbhome_1:N" >> /etc/oratab


13. Startup nomount

SQL> startup nomount pfile='/u01/app/oracle/product/12.2.0/dbhome_1/dbs/initDELL_DG1.ora';
ORACLE instance started.

Total System Global Area  905969664 bytes
Fixed Size                  8627008 bytes
Variable Size             348130496 bytes
Database Buffers          545259520 bytes
Redo Buffers                3952640 bytes
SQL> 
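
Before moving on to the network setup, it can help to sanity-check the key Data Guard parameters that were read from the pfile (a sketch using standard SHOW PARAMETER commands):

SQL> show parameter db_unique_name
SQL> show parameter fal_server
SQL> show parameter log_archive_config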


14. Configure LISTENER Entries on Standby

[oracle@racdg1 ~]$ ps -ef | grep tns
root        15     2  0 11:36 ?        00:00:00 [netns]
oracle    2239 31551  0 15:38 pts/0    00:00:00 grep tns
grid      6070     1  0 11:38 ?        00:00:04 /u01/app/12.2.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid      6090     1  0 11:38 ?        00:00:00 /u01/app/12.2.0/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
grid      6099     1  0 11:38 ?        00:00:00 /u01/app/12.2.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid      6122     1  0 11:38 ?        00:00:00 /u01/app/12.2.0/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
[oracle@racdg1 ~]$

[grid@racdg1 ~]$ lsnrctl status LISTENER

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 23-FEB-2019 23:55:46

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                23-FEB-2019 12:52:44
Uptime                    0 days 11 hr. 3 min. 2 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/racdg1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.103)(PORT=1621)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.105)(PORT=1621)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA_DG" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "DELL_DG" has 1 instance(s).
  Instance "DELL_DG1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[grid@racdg1 ~]$


[grid@racdg1 ~]$ cat /u01/app/12.2.0/grid/network/admin/listener.ora
# listener.ora Network Configuration File: /u01/app/12.2.0/grid/network/admin/listener.ora
# Generated by Oracle configuration tools.

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3 = ON

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2 = ON

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON

VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN3 = OFF

VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN2 = OFF

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = DELL_DG)
      (ORACLE_HOME = /u01/app/oracle/product/12.2.0/dbhome_1)
      (SID_NAME = DELL_DG)
    )
  )

VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF

VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM = SUBNET

ASMNET1LSNR_ASM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = ASMNET1LSNR_ASM))
  )

VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
  )

ADR_BASE_LISTENER = /u01/app/grid

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM = ON

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON

ADR_BASE_ASMNET1LSNR_ASM = /u01/app/grid

LISTENER_SCAN3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN3))
  )

LISTENER_SCAN2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN2))
  )

ADR_BASE_LISTENER_SCAN3 = /u01/app/grid

LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
  )

ADR_BASE_LISTENER_SCAN2 = /u01/app/grid

ADR_BASE_LISTENER_SCAN1 = /u01/app/grid

[grid@racdg1 ~]$

[grid@racdg2 admin]$ cat listener.ora  <--- 2nd of standby
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2))))                # line added by Agent
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3))))                # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))                # line added by Agent
# listener.ora Network Configuration File: /u01/app/12.2.0/grid/network/admin/listener.ora
# Generated by Oracle configuration tools.

VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM = SUBNET

ASMNET1LSNR_ASM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = ASMNET1LSNR_ASM))
    )
  )

VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET

LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM = ON

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1=OFF             # line added by Agent
REGISTRATION_INVITED_NODES_LISTENER_SCAN1=()            # line added by Agent
REGISTRATION_INVITED_NODES_LISTENER=()          # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN3=OFF             # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN2=OFF             # line added by Agent
[grid@racdg2 admin]$


15. Configure TNS Entries on Standby

[oracle@racdg1 admin]$ cat tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

DELL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1622))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DELL)
    )
  )

DELL_DG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racdg-scan)(PORT = 1622))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DELL_DG) (UR=A)
    )
  )


[oracle@racdg1 admin]$

[oracle@racdg2 admin]$ cat tnsnames.ora  <--- 2nd node of Standby 
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

DELL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1622))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DELL)
    )
  )

DELL_DG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racdg-scan)(PORT = 1622))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DELL_DG) (UR=A)
    )
  )


[oracle@racdg2 admin]$


16. Verify TNS connectivity

On Primary

[oracle@rac1 ~]$ sqlplus sys/sys@dell as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Feb 20 16:28:46 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>


[oracle@rac1 ~]$ sqlplus sys/sys@dell_dg as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Feb 20 16:28:54 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>

On Standby

[oracle@racdg1 ~]$ sqlplus sys/sys@dell as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Feb 20 16:29:28 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>

[oracle@racdg1 ~]$ sqlplus sys/sys@dell_dg as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Feb 20 16:29:35 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>


17. Run the duplicate command

Please note that the DB_CREATE_FILE_DEST parameter cannot be set together with DB_FILE_NAME_CONVERT during RMAN active duplication; the pfile above therefore uses the *_FILE_NAME_CONVERT approach. An OMF alternative is sketched below.
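
If OMF-style naming is preferred instead, one alternative (a sketch only, not used in this walkthrough) is to remove the convert parameters from the standby pfile and point DB_CREATE_FILE_DEST at the diskgroup:

*.db_create_file_dest='+DATA_DG'
# ...and remove these two lines from the pfile shown in step 10:
# *.db_file_name_convert='+DATA/DELL','+DATA_DG/DELL_DG'
# *.log_file_name_convert='+DATA/DELL','+DATA_DG/DELL_DG'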

[oracle@racdg1 ~]$ rman target sys/sys@DELL auxiliary sys/sys@DELL_DG

Recovery Manager: Release 12.2.0.1.0 - Production on Sat Feb 23 23:41:12 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DELL (DBID=3971311101)
connected to auxiliary database: DELL (not mounted)

RMAN> duplicate target database for standby from active database nofilenamecheck;

Starting Duplicate Db at 23-FEB-19
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=50 device type=DISK

contents of Memory Script:
{
   backup as copy reuse
   targetfile  '+DATA/DELL/PASSWORD/pwddell.260.1000570117' auxiliary format
 '/u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapwDELL_DG1'   ;
}
executing Memory Script

Starting backup at 23-FEB-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 instance=DELL1 device type=DISK
Finished backup at 23-FEB-19

contents of Memory Script:
{
   restore clone from service  'DELL1' standby controlfile;
}
executing Memory Script

Starting restore at 23-FEB-19
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service DELL1
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:04
output file name=+DATA_DG/DELL_DG/control01.ctl
output file name=+DATA_DG/DELL_DG/control02.ctl
Finished restore at 23-FEB-19

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
   set newname for tempfile  1 to
 "+DATA_DG/DELL_DG/temp01.dbf";
   switch clone tempfile all;
   set newname for datafile  1 to
 "+DATA_DG/DELL_DG/system01.dbf";
   set newname for datafile  3 to
 "+DATA_DG/DELL_DG/sysaux01.dbf";
   set newname for datafile  4 to
 "+DATA_DG/DELL_DG/undotbs01.dbf";
   set newname for datafile  5 to
 "+DATA_DG/DELL_DG/undotbs02.dbf";
   set newname for datafile  7 to
 "+DATA_DG/DELL_DG/users01.dbf";
   restore
   from  nonsparse   from service
 'DELL1'   clone database
   ;
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to +DATA_DG/DELL_DG/temp01.dbf in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 23-FEB-19
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service DELL1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to +DATA_DG/DELL_DG/system01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:16
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service DELL1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00003 to +DATA_DG/DELL_DG/sysaux01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:09
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service DELL1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to +DATA_DG/DELL_DG/undotbs01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:04
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service DELL1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00005 to +DATA_DG/DELL_DG/undotbs02.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service DELL1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00007 to +DATA_DG/DELL_DG/users01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:02
Finished restore at 23-FEB-19

sql statement: alter system archive log current

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=1 STAMP=1001029332 file name=+DATA_DG/DELL_DG/system01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=1001029332 file name=+DATA_DG/DELL_DG/sysaux01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=1001029332 file name=+DATA_DG/DELL_DG/undotbs01.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=1001029332 file name=+DATA_DG/DELL_DG/undotbs02.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=5 STAMP=1001029332 file name=+DATA_DG/DELL_DG/users01.dbf
Finished Duplicate Db at 23-FEB-19

RMAN>
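
A quick post-duplicate sanity check on the standby (a sketch):

SQL> select name, db_unique_name, database_role, open_mode from v$database;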


18. Verify Standby redo logs

SQL> select * from v$logfile;

    GROUP# STATUS  TYPE    MEMBER                                                       IS_     CON_ID
---------- ------- ------- ------------------------------------------------------------ --- ----------
         2         ONLINE  +DATA_DG/DELL_DG/redo02.log                                  NO           0
         1         ONLINE  +DATA_DG/DELL_DG/redo01.log                                  NO           0
         3         ONLINE  +DATA_DG/DELL_DG/redo03.log                                  NO           0
         4         ONLINE  +DATA_DG/DELL_DG/redo04.log                                  NO           0
         5         STANDBY +DATA_DG/DELL_DG/redo05.log                                  NO           0
         6         STANDBY +DATA_DG/DELL_DG/redo06.log                                  NO           0
         7         STANDBY +DATA_DG/DELL_DG/redo07.log                                  NO           0
         8         STANDBY +DATA_DG/DELL_DG/redo08.log                                  NO           0
         9         STANDBY +DATA_DG/DELL_DG/redo09.log                                  NO           0
        10         STANDBY +DATA_DG/DELL_DG/redo10.log                                  NO           0

10 rows selected.

SQL> select b.thread#,a.group#, a.type, a.member, b.bytes FROM v$logfile a, v$standby_log b WHERE a.group# = b.group#;

   THREAD#     GROUP# TYPE    MEMBER                                                            BYTES
---------- ---------- ------- ------------------------------------------------------------ ----------
         1          5 STANDBY +DATA_DG/DELL_DG/redo05.log                                   209715200
         1          6 STANDBY +DATA_DG/DELL_DG/redo06.log                                   209715200
         1          7 STANDBY +DATA_DG/DELL_DG/redo07.log                                   209715200
         2          8 STANDBY +DATA_DG/DELL_DG/redo08.log                                   209715200
         2          9 STANDBY +DATA_DG/DELL_DG/redo09.log                                   209715200
         2         10 STANDBY +DATA_DG/DELL_DG/redo10.log                                   209715200

6 rows selected.

SQL>
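
Had the standby redo logs not been carried over by the duplicate, they could be added manually. A sketch (the group numbers are assumptions; 200M matches the 209715200-byte size in the listing above):

SQL> alter database add standby logfile thread 1 group 11 size 200M;
SQL> alter database add standby logfile thread 2 group 12 size 200M;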


19. Create spfile

SQL> create spfile='+DATA_DG/DELL_DG/PARAMETERFILE/spfileDELL_DG.ora' from pfile='/u01/app/oracle/product/12.2.0/dbhome_1/dbs/initDELL_DG1.ora';

File created.

SQL> shut immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> 

[oracle@racdg1 ~]$ cd $ORACLE_HOME/dbs
[oracle@racdg1 dbs]$ ls -ltr initDELL_DG1.ora
-rw-r--r--. 1 oracle oinstall 1802 Feb 23 23:40 initDELL_DG1.ora
[oracle@racdg1 dbs]$ mv initDELL_DG1.ora initDELL_DG1.ora.bkp
[oracle@racdg1 dbs]$ echo "SPFILE='+DATA_DG/DELL_DG/PARAMETERFILE/spfileDELL_DG.ora'" > initDELL_DG1.ora
[oracle@racdg1 dbs]$ scp initDELL_DG1.ora oracle@racdg2:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/initDELL_DG2.ora
initDELL_DG1.ora                                             100%   58     0.1KB/s   00:00
[oracle@racdg1 dbs]$

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1269366784 bytes
Fixed Size                  2252864 bytes
Variable Size             805310400 bytes
Database Buffers          452984832 bytes
Redo Buffers                8818688 bytes
Database mounted.
SQL> 
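
To confirm the instance is now using the shared spfile (it should point at +DATA_DG/DELL_DG/PARAMETERFILE/spfileDELL_DG.ora):

SQL> show parameter spfile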


20. Add init parameters for Instance 2 (DELL_DG2)

SQL> alter system set undo_tablespace=UNDOTBS2 sid='DELL_DG2' scope=spfile;

System altered.

SQL> alter system set instance_number=1 sid='DELL_DG1' scope=spfile;

System altered.

SQL> alter system set instance_number=2 sid='DELL_DG2' scope=spfile;

System altered.

SQL> alter system set instance_name='DELL_DG1' sid='DELL_DG1' scope=spfile;

System altered.

SQL> alter system set instance_name='DELL_DG2' sid='DELL_DG2' scope=spfile;

System altered.

SQL> alter system set thread=1 sid='DELL_DG1' scope=spfile;

System altered.

SQL> alter system set thread=2 sid='DELL_DG2' scope=spfile;

System altered.

SQL> alter system set cluster_database=TRUE scope=spfile;

System altered.

SQL> alter system set remote_listener='racdg-scan:1622' scope=spfile;

System altered.

SQL> 


SQL> shut immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL>

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1269366784 bytes
Fixed Size                  2252864 bytes
Variable Size             805310400 bytes
Database Buffers          452984832 bytes
Redo Buffers                8818688 bytes
Database mounted.
SQL> 

SQL> select name,open_mode,database_role,cdb from v$database;

NAME      OPEN_MODE            DATABASE_ROLE    CDB
--------- -------------------- ---------------- ---
DELL      MOUNTED              PHYSICAL STANDBY NO

SQL>
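
The per-instance settings from step 20 can be verified directly against the spfile (a sketch):

SQL> select sid, name, value from v$spparameter where name in ('instance_number','thread','undo_tablespace');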


21. Add database to OCR

[oracle@racdg1 dbs]$ srvctl add database -db DELL_DG -oraclehome /u01/app/oracle/product/12.2.0/dbhome_1 -role physical_standby -startoption mount -spfile +DATA_DG/DELL_DG/PARAMETERFILE/spfileDELL_DG.ora
[oracle@racdg1 dbs]$
[oracle@racdg1 dbs]$ srvctl add instance -db DELL_DG -instance DELL_DG1 -node racdg1
[oracle@racdg1 dbs]$ srvctl add instance -db DELL_DG -instance DELL_DG2 -node racdg2
[oracle@racdg1 dbs]$
[oracle@racdg1 dbs]$ srvctl start database -d DELL_DG
[oracle@racdg1 dbs]$ srvctl status database -d DELL_DG
Instance DELL_DG1 is running on node racdg1
Instance DELL_DG2 is running on node racdg2
[oracle@racdg1 dbs]$
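
srvctl can also confirm the role and start option that were registered (a sketch):

[oracle@racdg1 dbs]$ srvctl config database -d DELL_DG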


[grid@racdg1 trace]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       racdg1                   STABLE
               ONLINE  ONLINE       racdg2                   STABLE
ora.DATA.dg
               ONLINE  ONLINE       racdg1                   STABLE
               ONLINE  ONLINE       racdg2                   STABLE
ora.DATA_DG.dg
               ONLINE  ONLINE       racdg1                   STABLE
               ONLINE  ONLINE       racdg2                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racdg1                   STABLE
               ONLINE  ONLINE       racdg2                   STABLE
ora.net1.network
               ONLINE  ONLINE       racdg1                   STABLE
               ONLINE  ONLINE       racdg2                   STABLE
ora.ons
               ONLINE  ONLINE       racdg1                   STABLE
               ONLINE  ONLINE       racdg2                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      racdg1                   STABLE
               OFFLINE OFFLINE      racdg2                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racdg2                   STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racdg1                   STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racdg1                   STABLE
ora.asm
      1        ONLINE  ONLINE       racdg1                   Started,STABLE
      2        ONLINE  ONLINE       racdg2                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       racdg1                   STABLE
ora.dell_dg.db
      1        ONLINE  INTERMEDIATE racdg1                   Mounted (Closed),HOM
                                                             E=/u01/app/oracle/pr
                                                             oduct/12.2.0/dbhome_
                                                             1,STABLE
      2        ONLINE  INTERMEDIATE racdg2                   Mounted (Closed),HOM
                                                             E=/u01/app/oracle/pr
                                                             oduct/12.2.0/dbhome_
                                                             1,STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.racdg1.vip
      1        ONLINE  ONLINE       racdg1                   STABLE
ora.racdg2.vip
      1        ONLINE  ONLINE       racdg2                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racdg2                   STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racdg1                   STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racdg1                   STABLE
--------------------------------------------------------------------------------
[grid@racdg1 trace]$


22. Enable MRP on Standby

SQL> select name,open_mode,database_role,cdb from v$database;

NAME      OPEN_MODE            DATABASE_ROLE    CDB
--------- -------------------- ---------------- ---
DELL      MOUNTED              PHYSICAL STANDBY NO

SQL>

SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL>
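
Whether MRP is actually running and applying redo can be checked from v$managed_standby (a sketch):

SQL> select process, status, thread#, sequence# from v$managed_standby where process like 'MRP%';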


23. Verify Sync

On Primary

SQL> select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1             65  <----
         2             49  <----

SQL>

On Primary Instance 1:

SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL>

On Primary Instance 2:


SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL> 


SQL> select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1             69  <----
         2             53  <----

SQL>

On Standby

SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1             68 <------
         2             53 <------

SQL>


SQL> SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference" FROM (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;

    Thread Last Sequence Received Last Sequence Applied Difference
---------- ---------------------- --------------------- ----------
         1                     69                    69          0 <---
         2                     53                    53          0 <---

SQL>
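
If the received and applied sequences ever drift apart, a quick gap check on the standby (a sketch):

SQL> select * from v$archive_gap;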

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

Thank you,
Rajasekhar Amudala
Email: br8dba@gmail.com
WhatsApp : +65-94693551
Linkedin: https://www.linkedin.com/in/rajasekhar-amudala/

How to take OCR backup on 11.2.0.4

How to take OCR backup on 11.2.0.4

1. Overview

From 11g Release 2 onwards, voting disk data is automatically backed up in the OCR whenever there is a configuration change.
There is no need to back up the voting disk separately: an OCR backup contains both OCR and voting data, and the voting disk can be restored from an OCR backup.

OCR BACKUP

Automatic backups :

a) CRSD automatically creates OCR backups every 4 hours.
b) One backup is kept for each full day.
c) One backup is kept for the end of each week.
d) Only the last three copies of the OCR are retained.

Manual backups:

a) can be taken using the "ocrconfig -manualbackup" command

2. Show Backup

All Backups

[root@rac1 ~]# ocrconfig -showbackup
PROT-24: Auto backups for the Oracle Cluster Registry are not available

rac1     2018/11/03 08:10:12     /u01/app/11.2.0.4/grid/cdata/rac-scan/backup_20181103_081012.ocr
[root@rac1 ~]#

Auto Backups

[root@rac1 ~]# ocrconfig -showbackup auto

Manual Backups

[root@rac1 ~]# ocrconfig -showbackup manual

rac1     2018/11/03 08:10:12     /u01/app/11.2.0.4/grid/cdata/rac-scan/backup_20181103_081012.ocr
[root@rac1 ~]#

3. Take OCR backup manually

[root@rac1 ~]# ocrconfig -manualbackup

rac1     2018/11/03 08:30:42     /u01/app/11.2.0.4/grid/cdata/rac-scan/backup_20181103_083042.ocr  <------

rac1     2018/11/03 08:10:12     /u01/app/11.2.0.4/grid/cdata/rac-scan/backup_20181103_081012.ocr
[root@rac1 ~]#
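
For completeness, restoring the OCR from one of these backups is roughly the reverse operation (a sketch; run as root, and the clusterware must be stopped on all nodes before the restore):

[root@rac1 ~]# crsctl stop crs -f       <-- on every node first
[root@rac1 ~]# ocrconfig -restore /u01/app/11.2.0.4/grid/cdata/rac-scan/backup_20181103_083042.ocr
[root@rac1 ~]# crsctl start crs         <-- then on every node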

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

Thank you,
Rajasekhar Amudala
Email: br8dba@gmail.com

root.sh failed with ORA-29783 on RAC

Issue Description:

Environment: Single node RAC on 11.2.0.4

root.sh failed with error ORA-29783 on node 1 while installing RAC GI 11.2.0.4.

Creation of ASM spfile in disk group failed. The following error occurred: ORA-29783: GPnP attribute SET failed with error [CLSGPNP_NOT_FOUND]


Configuration of ASM ... failed
see asmca logs at /u01/app/oracle/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /u01/app/11.2.0.4/grid/crs/install/crsconfig_lib.pm line 6912.
/u01/app/11.2.0.4/grid/perl/bin/perl -I/u01/app/11.2.0.4/grid/perl/lib -I/u01/app/11.2.0.4/grid/crs/install /u01/app/11.2.0.4/grid/crs/install/rootcrs.pl execution failed
[root@rac1 ~]#

Action Plan:

1. Disable firewall

[root@rac1 ~]# service iptables status <----
Table: filter
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
2    ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0
3    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
4    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:22
5    REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination
1    REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination

[root@rac1 ~]# 

[root@rac1 ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@rac1 ~]#

[root@rac1 ~]# service iptables status
iptables: Firewall is not running.
[root@rac1 ~]# 

[root@rac1 ~]# chkconfig iptables off

2. Deconfig root.sh changes

[root@rac1 ~]# /u01/app/11.2.0.4/grid/crs/install/rootcrs.pl -deconfig -force
Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@rac1 ~]# 
[root@rac1 ~]#

3. Re-run the root.sh on node 1

[root@rac1 ~]# /u01/app/11.2.0.4/grid/root.sh <-------
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0.4/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

ASM created and started successfully.

Disk Group DATA mounted successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 91d9f39644994f76bf7775f3a8e3929f.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   91d9f39644994f76bf7775f3a8e3929f (/dev/oracleasm/disks/DISK1) [DATA]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 ~]#

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

Thank you,
Rajasekhar Amudala
Email: br8dba@gmail.com

Cluster Name

/*How to display Oracle Cluster name*/

1. The command "cemutlo" provides the cluster name and version.

$GI_HOME/bin/cemutlo [-n] [-w]

[oracle@rac1 ~]$ cemutlo -n
rac-scan <----- This is the cluster name.

2. $CRS_HOME/cdata/<cluster_name> directory

3. ocrdump

This creates a text file called OCRDUMPFILE. Open that file and look for the entry:
[SYSTEM.css.clustername] ORATEXT : crs_cluster
In this case, "crs_cluster" is the cluster name.
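
A minimal sketch of that check (the dump file name is just an example):

[root@rac1 ~]# ocrdump /tmp/ocr.dmp
[root@rac1 ~]# grep -A 1 clustername /tmp/ocr.dmp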

4. gpnptool get

Search for the keyword "ClusterName".

5. ASM spfile location

[root@rac1 ~]# gpnptool getpval -asm_spf   (or)   SQL> show parameter spfile
+DATA/<clusterName>/asmparameterfile/registry.253.783619900

The cluster name appears in the ASM spfile path.

Note: We cannot change the cluster name. The only way to do that is to reinstall the clusterware.

Move/Relocate OCR

How to Move/Relocate OCR from +DATA to +VOTE diskgroup

Contents
___________________________________________________________________________________________________________________________________

1. Verify Available DiskGroups
2. Verify Current OCR location
3. Add OCR to DiskGroup +VOTE
4. Delete the old OCR location
___________________________________________________________________________________________________________________________________


1. Verify Available DiskGroups

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576      2940     2110              980             565              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576      7295       13                0              13              0             N  DATA1/
MOUNTED  EXTERN  N         512   4096  1048576      1019      892                0             892              0             Y  VOTE/
ASMCMD>


2. Verify Current OCR location

[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4280
         Available space (kbytes) :     257840
         ID                       : 1037097601
         Device/File Name         :      +DATA <--------
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

[oracle@rac1 ~]$


3. Add OCR to DiskGroup +VOTE

As root user

[root@rac1 ~]# which ocrconfig
/u01/app/11.2.0/grid/bin/ocrconfig
[root@rac1 ~]# ocrconfig -add +VOTE
[root@rac1 ~]#
[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4280
         Available space (kbytes) :     257840
         ID                       : 1037097601
         Device/File Name         :      +DATA <-----------
                                    Device/File integrity check succeeded
         Device/File Name         :      +VOTE <-----------
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 ~]#


4. Delete the old OCR location

As root user

[root@rac1 ~]# ocrconfig -delete +DATA
[root@rac1 ~]#
[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4280
         Available space (kbytes) :     257840
         ID                       : 1037097601
         Device/File Name         :      +VOTE <--------
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 ~]#
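
On Linux, the OCR location pointer can also be cross-checked in /etc/oracle/ocr.loc, which should now reference only +VOTE (a sketch):

[root@rac1 ~]# cat /etc/oracle/ocr.loc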

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

OSWatcher

How to Configure OSWatcher

Note: You have to follow the same steps to configure OSW in all remaining nodes in the cluster.

Contents
_________________________________________________________________________________________________________________________

1. Download OSWatcher
2. Install OSWatcher
3. Create file private.net
4. Start OS Watcher
5. Verify OSW Status
6. Stop OSWatcher
_________________________________________________________________________________________________________________________


Step 1: Download OSWatcher

OSWatcher (Includes: [Video]) (Doc ID 301137.1)

oswbb733.tar


Step 2: Install OSWatcher

Move the OSW tar file to the target location where you want to install OSW, then untar it:

[root@rac1 share]# tar xvf oswbb733.tar
oswbb/
oswbb/docs/
oswbb/docs/The_Analyzer/
oswbb/docs/The_Analyzer/OSWatcherAnalyzerOverview.pdf
oswbb/docs/The_Analyzer/oswbbaUserGuide.pdf
oswbb/docs/The_Analyzer/oswbba_README.txt
oswbb/docs/OSWatcher/
oswbb/docs/OSWatcher/oswbb_README.txt
oswbb/docs/OSWatcher/OSWatcherUserGuide.pdf
oswbb/Exampleprivate.net
oswbb/nfssub.sh
oswbb/stopOSWbb.sh
oswbb/call_du.sh
oswbb/iosub.sh
oswbb/OSWatcherFM.sh
oswbb/ifconfigsub.sh
oswbb/ltop.sh
oswbb/mpsub.sh
oswbb/call_uptime.sh
oswbb/psmemsub.sh
oswbb/tar_up_partial_archive.sh
oswbb/oswnet.sh
oswbb/vmsub.sh
oswbb/call_sar.sh
oswbb/oswib.sh
oswbb/startOSWbb.sh
oswbb/Example_extras.txt
oswbb/oswsub.sh
oswbb/oswbba.jar
oswbb/OSWatcher.sh
oswbb/tarupfiles.sh
oswbb/xtop.sh
oswbb/src/
oswbb/src/Thumbs.db
oswbb/src/OSW_profile.htm
oswbb/src/tombody.gif
oswbb/src/missing_graphic.gif
oswbb/src/coe_logo.gif
oswbb/src/watch.gif
oswbb/src/oswbba_input.txt
oswbb/oswrds.sh
[root@rac1 share]#


3. Create file private.net for monitoring the private interconnect

OS Watcher User Guide (Doc ID 301137.1)
Note: By default, private interconnect statistics are not collected by OSW; you have to set this up manually as described in the document above. The OSW user guide has a section titled 'Setting up OSW' that explains how to enable the private.net statistics.

vi private.net     <-- add below entries and then save and exit

#Linux Example
###########################################
echo "zzz ***"`date`
traceroute -r -F rac1-priv.rajasekhar.com
traceroute -r -F rac2-priv.rajasekhar.com
############################################
rm locks/lock.file

[root@rac1 oswbb]# cat private.net
#Linux Example
###########################################
echo "zzz ***"`date`
traceroute -r -F rac1-priv.rajasekhar.com
traceroute -r -F rac2-priv.rajasekhar.com
############################################
rm locks/lock.file
[root@rac1 oswbb]#

[root@rac1 oswbb]# chown -R oracle:oinstall private.net
[root@rac1 oswbb]# chmod -R 755 private.net
[root@rac1 oswbb]# ls -ltr private.net
-rwxr-xr-x 1 oracle oinstall 228 Aug 12 02:04 private.net
[root@rac1 oswbb]#


4. Start OS Watcher

Example 1: This would start the tool and collect data at default 30 second intervals and log the last 48 hours of data to archive files.

./startOSWbb.sh 

Example 2: This would start the tool and collect data at 60 second intervals, log the last 10 hours of data to archive files, and automatically compress the files.

./startOSWbb.sh 60 10 gzip

Example 3: This would start the tool and collect data at 60 second intervals and log the last 10 hours of data to archive files, compress the files and set the archive directory to a non-default location.

./startOSWbb.sh 60 10 gzip /u02/tools/oswbb/archive

Example 4: This would start the tool and collect data at 60 second intervals and log the last 48 hours of data to archive files, NOT compress the files and set the archive directory to a non-default location.

./startOSWbb.sh 60 48 NONE /u02/tools/oswbb/archive

Example 5: This would start the tool, put the process in the background, enable the tool to continue running after the session has been terminated, collect data at 60 second intervals, and log the last 10 hours of data to archive files.

nohup ./startOSWbb.sh 60 10 &

As root user

[root@rac1 ~]# cd /u01/share/oswbb
[root@rac1 oswbb]# ls -ltr startOSWbb.sh
-rwxr-xr-x 1 oracle oinstall 2574 Feb 26 23:50 startOSWbb.sh
[root@rac1 oswbb]#
[root@rac1 oswbb]# nohup ./startOSWbb.sh 30 72 gzip & <--- Hit ENTER twice
[1] 28446
[root@rac1 oswbb]# nohup: appending output to `nohup.out'

[1]+  Done                    nohup ./startOSWbb.sh 30 72 gzip
[root@rac1 oswbb]#

Note: OSW will keep running until it is stopped or the server goes down, and it keeps data for the last 72 hours only, with archive files automatically compressed in gzip format.


5. Verify OSW Status

[root@rac1 archive]# ps -elf | grep OSWatcher  | grep -v grep
0 S root     28450     1  0  80   0 -  2213 wait   02:48 pts/2    00:00:00 /bin/sh ./OSWatcher.sh 30 72 gzip   <-- 30 Sec, 72 Hours, output gzip format 
0 S root     28499 28450  0  80   0 -  2179 wait   02:49 pts/2    00:00:00 /bin/sh ./OSWatcherFM.sh 72 /u01/share/oswbb/archive  <-- OSW output location
[root@rac1 archive]#

[root@rac1 ~]# ls -ltr /u01/share/oswbb/archive
total 40
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswvmstat
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswtop
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswslabinfo
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswps
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswprvtnet
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswnetstat
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswmpstat
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswmeminfo
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswiostat
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswifconfig
[root@rac1 ~]#

[root@rac1 ~]# cd /u01/share/oswbb/archive/oswprvtnet
[root@rac1 oswprvtnet]# ls -ltr
total 8
-rw-r--r-- 1 root root 4272 Aug 12 02:55 rac1.rajasekhar.com_prvtnet_15.08.12.0200.dat
[root@rac1 oswprvtnet]# tail -10 rac1.rajasekhar.com_prvtnet_15.08.12.0200.dat
zzz ***Wed Aug 12 02:55:15 IST 2015
traceroute to rac1-priv.rajasekhar.com (192.168.0.101), 30 hops max, 40 byte packets
 1  rac1-priv.rajasekhar.com (192.168.0.101)  0.023 ms  0.012 ms  0.005 ms
traceroute to rac2-priv.rajasekhar.com (192.168.0.102), 30 hops max, 40 byte packets
 1  rac2-priv.rajasekhar.com (192.168.0.102)  0.278 ms  0.185 ms  0.124 ms
zzz ***Wed Aug 12 02:55:45 IST 2015
traceroute to rac1-priv.rajasekhar.com (192.168.0.101), 30 hops max, 40 byte packets
 1  rac1-priv.rajasekhar.com (192.168.0.101)  0.022 ms  0.007 ms  0.005 ms
traceroute to rac2-priv.rajasekhar.com (192.168.0.102), 30 hops max, 40 byte packets
 1  rac2-priv.rajasekhar.com (192.168.0.102)  0.396 ms  0.310 ms  0.226 ms
[root@rac1 oswprvtnet]#


6. Stop OSWatcher

[root@rac1 oswbb]# pwd
/u01/share/oswbb
[root@rac1 oswbb]# ls -ltr stopOSWbb.sh
-rwxr-xr-x 1 oracle oinstall 558 Apr 17  2014 stopOSWbb.sh
[root@rac1 oswbb]#
[root@rac1 oswbb]# ./stopOSWbb.sh
[root@rac1 oswbb]#
[root@rac1 oswbb]# ps -ef | grep OSW <-- now OSW is not running
root     30248  2602  0 02:57 pts/2    00:00:00 grep OSW
[root@rac1 oswbb]#
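
The collected archive can later be analyzed with the bundled analyzer jar extracted in step 2 (a sketch; assumes a JRE is available on the server):

[root@rac1 oswbb]# java -jar oswbba.jar -i /u01/share/oswbb/archive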

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

Reference:
OSWatcher (Includes: [Video]) (Doc ID 301137.1)

Restore loss of all VOTE disks

Restore Loss of All Vote Disks

Contents:
_________________________________________________________________________________________________________________

0. Environment
1. Current Status of OCR/VOTE DISK
2. Backup OCR
3. Simulate VOTE DISK corruption
4. Reboot both nodes in order to see corruption << This step is not mandatory
5. Restore loss of all Voting disk
            A. Stop CRS on all the nodes
            B. Start CRS in exclusive mode only
            C. Create New Diskgroup
            D. Restore/Move/Replace Votedisk
            E. Stop CRS on Node 1
            F. Start CRS on both nodes
6. Check Cluster Status

_________________________________________________________________________________________________________________


0. Environment

Two Node RAC 11.2.0.3
OS : RHEL5


1. Current Status of OCR/VOTE DISK.

[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4156
         Available space (kbytes) :     257964
         ID                       : 1037097601
         Device/File Name         :      +DATA  <<< OCR located in ASM diskgroup DATA.
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

[oracle@rac1 ~]$

[oracle@rac1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   7a14418b50a54f9dbfda2a6b97b4f620 (/dev/oracleasm/disks/DISK5) [VOTE]  <<<  voting disk /dev/oracleasm/disks/DISK5
Located 1 voting disk(s). <<<<
[oracle@rac1 ~]$

Note: The OCR and the voting disk are now in two different diskgroups.
      OCR in the DATA diskgroup.
      Voting disk /dev/oracleasm/disks/DISK5 in the VOTE diskgroup.


2. Backup OCR

[root@rac1 ~]# ocrconfig -manualbackup

rac2     2015/06/24 03:08:27     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150624_030827.ocr

rac1     2015/06/23 05:46:12     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150623_054612.ocr

rac1     2015/06/23 02:39:07     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150623_023907.ocr

rac1     2015/06/19 23:38:03     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150619_233803.ocr
[root@rac1 ~]#

Note: With an OCR backup we can recover the voting disk in case all voting disks are lost.
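
To see every backup that exists before proceeding, including the automatic 4-hourly, daily, and weekly backups as well as the manual ones, you can also run:

[root@rac1 ~]# ocrconfig -showbackup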


3. Simulate VOTE DISK corruption

DISCLAIMER: The dd command given below is for learning purposes only and should be used exclusively on test systems. I will not take any responsibility for any consequences or loss of data caused by this command.

Corrupt the voting disk /dev/oracleasm/disks/DISK5

dd if=/dev/zero of=/dev/oracleasm/disks/DISK5 bs=4096 count=1000000

Why a block size of 4096 bytes? Because the ASM disk header sits in the first block of the first AU, and the ASM block size is 4096 bytes, so even the very first block wipes the header. (With count=1000000, this dd actually zeroes roughly the first 4 GB of the disk: 4096 x 1,000,000 bytes.)

[oracle@rac1 ~]$ kfed read /dev/oracleasm/disks/DISK5 | grep kfdhdb.blksize
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
[oracle@rac1 ~]$

[oracle@rac1 ~]$ kfed read /dev/oracleasm/disks/DISK5  <<<< kfed confirms that the disk is corrupted.
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
7F999709F400 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

[oracle@rac1 ~]$

[root@rac1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   7a14418b50a54f9dbfda2a6b97b4f620 (/dev/oracleasm/disks/DISK5) [VOTE]  <<< Not sure why the status still shows ONLINE
Located 1 voting disk(s).
[root@rac1 ~]# 

The kfed read command failed, so the voting disk is definitely corrupted, yet I waited around an hour and somehow the CLUSTER DID NOT GO DOWN. I am not sure why; I am probably missing something here. (Most likely the running CSSD keeps using its already-open file handles and only rediscovers voting files on restart, which would explain why the corruption only becomes visible after a reboot.)

Please correct me if I am wrong. Let's bring everything down in order to see the corruption.

Note: I tried to stop CRS on both nodes at the same time; on Node 2 CRS stopped, but Node 1 restarted while shutting down CRS. So instead I rebooted both nodes.


4. Reboot both nodes in order to see corruption

After the reboot, the cluster status on both nodes:

From RAC1
=========
[root@rac1 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  OFFLINE
ora.cluster_interconnect.haip
      1        ONLINE  OFFLINE
ora.crf
      1        ONLINE  ONLINE       rac1
ora.crsd
      1        ONLINE  OFFLINE
ora.cssd
      1        ONLINE  OFFLINE  <<<<<<
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1
ora.ctssd
      1        ONLINE  OFFLINE
ora.diskmon
      1        OFFLINE OFFLINE
ora.drivers.acfs
      1        ONLINE  OFFLINE
ora.evmd
      1        ONLINE  OFFLINE
ora.gipcd
      1        ONLINE  ONLINE       rac1
ora.gpnpd
      1        ONLINE  ONLINE       rac1
ora.mdnsd
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#

From RAC2
===========
[root@rac2 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  OFFLINE                               Instance Shutdown
ora.cluster_interconnect.haip
      1        ONLINE  OFFLINE
ora.crf
      1        ONLINE  ONLINE       rac2
ora.crsd
      1        ONLINE  OFFLINE
ora.cssd
      1        ONLINE  OFFLINE                               STARTING   <<<<< It will not start because "No voting files found"
ora.cssdmonitor
      1        ONLINE  ONLINE       rac2
ora.ctssd
      1        ONLINE  OFFLINE
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  OFFLINE
ora.gipcd
      1        ONLINE  ONLINE       rac2
ora.gpnpd
      1        ONLINE  ONLINE       rac2
ora.mdnsd
      1        ONLINE  ONLINE       rac2
[root@rac2 ~]#


alertrac1.log
==============
2015-06-25 04:25:07.002
[cssd(6313)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/rac1/cssd/ocssd.log
2015-06-25 04:25:22.291
[cssd(6313)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/rac1/cssd/ocssd.log

ocssd.log from RAC1
====================
2015-06-25 04:25:06.961: [   SKGFD][1093830976]OSS discovery with :/dev/oracleasm/disks*:
2015-06-25 04:25:06.961: [   SKGFD][1093830976]Handle 0x7fbfd8002e50 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK1:
2015-06-25 04:25:06.962: [   SKGFD][1093830976]Handle 0x7fbfd80ead10 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK2:
2015-06-25 04:25:06.962: [   SKGFD][1093830976]Handle 0x7fbfd80eb540 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK3:
2015-06-25 04:25:06.962: [   SKGFD][1093830976]Handle 0x7fbfd80e6240 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK4:
                            <<<<<<<<< DISK5 is missing.
2015-06-25 04:25:06.962: [   SKGFD][1093830976]Handle 0x7fbfd80e6a70 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK6:
2015-06-25 04:25:06.963: [   SKGFD][1093830976]Handle 0x7fbfd80c7d10 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK7:
..
2015-06-25 04:25:07.001: [    CSSD][1093830976]clssnmvDiskVerify: Successful discovery of 0 disks
2015-06-25 04:25:07.002: [    CSSD][1093830976]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
2015-06-25 04:25:07.002: [    CSSD][1093830976]clssnmvFindInitialConfigs: No voting files found
2015-06-25 04:25:07.002: [    CSSD][1093830976](:CSSNM00070:)clssnmCompleteInitVFDiscovery: Voting file not found. Retrying discovery in 15 seconds

alertrac2.log
==============
2015-06-25 04:25:06.999
[cssd(6539)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/rac2/cssd/ocssd.log
2015-06-25 04:25:22.279
[cssd(6539)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/rac2/cssd/ocssd.log

ocssd.log from RAC2
=====================
2015-06-25 04:25:06.573: [   SKGFD][1087797568]OSS discovery with :/dev/oracleasm/disks*:
2015-06-25 04:25:06.573: [   SKGFD][1087797568]Handle 0x19e8640 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK1:
2015-06-25 04:25:06.573: [   SKGFD][1087797568]Handle 0x1993310 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK2:
2015-06-25 04:25:06.574: [   SKGFD][1087797568]Handle 0x1a49550 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK3:
2015-06-25 04:25:06.574: [   SKGFD][1087797568]Handle 0x18aaa40 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK4:
                                                                          <<<<<<<< DISK5 is missing.
2015-06-25 04:25:06.575: [   SKGFD][1087797568]Handle 0x19f6e90 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK6:
2015-06-25 04:25:06.575: [   SKGFD][1087797568]Handle 0x196cbf0 from lib :UFS:: for disk :/dev/oracleasm/disks/DISK7:
..
2015-06-25 04:25:06.999: [    CSSD][1087797568]clssnmvDiskVerify: Successful discovery of 0 disks <<<
2015-06-25 04:25:06.999: [    CSSD][1087797568]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
2015-06-25 04:25:06.999: [    CSSD][1087797568]clssnmvFindInitialConfigs: No voting files found <<<
2015-06-25 04:25:07.000: [    CSSD][1087797568](:CSSNM00070:)clssnmCompleteInitVFDiscovery: Voting file not found. Retrying discovery in 15 seconds


5. Restore loss of all Voting disks.


A. Stop CRS on all the nodes

From RAC1
==========
[root@rac1 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac1 ~]#

From RAC2
===========
[root@rac2 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac2 ~]#
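
Before the exclusive startup, you can double-check that the stack is fully down on each node with the same quick checks used elsewhere in these posts (crsctl check crs should fail while OHAS is stopped):

[root@rac1 ~]# ps -ef | grep d.bin
[root@rac1 ~]# crsctl check crs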


B. Start CRS in exclusive mode only

From RAC1 as root user

Note: From 11.2.0.2 onwards we should include the -nocrs flag when starting CRS in exclusive mode.

[root@rac1 ~]# crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac1'
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2674: Start of 'ora.drivers.acfs' on 'rac1' failed
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2679: Attempting to clean 'ora.asm' on 'rac1'
CRS-2681: Clean of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
[root@rac1 ~]#

[oracle@rac1 ~]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1                     Started  <<
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac1
ora.crf
      1        OFFLINE OFFLINE
ora.crsd
      1        OFFLINE OFFLINE  <<<< CRS was started in exclusive mode with -nocrs, so CSSD and ASM come up but CRSD stays offline
ora.cssd
      1        ONLINE  ONLINE       rac1   <<<
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1
ora.ctssd
      1        ONLINE  ONLINE       rac1                     ACTIVE:0
ora.diskmon
      1        OFFLINE OFFLINE
ora.drivers.acfs
      1        ONLINE  OFFLINE
ora.evmd
      1        OFFLINE OFFLINE
ora.gipcd
      1        ONLINE  ONLINE       rac1
ora.gpnpd
      1        ONLINE  ONLINE       rac1
ora.mdnsd
      1        ONLINE  ONLINE       rac1
[oracle@rac1 ~]$


SQL> select NAME, STATE, VOTING_FILES from v$asm_diskgroup;

NAME                           STATE       V
------------------------------ ----------- -
DATA1                          MOUNTED     N
DATA                           MOUNTED     N
                               <<<<< VOTE Diskgroup is missing in this output. 
SQL>

SQL> select NAME, PATH, STATE, VOTING_FILE from v$asm_disk where PATH='/dev/oracleasm/disks/DISK5';

no rows selected   << no output

SQL>

[oracle@rac1 ~]$ crsctl query css votedisk
Located 0 voting disk(s). <<<
[oracle@rac1 ~]$


C. Create New Diskgroup

Note: If you don’t have a new disk right now but still want to resolve the issue, use an existing ASM diskgroup to restore the voting disk; in that case you can skip this step (“Create New Diskgroup”).

SQL> create diskgroup DATA2 external redundancy disk '/dev/oracleasm/disks/DISK6' attribute 'COMPATIBLE.ASM' = '11.2';

Diskgroup created.

SQL>
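
If you are not sure which disks are still free for a new diskgroup, a quick check (a sketch against this environment's discovery path) is to look at the header status in v$asm_disk; CANDIDATE, PROVISIONED, or FORMER indicate a disk not in use by a mounted diskgroup:

SQL> select PATH, HEADER_STATUS, STATE from v$asm_disk where PATH like '/dev/oracleasm/disks/%';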


D. Restore/Move/Replace Votedisk.

Note: The voting disk will be restored from the OCR backup.

From Node 1 as GI HOME owner

[oracle@rac1 ~]$ crsctl replace votedisk +DATA2
Successful addition of voting disk 7ebe19bb115e4f51bfd96935eb1b92b7.
Successfully replaced voting disk group with +DATA2.
CRS-4266: Voting file(s) successfully replaced <<<
[oracle@rac1 ~]$
[oracle@rac1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   7ebe19bb115e4f51bfd96935eb1b92b7 (/dev/oracleasm/disks/DISK6) [DATA2] <<<
Located 1 voting disk(s).
[oracle@rac1 ~]$
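
Had we skipped step C and reused an existing diskgroup instead, the command would be the same, just pointed at that diskgroup, for example:

[oracle@rac1 ~]$ crsctl replace votedisk +DATA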


E. Stop CRS on Node 1

From RAC1
As root user

[root@rac1 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac1 ~]#


F. Start CRS on both nodes.

From RAC1
As root

[root@rac1 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac1 ~]#
[root@rac1 ~]#

From RAC2
As root

[root@rac2 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac2 ~]#


6. Check Cluster Status

[root@rac1 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online  <<<
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@rac1 ~]#

[root@rac1 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1                     Started
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac1
ora.crf
      1        ONLINE  ONLINE       rac1
ora.crsd
      1        ONLINE  ONLINE       rac1  <<<
ora.cssd
      1        ONLINE  ONLINE       rac1
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1
ora.ctssd
      1        ONLINE  ONLINE       rac1                     ACTIVE:0
ora.diskmon
      1        OFFLINE OFFLINE
ora.drivers.acfs
      1        ONLINE  OFFLINE
ora.evmd
      1        ONLINE  ONLINE       rac1
ora.gipcd
      1        ONLINE  ONLINE       rac1
ora.gpnpd
      1        ONLINE  ONLINE       rac1
ora.mdnsd
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#

[root@rac2 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online  <<<
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@rac2 ~]#

[root@rac2 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac2                     Started
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac2
ora.crf
      1        ONLINE  ONLINE       rac2
ora.crsd
      1        ONLINE  ONLINE       rac2  <<<
ora.cssd
      1        ONLINE  ONLINE       rac2
ora.cssdmonitor
      1        ONLINE  ONLINE       rac2
ora.ctssd
      1        ONLINE  ONLINE       rac2                     ACTIVE:0
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  ONLINE       rac2
ora.gipcd
      1        ONLINE  ONLINE       rac2
ora.gpnpd
      1        ONLINE  ONLINE       rac2
ora.mdnsd
      1        ONLINE  ONLINE       rac2
[root@rac2 ~]#

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally; however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using it.

Still page under construction !!! 🙂

Add Node Back which was DELETED without software

Add a node back to the cluster that was deleted without removing the GI and RDBMS binaries.

0. Environment
1. Backup OCR
2. Cluster Node Addition
3. Run root.sh on new node RAC2
4. Check cluster status
5. Pin Node
6. Add instance to OCR
7. Update Inventory


0. Environment

One-node RAC 11.2.0.3 (not RAC One Node). Earlier this was a two-node RAC setup; recently I deleted the 2nd node from the cluster for testing.
Node name: RAC1
OS: RHEL 5
DATABASE: nike, Instance: nike1

Task: We are going to add node “RAC2” to our existing cluster.

Current Status

[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
ora.DATA1.dg
               ONLINE  ONLINE       rac1
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
ora.asm
               ONLINE  ONLINE       rac1                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
ora.net1.network
               ONLINE  ONLINE       rac1
ora.ons
               ONLINE  ONLINE       rac1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#


1. Backup OCR

[root@rac1 ~]# ocrconfig -manualbackup

rac1     2015/06/23 05:46:12     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150623_054612.ocr

rac1     2015/06/23 02:39:07     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150623_023907.ocr

rac1     2015/06/19 23:38:03     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150619_233803.ocr
[root@rac1 ~]#


2. Cluster Node Addition.

Note: The pre-node addition check failed, hence the need to set the environment variable IGNORE_PREADDNODE_CHECKS=Y before running addNode.sh; otherwise the silent node addition will fail without showing any errors on the console.
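
For reference, the pre-add check mentioned above is the standard cluvfy stage check, which can be re-run at any time as the GI home owner:

[oracle@rac1 ~]$ cluvfy stage -pre nodeadd -n rac2 -verbose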

As GI Home owner
From active node RAC1

[oracle@rac1 ~]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac1 bin]$ IGNORE_PREADDNODE_CHECKS=Y
[oracle@rac1 bin]$ export IGNORE_PREADDNODE_CHECKS
[oracle@rac1 bin]$ ./addNode.sh -silent -noCopy "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5671 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.


Performing tests to see whether nodes rac2 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      rac2
         /u01: Required 7.50GB : Available 5.54GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.3.0
      Sun JDK 1.5.0.30.03
      Installer SDK Component 11.2.0.3.0
      Oracle One-Off Patch Installer 11.2.0.1.7
      Oracle Universal Installer 11.2.0.3.0
      Oracle USM Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.4
      Oracle DBCA Deconfiguration 11.2.0.3.0
      Oracle RAC Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Server) 11.2.0.3.0
      Installation Plugin Files 11.2.0.3.0
      Universal Storage Manager Files 11.2.0.3.0
      Oracle Text Required Support Files 11.2.0.3.0
      Automatic Storage Management Assistant 11.2.0.3.0
      Oracle Database 11g Multimedia Files 11.2.0.3.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
      Oracle Core Required Support Files 11.2.0.3.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Client) 11.2.0.3.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.3.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.3.0
      Oracle JDBC/OCI Instant Client 11.2.0.3.0
      Oracle Multimedia Client Option 11.2.0.3.0
      LDAP Required Support Files 11.2.0.3.0
      Character Set Migration Utility 11.2.0.3.0
      Perl Interpreter 5.10.0.0.2
      PL/SQL Embedded Gateway 11.2.0.3.0
      OLAP SQL Scripts 11.2.0.3.0
      Database SQL Scripts 11.2.0.3.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.3.0
      SQL*Plus Files for Instant Client 11.2.0.3.0
      Oracle Net Required Support Files 11.2.0.3.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.3.0
      RDBMS Required Support Files Runtime 11.2.0.3.0
      XML Parser for Java 11.2.0.3.0
      Oracle Security Developer Tools 11.2.0.3.0
      Oracle Wallet Manager 11.2.0.3.0
      Enterprise Manager plugin Common Files 11.2.0.3.0
      Platform Required Support Files 11.2.0.3.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.3.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.3
      Deinstallation Tool 11.2.0.3.0
      Oracle Java Client 11.2.0.3.0
      Cluster Verification Utility Files 11.2.0.3.0
      Oracle Notification Service (eONS) 11.2.0.3.0
      Oracle LDAP administration 11.2.0.3.0
      Cluster Verification Utility Common Files 11.2.0.3.0
      Oracle Clusterware RDBMS Files 11.2.0.3.0
      Oracle Locale Builder 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Buildtools Common Files 11.2.0.3.0
      Oracle RAC Required Support Files-HAS 11.2.0.3.0
      SQL*Plus Required Support Files 11.2.0.3.0
      XDK Required Support Files 11.2.0.3.0
      Agent Required Support Files 10.2.0.4.3
      Parser Generator Required Support Files 11.2.0.3.0
      Precompiler Required Support Files 11.2.0.3.0
      Installation Common Files 11.2.0.3.0
      Required Support Files 11.2.0.3.0
      Oracle JDBC/THIN Interfaces 11.2.0.3.0
      Oracle Multimedia Locator 11.2.0.3.0
      Oracle Multimedia 11.2.0.3.0
      HAS Common Files 11.2.0.3.0
      Assistant Common Files 11.2.0.3.0
      PL/SQL 11.2.0.3.0
      HAS Files for DB 11.2.0.3.0
      Oracle Recovery Manager 11.2.0.3.0
      Oracle Database Utilities 11.2.0.3.0
      Oracle Notification Service 11.2.0.3.0
      SQL*Plus 11.2.0.3.0
      Oracle Netca Client 11.2.0.3.0
      Oracle Net 11.2.0.3.0
      Oracle JVM 11.2.0.3.0
      Oracle Internet Directory Client 11.2.0.3.0
      Oracle Net Listener 11.2.0.3.0
      Cluster Ready Services Files 11.2.0.3.0
      Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Tuesday, June 23, 2015 6:01:55 AM IST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Saving inventory on nodes (Tuesday, June 23, 2015 6:03:32 AM IST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/11.2.0/grid/root.sh #On nodes rac2
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
[oracle@rac1 bin]$

Note: Set your environment properly; better yet, pass ORACLE_HOME explicitly to the script, like below:

./addNode.sh -silent -noCopy ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"


3. Run root.sh on new node RAC2

From node RAC2
As root

[root@rac2 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac2 ~]#


4. Check cluster status

[oracle@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA1.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[oracle@rac1 ~]$


5. Pin Node

As root from node RAC1

[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Unpinned
[root@rac1 ~]#
[root@rac1 ~]# crsctl pin css -n rac2
CRS-4664: Node rac2 successfully pinned.
[root@rac1 ~]#
[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Pinned <<<<
[root@rac1 ~]#


6. Add instance to OCR

As ORACLE HOME owner

[oracle@rac1 ~]$ which srvctl
/u01/app/oracle/product/11.2.0/db_1/bin/srvctl << run srvctl from the database ORACLE_HOME, not the GI home
[oracle@rac1 ~]$ 
[oracle@rac1 ~]$ srvctl status database -d nike -v
Instance nike1 is running on node rac1 with online services nike_srv. Instance status: Open.
[oracle@rac1 ~]$ 
[oracle@rac1 ~]$ srvctl add instance -d nike -i nike2 -n rac2
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl status database -d nike -v
Instance nike1 is running on node rac1 with online services nike_srv. Instance status: Open.
Instance nike2 is not running on node rac2 <<<< Instance added; we need to start it manually
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl start instance -d nike -i nike2
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl status database -d nike -v
Instance nike1 is running on node rac1 with online services nike_srv. Instance status: Open.
Instance nike2 is running on node rac2. Instance status: Open. <<< Now it is running
[oracle@rac1 ~]$


7. Update Inventory

Note: The addNode script automatically updates the NODE_LIST for the GI home.

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>  <<< NODE_LIST automatically updated by the addNode script
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rac1"/>   <<<< Need to update NODE_LIST manually for ORACLE_HOME
   </NODE_LIST>
</HOME>

[oracle@rac1 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/oui/bin
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac1,rac2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5671 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>  << updated node list.
   </NODE_LIST>
</HOME>

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally; however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using it.
Still page under construction !!! 🙂

Delete Node without removing software

Delete a node without removing the GI and RDBMS binaries.

0. Environment
1. Backup OCR
2. Check status of service
3. Shutdown instance 2
4. Unpin Node
5. Disable Oracle Clusterware
6. Delete Node from Clusterware Configuration
7. Backup Inventory
8. Update Inventory for ORACLE_HOME
9. Update Inventory for GI_HOME


0. Environment

Two node RAC, version 11.2.0.3
Node name: RAC1, RAC2
Database name: nike, instances: nike1, nike2. Admin-managed database.
Service name: nike_srv
OS: RHEL 5.7

Task: We are going to delete node RAC2 from the cluster without removing the GI and RDBMS binaries, because I want to add the node back later.

Current Status
===============

[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA1.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  OFFLINE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#


1. Backup OCR

[root@rac1 ~]# ocrconfig -manualbackup

rac1     2015/06/23 02:39:07     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150623_023907.ocr

rac1     2015/06/19 23:38:03     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150619_233803.ocr
[root@rac1 ~]#


2. Check status of service

[oracle@rac2 ~]$ srvctl status service -d nike
Service nike_srv is running on instance(s) nike1
[oracle@rac2 ~]$

Note: Confirm where the service is currently running. If the service is running on instance 2, manually fail it over:

srvctl relocate service -d <dbname> -s <service name> -i <old_inst> -t <new_inst>

Note that this does not disconnect any current sessions.
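
With this post's names, the concrete invocation (hypothetical here, and only needed if nike_srv were actually running on nike2) would be:

srvctl relocate service -d nike -s nike_srv -i nike2 -t nike1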


3. Shutdown instance 2
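
Before stopping the instance, it may be worth checking for remaining user sessions on it; a minimal sketch using gv$session (not part of the original steps):

SQL> select count(*) from gv$session where inst_id = 2 and username is not null;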

[oracle@rac2 ~]$ srvctl stop instance -d nike -i nike2
[oracle@rac2 ~]$ srvctl status database -d nike
Instance nike1 is running on node rac1
Instance nike2 is not running on node rac2
[oracle@rac2 ~]$


4. Unpin Node

As root from node RAC1

[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Pinned
[root@rac1 ~]# crsctl unpin css -n rac2
CRS-4667: Node rac2 successfully unpinned.
[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Unpinned <<<<
[root@rac1 ~]#

Note: If Cluster Synchronization Services (CSS) is not running on the node you are deleting, then the crsctl unpin css command in this step fails.


5. Disable Oracle Clusterware

From node RAC2, which you want to delete
As user root.

[root@rac2 ~]# cd /u01/app/11.2.0/grid/crs/install/
[root@rac2 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.2.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.2.103/192.168.2.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.2.104/192.168.2.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'rac2'
CRS-2677: Stop of 'ora.DATA1.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 install]#


6. Delete Node from Clusterware Configuration

From node RAC1
As root user

[root@rac1 ~]# crsctl delete node -n rac2
CRS-4661: Node rac2 successfully deleted.
[root@rac1 ~]#

[root@rac1 ~]# olsnodes -t -s
rac1    Active  Pinned
[root@rac1 ~]#

As ORACLE HOME owner, remove instance 2 from OCR

[oracle@rac1 ~]$ which srvctl
/u01/app/oracle/product/11.2.0/db_1/bin/srvctl << run srvctl from the database ORACLE_HOME, not the GI home
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl remove instance -d nike -i nike2 <<<
Remove instance from the database nike? (y/[n]) y
[oracle@rac1 ~]$

[oracle@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
ora.DATA1.dg
               ONLINE  ONLINE       rac1
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
ora.asm
               ONLINE  ONLINE       rac1                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
ora.net1.network
               ONLINE  ONLINE       rac1
ora.ons
               ONLINE  ONLINE       rac1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[oracle@rac1 ~]$

From node RAC2

[root@rac2 ~]# ps -ef | grep init
root         1     0  0 Jun22 ?        00:00:00 init [5]
root      9125  8531  0 03:30 pts/1    00:00:00 grep init
[root@rac2 ~]# ps -ef | grep d.bin
root      9127  8531  0 03:30 pts/1    00:00:00 grep d.bin
[root@rac2 ~]#


7. Backup Inventory

From node RAC1
As root.

[root@rac1 ~]# cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
[root@rac1 ~]#
[root@rac1 ~]# cp -rp /u01/app/oraInventory /u01/app/oraInventory_bkp


8. Update Inventory for ORACLE_HOME

From node RAC1
As ORACLE_HOME owner

[oracle@rac1 bin]$ cd /u01/app/oracle/product/11.2.0/db_1/oui/bin
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac1}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5671 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$


9. Update Inventory for GI_HOME

From node RAC1
As GRID_HOME owner

[oracle@rac1 bin]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac1}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5671 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$
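
The NODE_LIST entries shown in these posts live in the central inventory; to confirm rac2 is gone from both homes after the updates, you can inspect it directly (default location per /etc/oraInst.loc above):

[oracle@rac1 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml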

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally; however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using it.
Still page under construction !!! 🙂

Add Node

Add Node to 11gR2 Oracle RAC Cluster (11.2.0.3)

0. Environment

1. Pre-installation tasks for GI for a cluster

0.i) Backup OCR
i) Install the cvuqdisk Package for Linux
ii) Verify New Node (HWOS)
iii) Verify Peer (REFNODE)
iv) Verify New Node (New Pre-Node)
v) Run fixup scripts

2. Cluster Node Addition for GI Home.

i) Run addnode.sh script
ii) Run orainstRoot.sh #On nodes rac2
iii) Run root.sh #On nodes rac2
iv) Check Clusterware Resources after ran root.sh
v) Run cluvfy post-addNode script
vi) Check Cluster Nodes
vii) Check TNS Listener
viii) Check ASM Status
ix) Check OCR
x) Check Vote disk

3. Cluster Node Addition for RDBMS Home.

i) Run addnode.sh script
ii) root.sh #On nodes rac2 from RDBMS home

4. Add Instance to Database through the command line (or via dbca)

i) Pre-task
ii) Add redo thread
iii) Add undo tablespace
iv) Add instance to OCR
v) Add service to new instance via srvctl (or via dbca)
vi) Check the cluster stack

Let's start !!!


0. Environment

One-node RAC 11.2.0.3 (not RAC One Node). Earlier this was a two-node RAC setup; recently I deleted the 2nd node from the cluster for testing.
Node name: RAC1
OS: RHEL 5
DATABASE: nike, instance: nike1

Task: We are going to add node “RAC2” to our existing cluster.

Current status

[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
ora.DATA1.dg
               ONLINE  ONLINE       rac1
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
ora.asm
               ONLINE  ONLINE       rac1                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
ora.net1.network
               ONLINE  ONLINE       rac1
ora.ons
               ONLINE  ONLINE       rac1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#


1. Pre-installation tasks for GI for a cluster

A) Install and Configure the Linux Operating System on the New Node     << This step was already done by the SA
B) Configure Access to the Shared Storage     << This step was already done by the SA
C) Install and Configure ASMLib     << This step was already done by the SA
D) Configure SSH     << This step was already done by the SA


0.i) Backup OCR

From: Node RAC1
As root

[root@rac1 ~]# ocrconfig -manualbackup
rac1     2015/06/19 23:38:03     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150619_233803.ocr
[root@rac1 ~]#


i) Install the cvuqdisk Package for Linux

[root@rac2 oracle]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
Using default group oinstall to install package
   1:cvuqdisk               ########################################### [100%]
[root@rac2 oracle]#
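
If the package is not already staged on the new node, it typically ships with the grid software (under rpm/ on the installation media, or $GRID_HOME/cv/rpm in an installed home); copying it over might look like this hypothetical command:

[oracle@rac1 ~]$ scp /u01/app/11.2.0/grid/cv/rpm/cvuqdisk-1.0.9-1.rpm rac2:/home/oracle/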

Note: Without the cvuqdisk package, CVU cannot discover shared disks and you will receive the error message "Package cvuqdisk not installed" when CVU is run.

Example below:
==============
Checking shared storage accessibility...

WARNING:
rac2:PRVF-7017 : Package cvuqdisk not installed
        rac2
No shared storage found
Shared storage check failed on nodes "rac2"


ii) Verify New Node (HWOS)

As GI Home owner
From active node RAC1.

[oracle@rac1 ~]$ cluvfy stage -post hwos -n rac2

Performing post-checks for hardware and operating system setup

Checking node reachability...
Node reachability check passed from node "rac1"


Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "192.168.2.0" with node(s) rac2
TCP connectivity check passed for subnet "192.168.2.0"

Node connectivity passed for subnet "192.168.0.0" with node(s) rac2
TCP connectivity check passed for subnet "192.168.0.0"

Node connectivity passed for subnet "10.0.4.0" with node(s) rac2
TCP connectivity check passed for subnet "10.0.4.0"


Interfaces found on subnet "10.0.4.0" that are likely candidates for VIP are:
rac2 eth2:10.0.4.15

Interfaces found on subnet "192.168.2.0" that are likely candidates for a private interconnect are:
rac2 eth0:192.168.2.102

Interfaces found on subnet "192.168.0.0" that are likely candidates for a private interconnect are:
rac2 eth1:192.168.0.102

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.4.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.4.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Check for multiple users with UID value 0 passed
Time zone consistency check passed

Checking shared storage accessibility...

  Disk                                  Sharing Nodes (1 in count)
  ------------------------------------  ------------------------
  /dev/sda                              rac2

  Disk                                  Sharing Nodes (1 in count)
  ------------------------------------  ------------------------
  /dev/sdb                              rac2


Shared storage check was successful on nodes "rac2"

Post-check for hardware and operating system setup was successful.
[oracle@rac1 ~]$


iii) Verify Peer (REFNODE)

From active node RAC1
As GI Home owner

[oracle@rac1 ~]$ cluvfy comp peer -refnode rac1 -n rac2 -orainv oinstall -osdba dba -verbose

Verifying peer compatibility

Checking peer compatibility...

Compatibility check: Physical memory [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          3.8633GB (4050940.0KB)    3.8633GB (4050940.0KB)    matched
Physical memory check passed

Compatibility check: Available memory [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          3.6215GB (3797424.0KB)    2.6165GB (2743628.0KB)    mismatched
Available memory check failed

Compatibility check: Swap space [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          1.9994GB (2096472.0KB)    5.5385GB (5807488.0KB)    mismatched
Swap space check failed

Compatibility check: Free disk space for "/u01/app/11.2.0/grid" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          17.165GB (1.7998848E7KB)  21.2725GB (2.2305792E7KB)  mismatched
Free disk space check failed

Compatibility check: Free disk space for "/tmp" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          3.707GB (3887104.0KB)     4.4717GB (4688896.0KB)    mismatched
Free disk space check failed

Compatibility check: User existence for "oracle" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          oracle(1100)              oracle(1100)              matched
User existence for "oracle" check passed

Compatibility check: Group existence for "oinstall" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          oinstall(1000)            oinstall(1000)            matched
Group existence for "oinstall" check passed

Compatibility check: Group existence for "dba" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          dba(1200)                 dba(1200)                 matched
Group existence for "dba" check passed

Compatibility check: Group membership for "oracle" in "oinstall (Primary)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          yes                       yes                       matched
Group membership for "oracle" in "oinstall (Primary)" check passed

Compatibility check: Group membership for "oracle" in "dba" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          yes                       yes                       matched
Group membership for "oracle" in "dba" check passed

Compatibility check: Run level [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          5                         5                         matched
Run level check passed

Compatibility check: System architecture [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          x86_64                    x86_64                    matched
System architecture check passed

Compatibility check: Kernel version [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          2.6.32-200.13.1.el5uek    2.6.32-200.13.1.el5uek    matched
Kernel version check passed

Compatibility check: Kernel param "semmsl" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          250                       250                       matched
Kernel param "semmsl" check passed

Compatibility check: Kernel param "semmns" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          32000                     32000                     matched
Kernel param "semmns" check passed

Compatibility check: Kernel param "semopm" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          100                       100                       matched
Kernel param "semopm" check passed

Compatibility check: Kernel param "semmni" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          128                       128                       matched
Kernel param "semmni" check passed

Compatibility check: Kernel param "shmmax" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          2074081280                1054504960                mismatched
Kernel param "shmmax" check failed

Compatibility check: Kernel param "shmmni" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          4096                      4096                      matched
Kernel param "shmmni" check passed

Compatibility check: Kernel param "shmall" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          2097152                   2097152                   matched
Kernel param "shmall" check passed

Compatibility check: Kernel param "file-max" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          6815744                   6815744                   matched
Kernel param "file-max" check passed

Compatibility check: Kernel param "ip_local_port_range" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          between 9000.0 & 65500.0  between 9000.0 & 65500.0  matched
Kernel param "ip_local_port_range" check passed

Compatibility check: Kernel param "rmem_default" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          262144                    262144                    matched
Kernel param "rmem_default" check passed

Compatibility check: Kernel param "rmem_max" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          4194304                   4194304                   matched
Kernel param "rmem_max" check passed

Compatibility check: Kernel param "wmem_default" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          262144                    262144                    matched
Kernel param "wmem_default" check passed

Compatibility check: Kernel param "wmem_max" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          1048586                   1048586                   matched
Kernel param "wmem_max" check passed

Compatibility check: Kernel param "aio-max-nr" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          1048576                   1048576                   matched
Kernel param "aio-max-nr" check passed

Compatibility check: Package existence for "make" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          make-3.81-3.el5           make-3.81-3.el5           matched
Package existence for "make" check passed

Compatibility check: Package existence for "binutils" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          binutils-2.17.50.0.6-14.el5  binutils-2.17.50.0.6-14.el5  matched
Package existence for "binutils" check passed

Compatibility check: Package existence for "gcc (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          gcc-4.1.2-51.el5 (x86_64)  gcc-4.1.2-51.el5 (x86_64)  matched
Package existence for "gcc (x86_64)" check passed

Compatibility check: Package existence for "libaio (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          libaio-0.3.106-5 (x86_64),libaio-0.3.106-5 (i386)  libaio-0.3.106-5 (x86_64),libaio-0.3.106-5 (i386)  matched
Package existence for "libaio (x86_64)" check passed

Compatibility check: Package existence for "glibc (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-2.5-65 (x86_64),glibc-2.5-65 (i686)  glibc-2.5-65 (x86_64),glibc-2.5-65 (i686)  matched
Package existence for "glibc (x86_64)" check passed

Compatibility check: Package existence for "compat-libstdc++-33 (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          compat-libstdc++-33-3.2.3-61 (x86_64),compat-libstdc++-33-3.2.3-61 (i386)  compat-libstdc++-33-3.2.3-61 (x86_64),compat-libstdc++-33-3.2.3-61 (i386)  matched
Package existence for "compat-libstdc++-33 (x86_64)" check passed

Compatibility check: Package existence for "elfutils-libelf (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          elfutils-libelf-0.137-3.el5 (x86_64),elfutils-libelf-0.137-3.el5 (i386)  elfutils-libelf-0.137-3.el5 (x86_64),elfutils-libelf-0.137-3.el5 (i386)  matched
Package existence for "elfutils-libelf (x86_64)" check passed

Compatibility check: Package existence for "elfutils-libelf-devel" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.137-3.el5  matched
Package existence for "elfutils-libelf-devel" check passed

Compatibility check: Package existence for "glibc-common" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-common-2.5-65       glibc-common-2.5-65       matched
Package existence for "glibc-common" check passed

Compatibility check: Package existence for "glibc-devel (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-devel-2.5-65 (x86_64),glibc-devel-2.5-65 (i386)  glibc-devel-2.5-65 (x86_64),glibc-devel-2.5-65 (i386)  matched
Package existence for "glibc-devel (x86_64)" check passed

Compatibility check: Package existence for "glibc-headers" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-headers-2.5-65      glibc-headers-2.5-65      matched
Package existence for "glibc-headers" check passed

Compatibility check: Package existence for "gcc-c++ (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          gcc-c++-4.1.2-51.el5 (x86_64)  gcc-c++-4.1.2-51.el5 (x86_64)  matched
Package existence for "gcc-c++ (x86_64)" check passed

Compatibility check: Package existence for "libaio-devel (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          libaio-devel-0.3.106-5 (i386),libaio-devel-0.3.106-5 (x86_64)  libaio-devel-0.3.106-5 (i386),libaio-devel-0.3.106-5 (x86_64)  matched
Package existence for "libaio-devel (x86_64)" check passed

Compatibility check: Package existence for "libgcc (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          libgcc-4.1.2-51.el5 (x86_64),libgcc-4.1.2-51.el5 (i386)  libgcc-4.1.2-51.el5 (x86_64),libgcc-4.1.2-51.el5 (i386)  matched
Package existence for "libgcc (x86_64)" check passed

Compatibility check: Package existence for "libstdc++ (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          libstdc++-4.1.2-51.el5 (x86_64),libstdc++-4.1.2-51.el5 (i386)  libstdc++-4.1.2-51.el5 (x86_64),libstdc++-4.1.2-51.el5 (i386)  matched
Package existence for "libstdc++ (x86_64)" check passed

Compatibility check: Package existence for "libstdc++-devel (x86_64)" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          libstdc++-devel-4.1.2-51.el5 (x86_64)  libstdc++-devel-4.1.2-51.el5 (x86_64)  matched
Package existence for "libstdc++-devel (x86_64)" check passed

Compatibility check: Package existence for "sysstat" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          sysstat-7.0.2-11.el5      sysstat-7.0.2-11.el5      matched
Package existence for "sysstat" check passed

Compatibility check: Package existence for "ksh" [reference node: rac1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          ksh-20100202-1.el5_6.6    ksh-20100202-1.el5_6.6    matched
Package existence for "ksh" check passed

Verification of peer compatibility was unsuccessful.
Checks did not pass for the following node(s):
        rac2
[oracle@rac1 ~]$
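
The peer compatibility mismatches above fall into two groups: the free disk space differences are informational, while the "shmmax" mismatch is worth fixing before the node addition. A minimal sketch for aligning kernel.shmmax, as root on whichever node reports the lower value (2074081280 is the value reported by the checks here; adjust to your own sizing):

grep shmmax /etc/sysctl.conf        # confirm the currently configured value
sed -i 's/^kernel.shmmax.*/kernel.shmmax = 2074081280/' /etc/sysctl.conf
sysctl -p                           # reload so the running kernel picks it up
sysctl -n kernel.shmmax             # verify the new value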


iv) Verify New Node (New Pre-Node)

As GI Home owner
From Node RAC1

[oracle@rac1 ~]$ cluvfy stage -pre nodeadd -n rac2 -fixup -verbose

Performing pre-checks for node addition

Checking node reachability...

Check: Node reachability from node "rac1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac2                                  yes
Result: Node reachability check passed from node "rac1"


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  passed
Result: User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac1                                  passed
  rac2                                  passed

Verification of the hosts config file successful


Interface information for node "rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.101   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.106   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.107   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.103   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.105   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth1   192.168.0.101   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth1   169.254.6.127   169.254.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:D3:B8:9F 1500


Interface information for node "rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.102   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth1   192.168.0.102   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:63:80:76 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:4C:B3:01 1500


Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1[192.168.2.101]             rac1[192.168.2.106]             yes
  rac1[192.168.2.101]             rac1[192.168.2.107]             yes
  rac1[192.168.2.101]             rac1[192.168.2.103]             yes
  rac1[192.168.2.101]             rac1[192.168.2.105]             yes
  rac1[192.168.2.101]             rac2[192.168.2.102]             yes
  rac1[192.168.2.106]             rac1[192.168.2.107]             yes
  rac1[192.168.2.106]             rac1[192.168.2.103]             yes
  rac1[192.168.2.106]             rac1[192.168.2.105]             yes
  rac1[192.168.2.106]             rac2[192.168.2.102]             yes
  rac1[192.168.2.107]             rac1[192.168.2.103]             yes
  rac1[192.168.2.107]             rac1[192.168.2.105]             yes
  rac1[192.168.2.107]             rac2[192.168.2.102]             yes
  rac1[192.168.2.103]             rac1[192.168.2.105]             yes
  rac1[192.168.2.103]             rac2[192.168.2.102]             yes
  rac1[192.168.2.105]             rac2[192.168.2.102]             yes
Result: Node connectivity passed for interface "eth0"


Check: TCP connectivity of subnet "192.168.2.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.2.101              rac1:192.168.2.106              passed
  rac1:192.168.2.101              rac1:192.168.2.107              passed
  rac1:192.168.2.101              rac1:192.168.2.103              passed
  rac1:192.168.2.101              rac1:192.168.2.105              passed
  rac1:192.168.2.101              rac2:192.168.2.102              passed
Result: TCP connectivity check passed for subnet "192.168.2.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac1"

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/u01/app/11.2.0/grid" is shared
Result: Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac1                                  passed
  rac2                                  passed

Verification of the hosts config file successful


Interface information for node "rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.101   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.106   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.107   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.103   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.105   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth1   192.168.0.101   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth1   169.254.6.127   169.254.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:D3:B8:9F 1500


Interface information for node "rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.102   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth1   192.168.0.102   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:63:80:76 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:4C:B3:01 1500


Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1[192.168.2.101]             rac1[192.168.2.106]             yes
  rac1[192.168.2.101]             rac1[192.168.2.107]             yes
  rac1[192.168.2.101]             rac1[192.168.2.103]             yes
  rac1[192.168.2.101]             rac1[192.168.2.105]             yes
  rac1[192.168.2.101]             rac2[192.168.2.102]             yes
  rac1[192.168.2.106]             rac1[192.168.2.107]             yes
  rac1[192.168.2.106]             rac1[192.168.2.103]             yes
  rac1[192.168.2.106]             rac1[192.168.2.105]             yes
  rac1[192.168.2.106]             rac2[192.168.2.102]             yes
  rac1[192.168.2.107]             rac1[192.168.2.103]             yes
  rac1[192.168.2.107]             rac1[192.168.2.105]             yes
  rac1[192.168.2.107]             rac2[192.168.2.102]             yes
  rac1[192.168.2.103]             rac1[192.168.2.105]             yes
  rac1[192.168.2.103]             rac2[192.168.2.102]             yes
  rac1[192.168.2.105]             rac2[192.168.2.102]             yes
Result: Node connectivity passed for interface "eth0"


Check: TCP connectivity of subnet "192.168.2.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.2.101              rac1:192.168.2.106              passed
  rac1:192.168.2.101              rac1:192.168.2.107              passed
  rac1:192.168.2.101              rac1:192.168.2.103              passed
  rac1:192.168.2.101              rac1:192.168.2.105              passed
  rac1:192.168.2.101              rac2:192.168.2.102              passed
Result: TCP connectivity check passed for subnet "192.168.2.0"


Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1[192.168.0.101]             rac2[192.168.0.102]             yes
Result: Node connectivity passed for interface "eth1"


Check: TCP connectivity of subnet "192.168.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.0.101              rac2:192.168.0.102              passed
Result: TCP connectivity check passed for subnet "192.168.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Check: Total memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          3.8633GB (4050940.0KB)    1.5GB (1572864.0KB)       passed
  rac1          3.8633GB (4050940.0KB)    1.5GB (1572864.0KB)       passed
Result: Total memory check passed

Check: Available memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          3.6216GB (3797500.0KB)    50MB (51200.0KB)          passed
  rac1          2.4853GB (2606016.0KB)    50MB (51200.0KB)          passed
Result: Available memory check passed

Check: Swap space
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          1.9994GB (2096472.0KB)    3.8633GB (4050940.0KB)    failed  <<<<<
  rac1          5.5385GB (5807488.0KB)    3.8633GB (4050940.0KB)    passed
Result: Swap space check failed

Check: Free disk space for "rac2:/u01/app/11.2.0/grid"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/11.2.0/grid  rac2          /u01          17.165GB      5.5GB         passed
Result: Free disk space check passed for "rac2:/u01/app/11.2.0/grid"

Check: Free disk space for "rac1:/u01/app/11.2.0/grid"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/11.2.0/grid  rac1          /u01          21.2705GB     5.5GB         passed
Result: Free disk space check passed for "rac1:/u01/app/11.2.0/grid"

Check: Free disk space for "rac2:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /tmp              rac2          /             3.707GB       1GB           passed
Result: Free disk space check passed for "rac2:/tmp"

Check: Free disk space for "rac1:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /tmp              rac1          /             4.4717GB      1GB           passed
Result: Free disk space check passed for "rac1:/tmp"

Check: User existence for "oracle"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac2          passed                    exists(1100)
  rac1          passed                    exists(1100)

Checking for multiple users with UID value 1100
Result: Check for multiple users with UID value 1100 passed
Result: User existence check passed for "oracle"

Check: Run level
  Node Name     run level                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          5                         3,5                       passed
  rac1          5                         3,5                       passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac1              hard          65536         65536         passed
  rac2              hard          65536         65536         passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac1              soft          1024          1024          passed
  rac2              soft          1024          1024          passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac1              hard          16384         16384         passed
  rac2              hard          16384         16384         passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac1              soft          2047          2047          passed
  rac2              soft          2047          2047          passed
Result: Soft limits check passed for "maximum user processes"

Check: System architecture
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          x86_64                    x86_64                    passed
  rac1          x86_64                    x86_64                    passed
Result: System architecture check passed

Check: Kernel version
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          2.6.32-200.13.1.el5uek    2.6.18                    passed
  rac1          2.6.32-200.13.1.el5uek    2.6.18                    passed
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              250           250           250           passed
  rac2              250           250           250           passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              32000         32000         32000         passed
  rac2              32000         32000         32000         passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              100           100           100           passed
  rac2              100           100           100           passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              128           128           128           passed
  rac2              128           128           128           passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              1054504960    1054504960    2074081280    failed        Current value too low. Configured value too low.  <<< 
  rac2              2074081280    1054504960    2074081280    failed        Configured value too low.  <<<
Result: Kernel parameter check failed for "shmmax"

Check: Kernel parameter for "shmmni"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              4096          4096          4096          passed
  rac2              4096          4096          4096          passed
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              2097152       2097152       2097152       passed
  rac2              2097152       2097152       2097152       passed
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              6815744       6815744       6815744       passed
  rac2              6815744       6815744       6815744       passed
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
  rac2              between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              262144        262144        262144        passed
  rac2              262144        262144        262144        passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              4194304       4194304       4194304       passed
  rac2              4194304       4194304       4194304       passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              262144        262144        262144        passed
  rac2              262144        262144        262144        passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              1048586       1048586       1048576       passed
  rac2              1048586       1048586       1048576       passed
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              1048576       1048576       1048576       passed
  rac2              1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "make"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          make-3.81-3.el5           make-3.81                 passed
  rac1          make-3.81-3.el5           make-3.81                 passed
Result: Package existence check passed for "make"

Check: Package existence for "binutils"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          binutils-2.17.50.0.6-14.el5  binutils-2.17.50.0.6      passed
  rac1          binutils-2.17.50.0.6-14.el5  binutils-2.17.50.0.6      passed
Result: Package existence check passed for "binutils"

Check: Package existence for "gcc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          gcc(x86_64)-4.1.2-51.el5  gcc(x86_64)-4.1.2         passed
  rac1          gcc(x86_64)-4.1.2-51.el5  gcc(x86_64)-4.1.2         passed
Result: Package existence check passed for "gcc(x86_64)"

Check: Package existence for "libaio(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          libaio(x86_64)-0.3.106-5  libaio(x86_64)-0.3.106    passed
  rac1          libaio(x86_64)-0.3.106-5  libaio(x86_64)-0.3.106    passed
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "glibc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc(x86_64)-2.5-65      glibc(x86_64)-2.5-24      passed
  rac1          glibc(x86_64)-2.5-65      glibc(x86_64)-2.5-24      passed
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          compat-libstdc++-33(x86_64)-3.2.3-61  compat-libstdc++-33(x86_64)-3.2.3  passed
  rac1          compat-libstdc++-33(x86_64)-3.2.3-61  compat-libstdc++-33(x86_64)-3.2.3  passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "elfutils-libelf(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          elfutils-libelf(x86_64)-0.137-3.el5  elfutils-libelf(x86_64)-0.125  passed
  rac1          elfutils-libelf(x86_64)-0.137-3.el5  elfutils-libelf(x86_64)-0.125  passed
Result: Package existence check passed for "elfutils-libelf(x86_64)"

Check: Package existence for "elfutils-libelf-devel"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.125  passed
  rac1          elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.125  passed
Result: Package existence check passed for "elfutils-libelf-devel"

Check: Package existence for "glibc-common"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-common-2.5-65       glibc-common-2.5          passed
  rac1          glibc-common-2.5-65       glibc-common-2.5          passed
Result: Package existence check passed for "glibc-common"

Check: Package existence for "glibc-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-devel(x86_64)-2.5-65  glibc-devel(x86_64)-2.5   passed
  rac1          glibc-devel(x86_64)-2.5-65  glibc-devel(x86_64)-2.5   passed
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "glibc-headers"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          glibc-headers-2.5-65      glibc-headers-2.5         passed
  rac1          glibc-headers-2.5-65      glibc-headers-2.5         passed
Result: Package existence check passed for "glibc-headers"

Check: Package existence for "gcc-c++(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          gcc-c++(x86_64)-4.1.2-51.el5  gcc-c++(x86_64)-4.1.2     passed
  rac1          gcc-c++(x86_64)-4.1.2-51.el5  gcc-c++(x86_64)-4.1.2     passed
Result: Package existence check passed for "gcc-c++(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          libaio-devel(x86_64)-0.3.106-5  libaio-devel(x86_64)-0.3.106  passed
  rac1          libaio-devel(x86_64)-0.3.106-5  libaio-devel(x86_64)-0.3.106  passed
Result: Package existence check passed for "libaio-devel(x86_64)"

Check: Package existence for "libgcc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          libgcc(x86_64)-4.1.2-51.el5  libgcc(x86_64)-4.1.2      passed
  rac1          libgcc(x86_64)-4.1.2-51.el5  libgcc(x86_64)-4.1.2      passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          libstdc++(x86_64)-4.1.2-51.el5  libstdc++(x86_64)-4.1.2   passed
  rac1          libstdc++(x86_64)-4.1.2-51.el5  libstdc++(x86_64)-4.1.2   passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          libstdc++-devel(x86_64)-4.1.2-51.el5  libstdc++-devel(x86_64)-4.1.2  passed
  rac1          libstdc++-devel(x86_64)-4.1.2-51.el5  libstdc++-devel(x86_64)-4.1.2  passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          sysstat-7.0.2-11.el5      sysstat-7.0.2             passed
  rac1          sysstat-7.0.2-11.el5      sysstat-7.0.2             passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "ksh"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          ksh-20100202-1.el5_6.6    ksh-20060214              passed
  rac1          ksh-20100202-1.el5_6.6    ksh-20060214              passed
Result: Package existence check passed for "ksh"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  passed
  rac1                                  passed

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Check: Time zone consistency
Result: Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running

Result: Clock synchronization check using Network Time Protocol(NTP) passed


Checking to make sure user "oracle" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac2          passed                    does not exist
  rac1          passed                    does not exist
Result: User "oracle" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "rajasekhar.com" as found on node "rac1"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  rac1                                  failed
  rac2                                  failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac1,rac2

File "/etc/resolv.conf" is not consistent across nodes

Fixup information has been generated for following node(s):
rac2,rac1
Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.3.0_oracle/runfixup.sh'

Pre-check for node addition was unsuccessful on all the nodes.
[oracle@rac1 ~]$
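
PRVF-5636 means the name servers in /etc/resolv.conf took longer than 15000 ms to return a failure for a host that does not exist, which is common in lab setups without a dedicated DNS server. A quick way to see the behaviour for yourself (nosuchhost is just a placeholder name that should not resolve):

time nslookup nosuchhost.rajasekhar.com    # how long does a failed lookup take?
cat /etc/resolv.conf                       # review the nameserver and search entries in use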


v) Run fixup scripts

On RAC1, as root user

[root@rac1 ~]# /tmp/CVU_11.2.0.3.0_oracle/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.enable
Log file location: /tmp/CVU_11.2.0.3.0_oracle/orarun.log
Setting Kernel Parameters...
kernel.shmmax = 68719476736
kernel.shmmax = 1054504960
/tmp/CVU_11.2.0.3.0_oracle/orarun.sh: line 230: [: 68719476736kernel.shmmax: integer expression expected
The value for shmmax in response file is not greater than value for shmmax in /etc/sysctl.conf file. Hence not changing it.
kernel.shmmax = 2074081280
[root@rac1 ~]#

On RAC2, as root user

[root@rac2 ~]# /tmp/CVU_11.2.0.3.0_oracle/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.enable
Log file location: /tmp/CVU_11.2.0.3.0_oracle/orarun.log
Setting Kernel Parameters...
kernel.shmmax = 68719476736
kernel.shmmax = 1054504960
/tmp/CVU_11.2.0.3.0_oracle/orarun.sh: line 230: [: 68719476736kernel.shmmax: integer expression expected
The value for shmmax in response file is not greater than value for shmmax in /etc/sysctl.conf file. Hence not changing it.
The value for shmmax in response file is not greater than value of shmmax for current session. Hence not changing it.
[root@rac2 ~]#
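
Since the fixup script changed shmmax only on RAC1 (RAC2 already had the required value), it is worth confirming that both nodes now agree before moving on. A quick check, assuming passwordless SSH between the nodes (which node addition requires anyway):

ssh rac1 /sbin/sysctl -n kernel.shmmax    # both commands should print the same value
ssh rac2 /sbin/sysctl -n kernel.shmmax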


2. Cluster Node Addition for GI Home.

Note: The pre-node check failed, so the environment variable IGNORE_PREADDNODE_CHECKS=Y must be set before running addNode.sh; otherwise the silent node addition will fail without writing any errors to the console.


i) Run addNode.sh script

As GI Home owner
From active node RAC1

[oracle@rac1 ~]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac1 bin]$ IGNORE_PREADDNODE_CHECKS=Y
[oracle@rac1 bin]$ export IGNORE_PREADDNODE_CHECKS
[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5671 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.


Performing tests to see whether nodes rac2 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      rac2
         /u01: Required 6.99GB : Available 15.98GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.3.0
      Sun JDK 1.5.0.30.03
      Installer SDK Component 11.2.0.3.0
      Oracle One-Off Patch Installer 11.2.0.1.7
      Oracle Universal Installer 11.2.0.3.0
      Oracle USM Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.4
      Oracle DBCA Deconfiguration 11.2.0.3.0
      Oracle RAC Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Server) 11.2.0.3.0
      Installation Plugin Files 11.2.0.3.0
      Universal Storage Manager Files 11.2.0.3.0
      Oracle Text Required Support Files 11.2.0.3.0
      Automatic Storage Management Assistant 11.2.0.3.0
      Oracle Database 11g Multimedia Files 11.2.0.3.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
      Oracle Core Required Support Files 11.2.0.3.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Client) 11.2.0.3.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.3.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.3.0
      Oracle JDBC/OCI Instant Client 11.2.0.3.0
      Oracle Multimedia Client Option 11.2.0.3.0
      LDAP Required Support Files 11.2.0.3.0
      Character Set Migration Utility 11.2.0.3.0
      Perl Interpreter 5.10.0.0.2
      PL/SQL Embedded Gateway 11.2.0.3.0
      OLAP SQL Scripts 11.2.0.3.0
      Database SQL Scripts 11.2.0.3.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.3.0
      SQL*Plus Files for Instant Client 11.2.0.3.0
      Oracle Net Required Support Files 11.2.0.3.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.3.0
      RDBMS Required Support Files Runtime 11.2.0.3.0
      XML Parser for Java 11.2.0.3.0
      Oracle Security Developer Tools 11.2.0.3.0
      Oracle Wallet Manager 11.2.0.3.0
      Enterprise Manager plugin Common Files 11.2.0.3.0
      Platform Required Support Files 11.2.0.3.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.3.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.3
      Deinstallation Tool 11.2.0.3.0
      Oracle Java Client 11.2.0.3.0
      Cluster Verification Utility Files 11.2.0.3.0
      Oracle Notification Service (eONS) 11.2.0.3.0
      Oracle LDAP administration 11.2.0.3.0
      Cluster Verification Utility Common Files 11.2.0.3.0
      Oracle Clusterware RDBMS Files 11.2.0.3.0
      Oracle Locale Builder 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Buildtools Common Files 11.2.0.3.0
      Oracle RAC Required Support Files-HAS 11.2.0.3.0
      SQL*Plus Required Support Files 11.2.0.3.0
      XDK Required Support Files 11.2.0.3.0
      Agent Required Support Files 10.2.0.4.3
      Parser Generator Required Support Files 11.2.0.3.0
      Precompiler Required Support Files 11.2.0.3.0
      Installation Common Files 11.2.0.3.0
      Required Support Files 11.2.0.3.0
      Oracle JDBC/THIN Interfaces 11.2.0.3.0
      Oracle Multimedia Locator 11.2.0.3.0
      Oracle Multimedia 11.2.0.3.0
      HAS Common Files 11.2.0.3.0
      Assistant Common Files 11.2.0.3.0
      PL/SQL 11.2.0.3.0
      HAS Files for DB 11.2.0.3.0
      Oracle Recovery Manager 11.2.0.3.0
      Oracle Database Utilities 11.2.0.3.0
      Oracle Notification Service 11.2.0.3.0
      SQL*Plus 11.2.0.3.0
      Oracle Netca Client 11.2.0.3.0
      Oracle Net 11.2.0.3.0
      Oracle JVM 11.2.0.3.0
      Oracle Internet Directory Client 11.2.0.3.0
      Oracle Net Listener 11.2.0.3.0
      Cluster Ready Services Files 11.2.0.3.0
      Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Sunday, June 21, 2015 12:35:19 PM IST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Sunday, June 21, 2015 12:35:22 PM IST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Sunday, June 21, 2015 12:45:29 PM IST)
.                                                               100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'rac2'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes rac2
/u01/app/11.2.0/grid/root.sh #On nodes rac2
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
[oracle@rac1 bin]$

Note: If you are using GNS:
cd $GI_HOME/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={rac2}"


ii) Run orainstRoot.sh #On nodes rac2

On node RAC2
As root

[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 ~]#


iii) Run root.sh #On nodes rac2

On node RAC2
As root

[root@rac2 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac2 ~]#

Note: The root.sh script configures Grid Infrastructure on the new node, including adding High Availability Services to /etc/inittab so that CRS starts when the machine boots.
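
To confirm what root.sh added, you can inspect /etc/inittab on the new node. A quick check; the entry shown in the comment is the usual 11.2-style line and may differ slightly on your system:

grep ohasd /etc/inittab
# typically prints something like:
# h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null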


iv) Check Clusterware Resources after running root.sh

[root@rac2 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA1.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac2 ~]#

[root@rac2 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@rac2 ~]#


v) Run cluvfy post-node add script

As GI Home owner
From node RAC1 as a best practice, because we initially ran the pre-node cluvfy check from RAC1 only

[oracle@rac1 ~]$ cluvfy stage -post nodeadd -n rac2 -verbose

Performing post-checks for node addition

Checking node reachability...

Check: Node reachability from node "rac1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac2                                  yes
Result: Node reachability check passed from node "rac1"


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  passed
Result: User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  passed
  rac1                                  passed

Verification of the hosts config file successful


Interface information for node "rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.102   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth0   192.168.2.107   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth0   192.168.2.104   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth1   192.168.0.102   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:63:80:76 1500
 eth1   169.254.215.111 169.254.0.0     0.0.0.0         10.0.4.2        08:00:27:63:80:76 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:4C:B3:01 1500


Interface information for node "rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.101   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.106   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.103   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.105   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth1   192.168.0.101   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth1   169.254.6.127   169.254.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:D3:B8:9F 1500


Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac2[192.168.2.102]             rac2[192.168.2.107]             yes
  rac2[192.168.2.102]             rac2[192.168.2.104]             yes
  rac2[192.168.2.102]             rac1[192.168.2.101]             yes
  rac2[192.168.2.102]             rac1[192.168.2.106]             yes
  rac2[192.168.2.102]             rac1[192.168.2.103]             yes
  rac2[192.168.2.102]             rac1[192.168.2.105]             yes
  rac2[192.168.2.107]             rac2[192.168.2.104]             yes
  rac2[192.168.2.107]             rac1[192.168.2.101]             yes
  rac2[192.168.2.107]             rac1[192.168.2.106]             yes
  rac2[192.168.2.107]             rac1[192.168.2.103]             yes
  rac2[192.168.2.107]             rac1[192.168.2.105]             yes
  rac2[192.168.2.104]             rac1[192.168.2.101]             yes
  rac2[192.168.2.104]             rac1[192.168.2.106]             yes
  rac2[192.168.2.104]             rac1[192.168.2.103]             yes
  rac2[192.168.2.104]             rac1[192.168.2.105]             yes
  rac1[192.168.2.101]             rac1[192.168.2.106]             yes
  rac1[192.168.2.101]             rac1[192.168.2.103]             yes
  rac1[192.168.2.101]             rac1[192.168.2.105]             yes
  rac1[192.168.2.106]             rac1[192.168.2.103]             yes
  rac1[192.168.2.106]             rac1[192.168.2.105]             yes
  rac1[192.168.2.103]             rac1[192.168.2.105]             yes
Result: Node connectivity passed for interface "eth0"


Check: TCP connectivity of subnet "192.168.2.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.2.101              rac2:192.168.2.102              passed
  rac1:192.168.2.101              rac2:192.168.2.107              passed
  rac1:192.168.2.101              rac2:192.168.2.104              passed
  rac1:192.168.2.101              rac1:192.168.2.106              passed
  rac1:192.168.2.101              rac1:192.168.2.103              passed
  rac1:192.168.2.101              rac1:192.168.2.105              passed
Result: TCP connectivity check passed for subnet "192.168.2.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking cluster integrity...

  Node Name
  ------------------------------------
  rac1
  rac2

Cluster integrity check passed


Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac2"
The Oracle Clusterware is healthy on node "rac1"

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/u01/app/11.2.0/grid" is not shared
Result: Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  passed
  rac1                                  passed

Verification of the hosts config file successful


Interface information for node "rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.102   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth0   192.168.2.107   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth0   192.168.2.104   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:93:7F:00 1500
 eth1   192.168.0.102   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:63:80:76 1500
 eth1   169.254.215.111 169.254.0.0     0.0.0.0         10.0.4.2        08:00:27:63:80:76 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:4C:B3:01 1500


Interface information for node "rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.2.101   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.106   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.103   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth0   192.168.2.105   192.168.2.0     0.0.0.0         10.0.4.2        08:00:27:02:46:97 1500
 eth1   192.168.0.101   192.168.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth1   169.254.6.127   169.254.0.0     0.0.0.0         10.0.4.2        08:00:27:9A:66:6A 1500
 eth2   10.0.4.15       10.0.4.0        0.0.0.0         10.0.4.2        08:00:27:D3:B8:9F 1500


Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac2[192.168.2.102]             rac2[192.168.2.107]             yes
  rac2[192.168.2.102]             rac2[192.168.2.104]             yes
  rac2[192.168.2.102]             rac1[192.168.2.101]             yes
  rac2[192.168.2.102]             rac1[192.168.2.106]             yes
  rac2[192.168.2.102]             rac1[192.168.2.103]             yes
  rac2[192.168.2.102]             rac1[192.168.2.105]             yes
  rac2[192.168.2.107]             rac2[192.168.2.104]             yes
  rac2[192.168.2.107]             rac1[192.168.2.101]             yes
  rac2[192.168.2.107]             rac1[192.168.2.106]             yes
  rac2[192.168.2.107]             rac1[192.168.2.103]             yes
  rac2[192.168.2.107]             rac1[192.168.2.105]             yes
  rac2[192.168.2.104]             rac1[192.168.2.101]             yes
  rac2[192.168.2.104]             rac1[192.168.2.106]             yes
  rac2[192.168.2.104]             rac1[192.168.2.103]             yes
  rac2[192.168.2.104]             rac1[192.168.2.105]             yes
  rac1[192.168.2.101]             rac1[192.168.2.106]             yes
  rac1[192.168.2.101]             rac1[192.168.2.103]             yes
  rac1[192.168.2.101]             rac1[192.168.2.105]             yes
  rac1[192.168.2.106]             rac1[192.168.2.103]             yes
  rac1[192.168.2.106]             rac1[192.168.2.105]             yes
  rac1[192.168.2.103]             rac1[192.168.2.105]             yes
Result: Node connectivity passed for interface "eth0"


Check: TCP connectivity of subnet "192.168.2.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.2.101              rac2:192.168.2.102              passed
  rac1:192.168.2.101              rac2:192.168.2.107              passed
  rac1:192.168.2.101              rac2:192.168.2.104              passed
  rac1:192.168.2.101              rac1:192.168.2.106              passed
  rac1:192.168.2.101              rac1:192.168.2.103              passed
  rac1:192.168.2.101              rac1:192.168.2.105              passed
Result: TCP connectivity check passed for subnet "192.168.2.0"


Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac2[192.168.0.102]             rac1[192.168.0.101]             yes
Result: Node connectivity passed for interface "eth1"


Check: TCP connectivity of subnet "192.168.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.0.101              rac2:192.168.0.102              passed
Result: TCP connectivity check passed for subnet "192.168.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking node application existence...

Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          yes                       yes                       passed
  rac1          yes                       yes                       passed
VIP node application check passed

Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          yes                       yes                       passed
  rac1          yes                       yes                       passed
NETWORK node application check passed

Checking existence of GSD node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          no                        no                        exists
  rac1          no                        no                        exists
GSD node application is offline on nodes "rac2,rac1"

Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          no                        yes                       passed
  rac1          no                        yes                       passed
ONS node application check passed


Checking Single Client Access Name (SCAN)...
  SCAN Name         Node          Running?      ListenerName  Port          Running?
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac-scan.rajasekhar.com  rac2          true          LISTENER_SCAN1  1521          true
  rac-scan.rajasekhar.com  rac1          true          LISTENER_SCAN2  1521          true
  rac-scan.rajasekhar.com  rac1          true          LISTENER_SCAN3  1521          true

Checking TCP connectivity to SCAN Listeners...
  Node          ListenerName              TCP connectivity?
  ------------  ------------------------  ------------------------
  rac1          LISTENER_SCAN1            yes
  rac1          LISTENER_SCAN2            yes
  rac1          LISTENER_SCAN3            yes
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "rac-scan.rajasekhar.com"...
  SCAN Name     IP Address                Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  rac-scan.rajasekhar.com  192.168.2.107             passed
  rac-scan.rajasekhar.com  192.168.2.105             passed
  rac-scan.rajasekhar.com  192.168.2.106             passed

Verification of SCAN VIP and Listener setup passed

Checking to make sure user "oracle" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac2          passed                    does not exist
Result: User "oracle" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  passed
Result: CTSS resource check passed


Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  rac2                                  Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  rac2          0.0                       passed

Time offset is within the specified limits on the following set of nodes:
"[rac2]"
Result: Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was successful. <<<
[oracle@rac1 ~]$


vi) Check Cluster Nodes

[oracle@rac2 ~]$ olsnodes -n
rac1    1
rac2    2
[oracle@rac2 ~]$


vii) Check TNS Listener

On node RAC2

[oracle@rac2 ~]$ ps -ef | grep tns
root        13     2  0 Jun20 ?        00:00:00 [netns]
oracle   24168     1  0 12:57 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle   24639     1  0 12:57 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
oracle   28355 28292  0 13:19 pts/1    00:00:00 grep tns
[oracle@rac2 ~]$


viii) Check ASM Status

On node RAC2

[oracle@rac2 ~]$ srvctl status asm -a
ASM is running on rac2,rac1
ASM is enabled.
[oracle@rac2 ~]$


ix) Check OCR

On node RAC2

[oracle@rac2 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4076
         Available space (kbytes) :     258044
         ID                       : 1037097601
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

[oracle@rac2 ~]$
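
Note: The logical corruption check is bypassed for non-privileged users; running ocrcheck as root includes it:

[root@rac2 ~]# ocrcheck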


x) Check Vote disk

On node RAC2

[oracle@rac2 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   a785661b81264f8ebfa8538128d4e1fe (/dev/oracleasm/disks/DISK1) [DATA]
 2. ONLINE   7a41922f13254f61bf4cee3f53b9aa74 (/dev/oracleasm/disks/DISK2) [DATA]
 3. ONLINE   492244b5021f4fc7bf7d75b74cfe841a (/dev/oracleasm/disks/DISK3) [DATA]
Located 3 voting disk(s).
[oracle@rac2 ~]$


3. Cluster Node Addition for RDBMS Home.


i) Run addnode.sh script

From Node 1 RAC1
As RDBMS HOME owner

[oracle@rac1 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/oui/bin/    <<< This is RDBMS home

[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5670 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.


Performing tests to see whether nodes rac2 are available
............................................................... 100% Done.

........
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/oracle/product/11.2.0/db_1
   New Nodes
Space Requirements
   New Nodes
      rac2
         /u01: Required 5.04GB : Available 10.57GB
Installed Products
   Product Names
      Oracle Database 11g 11.2.0.3.0
      Sun JDK 1.5.0.30.03
      Installer SDK Component 11.2.0.3.0
      Oracle One-Off Patch Installer 11.2.0.1.7
      Oracle Universal Installer 11.2.0.3.0
      Oracle USM Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Oracle DBCA Deconfiguration 11.2.0.3.0
      Oracle RAC Deconfiguration 11.2.0.3.0
      Oracle Database Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Client 10.3.2.1.0
      Oracle Configuration Manager 10.3.5.0.1
      Oracle ODBC Driverfor Instant Client 11.2.0.3.0
      LDAP Required Support Files 11.2.0.3.0
      SSL Required Support Files for InstantClient 11.2.0.3.0
      Bali Share 1.1.18.0.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      Oracle Real Application Testing 11.2.0.3.0
      Oracle Database Vault J2EE Application 11.2.0.3.0
      Oracle Label Security 11.2.0.3.0
      Oracle Data Mining RDBMS Files 11.2.0.3.0
      Oracle OLAP RDBMS Files 11.2.0.3.0
      Oracle OLAP API 11.2.0.3.0
      Platform Required Support Files 11.2.0.3.0
      Oracle Database Vault option 11.2.0.3.0
      Oracle RAC Required Support Files-HAS 11.2.0.3.0
      SQL*Plus Required Support Files 11.2.0.3.0
      Oracle Display Fonts 9.0.2.0.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle JDBC Server Support Package 11.2.0.3.0
      Oracle SQL Developer 11.2.0.3.0
      Oracle Application Express 11.2.0.3.0
      XDK Required Support Files 11.2.0.3.0
      RDBMS Required Support Files for Instant Client 11.2.0.3.0
      SQLJ Runtime 11.2.0.3.0
      Database Workspace Manager 11.2.0.3.0
      RDBMS Required Support Files Runtime 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Exadata Storage Server 11.2.0.1.0
      Provisioning Advisor Framework 10.2.0.4.3
      Enterprise Manager Database Plugin -- Repository Support 11.2.0.3.0
      Enterprise Manager Repository Core Files 10.2.0.4.4
      Enterprise Manager Database Plugin -- Agent Support 11.2.0.3.0
      Enterprise Manager Grid Control Core Files 10.2.0.4.4
      Enterprise Manager Common Core Files 10.2.0.4.4
      Enterprise Manager Agent Core Files 10.2.0.4.4
      RDBMS Required Support Files 11.2.0.3.0
      regexp 2.1.9.0.0
      Agent Required Support Files 10.2.0.4.3
      Oracle 11g Warehouse Builder Required Files 11.2.0.3.0
      Oracle Notification Service (eONS) 11.2.0.3.0
      Oracle Text Required Support Files 11.2.0.3.0
      Parser Generator Required Support Files 11.2.0.3.0
      Oracle Database 11g Multimedia Files 11.2.0.3.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
      Oracle Multimedia Annotator 11.2.0.3.0
      Oracle JDBC/OCI Instant Client 11.2.0.3.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
      Precompiler Required Support Files 11.2.0.3.0
      Oracle Core Required Support Files 11.2.0.3.0
      Sample Schema Data 11.2.0.3.0
      Oracle Starter Database 11.2.0.3.0
      Oracle Message Gateway Common Files 11.2.0.3.0
      Oracle XML Query 11.2.0.3.0
      XML Parser for Oracle JVM 11.2.0.3.0
      Oracle Help For Java 4.2.9.0.0
      Installation Plugin Files 11.2.0.3.0
      Enterprise Manager Common Files 10.2.0.4.3
      Expat libraries 2.0.1.0.1
      Deinstallation Tool 11.2.0.3.0
      Oracle Quality of Service Management (Client) 11.2.0.3.0
      Perl Modules 5.10.0.0.1
      JAccelerator (COMPANION) 11.2.0.3.0
      Oracle Containers for Java 11.2.0.3.0
      Perl Interpreter 5.10.0.0.2
      Oracle Net Required Support Files 11.2.0.3.0
      Secure Socket Layer 11.2.0.3.0
      Oracle Universal Connection Pool 11.2.0.3.0
      Oracle JDBC/THIN Interfaces 11.2.0.3.0
      Oracle Multimedia Client Option 11.2.0.3.0
      Oracle Java Client 11.2.0.3.0
      Character Set Migration Utility 11.2.0.3.0
      Oracle Code Editor 1.2.1.0.0I
      PL/SQL Embedded Gateway 11.2.0.3.0
      OLAP SQL Scripts 11.2.0.3.0
      Database SQL Scripts 11.2.0.3.0
      Oracle Locale Builder 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      SQL*Plus Files for Instant Client 11.2.0.3.0
      Required Support Files 11.2.0.3.0
      Oracle Database User Interface 2.2.13.0.0
      Oracle ODBC Driver 11.2.0.3.0
      Oracle Notification Service 11.2.0.3.0
      XML Parser for Java 11.2.0.3.0
      Oracle Security Developer Tools 11.2.0.3.0
      Oracle Wallet Manager 11.2.0.3.0
      Cluster Verification Utility Common Files 11.2.0.3.0
      Oracle Clusterware RDBMS Files 11.2.0.3.0
      Oracle UIX 2.2.24.6.0
      Enterprise Manager plugin Common Files 11.2.0.3.0
      HAS Common Files 11.2.0.3.0
      Precompiler Common Files 11.2.0.3.0
      Installation Common Files 11.2.0.3.0
      Oracle Help for the  Web 2.0.14.0.0
      Oracle LDAP administration 11.2.0.3.0
      Buildtools Common Files 11.2.0.3.0
      Assistant Common Files 11.2.0.3.0
      Oracle Recovery Manager 11.2.0.3.0
      PL/SQL 11.2.0.3.0
      Generic Connectivity Common Files 11.2.0.3.0
      Oracle Database Gateway for ODBC 11.2.0.3.0
      Oracle Programmer 11.2.0.3.0
      Oracle Database Utilities 11.2.0.3.0
      Enterprise Manager Agent 10.2.0.4.3
      SQL*Plus 11.2.0.3.0
      Oracle Netca Client 11.2.0.3.0
      Oracle Multimedia Locator 11.2.0.3.0
      Oracle Call Interface (OCI) 11.2.0.3.0
      Oracle Multimedia 11.2.0.3.0
      Oracle Net 11.2.0.3.0
      Oracle XML Development Kit 11.2.0.3.0
      Database Configuration and Upgrade Assistants 11.2.0.3.0
      Oracle JVM 11.2.0.3.0
      Oracle Advanced Security 11.2.0.3.0
      Oracle Internet Directory Client 11.2.0.3.0
      Oracle Enterprise Manager Console DB 11.2.0.3.0
      HAS Files for DB 11.2.0.3.0
      Oracle Net Listener 11.2.0.3.0
      Oracle Text 11.2.0.3.0
      Oracle Net Services 11.2.0.3.0
      Oracle Database 11g 11.2.0.3.0
      Oracle OLAP 11.2.0.3.0
      Oracle Spatial 11.2.0.3.0
      Oracle Partitioning 11.2.0.3.0
      Enterprise Edition Options 11.2.0.3.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Sunday, June 21, 2015 2:06:51 PM IST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Sunday, June 21, 2015 2:08:47 PM IST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Sunday, June 21, 2015 8:10:31 PM IST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes rac2
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
[oracle@rac1 bin]$


ii) root.sh #On nodes rac2 from RDBMS home

[root@rac2 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@rac2 ~]#


4. Add Instance to the Database through the Command Line (or via dbca).
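
Note: The dbca silent equivalent of the manual steps below looks roughly like this (a sketch; verify the options against your dbca version and substitute the real SYS password):

dbca -silent -addInstance -nodeList rac2 -gdbName nike -instanceName nike2 -sysDBAUserName sys -sysDBAPassword <sys_password>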


i) Pre-task

On RAC2
As RDBMS Home owner

[oracle@rac2 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/dbs
[oracle@rac2 dbs]$ ls -ltr
total 28
-rw-r----- 1 oracle oinstall 1536 Jun 21 19:56 orapwnike1 <<
-rw-r----- 1 oracle oinstall  161 Jun 21 19:56 initDBUA5216639.ora
-rw-r----- 1 oracle oinstall   36 Jun 21 19:56 initnike1.ora << 
-rw-rw---- 1 oracle oinstall 1544 Jun 21 19:56 hc_DBUA5216639.dat 
-rw-rw---- 1 oracle oinstall 1544 Jun 21 19:56 hc_nike1.dat 
-rw-r--r-- 1 oracle oinstall 2851 Jun 21 19:56 init.ora 
-rw-r----- 1 oracle oinstall 1536 Jun 21 19:56 orapwDBUA5216639
[oracle@rac2 dbs]$
[oracle@rac2 dbs]$ mv initnike1.ora initnike2.ora
[oracle@rac2 dbs]$ mv orapwnike1 orapwnike2 
[oracle@rac2 dbs]$ ls -ltr 
total 28 
-rw-r----- 1 oracle oinstall 1536 Jun 21 19:56 orapwnike2 
-rw-r----- 1 oracle oinstall  161 Jun 21 19:56 initDBUA5216639.ora 
-rw-r----- 1 oracle oinstall   36 Jun 21 19:56 initnike2.ora 
-rw-rw---- 1 oracle oinstall 1544 Jun 21 19:56 hc_DBUA5216639.dat 
-rw-rw---- 1 oracle oinstall 1544 Jun 21 19:56 hc_nike1.dat 
-rw-r--r-- 1 oracle oinstall 2851 Jun 21 19:56 init.ora 
-rw-r----- 1 oracle oinstall 1536 Jun 21 19:56 orapwDBUA5216639
[oracle@rac2 dbs]$
[oracle@rac2 dbs]$ cat initnike2.ora 
SPFILE='+DATA1/nike/spfilenike.ora' 
[oracle@rac2 dbs]$ 
[oracle@rac2 dbs]$ echo "nike2:/u01/app/oracle/product/11.2.0/db_1:N" >> /etc/oratab
[oracle@rac2 dbs]$ echo "nike:/u01/app/oracle/product/11.2.0/db_1:N" >> /etc/oratab

cat /etc/oratab
..
#
+ASM2:/u01/app/11.2.0/grid:N            # line added by Agent
nike2:/u01/app/oracle/product/11.2.0/db_1:N
nike:/u01/app/oracle/product/11.2.0/db_1:N


[oracle@rac2 ~]$ mkdir -p /u01/app/oracle/admin/nike/adump
[oracle@rac2 ~]$ mkdir -p /u01/app/oracle/admin/nike/dpdump
[oracle@rac2 ~]$ mkdir -p /u01/app/oracle/admin/nike/hdump
[oracle@rac2 ~]$ mkdir -p /u01/app/oracle/admin/nike/pfile


ii) Add redo thread

On RAC1, As ORACLE HOME owner

SQL> set lines 180
SQL> col MEMBER for a60
SQL> select b.thread#, a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;

   THREAD#     GROUP# MEMBER                                                            BYTES
---------- ---------- ------------------------------------------------------------ ----------
         1          2 +DATA1/nike/onlinelog/group_2.290.847761035                    52428800
         1          1 +DATA1/nike/onlinelog/group_1.289.847761031                    52428800

SQL>

SQL> alter database add logfile thread 2 group 3 ('+DATA1') size 52428800, group 4 ('+DATA1') size 52428800;

Database altered.

SQL>

SQL> select b.thread#, a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;

   THREAD#     GROUP# MEMBER                                                            BYTES
---------- ---------- ------------------------------------------------------------ ----------
         1          2 +DATA1/nike/onlinelog/group_2.290.847761035                    52428800
         1          1 +DATA1/nike/onlinelog/group_1.289.847761031                    52428800
         2          3 +DATA1/nike/onlinelog/group_3.271.883012175                    52428800
         2          4 +DATA1/nike/onlinelog/group_4.270.883012181                    52428800

SQL>

SQL> alter database enable public thread 2;

Database altered.

SQL>
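
Optionally, confirm the new thread state (a quick check; a publicly enabled thread shows ENABLED = PUBLIC):

SQL> select thread#, status, enabled from gv$thread;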


iii) Add undo tablespace

On node RAC1

SQL> set pages 0
SQL> set long 9999999
SQL> select dbms_metadata.get_ddl('TABLESPACE','UNDOTBS1') from dual;

  CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
   ALTER DATABASE DATAFILE
  '+DATA1/nike/datafile/undotbs1.357.847760839' RESIZE 41943040


SQL> create undo tablespace undotbs2 datafile '+DATA1' size 25M autoextend on next 5m maxsize 40M;

Tablespace created.

SQL>

SQL> alter system set undo_tablespace=undotbs2 scope=spfile sid='nike2';

System altered.

SQL> alter system set instance_number=2 scope=spfile sid='nike2';

System altered.

SQL> alter system set thread=2 scope=spfile sid='nike2';

System altered.

SQL>

SQL> alter system set cluster_database_instances=2 scope=spfile sid='*';

System altered.

SQL>

SQL> select inst_id,name,value from gv$parameter where name like 'undo_table%';

INST_ID NAME                 VALUE
------- -------------------- ---------------
      2 undo_tablespace      UNDOTBS2
      1 undo_tablespace      UNDOTBS1

SQL>


iv) Add instance to OCR

From node RAC1 as ORACLE_HOME owner

[oracle@rac1 ~]$ which srvctl
/u01/app/oracle/product/11.2.0/db_1/bin/srvctl   <<< Run this from the RDBMS Home
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl add instance -d nike -i nike2 -n rac2
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl status database -d nike -v
Instance nike1 is running on node rac1 with online services nike_srv. Instance status: Open.
Instance nike2 is not running on node rac2 <<<
[oracle@rac1 ~]$ 
[oracle@rac1 ~]$ srvctl start instance -d nike -i nike2
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl status database -d nike -v
Instance nike1 is running on node rac1 with online services nike_srv. Instance status: Open.
Instance nike2 is running on node rac2. Instance status: Open. <<<
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl config database -d nike
Database unique name: nike
Database name: nike
Oracle home: /u01/app/oracle/product/11.2.0/db_1 <<<
Oracle user: oracle
Spfile: +DATA1/nike/spfilenike.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: nike
Database instances: nike1,nike2  <<<<
Disk Groups: DATA1
Mount point paths:
Services: nike_srv <<
Type: RAC
Database is administrator managed <<<
[oracle@rac1 ~]$

SQL> col host_name format a22
SQL> set lines 180
SQL> select host_name, inst_id, instance_name, status, to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS') as "START_TIME" from gv$instance order by inst_id;

HOST_NAME                 INST_ID INSTANCE_NAME    STATUS       START_TIME
---------------------- ---------- ---------------- ------------ --------------------
rac1.rajasekhar.com             1 nike1            OPEN         21-JUN-2015 11:38:48
rac2.rajasekhar.com             2 nike2            OPEN         22-JUN-2015 01:25:13

SQL>


v) Add New Instance to service via srvctl or you can add via dbca

From node RAC1, as ORACLE_HOME owner

[oracle@rac1 ~]$ which srvctl
/u01/app/oracle/product/11.2.0/db_1/bin/srvctl   <<< Run this from the RDBMS Home
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl add service -d nike -s nike_srv -a nike2 -u  <<<< -a means available instance.
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl config service -d nike -s nike_srv
Service name: nike_srv <<<
Service is enabled
Server pool: nike_nike_srv
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: nike1
Available instances: nike2 <<<<
[oracle@rac1 ~]$

Note: If you want to add the instance as a preferred instance, follow the steps below.

[oracle@rac1 ~]$ which srvctl
/u01/app/oracle/product/11.2.0/db_1/bin/srvctl   <<< Run this from the RDBMS Home
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl add service -d nike -s nike_srv -r nike2 -u
[oracle@rac1 ~]$ srvctl config service -d nike -s nike_srv
Service name: nike_srv
Service is enabled
Server pool: nike_nike_srv
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: nike2,nike1 <<<
Available instances:
[oracle@rac1 ~]$ srvctl status service -d nike
Service nike_srv is running on instance(s) nike1 <<<< 
[oracle@rac1 ~]$ srvctl start service -d nike
[oracle@rac1 ~]$ srvctl status service -d nike -v
Service nike_srv is running on instance(s) nike1,nike2 <<<<
[oracle@rac1 ~]$

Note: Modify tnsnames.ora file if required.
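
A minimal tnsnames.ora entry for the service, assuming the SCAN name and port used in this setup (adjust for your environment):

NIKE_SRV =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.rajasekhar.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = nike_srv)
    )
  )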


vi) Check the cluster stack

[oracle@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA1.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[oracle@rac1 ~]$

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally; however, we do not guarantee that it will work for you. Ensure that you run it in a test environment before using it.

This page is still under construction !!! 🙂

Delete Node

Delete Node from Cluster in 11gR2 (11.2.0.3)

0. Environment

1. Remove Oracle Instance

i) Remove Instance from OEM Database Control Monitoring
ii) Backup OCR
iii) Remove instance name from services
iv) Remove Instance from the Cluster Database

2. Remove Oracle Database Software

i) Verify Listener Not Running in Oracle Home
ii) Update Oracle Inventory – (Node Being Removed)
iii) Remove instance nike2 entry from /etc/oratab
iv) De-install Oracle Home (Non-shared Oracle Home)
v) Update Oracle Inventory – (All Remaining Nodes)

3. Remove Node from Clusterware

i) Unpin Node
ii) Disable Oracle Clusterware
iii) Delete Node from Clusterware Configuration
iv) Update Oracle Inventory – (Node Being Removed) for GI Home
v) De-install Oracle Grid Infrastructure Software (Non-shared GI Home)
vi) After the de-install completes, verify that the /etc/inittab file does not start Oracle Clusterware.
vii) Update Oracle Inventory – (All Remaining Nodes)
viii) Verify New Cluster Configuration


0. Environment:

– Two Node RAC version 11.2.0.3
– Node Name: RAC1, RAC2
– OS: RHEL 5
– Database name: nike and instances are nike1 and nike2
– The existing Oracle RAC database is administrator-managed (not policy-managed).
– The existing Oracle RAC does not use shared Oracle homes for the Grid Infrastructure or Database software.

Task: We are going to delete node RAC2 from cluster.

Cluster status
===============
[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA1.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
      2        OFFLINE OFFLINE
ora.oc4j
      1        ONLINE  OFFLINE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]# 


1. Remove Oracle Instance


i) Remove Instance from OEM Database Control Monitoring

OEM Database Control was not configured in this environment, so this step is skipped; the commands are shown for reference.
From: Node RAC1
Note: Run the emca command from any node in the cluster, except from the node where the instance we want to stop from being monitored is running.

emctl status dbconsole
emctl status agent
emca -displayConfig dbcontrol -cluster
emca -deleteInst db


ii) Backup OCR
From: Node RAC1

[root@rac1 ~]# ocrconfig -manualbackup
rac1     2015/06/19 23:38:03     /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150619_233803.ocr
[root@rac1 ~]#
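
To list the existing automatic and manual OCR backups (paths and timestamps will differ in your environment):

[root@rac1 ~]# ocrconfig -showbackup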

Note: Voting disks are automatically backed up in OCR after the changes we will be making to the cluster.


iii) Remove instance name from services
From node RAC1
Note:
Before deleting an instance from an Oracle RAC database, use either SRVCTL or Oracle Enterprise Manager to do the following:
If you have services configured, then relocate them (see the example below)
Modify the services so that each service can run on one remaining instance
Ensure that the instance to be removed from an administrator-managed database is neither a preferred nor an available instance of any service
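
For example, had the service been running on instance nike2, it could have been relocated to nike1 before the delete (a sketch using this cluster's names):

[oracle@rac1 ~]$ srvctl relocate service -d nike -s nike_srv -i nike2 -t nike1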

[oracle@rac1 ~]$ srvctl status service -d nike -s nike_srv -v
Service nike_srv is running on instance(s) nike1  <<<< The service is running only on instance nike1, so there is no issue here. If it were running on instance nike2, we would need to relocate it before deleting the instance.
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl config service -d nike -s nike_srv -v
Service name: nike_srv
Service is enabled
Server pool: nike_nike_srv
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: nike1  
Available instances: nike2 <<< Instance nike2 is listed as an available instance; it must be removed from the service configuration.
[oracle@rac1 ~]$

[oracle@rac1 ~]$ srvctl modify service -d nike -s nike_srv -n -i nike1 <<<
[oracle@rac1 ~]$ srvctl config service -d nike -s nike_srv -v
Service name: nike_srv
Service is enabled
Server pool: nike_nike_srv
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: nike1  
Available instances:  <<<< the instance nike2 entry was removed by "srvctl modify service -d nike -s nike_srv -n -i nike1"
[oracle@rac1 ~]$

[oracle@rac1 ~]$ srvctl status service -d nike -s nike_srv -v
Service nike_srv is running on instance(s) nike1  <<<
[oracle@rac1 ~]$


iv) Remove Instance from the Cluster Database
From Node RAC1 as Oracle Home owner.

[oracle@rac1 ~]$ srvctl config database -d nike -v
Database unique name: nike
Database name: nike
Oracle home: /u01/app/oracle/product/11.2.0/db_1 <<<<<
Oracle user: oracle
Spfile: +DATA1/nike/spfilenike.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: nike
Database instances: nike1,nike2 <<<<
Disk Groups: DATA1
Mount point paths:
Services: nike_srv
Type: RAC
Database is administrator managed <<<< This is Admin managed database.
[oracle@rac1 ~]$

[oracle@rac1 ~]$ dbca -silent -deleteInstance -nodeList rac2 -gdbName nike -instanceName nike2 -sysDBAUserName sys -sysDBAPassword sys
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/nike.log" for further details.
[oracle@rac1 ~]$

[oracle@rac1 ~]$ srvctl config database -d nike -v
Database unique name: nike
Database name: nike
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA1/nike/spfilenike.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: nike
Database instances: nike1 <<<<<< instance nike2 removed. 
Disk Groups: DATA1 
Mount point paths: 
Services: nike_srv 
Type: RAC
Database is administrator managed
[oracle@rac1 ~]$ 
SQL> select inst_id, instance_name, status, to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS') as "START_TIME" from gv$instance order by inst_id;
   INST_ID INSTANCE_NAME    STATUS       START_TIME
---------- ---------------- ------------ --------------------
         1 nike1            OPEN         19-JUN-2015 01:15:39  <<<< Instance is removed from the cluster. 
SQL>


2. Remove Oracle Database Software

i) Verify Listener Not Running in Oracle Home >>> This step can be skipped here because no listener is running from the RDBMS HOME.

From Node RAC2

[oracle@rac2 ~]$ ps -ef | grep tns
root         9     2  0 Jun19 ?        00:00:00 [netns]
oracle    4372     1  0 Jun19 ?        00:00:01 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit  <<< Listener is running from the GI Home.
oracle    4408     1  0 Jun19 ?        00:00:01 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle   11983 11943  0 00:43 pts/1    00:00:00 grep tns
[oracle@rac2 ~]$

[oracle@rac2 ~]$ srvctl config listener -a (If listener is running from GI Home then ignore this step)
Name: LISTENER
Network: 1, Owner: oracle
Home: 
  /u01/app/11.2.0/grid on node(s) rac1,rac2 
End points: TCP:1521
[oracle@rac2 ~]$

Note: If any listeners were explicitly created to run from the Oracle home being removed, they would need to be disabled and stopped (placeholders shown):
srvctl disable listener -l <listener_name> -n <node_name>
srvctl stop listener -l <listener_name> -n <node_name>


ii) Update Oracle Inventory – (Node Being Removed)

From node RAC2

[oracle@rac2 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac2}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac2 bin]$


iii) Remove instance nike2 entry from /etc/oratab

From node RAC2

+ASM2:/u01/app/11.2.0/grid:N            # line added by Agent >> Remove all database instance entries from /etc/oratab, keeping the ASM entry.
[oracle@rac2 ~]$
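
One way to remove the nike2 line, keeping a backup first (a sketch; run as a user with write access to /etc/oratab):

[root@rac2 ~]# cp /etc/oratab /tmp/oratab.bak
[root@rac2 ~]# sed -i '/^nike2:/d' /etc/oratab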


iv) De-install Oracle Home (Non-shared Oracle Home)

From Node RAC2 as Oracle Home owner

[oracle@rac2 ~]$ cd $ORACLE_HOME/deinstall
[oracle@rac2 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: rac2
Checking for sufficient temp space availability on node(s) : 'rac2'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2015-06-20_01-56-25-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2015-06-20_01-56-28-AM.log

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2015-06-20_01-56-32-AM.log

Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check4882.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac2
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac2', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/db_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.

No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y <<<<<
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2015-06-20_01-56-02-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2015-06-20_01-56-02-AM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2015-06-20_01-56-32-AM.log

Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2015-06-20_02-02-14-AM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2015-06-20_02-02-14-AM.log

De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean4882.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/db_1' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/11.2.0/grid'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2015-06-20_01-50-11AM' on node 'rac2'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/11.2.0/db_1' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[oracle@rac2 deinstall]$

Note: If this were a shared home then instead of de-installing the Oracle Database software, you would simply detach the Oracle home from the inventory.
./runInstaller -detachHome ORACLE_HOME=Oracle_home_location


v) Update Oracle Inventory – (All Remaining Nodes)
From Node RAC1

[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ pwd
/u01/app/oracle/product/11.2.0/db_1/oui/bin
[oracle@rac1 bin]$
[oracle@rac1 bin]$
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac1}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$


3. Remove Node from Clusterware


i) Unpin Node
As root from node RAC1

[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Pinned
[root@rac1 ~]# crsctl unpin css -n rac2
CRS-4667: Node rac2 successfully unpinned.
[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Unpinned <<<<
[root@rac1 ~]#

Note: If Cluster Synchronization Services (CSS) is not running on the node you are deleting, then the crsctl unpin css command in this step fails.


ii) Disable Oracle Clusterware

From node RAC2, which you want to delete
As user root.

[root@rac2 ~]# cd /u01/app/11.2.0/grid/crs/install/
[root@rac2 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.2.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.2.103/192.168.2.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.2.104/192.168.2.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2613: Could not find resource 'ora.registry.acfs'.
CRS-4000: Command Stop failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'rac2'
CRS-2677: Stop of 'ora.DATA1.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
You have new mail in /var/spool/mail/root
[root@rac2 install]#


iii) Delete Node from Clusterware Configuration

From node RAC1
As root user

[root@rac1 ~]# crsctl delete node -n rac2
CRS-4661: Node rac2 successfully deleted.
[root@rac1 ~]#
[root@rac1 ~]# olsnodes -t -s
rac1    Active  Pinned
[root@rac1 ~]#
[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
ora.DATA1.dg
               ONLINE  ONLINE       rac1
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
ora.asm
               ONLINE  ONLINE       rac1                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
ora.net1.network
               ONLINE  ONLINE       rac1
ora.ons
               ONLINE  ONLINE       rac1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.nike.db
      1        ONLINE  ONLINE       rac1                     Open
ora.nike.nike_srv.svc
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  OFFLINE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#


iv) Update Oracle Inventory – (Node Being Removed) for GI Home

From node RAC2, which we want to remove
As GI home owner

[oracle@rac2 ~]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac2}" CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac2 bin]$


v) De-install Oracle Grid Infrastructure Software (Non-shared GI Home)

From node RAC2, which we want to delete
As GI Home owner

[oracle@rac2 deinstall]$ pwd
/u01/app/11.2.0/grid/deinstall
[oracle@rac2 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2015-06-20_05-14-18AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac2
Checking for sufficient temp space availability on node(s) : 'rac2'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2015-06-20_05-14-18AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac2"[rac2-vip]
 >
[ENTER]
The following information can be collected by running "/sbin/ifconfig -a" on node "rac2"
Enter the IP netmask of Virtual IP "192.168.2.104" on node "rac2"[255.255.255.0]
 >
[ENTER]
Enter the network interface name on which the virtual IP address "192.168.2.104" is active
 >
[ENTER]
Enter an address or the name of the virtual IP[]
 >
[ENTER]
Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2015-06-20_05-14-18AM/logs/netdc_check2015-06-20_05-43-09-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER_1,LISTENER,LISTENER_SCAN2,LISTENER_SCAN1]:LISTENER

At least one listener from the discovered listener list [LISTENER_1,LISTENER,LISTENER_SCAN2,LISTENER_SCAN1] is missing in the specified listener list [LISTENER]. The Oracle home will be cleaned up, so all the listeners will not be available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2015-06-20_05-14-18AM/logs/asmcadc_check2015-06-20_05-44-06-AM.log


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac2
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac2', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2015-06-20_05-14-18AM/logs/deinstall_deconfig2015-06-20_05-34-12-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2015-06-20_05-14-18AM/logs/deinstall_deconfig2015-06-20_05-34-12-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2015-06-20_05-14-18AM/logs/asmcadc_clean2015-06-20_05-44-25-AM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2015-06-20_05-14-18AM/logs/netdc_clean2015-06-20_05-44-25-AM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
    Stopping listener on node "rac2": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac2".

/tmp/deinstall2015-06-20_05-14-18AM/perl/bin/perl -I/tmp/deinstall2015-06-20_05-14-18AM/perl/lib -I/tmp/deinstall2015-06-20_05-14-18AM/crs/install /tmp/deinstall2015-06-20_05-14-18AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-06-20_05-14-18AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands


Run the above command as root on the specified node(s) from a different shell

[root@rac2 ~]# /tmp/deinstall2015-06-20_05-14-18AM/perl/bin/perl -I/tmp/deinstall2015-06-20_05-14-18AM/perl/lib -I/tmp/deinstall2015-06-20_05-14-18AM/crs/install /tmp/deinstall2015-06-20_05-14-18AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-06-20_05-14-18AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2015-06-20_05-14-18AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware          #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 ~]#

Once completed, press [ENTER] in the first shell session

Remove the directory: /tmp/deinstall2015-06-20_05-14-18AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done


Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2015-06-20_05-14-18AM' on node 'rac2'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.


Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac2' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac2' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

Note: If this were a shared home, then instead of de-installing the Grid Infrastructure software, you would simply detach the Grid home from the inventory.
./runInstaller -detachHome ORACLE_HOME=Grid_home_location
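
For the Grid home used in this article, a hypothetical invocation would be (verify the path first):
./runInstaller -detachHome ORACLE_HOME=/u01/app/11.2.0/grid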

[root@rac2 ~]# rm -rf /etc/oraInst.loc
[root@rac2 ~]# rm -rf /opt/ORCLfmap
[root@rac2 ~]# rm -rf /u01/app/11.2.0
[root@rac2 ~]# rm -rf /u01/app/oracle


vi) After the de-install completes, verify that the /etc/inittab file does not start Oracle Clusterware.

[root@rac2 ~]# diff /etc/inittab /etc/inittab.no_crs
[root@rac2 ~]#
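
If no saved copy of the file exists for comparison, a grep for the init.ohasd entry works as well (a sketch; an empty result means Clusterware is no longer started from inittab):

[root@rac2 ~]# grep -i ohasd /etc/inittab
[root@rac2 ~]#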


vii) Update Oracle Inventory – (All Remaining Nodes)

From node RAC1
As GI Home owner

[oracle@rac1 ~]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={rac1}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 2036 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$


viii) Verify New Cluster Configuration

[oracle@rac1 ~]$ cluvfy stage -post nodedel -n rac2 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac1"

CRS integrity check passed
Result:
Node removal check passed

Post-check for node removal was successful.
[oracle@rac1 ~]$

Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally; however, we do not guarantee that it will work for you. Ensure that you run it in a test environment before using it.

Page still under construction !!! 🙂

CLUVFY

Cluster Verification Utility Command Reference

1. Pre-check for CRS installation
2. Post-Check for CRS Installation
3. Post-check for hardware and operating system
4. Pre-check for ACFS Configuration
5. Post-check for ACFS Configuration
6. Pre-check for OCFS2 or OCFS
7. Post-check for OCFS2 or OCFS
8. Pre-check for database configuration
9. Pre-check for database installation
10. Pre-check for configuring Oracle Restart
11. Post-check for configuring Oracle Restart
12. Pre-check for add node
13. Post-check for add node
14. Post-check for node delete
15. Check ACFS integrity
16. Checks user accounts and administrative permissions
17. Check ASM integrity
18. Check CFS integrity
19. Check Clock Synchronization
20. Check cluster integrity
21. Check cluster manager integrity
22. Check CRS integrity
23. Check DHCP
24. Check DNS
25. Check HA integrity
26. Check space availability
27. Check GNS
28. Check GPNP
29. Check healthcheck
30. Checks node applications existence
31. Check node connectivity
32. Checks reachability between nodes
33. Check OCR integrity
34. Check OHASD integrity
35. Check OLR integrity
36. Check node comparison and verification
37. Checks SCAN configuration
38. Checks software component verification
39. Checks space availability
40. Checks shared storage accessibility
41. Check minimum system requirements
42. Check Voting Disk Udev settings
43. Run cluvfy before doing an upgrade
44. strace the command to get more details

~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ***** ~~~~~~~~~~~~~~~
cluvfy stage {-pre|-post} stage_name stage_specific_options [-verbose]
~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ***** ~~~~~~~~~~~~~~~

1. Pre-check for CRS installation


Use the cluvfy stage -pre crsinst command to check the specified nodes before installing Oracle Clusterware. CVU performs additional checks on OCR and voting disks if you specify the -c and -q options.
 
cluvfy stage -pre crsinst -n node1,node2 -verbose


2. Post-Check for CRS Installation


Use the cluvfy stage -post crsinst command to check the specified nodes after installing Oracle Clusterware.
 
cluvfy stage -post crsinst -n node1,node2 -verbose


3. Post-check for hardware and operating system


-- Use the cluvfy stage -post hwos stage verification command to perform network and storage verifications on the specified nodes in the cluster before installing Oracle software. This command also checks for supported storage types and checks each one for sharing.
 
cluvfy stage -post hwos -n node_list [-s storageID_list] [-verbose]
cluvfy stage -post hwos -n node1,node2 -verbose


4. Pre-check for ACFS Configuration


-- Use the cluvfy stage -pre acfscfg command to verify that your cluster nodes are set up correctly before configuring Oracle ASM Cluster File System (Oracle ACFS).
 
cluvfy stage -pre acfscfg -n node_list [-asmdev asm_device_list] [-verbose]
cluvfy stage -pre acfscfg -n node1,node2 -verbose


5. Post-check for ACFS Configuration


-- Use the cluvfy stage -post acfscfg command to check an existing cluster after you configure Oracle ACFS.
 
cluvfy stage -post acfscfg -n node_list [-verbose]
cluvfy stage -post acfscfg -n node1,node2 -verbose


6. Pre-check for OCFS2 or OCFS


-- Use the cluvfy stage -pre cfs stage verification command to verify that your cluster nodes are set up correctly before setting up OCFS2 or OCFS for Windows.
 
cluvfy stage -pre cfs -n node_list -s storageID_list [-verbose]
cluvfy stage -pre cfs -n node1,node2 -verbose


7. Post-check for OCFS2 or OCFS


-- Use the cluvfy stage -post cfs stage verification command to perform the appropriate checks on the specified nodes after setting up OCFS2 or OCFS for Windows.
 
cluvfy stage -post cfs -n node_list -f file_system [-verbose]
cluvfy stage -post cfs -n node1,node2 -verbose


8. Pre-check for database configuration


-- Use the cluvfy stage -pre dbcfg command to check the specified nodes before configuring an Oracle RAC database, to verify whether your system meets all of the criteria for creating a database or for making a database configuration change.
 
cluvfy stage -pre dbcfg -n node_list -d Oracle_home [-fixup [-fixupdir fixup_dir]] [-verbose]
cluvfy stage -pre dbcfg -n node1,node2 -d Oracle_home -verbose


9. Pre-check for database installation


-- Use the cluvfy stage -pre dbinst command to check the specified nodes before installing or creating an Oracle RAC database, to verify that your system meets all of the criteria for installing or creating an Oracle RAC database.
 
cluvfy stage -pre dbinst -n node_list [-r {10gR1 | 10gR2 | 11gR1 | 11gR2}] [-osdba osdba_group] [-d Oracle_home] [-fixup [-fixupdir fixup_dir]] [-verbose]
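
For example (node names and release are placeholders):
cluvfy stage -pre dbinst -n node1,node2 -r 11gR2 -verbose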


10. Pre-check for configuring Oracle Restart


-- Use the cluvfy stage -pre hacfg command to check a local node before configuring Oracle Restart.
 
cluvfy stage -pre hacfg [-osdba osdba_group] [-orainv orainventory_group] [-fixup [-fixupdir fixup_dir]] [-verbose]
cluvfy stage -pre hacfg -verbose 


11. Post-check for configuring Oracle Restart


-- Use the cluvfy stage -post hacfg command to check the local node after configuring Oracle Restart.
 
cluvfy stage -post hacfg [-verbose]
cluvfy stage -post hacfg -verbose


12. Pre-check for add node


/*
Use the cluvfy stage -pre nodeadd command to verify that the specified nodes are configured correctly before adding them to your existing cluster, and to verify the integrity of the cluster before you add the nodes.

This command verifies that the system configuration, such as the operating system version, software patches, packages, and kernel parameters, for the nodes that you want to add is compatible with the existing cluster nodes, and that the clusterware is successfully operating on the existing nodes. Run this command on any node of the existing cluster.
*/
 
cluvfy stage -pre nodeadd -n node_list [-vip vip_list]  [-fixup [-fixupdir fixup_dir]] [-verbose]
cluvfy stage -pre nodeadd -n node1,node2 -verbose


13. Post-check for add node


/*
Use the cluvfy stage -post nodeadd command to verify that the specified nodes have been successfully added to the cluster at the network, shared storage, and clusterware levels.
*/
 
cluvfy stage -post nodeadd -n node_list [-verbose]
cluvfy stage -post nodeadd -n node1,node2 -verbose


14. Post-check for node delete


/*
Use the cluvfy stage -post nodedel command to verify that specific nodes have been successfully deleted from a cluster. Typically, this command verifies that the node-specific interface configuration details have been removed, the nodes are no longer a part of cluster configuration, and proper Oracle ASM cleanup has been performed.
*/
 
cluvfy stage -post nodedel -n node_list [-verbose]
cluvfy stage -post nodedel -n node1,node2 -verbose

~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ***** ~~~~~~~~~~~~~~~
cluvfy comp component_name component_specific_options [-verbose]
~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ *****~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ***** ~~~~~~~~~~~~~~~


15. Check ACFS integrity


-- Use the cluvfy comp acfs component verification command to check the integrity of Oracle ASM Cluster File System on all nodes in a cluster.
 
cluvfy comp acfs [-n [node_list] | [all]] [-f file_system] [-verbose]
cluvfy comp acfs -n node1,node2 -f /acfs/share -verbose


16. Checks user accounts and administrative permissions


/*
Use the cluvfy comp admprv command to verify user accounts and administrative permissions for installing Oracle Clusterware and Oracle RAC software, and for creating an Oracle RAC database or modifying an Oracle RAC database configuration.
*/
 
cluvfy comp admprv [-n node_list]
{ -o user_equiv [-sshonly] |
 -o crs_inst [-orainv orainventory_group] |
 -o db_inst [-osdba osdba_group] [-fixup [-fixupdir fixup_dir]] | 
 -o db_config -d oracle_home [-fixup [-fixupdir fixup_dir]] }
 [-verbose]
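
For example, to check only SSH user equivalence between two nodes (node names are placeholders):
cluvfy comp admprv -n node1,node2 -o user_equiv -sshonly -verbose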


17. Check ASM integrity


Use the cluvfy comp asm component verification command to check the integrity of Oracle Automatic Storage Management (Oracle ASM) on all nodes in the cluster. This check ensures that the ASM instances on the specified nodes are running from the same Oracle home and that asmlib, if it exists, has a valid version and ownership.
 
cluvfy comp asm [-n node_list | all ] [-verbose]
cluvfy comp asm -n node1,node2 -verbose


18. Check CFS integrity


Use the cluvfy comp cfs component verification command to check the integrity of the clustered file system (OCFS for Windows or OCFS2) you provide using the -f option. CVU checks the sharing of the file system from the nodes in the node list.
 
cluvfy comp cfs [-n node_list] -f file_system [-verbose]
cluvfy comp cfs -n node1,node2 -f /ocfs2/share -verbose


19. Check Clock Synchronization


Use the cluvfy comp clocksync component verification command to check clock synchronization across all the nodes in the node list. CVU verifies that a time synchronization service is running (Oracle Cluster Time Synchronization Service (CTSS) or Network Time Protocol (NTP)), that each node is using the same reference server for clock synchronization, and that the time offset for each node is within permissible limits.
 
cluvfy comp clocksync [-noctss] [-n node_list [all]] [-verbose]
cluvfy comp clocksync -n node1,node2 -verbose

-noctss
If you specify this option, then CVU does not perform a check on CTSS. Instead, CVU checks the platform's native time synchronization service, such as NTP.


20. Check cluster integrity


Use the cluvfy comp clu component verification command to check the integrity of the cluster on all the nodes in the node list.
 
cluvfy comp clu [-n node_list] [-verbose]
cluvfy comp clu -n node1,node2 -verbose


21. Check cluster manager integrity


Use the cluvfy comp clumgr component verification command to check the integrity of cluster manager subcomponent, or Oracle Cluster Synchronization Services (CSS), on all the nodes in the node list.
 
cluvfy comp clumgr [-n node_list] [-verbose]
cluvfy comp clumgr -n node1,node2 -verbose


22. Check CRS integrity


Run the cluvfy comp crs component verification command to check the integrity of the Cluster Ready Services (CRS) daemon on the specified nodes.
 
cluvfy comp crs [-n node_list] [-verbose]
cluvfy comp crs -n node1,node2 -verbose


23. Check DHCP


Starting with Oracle Database 11g release 2 (11.2.0.2), use the cluvfy comp dhcp component verification command to verify that the DHCP server exists on the network and is capable of providing a required number of IP addresses. This check also verifies the DHCP server's response time. You must run this command as root.
 
# cluvfy comp dhcp -clustername cluster_name [-vipresname vip_resource_name] [-port dhcp_port] [-n node_list] [-verbose]

-clustername cluster_name
The name of the cluster whose DHCP configuration you want to check.

-vipresname vip_resource_name
The name of the VIP resource.

-port dhcp_port
The port on which DHCP listens. The default port is 67.
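
For example (the cluster name is a placeholder; run as root):
# cluvfy comp dhcp -clustername mycluster -verbose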


24. Check DNS


Starting with Oracle Database 11g release 2 (11.2.0.2), use the cluvfy comp dns component verification command to verify that the Grid Naming Service (GNS) subdomain delegation has been properly set up in the Domain Name Service (DNS) server.
 
Run cluvfy comp dns -server on one node of the cluster. On each node of the cluster run cluvfy comp dns -client to verify the DNS server setup for the cluster.
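
A sketch of the two invocations (the domain and VIP values are placeholders; confirm the exact options for your version with cluvfy comp dns -help):
cluvfy comp dns -server -domain gns.example.com -vipaddress 192.168.2.110 -verbose
cluvfy comp dns -client -domain gns.example.com -vip 192.168.2.110 -verbose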


25. Check HA integrity


Use the cluvfy comp ha component verification command to check the integrity of Oracle Restart on the local node.
 
cluvfy comp ha [-verbose]
cluvfy comp ha -verbose


26. Check space availability


Use the cluvfy comp freespace component verification command to check the free space available in the Oracle Clusterware home storage and ensure that there is at least 5% of the total space available. For example, if the total storage is 10GB, then the check ensures that at least 500MB of it is free.
 
cluvfy comp freespace [-n node_list | all]
cluvfy comp freespace -n node1,node2


27. Check GNS


Use the cluvfy comp gns component verification command to verify the integrity of the Oracle Grid Naming Service (GNS) on the cluster.
 
cluvfy comp gns -precrsinst -domain gns_domain -vip gns_vip [-n node_list]  [-verbose]

cluvfy comp gns -postcrsinst [-verbose]


28. Check GPNP


Use the cluvfy comp gpnp component verification command to check the integrity of Grid Plug and Play on all of the nodes in a cluster.
 
cluvfy comp gpnp [-n node_list] [-verbose]
cluvfy comp gpnp -n node1,node2 -verbose


29. Check healthcheck


Use the cluvfy comp healthcheck component verification command to check your Oracle Clusterware and Oracle Database installations for their compliance with mandatory requirements and best practices guidelines, and to ensure that they are functioning properly.
 
cluvfy comp healthcheck [-collect {cluster|database}] [-db db_unique_name]
 [-bestpractice|-mandatory] [-deviations] [-html] [-save [-savedir directory_path]]
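
For example, to collect cluster-wide best-practice findings and save an HTML report (a sketch):
cluvfy comp healthcheck -collect cluster -bestpractice -html -save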


30. Checks node applications existence


Use the cluvfy comp nodeapp component verification command to check for the existence of node applications, namely VIP, NETWORK, ONS, and GSD, on all of the specified nodes.
 
cluvfy comp nodeapp [-n node_list] [-verbose]
cluvfy comp nodeapp -n node1,node2 -verbose


31. Check node connectivity


Use the cluvfy comp nodecon component verification command to check the connectivity among the nodes specified in the node list. If you provide an interface list, then CVU checks the connectivity using only the specified interfaces.
 
cluvfy comp nodecon -n node_list [-i interface_list] [-verbose]
cluvfy comp nodecon -i eth2 -n node1,node2 -verbose
cluvfy comp nodecon -i eth3 -n node1,node2 -verbose


32. Checks reachability between nodes


Use the cluvfy comp nodereach component verification command to check the reachability of specified nodes from a source node.
 
cluvfy comp nodereach -n node_list [-srcnode node] [-verbose]

-srcnode node
The name of the source node from which CVU performs the reachability test. If you do not specify a source node, then the node on which you run the command is used as the source node.
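
For example (node names are placeholders):
cluvfy comp nodereach -n node1,node2 -srcnode node1 -verbose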


33. Check OCR integrity


Use the cluvfy comp ocr component verification command to check the integrity of Oracle Cluster Registry (OCR) on all the specified nodes.
 
cluvfy comp ocr [-n node_list] [-verbose]
cluvfy comp ocr -n node1,node2 -verbose


34. Check OHASD integrity


Use the cluvfy comp ohasd component verification command to check the integrity of the Oracle High Availability Services daemon.
 
cluvfy comp ohasd [-n node_list] [-verbose]
cluvfy comp ohasd -n node1,node2 -verbose


35. Check OLR integrity


Use the cluvfy comp olr component verification command to check the integrity of Oracle Local Registry (OLR) on the local node.
 
cluvfy comp olr [-verbose]
cluvfy comp olr -verbose


36. Check node comparison and verification


Use the cluvfy comp peer component verification command to check the compatibility and properties of the specified nodes against a reference node. You can check compatibility for non-default user group names and for different releases of the Oracle software. This command compares physical attributes, such as memory and swap space, as well as user and group values, kernel settings, and installed operating system packages.
 
cluvfy comp peer -n node_list [-refnode node]  [-r {10gR1 | 10gR2 | 11gR1 | 11gR2}] [-orainv orainventory_group]  [-osdba osdba_group] [-verbose]

-refnode
The node that CVU uses as a reference for checking compatibility with other nodes. If you do not specify this option, then CVU reports values for all the nodes in the node list.
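
For example, to compare a candidate node against an existing reference node (node names are placeholders):
cluvfy comp peer -n node2 -refnode node1 -r 11gR2 -verbose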


37. Checks SCAN configuration


Use the cluvfy comp scan component verification command to check the Single Client Access Name (SCAN) configuration.
 
cluvfy comp scan -verbose


38. Checks software component verification


Use the cluvfy comp software component verification command to check the files and attributes installed with the Oracle software.
 
cluvfy comp software [-n node_list] [-d oracle_home] [-r {10gR1 | 10gR2 | 11gR1 | 11gR2}] [-verbose]
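
For example (home path and node names are placeholders):
cluvfy comp software -n node1,node2 -d /u01/app/oracle/product/11.2.0/db_1 -r 11gR2 -verbose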


39. Checks space availability


Use the cluvfy comp space component verification command to check for free disk space at the location you specify in the -l option on all the specified nodes.
 
cluvfy comp space [-n node_list] -l storage_location -z disk_space {B | K | M | G} [-verbose]

cluvfy comp space -n all -l /u01/oracle -z 2g -verbose


40. Checks shared storage accessibility


Use the cluvfy comp ssa component verification command to discover and check the sharing of the specified storage locations. CVU checks sharing for nodes in the node list.
 
cluvfy comp ssa [-n node_list] [-s storageID_list] [-t {software | data | ocr_vdisk}] [-verbose]

cluvfy comp ssa -n node1,node2 -verbose
cluvfy comp ssa -n node1,node2 -s /dev/sdb


41. Check minimum system requirements


Use the cluvfy comp sys component verification command to check that the minimum system requirements are met for the specified product on all the specified nodes.
 
cluvfy comp sys [-n node_list] -p {crs | ha | database}  [-r {10gR1 | 10gR2 | 11gR1 | 11gR2}] [-osdba osdba_group]  [-orainv orainventory_group] [-fixup [-fixupdir fixup_dir]] [-verbose]

cluvfy comp sys -n node1,node2 -p crs -verbose
cluvfy comp sys -n node1,node2 -p ha -verbose
cluvfy comp sys -n node1,node2 -p database -verbose


42. Check Voting Disk Udev settings


Use the cluvfy comp vdisk component verification command to check the voting disks configuration and the udev settings for the voting disks on all the specified nodes.
 
cluvfy comp vdisk [-n node_list] [-verbose]
cluvfy comp vdisk -n node1,node2 -verbose


43. Run cluvfy before doing an upgrade

runcluvfy stage -pre crsinst -upgrade -n node_list -rolling -src_crshome src_crshome_path -dest_crshome dest_crshome_path -dest_version dest_version -verbose
runcluvfy stage -pre crsinst -upgrade -n rac1,rac2 -rolling -src_crshome /u01/app/grid/11.2.0.1 -dest_crshome /u01/app/grid/11.2.0.3 -dest_version 11.2.0.4.0 -verbose


44. Strace the command

Strace the command to get more details
e.g.: strace -t -f -o clu.trc cluvfy comp olr -verbose
/*
[oracle@rac1 ~]$ strace -t -f -o clu.trc cluvfy comp olr -verbose

Verifying OLR integrity

Checking OLR integrity...

Checking OLR config file...

OLR config file check successful


Checking OLR file attributes...

OLR file check successful


WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

OLR integrity check passed

Verification of OLR integrity was successful.
[oracle@rac1 ~]$ ls -ltr clu.trc
-rw-r--r-- 1 oracle oinstall 4206376 Jun 12 01:15 clu.trc
[oracle@rac1 ~]$

*/

Reference:
http://docs.oracle.com/cd/E11882_01/rac.112/e41959/cvu.htm#CWADD1100

Page still under construction !!!