Delete Node Without Removing GI and RDBMS Binaries
0. Environment
1. Backup OCR
2. Check status of service
3. Shutdown instance 2
4. Unpin Node
5. Disable Oracle Clusterware
6. Delete Node from Clusterware Configuration
7. Backup Inventory
8. Update Inventory for ORACLE_HOME
9. Update Inventory for GI_HOME
0. Environment
– Two node RAC, version 11.2.0.3
– Node names: RAC1, RAC2
– Database name: nike, instances: nike1, nike2. Admin-managed database.
– Service name: nike_srv
– OS: RHEL 5.7
Task: We are going to delete node RAC2 from the cluster without removing the GI and RDBMS binaries, because we want to add the node back later.
Current Status
===============
[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.DATA1.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.nike.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
ora.nike.nike_srv.svc
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE OFFLINE
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac2
ora.scan2.vip
1 ONLINE ONLINE rac1
ora.scan3.vip
1 ONLINE ONLINE rac1
[root@rac1 ~]#
1. Backup OCR
From node RAC1
As root
[root@rac1 ~]# ocrconfig -manualbackup
rac1 2015/06/23 02:39:07 /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150623_023907.ocr
rac1 2015/06/19 23:38:03 /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150619_233803.ocr
[root@rac1 ~]#
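If you want to double-check that the manual backup has been registered, you can list the recorded manual OCR backups (a quick sanity check; the output will differ in your environment):
[root@rac1 ~]# ocrconfig -showbackup manual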
2. Check status of service
From node RAC2
[oracle@rac2 ~]$ srvctl status service -d nike
Service nike_srv is running on instance(s) nike1
[oracle@rac2 ~]$
Note: Confirm where the service is currently running. If the service is running on instance 2, manually relocate it first:
srvctl relocate service -d <dbname> -s <service name> -i <old_inst> -t <new_inst>
Note that this does not disconnect any current sessions.
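For illustration only, using this post's names: if nike_srv had been running on nike2, the relocation back to nike1 would look like this (not needed here, since the service is already on nike1):
[oracle@rac2 ~]$ srvctl relocate service -d nike -s nike_srv -i nike2 -t nike1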
3. Shutdown instance 2
From node RAC2
[oracle@rac2 ~]$ srvctl stop instance -d nike -i nike2
[oracle@rac2 ~]$ srvctl status database -d nike
Instance nike1 is running on node rac1
Instance nike2 is not running on node rac2
[oracle@rac2 ~]$
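Stopping the instance does not change the database configuration; if you want to confirm that nike2 is still registered at this point, you can display the configuration (a quick check, output abbreviated here):
[oracle@rac2 ~]$ srvctl config database -d nike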
4. Unpin Node
From node RAC1
As root
[root@rac1 ~]# olsnodes -s -t
rac1 Active Pinned
rac2 Active Pinned
[root@rac1 ~]# crsctl unpin css -n rac2
CRS-4667: Node rac2 successfully unpinned.
[root@rac1 ~]# olsnodes -s -t
rac1 Active Pinned
rac2 Active Unpinned <<<<
[root@rac1 ~]#
Note: If Cluster Synchronization Services (CSS) is not running on the node you are deleting, the crsctl unpin css command in this step fails.
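If in doubt, you can verify that CSS is up on the node being removed before unpinning (run on rac2; this is just a precautionary check):
[root@rac2 ~]# crsctl check css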
5. Disable Oracle Clusterware
From node RAC2, the node you want to delete
As user root.
[root@rac2 ~]# cd /u01/app/11.2.0/grid/crs/install/
[root@rac2 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.2.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.2.103/192.168.2.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.2.104/192.168.2.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'rac2'
CRS-2677: Stop of 'ora.DATA1.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 install]#
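Note (for completeness): if the node being deconfigured were the last node left in the cluster, the documented 11.2 procedure adds the -lastnode flag, for example:
[root@rac2 install]# ./rootcrs.pl -deconfig -force -lastnode
That is not the case here, since rac1 remains in the cluster.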
6. Delete Node from Clusterware Configuration
From node RAC1
As root user
[root@rac1 ~]# crsctl delete node -n rac2
CRS-4661: Node rac2 successfully deleted.
[root@rac1 ~]#
[root@rac1 ~]# olsnodes -t -s
rac1 Active Pinned
[root@rac1 ~]#
As ORACLE_HOME owner, remove instance 2 from OCR
[oracle@rac1 ~]$ which srvctl
/u01/app/oracle/product/11.2.0/db_1/bin/srvctl << you should run from ORACLE_HOME
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl remove instance -d nike -i nike2 <<<
Remove instance from the database nike? (y/[n]) y
[oracle@rac1 ~]$
[oracle@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE rac1
ora.DATA1.dg
ONLINE ONLINE rac1
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ora.asm
ONLINE ONLINE rac1 Started
ora.gsd
OFFLINE OFFLINE rac1
ora.net1.network
ONLINE ONLINE rac1
ora.ons
ONLINE ONLINE rac1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.nike.db
1 ONLINE ONLINE rac1 Open
ora.nike.nike_srv.svc
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac1
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.scan1.vip
1 ONLINE ONLINE rac1
ora.scan2.vip
1 ONLINE ONLINE rac1
ora.scan3.vip
1 ONLINE ONLINE rac1
[oracle@rac1 ~]$
From node RAC2
[root@rac2 ~]# ps -ef | grep init
root 1 0 0 Jun22 ? 00:00:00 init [5]
root 9125 8531 0 03:30 pts/1 00:00:00 grep init
[root@rac2 ~]# ps -ef | grep d.bin
root 9127 8531 0 03:30 pts/1 00:00:00 grep d.bin
[root@rac2 ~]#
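Optionally, CVU can confirm the node deletion at this point (a sanity check run from the remaining node; the grid user and PATH are assumed to point at the GI home):
[oracle@rac1 ~]$ cluvfy stage -post nodedel -n rac2 -verbose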
7. Backup Inventory
From node RAC1
As root.
[root@rac1 ~]# cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
[root@rac1 ~]#
[root@rac1 ~]# cp -rp /u01/app/oraInventory /u01/app/oraInventory_bkp
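A quick way to confirm the copy is complete (no output means the backup matches the original):
[root@rac1 ~]# diff -r /u01/app/oraInventory /u01/app/oraInventory_bkp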
8. Update Inventory for ORACLE_HOME
From node RAC1
As ORACLE_HOME owner
[oracle@rac1 bin]$ cd /u01/app/oracle/product/11.2.0/db_1/oui/bin
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac1}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 5671 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$
9. Update Inventory for GI_HOME
From node RAC1
As GRID_HOME owner
[oracle@rac1 bin]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac1}" CRS=TRUE
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 5671 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$
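To confirm that both homes now list only rac1, you can inspect the node lists in the central inventory (path taken from /etc/oraInst.loc above; the exact XML layout may vary slightly by version):
[oracle@rac1 ~]$ grep -A3 NODE_LIST /u01/app/oraInventory/ContentsXML/inventory.xml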
Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally; however, we do not guarantee that it will work for you. Ensure that you run it in a test environment before using it in production.
This page is still under construction !!! 🙂