Delete Node from Cluster in 11gR2 (11.2.0.3)
1. Remove Database Instance
i) Remove Instance from OEM Database Control Monitoring
ii) Backup OCR
iii) Remove instance name from services
iv) Remove Instance from the Cluster Database
2. Remove Oracle Database Software
i) Verify Listener Not Running in Oracle Home
ii) Update Oracle Inventory – (Node Being Removed)
iii) Remove instance nike2 entry from /etc/oratab
iv) De-install Oracle Home (Non-shared Oracle Home)
v) Update Oracle Inventory – (All Remaining Nodes)
3. Remove Node from Clusterware
i) Unpin Node
ii) Disable Oracle Clusterware
iii) Delete Node from Clusterware Configuration
iv) Update Oracle Inventory – (Node Being Removed) for GI Home
v) De-install Oracle Grid Infrastructure Software (Non-shared GI Home)
vi) After the de-install completes, verify that the /etc/inittab file does not start Oracle Clusterware.
vii) Update Oracle Inventory – (All Remaining Nodes)
viii) Verify New Cluster Configuration
– Two Node RAC version 11.2.0.3
– Node Name: RAC1, RAC2
– OS: RHEL 5
– Database name: nike and instances are nike1 and nike2
– The existing Oracle RAC database is administrator-managed (not policy-managed).
– The existing Oracle RAC does not use shared Oracle homes for the Grid Infrastructure or Database software.
Task: We are going to delete node RAC2 from the cluster.
Cluster status
===============
[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.DATA1.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.nike.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
ora.nike.nike_srv.svc
1 ONLINE ONLINE rac1
2 OFFLINE OFFLINE
ora.oc4j
1 ONLINE OFFLINE
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac2
ora.scan2.vip
1 ONLINE ONLINE rac1
ora.scan3.vip
1 ONLINE ONLINE rac1
[root@rac1 ~]#
1. Remove Database Instance
i) Remove Instance from OEM Database Control Monitoring – DB Control is not configured in this environment, so this step is skipped.
From: Node RAC1
Note: Run the emca command from any node in the cluster except the node running the instance you want to stop monitoring.
emctl status dbconsole
emctl status agent
emca -displayConfig dbcontrol -cluster
emca -deleteInst db
ii) Backup OCR
From: Node RAC1
[root@rac1 ~]# ocrconfig -manualbackup
rac1 2015/06/19 23:38:03 /u01/app/11.2.0/grid/cdata/rac-scan/backup_20150619_233803.ocr
[root@rac1 ~]#
Note: Voting disks are automatically backed up in the OCR after the changes we will be making to the cluster.
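To confirm that the manual backup was recorded, you can list the manual OCR backups (a quick optional check):
[root@rac1 ~]# ocrconfig -showbackup manual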
iii) Remove instance name from services
From node RAC1
Note:
Before deleting an instance from an Oracle RAC database, use either SRVCTL or Oracle Enterprise Manager to do the following:
If you have services configured, then relocate the services
Modify the services so that each service can run on one remaining instance
Ensure that the instance to be removed from an administrator-managed database is neither a preferred nor an available instance of any service
[oracle@rac1 ~]$ srvctl status service -d nike -s nike_srv -v
Service nike_srv is running on instance(s) nike1   <<<< the service runs only on instance nike1, so nothing needs to be relocated here. If it were running on instance nike2, it would have to be relocated before deleting the instance (see the sketch after this listing).
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl config service -d nike -s nike_srv -v
Service name: nike_srv
Service is enabled
Server pool: nike_nike_srv
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: nike1
Available instances: nike2   <<< instance nike2 is listed as an available instance for the service; it must be removed from the service definition.
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl modify service -d nike -s nike_srv -n -i nike1   <<<
[oracle@rac1 ~]$ srvctl config service -d nike -s nike_srv -v
Service name: nike_srv
Service is enabled
Server pool: nike_nike_srv
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: nike1
Available instances:    <<<< the nike2 entry has been removed by "srvctl modify service -d nike -s nike_srv -n -i nike1"
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl status service -d nike -s nike_srv -v
Service nike_srv is running on instance(s) nike1   <<<
[oracle@rac1 ~]$
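For reference, relocating the service would be a single srvctl call; a minimal sketch under the assumption that nike_srv happened to be running on nike2:
srvctl relocate service -d nike -s nike_srv -i nike2 -t nike1     # move the running service from nike2 to nike1
srvctl status service -d nike -s nike_srv -v                      # confirm it now runs only on nike1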
iv) Remove Instance from the Cluster Database
From Node RAC1 as Oracle Home owner.
[oracle@rac1 ~]$ srvctl config database -d nike -v
Database unique name: nike
Database name: nike
Oracle home: /u01/app/oracle/product/11.2.0/db_1   <<<<<
Oracle user: oracle
Spfile: +DATA1/nike/spfilenike.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: nike
Database instances: nike1,nike2   <<<<
Disk Groups: DATA1
Mount point paths:
Services: nike_srv
Type: RAC
Database is administrator managed   <<<< this is an administrator-managed database.
[oracle@rac1 ~]$
[oracle@rac1 ~]$ dbca -silent -deleteInstance -nodeList rac2 -gdbName nike -instanceName nike2 -sysDBAUserName sys -sysDBAPassword sys
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/nike.log" for further details.
[oracle@rac1 ~]$
[oracle@rac1 ~]$ srvctl config database -d nike -v
Database unique name: nike
Database name: nike
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA1/nike/spfilenike.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: nike
Database instances: nike1   <<<<<< instance nike2 removed.
Disk Groups: DATA1
Mount point paths:
Services: nike_srv
Type: RAC
Database is administrator managed
[oracle@rac1 ~]$

SQL> select inst_id, instance_name, status, to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS') as "START_TIME" from gv$instance order by inst_id;

   INST_ID INSTANCE_NAME    STATUS       START_TIME
---------- ---------------- ------------ --------------------
         1 nike1            OPEN         19-JUN-2015 01:15:39   <<<< instance nike2 has been removed from the cluster database; only nike1 remains.

SQL>
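As an optional sanity check (not shown in the original run), you can also confirm from nike1 that DBCA dropped the redo thread and undo tablespace that belonged to nike2:
SQL> select thread#, status, enabled from v$thread;
SQL> select tablespace_name from dba_tablespaces where contents = 'UNDO';
Only thread 1 and the undo tablespace of nike1 should remain.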
2. Remove Oracle Database Software
i) Verify Listener Not Running in Oracle Home >>> This step can be skipped here because no listener runs from the RDBMS home.
From Node RAC2
[oracle@rac2 ~]$ ps -ef | grep tns
root         9     2  0 Jun19 ?        00:00:00 [netns]
oracle    4372     1  0 Jun19 ?        00:00:01 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit   <<< the listener is running from the GI home.
oracle    4408     1  0 Jun19 ?        00:00:01 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle   11983 11943  0 00:43 pts/1    00:00:00 grep tns
[oracle@rac2 ~]$
[oracle@rac2 ~]$ srvctl config listener -a      (if the listener runs from the GI home, this step can be ignored)
Name: LISTENER
Network: 1, Owner: oracle
Home: /u01/app/11.2.0/grid on node(s) rac1,rac2
End points: TCP:1521
[oracle@rac2 ~]$

Note: If any listeners were explicitly created to run from the Oracle home being removed, they would need to be disabled and stopped:
srvctl disable listener -l <listener_name> -n <node_name>
srvctl stop listener -l <listener_name> -n <node_name>
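For illustration only: if a listener named, say, LISTENER_DB (a hypothetical name) had been running from the database home on rac2, it would be disabled and stopped like this:
srvctl disable listener -l LISTENER_DB -n rac2
srvctl stop listener -l LISTENER_DB -n rac2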
ii) Update Oracle Inventory – (Node Being Removed)
From node RAC2
[oracle@rac2 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac2}" -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2047 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac2 bin]$
iii) Remove instance nike2 entry from /etc/oratab
From node RAC2
+ASM2:/u01/app/11.2.0/grid:N       # line added by Agent   >> Remove every database instance entry (here, the nike2 entry) from /etc/oratab; only the ASM entry shown above should remain.
[oracle@rac2 ~]$
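One way to drop the entry non-interactively (a minimal sketch, run as root, assuming the instance line begins with "nike2:"):
[root@rac2 ~]# sed -i '/^nike2:/d' /etc/oratab      # delete the nike2 entry; the +ASM2 line remains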
iv) De-install Oracle Home (Non-shared Oracle Home)
From Node RAC2 as Oracle Home owner
[oracle@rac2 ~]$ cd $ORACLE_HOME/deinstall
[oracle@rac2 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: rac2
Checking for sufficient temp space availability on node(s) : 'rac2'

## [END] Install check configuration ##

Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2015-06-20_01-56-25-AM.log
Network Configuration check config END

Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2015-06-20_01-56-28-AM.log
Database Check Configuration END

Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2015-06-20_01-56-32-AM.log
Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check4882.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac2
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac2', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/db_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y   <<<<<
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2015-06-20_01-56-02-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2015-06-20_01-56-02-AM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2015-06-20_01-56-32-AM.log
Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2015-06-20_02-02-14-AM.log

Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2015-06-20_02-02-14-AM.log
De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean4882.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node : Done
Delete directory '/u01/app/oracle/product/11.2.0/db_1' on the local node : Done
The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/11.2.0/grid'.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END

## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2015-06-20_01-50-11AM' on node 'rac2'
## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/11.2.0/db_1' on the local node.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
[oracle@rac2 deinstall]$
Note: If this were a shared home then instead of de-installing the Oracle Database software, you would simply detach the Oracle home from the inventory.
./runInstaller -detachHome ORACLE_HOME=Oracle_home_location
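In this environment the detach would look as follows; shown for reference only, because the database home here is not shared:
cd /u01/app/oracle/product/11.2.0/db_1/oui/bin
./runInstaller -detachHome ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1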
v) Update Oracle Inventory – (All Remaining Nodes)
From Node RAC1
[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ pwd
/u01/app/oracle/product/11.2.0/db_1/oui/bin
[oracle@rac1 bin]$
[oracle@rac1 bin]$
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac1}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2047 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$
3. Remove Node from Clusterware
i) Unpin Node
As root from node RAC1
[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Pinned
[root@rac1 ~]# crsctl unpin css -n rac2
CRS-4667: Node rac2 successfully unpinned.
[root@rac1 ~]# olsnodes -s -t
rac1    Active  Pinned
rac2    Active  Unpinned   <<<<
[root@rac1 ~]#

Note: If Cluster Synchronization Services (CSS) is not running on the node you are deleting, then the crsctl unpin css command in this step fails.
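Before unpinning, you can confirm that CSS is up on the node being removed; running the following on rac2 should report that Cluster Synchronization Services is online:
[oracle@rac2 ~]$ crsctl check css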
ii) Disable Oracle Clusterware
From node RAC2, which you want to delete
As user root.
[root@rac2 ~]# cd /u01/app/11.2.0/grid/crs/install/
[root@rac2 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.2.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.2.103/192.168.2.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.2.104/192.168.2.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2613: Could not find resource 'ora.registry.acfs'.
CRS-4000: Command Stop failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'rac2'
CRS-2677: Stop of 'ora.DATA1.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 install]#
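As an optional check (not part of the original run), crsctl on rac2 should now report that it cannot contact Oracle High Availability Services, confirming the stack is down:
[root@rac2 ~]# crsctl check crs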
iii) Delete Node from Clusterware Configuration
From node RAC1
As root user
[root@rac1 ~]# crsctl delete node -n rac2
CRS-4661: Node rac2 successfully deleted.
[root@rac1 ~]#
[root@rac1 ~]# olsnodes -t -s
rac1    Active  Pinned
[root@rac1 ~]#
[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE rac1
ora.DATA1.dg
ONLINE ONLINE rac1
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ora.asm
ONLINE ONLINE rac1 Started
ora.gsd
OFFLINE OFFLINE rac1
ora.net1.network
ONLINE ONLINE rac1
ora.ons
ONLINE ONLINE rac1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.nike.db
1 ONLINE ONLINE rac1 Open
ora.nike.nike_srv.svc
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE OFFLINE
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.scan1.vip
1 ONLINE ONLINE rac1
ora.scan2.vip
1 ONLINE ONLINE rac1
ora.scan3.vip
1 ONLINE ONLINE rac1
[root@rac1 ~]#
iv) Update Oracle Inventory – (Node Being Removed) for GI Home
From node RAC2, which we want to remove
As GI home owner
[oracle@rac2 ~]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac2}" CRS=TRUE -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2047 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac2 bin]$
v) De-install Oracle Grid Infrastructure Software (Non-shared GI Home)
From node RAC2, which we want to delete
As GI Home owner
[oracle@rac2 deinstall]$ pwd
/u01/app/11.2.0/grid/deinstall
[oracle@rac2 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2015-06-20_05-14-18AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac2
Checking for sufficient temp space availability on node(s) : 'rac2'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2015-06-20_05-14-18AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac2"[rac2-vip] > [ENTER]

The following information can be collected by running "/sbin/ifconfig -a" on node "rac2"
Enter the IP netmask of Virtual IP "192.168.2.104" on node "rac2"[255.255.255.0] > [ENTER]

Enter the network interface name on which the virtual IP address "192.168.2.104" is active > [ENTER]

Enter an address or the name of the virtual IP[] > [ENTER]

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2015-06-20_05-14-18AM/logs/netdc_check2015-06-20_05-43-09-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER_1,LISTENER,LISTENER_SCAN2,LISTENER_SCAN1]:LISTENER
At least one listener from the discovered listener list [LISTENER_1,LISTENER,LISTENER_SCAN2,LISTENER_SCAN1] is missing in the specified listener list [LISTENER]. The Oracle home will be cleaned up, so all the listeners will not be available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2015-06-20_05-14-18AM/logs/asmcadc_check2015-06-20_05-44-06-AM.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac2
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac2', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2015-06-20_05-14-18AM/logs/deinstall_deconfig2015-06-20_05-34-12-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2015-06-20_05-14-18AM/logs/deinstall_deconfig2015-06-20_05-34-12-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2015-06-20_05-14-18AM/logs/asmcadc_clean2015-06-20_05-44-25-AM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2015-06-20_05-14-18AM/logs/netdc_clean2015-06-20_05-44-25-AM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
    Stopping listener on node "rac2": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
    Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac2".

/tmp/deinstall2015-06-20_05-14-18AM/perl/bin/perl -I/tmp/deinstall2015-06-20_05-14-18AM/perl/lib -I/tmp/deinstall2015-06-20_05-14-18AM/crs/install /tmp/deinstall2015-06-20_05-14-18AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2015-06-20_05-14-18AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands
Run the above command as root on the specified node(s) from a different shell
[root@rac2 ~]# /tmp/deinstall2015-06-20_05-14-18AM/perl/bin/perl -I/tmp/deinstall2015-06-20_05-14-18AM/perl/lib -I/tmp/deinstall2015-06-20_05-14-18AM/crs/install /tmp/deinstall2015-06-20_05-14-18AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2015-06-20_05-14-18AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2015-06-20_05-14-18AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly     #
# cleanup the processes started by Oracle clusterware          #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 ~]#
Once the command completes, press [ENTER] in the first shell session so the deinstall tool can continue.
Remove the directory: /tmp/deinstall2015-06-20_05-14-18AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/11.2.0/grid' on the local node : Done
Delete directory '/u01/app/oraInventory' on the local node : Done
The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END

## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2015-06-20_05-14-18AM' on node 'rac2'
## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac2' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac2' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
Note: If this were a shared home then instead of de-installing the Grid Infrastructure software, you would simply detach the Grid home from the inventory.
./runInstaller -detachHome ORACLE_HOME=Grid_home_location
[root@rac2 ~]# rm -rf /etc/oraInst.loc
[root@rac2 ~]# rm -rf /opt/ORCLfmap
[root@rac2 ~]# rm -rf /u01/app/11.2.0
[root@rac2 ~]# rm -rf /u01/app/oracle
vi) After the de-install completes, verify that the /etc/inittab file does not start Oracle Clusterware.
[root@rac2 ~]# diff /etc/inittab /etc/inittab.no_crs
[root@rac2 ~]#
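If no init.ohasd backup file is available for the diff, a simple grep works as well; no output means the Clusterware entry has been removed and nothing Oracle-related will be respawned at boot:
[root@rac2 ~]# grep -i ohasd /etc/inittab
[root@rac2 ~]#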
vii) Update Oracle Inventory – (All Remaining Nodes)
From Node RAC1.
As GI Home owner
[oracle@rac1 ~]$ cd /u01/app/11.2.0/grid/oui/bin/
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={rac1}" CRS=TRUE
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2036 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$
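To double-check the result, the central inventory on the surviving node should now list only rac1 under the Grid home (the path below assumes the default inventory location used throughout this setup):
[oracle@rac1 ~]$ grep "NODE NAME" /u01/app/oraInventory/ContentsXML/inventory.xml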
viii) Verify New Cluster Configuration
[oracle@rac1 ~]$ cluvfy stage -post nodedel -n rac2 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac1"

CRS integrity check passed
Result: Node removal check passed

Post-check for node removal was successful.
[oracle@rac1 ~]$