How to remove Grid 12.1 after an 18c upgrade
The environment started with a GRID 12.1.0.1 installation, was upgraded to 18.3.0.0, and was then patched out-of-place (OOP) to 18.6.0.0. As a result, there are three GRID homes and, ideally, we should remove 12.1.0.1 to save space because it's no longer required. This demonstration covers the last node of the cluster; however, the actions performed are the same for all nodes.

Review the existing patches for the Grid and Database homes:

[code]
[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/lspatches.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17
OPatch succeeded.
+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17
OPatch succeeded.
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)
OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$
[/code]

Notice that the GRID home is /u01/18.3.0.0/grid_2, because that was the location suggested by the OOP process.
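Since lspatches.sh is run against each node in turn, it can help to reduce its output to bare patch IDs so two nodes can be compared with diff. A minimal sketch, assuming the `NNNNNNNN;DESCRIPTION` line format shown above (the `patch_ids` helper is mine, not part of lspatches.sh):

```shell
#!/bin/bash
# Reduce "opatch lspatches" output to sorted patch numbers so the lists
# from two nodes can be compared with diff. Input lines look like:
#   29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
patch_ids() {
  cut -d';' -f1 | grep -E '^[0-9]+$' | sort
}

# Example with a slice of the Grid home output captured above:
printf '%s\n' \
  '29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)' \
  '29301631;Database Release Update : 18.6.0.0.190416 (29301631)' \
  'OPatch succeeded.' | patch_ids
```

In practice the input would come from running opatch lspatches over ssh on each node, piping each node's result through `patch_ids` before diffing.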
Based on experience, it might be better to name the GRID home after the actual version, i.e. /u01/18.6.0.0/grid.

Verify the cluster state is [NORMAL]:

[code]
[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/crs_Query.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2056778364].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2056778364] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 29301631 29301643 29302264 ] have been applied on the local node. The release patch string is [18.6.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2056778364].
+ exit
[oracle@racnode-dc1-1 ~]$
[/code]

Check the Oracle Inventory:

[code]
[oracle@racnode-dc1-2 ~]$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
### GRID home (/u01/app/12.1.0.1/grid) to be removed.
========================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
========================================================================================
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$
[/code]

Remove the GRID home (/u01/app/12.1.0.1/grid). Use the -local flag to limit the operation to the current node and avoid potential bugs:

[code]
[oracle@racnode-dc1-2 ~]$ export ORACLE_HOME=/u01/app/12.1.0.1/grid
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16040 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
[oracle@racnode-dc1-2 ~]$
[/code]

Verify the GRID home was removed from the inventory:

[code]
[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates. All rights reserved. -->
<!-- Do not modify the contents of this file by hand.
-->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>
### GRID home (/u01/app/12.1.0.1/grid) removed.
================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1" REMOVED="T"/>
================================================================================
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$
[/code]

Remove the 12.1.0.1 directory:

[code]
[oracle@racnode-dc1-2 ~]$ sudo su -
Last login: Thu May  2 23:38:22 CEST 2019
[root@racnode-dc1-2 ~]# cd /u01/app/
[root@racnode-dc1-2 app]# ll
total 12
drwxr-xr-x  3 root   oinstall 4096 Apr 17 23:36 12.1.0.1
drwxrwxr-x 12 oracle oinstall 4096 Apr 30 18:05 oracle
drwxrwx---  5 oracle oinstall 4096 May  2 23:54 oraInventory
[root@racnode-dc1-2 app]# rm -rf 12.1.0.1/
[root@racnode-dc1-2 app]#
[/code]

Check the cluster:

[code]
[root@racnode-dc1-2 app]# logout
[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@racnode-dc1-2 ~]$
[/code]

In conclusion, don't forget to remove obsolete installations once they are no longer required, to save space. Later, /u01/18.3.0.0/grid will be removed as well, provided there are no issues with the most recent patch.
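Before deleting any further home directories, it is worth confirming the inventory no longer lists them as active; detached homes stay in inventory.xml flagged with REMOVED="T". A rough sketch of such a check, using plain text tools rather than an XML parser (the `active_homes` helper and the trimmed sample file are mine, for illustration):

```shell
#!/bin/bash
# List Oracle homes still registered as active in a central inventory.
# Detached homes remain in the file but carry REMOVED="T".
active_homes() {   # $1 = path to inventory.xml
  grep '<HOME NAME' "$1" | grep -v 'REMOVED="T"' \
    | sed -n 's/.*LOC="\([^"]*\)".*/\1/p'
}

# Trimmed sample of the inventory.xml shown above; in real use pass
# /u01/app/oraInventory/ContentsXML/inventory.xml instead.
sample=$(mktemp)
cat > "$sample" <<'EOF'
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1" REMOVED="T"/>
EOF
active_homes "$sample"
rm -f "$sample"
```

Only homes that do not appear in this list should be candidates for an `rm -rf` of the directory.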