[oracle@myclusterdb01]$ . oraenv <<< +ASM1
[oracle@myclusterdb01]$ $ORACLE_HOME/OPatch/opatch lsinventory -all_nodes
[root@myclusterdb01 ~]# cd /patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103
[root@myclusterdb01 24448103]# /u01/app/12.1.0.2/grid/OPatch/opatchauto apply -oh /u01/app/12.1.0.2/grid
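Note that you can dry-run the apply beforehand with opatchauto's -analyze option, which checks conflicts and prerequisites without touching the home. A minimal sketch, same patch and home as above (assuming your OPatch version supports -analyze):

# Dry-run: check conflicts and prerequisites without modifying the home
cd /patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103
/u01/app/12.1.0.2/grid/OPatch/opatchauto apply -analyze -oh /u01/app/12.1.0.2/grid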
OPatch will most likely finish with some warnings:

[Jun 5, 2016 5:50:47 PM] --------------------------------------------------------------------------------
[Jun 5, 2016 5:50:47 PM] The following warnings have occurred during OPatch execution:
[Jun 5, 2016 5:50:47 PM] 1) OUI-67303: Patches [ 20831113 20299018 19872484 ] will be rolled back.
[Jun 5, 2016 5:50:47 PM] --------------------------------------------------------------------------------
[Jun 5, 2016 5:50:47 PM] OUI-67008:OPatch Session completed with warnings.

Checking the logfiles, you will find that this is probably due to superset patches:
Patch : 23006522 Bug Superset of 20831113

If you check the patch number, you will find that 20831113 is an older patch (Patch 20831113: OCW PATCH SET UPDATE 12.1.0.2.4), so these warnings can safely be ignored: OPatch rolls back the old patches after applying the new ones.
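If you want to double-check which patches were rolled back and why, a grep against the OPatch logfiles is enough. A minimal sketch, assuming the default log location under the Grid Home (adjust to your OPatch version and home):

# Look for the superset messages and the rollback warnings in the OPatch logs
GRID_HOME=/u01/app/12.1.0.2/grid
grep -i "superset"  ${GRID_HOME}/cfgtoollogs/opatch/*.log 2>/dev/null | tail
grep -i "OUI-67303" ${GRID_HOME}/cfgtoollogs/opatch/*.log 2>/dev/null | tail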
- Check the inventory on all the nodes once the patch is applied

[oracle@myclusterdb01]$ . oraenv <<< +ASM1
[oracle@myclusterdb01]$ $ORACLE_HOME/OPatch/opatch lsinventory -all_nodes
Have a look at the checksum report at the end of the output; it should be the same on each node (a way to check this programmatically is sketched after the example below):
Binary & Checksum Information
==============================
 Binary Location : /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle

 Node           Size       Checksum
 ----           ----       --------
 myclusterdb01  327642940  BD0547018B032A7D2FCB8209CC4F1E6C8B63E0FBFD8963AE18D50CDA7455602D
 myclusterdb02  327642940  BD0547018B032A7D2FCB8209CC4F1E6C8B63E0FBFD8963AE18D50CDA7455602D
 myclusterdb03  327642940  BD0547018B032A7D2FCB8209CC4F1E6C8B63E0FBFD8963AE18D50CDA7455602D
 myclusterdb04  327642940  BD0547018B032A7D2FCB8209CC4F1E6C8B63E0FBFD8963AE18D50CDA7455602D
--------------------------------------------------------------------------------
OPatch succeeded.
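A quick way to verify that all nodes report the same checksum is to extract the checksum column and count distinct values, expecting exactly 1. A minimal sketch (the awk pattern assumes the node naming used in this post):

# Count distinct checksums in the lsinventory output -- we want exactly 1
$ORACLE_HOME/OPatch/opatch lsinventory -all_nodes | \
  awk '/^[ ]*myclusterdb/ {print $NF}' | sort -u | wc -l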
[root@myclusterdb01]# cd /u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch
[root@myclusterdb01]# nohup ./opatchauto apply /patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103/24340679 -oh /u01/app/oracle/product/12.1.0.2/dbhome_1 &
[root@myclusterdb01]# nohup ./opatchauto apply /patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103/24846605 -oh /u01/app/oracle/product/12.1.0.2/dbhome_1 &
[oracle@myclusterdb01]$ . oraenv <<< A_DATABASE_WITH_THE_CORRECT_ORACLE_HOME
[oracle@myclusterdb01]$ srvctl stop home -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -s /tmp/12c.statefile_n1 -n myclusterdb01
[oracle@myclusterdb01]$ srvctl stop home -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -s /tmp/12c.statefile_n2 -n myclusterdb02
[oracle@myclusterdb01]$ srvctl stop home -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -s /tmp/12c.statefile_n3 -n myclusterdb03
[oracle@myclusterdb01]$ srvctl stop home -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -s /tmp/12c.statefile_n4 -n myclusterdb04
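The same four stop commands can be scripted; a minimal sketch, assuming the node naming scheme of this cluster (the start home loop after patching is identical, with stop replaced by start):

# Stop everything running out of the home on each node, keeping one statefile
# per node so that srvctl start home can later restore the exact same resources
DB_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
for i in 1 2 3 4; do
  srvctl stop home -o ${DB_HOME} -s /tmp/12c.statefile_n${i} -n myclusterdb0${i}
done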
- Apply the patch from myclusterdb01 (any node can be used though)

[oracle@myclusterdb01]$ cd /patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018OJVMPSU/24315824
[oracle@myclusterdb01]$ $ORACLE_HOME/OPatch/opatch apply
To be sure that the patch has successfully been applied on all the nodes, run the command below and keep its output:
$ORACLE_HOME/OPatch/opatch lsinventory -all_nodes

Have a look at the checksum report at the end of the output; it should be the same on each node (an example of this output is shown in paragraph 3.5.1).
Post-install steps have to be performed for both the OJVM and the database OH patches; this has to be done for each database (but on one node only).
. oraenv <<< A_DATABASE
sqlplus / as sysdba
startup nomount    -- all instances should be down at this point; only start one, and don't use srvctl here
alter system set cluster_database=false scope=spfile;
shut immediate
startup upgrade
exit
cd /u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch
./datapatch -verbose

-- It can happen that datapatch has issues with some patches; we then had to run:
./datapatch -apply 22674709 -force -bundle_series PSU -verbose
./datapatch -apply 22806133 -force -bundle_series PSU -verbose

Note: datapatch -force is recommended by Oracle Support when ./datapatch -verbose fails (...); you can ignore the errors raised by the -force runs.
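When datapatch misbehaves, the sqlpatch logfiles are the first place to look. A minimal sketch, assuming the default log location under ORACLE_BASE (adjust if yours differs):

# List the sqlpatch log directories, most recent last
ls -ltr ${ORACLE_BASE:-/u01/app/oracle}/cfgtoollogs/sqlpatch/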
- Once datapatch is done, set cluster_database back to true and restart the database

sqlplus / as sysdba
alter system set cluster_database=true scope=spfile;
shut immediate
exit
srvctl start database -d XXX

- Verify that the patches are correctly installed

set lines 200
set pages 999
SELECT patch_id, patch_uid, version, flags, action, action_time, description, status, bundle_id, bundle_series, logfile
FROM   dba_registry_sqlpatch;
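On top of the full listing above, a quick pass/fail query can be handy; a minimal sketch ("no rows selected" is the result you want):

# Flag any SQL patch that did not complete successfully
sqlplus -s / as sysdba <<'EOF'
set lines 200 pages 999
SELECT patch_id, patch_uid, action, status, action_time
FROM   dba_registry_sqlpatch
WHERE  status <> 'SUCCESS';
EOF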
- Once the home is patched, restart the resources from the statefiles

srvctl start home -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -s /tmp/12c.statefile_n1 -n myclusterdb01
srvctl start home -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -s /tmp/12c.statefile_n2 -n myclusterdb02
srvctl start home -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -s /tmp/12c.statefile_n3 -n myclusterdb03
srvctl start home -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -s /tmp/12c.statefile_n4 -n myclusterdb04
-- Create the new home by copying the existing one
cp -r /u01/app/oracle/product/12.1.0.2/dbhome_1 /u01/app/oracle/product/12.1.0.2/dbhome_2
- On node 1
[oracle@myclusterdb01]$ cd /u01/app/oracle/product/12.1.0.2/dbhome_2/oui/bin/
[oracle@myclusterdb01 bin]$ ./runInstaller -clone -waitForCompletion ORACLE_HOME="/u01/app/oracle/product/12.1.0.2/dbhome_2" ORACLE_HOME_NAME="OraDB12Home2" "ORACLE_BASE=/u01/app/oracle" "CLUSTER_NODES={myclusterdb01,myclusterdb02,myclusterdb03,myclusterdb04}" "LOCAL_NODE=myclusterdb01" -silent -noConfig -nowait
- On node 2
[oracle@myclusterdb02]$ cd /u01/app/oracle/product/12.1.0.2/dbhome_2/oui/bin/
[oracle@myclusterdb02 bin]$ ./runInstaller -clone -waitForCompletion ORACLE_HOME="/u01/app/oracle/product/12.1.0.2/dbhome_2" ORACLE_HOME_NAME="OraDB12Home2" "ORACLE_BASE=/u01/app/oracle" "CLUSTER_NODES={myclusterdb01,myclusterdb02,myclusterdb03,myclusterdb04}" "LOCAL_NODE=myclusterdb02" -silent -noConfig -nowait
And so on, on all the other nodes.
Note that the parameters to adapt when cloning the OH are CLUSTER_NODES (the full list of nodes) and LOCAL_NODE, which is the only one that changes from one node to the next; a scripted version is sketched below.
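Since only LOCAL_NODE changes, the clone can be scripted across nodes. A hypothetical sketch, assuming passwordless SSH between the nodes as oracle and the four-node layout used here:

# Run the clone on every node, adapting LOCAL_NODE each time
NEW_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2
NODES="myclusterdb01,myclusterdb02,myclusterdb03,myclusterdb04"
for N in myclusterdb01 myclusterdb02 myclusterdb03 myclusterdb04; do
  ssh ${N} "cd ${NEW_HOME}/oui/bin && ./runInstaller -clone -waitForCompletion \
    ORACLE_HOME=${NEW_HOME} ORACLE_HOME_NAME=OraDB12Home2 \
    ORACLE_BASE=/u01/app/oracle \"CLUSTER_NODES={${NODES}}\" \
    LOCAL_NODE=${N} -silent -noConfig -nowait"
done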
-- On each node, copy the init file, the password file and any other configuration file you use to the new home (see the sketch below for SQL*Net files)
[oracle@myclusterdb01]$ cp /u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/init${ORACLE_SID}.ora /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/.
[oracle@myclusterdb01]$ cp /u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/orapw${ORACLE_SID} /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/.
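If you keep SQL*Net configuration files inside the home rather than in a shared TNS_ADMIN (an assumption, adapt to your setup), copy them over too:

# Also bring over the SQL*Net configuration if it lives in the home
OLD_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
NEW_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2
cp ${OLD_HOME}/network/admin/tnsnames.ora ${NEW_HOME}/network/admin/ 2>/dev/null
cp ${OLD_HOME}/network/admin/sqlnet.ora   ${NEW_HOME}/network/admin/ 2>/dev/null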
-- On each node, check and update your LDAP configuration if you have one
-- On each node, update /etc/oratab and/or the script you use to switch between the database environments (a sed one-liner is sketched after the entries below)
#MYDB:/u01/app/oracle/product/12.1.0.2/dbhome_1:N
MYDB:/u01/app/oracle/product/12.1.0.2/dbhome_2:N
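The same oratab change as a one-liner; a sketch, assuming MYDB is the only entry to move (back up the file and check with grep first):

# Repoint MYDB from dbhome_1 to dbhome_2 in /etc/oratab
cp /etc/oratab /etc/oratab.bak
sed -i 's|^MYDB:/u01/app/oracle/product/12.1.0.2/dbhome_1:|MYDB:/u01/app/oracle/product/12.1.0.2/dbhome_2:|' /etc/oratab
grep ^MYDB /etc/oratab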
-- On one node, modify the ORACLE_HOME in the cluster configuration
[oracle@myclusterdb01]$ srvctl modify database -d XXX -o /u01/app/oracle/product/12.1.0.2/dbhome_2            -- 11g
[oracle@myclusterdb01]$ srvctl modify database -db XXX -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_2  -- 12c
-- On one node, modify the spfile configuration in the cluster configuration (if your spfile is not stored under ASM)
[oracle@myclusterdb01]$ srvctl modify database -d XXX -p /path_to_your_shared_spfile/spfile${ORACLE_SID}.ora       -- 11g
[oracle@myclusterdb01]$ srvctl modify database -db XXX -spfile /path_to_your_shared_spfile/spfile${ORACLE_SID}.ora -- 12c
-- Bounce the database
[oracle@myclusterdb01]$ srvctl stop database -d XXX -o 'immediate'             -- 11g
[oracle@myclusterdb01]$ srvctl start database -d XXX                           -- 11g
[oracle@myclusterdb01]$ srvctl stop database -db XXX -stopoption 'immediate'   -- 12c
[oracle@myclusterdb01]$ srvctl start database -db XXX                          -- 12c
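After the bounce, a quick status check confirms that all instances are back up (12c syntax, XXX being the placeholder database name used above):

# All instances should be reported as Running
srvctl status database -db XXX -verbose
# Double-check the instances from SQL*Plus as well
sqlplus -s / as sysdba <<'EOF'
SELECT inst_id, instance_name, status FROM gv$instance;
EOF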
-- If you use OEM, you will have to manually update the new OH in the target configuration
Note that only the last step requires a downtime of 15-20 minutes (the time to bounce the database and run the post-install steps); all the previous steps can be done earlier, during a regular weekday. Another point worth adding is that you can choose which database to patch and when, which makes this way of working very flexible. If you have reached this point, you are done with your Exadata patching!
Quick links to Part 1 / Part 2 / Part 3 / Part 4 / Part 5 / Part 6