Blog | Pythian

Troubleshooting a failure when upgrading to Oracle 18c

Written by Michael Dinh | Apr 18, 2019 4:00:00 AM

Upgrading Oracle Grid Infrastructure is often a smooth ride until that final configuration step hits a wall. When gridSetup.sh -executeConfigTools fails with a generic warning, it can feel like looking for a needle in a haystack of log files.


Initializing the 18c Upgrade and Applying Release Updates

When I was upgrading Grid to 18c, the final step was to run gridSetup.sh -executeConfigTools, which failed. Unfortunately, the error provided was not very descriptive or useful. In this post, I will demonstrate how I went about troubleshooting the failure.

To start the 18c upgrade, run the script gridSetup_applyRU.sh. (Please note: I have scripted the process.)

[oracle@racnode-dc1-1 ~]$ /media/patch/gridSetup_applyRU.sh
...
Successfully Setup Software with warning(s).

Encountering the INS-43080 Configuration Failure

The last step, gridSetup.sh -executeConfigTools, resulted in WARNINGS.

[oracle@racnode-dc1-1 ~]$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.

Navigating the Oracle Inventory Logs

To find the root cause, we need to dive into the specific log directory for this session: /u01/app/oraInventory/logs/GridSetupActions2019-04-15_01-02-06AM

[oracle@racnode-dc1-1 ~]$ cd /u01/app/oraInventory/logs/GridSetupActions2019-04-15_01-02-06AM
[oracle@racnode-dc1-1 GridSetupActions2019-04-15_01-02-06AM]$ ls -alrt
...
-rw-r----- 1 oracle oinstall  2171 Apr 15 01:03 time2019-04-15_01-02-06AM.log
-rw-r----- 1 oracle oinstall 72336 Apr 15 01:03 gridSetupActions2019-04-15_01-02-06AM.log
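Each run of gridSetup.sh creates a new timestamped GridSetupActions directory, so after a failure you generally want the most recent one. A minimal sketch of a helper to find it (the function name is mine, and the default inventory path is an assumption based on this environment):

```shell
# Sketch: print the newest GridSetupActions* directory under the given
# inventory logs path, e.g. /u01/app/oraInventory/logs.
latest_gridsetup_logs() {
  # ls -1dt sorts matching directories newest-first; head -1 keeps the latest.
  ls -1dt "$1"/GridSetupActions* 2>/dev/null | head -1
}

# Usage (inventory location from this upgrade):
# latest_gridsetup_logs /u01/app/oraInventory/logs
```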

Step 1: Identifying the Failed Step via Time Logs

Checking time2019-04-15_01-02-06AM.log was useful to identify exactly which step failed and when.

# Starting step: EXECUTE of state:setup
# Upgrading RHP Repository in progress. # 0 # 1555283000839
--------------------------------------------------------------------------------
# Upgrading RHP Repository failed. # 5904 # 1555283006743
--------------------------------------------------------------------------------
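Rather than reading the whole time log, you can scan it for steps that reported failure. A sketch, assuming the "# message # elapsed # timestamp" layout shown in the excerpt above (the function name is mine):

```shell
# Sketch: list every step in a time*.log that reported failure.
failed_steps() {
  # A case-insensitive match is used in case the wording varies between steps.
  grep -i 'failed' "$1"
}

# Usage:
# failed_steps time2019-04-15_01-02-06AM.log
```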

Step 2: Extracting Detail from the Grid Setup Action Log

Next, check gridSetupActions2019-04-15_01-02-06AM.log to find the specific command that failed and its exit status.

INFO: [Apr 15, 2019 1:03:20 AM] Executing RHPUPGRADE
INFO: [Apr 15, 2019 1:03:20 AM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0 ...
INFO: [Apr 15, 2019 1:03:26 AM] Upgrading RHP Repository failed.
...
--------------------------------------------------------------------------------
INFO: [Apr 15, 2019 1:03:26 AM] Exit Status is -1
--------------------------------------------------------------------------------
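The action log is large (72K in this session), so it helps to filter it down to the executed command, the failure message, and the exit status. A sketch, with grep patterns based on the log entries shown above (the function name is an assumption):

```shell
# Sketch: extract the executed command, failure message, and exit status
# from a gridSetupActions*.log.
failure_detail() {
  grep -E 'Executing|Command|failed|Exit Status' "$1"
}

# Usage:
# failure_detail gridSetupActions2019-04-15_01-02-06AM.log
```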

Summary: A Streamlined Troubleshooting Approach

In conclusion, following these two steps may help simplify troubleshooting for gridSetup.sh -executeConfigTools failures, as the information provided by the upgrade process is often vague:

  1. Check the time*.log: This quickly identifies which specific step in the sequence failed.
  2. Check the gridSetupActions*.log: This provides the exact command execution details and the exit status.
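The two steps above can be combined into one small triage helper run against a session's log directory. This is a sketch; the function name is mine, and the grep patterns are assumptions based on this session's log format:

```shell
# Sketch: summarize a GridSetupActions log directory in one pass.
triage_gridsetup() {
  dir=$1
  echo "== Failed steps (time*.log) =="
  grep -i 'failed' "$dir"/time*.log
  echo "== Command and exit status (gridSetupActions*.log) =="
  grep -E 'Executing|Command|failed|Exit Status' "$dir"/gridSetupActions*.log
}

# Usage:
# triage_gridsetup /u01/app/oraInventory/logs/GridSetupActions2019-04-15_01-02-06AM
```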
