Oracle Silent Mode, Part 7: Installing an 11.1 RAC Database

Posted in: Technical Track

This seventh post digs into the silent installation commands for an 11.1 RAC. For the complete series agenda so far, see below:

  1. Installation of 10.2 And 11.1 Databases
  2. Patches of 10.2 And 11.1 databases
  3. Cloning Software and databases
  4. Install a 10.2 RAC Database
  5. Add a Node to a 10.2 RAC database
  6. Remove a Node from a 10.2 RAC database
  7. Install a 11.1 RAC Database (this post!)
  8. Add a Node to a 11.1 RAC database
  9. Remove a Node from a 11.1 RAC database
  10. A ton of other stuff you should know

As with the Installation of a 10.2 RAC Database, this post shows how to (1) install the 11.1 clusterware, (2) install the 11.1 database software, and (3) create a RAC database. It doesn’t explore any Patch Set upgrade, since none is available yet. Another interesting question, however, is how to upgrade the 10.2 clusterware to 11.1, since that has to be done in place.

So let’s get into it.

Checking the prerequisites

As with 10.2, the best way to start is definitely to check, double-check, and triple-check that all the prerequisites are met. You can refer to the 10g post to find out more about how to use RDA and the CVU for this purpose. Also check the installation documentation for your platform and Metalink Note 169706.1.
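
As a concrete example, the CVU ships with the clusterware distribution as runcluvfy.sh; a pre-install check for the four nodes used throughout this series could look like the sketch below. The guard is only there so the sketch degrades gracefully when run outside the staging directory.

```shell
# Build the CVU pre-install check for all four cluster nodes.
NODES=rac-server1,rac-server2,rac-server3,rac-server4
CVU_CMD="./runcluvfy.sh stage -pre crsinst -n $NODES -verbose"

# Run it if we are in the clusterware staging directory; otherwise
# just show what would be run.
if [ -x ./runcluvfy.sh ]; then
  $CVU_CMD
else
  echo "would run: $CVU_CMD"
fi
```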

Install Oracle 11.1 Clusterware

Once you’ve made sure all the prerequisites are met, you can install or upgrade the 11.1 clusterware.

Install the 11.1 Clusterware from scratch

The steps and commands to install the 11.1 clusterware are exactly the same as the ones to install the 10.2 clusterware. The only difference is due to a typo in the crs.rsp response file that comes with the distribution: the FROM_LOCATION parameter doesn’t point to the correct location. To work around this issue, just add the parameter to the runInstaller command line. Below is the syntax matching that of the 10.2 post; refer to it for more detail about the meaning of the parameters.

cd clusterware
export DISTRIB=`pwd`

./runInstaller -silent                                           \
  -responseFile $DISTRIB/response/crs.rsp                        \
  FROM_LOCATION=$DISTRIB/stage/products.xml                      \
  ORACLE_HOME="/u01/app/crs"                                     \
  ORACLE_HOME_NAME="OraCrsHome"                                  \
  s_clustername="rac-cluster"                                    \
"bond2:"}                                              \
  n_storageTypeOCR=1                                             \
  s_ocrpartitionlocation="/dev/sdb1"                             \
  s_ocrMirrorLocation="/dev/sdc1"                                \
  n_storageTypeVDSK=1                                            \
  s_votingdisklocation="/dev/sdb2"                               \
  s_OcrVdskMirror1RetVal="/dev/sdc2"
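
To see the typo for yourself before overriding it, you can check which FROM_LOCATION the shipped crs.rsp actually contains. The sketch below assumes you run it from the clusterware staging directory and simply reports when the file isn’t there:

```shell
# Show the FROM_LOCATION value shipped in the crs.rsp response file.
RSP=response/crs.rsp
if [ -f "$RSP" ]; then
  FROM_LOC=$(grep '^FROM_LOCATION' "$RSP")
else
  FROM_LOC="no $RSP found - run this from the clusterware directory"
fi
echo "$FROM_LOC"
```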

Once the clusterware is installed, you only have to connect as root on each of the servers and run the scripts below:

rac-server1# /u01/app/oraInventory/
rac-server2# /u01/app/oraInventory/
rac-server3# /u01/app/oraInventory/
rac-server4# /u01/app/oraInventory/
rac-server1# /u01/app/crs/
rac-server2# /u01/app/crs/
rac-server3# /u01/app/crs/
rac-server4# /u01/app/crs/

Note that, unlike what used to happen with 10.2, using a private network (i.e. 192.168.x.x, 10.x.x.x, or 172.[16-31].x.x) should not affect the installation.

Upgrade your 10.2 Clusterware to 11.1

While the install didn’t change between 10.2 and 11.1, the clusterware upgrade differs slightly from applying a patch set on top of the clusterware. The principle stays the same, however: (1) the new release has to be applied in place, and (2) in a rolling-upgrade fashion. The way you do it now is:

  1. Stop the clusterware and its managed resources on one or several nodes so that you can apply the new release on top of them.
  2. Apply the 11.1 release from one of the stopped nodes to all the stopped nodes.
  3. Run the rootupgrade script on all the nodes that were upgraded.
  4. Repeat (1), (2), and (3) for another set of servers, until you’ve upgraded all of them.

In the case of the RAC we installed in this series’s post on the 10.2 RAC install, one way to do the upgrade would be to upgrade rac-server1 first, and then upgrade rac-server2, rac-server3, and rac-server4 all together. Obviously, you could also upgrade them all at once or one by one, depending on your requirements. Let’s look at the syntax for the first scenario.

Important Note: The clusterware has to be at the minimum required patch level before you can upgrade it to 11.1; check the upgrade documentation for the exact release.

Step 1: Prepare rac-server1 for the upgrade

You first have to push the clusterware distribution to the server and unzip it. Once this is done, run the script as root with the clusterware ORACLE_HOME and owner as parameters. Below is an example of the associated syntax:

rac-server1$ cd clusterware
rac-server1$ export DISTRIB=`pwd`
rac-server1$ su root
rac-server1# cd $DISTRIB/upgrade
rac-server1# ./ -crshome /u01/app/crs -crsuser oracle

This step will stop all the managed resources and the clusterware.

Step 2: Install the 11.1 clusterware in rac-server1

Connect as oracle (or, if it’s not oracle, as the clusterware owner) and run a command like the one below:

$ cd clusterware
$ export DISTRIB=`pwd`
$ ./runInstaller -silent                      \
    -responseFile $DISTRIB/response/crs.rsp   \
    FROM_LOCATION=$DISTRIB/stage/products.xml \
    REMOTE_NODES={}                           \
    ORACLE_HOME=/u01/app/crs

If you’ve been reading this series, -silent, -responseFile, ORACLE_HOME, and ORACLE_HOME_NAME will be familiar to you. FROM_LOCATION is on the command line because of a typo in the crs.rsp file that comes with the Linux x86 distribution: it doesn’t point to the right location. REMOTE_NODES is the list of nodes, in addition to the local node, onto which you’ll install the 11.1 clusterware. In this case, because we’re applying the upgrade on rac-server1 only, which is the local node, the list has to be empty.

Step 3: Apply the rootupgrade script

Once you’ve applied the 11.1 release on top of the 10.2 clusterware on that first node, just run the rootupgrade script. It will complete the upgrade of that node and restart all the resources. As root:

rac-server1# cd /u01/app/crs/install
rac-server1# ./rootupgrade

Step 4: Prepare the other servers for the upgrade

We will then apply the new release from rac-server2 on the three remaining servers. To proceed, push the clusterware distribution to rac-server2 and the script to the three servers. Run that script as root on those three servers, as you did on the first one:

rac-server2# /tmp/ -crshome /u01/app/crs -crsuser oracle
rac-server3# /tmp/ -crshome /u01/app/crs -crsuser oracle
rac-server4# /tmp/ -crshome /u01/app/crs -crsuser oracle

This step will stop all the managed resources and the clusterware.

Step 5: Install the 11.1 clusterware on rac-server2, rac-server3, and rac-server4 all together

Connect as oracle (or, if it’s not oracle, as the clusterware owner) on the server you’ll use to do the install, and run a command like the one below:

rac-server2$ cd clusterware
rac-server2$ export DISTRIB=`pwd`
rac-server2$ ./runInstaller -silent           \
    -responseFile $DISTRIB/response/crs.rsp   \
    FROM_LOCATION=$DISTRIB/stage/products.xml \
    REMOTE_NODES={rac-server3,rac-server4}    \
    ORACLE_HOME=/u01/app/crs

REMOTE_NODES is used to list all the nodes onto which you’ll install the clusterware. The local node will also be installed.
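
In other words, REMOTE_NODES is the full set of nodes being installed minus the local node. A small sketch of how you could derive it for Step 5 (pure shell; node names are this series’s examples):

```shell
# Derive REMOTE_NODES: all nodes being upgraded except the local one,
# formatted as the {a,b} list runInstaller expects.
ALL="rac-server2 rac-server3 rac-server4"
LOCAL=rac-server2   # the node you launch runInstaller from

LIST=$(for n in $ALL; do
  if [ "$n" != "$LOCAL" ]; then printf '%s,' "$n"; fi
done)
REMOTE_NODES="{${LIST%,}}"
echo "REMOTE_NODES=$REMOTE_NODES"   # REMOTE_NODES={rac-server3,rac-server4}
```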

Step 6: Apply the rootupgrade script

Once you’ve applied the 11.1 release on top of the 10.2 clusterware on those nodes, just run the rootupgrade script as root on each of them:

rac-server2# /u01/app/crs/install/rootupgrade
rac-server3# /u01/app/crs/install/rootupgrade
rac-server4# /u01/app/crs/install/rootupgrade

Once all the nodes are upgraded, you should be able to see that the 11.1 release is active by running the following on any of the nodes:

/u01/app/crs/bin/crsctl query crs activeversion
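
activeversion reports the release active cluster-wide; crsctl also has a per-node softwareversion query that is handy to confirm every node received the new binaries. The dry-run sketch below only prints the per-node checks you would run (over ssh, for instance):

```shell
# Print the per-node software-version check for each cluster node.
CRS_HOME=/u01/app/crs
CHECK_CMD="$CRS_HOME/bin/crsctl query crs softwareversion"
for NODE in rac-server1 rac-server2 rac-server3 rac-server4; do
  echo "ssh $NODE $CHECK_CMD"
done
```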

Install Oracle 11.1 RAC Database Software

Install the Oracle RAC Database Base Release

Once the clusterware has been installed, installing the RAC database software is very similar to installing non-RAC database software; you just need to specify which servers you want the software installed on. The first step is downloading the software and unzipping it:

$ unzip
$ cd database
$ export DISTRIB=`pwd`

To install the database software, you don’t need to modify the response files. You only have to run a command like the one below, in the case of an Enterprise Edition:

runInstaller -silent                                   \
      -responseFile $DISTRIB/response/enterprise.rsp   \
       FROM_LOCATION=$DISTRIB/stage/products.xml       \
       ORACLE_BASE=/u01/app/oracle                     \
       ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 \
       ORACLE_HOME_NAME=ORADB111_Home1                 \
       CLUSTER_NODES={"rac-server1","rac-server2","rac-server3","rac-server4"} \
       n_configurationOption=3                         \
       s_nameForDBAGrp="dba"

Or in the case of a Standard Edition:

runInstaller -silent                                   \
      -responseFile $DISTRIB/response/standard.rsp     \
       FROM_LOCATION=$DISTRIB/stage/products.xml       \
       ORACLE_BASE=/u01/app/oracle                     \
       ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 \
       ORACLE_HOME_NAME=ORADB111_Home1                 \
       CLUSTER_NODES={"rac-server1","rac-server2"}

As you can see, only a few parameters differ from the non-RAC database installation described in Part 1 of this series:

  • CLUSTER_NODES contains the list of cluster nodes you want to install the database software on.
  • FROM_LOCATION is used when the response file doesn’t point to the location of the products.xml file.
  • s_nameForDBAGrp, s_nameForASMGrp, and s_nameForOPERGrp are used to specify non-default groups for SYSDBA, SYSASM, and SYSOPER.

Once the software is installed, you have to execute the script from the ORACLE_HOME. Connect as root on every server and run:

rac-server1# /u01/app/oracle/product/11.1.0/db_1/
rac-server2# /u01/app/oracle/product/11.1.0/db_1/
rac-server3# /u01/app/oracle/product/11.1.0/db_1/
rac-server4# /u01/app/oracle/product/11.1.0/db_1/

Install the Oracle RAC database Patch Set

The first 11.1 Patch Set is not available at the time of this writing.

Configure the Listeners

The fastest way to create and configure the listeners is to use NETCA as below:

$ export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ netca /silent \
 /responsefile $ORACLE_HOME/network/install/netca_typ.rsp \
 /nodeinfo rac-server1,rac-server2,rac-server3,rac-server4

Unlike other tools, NETCA uses the “/” character instead of “-” for its flags. With 11.1, the DISPLAY environment variable can stay empty.
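
Since /nodeinfo takes a single comma-separated list, one way to avoid typos as the cluster grows is to build the list from a whitespace-separated one. A minimal sketch (a dry run: it only prints the netca command):

```shell
# Build the comma-separated node list netca expects, then print the
# command instead of executing it.
NODES="rac-server1 rac-server2 rac-server3 rac-server4"
NODEINFO=$(echo $NODES | tr ' ' ',')

echo "netca /silent" \
     "/responsefile \$ORACLE_HOME/network/install/netca_typ.rsp" \
     "/nodeinfo $NODEINFO"
```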

Configure Automatic Storage Management

If you plan to use ASM from the newly installed ORACLE_HOME, or another one you’ve installed earlier, you can use DBCA to configure it in silent mode. The syntax is the same as the 10.2 ASM configuration syntax:

$ dbca -silent                        \
    -nodelist rac-server1,rac-server2,rac-server3,rac-server4 \
    -configureASM                     \
    -asmSysPassword change_on_install \
    -diskString "/dev/sd*"            \
    -diskList "/dev/sde,/dev/sdf"     \
    -diskGroupName DGDATA             \
    -redundancy EXTERNAL              \
    -emConfiguration NONE
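
Once DBCA returns, you can confirm ASM is up on each node with srvctl. The sketch below is a dry run that only prints the per-node checks (srvctl status asm -n &lt;node&gt; is the 10.2/11.1 syntax):

```shell
# Print the per-node ASM status checks to run after configuration.
for NODE in rac-server1 rac-server2 rac-server3 rac-server4; do
  echo "srvctl status asm -n $NODE"
done
```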

Create a RAC Database with DBCA

You can use DBCA again to create the RAC database. The syntax is the same as that explained in the 10.2 RAC post:

$ dbca -silent                             \
       -nodelist rac-server1,rac-server2,rac-server3,rac-server4 \
       -createDatabase                     \
       -templateName General_Purpose.dbc   \
       -gdbName ORCL                       \
       -sid ORCL                           \
       -SysPassword change_on_install      \
       -SystemPassword manager             \
       -emConfiguration NONE               \
       -storageType ASM                    \
         -asmSysPassword change_on_install \
         -diskGroupName DGDATA             \
       -characterSet WE8ISO8859P15         \
       -totalMemory 500

From my tests, I have found that -nodelist has to come before the -createDatabase flag. 11.1 also allows you to specify the amount of memory (-totalMemory, in megabytes) instead of a percentage.
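
Because that ordering constraint is easy to forget, a dry-run builder like the one below (hypothetical; it just prints the command) makes it easy to eyeball that -nodelist indeed precedes -createDatabase before launching DBCA:

```shell
# Assemble the dbca command with -nodelist first, then print it.
NODELIST=rac-server1,rac-server2,rac-server3,rac-server4
DBCA_CMD="dbca -silent -nodelist $NODELIST -createDatabase \
-templateName General_Purpose.dbc -gdbName ORCL -sid ORCL"
echo "$DBCA_CMD"
```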

More to come

If you’re used to installing 10.2 RAC databases in silent mode, doing it with 11.1 is very similar. The next two posts will explore the addition and removal of new cluster nodes. Expect them very soon.


Comments



I have tried the silent-mode installation of the Oracle 11g Database over existing clusterware, but it only installs on the local node I run the command from, and not on the remote node I specified in the node list. Whereas when I tried the same installation using the normal GUI, it worked well and completed on the remote nodes too.

Please advise.

Ravi KAP

Grégory Guillou
January 12, 2009 1:16 pm


If you use the CLUSTER_NODES variable, it should do the work. However, if you can run the GUI, run it with “-record -destinationFile /tmp/output.rsp” and compare its content to the default response file overloaded with the variables you’ve added as command-line parameters. You’ll easily find what differs between the GUI and the silent install.




The CLUSTER_NODES variable is set in the command, which I just copy-pasted and modified accordingly.

And I did what you suggested, running the GUI, recording the RSP file, and comparing it with the command you posted. The only difference I see between the two is the n_configurationOption parameter, which carries a 3 in your command whereas the GUI used a 1.

Question: if I am running from the lin1 server, I have enabled password-less communication with the lin2 server (executing the $SHELL and add-agent commands). Is it also required to log on to the lin2 server and do the same?





Did you get to explore Patch Set upgrades using the response file approach? I’d be interested in any problems/issues encountered.


Gwen Shapira
May 27, 2010 4:36 pm

In my attempt to install 11g database on a cluster, specifying nodes like this doesn’t work:
"rac-server3","rac-server4"} \

while this did:

rac-server3,rac-server4}" \

