Silent installation of a RAC 12c database
Nowadays, everyone is installing Oracle Database 12c in different variants, posting screenshots, and discussing the best GUI tools. However, there is a lack of documentation regarding the silent installation of Grid Infrastructure (GI) and RDBMS—installing without graphical interfaces using only the command line.
Inspired by the "RAC Attack" at OOW13, I decided to install a two-node RAC cluster on my laptop using VirtualBox to document the process.
Preparing the operating system environment
I began by creating a Virtual Machine (named s1) running Oracle Enterprise Linux (OEL) 6.4 with 2GB of RAM and a 30GB disk. I configured two network interfaces for public and private traffic.
Network and package prerequisites
After a basic server installation, I mounted the OEL 6.4 ISO to install the necessary prerequisite packages:
mount /dev/dvd /media
cd /media/Packages
rpm -Uvh compat-libcap1-1.10-1.x86_64.rpm compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm \
libstdc++-devel-4.4.7-3.el6.x86_64.rpm gcc-4.4.7-3.el6.x86_64.rpm glibc-devel-2.12-1.107.el6.x86_64.rpm \
glibc-headers-2.12-1.107.el6.x86_64.rpm cpp-4.4.7-3.el6.x86_64.rpm cloog-ppl-0.15.7-1.2.el6.x86_64.rpm \
kernel-headers-2.6.32-358.el6.x86_64.rpm ppl-0.10.2-11.el6.x86_64.rpm mpfr-2.4.1-6.el6.x86_64.rpm \
libaio-devel-0.3.107-10.el6.x86_64.rpm ksh-20100621-19.el6.x86_64.rpm gcc-c++-4.4.7-3.el6.x86_64.rpm
I configured the /etc/hosts file for a two-node cluster, using a single IP for SCAN since a full DNS service wasn't required for this test:
127.0.0.1 localhost.localdomain localhost
192.168.56.11 s1.home s1
192.168.56.12 s2.home s2
192.168.56.15 s-scan.home s-scan
172.16.100.11 s1-priv.home s1-priv
172.16.100.12 s2-priv.home s2-priv
192.168.56.13 s1-vip.home s1-vip
192.168.56.14 s2-vip.home s2-vip
I then created the Oracle user and groups, disabled the firewall, and disabled SELinux.
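These user, group, firewall, and SELinux steps might look like the following, run as root (the group layout and numeric IDs are assumptions for this lab, not values from the original setup):

```shell
# Create the inventory and OSDBA groups, then the oracle user (example IDs)
groupadd -g 54321 oinstall
groupadd -g 54322 dba
useradd -u 54321 -g oinstall -G dba oracle
passwd oracle

# Stop the firewall and keep it off across reboots (EL6 uses iptables)
service iptables stop
chkconfig iptables off

# Disable SELinux permanently; takes effect after the next reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```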
Persistent ASM storage with UDEV
I created two shared disks (2GB and 5GB) for ASM. To ensure persistent naming across reboots, I configured udev rules:
echo "options=-g" > /etc/scsi_id.config
i=1
cmd="/sbin/scsi_id -g -u -d"
for disk in sdb sdc ; do
cat <<EOF >> /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="$cmd /dev/\$parent", \
RESULT=="`$cmd /dev/$disk`", NAME="asm-disk$i", OWNER="oracle", GROUP="dba", MODE="0660"
EOF
i=$(($i+1))
done
After partitioning and reloading the rules, I cloned the VM to create the second node (s2), updated its hostname and IP addresses, and removed persistent network rules to prevent interface conflicts.
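A sketch of the partitioning and rule reload, assuming the two shared disks appear as /dev/sdb and /dev/sdc:

```shell
# Create a single primary partition spanning each shared disk
for disk in sdb sdc ; do
  echo -e "n\np\n1\n\n\nw" | fdisk /dev/$disk
done

# Reload and re-trigger udev so the asm-disk* names appear
# (on OEL 6 you can also simply run start_udev)
udevadm control --reload-rules
udevadm trigger

# Confirm naming, ownership, and permissions
ls -l /dev/asm-disk*
```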
Clustering and verification
With both nodes running, I moved the GI and RDBMS software to the first node and configured passwordless SSH.
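One common way to set up the SSH user equivalence, run as the oracle user on s1 (the empty passphrase is an assumption suited to a lab, not a recommendation):

```shell
# Generate a key pair without a passphrase and push it to both nodes
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id oracle@s1
ssh-copy-id oracle@s2

# Verify that both hostnames connect without prompting
ssh s1 date
ssh s2 date
```

The GI media also ships an sshUserSetup.sh helper under the sshsetup directory that can automate this across nodes.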
Pre-installation checks
I used the Cluster Verification Utility (cluvfy) to identify any missing kernel parameters:
cd /home/oracle/install/grid
./runcluvfy.sh stage -pre crsinst -fixupnoexec -n s1,s2
This generated a fixup script at /tmp/CVU_12.1.0.1.0_oracle/runfixup.sh. I executed this script as root on both nodes to apply necessary kernel changes automatically.
Silent Grid Infrastructure installation
I created a response file (/tmp/gi.rsp) with the specific parameters for a standard cluster using local ASM storage.
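The original post does not reproduce /tmp/gi.rsp, so here is a minimal sketch of the kind of parameters it would contain for a 12.1 standard cluster with local ASM storage (the cluster name, diskgroup name, and passwords are placeholders, not values from the original setup):

```
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.1.0
oracle.install.option=CRS_CONFIG
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/12.1.0/grid
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSASM=dba
oracle.install.crs.config.ClusterType=STANDARD
oracle.install.crs.config.clusterName=s-cluster
oracle.install.crs.config.gpnp.scanName=s-scan
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.clusterNodes=s1:s1-vip,s2:s2-vip
oracle.install.crs.config.networkInterfaceList=eth0:192.168.56.0:1,eth1:172.16.100.0:2
oracle.install.crs.config.storageOption=LOCAL_ASM_STORAGE
oracle.install.asm.diskGroup.name=GRID
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.disks=/dev/asm-disk1
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm-disk*
oracle.install.asm.SYSASMPassword=**password**
oracle.install.asm.monitorPassword=**password**
```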
Executing the GI installer
I ran the installer in silent mode, ignoring prerequisite warnings regarding the Management Repository (which I chose not to configure for this laptop test):
./runInstaller -silent -responseFile /tmp/gi.rsp -showProgress -ignorePrereq
Once the installation reached 100%, I executed the following scripts as root on both nodes:
/u01/app/oraInventory/orainstRoot.sh
/u01/app/12.1.0/grid/root.sh
Note: To see the output of root.sh on the screen during a silent install, you can set OUI_SILENT=false in the /u01/app/12.1.0/grid/install/utl/rootmacro.sh file.
Finalizing configuration
To complete the configuration, I ran the configToolAllCommands script using a small response file containing the ASM passwords:
/u01/app/12.1.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/tmp/conf.rsp
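The conf.rsp file is not shown in the original; in 12.1 it would contain little more than the ASM assistant passwords, along these lines (values are placeholders):

```
oracle.assistants.asm|S_ASMPASSWORD=**password**
oracle.assistants.asm|S_ASMMONITORPASSWORD=**password**
```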
I verified the GI status:
srvctl status asm
ASM is running on s2, s1
RDBMS silent installation
With GI running, I prepared the additional ASM diskgroup for the database data using ASMCMD:
ASMCMD> mkdg '<dg name="data" redundancy="external"><dsk string = "/dev/asm-disk2" /> <a name="compatible.asm" value="12.1" /> </dg>'
Installing the database software
Finally, I created a response file (/tmp/db.rsp) for the RDBMS software-only installation. I mapped the administrative groups (DBA, BACKUPDBA, DGDBA, KMDBA) to the dba group.
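The original /tmp/db.rsp is not reproduced; a minimal software-only sketch for 12.1 with the group mapping described above might look like this (paths and edition are assumptions):

```
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v12.1.0
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
oracle.install.db.InstallEdition=EE
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=dba
oracle.install.db.BACKUPDBA_GROUP=dba
oracle.install.db.DGDBA_GROUP=dba
oracle.install.db.KMDBA_GROUP=dba
oracle.install.db.CLUSTER_NODES=s1,s2
oracle.install.db.isRACOneInstall=false
DECLINE_SECURITY_UPDATES=true
```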
cd /home/oracle/install/database
./runInstaller -silent -responseFile /tmp/db.rsp -showProgress -ignorePrereq
After the software installation, I ran the root.sh script on both nodes. The installation was successful, completing the setup of the 12c RAC cluster environment entirely through the command line.