After reviewing my blog post about running EBS OVM templates in VirtualBox, two of my teammates suggested that I work on something with potentially broader appeal. Their basic message was: "This is really cool for us EBS nerds, but what about the Core DBAs?" So how does "11gR2 RAC in an hour" sound? :-)
In this post, I'll demonstrate how to deploy the pre-built Oracle VM templates to create a two-node 11gR2 RAC cluster in Oracle VirtualBox.
There are already several high-quality "How to run RAC on your workstation" HOW-TO's out there, including the well-known RAC Attack (by Pythian's own Jeremy Schneider, and others) and Tim Hall's super-straightforward article on ORACLE-BASE. Does the internet really need another screenshot-heavy blog post about installing Oracle RAC? Maybe not, but I'm doing it anyway, because:
Some readers might point out that installing and configuring the software is a good way to learn how things work, and that breaking and fixing things along the way helps you learn even more. I actually agree with that sentiment in general, since I'm a "learn by doing (and occasionally failing)" kind of guy. However, Oracle is selling a line of high-end products that are supposed to take all of the hard work out of configuring RAC, so why shouldn't we have a bit of fun?
Nothing you're about to read in this post is supported by anyone. Not me, not Pythian, and certainly not Oracle. If you're thinking about using the techniques described here for any sort of production or QA deployment, please stop and question your sanity. Then call a few colleagues over to your desk and ask them to question your sanity.
Please be mindful of your licensing and support status before working with these templates. Content from Oracle's Software Delivery Cloud is subject to far more restrictive licensing than the more familiar OTN development license. So far, this is just a proof-of-concept. As Darth Vader once said: "Don't be too proud of this technological terror you've constructed." :-)
The basic steps are outlined below, with details in the sections that follow.
Complete details on DNS setup are beyond the scope of this post. Here are the IPs and hostnames that I will be using. I'm using two separate host-only networks (vboxnet0 and vboxnet1) for the public and private interfaces.
```
# RAC stuff
# Pub
192.168.56.11 thing1.local.org thing1
192.168.56.12 thing2.local.org thing2
# Priv
192.168.57.11 thing1-priv.local.org thing1-priv
192.168.57.12 thing2-priv.local.org thing2-priv
# VIP
192.168.56.21 thing1-vip.local.org thing1-vip
192.168.56.22 thing2-vip.local.org thing2-vip
# SCAN
192.168.56.31 clu-scan.local.org clu-scan
192.168.56.32 clu-scan.local.org clu-scan
192.168.56.33 clu-scan.local.org clu-scan
```
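If you'd rather not type those entries by hand, they follow a regular pattern and can be generated with a short loop. This is purely a convenience sketch using the hostnames, domain, and subnets chosen for this post; adjust them to match your own setup.

```shell
# Convenience sketch: emit the /etc/hosts entries used in this post.
# Hostnames (thingN), the local.org domain, and the 192.168.56/57
# subnets are this post's choices, not anything the templates require.
gen_rac_hosts() {
  for i in 1 2; do
    echo "192.168.56.1$i thing$i.local.org thing$i"            # public
    echo "192.168.57.1$i thing$i-priv.local.org thing$i-priv"  # private
    echo "192.168.56.2$i thing$i-vip.local.org thing$i-vip"    # VIP
  done
  for i in 1 2 3; do
    echo "192.168.56.3$i clu-scan.local.org clu-scan"          # SCAN
  done
}
gen_rac_hosts
```

Append the output to /etc/hosts on both nodes (or feed it to your DNS zone files, if you're resolving the SCAN properly).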
Connect to Oracle's Software Delivery Cloud and download the files for "Oracle RAC 11.2.0.1.4 for x86_64 (64 bit) with Oracle Linux 5.5" (V25916-01.zip and V25917-01.zip). Unzip the files and unpack the resulting .tgz archives. This will create a directory called OVM_EL5U5_X86_64_11201RAC_PVM.
Use the VBoxManage utility to convert the raw disk images (System.img and Oracle11201RAC_x86_64-xvdb.img) to .vdi files.
```
zathras:OVMRACTempl jpiwowar$ mkdir OVM_EL5U5_X86_64_11201RAC_PVM/Thing1
zathras:OVMRACTempl jpiwowar$ time VBoxManage convertfromraw \
  OVM_EL5U5_X86_64_11201RAC_PVM/System.img \
  OVM_EL5U5_X86_64_11201RAC_PVM/Thing1/RacRoot.vdi
zathras:OVMRACTempl jpiwowar$ time VBoxManage convertfromraw \
  OVM_EL5U5_X86_64_11201RAC_PVM/Oracle11201RAC_x86_64-xvdb.img \
  OVM_EL5U5_X86_64_11201RAC_PVM/Thing1/RacORCL.vdi
```
Configure the VM (Thing1) with Oracle Linux (64-bit), 3 NICs (2 Host-only, 1 NAT), 1 CPU, and 2GB RAM. Attach the VDI files to a SATA controller and the Linux ISO to the virtual DVD drive.
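If you prefer to script the VM configuration rather than click through the GUI, the steps above map onto a handful of VBoxManage calls. The sketch below is a dry run: the `vbm` wrapper just prints each command, so you can inspect the plan before removing the leading `echo` to execute it. The ISO filename is an assumption; point it at wherever your Oracle Linux 5.5 media lives.

```shell
# Dry-run sketch of the Thing1 configuration described above.
# The wrapper prints each VBoxManage command instead of running it;
# drop the "echo" to execute for real.
vbm() { echo VBoxManage "$@"; }

DIR=OVM_EL5U5_X86_64_11201RAC_PVM/Thing1
vbm createvm --name Thing1 --ostype Oracle_64 --register
vbm modifyvm Thing1 --memory 2048 --cpus 1 \
    --nic1 hostonly --hostonlyadapter1 vboxnet0 \
    --nic2 hostonly --hostonlyadapter2 vboxnet1 \
    --nic3 nat
vbm storagectl Thing1 --name SATA --add sata
vbm storageattach Thing1 --storagectl SATA --port 0 --type hdd --medium "$DIR/RacRoot.vdi"
vbm storageattach Thing1 --storagectl SATA --port 1 --type hdd --medium "$DIR/RacORCL.vdi"
# ISO path is an assumption -- substitute your Oracle Linux 5.5 install media.
vbm storageattach Thing1 --storagectl SATA --port 2 --type dvddrive --medium OracleLinux-R5-U5.iso
```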
Enter “linux rescue” at the boot prompt. Configure the network interfaces eth0 and eth2 (NAT) using DHCP. Once at the prompt, switch to the root volume:
```
# chroot /mnt/sysimage
# service sshd start
```
Update modprobe.conf to remove Xen-specific modules and adjust inittab to prevent console spawning errors.
```
[root@localhost ~]# vi /etc/modprobe.conf
alias eth0 e1000
alias scsi_hostadapter ata_piix
alias scsi_hostadapter ahci
[root@localhost ~]# perl -pi.orig -e 's/^(co)/#\1/' /etc/inittab
[root@localhost ~]# rm /etc/rc3.d/S99oraclevm-template
```
Since the OVM template uses a Xen kernel, we need a standard version from the Oracle public yum server.
```
[root@localhost ~]# cd /etc/yum.repos.d
[root@localhost yum.repos.d]# wget https://public-yum.oracle.com/public-yum-el5.repo
[root@localhost yum.repos.d]# yum install kernel-2.6.18-194.el5 kernel-devel-2.6.18-194.el5
[root@localhost yum.repos.d]# mkinitrd -v -f /boot/initrd-2.6.18-194.el5.img 2.6.18-194.el5
```
Add divider=10 to the boot parameters in grub.conf. This reduces host CPU utilization and prevents the cluster configuration scripts from bogging down.
```
[root@localhost yum.repos.d]# perl -pi.orig -e 's/(numa=off)/\1 divider=10/' /boot/grub/grub.conf
```
Now that the first node is prepared with a standard kernel and proper disk references, you're ready to clone it and begin the cluster configuration.
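One way to sketch the second node, assuming (as is typical for shared-storage RAC on VirtualBox) that each node gets its own copy of the root disk while the database disk is marked shareable and attached to both VMs: clone RacRoot.vdi, then flip RacORCL.vdi to shareable. As with the earlier sketch, the `vbm` wrapper echoes commands for review; remove the `echo` to execute. Note that VirtualBox's shareable type requires a fixed-size image, so you may need `--variant Fixed` back at the convertfromraw step.

```shell
# Dry-run sketch of preparing the second node (Thing2).
# Assumptions: per-node root disk, one shared database disk.
vbm() { echo VBoxManage "$@"; }

SRC=OVM_EL5U5_X86_64_11201RAC_PVM/Thing1
DST=OVM_EL5U5_X86_64_11201RAC_PVM/Thing2
vbm clonehd "$SRC/RacRoot.vdi" "$DST/RacRoot.vdi"
# Shareable disks must be fixed-size images in VirtualBox.
vbm modifyhd "$SRC/RacORCL.vdi" --type shareable
# ...then create Thing2 with the same NIC/CPU/RAM settings as Thing1
# and attach $DST/RacRoot.vdi plus the shared $SRC/RacORCL.vdi.
```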