After weeks of not having the time I wanted for this, it’s finally done! Today, I installed RAC on Oracle Enterprise Linux 5 (OEL5), and I can tell you that there’s nothing exceptional about the process.
The only trouble I encountered had nothing directly to do with the installation: for the device permissions to be assigned when RHEL5 or OEL5 starts up, you have to create a file in /etc/udev/rules.d. I already covered that in my last post on the subject of raw devices.
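For reference, here is roughly what mine looks like. The file name, partitions and ownerships below are from my setup, so treat this as a sketch to adapt rather than something to copy as-is:

# /etc/udev/rules.d/99-oracle.rules
# OCR devices belong to root, Voting Disks and ASM disks to oracle
KERNEL=="sdb8",      OWNER="root",   GROUP="oinstall", MODE="0640"
KERNEL=="sdb9",      OWNER="root",   GROUP="oinstall", MODE="0640"
KERNEL=="sdb1[0-2]", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sdb[57]",   OWNER="oracle", GROUP="dba",      MODE="0660"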
Besides that, it’s all quite simple once the prerequisites (essentially the same as 10g’s) are met. After an hour and a half, it was all wrapped up, at least for two nodes. I didn’t see any revolutionary differences, but there are nonetheless some points worth mentioning.
1. There’s no need to launch VIPCA manually anymore when your public network uses reserved IP addresses. That’s just as well, because in probably 80% of example setups, the networks are in the 10.x.x.x range. Oracle has also fixed the Cluster Verification Utility, which now simply returns a warning instead of an error when this happens:
Interfaces found on subnet "192.168.245.0" that are likely candidates
for a private interconnect:
sacha eth1:192.168.245.2

WARNING: Could not find a suitable set of interfaces for VIPs.
On the other hand, note that you must have a ping-able gateway on your public network, and the CVU doesn’t seem to validate it. If you don’t have one, the penalty is that your VIP won’t start.
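Since the CVU won’t warn you, a quick manual check before letting root.sh configure the VIPs doesn’t hurt. Something along these lines (pulling the gateway out of the routing table is just my habit, not an Oracle-provided check):

# find the default gateway of the public network and check that it answers
GW=$(ip route | awk '/^default/ {print $3}')
ping -c 3 "$GW"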
2. No more raw devices are needed. As I have mentioned before, the OCR and Voting Disks can live directly on block-device partitions (in fact, that’s even the recommended configuration now). The same goes for ASM disks: simply change the parameter that tells ASM where to discover disks, asm_diskstring, to something like /dev/sdb*. And with DBCA, you can even use your block devices directly, without ASM.
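As a small illustration of the ASM part (the /dev/sdb* pattern is the one from my boxes, and I’m assuming the ASM instance runs from an spfile), changing the discovery path is a single ALTER SYSTEM on the ASM instance:

-- on the ASM instance (+ASM1 here): point disk discovery at the partitions
export ORACLE_SID=+ASM1
sqlplus / as sysdba
SQL> alter system set asm_diskstring='/dev/sdb*' scope=both;
SQL> select path, header_status from v$asm_disk;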
3. There is a command for verifying the state of all the cluster’s nodes:

./crsctl check cluster
sacha   ONLINE
kilian  ONLINE
4. There is a command for taking a manual backup of the OCR, which is handy before adding a node to the cluster, for example:

./ocrconfig -manualbackup

sacha  2007/09/16 08:09:13  /u01/crs/cdata/arkz/backup_20070916_080913.ocr

./ocrconfig -showbackup manual

sacha  2007/09/16 08:09:13  /u01/crs/cdata/arkz/backup_20070916_080913.ocr
5. OPROCD replaces hangcheck-timer. I’ve written about this before, and I’ve now verified it:

ps -ef | grep oproc
root  5580  4625  0 07:38 ?  00:00:00 /bin/sh /etc/init.d/init.cssd oprocd
root  6142  5580  0 07:38 ?  00:00:00 /u01/crs/bin/oprocd run -t 1000 -m 500

lsmod | grep hangcheck
(nothing: the hangcheck-timer module is no longer loaded)
6. ASMCMD now has a cp command. This makes it possible to do things such as the working example below.
. oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle

asmcmd
ASMCMD> cd DATA/ORCL/CONTROLFILE
ASMCMD> pwd
+DATA/ORCL/CONTROLFILE
ASMCMD> cp Current.260.633428899 +DATA/ORCL/CONTROLFILE/gark.ctl
source +DATA/ORCL/CONTROLFILE/Current.260.633428899
target +DATA/ORCL/CONTROLFILE/Gark.ctl
copying file(s)...
file, +DATA/orcl/controlfile/gark.ctl, copy committed.
ASMCMD> ls
Current.260.633428899
gark.ctl
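Even more interesting, cp can also copy between a DiskGroup and the regular file system, so you can pull a file out of ASM without going through RMAN. A minimal sketch (the /tmp target name is just for the example):

ASMCMD> cp +DATA/ORCL/CONTROLFILE/Current.260.633428899 /tmp/current.ctl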
7. The new lsdsk command allows you to see the disks associated with ASM. For example, with the -d option, you can see which disks belong to a DiskGroup:
. oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle

asmcmd
ASMCMD> lsdsk -k -d DATA
Total_MB  Free_MB  OS_MB  Name       Failgroup  Path
    1529      864   1529  DATA_0000  DATA_0000  /dev/sdb5
    2337     1306   2337  DATA_0001  DATA_0001  /dev/sdb7
A few remarks:
- It’s possible to launch ASMCMD even when ASM is stopped. The message ASMCMD-08103: failed to connect to ASM; ASMCMD running in non-connected mode indicates that you will be working directly on the disk headers.
- For reasons I imagine are linked to the fact that, in my case, the asm_diskstring parameter does not have its default value, lsdsk seems not to work in non-connected mode.
- ASMCMD’s new commands look very interesting, particularly remap; see the sketch right after these remarks.
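From what I understand of the documentation, remap rereads a range of physical blocks on a disk and restores any unreadable ones from their mirror copies. I haven’t tried it, but the syntax should be along these lines (the DiskGroup, disk name and block range here are made up for the illustration):

ASMCMD> remap DATA DATA_0001 5000-5999

Okay, enough ASM stuff for now.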
8. You can add and remove both OCR mirrors and Voting Disks in a running cluster. Let’s start with the OCR, for which this already seemed to work. As the root user, you can add and remove an OCR mirror while the clusterware is running, as follows:
cd /u01/crs/bin
./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     248724
         Used space (kbytes)      :       2252
         Available space (kbytes) :     246472
         ID                       :  433771587
         Device/File Name         : /dev/sdb8
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded

./ocrconfig -replace ocrmirror /dev/sdb9
./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     248724
         Used space (kbytes)      :       2252
         Available space (kbytes) :     246472
         ID                       :  433771587
         Device/File Name         : /dev/sdb8
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/sdb9
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded

./ocrconfig -replace ocrmirror
./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     248724
         Used space (kbytes)      :       2252
         Available space (kbytes) :     246472
         ID                       :  433771587
         Device/File Name         : /dev/sdb8
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded
You can do the same thing with the Voting Disks while the clusterware is running:
dd if=/dev/sdb10 of=/u01/crs/cdata/arkz/voting.copy bs=4k
62244+0 records in
62244+0 records out
254951424 bytes (255 MB) copied, 10.0618 seconds, 24.2 MB/s

./crsctl query css votedisk
 0.     0    /dev/sdb10
Located 1 voting disk(s).

./crsctl add css votedisk /dev/sdb11
Successful addition of voting disk /dev/sdb11.
./crsctl add css votedisk /dev/sdb12
Successful addition of voting disk /dev/sdb12.
./crsctl query css votedisk
 0.     0    /dev/sdb10
 1.     0    /dev/sdb11
 2.     0    /dev/sdb12
Located 3 voting disk(s).

# At first glance, it looks like it is not possible to remove Voting Disk 1:
./crsctl delete css votedisk /dev/sdb10
Failure 8 with Cluster Synchronization Services while deleting voting disk.
# Too bad! But we can easily remove the other Voting Disks, and logically
# a problem with Voting Disk 1 alone would not affect the cluster.
# Too bad again: I can't test that with my configuration.
./crsctl delete css votedisk /dev/sdb11
Successful deletion of voting disk /dev/sdb11.
./crsctl delete css votedisk /dev/sdb12
Successful deletion of voting disk /dev/sdb12.
./crsctl query css votedisk
 0.     0    /dev/sdb10
Located 1 voting disk(s).
9. You can kill sessions cluster-wide; the @<inst_id> suffix in the kill command designates the instance the session is running on.
# Session 1
sqlplus / as sysdba
grant select on gv_$session to scott;

# Session 2
sqlplus scott/tiger
select sid, serial#, inst_id
  from gv$session
 where audsid=sys_context('USERENV','SESSIONID');

       SID    SERIAL#    INST_ID
---------- ---------- ----------
        67        610          1

# Session 1
alter system kill session '67, 610, @1' immediate;

# Session 2
/
select sid, serial#, inst_id
*
ERROR at line 1:
ORA-03135: connection lost contact
Process ID: 29700
Session ID: 67 Serial number: 610
10. You can launch an AWR report for the whole RAC instead of instance by instance. Use spawrrac.sql as follows:
@?/rdbms/admin/spawrrac

Instances in this AWR schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                      Instance
   DB Id    DB Name      Count
----------- ------------ --------
 1161209635 ORCL                2

Enter value for dbid: 1161209635
Using 1161209635 for database Id

Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.

Listing the last 31 days of Completed Snapshots
                                                          Snap
Instance     DB Name   Snap Id  End Interval Time  Level  Count
------------ --------- -------- ------------------ ------ -----
ORCL                         1  16 Sep 2007 10:00       1      2
                             2  16 Sep 2007 11:00       1      2
                             3  16 Sep 2007 12:00       1      2

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1
Begin Snapshot Id specified: 1

Enter value for end_snap: 2
End   Snapshot Id specified: 2

Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is spawrrac_1_2. To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name:
[...]
End of Report ( spawrrac_1_2.lst )