
Expand Elastic Configuration on Oracle Exadata - Part 2

In Part 1 of this series, we looked at how to reimage a vanilla Exadata compute or storage node with the required ISO image. In this post, we'll look at how to integrate new storage servers into an Exadata cluster.

Part 2: Expand the Exadata cluster

A prerequisite to expanding an Exadata cluster is running the OEDA and assigning the hostnames and IP addresses the new servers will occupy. For this blog post, I have already run the OEDA and copied the generated XML files to Exadata compute node 1, where I will run the next steps.

Step 1: Set up the network

Vanilla Exadata compute and storage nodes usually have IP addresses in the 172.16.x.x subnet assigned to the eth0 interface. Depending on how your VLANs and routing are configured, you may not be able to reach this subnet from your network. In that case, log in through the serial console on the ILOM and add a VIP on the eth0 interface using an IP in your actual eth0 subnet so you can reach the server. Then add this IP as a ListenAddress in /etc/ssh/sshd_config and restart the SSH daemon so you can log in to the host over this IP address.

[root@node12 ~]# ifconfig eth0:1 <ip-address> netmask <netmask> up
[root@node12 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a8:69:8c:11:97:58 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
       valid_lft forever preferred_lft forever
    inet brd scope global eth0:1
       valid_lft forever preferred_lft forever

[root@node12 ~]# echo "ListenAddress <ip-address>" >> /etc/ssh/sshd_config
[root@node12 ~]# systemctl restart sshd
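The sequence above can be staged safely before touching the live node. The sketch below uses an assumed example address (10.0.1.50) and a temporary file standing in for /etc/ssh/sshd_config; the commented lines show what you would actually run as root on the node:

```shell
VIP="10.0.1.50"            # assumed example address in your real eth0 subnet
CONF=$(mktemp)             # stands in for /etc/ssh/sshd_config
# Basic sanity check on the address format before staging the change
if echo "$VIP" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'; then
  echo "ListenAddress $VIP" >> "$CONF"
fi
cat "$CONF"
# On the real node (as root) you would then apply it:
#   ifconfig eth0:1 "$VIP" netmask 255.255.255.0 up
#   cat "$CONF" >> /etc/ssh/sshd_config && systemctl restart sshd
```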

The next step is to run the applyElasticConfig.sh script provided in the OEDA package to configure the hosts being added to the cluster. As a prerequisite, you must set up passwordless SSH from the host on which the script is run to the new Exadata hosts. Once done, edit the properties/es.properties file in your OEDA install directory and set the ROCEELASTICNODEIPRANGE parameter to the IP range from which IPs were assigned in the previous step.
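Updating the parameter is a one-line edit. The sketch below works against a temporary copy of the properties file with assumed example ranges; on a real system you would run the same sed against the properties file in your OEDA install directory:

```shell
PROPS=$(mktemp)   # stands in for the OEDA properties file
# Assumed default content; your file will have the shipped 172.16.x.x range
cat > "$PROPS" <<'EOF'
ROCEELASTICNODEIPRANGE=172.16.1.1:172.16.1.99
EOF
# Point the range at the temporary IPs assigned in Step 1 (example values)
sed -i 's|^ROCEELASTICNODEIPRANGE=.*|ROCEELASTICNODEIPRANGE=10.0.1.50:10.0.1.60|' "$PROPS"
grep ROCEELASTICNODEIPRANGE "$PROPS"
```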

[root@exadb01 linux-x64]# grep ROCEELASTICNODEIPRANGE properties/es.properties
[root@exadb01 linux-x64]# ./applyElasticConfig.sh -cf ./My_Company-exadb.xml
 Applying Elastic Config...
 Discovering pingable nodes in IP Range of -
 Found 3 pingable hosts..[,,]
 Validating Hostnames..
 Discovering ILOM IP Addresses..
 Getting uLocations...
 Getting Mac Addressess using eth0...
 Getting uLocations...
 Mapping Machines with local hostnames..
 Mapping Machines with uLocations..
 Checking if Marker file exists..
 Updating machines with Mac Address for 3 valid machines.
 Creating preconf..
 Writing host-specific preconf files..
 Writing host specific file /u01/onecommand/linux-x64/WorkDir/exacel04_preconf.csv for exacel04 ....
 Preconf file copied to exacel04 as /var/log/exadatatmp/firstconf/exacel04_preconf.csv
 Writing host specific file /u01/onecommand/linux-x64/WorkDir/exacel05_preconf.csv for exacel05 ....
 Preconf file copied to exacel05 as /var/log/exadatatmp/firstconf/exacel05_preconf.csv
 Writing host specific file /u01/onecommand/linux-x64/WorkDir/exacel06_preconf.csv for exacel06 ....
 Preconf file copied to exacel06 as /var/log/exadatatmp/firstconf/exacel06_preconf.csv
 Running Elastic Configuration on
 Running Elastic Configuration on
 Running Elastic Configuration on
 Completed Applying Elastic Config...
 Ending applyElasticConfig
[root@exadb01 linux-x64]#

The script will apply the hostnames and IP addresses from the OEDA XML file to the new compute/cell nodes.

The following steps detail adding new storage servers to an existing Exadata cluster.

Step 2: Run calibration

Run calibration on new cells to benchmark the performance of the disks.

[root@exadb01 ~]# dcli -g ~/new_cell_group -l root cellcli -e calibrate force
Calibration will take a few minutes...
Aggregate random read throughput across all flash disk LUNs: 56431 MBPS
Aggregate random read IOs per second (IOPS) across all flash disk LUNs: 1668595
Calibrating flash disks (read only, note that writes will be significantly slower) ...
LUN 1_0 on drive [FLASH_1_2,FLASH_1_1] random read throughput: 14,175.00 MBPS, and 694092 IOPS
LUN 2_0 on drive [FLASH_2_2,FLASH_2_1] random read throughput: 14,178.00 MBPS, and 643495 IOPS
LUN 4_0 on drive [FLASH_4_2,FLASH_4_1] random read throughput: 14,176.00 MBPS, and 638538 IOPS
LUN 5_0 on drive [FLASH_5_2,FLASH_5_1] random read throughput: 14,229.00 MBPS, and 694577 IOPS
LUN 6_0 on drive [FLASH_6_2,FLASH_6_1] random read throughput: 14,198.00 MBPS, and 687977 IOPS
LUN 7_0 on drive [FLASH_7_2,FLASH_7_1] random read throughput: 14,167.00 MBPS, and 642601 IOPS
LUN 8_0 on drive [FLASH_8_2,FLASH_8_1] random read throughput: 14,185.00 MBPS, and 648842 IOPS
LUN 9_0 on drive [FLASH_9_2,FLASH_9_1] random read throughput: 14,204.00 MBPS, and 642252 IOPS
CALIBRATE results are within an acceptable range.
Calibration has finished.
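If you want to sanity-check the calibrate output programmatically rather than eyeballing it, the per-LUN throughput can be extracted with standard text tools. The sample line below mirrors the output above, and the 10,000 MBPS floor is an assumed threshold for illustration, not an Oracle-documented limit:

```shell
# One LUN line taken from the calibrate output above
LINE='LUN 1_0 on drive [FLASH_1_2,FLASH_1_1] random read throughput: 14,175.00 MBPS, and 694092 IOPS'
# Strip everything but the MBPS figure, drop the thousands separator and decimals
MBPS=$(echo "$LINE" | sed 's/.*throughput: //; s/ MBPS.*//; s/,//g' | cut -d. -f1)
# Assumed acceptance floor of 10000 MBPS per LUN
if [ "$MBPS" -ge 10000 ]; then
  echo "LUN 1_0 OK: ${MBPS} MBPS"
else
  echo "LUN 1_0 LOW: ${MBPS} MBPS"
fi
```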

Step 3: Add new cells to the RAC cluster

Next, add the new cells to the file /etc/oracle/cell/network-config/cellip.ora on all compute nodes.

[root@exadb01 network-config]# dcli -g ~/dbs_group -l root cat /etc/oracle/cell/network-config/cellip.ora
cell=";"
cell=";"
cell=";"
cell=";"
cell=";"
cell=";"
cell=";"
cell=";"
cell=";"
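One way to build the new entries consistently is to generate them into a file and then distribute that file with dcli. The IP pairs below are assumed examples; the real pairs come from your OEDA XML:

```shell
CELLIP=$(mktemp)   # stands in for /etc/oracle/cell/network-config/cellip.ora
# Each new cell contributes one line with its two RDMA network IPs (example values)
for pair in '10.0.2.4;10.0.2.5' '10.0.2.6;10.0.2.7' '10.0.2.8;10.0.2.9'; do
  echo "cell=\"$pair\"" >> "$CELLIP"
done
cat "$CELLIP"
# On a real system, append these lines to cellip.ora on every compute node,
# e.g. by pushing the file out with:
#   dcli -g ~/dbs_group -l root -f "$CELLIP" -d /tmp/new_cells.txt
```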

Step 4: Provision disks for ASM

Next, create grid disks on the new cells. In my example, the new storage servers have the Extreme Flash configuration, which contains only flash disks.

[root@exacel04 ~]# cellcli -e create griddisk all flashdisk prefix=NEW_DATA, size=5660G
GridDisk NEW_DATA_FD_00_exacel04 successfully created
GridDisk NEW_DATA_FD_01_exacel04 successfully created
GridDisk NEW_DATA_FD_02_exacel04 successfully created
GridDisk NEW_DATA_FD_03_exacel04 successfully created
GridDisk NEW_DATA_FD_04_exacel04 successfully created
GridDisk NEW_DATA_FD_05_exacel04 successfully created
GridDisk NEW_DATA_FD_06_exacel04 successfully created
GridDisk NEW_DATA_FD_07_exacel04 successfully created
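Since the grid disk names follow a fixed pattern (prefix, disk type, disk index, cell name), you can generate the full expected list and cross-check it against `cellcli -e list griddisk` on each cell. This sketch assumes 8 flash disks per Extreme Flash cell, matching the output above:

```shell
# Build the expected grid disk names for all three new cells
EXPECTED=$(for cell in exacel04 exacel05 exacel06; do
  for i in 0 1 2 3 4 5 6 7; do
    printf 'NEW_DATA_FD_%02d_%s\n' "$i" "$cell"
  done
done)
echo "$EXPECTED"
# Compare against the live list on the cells, e.g.:
#   dcli -g ~/new_cell_group -l root "cellcli -e list griddisk attributes name"
```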

Modify the ASM disk string on the ASM instance to include the new grid disks.

SQL> alter system set asm_diskstring='o/*/DATA_*','o/*/RECO_*','/dev/exadata_quorum/*','o/*/NEW_DATA_*' sid='*';

The new disks should show up as candidate disks in the v$asm_disk view.

SQL> select path, state, mount_status, header_status from v$asm_disk where path like '%NEW_DATA%' order by 2;

PATH                                                    STATE    MOUNT_S HEADER_STATU
------------------------------------------------------- -------- ------- ------------

Step 5: Create a disk group

Create a new disk group to consume the new disks. The command below creates a high redundancy disk group (named NEW_DATA here, to match the grid disk prefix) with three fail groups, each containing the complete set of grid disks from one cell. Replace the placeholder addresses in the disk strings with the RDMA network IPs of each cell.

SQL> create diskgroup NEW_DATA high redundancy
  FAILGROUP exacel04 DISK 'o/<cell04-ip1>;<cell04-ip2>/NEW_DATA_*'
  FAILGROUP exacel05 DISK 'o/<cell05-ip1>;<cell05-ip2>/NEW_DATA_*'
  FAILGROUP exacel06 DISK 'o/<cell06-ip1>;<cell06-ip2>/NEW_DATA_*'
  ATTRIBUTE 'content.type' = 'data',
  'au_size' = '4M';

Diskgroup created.

Voila! Your newly provisioned storage is ready to be consumed by your Oracle databases. These steps will help you the next time you want to expand your Exadata cluster. See you again in another blog post!
