A grand tour of Oracle Exadata, Part 1

Posted in: Technical Track

Pythian has full-featured Oracle Exadata Services complete with successful implementations and reference customers.

When Oracle first introduced Exadata at OpenWorld 2008, it was aimed squarely at the data warehouse market dominated by Teradata, Netezza, and other pure-play vendors. Version 2, introduced a year later, expanded the scope to include general-purpose mixed and even pure transaction processing workloads. Marketing claims abound, with reports of 10x and greater speed improvements.

In this series of articles (part 2 here and part 3 here), we’ll explore the major components of Exadata and the Oracle Database Machine and take a peek at how they’re designed with performance and scalability in mind.

Going against the industry trend of embedding database-specific logic in hardware, Exadata makes use of commodity off-the-shelf hardware components, with an underlying open source operating system stack. While arguably such a common hardware architecture makes it easier for competitors to copy functionality, it also gives Exadata a well-understood, stable, and tested platform that's constantly evolving toward higher speeds and capacities.

Database nodes

The database nodes in an Oracle Database Machine will be familiar to anyone who has worked with Oracle RAC on a Linux/x86 platform. They consist of exactly the same Sun Fire x4170 1U servers sold for general-purpose computing, but come maxed out in terms of configuration:

  • 2 quad-core Intel Xeon E5540 2.53GHz processors, with Intel X58 chipset
  • 18 4GB 800MHz ECC memory modules, for a total of 72GB of RAM (why only 800MHz? It's a restriction of the Xeon's integrated memory controller when all memory sockets are filled: three DIMMs per channel drop the speed to 800MHz)
  • LSI MegaRAID 9261-8i controller with 512MB battery-backed cache
  • 4 Hitachi 146GB 10k RPM SAS disks, configured in a 3-disk RAID-5 array with one hot spare
  • 4-port Intel 82575EB-based gigabit Ethernet interface
  • 2-port Mellanox MT26428 QDR Infiniband card, with ports bonded at the operating system level
  • Sun lights-out management

A full rack configuration includes eight database servers.
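The memory-speed restriction mentioned above (and clarified in the comments below) can be sketched as a simple lookup: the Nehalem-EP Xeon's on-die memory controller steps the DDR3 clock down as more DIMMs are populated per channel. The speed table and channel counts here follow Intel's published population rules for this CPU family; the function itself is purely illustrative, not anything Oracle ships.

```python
# Illustrative sketch of the Xeon 5500-series memory derating: the
# integrated memory controller drops the DDR3 clock as more DIMMs
# are installed per channel (1 -> 1333, 2 -> 1066, 3 -> 800).
DDR3_SPEED_BY_DIMMS_PER_CHANNEL = {1: 1333, 2: 1066, 3: 800}

def memory_speed_mhz(total_dimms: int, cpus: int = 2, channels_per_cpu: int = 3) -> int:
    """Return the effective DDR3 speed for a given DIMM population."""
    # Ceiling division: the most heavily populated channel sets the speed.
    per_channel = -(-total_dimms // (cpus * channels_per_cpu))
    return DDR3_SPEED_BY_DIMMS_PER_CHANNEL[per_channel]

# The database nodes carry 18 DIMMs across 2 CPUs x 3 channels,
# i.e. 3 DIMMs per channel -- hence 800MHz.
print(memory_speed_mhz(18))  # 800
```

A 12-DIMM population (2 per channel) would run at 1066MHz under the same rules, which is the trade-off Oracle made in favor of maximum capacity.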

Infiniband switches

The Infiniband switch is Sun’s largest, the Datacenter Infiniband Switch 36. It supports full wire-speed QDR 40Gbit/s on each port.

A full rack configuration includes three switches: a pair to provide a fully-redundant fabric, and a third for interconnection in multi-cabinet deployments. Deployments as large as 8 full racks in the same Infiniband fabric are supported.

Management switch

A single Cisco Catalyst 4948 48-port gigabit Ethernet switch handles management traffic. With all the management ports in a full rack configuration, almost every port on the switch ends up being used, so don't count on it to provide extra switch ports. The management switch is the only piece of Exadata hardware that Oracle permits customers to customize, allowing them to connect management ports to their existing management switches if they so choose.

Storage cells

The storage cells are where much of the Exadata magic happens, though with the exception of the flash cache cards you couldn't tell by looking at the hardware. Again it's very much commodity hardware, this time the Sun Fire x4275, the 2U bigger brother of the x4170s running the database. The configuration is similar to the database nodes:

  • 2 quad-core Intel Xeon E5540 2.53GHz processors, with Intel X58 chipset
  • 6 4GB 1066MHz ECC memory modules, for a total of 24GB of RAM
  • LSI MegaRAID 9261-8i controller with 512MB battery-backed cache (though without hardware RAID configured here)
  • Either 12 Seagate ST3600057SS 600GB SAS drives, or 12 2TB SATA drives, depending on the configuration purchased
  • 4-port Intel 82575EB-based gigabit Ethernet interface
  • 2-port Mellanox MT26428 QDR Infiniband card, with ports bonded at the operating system level
  • Sun lights-out management
  • 4 Sun Flash Accelerator F20 PCIe cards with 96GB of usable space each, for a total of 384GB of flash cache
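The flash numbers above add up quickly at rack scale. A quick back-of-the-envelope calculation, using the per-cell figures from the list and the 14-cells-per-rack count from the environmental section below:

```python
# Flash-cache arithmetic for a full rack, from the figures above:
# 4 flash cards of 96G usable space per storage cell, 14 cells per rack.
cards_per_cell = 4
usable_gb_per_card = 96
cells_per_rack = 14

flash_per_cell_gb = cards_per_cell * usable_gb_per_card
flash_per_rack_gb = flash_per_cell_gb * cells_per_rack

print(flash_per_cell_gb)  # 384
print(flash_per_rack_gb)  # 5376 -- roughly 5.3TB of flash per full rack
```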

KVM and rack

One 32-port Avocent KVM switch with associated keyboard/mouse drawer provides console access to database servers and storage cells. The switch is IP-enabled, meaning remote console access is available either via the individual system ILOM ports or the KVM switch.

All the components are housed in a 42U Sun 1242E rack with integrated zero-U power distribution units.

Environmental requirements

Considering that a full rack contains 8 database servers, 14 storage cells, and associated infrastructure, it's both a big power consumer and very heavy. Users in older data centers in particular should make sure their facilities can handle the power, cooling, and weight requirements:

  • Power: 13.2kW max, 9.6kW typical
  • Cooling: 44,800 BTU/hour max, 32,800 BTU/hour typical
  • Airflow: 1680 CFM max, 950 CFM typical (approximate)
  • Weight: 2176 lbs / 986 kg
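The cooling figures track the power figures almost exactly, since essentially all the power a rack draws is dissipated as heat. A rough sanity check using the standard kW-to-BTU/hour conversion (the small gap against the quoted specs reflects vendor rounding):

```python
# Sanity-checking the cooling spec against the power spec:
# 1 kW of sustained draw dissipates about 3412 BTU/hour of heat.
BTU_PER_HOUR_PER_KW = 3412.14

def cooling_btu_per_hour(power_kw: float) -> float:
    """Approximate heat load for a given sustained power draw."""
    return power_kw * BTU_PER_HOUR_PER_KW

print(round(cooling_btu_per_hour(13.2)))  # 45040, close to the 44,800 max spec
print(round(cooling_btu_per_hour(9.6)))   # 32757, close to the 32,800 typical spec
```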

In my next post, we’ll look at the software components.

Learn more about Pythian’s services for evaluation, migration to and operational support for Oracle Exadata.

Interested in working with Marc? Schedule a tech call.

About the Author

Marc is a passionate and creative problem solver, drawing on deep understanding of the full enterprise application stack to identify the root cause of problems and to deploy sustainable solutions. Marc has a strong background in performance tuning and high availability, developing many of the tools and processes used to monitor and manage critical production databases at Pythian. He is proud to be the very first DataStax Platinum Certified Administrator for Apache Cassandra.

7 Comments

Alex Fatkulin
July 7, 2010 3:21 pm

“why only 800Mhz? I believe it’s a chipset limitation when all memory slots are filled”

It’s a CPU thing (since memory controller is located on the CPU uncore, not the chipset) which depends on how many modules you have per channel. More than 2 modules per channel will put you at 800 if you follow Intel’s spec.


Marc Fielding
July 8, 2010 10:40 pm

Very interesting. I’ve updated the post. Thanks Alex!

Log Buffer #194, A Carnival of The Vanities for DBAs | The Pythian Blog
July 11, 2010 11:11 pm

[…] Queuing, and how it applies to data warehousing workloads on Exadata. Marc Fielding begins his grand tour of Exadata, writing to share his implementation […]

Kevin Closson
July 16, 2010 12:53 pm

“why only 800Mhz? I believe it’s a chipset limitation when all memory slots are filled”

With Nehalem EP you can only have 1 DIMM per channel at 1333, 2 clocks down to 1066 and 3 down to 800. Westmere EP supports 2 DIMMs at 1333 as well as low-voltage DIMMs.


Log Buffer #197, A Carnival of the Vanities for DBAs | The Pythian Blog
July 30, 2010 8:43 pm

[…] Fielding posts Part 2 of his series A Grand Tour of Oracle Exadata. He’s also hosting a webinar on how to successfully implement Exadata, Aug. […]

Designing for Exadata: Maximizing Storage Indexes Use | The Pythian Blog
August 13, 2010 3:04 pm

[…] Exadata V2 is a very well balanced database machine combined with smart and innovative software. One of these innovative features is the Storage Index […]

Exadata « Oracle Scratchpad
September 14, 2010 2:00 pm

[…] Part 1 – mostly hardware […]

