A grand tour of Oracle Exadata, Part 1

Jul 6, 2010 / By Marc Fielding


Pythian has full-featured Oracle Exadata Services complete with successful implementations and reference customers.

When Oracle first introduced Exadata at OpenWorld 2008, it was aimed squarely at the data warehouse market dominated by Teradata, Netezza, and other pure-play vendors. Version 2, introduced a year later, expanded the scope to include general-purpose mixed and even pure transaction-processing workloads. Marketing claims abound, with reports of 10x or greater performance improvements.

In this series of articles (part 2 here and part 3 here), we’ll explore the major components of Exadata and the Oracle Database Machine and take a peek at how they’re designed with performance and scalability in mind.

Going against the industry trend of embedding database-specific logic in hardware, Exadata makes use of commodity off-the-shelf hardware components, with an underlying open source operating system stack. While such a common hardware architecture arguably makes it easier for competitors to copy functionality, it also gives Exadata a well-understood, stable, and tested platform that’s constantly evolving toward higher speeds and capacities.

Database nodes

The database nodes in an Oracle Database Machine will be familiar to anyone who has worked with Oracle RAC on a Linux/x86 platform. They consist of exactly the same Sun Fire x4170 1U servers sold for general-purpose computing, but come maxed out in terms of configuration:

  • 2 quad-core Intel Xeon E5540 2.53GHz processors, with Intel X58 chipset
  • 18 4GB 800MHz ECC memory modules, for a total of 72GB of RAM (why only 800MHz? It’s a restriction of the Xeon memory controller when all memory sockets are filled)
  • LSI MegaRAID 9261-8i controller with 512MB battery-backed cache
  • 4 Hitachi 146GB 10k RPM SAS disks, configured in a 3-disk RAID-5 array with one hot spare
  • 4-port Intel 82575EB-based gigabit Ethernet interface
  • 2-port Mellanox MT26428 QDR InfiniBand card, with ports bonded at the operating system level
  • Sun lights-out management

A full rack configuration includes eight database servers.
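A quick back-of-the-envelope check on those numbers (a sketch based only on the specs listed above; the per-node usable-disk figure assumes one disk’s worth of capacity lost to RAID-5 parity):

```python
# Back-of-the-envelope figures for the database tier, using the
# specs from the bullet list above (marketing units: 1G = 1GB).

dimms_per_node, dimm_size_gb = 18, 4
ram_per_node_gb = dimms_per_node * dimm_size_gb      # 72GB per node
nodes_per_rack = 8
rack_ram_gb = nodes_per_rack * ram_per_node_gb       # 576GB per full rack

# 4 disks: 3 in RAID-5 (one disk's capacity lost to parity) plus a hot spare.
disk_gb = 146
usable_local_gb = (3 - 1) * disk_gb                  # 292GB usable per node

print(ram_per_node_gb, rack_ram_gb, usable_local_gb)  # 72 576 292
```

Not a lot of local disk per node, which hints at where the real storage lives: the storage cells described below.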

InfiniBand switches

The InfiniBand switch is Sun’s largest, the Sun Datacenter InfiniBand Switch 36. It supports full wire-speed QDR 40Gbit/s on each port.

A full rack configuration includes three switches: a pair to provide a fully-redundant fabric, and a third for interconnection in multi-cabinet deployments. Deployments as large as 8 full racks in the same InfiniBand fabric are supported.
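That 40Gbit/s is the raw signaling rate; QDR InfiniBand uses 8b/10b line encoding, so the usable data rate per port is lower. A rough calculation (the encoding overhead is standard InfiniBand behavior, not something stated in the post):

```python
# QDR InfiniBand: 4 lanes x 10Gbit/s signaling = 40Gbit/s raw.
# 8b/10b line encoding carries 8 data bits per 10 signal bits,
# so usable bandwidth is 80% of the raw rate.
raw_gbit = 40
data_gbit = raw_gbit * 8 / 10        # 32 Gbit/s of payload per direction
data_gbyte = data_gbit / 8           # 4 GB/s per port, per direction
print(data_gbit, data_gbyte)         # 32.0 4.0
```

Still far more interconnect bandwidth per node than the gigabit Ethernet interconnects typical of RAC clusters of the era.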

Management switch

A single Cisco Catalyst 4948 48-port gigabit Ethernet switch handles management traffic. With all the management ports in a full rack configuration, almost every port on the switch ends up being used, so don’t count on this to provide extra switch ports. The management switch is the only piece of Exadata hardware that Oracle permits customers to customize, allowing them to plug management ports into their existing management switches if they so choose.

Storage cells

The storage cells are where much of the Exadata magic happens, though with the exception of the flash cache cards you couldn’t tell by looking at the hardware. Again it’s very much commodity hardware, this time the Sun Fire x4275, the 2U bigger brother of the x4170s running the database. The configuration is similar to the database nodes:

  • 2 quad-core Intel Xeon E5540 2.53GHz processors, with Intel X58 chipset
  • 6 4GB 1066MHz ECC memory modules, for a total of 24GB of RAM
  • LSI MegaRAID 9261-8i controller with 512MB battery-backed cache (though without hardware RAID configured here)
  • Either 12 Seagate ST3600057SS 600GB SAS drives or 12 2TB SATA drives, depending on the configuration purchased
  • 4-port Intel 82575EB-based gigabit Ethernet interface
  • 2-port Mellanox MT26428 QDR InfiniBand card, with ports bonded at the operating system level
  • Sun lights-out management
  • 4 Sun Flash Accelerator F20 PCIe cards with 96GB of usable space each, for a total of 384GB of flash cache
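
Multiplying those per-cell figures across a full rack (which, per the environmental section below, contains 14 storage cells) gives a sense of the raw capacity on offer. This is a sketch of raw numbers only, before ASM mirroring reduces usable space:

```python
# Raw per-rack storage capacity, before any ASM redundancy.
cells = 14
disks_per_cell, sas_disk_gb = 12, 600
raw_disk_tb = cells * disks_per_cell * sas_disk_gb / 1000  # 100.8TB raw (SAS config)

flash_per_cell_gb = 4 * 96                                 # 384GB of flash per cell
rack_flash_gb = cells * flash_per_cell_gb                  # 5376GB of flash cache
print(raw_disk_tb, rack_flash_gb)                          # 100.8 5376
```

With ASM normal redundancy (two-way mirroring), usable disk capacity is roughly half the raw figure.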

KVM and rack

One 32-port Avocent KVM switch with associated keyboard/mouse drawer provides console access to database servers and storage cells. The switch is IP-enabled, meaning remote console access is available either via the individual system ILOM ports or via the KVM switch.

All the components are housed in a 42U Sun 1242E rack with integrated zero-U power distribution units.

Environmental requirements

Considering that a full rack contains 8 database servers, 14 storage cells, and associated infrastructure, it’s both a big power consumer and also very heavy. Users in older data centers in particular should make sure their facilities can handle the power/cooling and weight load:

  • Power: 13.2kW max, 9.6kW typical
  • Cooling: 44,800 BTU/hour max, 32,800 BTU/hour typical
  • Airflow: 1680 CFM max, 950 CFM typical (approximate)
  • Weight: 2176 lbs / 986 kg
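
The cooling figures follow directly from the power draw, since every watt consumed ends up as heat to be removed. A quick sanity check using the standard conversion of roughly 3412 BTU/hour per kW:

```python
# Cooling load follows from electrical power:
# 1 kW dissipated = ~3412 BTU/hour of heat to remove.
BTU_PER_KW = 3412
max_btu = 13.2 * BTU_PER_KW       # ~45,000 BTU/hr vs the quoted 44,800 max
typical_btu = 9.6 * BTU_PER_KW    # ~32,800 BTU/hr, matching the quoted typical figure
print(round(max_btu), round(typical_btu))
```

The quoted maximum is slightly below the straight conversion, presumably because the components never all draw peak power simultaneously.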

In my next post, we’ll look at the software components.

Learn more about Pythian’s services for evaluation, migration to and operational support for Oracle Exadata.
