Oracle’s Secret New Feature: Educated Guesses
Sep 22, 2008 / By Christo Kutrovsky
Larry Ellison is announcing a major new feature this Wednesday at Open World. For the first time in a while, his keynote is dedicated to the “database” as opposed to the usual high level ERP/Apps/Fusion. Even the title of his keynote is catchy — “Extreme Performance”.
Oracle has been keeping the new feature a secret. Even the 11gR2 beta program had very few participants, to prevent information from leaking out. The message is essentially: “Something’s coming, but I am not telling you what.”
Okay, it worked on me: I’m excited about it. Let’s think about what it could be. What single database feature is so major that Larry himself will announce it during OpenWorld?
What do we know so far?
- Starting with the obvious, Larry’s keynote is “Extreme Performance”, so it’s related to performance.
- We know Kevin Closson has worked on it – he had a blog entry saying “I am working on something big” that got pulled off the web.
Given these two points, let’s further think about it. What do we know about Kevin?
- He worked for PolyServe — a company whose main product is a cluster file system.
- He worked for Sequent on NUMA systems, which in today’s world is pretty close to cluster software with a very fast, low latency interconnect.
- He is an expert in storage systems and disk performance.
- He joined Oracle recently, possibly to work on this secret project.
- He must be really excited about it to have posted anything on his blog during the radio silence.
I think it’s something related to storage, something new and revolutionary about storage. But what?
We already know, from leaks on certain websites, that ASM will become a cluster filesystem which will allow storing OCR files, as well as user files, on the ASM disks.
But is this big enough? It’s definitely significant: you now get a “free”, reliable cluster file system with Oracle. I don’t think it’s big enough, though. Oracle already had OCFS and OCFS2, so releasing a filesystem is nothing new. And even if ASM becomes a true filesystem, that would not provide a performance boost significant enough to warrant a keynote called “Extreme Performance”. An ASM filesystem would be a major manageability feature, not so much a performance feature.
That being ruled out, what could it be?
Recently, while setting up a new 11g database on a server with 128 GB of RAM, I was configuring hugepages as usual and thinking about how big my cache would be. It struck me that the cache would be bigger than the database for quite a while. Why do we even need the SAN/datafiles?!
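As an aside, the hugepage arithmetic behind that setup is simple. Here is a quick sketch — the 100 GB SGA is a hypothetical figure for a 128 GB box, and on x86 Linux a hugepage is normally 2 MB:

```python
# Back-of-envelope hugepages sizing for a large SGA (hypothetical numbers).
HUGEPAGE_MB = 2                   # standard x86 Linux hugepage size

sga_gb = 100                      # hypothetical SGA on a 128 GB server
sga_mb = sga_gb * 1024

# This is the value you would set for vm.nr_hugepages in sysctl.
nr_hugepages = sga_mb // HUGEPAGE_MB
print(f"vm.nr_hugepages = {nr_hugepages}")   # vm.nr_hugepages = 51200
```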
Then it hit me.
We don’t! We don’t need them at all!
What are the main storage components of a production database?
- the redo logs — to guarantee crash recovery
- the datafiles — primary storage
- the backups — mandatory for a production system
- the SGA — why is this part of storage? Well, because you can’t have a database without some fast in-memory storage, right?
If you have sufficient SGA (RAM) to load your entire database (datafiles), why do you need the datafiles?
I am sure you are immediately thinking: what if the database crashes?
Remember what Oracle’s recent push has been: grid computing.
Picture a RAC database: 8 nodes with 128 GB of RAM each, totaling 1 TB of storage. Add 2- or 3-way mirroring and you still get 350 to 500 GB of highly redundant, extremely fast storage. A true, native “in-memory” cluster database. A true “shared nothing” cluster database.
Even if you do not consider the performance increase, the redundancy level goes up. You no longer have a “central” SAN to rely on. Maybe you have two mirrored SANs in your enterprise to protect you against such failures. How about none?
Let’s keep moving with that idea. How can Oracle achieve it? What technologies would be needed?
I think Oracle already has all the required technologies to achieve this “extreme performance”. It’s just a matter of connecting them.
And the answer is Cache Fusion. But how? Imagine this scenario. During database startup you would “restore” your database from your backups (compressed or not) directly into memory. Remember that’s 8 nodes that are doing the uncompression/reading. So starting up won’t really take much time.
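A quick back-of-envelope calculation, with entirely hypothetical throughput numbers, shows why a parallel restore across all the nodes would be quick:

```python
# Rough startup-restore estimate: all nodes read and decompress the backup
# in parallel. All figures below are hypothetical.
db_size_gb = 300            # database to load into the cluster's RAM
nodes = 8
mb_per_sec_per_node = 200   # assumed per-node restore throughput

total_mb = db_size_gb * 1024
seconds = total_mb / (nodes * mb_per_sec_per_node)
print(f"~{seconds / 60:.1f} minutes")   # ~3.2 minutes
```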
Once the database is up, cache fusion will take care of the rest: sending blocks over the interconnect, keeping past images, keeping and managing multiple copies. Oracle already does this, just not for redundancy reasons. Look at my Cluster Efficiency query.
If a node (or two) goes down, who cares? All the data is already replicated 2- or 3-way. In the event all nodes go down, Oracle would still keep the online redo logs for archival purposes. Or maybe not? Replicated in-memory redo? Why not?
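To make the redundancy argument concrete, here is a toy model of k-way in-memory block replication. This is purely illustrative Python, not how Cache Fusion is actually implemented: each block is written into the RAM of k distinct nodes, so losing a node loses no data.

```python
# Toy model of k-way in-memory block replication (illustrative only).
from itertools import cycle

class MemoryGrid:
    def __init__(self, nodes, copies):
        self.copies = copies
        self.caches = {n: {} for n in nodes}   # node -> {block_id: data}
        self._placement = cycle(nodes)         # round-robin placement

    def write(self, block_id, data):
        # Place `copies` replicas on distinct nodes.
        targets = set()
        while len(targets) < self.copies:
            targets.add(next(self._placement))
        for node in targets:
            self.caches[node][block_id] = data

    def fail(self, node):
        del self.caches[node]                  # the node's RAM is gone

    def read(self, block_id):
        for cache in self.caches.values():
            if block_id in cache:
                return cache[block_id]
        raise KeyError(block_id)

grid = MemoryGrid(["n1", "n2", "n3", "n4"], copies=2)
grid.write("blk-1", b"data")
grid.fail("n1")                 # lose a whole node
print(grid.read("blk-1"))       # still readable from a surviving replica
```

Losing a node here is the analogue of losing a disk under ASM mirroring — the “disks” are simply node RAM.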
In fact, the only real changes are:
- backup will be restored into memory
- no dbwriter — no datafiles to write to
- cache fusion block replication for redundancy
The result? “Extreme Performance.” Now that’s definitely worthy of a keynote by Larry himself.
A major innovation indeed. For Oracle, at least. MySQL Cluster databases are already all in memory. In fact, that is the only way MySQL Cluster can work, and the community sees it as a limitation precisely because it is the only way.
Oracle doesn’t need to make the feature exclusive for the entire database. This may be a tablespace level feature, or even a table/partition level one. Then you would really be in control of which areas of your database get “extreme performance”. Think of the possibilities.
We were brainstorming with Paul Vallee on what the new feature could be. Paul’s idea was slightly different than mine. He envisioned ASM to be the driving technology behind an all-in-memory database. ASM already has 2- and 3-way mirroring. The change would be minor — instead of creating disks out of LUNs, they would be created out of RAM. ASM would take care of the inter-node replication.
If Oracle had an all-in-memory database built on ASM, you would still have to “read” the data into the buffer cache, introducing double buffering. That would actually be a step back. In the PC world, Windows NT/2000 revolutionized caching compared to DOS/Windows 95: merging the file system cache with executable memory was a significant step forward precisely because it avoided double buffering. It would also limit the granularity of what can be “all in memory”.
This is how Paul’s idea looks:
We have our bets. What’s yours? Please throw in some wild guesses. The winner (the earliest correct guess) gets a Pythian Maestro shell shipped to him or her. (NOTE: I was going to write “does not apply to Oracle employees”, but I decided to give them a chance too. As long as you don’t know and are just guessing, you can try.)
Here’s Darrin Leboeuf, Pythian’s V.P. Client Services, modeling the Pythian Maestro shell.