I got in today and missed the general session because I did not get here in time and still had to register. I started the morning with an Oracle session on RAC. It was a two-part session, and I did not have the patience to wait out the first half: it was on best practices, and I was looking for something more technical.
I needed to change the undo_retention parameter in a 10gR2 database. Unfortunately, I had forgotten whether the parameter was in seconds or minutes, and I wanted to see what the default was. So I opened the docs, went into the Reference guide, and got the info I needed from the top few lines: it is specified in seconds. I wanted to set undo_retention to some large value in order to use flashback, so I decided to check whether there were any notes about setting it too large. I started reading the notes. I’ve read them in the past, so I didn’t expect anything new. Then I read the following: “For fixed-size undo tablespaces, the system automatically tunes for the maximum possible undo.” What? Fixed-size undo vs. autoextend undo? That’s new!
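For reference, here is a quick sketch of how to check and change the parameter. This assumes SELECT privileges on the v$ views, and the one-day value is just an example, not a recommendation:

```sql
-- Check the current value and whether it is still the default
-- (undo_retention is expressed in seconds; the default is 900):
SELECT value, isdefault
FROM   v$parameter
WHERE  name = 'undo_retention';

-- Raise it to, say, one day of retention for flashback purposes:
ALTER SYSTEM SET undo_retention = 86400;
```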
I tripped across this story about some new 750 GB disks at 7200 RPM soon to be released by Seagate. This filled me with a sense of dread about having to, once again, go through the process of convincing purchasing managers at various customer sites that actually, no, they cannot just buy three of these and RAID-5 them together into a huge storage area for their terabyte database. So now, tell me, what happens when you use very big disks for high-performance applications? You have way, way too many square feet to service with far, far too few loading docks (and usually only one access road!).
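Back-of-the-envelope arithmetic makes the point. The figures below are assumptions for illustration (roughly 100 random IOPS per 7200 RPM spindle, RAID-5 write penalty ignored), not vendor specs:

```python
# A 7200 RPM spindle delivers on the order of 100 random IOPS no matter
# how big it is, so capacity grows while I/O capability stands still.
IOPS_PER_SPINDLE = 100  # assumed figure for a 7200 RPM drive

def raid5_usable_gb(disk_gb, n_disks):
    """Usable capacity of an n-disk RAID-5 set (one disk's worth of parity)."""
    return disk_gb * (n_disks - 1)

def iops_per_gb(disk_gb, n_disks):
    """Random-read IOPS available per usable gigabyte."""
    return n_disks * IOPS_PER_SPINDLE / raid5_usable_gb(disk_gb, n_disks)

# Three 750 GB disks vs. eleven 146 GB disks, roughly the same usable space:
big = iops_per_gb(750, 3)     # 300 IOPS spread over 1500 GB
small = iops_per_gb(146, 11)  # 1100 IOPS spread over 1460 GB
print(f"3 x 750 GB: {big:.2f} IOPS/GB, 11 x 146 GB: {small:.2f} IOPS/GB")
```

Same floor space, more than three times the loading docks with the smaller disks.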
I read an article about combining multiple CPUs and presenting them to the OS as a single CPU. That way, all applications that are currently designed to run only on a single CPU would be able to benefit from multiple CPUs. The problem is, I don’t see how this can be achieved. CPUs already have multiple pipelines, out-of-order execution, and so on. Some instructions simply have to be executed in sequence and cannot be parallelized. If this technology is implemented in a way similar to hard disk RAID, then I wonder how that would affect application behavior.
Hearing this week that Larry Ellison is thinking of acquiring and releasing his own version of Linux, and seeing the ensuing media buzz, reminded me of a different, but no simpler, time: November 9, 1998. In fact, to say it reminded me is an understatement; I have a severe case of déjà vu!
Have you ever wondered how fast you can commit? No matter what you do, a true commit takes time. The time will be very small, but it will still be more than an in-memory query of a lookup table. And worst of all, the entire instance will serialize on it. So I ran a test of how fast you can commit with one session. This is the absolute maximum you can achieve in commit performance.
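The measurement itself is simple to sketch. The original test was presumably against Oracle; the snippet below only illustrates the approach using SQLite (a stand-in, not the original setup), since a true commit must wait for the log write to reach disk either way:

```python
# Single-session commit-rate test: insert one row, commit, repeat,
# and see how many durable commits per second one session can drive.
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "commit_test.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA synchronous = FULL")  # make each commit a durable write
conn.execute("CREATE TABLE t (n INTEGER)")

n_commits = 200
start = time.perf_counter()
for i in range(n_commits):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
    conn.commit()  # each commit waits for the write to hit disk
elapsed = time.perf_counter() - start

rate = n_commits / elapsed
print(f"{rate:.0f} commits/second from a single session")
conn.close()
```

Whatever number this prints on your hardware, it is an upper bound: add more work per transaction, or contention from other sessions, and the effective commit rate only goes down.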
Here’s a quick way to diagnose where the database is spending its time for a specific process or operation. It is very similar to what statspack does, except that it is targeted at a specific process, and it also includes the time the database spent waiting on the application.
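The basic idea can be sketched with the session-level wait views. This assumes access to the v$ views, with :sid standing in for the target session’s SID; snapshot before the operation, run it, snapshot again, and diff the two:

```sql
-- Per-session equivalent of what statspack does instance-wide:
SELECT event, total_waits, time_waited
FROM   v$session_event
WHERE  sid = :sid
ORDER  BY time_waited DESC;

-- In the diff, "SQL*Net message from client" is the time the database
-- spent waiting on the application rather than doing work itself.
```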
I am happy to present the very first post to my new group blog, where I will be inviting Pythian DBAs with interesting thoughts to contribute posts of their own. My subject for this first posting: DBD::Oracle. Pythian is a big user of Perl and DBD::Oracle internally: our problem-tracking groupware, Support Track, and our availability-monitoring software, avail, are all written using the DBI to communicate with the underlying database.
Sean Hull at the Oracle Open Source blog interviewed me on the subject of Pythian taking on the stewardship of DBD::Oracle, and even has a podcast of the interview available on his site.
Pythian News is now using WordPress to transform our Industry News section into a group blog of articles relevant to the DBA community at large. We hope this will allow readers from throughout the DBA industry to get involved: not only to read our posts, but also to contribute to the discussions.
Computer Associates has sold its Ingres database to a private equity firm, Garnett & Helfrich Capital. The new owners of the database have reaffirmed their commitment to making Ingres open source, and are planning to earn revenue from support contracts and other related services.