After I actually woke up I found myself attending a session about LogMiner by Chet Watmough. LogMiner, in my opinion, is a vastly under-used utility. It has such great versatility. Chet showed some techniques for adding additional logging. By inserting the client info into a dummy table, purely to produce a record in the redo stream, he can identify who did what.
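Here is a minimal sketch of the tagging trick as I understood it. The table and column names are my own, not from the presentation, and the mining step is shown only in outline:

```sql
-- Hypothetical dummy table; its only purpose is to land a marker row
-- in the redo stream alongside the real changes.
CREATE TABLE redo_tag (tagged_at DATE, client_info VARCHAR2(64));

-- Tag the transaction with whatever the client set via
-- DBMS_APPLICATION_INFO.SET_CLIENT_INFO.
INSERT INTO redo_tag
  SELECT SYSDATE, SYS_CONTEXT('USERENV', 'CLIENT_INFO') FROM dual;

-- ... the application's real DML goes here, in the same transaction ...
COMMIT;

-- Later, mine the logs and correlate the tag row with the real changes
-- through the shared transaction id (XID), roughly like:
-- EXEC DBMS_LOGMNR.START_LOGMNR(
--        options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
-- SELECT xid, seg_name, sql_redo FROM v$logmnr_contents
--  WHERE xid IN (SELECT xid FROM v$logmnr_contents
--                 WHERE seg_name = 'REDO_TAG');
```

Because the tag row shares a transaction with the application's DML, every mined change in that transaction can be attributed to the client info captured in the tag.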
So I made my decision and went to “RAC &amp; ASM best practices” presented by Kirk McGowan of Oracle. I wanted to see how far the documentation and common sense were from what will actually work. It wasn't that far from the docs, which is good. It means Oracle got it right.
Good luck to Pythian DBAs Christo Kutrosky and Babette Turner-Underwood, who are presenting at Collaborate ’06 today and tomorrow. These are two outstanding presenters with deep technical knowledge reinforced by day-to-day use of the technology, and I would encourage anyone to attend. If you attended the presentation, by all means post any feedback here!
I got in today and missed the General session because I did not get here in time and still had to register. I started the morning with an Oracle session on RAC. It was a two-part session and I did not have the patience to wait out the first half. It was on best practices, and I was looking for something more technical.
I needed to change the undo_retention parameter in a 10gR2 database. Unfortunately I forgot whether the parameter was in seconds or minutes, and I wanted to see what the default was. So I opened the docs, went into the Reference guide, and got the info I needed from the top few lines (it's in seconds, and the default is 900). I wanted to set undo_retention to some large value in order to use flashback, so I decided to check whether there were any notes about setting it too large. I started reading the notes. I've read them in the past, so I didn't expect anything new. Then I read the following: “For fixed-size undo tablespaces, the system automatically tunes for the maximum possible undo.” What? Fixed-size undo vs. autoextend undo? That's new!
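For reference, here is the kind of thing I was doing. The 86400 value (24 hours) is just an illustration, not a recommendation:

```sql
-- UNDO_RETENTION is specified in seconds; the default is 900 (15 minutes).
SHOW PARAMETER undo_retention

-- Raise it to 24 hours to support longer flashback queries.
ALTER SYSTEM SET undo_retention = 86400 SCOPE = BOTH;

-- With a fixed-size undo tablespace, 10gR2 auto-tunes retention up to
-- whatever the space allows; the tuned value per interval is visible in:
SELECT begin_time, tuned_undoretention
FROM   v$undostat
ORDER  BY begin_time;
```

Note that with a fixed-size undo tablespace the tuned value can exceed the parameter you set, which is exactly the behavior the note describes.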
I tripped across this story about some new 750G disks at 7200 RPM soon to be released by Seagate. This filled me with a sense of dread about having to, once again, go through the process of convincing purchasing managers at various customer sites that actually, no, they cannot just buy three of these and RAID-5 them together into a huge storage area for their terabyte database. So now, tell me, what happens when you use very big disks for high-performance applications? A 7200 RPM spindle delivers roughly 100 random I/Os per second no matter how big it is, so three of these drives in RAID-5 give you about 1.5TB of usable space served by perhaps 300 IOPS. You have way, way too many square feet to service with far, far too few loading docks (and usually only one access road!).
I read an article about combining multiple CPUs and presenting them to the OS as a single CPU. That way, all applications currently designed to run only on a single CPU would be able to benefit from multiple CPUs. The problem is, I don't see how this can be achieved. CPUs already have multiple pipelines, out-of-order execution, and so on. Some instructions simply have to execute in sequence and cannot be parallelized. If this technology is implemented in a way similar to hard disk RAID, I wonder how it would affect application behavior.
Hearing this week that Larry Ellison is thinking of acquiring and releasing his own version of Linux, and seeing the ensuing media buzz, reminded me of a different, but no simpler, time: November 9, 1998. In fact, to say it reminded me is an understatement; I have a severe case of déjà vu!
Have you ever wondered how fast you can commit? No matter what you do, a true commit takes time. The time will be very small, but it will still be more than an in-memory query of a lookup table. And worst of all, you serialize the entire instance on it. So I ran a test of how fast you can commit with one session. This is the absolute maximum you can achieve in commit performance.
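A minimal sketch of such a single-session test is below. The table name and iteration count are placeholders, not the ones I used:

```sql
-- Throwaway table for the test.
CREATE TABLE commit_test (n NUMBER);

SET SERVEROUTPUT ON
DECLARE
  l_iters CONSTANT PLS_INTEGER := 10000;
  l_start PLS_INTEGER;
  l_secs  NUMBER;
BEGIN
  l_start := DBMS_UTILITY.GET_TIME;   -- hundredths of a second
  FOR i IN 1 .. l_iters LOOP
    INSERT INTO commit_test VALUES (i);
    COMMIT;                           -- each commit writes redo
  END LOOP;
  l_secs := (DBMS_UTILITY.GET_TIME - l_start) / 100;
  DBMS_OUTPUT.PUT_LINE('commits/sec: ' || ROUND(l_iters / l_secs));
END;
/
```

One caveat: inside PL/SQL, Oracle can avoid waiting for the redo write on every commit until the block returns, so a loop like this may report higher numbers than a client committing over SQL*Net after each statement; watch the log file sync waits to see what you really measured.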
Here’s a quick way to diagnose where the database is spending its time for a specific process or operation. It is very similar to what Statspack does, except it's targeted at a specific process, and it does include the time the database spent waiting on the application.
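The idea can be sketched as a before/after snapshot of the session's wait events. The snapshot table name and the SID (42) are placeholders for whatever session you are watching:

```sql
-- Snapshot the target session's cumulative waits before the operation.
CREATE TABLE sess_event_snap AS
  SELECT event, total_waits, time_waited
  FROM   v$session_event
  WHERE  sid = 42;

-- ... let the process run its operation in session 42 ...

-- Diff against the snapshot: where did the time go?
SELECT e.event,
       e.total_waits - NVL(s.total_waits, 0)   AS waits,
       e.time_waited - NVL(s.time_waited, 0)   AS centiseconds
FROM   v$session_event e
LEFT JOIN sess_event_snap s ON s.event = e.event
WHERE  e.sid = 42
ORDER  BY centiseconds DESC;
```

Unlike an instance-wide Statspack report, this shows only the one session, and events like “SQL*Net message from client” expose the time the database sat waiting for the application rather than working.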