Here is an article I posted on the Oracle-L mailing list recently. Much to my surprise, people liked it enough that they asked to see it here, too.
This was written in response to the question: “How do you edit an SPFILE?”
I skipped the general session but heard there was talk about Oracle unlimited support. Someone summed it up as: Oracle will take your money for as long as you want to give it to them. So no more “de-supported” products. Of course, old products will not get bug fixes unless you really, really, really pay for it.
Good luck to acquaintances of mine (dare I say friends? We have never met in person!) Jonathan Lewis and Doug Burns, who are presenting on April 27th at the UKOUG Northern Server Technology Day.
After I actually woke up, I found myself attending a session about LogMiner by Chet Watmough. LogMiner, in my opinion, is a vastly under-used utility with great versatility. Chet showed some techniques for adding extra logging: by inserting the client info into a dummy table, just to produce a record in the redo stream, he can identify who did what.
So I made my decision and went to “RAC &amp; ASM best practices” presented by Kirk McGowan of Oracle. I wanted to see how far the documentation / common sense was from what will actually work. It wasn’t that far from the docs, which is good. It means Oracle got it right.
Good luck to Pythian DBAs Christo Kutrovsky and Babette Turner-Underwood, who are presenting at Collaborate ’06 today and tomorrow. These are two outstanding presenters with deep technical knowledge reinforced by day-to-day use of the technology, and I would encourage anyone to attend. If you attended the presentation, by all means post any feedback here!
I got in today and missed the general session … because I did not get here in time and still had to register. I started the morning with an Oracle session on RAC. It was a two-part session, and I did not have the patience to wait out the first half. It was on best practices, and I was looking for something more technical.
I needed to change the undo_retention parameter in a 10gR2 database. Unfortunately, I had forgotten whether the parameter was in seconds or minutes, and I wanted to see what the default was. So I opened up the docs, went into the Reference guide, and got the info I needed from the top few lines. I wanted to set undo_retention to some large value in order to use flashback, so I decided to check whether there were any notes about setting it too large. I started reading the notes. I’ve read them in the past, so I didn’t expect anything new. Then I read the following: “For fixed-size undo tablespaces, the system automatically tunes for the maximum possible undo” What? Fixed-size undo vs. autoextend undo? That’s new!
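For the record, undo_retention is in seconds, and the documented default is 900 (15 minutes). Sizing it for a flashback window is then just arithmetic; a quick sketch (the 24-hour window below is only an example, not a recommendation):

```python
# undo_retention is specified in seconds; the documented default is 900 (15 min).
def undo_retention_for(hours):
    """Return the undo_retention value (seconds) for a desired flashback window."""
    return int(hours * 60 * 60)

print(undo_retention_for(0.25))  # the 900-second default = a 15-minute window
print(undo_retention_for(24))    # a 24-hour flashback window -> 86400 seconds
```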
I tripped across this story about some new 750G disks @ 7200 RPM soon to be released by Seagate. This filled me with a sense of dread about having to, once again, go through the process of convincing purchasing managers at various customer sites that actually, no, they cannot just buy three of these and RAID-5 them together into a huge storage area for their terabyte database. So now, tell me, what happens when you use very big disks for high-performance applications? You have way, way too many square feet to service with far, far too few loading docks (and usually only one access road!!!).
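The loading-dock problem is easy to quantify: a 7200 RPM spindle delivers roughly the same random IOPS regardless of its capacity, so fewer, bigger disks means fewer IOPS per gigabyte. A rough sketch (the 75-IOPS-per-spindle figure is an assumed ballpark, not a measured number, and the 146G alternative is just for comparison):

```python
# Rough comparison of IOPS density: a few large spindles vs. many small ones.
# Assumption: ~75 random IOPS per 7200 RPM spindle, independent of capacity.
IOPS_PER_SPINDLE = 75

def raid5_usable_gb(disks, disk_gb):
    # RAID-5 loses one disk's worth of capacity to parity.
    return (disks - 1) * disk_gb

def iops_per_100gb(disks, disk_gb):
    # Aggregate spindle IOPS divided by usable capacity, per 100 GB.
    return disks * IOPS_PER_SPINDLE / raid5_usable_gb(disks, disk_gb) * 100

# Three 750G disks vs. sixteen 146G disks -- both around 1.5 TB usable.
print(round(iops_per_100gb(3, 750), 1))    # few big disks: low IOPS density
print(round(iops_per_100gb(16, 146), 1))   # many small disks: much higher
```

Same usable terabyte and a half, but the many-small-disk array has several times the I/O capacity per gigabyte, which is exactly why three huge drives make a poor home for a busy terabyte database.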
I read an article about combining multiple CPUs and presenting them to the OS as a single CPU. That way, all applications currently designed to run on only a single CPU would be able to benefit from multiple CPUs. The problem is, I don’t see how this can be achieved. CPUs already have multiple pipelines, out-of-order execution, etc. Some instructions simply have to be executed in sequence and cannot be parallelized. If this technology were implemented in a way similar to hard disk RAID, I wonder how it would affect application behavior?
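The limit on this idea is just Amdahl’s law: if only a fraction p of the work can actually be spread across CPUs, the speedup flattens out fast no matter how many CPUs you glue together. A sketch (the 90% figure is an arbitrary assumption for illustration):

```python
# Amdahl's law: speedup from n processors when only fraction p of the
# work is parallelizable; the remaining (1 - p) must run in sequence.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even if 90% of the instruction stream could be parallelized,
# the serial 10% caps the combined "single CPU" well below n-fold gains.
for n in (1, 2, 4, 8, 16):
    print(n, round(speedup(0.90, n), 2))
```

With 90% parallel work, 16 CPUs buy only about a 6.4x speedup, and the ceiling is 10x no matter how many CPUs the hardware presents as one. That is why the sequential-instruction problem can’t be RAID-ed away.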