Posts by Don Seiler
Last week I returned to Denver, Colorado for the RMOUG Training Days conference. I had gone last year as an attendee, but this year I was invited to give my presentation on the Oracle 11g ADR (slides and info at the end of this post). While I haven’t attended very many conferences yet, I can’t imagine a bigger bang for your buck than RMOUG Training Days.
When I found out that NoCOUG had accepted my abstract, “Oracle 11g: Learning to Love the ADR”, I was both ecstatic and terrified. This meant that I actually had to prepare the presentation and speak in front of peers. Surely they would throw me into San Francisco Bay if I didn’t bring my A-game, so I set out to do just that.
No, this isn’t a re-post of my earlier blog about bug 1233183.1. We’ve found a fun new bug that seems to be specific to our poor standalone ASM instances when upgrading from Oracle Grid Infrastructure 11.2.0.2 to 11.2.0.3.
We have a client that runs an application that, for whatever reason, does NOT like daylight saving time. Because of that, the Oracle server is kept permanently on Eastern Standard Time and does not change with the rest of the eastern United States when DST begins and ends each year. They accomplish this with a custom /etc/localtime file. However, they left /etc/sysconfig/clock set to “TZ=America/New_York,” which would prove fateful, as I’ll explain.
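The practical effect is easy to see from the shell: a zone that observes DST and a fixed-offset zone render the same UTC instant an hour apart during the summer. A minimal illustration using GNU date (the zone names here are standard tzdata/POSIX entries for demonstration, not the client’s actual configuration):

```shell
# Same UTC instant, two timezone definitions (GNU date required for -d).
# America/New_York observes DST; the POSIX zone EST+5 is fixed at UTC-5.
TZ=America/New_York date -d '2010-07-01 12:00 UTC' '+%H:%M %Z'   # 08:00 EDT
TZ=EST+5           date -d '2010-07-01 12:00 UTC' '+%H:%M %Z'    # 07:00 EST
```

During the winter the two zones agree, which is exactly why a mismatch between /etc/localtime and /etc/sysconfig/clock only bites when DST begins or ends.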
The past four days have found me very frustrated and at wits’ end while testing upgrades of standalone Oracle Grid Infrastructure (ASM) from 11.2.0.2 to 11.2.0.3 on RHEL/OEL 5 VMs. The upgrade would seem to go fine, but after rebooting, I would see ASM and the LISTENER running out of the old (11.2.0.2) grid home again. This is the second bug I’ve encountered in the 11.2.0.3 upgrade process.
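After a reboot like this, a quick sanity check is to see which home the running ASM and listener binaries actually came from. A minimal sketch for Linux; the asm_pmon/tnslsnr process-name patterns are assumptions about a typical standalone +ASM setup:

```shell
# List ASM and listener processes with their full command lines; the path
# in the tnslsnr command line shows the listener's home directly.
ps -eo args | grep -E '[a]sm_pmon|[t]nslsnr' || echo "no ASM/listener processes found"

# For ASM, resolve the running binary's location via /proc to see which
# grid home it actually started from.
pid=$(pgrep -f asm_pmon || true)
if [ -n "$pid" ]; then
    ls -l "/proc/$pid/exe"
fi
```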
A few months ago, we had a test instance complaining that it couldn’t write to ASM. This was an 11g single (non-RAC) instance on Oracle Enterprise Linux 5, using ASM for storage. We ended up rebooting the server altogether, after which everything came up nicely. We filed an SR with Oracle Support, who directed us to Note 391790.1 (Unable To Connect To Cluster Manager Ora-29701). This note lists the cause, quite simply, as…
I recently reformatted my laptop with the latest Ubuntu LTS release, 10.04, aka Lucid Lynx. Since I like to have a native client installation as well as a portable sandbox server, I decided to install the latest version of Oracle EE, 11gR2. Rather than re-invent the wheel, I’ll direct you to the previous Oracle-on-Ubuntu post by my colleague Augusto Bott. Most of the directions there hold true here (even across the 32-bit vs 64-bit difference), with a few exceptions.
I saw that PSU 9343627 was released and gave it a whirl. I was a little confused that the README seemed to assume a clustered RAC install throughout. My setup was a single-instance Grid Infrastructure installation, used only to provide ASM. I ran into problem after problem with the very first setup step. I decided to try the PSU anyway, figuring Oracle would surely have noted in the README if this were a problem for single-node installations. Turns out I was wrong.
Today, I had a spare Fedora 11 machine sitting next to me, so I thought I’d try the upgrade to the newly-released Fedora 12, aka “Constantine.” Fedora support cycles are rather short compared to Ubuntu, so Fedora 11 will likely be de-supported in 6 to 7 months. Following the Fedora documentation, I decided to use the “preupgrade” tool. Everything was going smoothly until the machine restarted to begin installation of the new packages. I got a message that there wasn’t enough space in my /boot partition. Turns out that this is a known problem with the preupgrade tool. A kind soul in #fedora on IRC directed me to the list of common Fedora 12 bugs, in particular the preupgrade free space check. I strongly suggest you visit the links provided for helpful screenshots and commands to follow.
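If you’d rather catch this before preupgrade reboots the machine, a quick look at /boot is enough. A minimal sketch; the ~25 MB threshold is my own rough figure for a kernel plus installer images of that era, not an official number:

```shell
# Report free space in /boot and warn if it's below a rough threshold.
# (-P gives POSIX output so the awk column is stable; threshold is a guess.)
need_kb=26000
avail_kb=$(df -Pk /boot | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt "$need_kb" ]; then
    echo "WARNING: only ${avail_kb} KB free in /boot; preupgrade may fail"
else
    echo "/boot has ${avail_kb} KB free"
fi
```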
I recently performed a migration from Oracle 10gR2 on Solaris to the same version on Linux, immediately followed by an upgrade to 11g. Both platforms were x86-64. Migrating to Linux also included migrating to ASM, whereas we had been using ZFS to hold the datafiles on Solaris. Restoring files into ASM meant we would have to use RMAN.
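The restore-into-ASM step boils down to pointing RMAN’s restored datafiles at a disk group rather than at filesystem paths. A minimal sketch of the idea, not the actual migration script; the +DATA disk group name is an assumption, and on 10gR2 you would need a SET NEWNAME per datafile rather than the 11g-style FOR DATABASE shorthand shown here:

```
RUN {
  # Direct restored datafiles into the ASM disk group instead of OS paths.
  SET NEWNAME FOR DATABASE TO '+DATA';
  RESTORE DATABASE;
  # Repoint the control file at the restored copies inside ASM.
  SWITCH DATAFILE ALL;
}
```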