After a little more than two weeks of soaking without any issues (yay!), version 1.45_00 of DBD::Oracle has been promoted to general use as v1.46.
And because some contributors have been very busy in the meantime, the next trial version, v1.47_00, is also already on its way to CPAN. This new version offers a few bug fixes (more details in the changelog excerpt below) as well as a rework of the platform-specific troubleshooting guides as POD documents. As usual, it’ll be left around for a minimum of two weeks before it gets promoted to v1.48. Happy upgrade!
Often, hardware is updated at the same time as the database version. In such cases, we migrate a database from its original location to a new server, upgrading the database version and sometimes changing the operating system along the way. If this is your situation, you can use the upgrade process to minimize the system's downtime to one hour, regardless of the size of the database you migrate.
Before I dig into the mechanics under the hood of the Hadoop beastie (which is the part, I assume, that is going to be heady as hell), I thought it would be a good idea to play a little bit with some of its applications to give me a feel for the lay of the land.
This blog post is a short summary of one of the migration strategies we used to migrate Oracle 10g databases to ODA, balancing the requirement of minimal downtime against the effort and cost of the project.
Backup is one of the most important topics for any Oracle DBA. It is our primary responsibility to make sure that at any point in time we can recover a database. Some time ago, I created a survey (my very first one): “Why do you use RMAN catalog DB for your Oracle DB backups?” In this blog post, I will share the survey’s results.
SLOB on steroids v0.1: Use it at your own risk. First of all, let me state that I’m falling in love with SLOB :).
Interval partitioning – the ability to create partitions on the fly – was introduced in 11g. When the feature came out, there were several nasty bugs. One particular “limitation” has to do with parallel group by on the partition key. If you want to see just that part, skip towards the end, but I think reading the whole post will offer some insight into how Oracle Parallel Query works.
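For readers who haven't used the feature, interval partitioning can be sketched roughly like this (the table name, columns, and interval below are made up purely for illustration):

```sql
-- Hypothetical table for illustration. With the INTERVAL clause,
-- Oracle creates a new monthly partition automatically the first
-- time a row falls beyond the highest existing partition boundary.
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE NOT NULL
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  -- Only the first partition is declared; the rest appear on the fly.
  PARTITION p_initial VALUES LESS THAN (DATE '2012-01-01')
);
```

The partition key here (`sale_date`) is also the column a parallel group by would hash on, which is where the “limitation” discussed in the post comes into play.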
A couple of years back, I created WWW::Ohloh::API because it seemed to be a fun thing to do. And trust me, it was. But now, since I’m not using that module personally, I thought it would be a good idea to see if anyone would be willing to co-maintain it. Before I could do that, though, I had to deal with two little problems.
My impressions: Considering that this conference was the first one of its kind in Argentina, I have to say it was great. Small, but great. Not many people knew about the event, but the response was still pretty good. It was a 3-day event with plenty of technical discussions and sessions, and other sessions came from companies that wanted to share their experiences and impressions of what is coming for MySQL technologies.
First of all, if you are using NFS to store Oracle database data files, I strongly advise you to enable Oracle Direct NFS (DNFS) to access these files. However, if you are not using NFS, DNFS is probably not of interest to you, and you can stop reading here. But if you do want to set it up, let's go!
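As a minimal sketch of what the setup involves: DNFS reads its mount definitions from an `oranfstab` file. The server name, IP address, and paths below are placeholders; substitute your own NFS filer details.

```
# Hypothetical /etc/oranfstab entry -- replace server, path,
# export, and mount values with your own environment's details.
server: myfiler
path:   192.168.1.10
export: /vol/oradata   mount: /u02/oradata
```

After that, on 11g and later, the Direct NFS ODM library is linked in by running `make -f ins_rdbms.mk dnfs_on` from `$ORACLE_HOME/rdbms/lib`, and the `alert.log` will confirm at the next startup that Direct NFS is in use.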