I tripped across this story about new 750 GB disks at 7,200 RPM soon to be released by Seagate. It filled me with a sense of dread about having to, once again, go through the process of convincing purchasing managers at various customer sites that no, they cannot just buy three of these and RAID-5 them together into a huge storage area for their terabyte database. So tell me: what happens when you use very big disks for high-performance applications? You have way, way too many square feet to service with far, far too few loading docks (and usually only one access road!).
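The loading-dock analogy comes down to simple arithmetic: for the same capacity, fewer and bigger spindles means fewer independent disk arms, and therefore far fewer random I/Os per second. A rough sketch (the per-spindle IOPS figure and disk sizes below are illustrative assumptions, not vendor specs):

```python
# Illustrative arithmetic for the loading-dock analogy: the same capacity
# built from fewer, bigger spindles delivers far fewer random IOPS.
# ~100 IOPS is a common rough figure for a single 7,200 RPM disk.

def array_iops(disk_count, iops_per_disk=100):
    """Aggregate random-I/O capacity of an array of identical spindles."""
    return disk_count * iops_per_disk

# Roughly the same usable capacity built two ways:
big = array_iops(3)     # 3 x 750 GB (the RAID-5 "bargain")
small = array_iops(12)  # 12 smaller disks of the same total capacity

print(big, small)  # 300 1200 -- four times the spindles, four times the IOPS
```

Capacity per dollar keeps improving much faster than IOPS per spindle, which is exactly why the purchasing-manager conversation keeps happening.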
I read an article about combining multiple CPUs and presenting them to the OS as a single CPU. That way, all applications currently designed to run on only a single CPU would be able to benefit from multiple CPUs. The problem is, I don't see how this can be achieved. CPUs already have multiple pipelines, out-of-order execution, and so on. Some instructions simply have to be executed in sequence and cannot be parallelized. If this technology is implemented in a way similar to hard disk RAID, I wonder how that would affect application behavior?
Hearing this week that Larry Ellison is thinking of acquiring and releasing his own version of Linux, and seeing the ensuing media buzz, reminded me of a different, but no simpler, time: November 9, 1998. In fact, to say it reminded me is an understatement; I have a severe case of déjà vu!
Have you ever wondered how fast you can commit? No matter what you do, a true commit takes time. The time will be very small, but it will still be more than an in-memory query of a lookup table. And worst of all, the entire instance will serialize on it. So I ran a test of how fast you can commit with one session. This is the absolute maximum you can achieve in commit performance.
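The shape of such a test is simple: one session inserting a row and committing in a tight loop, then dividing elapsed time by the number of commits. A minimal sketch, using SQLite from the Python standard library as a stand-in for the database (an in-memory SQLite commit skips the real redo-log/disk-sync cost, so the absolute numbers are not comparable; only the measurement loop is the point):

```python
# Single-session commit throughput test: insert one row, commit, repeat,
# and divide the commit count by the elapsed time.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")

n_commits = 1000
start = time.perf_counter()
for i in range(n_commits):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
    conn.commit()  # each commit pays the per-commit overhead
elapsed = time.perf_counter() - start

print(f"{n_commits / elapsed:.0f} commits/sec from one session")
conn.close()
```

Against a real database, the per-commit cost is dominated by the synchronous log flush, which is exactly the serialization point the post is about.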
Here’s a quick way to diagnose where the database is spending its time for a specific process or operation. It’s very similar to what Statspack does, except that it’s targeted at a specific process, and it does include the time the database spent waiting on the application.
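The general technique behind this kind of diagnosis is snapshot-and-diff: capture the session's cumulative wait-event timings before and after the operation, then report the delta per event. In Oracle the snapshots would come from a view such as V$SESSION_EVENT; in this sketch they are plain dicts with made-up numbers so the example is self-contained:

```python
# Snapshot-and-diff of cumulative per-event wait times for one session.
# The event names and timings below are hypothetical sample data.

def diff_waits(before, after):
    """Return time accrued per event between the two snapshots."""
    return {event: after[event] - before.get(event, 0)
            for event in after
            if after[event] - before.get(event, 0) > 0}

# Hypothetical cumulative time_waited per event, before and after:
before = {"db file sequential read": 1200, "log file sync": 300,
          "SQL*Net message from client": 5000}
after  = {"db file sequential read": 1450, "log file sync": 900,
          "SQL*Net message from client": 9000}

for event, t in sorted(diff_waits(before, after).items(),
                       key=lambda kv: kv[1], reverse=True):
    print(f"{event:30s} {t}")
```

Note that the client-wait event ("SQL*Net message from client" here) is what captures time the database spent waiting on the application, which instance-wide tools typically discard.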
I am happy to present the very first post on my new group blog, where I will be inviting Pythian DBAs with interesting thoughts to contribute blog posts of their own. My subject for this first post: DBD::Oracle. Pythian is a big user of Perl and DBD::Oracle internally: our problem-tracking groupware, Support Track, and our availability-monitoring software, avail, are both written using the DBI to communicate with the underlying database.
Sean Hull at the Oracle Open Source blog interviewed me about Pythian taking on the stewardship of DBD::Oracle, and he even has a podcast of the interview available on his site.
Pythian News is now using WordPress to transform our Industry News section into a group blog of articles relevant to the DBA community at large. We hope this will allow readers from across the DBA industry to get involved: not only viewing our posts, but also contributing to the discussions.
Computer Associates has sold its Ingres database to a private equity firm, Garnett & Helfrich Capital. The new owners of the database have reaffirmed their commitment to making Ingres open source, and are planning to earn revenue from support contracts and other related services.
Oracle announced that it would release a free-for-production-use version of its database product, limited to 4 GB of data in the database, 1 GB of RAM, and a single CPU. Today, the four major open source DBMS vendors (MySQL, Postgres, Firebird, and Sleepycat) unanimously rejected Oracle’s new free-to-use Express Edition beta, claiming that users will not accept the capacity limits of the new Oracle product.
MySQL AB, “developer of the world’s most popular open source database”, has today released the production version of MySQL 5.0, its most important upgrade in ten years. Version 5.0 provides new enterprise database features such as stored procedures, triggers, views, and cursors.