Log Buffer #148: A Carnival of the Vanities for DBAs
May 29, 2009 / By David Edwards
This is the 148th edition of Log Buffer, the weekly review of database blogs. Welcome.
Since PGCon ’09 concluded not long ago (and not far away), let’s start with Postgres stuff, much of which has to do with the conference.
Here are Robert Treat’s reflections on PGCon 2009, on his zillablog: “ . . . PGCon always presents the strongest line up of Postgres information available, and this year was certainly no exception.”
Jignesh Shah reports on the upshot of some interaction at his PGCon presentation, “ . . . it is highly recommended specially on multi-core systems to use FX scheduler class for Postgres on OpenSolaris.”
Josh Berkus and his readers, meanwhile, offer a worthwhile discussion of PostgreSQL development priorities. Josh’s #1: “Simple built-in replication.”
Linchi Shea has an update to his T-SQL exercise: write the simplest data-loading script that produces the worst query performance. “The original intent,” writes Linchi, “was to highlight some pitfalls in data loading that may lead to bad query performance. But then I thought why take all the fun away by having too many constraints, and why not just let it loose and see how bad it can get if one is to do it intentionally.”
Aaron Alton, the HOBT, was likewise thinking on ways the SQL DBA goes awry. His conclusion: easy on the updates there, sparky. “To the unsuspecting database developer, it may seem that some operations in SQL Server are more or less ‘free’. Let’s clear the air on that one – nothing is free, ever. Or if it is, it usually has a 30 day limit. It’s easy to forget this, because when you’re working with something like SQL Server, it’s hard to imagine that a sub-second response time can hide anything of significant concern.”
Sometimes you don’t need to make things bad intentionally. For example, when a decimal isn’t a decimal. Simon Sabin writes, “To say the type system in SQL is lax is not quite correct; it’s actually lax, inconsistent and very annoying.”
The Data Management blog says “compression tools are a must for any DBA”, asserting that “At a high level, compression software in itself can give you a vast amount of options that you simply may not be able to grasp without.”
CPU Costing and the effects of multiple blocksizes – part 4 arrives at Randolf Geist’s Oracle related stuff.
On the Oracle Scratchpad, Jonathan Lewis describes “CPU used,” which demonstrates how, “‘CPU Time’ in the ‘Top N Timed Events’ . . . [looks] very different from the ‘BUSY_TIME’ that appears in the ‘OS Statistics’ part of the [Statspack] report.”
On High Availability MySQL, Mark Callaghan has a good reason to use innodb_file_per_table: per-table IO statistics.
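For reference, enabling it takes a single line in my.cnf (an illustrative fragment; note that in the MySQL 5.0/5.1 era of the post, the option is off by default and applies only to tables created after it is set):

```ini
[mysqld]
# Store each InnoDB table in its own .ibd file, which is what makes
# per-table file-level IO statistics (and per-table space reclamation) possible.
innodb_file_per_table = 1
```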
Dimitri K. looks into InnoDB Dirty Pages & Log Size Impact. He begins, “ . . . seeking for the most optimal MySQL config parameters I’ve discovered a strange thing: my dirty pages percentage setting 15 was completely ignored by InnoDB during my tests; [and] once the test workload was finished it still took 30 minutes yet to flush dirty pages!”
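The knobs in play, for reference. This my.cnf fragment is illustrative, not a recommendation: the 15 mirrors the dirty-pages percentage Dimitri mentions, while the log size value is a placeholder assumption.

```ini
[mysqld]
# Target ceiling for the share of dirty pages in the buffer pool --
# the setting Dimitri found InnoDB effectively ignoring under load.
innodb_max_dirty_pages_pct = 15
# Redo log size (placeholder value); larger logs let InnoDB defer
# flushing, which interacts with how quickly dirty pages are written back.
innodb_log_file_size = 256M
```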
That’s all for now. I’d love to hear from you, so please share your favourite DB blogs from the week gone by in the comments.
Till next time!