What is the importance of running regular consistency checks? Paul S. Randal returns with some survey results and analysis. He writes, “The results are actually surprising – I didn’t expect so many people to be running consistency checks so frequently . . . ”
Jonathan Kehayias appears this week with a rant: Got Performance Problems? Buy bigger hardware! “. . . The title of this thread is very tongue in cheek . . . However, it is the most common answer I seem to get from application vendors these days when dealing with performance issues, despite the fact that I can point out a dozen reasons why their application design/code is the problem. . . . How does this relate to SQL Server? . . . [When] I get to look at badly performing code, I often find the same problems . . . Correlated subqueries, user defined functions that perform data access, table valued parameters holding thousands of rows of data . . . seem to be the most common killers to TSQL performance . . . ”
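Kehayias doesn’t include code, but the correlated-subquery problem he names is easy to sketch. Here’s a minimal, hypothetical illustration (an invented `orders` table, with Python’s sqlite3 standing in for a real server) of the classic rewrite from a per-row correlated subquery to a join against a derived table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, amount INT);
INSERT INTO orders VALUES (1, 1, 10), (2, 1, 30), (3, 2, 25);
""")

# Correlated subquery: the inner query is logically re-evaluated
# once per outer row, which can be a killer on large tables.
slow = cur.execute("""
    SELECT o.id, o.amount
    FROM orders o
    WHERE o.amount = (SELECT MAX(o2.amount)
                      FROM orders o2
                      WHERE o2.customer_id = o.customer_id)
""").fetchall()

# Equivalent rewrite as a join against a derived table:
# the aggregate is computed once per customer, not once per row.
fast = cur.execute("""
    SELECT o.id, o.amount
    FROM orders o
    JOIN (SELECT customer_id, MAX(amount) AS max_amount
          FROM orders
          GROUP BY customer_id) m
      ON m.customer_id = o.customer_id
     AND o.amount = m.max_amount
""").fetchall()

print(sorted(slow) == sorted(fast))  # same rows, different plan shape
```

The two queries return the same rows; whether the optimizer decorrelates the first form on its own varies by engine and version, which is exactly why this pattern keeps turning up in badly performing code.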
Here’s a question from SQL in the Wild—is a scan a bad thing? “Let’s imagine a hypothetical DBA who’s doing some performance tuning. He looks at a query plan for a moderately complex query and panics because there’s a couple of index scans and he wants to rather see index seeks. . . . Is that correct, are index scans bad and index seeks good?” As the author and readers point out, things are rarely that simple.
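One way to see the author’s point: a seek wins when the predicate is selective, while reading most or all of a table makes a scan the cheaper plan. A small sketch (hypothetical table and index, SQLite’s EXPLAIN QUERY PLAN standing in for a SQL Server execution plan):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, status TEXT)")
cur.execute("CREATE INDEX ix_status ON t (status)")
# 1% "open" rows, 99% "closed" rows
cur.executemany("INSERT INTO t (status) VALUES (?)",
                [("open" if i % 100 == 0 else "closed",) for i in range(10_000)])

def plan(sql):
    """Return the optimizer's plan description for a query."""
    rows = cur.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(row[3] for row in rows)

# Selective predicate: the optimizer seeks via the index.
print(plan("SELECT * FROM t WHERE status = 'open'"))
# No predicate: touching every row, so a full scan is the right plan.
print(plan("SELECT * FROM t"))
```

The scan in the second plan isn’t a mistake; it’s the cheapest way to return everything, which is the author’s point about not panicking at the operator name alone.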
Tibor Karaszi warns, watch out for that autogrow bug. “Under some circumstances, autogrow for database files can be set to some 12000 percent. . . . So, if you have a reasonably sized database and autogrow kicks in, you can do the maths and realize that pretty soon you are out of disk space.”
Dave Beulke has some thoughts on performance features in DB2 V9 for z/OS. He writes, “These many improvements result in less CPU and also allow you to do on-line reorgs during non-peak processing helping to eliminate those late night or off hours reorgs that drive our schedules crazy as a DBA.”
If you’re going ahead with that, you will appreciate DB2 9.7 Migration: Achieving Maximum Benefit (Particularly Compression) on Diary of a Database Geek.
Conor O’Mahony’s DB2 News and Personal Views shares Truth in Advertising – Advanced Data Compression in Oracle 11g. “The point of this blog post is not to question whether Oracle can achieve high compression rates or whether performance is acceptable when compression is turned on. The point of this blog post is to make sure you are aware that compression rates and performance are highly dependent on the nature of your data and the nature of your environment, and don’t let vendors like Oracle tell you otherwise.”
The results of the First International NoCOUG SQL Challenge were announced this week by its creator, Iggy Fernandez. Log Buffer congratulates the winner, Alberto Dell’Era. (Congrats also are due Pythian’s André Araújo for his runner-up entry.)
Chen Shapiro shares her thoughts on the NoCOUG results: “[The] winning solution is just plain SQL. It is algorithmically brilliant . . . but still just plain (very long) SQL. And I was wondering – this is a SQL challenge and not a math challenge. Shouldn’t the winning solution demonstrate more SQL brilliance and less math brilliance?”
On his Oracle Scratchpad, Jonathan Lewis has the third in his series, Philosophy. This edition’s precept: “The performance of a query should be related to the size of the data set you’re interested in, not to the size of the database.”
Kerry Osborne wants to know, Why isn’t Oracle using my outline/profile/baseline? “I seem to have spent a lot of time during the last week having conversations about why Oracle is not locking plans, even when an Outline or SQL Profile has been created. I mean, their whole purpose in life is to keep the optimizer from changing plans, right?”
Domas Mituzas was hunting down that ol’ evil replication management. He writes, “When one wants to script automated replication chain building, certain things are quite annoying, like immutable replication configuration variables. For example, at certain moments log_slave_updates is more than needed . . . [There] are few options, roll in-house fork (heheeeee!), restart your server, and keep warming up your tens of gigabytes of cache arenas, or wait for MySQL to ship a feature change in next major release. Then there are evil tactics . . . ”
On High Availability MySQL, Mark Callaghan pursued fast count(*) for InnoDB. He writes, “Why must SELECT COUNT(*) FROM FOO run fast? . . . When there isn’t a where clause, MyISAM executes SELECT COUNT(*) FROM FOO fast. When there is a WHERE clause, MySQL has limited support for combining index scans but nothing like bitmap indexes. . . . If you must, the following will make SELECT COUNT(*) FROM FOO fast for InnoDB . . . ”
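One common workaround in this space (not necessarily Callaghan’s exact recipe) is to maintain the count yourself in a side table so reading it is O(1) instead of a full scan. A hedged sketch, using invented table names and sqlite3 triggers purely for a runnable illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE foo (id INTEGER PRIMARY KEY, val TEXT);
CREATE TABLE foo_count (n INTEGER NOT NULL);
INSERT INTO foo_count VALUES (0);

-- Keep the counter in step with inserts and deletes,
-- so reading it replaces a scan of foo.
CREATE TRIGGER foo_ins AFTER INSERT ON foo
BEGIN UPDATE foo_count SET n = n + 1; END;
CREATE TRIGGER foo_del AFTER DELETE ON foo
BEGIN UPDATE foo_count SET n = n - 1; END;
""")

cur.executemany("INSERT INTO foo (val) VALUES (?)", [("a",), ("b",), ("c",)])
cur.execute("DELETE FROM foo WHERE val = 'a'")

# O(1) read instead of SELECT COUNT(*) FROM foo
count = cur.execute("SELECT n FROM foo_count").fetchone()[0]
print(count)  # 2
```

The trade-off is that every write now also touches the counter row, which serializes concurrent writers; that contention is part of why there is no free lunch here for InnoDB.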
Shlomi Noach stepped up with reasons to use InnoDB plug-in. “The plugin is a drop-in replacement for “normal” InnoDB tables; enabling many new features. It is the outcome of a long termed silence from InnoBase (Oracle), which were thought to be neglecting the InnoDB engine.”
Tony Bain argues that the NoSQL community needs to engage the DBAs. “I know the majority of people discussing NoSQL platforms today are web developers. In fact there is almost a sense of trying to keep this under the radar of DBAs. If we don’t talk to the DBAs about this stuff then they won’t bother us with all that jabber about consistency, data integrity, robustness and recovery. . . . Actually, many of the NoSQL projects are touting one of the key benefits of a NoSQL platform is you can do big data without the need of a costly DBA. . . . Baloney.” Good point!
That’s all for now. Please add your favourite DB blogs from this week in the comments. Till next time!