For a change, let’s begin with some PostgreSQL stuff. On Tending the Garden, Selena Deckelmann offers her retrospective thanks to those who attended and presented at the PostgreSQL Conference East.
Hey, there was a MySQL ambassador there, too — Baron Schwartz of xaprb. Here’s Baron’s recap of his experiences at the conference.
When pet projects bite back! reasserts that SQL is in fact a programming language. Sometimes one can forget that and need a little reminder. Or a not-so-little reminder, such as a three-page-long query. The discussion ranges into questions of design, a matter that Baron Schwartz also pursues: he asks (on behalf of his wife), what is your favorite database design book? (I want to know too — um, for . . . a friend of mine.) Lots of good responses so far.
From huge queries to huge tables. On the MySQL Performance Blog, Aurimas Mikalauskas walks us through using MMM to ALTER huge tables. He writes, “When it comes to changes that really require table to be rebuilt – adding/dropping columns or indexes, changing data type, converting data to different character set – MySQL master-master replication especially accompanied by MMM can be very handy to do the changes with virtually no downtime.”
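The rolling-ALTER idea can be sketched roughly as follows. This is a hypothetical outline, not Aurimas’s exact procedure; the table and column names are made up, and the real process involves MMM coordinating the writer role between the two masters:

```sql
-- On the currently passive master, apply the DDL without writing it
-- to the binary log, so it does not replicate to the active master:
SET SESSION sql_log_bin = 0;
ALTER TABLE orders ADD COLUMN note VARCHAR(255);
SET SESSION sql_log_bin = 1;

-- Then move the writer role to this node (with MMM, something like
-- `mmm_control move_role writer db2` from the shell), and repeat the
-- same ALTER on the former active master while it is passive.
```

Each master is altered while it carries no write traffic, which is how the change happens with “virtually no downtime.”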
Maybe you’d like to get a taste of Falcon without installing it, as Giuseppe Maxia writes. He suggests using something called db4free, a project that describes itself as “. . . a platform for database users and developers from all over the world to exchange experiences, give hints to each other or to discuss.”
Arjen Lentz is looking forward to a session called Falcon for InnoDB Users at the upcoming MySQL Conference & Expo: “Falcon is indeed not a [drop-in] replacement for InnoDB. Earlier on it was kind-of presented that way by MySQL Marketing, basically responding to the Oracle acquisition of InnoDB. Strategically, yes Falcon is fully owned by MySQL, so that was important to push as a point. But simplification can hurt, as technically Falcon works quite differently.”
Sometimes I like to point out that DBAs, regardless of the technology, are all in the same boat. However, looking at Mark Rittman’s post on the Oracle User Group Norway conference, I think I may have to reconsider that.
Jason Arniel writes, “I encountered a curious failure in a dataguard environment, that seems interesting enough to distribute to a wider audience.” At first it seemed that the diskgroup was filling up, but as Jason demonstrates, it turns out to be a matter of managing datafiles on a standby using ASM.
David Aldridge, the Oracle Sponge, looks into discarding the output of a select statement: “. . . you want to run a select for a big bunch of rows, maybe for measuring wait events for a tricky query, but you want to discard the output so you don’t incur the network traffic and waits to the client. You could use autotrace in sql*plus, but how about something more generic?” He shows one way, and the many comments offer some other approaches and refinements.
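One generic way to do this — not necessarily the approach David shows, and using a made-up table name — is to fetch the rows inside a PL/SQL block, so every row is consumed on the server and nothing is shipped to the client:

```sql
-- Hypothetical sketch: run the query and discard its rows server-side,
-- so no result set crosses the network to the client.
BEGIN
  FOR r IN (SELECT * FROM big_table) LOOP
    NULL;  -- do nothing with each fetched row
  END LOOP;
END;
/
```

The query still does its full work (useful when measuring wait events), but the client never incurs the fetch and network waits of a large result set.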
Here at home on the Pythian blog, Karun Dutt posted his first item, 3 Tips on Using dg4odbc on 64-bit Linux, the result of having to make data available to an application that formerly queried an MS SQL Server.
Linchi Shea offers an item on the performance impact of controller cache, “[looking] at how the controller cache configurations may impact on the performance of read queries whose execution plans are dominated by index seeks.”
From the Database Underground, Sean McCown reports on being taken in on April Fools’ Day by a prank of Steve Jones’s that begins, “SQL Server is being entirely rewritten for 2010, with a current code name of ‘Kalamazoo’.” However, as Sean writes, “. . . a lot of the stuff he talked about, I’ve been talking about recently both with MS and with other DBAs as well as in online articles.”
On plok, Jan Lehnardt offers video and slides from a recent presentation of his on CouchDB.
Back here again, Paul Vallée submits what he dubs Rajaraman’s First Law: more data will beat better software.
More data. Dominic Brooks of OraStory says, You want data? This is data. The post is about the scale of the database operations of CERN’s Large Hadron Collider, reportedly around 15 petabytes per year. While this is an Oracle-based operation, perhaps MySQL’s BLACKHOLE engine will turn out to be more appropriate for this particular project?
Until next time!