Posts by Keith Murphy
It’s time to continue our series on the transactional storage engines for MySQL. Some might question why I even include Falcon, because it is very much beta at this time. MySQL, however, has made quite an investment in Falcon, and while it is currently beta, the code is improving and it looks like it will be production-worthy when MySQL server 6.0 hits GA. I am going to concentrate quite a bit on the Falcon/InnoDB comparison, as that is what everyone wants to talk about. This is despite my having heard MySQL employees repeatedly make statements to the effect of, “Falcon is not going to replace InnoDB,” or “Falcon is not competing with InnoDB.” Well, take that with a grain of salt. It certainly seems to me that they are competing for the same spot.
Seems I have turned into a bit of a news source. dbWatch Software sent me a news release on their dbWatch monitoring platform, which looks like it might be an interesting product for those who work in a heterogeneous database environment. Here’s the release.
InnoDB is a storage engine that uses MVCC (described shortly) to provide ACID-compliant transactional data storage using row-level locking. MVCC stands for Multi-Version Concurrency Control. It is how InnoDB allows multiple transactions to look at a data set of one or more tables and have a consistent view of the data. MVCC keeps a virtual snapshot of the dataset for each transaction. An example will make this clear.
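Here is a minimal sketch of a consistent read under MVCC, using a hypothetical table `t` (not from the post) and InnoDB’s default REPEATABLE READ isolation level:

```sql
-- Session 1
START TRANSACTION;
SELECT val FROM t WHERE id = 1;   -- reads 10; this establishes the snapshot

-- Session 2 (meanwhile, in a different connection)
UPDATE t SET val = 20 WHERE id = 1;
COMMIT;

-- Session 1 again
SELECT val FROM t WHERE id = 1;   -- still reads 10: the transaction sees
                                  -- its snapshot, not the committed change
COMMIT;
SELECT val FROM t WHERE id = 1;   -- a fresh read now sees 20
```

Note that Session 1’s plain SELECTs are non-locking, so Session 2 is never blocked; each transaction simply sees its own version of the row.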
I began to write a post on InnoDB transactions, but there was so much background material that I decided first to write a post introducing transactions, and then one on how InnoDB implements them. If there is a good response to these two posts, I will continue with other posts on the major storage engines and their transactional characteristics.
As an editor, I cringe at the title of this post. However, it is absolutely accurate. Recently, we had a situation with two servers running Sun Solaris 10 on some high-end Sun hardware. The two servers are configured in a master-master circular replication setup. Here is the problem. On both servers, the error logs were being created incorrectly. On one of them, the server was creating an error log 154 megabytes in size. FLUSH LOGS worked, but the newly-created error log would be the same size. While there was some data in the file that I could read with the cat, head, or strings commands, the majority of the file was not text data. If anyone has any thoughts or a similar experience, I would love to hear from you!
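For anyone wanting to poke at a file like this, here is the kind of inspection I mean, sketched against a synthetic file (I obviously can’t share the original log, so the path and contents below are made up for illustration):

```shell
# Simulate an error log that is mostly non-text data; the real file would be
# wherever log-error points in my.cnf.
printf 'mysqld started\0\0\0\0\0\0\0\0\0\0[ERROR] something\0\0\0\0' > /tmp/fake-err.log

# file(1) will classify it as data rather than text
file /tmp/fake-err.log

# strings(1) pulls out the readable fragments (runs of 6+ printable chars)
strings -n 6 /tmp/fake-err.log

# Compare the file's size to how much of it is actual (non-NUL) content
wc -c < /tmp/fake-err.log
tr -d '\0' < /tmp/fake-err.log | wc -c
```

If the second count is tiny compared to the first, the file is mostly NUL padding, which points at how the log is being written rather than at what is being logged.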
I just wanted to thank everyone who participated in the survey that Mark Schoonover and I created. My endless thanks go to Mark, who did a lot of work on this. The results will be coming out in the Summer issue of MySQL Magazine, which will be online July 15th. I am putting together the articles now and it looks like it’s going to be a great one!
Recently, a customer wondered if they should start using the innodb_file_per_table option, or if they should continue to use the large InnoDB tablespace files created by the innodb_data_file_path option in the my.cnf option file. Many people still use the older innodb_data_file_path option because it is the default for MySQL server. So, what are the benefits of using innodb_file_per_table instead? Let me show you.
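As a quick sketch of the two approaches in my.cnf (the paths and sizes here are illustrative, not the customer’s actual settings):

```ini
[mysqld]
# Old style: one shared tablespace. It autoextends as data grows, but it
# never shrinks, even after you DROP or OPTIMIZE tables.
# innodb_data_file_path = ibdata1:10M:autoextend

# Per-table style: each InnoDB table gets its own .ibd file, so dropping
# or rebuilding a table actually returns the space to the filesystem.
innodb_file_per_table = 1
```

With innodb_file_per_table enabled, newly created tables get their own files; existing tables stay in the shared tablespace until they are rebuilt (for example with ALTER TABLE).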
I have a question I wanted to throw out. The term “GA” gets batted around all the time as meaning the production-ready version of MySQL server. However, despite googling for quite a bit, I can’t find a definition for GA (other than what I stated above, i.e. production-ready). What does this mean in terms of bugs? Features? Anything else I might be missing?
In case you haven’t heard, on Monday MySQL released the next RC, 5.1.25. It is available to the community, so download it now and take it for a spin!
I finished uploading the backup presentation that I gave last Monday at the Boston MySQL User’s Group. I cover the basics of backup/recovery and disaster planning. Total time is about an hour and three minutes. It was a lot of fun, and the Bostonians seemed to appreciate it.