Exposing MyRocks Internals Via System Variables: Part 7, Use Case Considerations
(In the previous post, Part 6, we covered Replication.) In this final blog post, we conclude our series exploring MyRocks by taking a look at use case considerations. After all, knowing how an engine works is only useful if you can judge whether you’re in a good position to use it.
Advantages of MyRocks
Let’s start by talking about some of the advantages of MyRocks.
Compression
MyRocks typically does a good job of reducing the physical footprint of your data. As I mentioned in the compression post earlier in this series, you can configure compression down to the individual compaction layer for each column family. You also benefit from the fact that data files are never updated in place once they’re written to disk: compaction, also covered previously, merges data changes together, leaving less free space inside data pages, so the size of the data files on disk is a better representation of the size of the data set. You can read more about this on this page of the MyRocks wiki. I noted in my compression post that a common mistake I’ve seen is adopting a technology or engine simply because it can reduce your storage footprint, particularly at moments when you’re running out of disk space. Fight that urge and consider all the facts about the engine before adopting.
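If you want to see how compression is configured on a running instance, here’s a minimal sketch, assuming the MyRocks information_schema tables are loaded (the exact OPTION_TYPE strings can vary by version):

```sql
-- Inspect the compression configured per compaction level for each column
-- family. Per-level compression itself is set at startup, e.g. through
-- rocksdb_default_cf_options or rocksdb_override_cf_options in my.cnf.
SELECT CF_NAME, OPTION_TYPE, VALUE
  FROM information_schema.ROCKSDB_CF_OPTIONS
 WHERE OPTION_TYPE LIKE '%COMPRESSION%';
```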
Write-Optimized
If your use case is write-heavy, MyRocks may be a good solution for you: it writes data changes in a log structure and relies on compaction to clean the data up after the fact as it moves through the compaction layers, deferring write amplification. This removes much of the random read and write I/O required by storage engines built on B-trees. You can read more about write optimization with MyRocks on this page of code.fb.com. You could even argue that MyRocks is write-optimized at the cluster level, considering it supports read-free replication.
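If you want to observe this in practice, here’s a rough sketch: the engine’s compaction statistics report a per-level write amplification column ("W-Amp"), which shows amplification being paid off in the background rather than at write time:

```sql
-- Logical bytes written by the server.
SHOW GLOBAL STATUS LIKE 'rocksdb_bytes_written';
-- The compaction stats section includes per-level W-Amp figures.
SHOW ENGINE ROCKSDB STATUS\G
```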
Better performance when the active data set doesn’t fit in memory
Published benchmarks show that MyRocks can outperform InnoDB when the active data set (the data most commonly accessed) doesn’t fit entirely in memory. For InnoDB, this means frequently accessed data that can’t all fit in the buffer pool. MyRocks uses different caches for writes and reads, but for all intents and purposes the equivalent case is frequently accessed data that doesn’t fit in the block cache. You can read more about this in this blog post by Vadim Tkachenko on the Percona Blog.
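A quick way to gauge whether your active data set fits in the block cache is to compare hits against misses over a period of representative load; a minimal sketch (treat any arithmetic you do with these counters as illustrative):

```sql
-- Current block cache size in bytes.
SELECT @@GLOBAL.rocksdb_block_cache_size AS block_cache_bytes;
-- Hit and miss counters; a persistently poor hit ratio suggests the
-- active data set does not fit in the block cache.
SHOW GLOBAL STATUS LIKE 'rocksdb_block_cache_hit';
SHOW GLOBAL STATUS LIKE 'rocksdb_block_cache_miss';
```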
Backups
When I originally drafted this installment of the blog series, I actually had backups listed as a drawback. The reason is that while you could take hot backups of MyRocks data sets using myrocks_hotbackup, that tool couldn’t back up data stored in any other storage engine. This was a common problem with TokuDB-enabled instances and forced us to fall back to snapshot-based backups when we came across those systems. However, on May 9th, 2019, Percona announced that the 8.0.6 release of XtraBackup supports MyRocks, allowing you to create backups of systems that use both InnoDB and MyRocks. This is especially important considering that in MySQL 8.0 the data dictionary is now stored in database tables within MySQL, replacing the old .frm files, and those tables are stored using InnoDB.
Drawbacks of MyRocks
Now let’s cover some of the drawbacks and limitations of MyRocks.
Range Lookups
MyRocks gives you some fantastic advantages for reads filtered on equality operators like IN and =, because it can use bloom filters to speed up retrieval from disk by ruling out data files without accessing anything beyond their filter blocks, if those aren’t already in memory. This changes when you need to look for data in ranges. Standard bloom filters can’t help here, and you may spend a lot of time decompressing data and loading it into memory to find what you need. The problem is exacerbated by full table scans, where you have to access multiple versions of the table data across the various compaction layers and then take on the overhead of determining the latest version of each record. That’s not to say range lookups in your use case would completely invalidate MyRocks as a potential solution. RocksDB has optimizations that allow more efficient range lookups, including prefix bloom filters, which can be beneficial. You can read more in this blog post on why RocksDB was selected as the underlying storage engine for CockroachDB.
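To make the distinction concrete, here’s an illustrative sketch (the orders table is hypothetical):

```sql
-- Equality predicates let MyRocks consult bloom filters and rule out
-- data files without reading anything beyond their filter blocks.
SELECT * FROM orders WHERE id = 42;
SELECT * FROM orders WHERE id IN (42, 43, 44);
-- A range predicate gets no help from a standard bloom filter, so each
-- candidate file's blocks may need to be read and decompressed.
SELECT * FROM orders WHERE id BETWEEN 100 AND 500;
```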
No Online DDL
Currently MyRocks does not support online DDL, whereas InnoDB has supported it to some degree for quite some time. You can read more about this limitation by checking out this bug in the MyRocks GitHub project.
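As a sketch of what this limitation looks like in practice (the orders table is hypothetical, and the error text is illustrative and version-dependent):

```sql
-- Explicitly requesting an in-place, non-blocking ALTER is expected to be
-- rejected for operations MyRocks cannot perform online...
ALTER TABLE orders ADD COLUMN note VARCHAR(64), ALGORITHM=INPLACE, LOCK=NONE;
-- ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Try ALGORITHM=COPY.
-- ...leaving the blocking table-copy algorithm as the fallback.
ALTER TABLE orders ADD COLUMN note VARCHAR(64), ALGORITHM=COPY;
```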
No Foreign Keys
Foreign keys are currently not supported in MyRocks. To be completely candid, though, this isn’t a deal breaker for me. When I was a database developer and schema designer, I was a big-time supporter of foreign keys because I’m a control freak and don’t like developers messing with the reliability of my data relationships. Now that I’ve been on the DBA side for a while, I can see the overhead that foreign keys create. They offer that lovely peace of mind when it comes to data relationships, but at a noteworthy cost, even more so when you apply them to something like Galera. Given that overhead, I’m seeing more and more cases where foreign keys are removed unless the data relationships are incredibly critical. You may find that you can live without them completely and, as such, may not be dissuaded from using MyRocks.
No Transportable Tablespaces
The ability to physically move your data at the table level without a logical export and import doesn’t seem like a big deal until you need it. This will be a difficult challenge to overcome, considering that multiple tables live in a single column family, plus the complexities that come with multiple compaction layers. I have a feeling the lack of transportable tablespaces in MyRocks is something we’re going to have to live with for a long time.
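For contrast, this is the InnoDB transportable tablespace workflow that has no MyRocks equivalent (a sketch; the orders table is hypothetical):

```sql
-- On the source instance: quiesce the table and copy its files.
FLUSH TABLES orders FOR EXPORT;
-- ... copy orders.ibd and orders.cfg to the destination host ...
UNLOCK TABLES;
-- On the destination, after creating an identical empty table:
ALTER TABLE orders DISCARD TABLESPACE;
ALTER TABLE orders IMPORT TABLESPACE;
```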
"Select for update" is not available with repeatable read, but is available with read-committed. This is important as you need to understand how this will impact your explicit exclusive lock requests.So when do I consider it?
So when do I consider it?
I would suggest considering MyRocks if you’re working with…
- A large OLTP dataset where your active data set doesn’t fit into memory
- Write-intensive workloads
- High concurrency reads that don’t filter on range
Conclusion
In this blog series, we took a look at MyRocks internals in an attempt to understand how they work at a high level, along with the variables and metrics associated with them. We also covered the strengths and weaknesses you should keep in mind when considering this technology. MyRocks has grabbed the community’s attention in a big way over the last couple of years, and the debate will go on as to whether it’s ready for the big leagues, whether it will be an InnoDB killer, or whether it will fizzle out entirely within the context of MySQL. The important thing to know is that there has been real focus on this engine, it’s something you should be aware of, and you now have an option to include log-structured merge indexes in your data ecosystem without having to stray far from the SQL you’re already comfortable with. Keep an eye out for use cases in your environment. Keep the limitations in mind. Don’t adopt just because it compresses your data. Above all else, be sure to test, test, and then test some more before a production implementation.
Thank you very much for taking the time to read this blog series. A lot of time and effort went into the research and lab work that supported what has been written here, so I can only hope it will be useful to you as you continue your exploration of the log-structured merge capabilities now available for your use. I would also like to add a big thank you to my colleague Sandeep Varupula here at Pythian and to George Lorch of Percona for helping fact-check the installments in this series.
In case you missed the previous posts, here they are:
- Part 1: Data Writing
- Part 2: Data Flushing
- Part 3: Compaction
- Part 4: Compression and Bloom Filters
- Part 5: Data Reads
- Part 6: Replication