I recently used the cstar tool to upgrade 200+ Cassandra nodes across multiple environments, sitting behind multiple applications. We chose cstar because, out of all the automation options, it is the one with topology awareness specific to Cassandra. Here are some things I noticed.
1. The sister program cstarpar is sometimes required
The cstar tool runs commands on the remote servers in a distributed, topology-aware way. The sister program cstarpar is used if you need to run the commands on the originating server instead. The Last Pickle detailed a fine example in their 3-part series on cstarpar [https://thelastpickle.com/blog/2018/12/11/cstar-reboots.html]. In our case, the individual nodes didn't have the same access to a configuration management server that the jump host did, so cstarpar was used to issue a command to the configuration management server and then send ssh commands to the individual nodes (to change files, restart, and so on).
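A rough sketch of that pattern is below. Everything in it is illustrative: cm-tool stands in for the real configuration management client on the jump host, and the exact cstarpar invocation and host-placeholder syntax should be checked against the cstar documentation and The Last Pickle's post.

    # Runs once per Cassandra node, but executes locally on the jump host
    # rather than on the node itself -- that is what cstarpar is for.
    # "cm-tool" is a hypothetical config-management client, and the {} host
    # placeholder is an assumption; confirm the syntax in the cstar docs.
    cstarpar --seed-host=<seed-node> \
        'cm-tool update-node {} && ssh {} "sudo systemctl restart cassandra"'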
2. The cstar jobs folder can be used to view output
The jobs folder is on the originating server under ~/.cstar/jobs, with a UUID-labeled directory for each job and server hostname directories underneath. Each hostname directory contains a file named "out" with that node's output. Grepping through ~/.cstar/jobs/[UUID]/server*/out is a handy way to pull the information you want out of a large job.
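For example, to pull anything error-looking out of an entire job's output (the job UUID comes from listing ~/.cstar/jobs, and the search pattern here is only an example):

    # Each job gets a UUID-named directory; each node's output lands in
    # <hostname>/out underneath it.
    ls ~/.cstar/jobs/
    grep -i 'error' ~/.cstar/jobs/<job-uuid>/*/out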
3. Use verbose output
The cstar output can be a little too quiet, and we know that sometimes means trouble. Tack on a -v flag so you have plenty of output to grep through, as above.
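For example (the command itself is just a placeholder; --command and --seed-host are the standard cstar run options):

    # -v turns on verbose output so there is plenty to grep through when
    # something looks off.
    cstar run --command='nodetool version' --seed-host=<seed-node> -v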
4. Ask for the output
Related: you also have to ask for some output. One of our pre-checks was to verify that specifically named files didn't exist, and the most efficient way to do that particular check was to grep through directories. The command worked in test, it worked in staging, and in production cstar was marking each node as failed. Much troubleshooting later, we realized that the files existed in test and staging but not in production, so the grep wasn't finding anything and was therefore "failing" (a grep with no matches exits non-zero, which cstar treats as a failed command). Piping the output into 'wc -l' gave each check some kind of response and a clean exit status, and the script succeeded.
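A sketch of the fix, with a made-up directory and filename pattern standing in for the real pre-check:

    # Without the wc -l, grep exits non-zero when nothing matches and the
    # node gets marked as failed. With it, the pipeline exits 0 and the
    # match count (possibly 0) becomes the output.
    ls /path/to/check | grep 'forbidden-file-pattern' | wc -l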
5. The Cassandra nodes have to be up
It's documented that all of the nodes in a cluster have to be registering as up, or cstar will fail. The automated process we used was to shut down Cassandra, pull the new config and binary, and restart Cassandra, node by node. With a lot of Cassandra nodes, even with a brief sleep time in between nodes, I was hitting the permissions server too often and too quickly for its comfort, and about 75% of the way through, it started blocking me after Cassandra was shut down on every 10th node or so. The only way I detected this was that cstar paused for long enough that I noticed; there was no error message. I had to wait for the permissions server to stop limiting me, and then manually issue the commands on the node. On the plus side, cstar didn't fail while waiting for me and continued on with the list of nodes automatically after I took care of the individual node.
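Since cstar won't run against a cluster with down nodes, a quick pre-flight check from any one node is worth the few seconds; in nodetool status output, UN means Up/Normal:

    # Count nodes that are not in the UN (Up/Normal) state; anything other
    # than 0 means cstar is going to balk.
    nodetool status | grep -E '^[UD][NLJM] ' | grep -cv '^UN'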
6. It really is topology-aware
I saved the best for last. It's tricky to make other automation tools aware of Cassandra topology. In this upgrade environment, we had multiple data centers with varying numbers of nodes within each, and cstar was smart about distributing the work so that generally the same percentage of nodes was completed in each data center at any point in time. That meant that in the end, the largest data center wasn't being hit repeatedly with the remaining upgrades. A rough sketch of what that looks like on the command line is below.
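The flag names here are from my reading of the cstar README rather than from our exact runs, so verify them with cstar run --help; the script path is a placeholder.

    # The strategy and data-center-parallelism options control how many nodes
    # cstar works on at once and how the data centers are interleaved.
    cstar run --command='/path/to/upgrade-node.sh' \
        --seed-host=<seed-node> \
        --strategy=topology --dc-parallel -v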
Overall, the gotchas were minor, and I'm happy we used the cstar tool on this upgrade. It allowed the flexibility to run custom scripts in a unique environment and certainly shortened the amount of time required to upgrade a large cluster. Check out the cstar tool here: https://github.com/spotify/cstar.