Have you ever wondered if there's a "secret sauce" for tuning your replication configuration to squeeze out better performance? Or perhaps you've found yourself staring at a wall of latency, wishing there was a knob you could turn that didn't involve rewriting your entire schema.
In this post, I’ll break down the very useful (but often obscure) switches that can significantly improve your replication throughput.
The Snapshot Agent is the heavy lifter that prepares your initial data. If your initial sync is crawling, these two parameters are your best friends.
The first, -BcpBatchSize, is the number of rows sent in a single bulk copy operation. For a bcp in operation, the batch size defines how many rows are sent to the server as a single transaction; it also dictates when the Distribution Agent logs a progress message. For a bcp out operation, a fixed batch size of 1000 is used, and setting the parameter to 0 disables message logging.

The second, -MaxBcpThreads, specifies how many bulk copy operations can run in parallel.
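As a rough sketch, these switches can be appended to the Snapshot Agent's job step command after the connection parameters the wizard generated. The server, database, publication names, and values below are placeholders, not recommendations:

```text
-Publisher MYSERVER -PublisherDB SalesDB -Publication SalesPub
-Distributor MYSERVER -DistributorSecurityMode 1
-BcpBatchSize 100000 -MaxBcpThreads 4
```

Keep -MaxBcpThreads at or below the number of processor cores available to the Distributor; more threads than cores just adds contention.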
The Log Reader Agent is responsible for moving transactions from the Publisher's transaction log to the Distribution database.
The -MaxCmdsInTran parameter is arguably one of the most important switches for handling large delete or update batches: it caps the number of commands the Log Reader groups into a single transaction when writing to the distribution database, so the Distribution Agent can begin delivering changes before an enormous source transaction has been read in its entirety.
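For example, to have the Log Reader Agent break huge source transactions into chunks of at most 10,000 commands, you might append -MaxCmdsInTran to its command line (the value and server names here are illustrative assumptions):

```text
-Publisher MYSERVER -PublisherDB SalesDB -Distributor MYSERVER
-MaxCmdsInTran 10000
```

Note the trade-off: the Subscriber will commit these chunks independently, so it can briefly expose a partially applied version of what was a single transaction at the Publisher.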
The Distribution Agent moves commands from the Distributor to the Subscriber. It has the most "knobs" to turn for fine-tuning.
These manage the frequency of COMMIT statements at the Subscriber.
-CommitBatchSize is the number of transactions delivered to the Subscriber before a COMMIT is issued (default: 100). -CommitBatchThreshold is the number of replication commands delivered before a COMMIT is issued (default: 1000).

A separate switch, -UseInprocLoader, improves performance by forcing the Distribution Agent to use the BULK INSERT command when applying snapshot files.
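A hedged example of widening the commit batches on the Distribution Agent's command line (the subscriber names and values are placeholders you would tune for your own workload):

```text
-Subscriber SUBSERVER -SubscriberDB SalesDBReplica
-CommitBatchSize 500 -CommitBatchThreshold 5000
-UseInprocLoader
```

Larger batches mean fewer round trips and less commit overhead, but also more work to redo if a batch fails and rolls back, so increase these gradually and measure.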
When standard parameter tuning isn't enough, you need to look at architectural shortcuts.
If you are applying a snapshot over a slow or unreliable network, don't stream it across the wire. Instead, generate a compressed snapshot, move the files to the Subscriber out of band (for example, via a resumable file copy or portable media), and point the agent at the local copy.
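Once the snapshot files are staged locally at the Subscriber, the Distribution Agent can be told to read them from that location with -AltSnapshotFolder. The path below is a placeholder:

```text
-AltSnapshotFolder "D:\ReplSnapshots\SalesPub"
```

This way the only traffic crossing the slow link during initialization is the one-time file copy, which you can schedule, throttle, or resume on your own terms.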
The Distribution Agent's -SubscriptionStreams parameter can greatly improve aggregate replication throughput by allowing multiple connections to a Subscriber to apply batches of changes in parallel.
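A minimal sketch, assuming four parallel connections (the value is an assumption; start small and measure):

```text
-SubscriptionStreams 4
```

Be aware of the consistency mechanics: if any one stream fails to commit its portion of a batch, the agent falls back and retries, so a workload with frequent conflicts or deadlocks at the Subscriber may see little or no benefit from extra streams.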
Thanks for reading! Tuning replication is often an iterative process of finding the right balance between overhead and reliability.