
Interesting Behavior of MaxCmdsInTran Parameter

I recently worked on a transactional replication issue and discovered an interesting behavior of the Log Reader Agent switch called MaxCmdsInTran, and I wanted to share it with you.

Let's take a look at the use of this switch in the MSDN documentation below:

MaxCmdsInTran number_of_commands

Specifies the maximum number of statements grouped into a transaction as the Log Reader writes commands to the distribution database. Using this parameter allows the Log Reader Agent and Distribution Agent to divide large transactions (consisting of many commands) at the Publisher into several smaller transactions when applied at the Subscriber. Specifying this parameter can reduce contention at the Distributor and reduce latency between the Publisher and Subscriber. Because the original transaction is applied in smaller units, the Subscriber can access rows of a large logical Publisher transaction prior to the end of the original transaction, breaking strict transactional atomicity. The default is 0, which preserves the transaction boundaries of the Publisher.
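For reference, the switch is typically appended to the Log Reader Agent job step command. A minimal sketch, with placeholder server and database names:

-- Log Reader Agent job step command (hypothetical names, for illustration only)
-Publisher [MyPublisherServer] -PublisherDB [MyPublishedDB] -Distributor [MyDistributorServer] -DistributorSecurityMode 1 -MaxCmdsInTran 10000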

However, I observed that if you update a primary key column, the transaction won't be split into multiple smaller transactions as described in the documentation.

Looking further into this reveals that it is probably the effect of a bounded update. A bounded update has to be processed as a whole, since it sends all the deletes followed by all the inserts; it can't be broken into smaller transactions because there is no way to know what a safe boundary would be.

The key difference lies in how updates are replicated when you update a PK column versus a non-PK column. Let's take an example to look at this (in this example, C1 is the non-PK column and C2 is the PK column).
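For clarity, here is a minimal sketch of a table matching that description (the definition and values are hypothetical, just to make the examples below concrete):

-- Hypothetical test table: C2 is the PK/clustered index column, C1 is not
create table dbo.T1
(
    c1 int not null,                       -- non-PK column
    c2 int not null primary key clustered  -- PK column
);
insert into dbo.T1 (c1, c2) values (2, 2);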

If you update the non-PK column, it is replicated as an update.

-- Updating non-PK column
begin tran My_Deferred_Update_1_Row
update T1 set c1 = 1 where C1 = 2
commit tran My_Deferred_Update_1_Row

-- Below is what gets added in msrepl_commands
exec sp_replshowcmds 1000

xact_seqno              command
0x0000016E000005330004  {CALL [dbo].[sp_MSupd_dbot1] (1,,2,0x01)}
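One caution on sp_replshowcmds, since it's easy to trip over: it is executed in the publication database and only works when no client, including the Log Reader Agent, is reading replicated transactions from it. A minimal sketch, with a placeholder database name:

use PubDB;  -- hypothetical publication database name
-- Stop the Log Reader Agent before running this
exec sp_replshowcmds 1000;  -- 1000 = maximum number of transactions to return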

What is a bounded update?

However, when you update a PK/clustered index column, the update is replicated as a delete/insert pair.

-- Updating unique column
begin tran My_Bounded_Update_2_Rows
update T1 set c2 = c2 + 1000
commit tran My_Bounded_Update_2_Rows

-- Below is what gets added in msrepl_commands
exec sp_replshowcmds 1000

xact_seqno              command
0x00000017000000B5000E  {CALL [dbo].[sp_MSdel_dboT1] (1)}
0x00000017000000B5000E  {CALL [dbo].[sp_MSdel_dboT1] (2)}
0x00000017000000B5000E  {CALL [dbo].[sp_MSins_dboT1] (1,3000,1)}
0x00000017000000B5000E  {CALL [dbo].[sp_MSins_dboT1] (2,1002,2)}

As you can see, when we update a PK/clustered index column, the update is sent as deletes followed by inserts (this is called a bounded update). It is one single transaction that gets converted into delete/insert pairs, with all the deletes sent first, followed by all the inserts.

We cannot break this transaction (the PK update) apart: some or all of the deletes would commit first, with the inserts following in a separate transaction, breaking the transaction boundary. Splitting the operation into multiple transactions could leave the Subscriber inconsistent, and that is most likely why the switch has no effect in this situation.
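To make the risk concrete, here is a hypothetical sketch of what splitting the bounded update above at an arbitrary boundary would mean at the Subscriber. This is NOT what replication actually does:

-- Piece 1: all deletes committed on their own (hypothetical split)
begin tran
    exec [dbo].[sp_MSdel_dboT1] 1;
    exec [dbo].[sp_MSdel_dboT1] 2;
commit tran  -- at this point the Subscriber has lost both rows

-- Piece 2: the inserts arrive in a separate transaction
begin tran
    exec [dbo].[sp_MSins_dboT1] 1, 3000, 1;
    exec [dbo].[sp_MSins_dboT1] 2, 1002, 2;
commit tran  -- consistency is only restored here

If the Distribution Agent failed between the two pieces, the Subscriber would be left without rows that still exist at the Publisher.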

Why does replication send all the deletes first and then all the inserts, rather than delete/insert pairs in order?

Let's assume table A contains two rows, with unique column C1 values 1 and 2.

Now the user runs the following: update A set c1 = c1 + 1.

The log records will look like this:

LOP_BEGIN_UPDATE
Del 1
Ins 2
Del 2
Ins 3
LOP_END_UPDATE

And the commands posted in the distribution database will look like this:

{CALL [sp_MSdel_dboA] (1)}
{CALL [sp_MSdel_dboA] (2)}
{CALL [sp_MSins_dboA] (1,2)}
{CALL [sp_MSins_dboA] (2,3)}

But if replication sent the updates directly, you'd see:

Update A set c1 = 2
Update A set c1 = 3

In that case, the first update would fail, since c1 = 2 already exists. That's why replication deletes the rows first and then inserts them back with the new values.
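You can reproduce the violation without replication at all. A minimal sketch using a temp table:

create table #A (c1 int not null primary key);
insert into #A (c1) values (1), (2);

-- Applied row by row in key order, the first update collides with the second row
update #A set c1 = 2 where c1 = 1;  -- fails: violation of PRIMARY KEY constraint

-- As a single set-based statement, the constraint is checked at the end of the
-- statement, so the same logical change succeeds
update #A set c1 = c1 + 1;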

I would recommend looking at the option of publishing the stored procedure execution to avoid this kind of huge update, which can cause performance issues in replication.
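If the big update can be wrapped in a stored procedure at the Publisher, its execution can be published instead of the resulting row changes, so the Subscriber replays a single EXEC rather than millions of individual commands. A minimal sketch with hypothetical publication and procedure names:

-- Publish the execution of a stored procedure (names are placeholders)
exec sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'usp_MassUpdate',
    @source_object = N'usp_MassUpdate',
    @type          = N'proc exec';  -- or N'serializable proc exec'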

Happy Reading!

 

 
