Important pre-checks before running pt-online-schema-change to Add/Drop an index in MySQL
When you're working with large MySQL tables in production, making schema changes such as adding or dropping indexes can be risky, especially when done with plain ALTER TABLE statements. Fortunately, Percona Toolkit's pt-online-schema-change (pt-osc) provides a safer, non-blocking way to make these changes, but even with this powerful tool, preparation is essential.
In this post, I'm sharing some of the most important pre-checks you should perform before using pt-online-schema-change to modify indexes on a MySQL table.
1. Check table size and row count
Understand the table size to anticipate the duration and performance impact.
- Large tables take longer to clone and alter, and the copy can spike I/O and CPU, so plan the change window accordingly
- The table size helps you gauge the impact and duration of the schema change
- The table size also determines how much free disk space is required for a successful pt-osc execution
COMMAND TO CHECK TABLE SIZE:
SELECT table_schema AS database_name,
       table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_size_gb,
       ROUND(data_length / 1024 / 1024 / 1024, 2) AS data_size_gb,
       ROUND(index_length / 1024 / 1024 / 1024, 2) AS index_size_gb
FROM information_schema.tables
WHERE table_schema = 'DB_NAME'
  AND table_name = 'TABLE_NAME';
2. Confirm that a primary key or unique index exists on the table
pt-osc needs a PRIMARY KEY or UNIQUE KEY to track and apply row changes through the triggers it creates. If no primary or unique key exists, the tool will refuse to run.
COMMAND to check your table structure and confirm that such a key exists:
SHOW CREATE TABLE TABLE_NAME;
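You can also confirm this through information_schema (replace DB_NAME and TABLE_NAME with your schema and table):
SELECT constraint_name, constraint_type
FROM information_schema.table_constraints
WHERE table_schema = 'DB_NAME'
  AND table_name = 'TABLE_NAME'
  AND constraint_type IN ('PRIMARY KEY', 'UNIQUE');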
3. Check replication health
Ensure all relevant replicas are healthy and up to date before running changes on the master. Schema changes can cause significant replication lag if not managed carefully.
COMMANDS TO BE EXECUTED ON THE MASTER NODE:
SHOW SLAVE HOSTS;
COMMANDS TO BE EXECUTED ON THE ABOVE LISTED REPLICAS:
SHOW SLAVE STATUS \G
Check and confirm the values below:
Seconds_Behind_Master = 0
Slave_IO_Running = Yes
Slave_SQL_Running = Yes
You can use --check-slave-lag with pt-osc to monitor and throttle changes based on replica lag.
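For example, a sketch of such a command (the replica host, credentials, database, and table names are placeholders; adjust them to your environment):
pt-online-schema-change \
  --alter "ADD INDEX idx_example (created_at)" \
  --check-slave-lag h=replica_host \
  --max-lag=5 \
  --user=dbuser --ask-pass --host=master_host \
  D=mydb,t=mytable --dry-run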
4. Review the index change
Before adding or dropping an index, make sure the following is done:
- Use EXPLAIN to see the current query plans
- Ensure you're not adding duplicate or redundant indexes
- Evaluate the benefit of the change
- Ensure the index you’re adding is necessary and efficient
- Ensure the index you’re dropping is not being used by queries
- Prevent negative performance impacts
Indexes should improve read performance and not add unnecessary overhead. Before adding or dropping an index, run EXPLAIN (or EXPLAIN ANALYZE) on key queries:
1. Check Current Indexes
SHOW INDEX FROM table_name;
SHOW CREATE TABLE table_name;
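To help spot duplicate or redundant indexes, Percona Toolkit also ships pt-duplicate-key-checker; a minimal sketch (connection details and names are placeholders) is:
pt-duplicate-key-checker --host=localhost --user=dbuser --ask-pass --databases=mydb --tables=mytable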
2. Check query performance before and after, using EXPLAIN
If you're adding or changing an index, test your most important queries with and without the index in a non-production environment:
Run the Query Without the Index
EXPLAIN SELECT * FROM test WHERE id = 1234;
This will output the execution plan, showing how MySQL accesses the table. If you see values like type: ALL and key: NULL in the output, it means a full table scan is happening and the query would benefit from an index.
Now add the index in a non-production environment first.
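A minimal example of adding such an index (the index name idx_test_id and the id column are illustrative; adjust them to your schema):
ALTER TABLE test ADD INDEX idx_test_id (id);
Then run the same query again with EXPLAIN: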
EXPLAIN SELECT * FROM test WHERE id = 1234;
Now compare the output. You should see differences such as:
- type (from ALL to ref or range)
- key showing new index
- rows drastically reduced
- Extra may show Using index condition
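For illustration only (these values are hypothetical and will vary with your schema and data), the key EXPLAIN fields might change roughly like this:
Before the index: type: ALL, key: NULL, rows: 1000000, Extra: Using where
After the index: type: ref, key: idx_test_id, rows: 1, Extra: NULL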
Make sure you avoid these mistakes:
- Over-indexing: Too many indexes slow down writes and increase storage
- Redundant indexes: Wastes resources with no performance benefit
- Wrong column order: Multi-column indexes must match query patterns
- Dropping needed indexes: Can kill performance or break query plans
5. Ensure sufficient disk space
When pt-osc is executed, it creates a copy of the table. Make sure you have enough free disk space in:
- Data directory
- Temporary storage
- Binlogs (if replication is on)
You'll need enough free space for a full table copy. The available space in the data partition should be at least twice the table size. For example, if the table size is 50 GB, make sure there is more than 100 GB of free space available in the data partition when pt-osc is executed.
COMMAND TO CHECK DISK SPACE:
df -h
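If you are not sure which partition holds the MySQL data directory (and therefore which mount point to check with df -h), you can find it with:
SHOW VARIABLES LIKE 'datadir';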
6. Check for foreign keys
Foreign keys can block or prevent pt-osc from completing if not handled properly. If your table has foreign key constraints, pt-osc must:
- Detect them
- Rebuild the constraints so they point to the new table
If they are handled improperly, the operation may fail or leave foreign key constraints broken.
COMMAND to identify any foreign key relationships:
SELECT table_name, column_name, constraint_name, referenced_table_name, referenced_column_name
FROM information_schema.key_column_usage
WHERE referenced_table_name = 'table_name';
COMMAND to check foreign keys defined on your table:
SELECT CONSTRAINT_NAME, TABLE_NAME, REFERENCED_TABLE_NAME
FROM information_schema.referential_constraints
WHERE TABLE_NAME = 'table_name';
NOTE: Run these queries for both the child tables and the referenced (parent) tables.
What should you do if foreign keys exist on the table you are altering?
If foreign keys exist, use the --alter-foreign-keys-method flag with one of the following options:
- auto: Automatically determines the best method (rebuild_constraints when possible, otherwise drop_swap)
- rebuild_constraints: Drops and recreates the foreign keys on the child tables so they point to the new table
- drop_swap: Disables foreign key checks, drops the original table, and renames the new table into its place (faster, but riskier)
- none: Drops the original table without updating the child tables' constraints, leaving them pointing at a table that no longer exists; use only if you fully understand the consequences
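A hypothetical invocation that rebuilds child constraints (connection details, database, and table names are placeholders) might look like:
pt-online-schema-change \
  --alter "ADD INDEX idx_example (created_at)" \
  --alter-foreign-keys-method=rebuild_constraints \
  --user=dbuser --ask-pass --host=localhost \
  D=mydb,t=mytable --dry-run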
What happens if you use pt-osc without handling foreign keys?
- Schema sync failures
- Orphaned child rows
- Broken referential integrity
7. Check for triggers
Since pt-osc creates its own INSERT, UPDATE, and DELETE triggers to keep the shadow table in sync with the original during the copy, you must verify that the table doesn't already have conflicting BEFORE/AFTER INSERT, UPDATE, or DELETE triggers:
- The tool adds its own triggers on the original table
- MySQL versions before 5.7.2 do not allow multiple triggers of the same type (e.g., two AFTER INSERT triggers)
- By default, pt-osc refuses to run on tables that already have triggers defined
COMMANDS to check existing triggers:
SELECT TRIGGER_NAME, EVENT_MANIPULATION, EVENT_OBJECT_TABLE, ACTION_TIMING, ACTION_STATEMENT, CREATED
FROM information_schema.TRIGGERS
WHERE TRIGGER_SCHEMA = 'your_database_name'
AND EVENT_OBJECT_TABLE = 'table_name';
SHOW TRIGGERS LIKE 'table_name';
SHOW TRIGGERS FROM database_name WHERE `Table` = 'table_name';
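Note that recent versions of Percona Toolkit include a --preserve-triggers option that lets pt-osc work with tables that already have triggers; if you plan to rely on it, verify your toolkit version and test the behavior in staging first. A sketch of such an invocation (placeholders as before):
pt-online-schema-change \
  --alter "ADD INDEX idx_example (created_at)" \
  --preserve-triggers \
  --user=dbuser --ask-pass --host=localhost \
  D=mydb,t=mytable --dry-run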
8. Check for replication filters
pt-osc is very cautious when it detects that replication filters are used on the server. This is because:
- If a replica uses filters such as --replicate-do-db, --replicate-ignore-table, or --binlog-do-db, replication can break or silently miss the schema change
So, by default, pt-osc aborts execution if any such filters are found.
COMMAND to check replication filters:
SHOW VARIABLES LIKE '%replicate%';
Check the variables below:
- replicate_do_db
- replicate_ignore_table
- binlog_do_db
- replicate_wild_do_table
You can also run SHOW SLAVE STATUS\G on the replicas and look at the Replicate_* filter fields.
If any of the above values are non-empty, pt-osc will block the operation by default.
If you understand the risks, you can override this behavior by explicitly passing --no-check-replication-filters as an option in the pt-osc command. This will make pt-osc ignore the presence of replication filters and proceed anyway.
Only override this if:
- You know the schema change will affect only databases/tables that are replicated
- You understand how your replication filters work
- You’ve tested it in a staging environment
- You're not using partial replication that could skip temporary or renamed tables used by pt-osc
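If you have confirmed all of the above, an example override (connection details, database, and table names are placeholders) could look like:
pt-online-schema-change \
  --alter "ADD INDEX idx_example (created_at)" \
  --no-check-replication-filters \
  --user=dbuser --ask-pass --host=localhost \
  D=mydb,t=mytable --dry-run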
Apart from these, there are a few other important considerations when running pt-osc:
1. Test the changes in test/staging environment
Always test your pt-osc command in a non-production environment before running it in a production environment
2. Set proper load throttling options
Use throttling to avoid overloading the server, and set the thresholds according to your system's normal values. With --max-load, pt-osc pauses the data copy when the threshold is exceeded; with --critical-load, it aborts the operation entirely.
For example:
--max-load Threads_running=XX
--critical-load Threads_running=XX
3. Use --max-lag in a replicated environment
Add the --max-lag option to your pt-osc command if you are working on a server that has replicas. With this option, pt-online-schema-change monitors replica lag and pauses the data copy if any replica exceeds the specified lag in seconds. This prevents the tool from overloading replicas and causing excessive lag.
It checks lag via SHOW SLAVE STATUS. If any replica has replication lag greater than X seconds, pt-osc pauses. This helps maintain near-real-time data consistency and avoids breaking or delaying downstream applications that rely on timely replication.
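A sketch of how this might look on the command line (the lag threshold, check interval, and connection details are placeholders; adjust them to your environment):
pt-online-schema-change \
  --alter "ADD INDEX idx_example (created_at)" \
  --max-lag=10 --check-interval=5 \
  --user=dbuser --ask-pass --host=localhost \
  D=mydb,t=mytable --dry-run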
4. Take backup or snapshot first
Always take a full backup of your table before altering any production data.
5. Do a dry run and check the command in a non-production environment
Double-check your full command with --dry-run before the actual execution, and always test it in a staging environment before running it against production.
pt-online-schema-change \
--alter "ADD INDEX idx_example(created_at)" \
--dry-run \
--user=dbuser --password=dbpass --host=localhost \
D=mydb,t=mytable
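Once the dry run completes cleanly, the same command is typically re-run with --execute in place of --dry-run to perform the actual change:
pt-online-schema-change \
  --alter "ADD INDEX idx_example(created_at)" \
  --execute \
  --user=dbuser --password=dbpass --host=localhost \
  D=mydb,t=mytable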
Summary
pt-online-schema-change is a fantastic tool, but it's only as safe as your preparation. These checks will help ensure your schema change doesn't disrupt replication, application traffic, or data integrity, and that it is production-ready.
Whether you're adding a much-needed index or cleaning up old ones, following these steps can save you hours of debugging and stress!