MongoDB Backups and Point In Time Recovery – Part 3

Nov 17, 2022

This is the third and final post in the series on MongoDB replica set backups with Point In Time Recovery (PITR). It focuses on restores, in particular the details of PITR using the full and incremental backups. If you have not read the first two posts, please start with part 1, which explains how to back up MongoDB using an LVM snapshot, and part 2, which explains how to run incremental backups using the oplog.


What is PITR?

Point in time recovery (PITR), in the context of databases, is the ability for an administrator to restore or recover a set of data or a particular setting from a time in the past. It usually involves restoring a full backup that brings the server to its state as of the time the backup was made. Additionally, it may involve restoring incremental backups taken after the full backup to bring the state of the server to a more recent point in time.

In our case, we run a daily full backup followed by hourly incremental backups. If the full backup is taken at midnight and we want to restore the database up to 6 am, we need the full backup plus 6 incremental backups.
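As a quick sanity check, the number of incrementals needed can be computed from the two times. The sketch below uses the hypothetical midnight/6 am example with GNU date:

```shell
# Hypothetical example: full backup at midnight, restore target 6 am (UTC).
full_epoch=$(date -ud "2022-11-08 00:00:00" +%s)
target_epoch=$(date -ud "2022-11-08 06:00:00" +%s)

# One incremental backup is taken per hour after the full backup.
incrementals=$(( (target_epoch - full_epoch) / 3600 ))
echo "$incrementals"   # 6
```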



You are a database administrator and someone just called you to restore the “meta” collection in the “users” database. All of the meta keys were dropped. It’s unknown how many meta keys existed in the collection. The erroneous operation was run about 2:30 pm UTC. The last full backup was run at 1pm UTC.

Let’s see the actions that we should take to restore the database from full backup and then apply incrementals. 



One of the first things to check when restoring to a point in time is how the reported time, 2:30 pm UTC, translates into server time. Is the server also running in UTC, or in another time zone? For this exercise, the server runs in UTC, which matches the reported incident time.


If you still have the oplog on the instance where the erroneous operation was run, you can query it and find the time. In the example below, we query for delete operations (op: "d") on the namespace users.meta. There are 8 documents that were removed from this collection, and the wall time of the delete shows 2022-11-08T14:32:19; the corresponding ts is 1667917939. Notice that the timestamp ordinal increments for documents removed within the same second: the last entry is "ts" : Timestamp(1667917939, 8). In this case, the ordinal reaches 8.



{ "op" : "d", "ns" : "users.meta", "ui" : UUID("66592935-0a72-4a81-bdf1-414b42f6199f"), "o" : { "_id" : ObjectId("636a6669887cc403cfdade24") }, "ts" : Timestamp(1667917939, 1), "t" : NumberLong(6), "v" : NumberLong(2), "wall" : ISODate("2022-11-08T14:32:19.877Z") }

{ "op" : "d", "ns" : "users.meta", "ui" : UUID("66592935-0a72-4a81-bdf1-414b42f6199f"), "o" : { "_id" : ObjectId("636a6674887cc403cfdade25") }, "ts" : Timestamp(1667917939, 2), "t" : NumberLong(6), "v" : NumberLong(2), "wall" : ISODate("2022-11-08T14:32:19.877Z") }

{ "op" : "d", "ns" : "users.meta", "ui" : UUID("66592935-0a72-4a81-bdf1-414b42f6199f"), "o" : { "_id" : ObjectId("636a667e887cc403cfdade26") }, "ts" : Timestamp(1667917939, 3), "t" : NumberLong(6), "v" : NumberLong(2), "wall" : ISODate("2022-11-08T14:32:19.877Z") }

{ "op" : "d", "ns" : "users.meta", "ui" : UUID("66592935-0a72-4a81-bdf1-414b42f6199f"), "o" : { "_id" : ObjectId("636a6688887cc403cfdade27") }, "ts" : Timestamp(1667917939, 4), "t" : NumberLong(6), "v" : NumberLong(2), "wall" : ISODate("2022-11-08T14:32:19.877Z") }

{ "op" : "d", "ns" : "users.meta", "ui" : UUID("66592935-0a72-4a81-bdf1-414b42f6199f"), "o" : { "_id" : ObjectId("636a66bf887cc403cfdade28") }, "ts" : Timestamp(1667917939, 5), "t" : NumberLong(6), "v" : NumberLong(2), "wall" : ISODate("2022-11-08T14:32:19.877Z") }

{ "op" : "d", "ns" : "users.meta", "ui" : UUID("66592935-0a72-4a81-bdf1-414b42f6199f"), "o" : { "_id" : ObjectId("636a67ef5cad8195283072f3") }, "ts" : Timestamp(1667917939, 6), "t" : NumberLong(6), "v" : NumberLong(2), "wall" : ISODate("2022-11-08T14:32:19.877Z") }

{ "op" : "d", "ns" : "users.meta", "ui" : UUID("66592935-0a72-4a81-bdf1-414b42f6199f"), "o" : { "_id" : ObjectId("636a67f65cad8195283072f4") }, "ts" : Timestamp(1667917939, 7), "t" : NumberLong(6), "v" : NumberLong(2), "wall" : ISODate("2022-11-08T14:32:19.877Z") }

{ "op" : "d", "ns" : "users.meta", "ui" : UUID("66592935-0a72-4a81-bdf1-414b42f6199f"), "o" : { "_id" : ObjectId("636a68015cad8195283072f5") }, "ts" : Timestamp(1667917939, 8), "t" : NumberLong(6), "v" : NumberLong(2), "wall" : ISODate("2022-11-08T14:32:19.877Z") }



You can convert the epoch timestamp to a date in bash:


date -d @1667917939

Tue Nov  8 14:32:19 UTC 2022


Or convert a date back to an epoch timestamp:


date +%s -ud"Tue Nov  8 14:32:19 UTC 2022"
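The two conversions round-trip, which is a handy sanity check before using the timestamp in a restore:

```shell
# Round-trip check: date -> epoch -> date (GNU date, UTC).
ts=$(date -ud "Tue Nov  8 14:32:19 UTC 2022" +%s)
echo "$ts"        # 1667917939
date -ud @"$ts"   # Tue Nov  8 14:32:19 UTC 2022
```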



If you no longer have the oplog entries on the instance, you can still run bsondump on the backup file and find the timestamp there. In this case, since we run hourly incremental backups, we need to inspect the second incremental backup, as it covers the window between 2 and 3 pm.


bsondump --quiet /backup/mongo_20221108/oplog_2/local/*.bson | grep '"op":"d","ns":"users.meta"'










From the output, we can see the first document removed by the erroneous operation has "ts":{"$timestamp":{"t":1667917939,"i":1}}. Again, all 8 documents removed in that second are visible in the output.
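To illustrate what the bsondump/grep pipeline produces, here is a tiny simulated sample; the file path and its entries are made up for demonstration, not taken from the real backup:

```shell
# Simulated bsondump JSON output (hypothetical file, made-up entries).
cat > /tmp/oplog_sample.json <<'EOF'
{"op":"d","ns":"users.meta","ts":{"$timestamp":{"t":1667917939,"i":1}}}
{"op":"d","ns":"users.meta","ts":{"$timestamp":{"t":1667917939,"i":2}}}
{"op":"i","ns":"users.orders","ts":{"$timestamp":{"t":1667917940,"i":1}}}
EOF

# Count the delete operations against users.meta.
grep -c '"op":"d","ns":"users.meta"' /tmp/oplog_sample.json   # 2
```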


Now we know that we need to apply the incremental backups up to just before Tue Nov  8 14:32:19 UTC 2022, i.e. timestamp 1667917939.


We will create a new mongod instance running on port 57017 and restore the latest full backup there. Once that is done, we will apply the incremental backups. Setting up the instance on a different port or on a brand new server is not covered in this exercise.
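Although setting up the instance is out of scope, a minimal mongod configuration for such a throwaway instance might look like the sketch below; all paths here are assumptions, not the real environment:

```yaml
# Hypothetical mongod.conf for the throwaway restore instance.
net:
  port: 57017
  bindIp: 127.0.0.1
storage:
  dbPath: /data/restore_57017
systemLog:
  destination: file
  path: /var/log/mongodb/mongod_57017.log
```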

Restoring the full backup that was taken following the steps from Part 1 involves:

  • Stop the mongod process on port 57017 and clean the data directory
  • Extract the backup archive file, e.g. tar -xzf /backups/mongodb_backup_<YYYYmmddHHMM>.tar.gz (use the timestamp of the backup you are restoring)
  • Copy the extracted files to the data directory or use the same path as data directory by updating the mongod.conf file
  • Assign the directory permissions to mongod:mongod user and group
  • Start mongod process
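The extract-and-copy mechanics can be simulated locally with throwaway paths; everything below is a stand-in for demonstration, not the real backup, and the chown to mongod:mongod is skipped since it needs root:

```shell
# Simulation of the restore steps with throwaway paths.
backup=/tmp/mongodb_backup_demo.tar.gz
datadir=/tmp/mongo_restore_demo

# Build a fake archive standing in for the real full backup.
mkdir -p /tmp/backup_src && echo demo > /tmp/backup_src/WiredTiger
tar -czf "$backup" -C /tmp/backup_src .

rm -rf "$datadir" && mkdir -p "$datadir"   # "clean the data directory"
tar -xzf "$backup" -C "$datadir"           # extract into the data directory
ls "$datadir"
```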


Let’s see the status of the database.collection once the full backup is restored.


> use users

switched to db users

> db.meta.find()

{ "_id" : ObjectId("636a6669887cc403cfdade24"), "name" : "Lisa", "meta_key" : "p2qwng9splfjg02" }

{ "_id" : ObjectId("636a6674887cc403cfdade25"), "name" : "Rolando", "meta_key" : "spsplf2jgav02" }

{ "_id" : ObjectId("636a667e887cc403cfdade26"), "name" : "Rafa", "meta_key" : "sllmvnf2v02qp3" }

{ "_id" : ObjectId("636a6688887cc403cfdade27"), "name" : "Durga", "meta_key" : "vxn3f29k23qgp" }

{ "_id" : ObjectId("636a66bf887cc403cfdade28"), "name" : "Vikram", "meta_key" : "sfkjd00sjfpwmgb" }



Only 5 documents were restored, and we know from the oplog that 8 documents were removed. Restoring the full backup is not enough; we need the incrementals.

Let’s see how to apply the incremental backups that were generated from the oplog collection. For this, we will use mongorestore with the --oplogLimit option:



Prevents mongorestore from applying oplog entries with timestamp newer than or equal to <timestamp>. Specify <timestamp> values in the form of <time_t>:<ordinal>, where <time_t> is the seconds since the UNIX epoch, and <ordinal> represents a counter of operations in the oplog that occurred in the specified second.

You must use --oplogLimit in conjunction with the --oplogReplay option.
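A quick sketch of how the <time_t>:<ordinal> value is assembled from our findings above:

```shell
# Build the --oplogLimit value: <seconds since epoch>:<ordinal>.
ts=1667917939   # second in which the first bad delete ran
i=1             # ordinal of the first operation in that second
echo "--oplogLimit ${ts}:${i}"   # --oplogLimit 1667917939:1
```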

Our mongorestore command will look like this:

mongorestore --port 57017 -u<username> -p<password> --authenticationDatabase=admin --oplogReplay --oplogLimit 1667917939:1 /backup/oplogR


Before we run the above command, we need to copy the file into the layout that mongorestore expects. Checking the backup files, the oplog dump is:


ls -l /backup/mongo_20221108/oplog_1/local/

total 96

-rw-r--r-- 1 root root 93853 Nov  8 14:30

-rw-r--r-- 1 root root   185 Nov  8 14:30


For mongorestore to use the file, it must be named oplog.bson. For this purpose, we create a new directory per incremental and copy the file there.


mkdir -p /backup/oplogR1
cp /backup/mongo_20221108/oplog_1/local/*.bson /backup/oplogR1/oplog.bson


Now we replay the operations from incremental #1:


mongorestore --port 57017 -u<username> -p<password> --authenticationDatabase=admin --oplogReplay /backup/oplogR1


We repeat this for each incremental backup until we reach the last one, where we need to stop with --oplogLimit:


mkdir -p /backup/oplogR2
cp /backup/mongo_20221108/oplog_2/local/*.bson /backup/oplogR2/oplog.bson


Finally, running the restore with --oplogLimit stops the replay just before the erroneous operation was run.


mongorestore --port 57017 -u<username> -p<password> --authenticationDatabase=admin --oplogReplay --oplogLimit 1667917939:1 /backup/oplogR2


Now, if we query our collection, we can see that all 8 users’ meta keys are restored.


mongohc:PRIMARY> db.meta.find()

{ "_id" : ObjectId("635955f59340fa059a6cf694"), "name" : "Vikram", "meta_key" : "sfkjd00sjfpwmgb" }

{ "_id" : ObjectId("635956109340fa059a6cf695"), "name" : "Lisa", "meta_key" : "p2qwng9splfjg02" }

{ "_id" : ObjectId("6359562c9340fa059a6cf696"), "name" : "Rolando", "meta_key" : "spsplf2jgav02" }

{ "_id" : ObjectId("635956419340fa059a6cf697"), "name" : "Rafa", "meta_key" : "sllmvnf2v02qp3" }

{ "_id" : ObjectId("6359565f9340fa059a6cf698"), "name" : "Durga", "meta_key" : "vxn3f29k23qgp" }

{ "_id" : ObjectId("635956c29340fa059a6cf699"), "name" : "Archita", "meta_key" : "3odfmp0k3nal8" }

{ "_id" : ObjectId("635956d89340fa059a6cf69a"), "name" : "Ravi", "meta_key" : "aksd02kopakdn38" }

{ "_id" : ObjectId("635956f29340fa059a6cf69b"), "name" : "Prasenjeet", "meta_key" : "3kjnladfn08efk" }



Mongorestore is a utility that loads data from a binary database dump created by mongodump, or from standard input, into a mongod or mongos instance. Combined with --oplogReplay and --oplogLimit, it can be used for point in time recovery, stopping the oplog replay just before a given timestamp.
