Amazon Database Migration Service - First Try
Recently, while checking Amazon Web Services, I stumbled upon a service I hadn't tested before: Database Migration Service (DMS). I read the documentation and checked other resources, and found a good, fresh blog post, AWS Database Migration Service, written by Jeff Barr. It was really interesting, and I decided to give the service a try. I created an Oracle RDS instance on AWS as the target and an Oracle Linux box on Azure with Oracle 12c EE as the source database for the migration. The source database SID was "test" and the destination was "orcl". On both sides I created a tablespace and a user named "testmig", and I created a table on the source database. Initially I loaded 100,000 records into the table and created an index. The schema on the destination database was empty. I also enabled archivelog mode on the source database.

Creating the user and table on the source:

[code lang="sql"]
test> create user testmig identified by welcome1 default tablespace testmig temporary tablespace temp;

User created.

test> grant connect,resource to testmig;

Grant succeeded.

test> conn test
test> create table test_tab_1 (pk_id number, rnd_str_1 varchar2(15), use_date date, rnd_str_2 varchar2(15), acc_date date);

Table created.

test>
[/code]

Loading the data:

[code lang="bash"]
[oracle@oradb1 patchdepot]$ head test_tab_1.dat
340,MLBO07LV,10/30/13 15:58:04,NABCFVAQ,12/08/17 18:22:48
341,M48R4107,12/09/13 12:30:41,ACA79WO8,12/15/16 08:13:40
342,KARMF0ZQ,04/21/14 08:53:33,JE6SOE0K,06/18/17 07:12:29
343,8NTSYDIS,11/09/14 23:41:48,FBJXWQNX,08/28/15 20:47:39
344,0LVKBJ8T,09/28/12 06:52:05,VBX3FWQG,10/28/15 06:10:42
345,Z22W1QKW,06/06/13 11:14:32,26BCTA9L,08/21/17 08:35:15
346,CGGQO9AL,08/27/14 02:37:41,15SRXZSJ,11/09/17 19:58:58
347,WKHINIUK,07/02/13 14:31:53,65WSGVDG,08/02/15 10:45:50
348,HAO9X6IC,11/17/12 12:08:18,MUQ98ESS,12/03/15 20:37:20
349,D613XT63,01/24/15 16:49:11,3ELW98N2,07/03/16 11:03:40
[oracle@oradb1 patchdepot]$ export NLS_DATE_FORMAT="MM/DD/YY HH24:MI:SS"
[oracle@oradb1 patchdepot]$ sqlldr userid=testmig table=test_tab_1
Password:

SQL*Loader: Release 12.1.0.1.0 - Production on Wed Mar 16 13:07:50 2016

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Express Mode Load, Table: TEST_TAB_1
Path used:      External Table, DEGREE_OF_PARALLELISM=AUTO

Table TEST_TAB_1:
  100000 Rows successfully loaded.

Check the log files:
  test_tab_1.log
  test_tab_1_%p.log_xt
for more information about the load.
[oracle@oradb1 patchdepot]$
[/code]

On the target system:

[code lang="sql"]
rdsorcl> create tablespace testmig;

Tablespace TESTMIG created.

rdsorcl> create user testmig identified by welcome1 default tablespace testmig;

User TESTMIG created.

rdsorcl>
[/code]

In the blog post mentioned above, the migration was done without replication, and I was curious to test the service with some kind of ongoing DML activity on the source database. I set up a Linux box with JMeter and started a load at a pace of about 15 transactions per second; the transactions were inserts and updates on the created table. Everything was working fine so far, so I switched to the Database Migration Service on AWS. The service has a pretty easy and clear workflow.
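The JMeter transactions themselves weren't captured in a log; a hypothetical shape of one transaction (the bind variable names and which columns get touched are my assumptions, not the actual test plan) would be something like:

[code lang="sql"]
-- Hypothetical JMeter transaction against the source table (sketch, not the actual test plan)
insert into test_tab_1 (pk_id, rnd_str_1, use_date, rnd_str_2, acc_date)
  values (:id, :rnd_str_1, sysdate, :rnd_str_2, sysdate);

update test_tab_1
   set acc_date = sysdate
 where pk_id = :id;

commit;
[/code]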
You just need to push the "Create migration" button, and it guides you through the process. In general, you need to create a replication instance, endpoints for the source and target, and a task to start the initial load and replication. I created a replication instance, and while it was being created (which took some time) I was asked to set up the endpoints for the source and target. I hit the first issue when I tried to use a DNS name for my Azure instance: the test connection failed with a timeout, and it was not clear where the problem was; it could have been either a connectivity or a DNS issue. The problem was solved by providing the IP address instead of the domain name for my Azure instance.
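The same three steps (replication instance, endpoints, task) can also be scripted with the AWS CLI instead of the console. This is a minimal sketch, not what I ran for this test; the identifiers, instance class, hosts, ARNs, and passwords are all placeholders:

[code lang="bash"]
# Minimal sketch of the DMS workflow via the AWS CLI; all values are placeholders.

# 1. Create a replication instance.
aws dms create-replication-instance \
    --replication-instance-identifier testmig-repl \
    --replication-instance-class dms.t2.medium \
    --allocated-storage 50

# 2. Create source and target endpoints (IP address for the source, per the DNS issue above).
aws dms create-endpoint \
    --endpoint-identifier testmig-source \
    --endpoint-type source \
    --engine-name oracle \
    --server-name <azure-instance-ip> \
    --port 1521 \
    --database-name test \
    --username testmig \
    --password welcome1

aws dms create-endpoint \
    --endpoint-identifier testmig-target \
    --endpoint-type target \
    --engine-name oracle \
    --server-name <rds-endpoint> \
    --port 1521 \
    --database-name orcl \
    --username testmig \
    --password welcome1

# 3. Create the task: initial full load plus ongoing replication (CDC).
aws dms create-replication-task \
    --replication-task-identifier testmig-task \
    --source-endpoint-arn <source-endpoint-arn> \
    --target-endpoint-arn <target-endpoint-arn> \
    --replication-instance-arn <replication-instance-arn> \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json
[/code]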
![Screen Shot 2016-03-16 at 1.26.40 PM](https://www.pythian.com/hs-fs/hubfs/Imported_Blog_Media/Screen-Shot-2016-03-16-at-1_26_40-PM-272x300.png?width=544&height=600&name=Screen-Shot-2016-03-16-at-1_26_40-PM-272x300.png)
- You may need to adjust security groups for the target RDS or EC2 systems; restrictive rules can block connections.
- It is better to use an IP address for the source endpoint, since DNS resolution may not be reliable.
- Enable logging when you create a task.
- If you enable replication from an Oracle database, you have to set up full supplemental logging for the replicated schemas on your source system.
- Basic knowledge of replication and how it works is required to understand and fix errors.
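For the supplemental logging point above, on an Oracle source it can be enabled along these lines, here using the schema and table from this test; the exact scope you need (identification key vs. all columns) depends on your tables and keys:

[code lang="sql"]
-- Minimal database-level supplemental logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Full supplemental logging for the replicated table
ALTER TABLE testmig.test_tab_1 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
[/code]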