HOWTO: Oracle Cross-Platform Migration with Minimal Downtime

Aug 20, 2009 / By Don Seiler


I recently performed a migration from Oracle 10gR2 on Solaris to the same version on Linux, immediately followed by an upgrade to 11g. Both platforms were x86-64. Migrating to Linux also included migrating to ASM, whereas we had been using ZFS to hold the datafiles on Solaris. Restoring files into ASM meant we would have to use RMAN (which we would probably choose to use anyway).

As with many databases, the client wanted minimal downtime. It was obvious to us that the most time-consuming operation would be the restore and recovery into the new instance, since we were basically doing a restore and recovery from production backups and archived redo logs. It quickly dawned on me that we could start this operation well before the scheduled cutover, chopping at least six hours from the downtime window. The client would only need to keep the new instance in mount mode after the initial restore/recovery finished, periodically re-catalog the source instance’s FRA (which was mounted via NFS), and then re-run the RECOVER DATABASE command in RMAN. When the time came to cut over, we would simply run ALTER SYSTEM ARCHIVE LOG CURRENT on the original instance and SHUTDOWN IMMEDIATE, then open the new instance with the RESETLOGS option, and voila! Migration complete!

I’ll try to recreate a simple example here.

On the new instance, assume that you have ASM set up and configured with two disk groups, DATA and REDO. Let’s also assume that you have the FRA directory (or the directory where you hold backups and archivelogs) mounted from the Solaris host to the Linux host under /solfra/prod/. Here’s the command we used:

mount -o hard,bg,tcp,nfsvers=3,suid,rsize=32768,wsize=32768,nointr,noac,ro solhost:/fra /solfra/

We then need to get a pfile and edit it for ASM-ification. You can either restore the spfile from backup or just export a pfile from the current running production instance. To do the former, you’d need to go into RMAN and run an operation similar to this (best to ensure that the ORACLE_HOME, ORACLE_SID, and other Oracle-related environment variables are set properly):

RMAN> startup force

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/home/oracle/app/oracle/product/10.2.0/db_1/dbs/initprod.ora'

starting Oracle instance without parameter file for retrival of spfile
Oracle instance started
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of startup command at 06/29/2009 13:34:50
ORA-00205: error in identifying control file, check alert log for more info
RMAN> restore spfile to pfile 'initprod.ora' from autobackup recovery area = '/solfra/' db_unique_name = 'PROD';

Starting restore at 29-JUN-09
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=36 devtype=DISK

recovery area destination: /solfra/
database name (or database unique name) used for search: PROD
channel ORA_DISK_1: autobackup found in the recovery area
channel ORA_DISK_1: autobackup found: /solfra/PROD/autobackup/2009_06_29/o1_mf_s_690813149_54kypcxz_.bkp
channel ORA_DISK_1: SPFILE restore from autobackup complete
Finished restore at 29-JUN-09

From here you can go into sqlplus and CREATE PFILE FROM SPFILE;.

Now we have our text pfile in $ORACLE_HOME/dbs/initprod.ora, and we want to make these changes specific to the transition to ASM:
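The ASM-related edits look something like this; the disk group names match this setup, but the exact values (especially the FRA size) are illustrative, not necessarily what we used:

```
*.control_files='+DATA','+REDO'        # replace the old filesystem paths
*.db_create_file_dest='+DATA'          # new datafiles go to ASM
*.db_create_online_log_dest_1='+REDO'  # online redo logs
*.db_recovery_file_dest='+DATA'        # new FRA location (illustrative)
*.db_recovery_file_dest_size=100G      # new FRA size (illustrative)
```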


Obviously, the last two parameters about location and size of the FRA should be tailored to your setup.

We then create a new SPFILE with CREATE SPFILE FROM PFILE;.

Now we’re ready to restore the controlfile(s) from backup:

RMAN> startup force;

Oracle instance started
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of startup command at 08/13/2009 20:20:56
ORA-00205: error in identifying control file, check alert log for more info

RMAN> restore controlfile from autobackup recovery area = '/solfra/' db_unique_name = 'PROD';

Starting restore at 13-AUG-09
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=650 devtype=DISK

recovery area destination: /solfra/
database name (or database unique name) used for search: PROD
channel ORA_DISK_1: autobackup found in the recovery area
channel ORA_DISK_1: autobackup found: /solfra/PROD/autobackup/2009_08_13/o1_mf_s_694801261_5890xyoj_.bkp
channel ORA_DISK_1: control file restore from autobackup complete
output filename=+DATA/prod/controlfile/current.256.694815695
output filename=+REDO/prod/controlfile/current.256.694815697
Finished restore at 13-AUG-09

Note that if you restored the spfile from backup, you don’t need to specify the recovery area in the restore controlfile from autobackup command. In this case, because I chose to create a pfile from the source instance, I needed to supply the recovery area location. Note that this is the recovery area that holds the source backups, not the recovery area that your new instance will use once it is opened.

With the control files restored, we can mount the new instance:

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

The next step is to catalog the backup files and archivelogs from the remote FRA.
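A catalog command along one of these lines does the job (both forms shown as illustrative sketches, not necessarily verbatim what we ran):

```
RMAN> catalog recovery area noprompt;
# or, pointing at the NFS-mounted source FRA explicitly:
RMAN> catalog start with '/solfra/PROD/' noprompt;
```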


Starting implicit crosscheck backup at 13-AUG-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=646 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=650 devtype=DISK
Crosschecked 565 objects
Finished implicit crosscheck backup at 13-AUG-09

Starting implicit crosscheck copy at 13-AUG-09
using channel ORA_DISK_1
using channel ORA_DISK_2
Finished implicit crosscheck copy at 13-AUG-09

searching for all files in the recovery area
cataloging files...

This will continue listing all the files in the catalog, and some files that were not cataloged (e.g. online redo logs, non-RMAN files, etc.).

Now we’re ready to begin our RESTORE phase. You’ll need an rman command file that looks something like this:

run {
  set newname for datafile X to new;
  # ... (repeat for other datafiles)
  set newname for tempfile Y to new;
  # ... (repeat for other tempfiles)
  restore database;
  switch datafile all;
  switch tempfile all;
  recover database;
}
Note that X and Y are the file# of the individual datafile and tempfile, respectively. You will probably want to generate this file if you have a large number of datafiles.
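One way to generate it is a quick SQL*Plus script against the mounted instance; this is a sketch (the spool file name is arbitrary), not necessarily the exact script we used:

```sql
set pagesize 0 linesize 200 feedback off heading off
spool restore_prod.rman
select 'run {' from dual;
select '  set newname for datafile ' || file# || ' to new;' from v$datafile;
select '  set newname for tempfile ' || file# || ' to new;' from v$tempfile;
select '  restore database;'    from dual;
select '  switch datafile all;' from dual;
select '  switch tempfile all;' from dual;
select '  recover database;'    from dual;
select '}' from dual;
spool off
```

You can then run the spooled file from RMAN with @restore_prod.rman.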

It is important that you read MetaLink Doc 415884.1 if you are performing a migration to a different platform with the same endianness. Oracle suggests that you perform a CONVERT DATABASE, but later admits that you only need to run CONVERT on tablespaces containing UNDO segments:

The convert database command is a no-operation command for all files that do not contain undo, but they are scanned regardless during the conversion. Using convert tablespace/datafile for those containing undo will suffice, and all other source files not containing undo can be moved to the target (same endian) platform without the Recovery Manager conversion. This will greatly reduce the time required to move a database with thousands of files from source to target platform, also decreasing the amount of time the database is unavailable.

We had an internal debate over whether it is truly necessary even to do this. However, I would advise you to read the document and discuss it with Oracle Support Services; they will most likely try to convince you to do it, or threaten to not support your database.
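For reference, converting just the undo-containing datafiles can be done with CONVERT DATAFILE on the destination, along these lines; the staged file name and output disk group here are illustrative assumptions, not what we ran:

```
RMAN> convert datafile '/solfra/stage/undotbs01.dbf'
      from platform 'Solaris Operating System (x86-64)'
      format '+DATA';
```

The platform string must match a PLATFORM_NAME from v$transportable_platform.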

Once the script finishes with the recover database portion, this query should indicate that datafile headers and controlfiles are in sync:

SQL> alter session set nls_date_format='YYYY/MON/DD hh24:mi:ss';

Session altered.

SQL> select checkpoint_time,fuzzy,count(*),status
from ( select checkpoint_time,fuzzy,status
       from v$datafile_header
       union all
       select controlfile_time,'CTL',null from v$database)
group by checkpoint_time,fuzzy,status;

CHECKPOINT_TIME      FUZ   COUNT(*) STATUS
-------------------- --- ---------- -------
2009/AUG/14 02:22:11 CTL          1
2009/AUG/14 02:22:11 YES       1266 ONLINE

The database is still in mount mode. If we were ready to cut over now, we could open the database with the RESETLOGS option. However, we are still a few hours or a day away from the cutover. What we want to do is apply more logs from the current production instance as they get archived. To minimize the babysitting, we decided to run this in a script once every few hours:
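The script boils down to a re-catalog plus a recover; a minimal sketch looks like this (the exact catalog form and environment setup may differ from what we ran):

```
#!/bin/sh
# Re-catalog the NFS-mounted source FRA, then roll the new instance forward.
# RECOVER applies any newly cataloged archivelogs and then stops with an
# error when it runs out of logs -- expected while the source is still open.
rman target / <<EOF
catalog recovery area noprompt;
recover database;
EOF
```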


As you can guess, this re-catalogs the NFS-mounted FRA, ignoring files that were already cataloged. So now the controlfile knows about the new archivelogs. The RECOVER DATABASE command then beautifully applies logs from where it left off until it runs out of logs again. We re-run this periodically to keep our new instance as close to current as possible until it’s time to cut over.

When the cutover time comes, we run this on the current production instance:

SQL> alter system archive log current;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Then, on the new instance, we do one last catalog and recover. This should bring us completely up to date with the original prod instance. Now we just need to open with resetlogs:

SQL> alter database open resetlogs;

Database altered.

In my case, I manually renamed the online and standby redo logs to +REDO (they still had the old filesystem names in v$logfile), but I believe that OPEN RESETLOGS does this automatically anyway.

And so, here we are on Linux, on the same version as on Solaris with the appropriate one-off patches, using ASM. At this point, I was just under 30 minutes into my downtime window since the original prod instance was shut down. I don’t think that’s too bad for moving a 1.3T database!

After this, I immediately went on to upgrade to 11gR1 and then build a new standby instance, but that is outside the scope of the initial migration that I wanted to share with you today. Cheers!

42 Responses to “HOWTO: Oracle Cross-Platform Migration with Minimal Downtime”

  • Peter says:

    Excellent walkthrough!
    I’m about to embark on a similar journey, and it’s nice to see that my thinking isn’t completely out of whack.


  • girlgeek says:

    I did this on a Solaris / Solaris move last year, but when it came time for another team in our shop to do this AIX / Solaris, there were concerns that the duplicate could not be done cross-OS. Is Solaris / Linux one of the OS pairs that the duplicate can be done across?

  • Fred says:

    Good article. Mounting the FRA over NFS is a nice touch.

    Question: Is there a reason you didn’t set up the Linux database as a physical standby, and just let it pump away until cutover time? Seems that would save you a bit of effort. I’m wondering if it was just personal preference, or if there is something (apart from licensing considerations) about this scenario that would prohibit using a physical standby.


  • Don Seiler says:

    Fred, that was one of the first things we considered, but having a standby on a different OS is not supported by Oracle Support Services. We tend to shy away from doing anything that would jeopardize our clients’ “supportability.”

    The documentation says it plain and simple:

    “All members of a Data Guard configuration must run an Oracle image that is built for the same platform.

    For example, this means a Data Guard configuration with a primary database on a 32-bit Linux on Intel system can have a standby database that is configured on a 32-bit Linux on Intel system. However, a primary database on a 64-bit HP-UX system can also be configured with a standby database on a 32-bit HP-UX system, as long as both servers are running 32-bit images.”

  • Fred says:

    Ah right, I overlooked the bit about the Solaris/Linux migration. Thanks for the RTFM. :)

  • […] 17 -How to do Oracle Cross-Platform Migration with Minimal Downtime ? Don Seiler – HOWTO: Oracle Cross-Platform Migration with Minimal Downtime […]

  • gary says:

    I am curious if you could guess if this process would work going from Windows to Linux?

  • HOWTO: Oracle Cross-Platform Migration with Minimal Downtime « die Seilerwerks says:

    […] leave a comment » Originally posted at The Pythian Group blog. […]

  • Jonathan Marianu says:

    Thank you for posting this. Do you normally keep your FRA on an NFS volume or was this just to facilitate the migration?

    I am currently researching using an NFS mounted, shared FRA among many databases.

  • Don Seiler says:

    Jonathan, in this case we mounted the FRA from the primary to the standby host just for the migration. It was mounted read-only to ensure that nothing happened to the files on the primary.

  • Michael says:

    I think you need to point out that your task plan would have been infinitely more complex and downtime longer were there an endian difference between migration platforms. I had to do this for a Sun Sparc Solaris to Linux migration, and we had to do the “convert” rman syntax with the database down, which basically copies the datafiles and takes quite a while with a larger database.

  • Don Seiler says:

    Michael, I do specify that both source and destination were of the same CPU type. But you are right, it should be specifically stated that this is completely dependent on both ends being the same endian.

    When I did a Solaris SPARC to Solaris x86-64 migration, I used transportable tablespaces with RMAN convert on the destination side. Downtime was much, much, MUCH longer.

  • Jeepal says:

    Michael / Don.

    When there is an endian difference, could you please provide a high-level methodology to use?

    We are planning to do an OS cold backup, a subsequent restore of the entire database on the target, run convert datafile, and synchronize the data from there on. BTW, we are going from big to little endian.

    Do you have an estimate of how much time it would take to convert a 10 Gig datafile?

    Also, once a datafile is converted, can we apply the archivelogs? I am thinking no, but wanted to know your opinion.

  • Don Seiler says:


    For cross-platform migration, I created a new instance on the destination and used transportable tablespaces to move the data.

    Here is the 10gR2 doc that I used:

    I don’t recall the time needed to convert.

  • Pavel Tsvetkov says:

    It seems to me that it could be possible to use the same database files on different platforms (Solaris x86 & Linux) if using the same file system type, for example VxFS (Veritas File System). The only thing to do is export/import the diskgroup with the datafiles. I only doubt it would be supported by Oracle.

  • Don Seiler says:


    In our case, we were also migrating from ZFS on Solaris to ASM on Linux, so we didn’t look into what you are discussing. Would be interesting to explore though.

  • Som says:

    Hi Don, instead of using the FRA, can I use tape?

  • Can this be used to go Windows -> Linux?

    I am assuming it’s not possible, unless my thinking is off.

  • Don Seiler says:

    Som, assuming RMAN can easily get the controlfiles and backup files from tape, I suppose it might be possible. I don’t have any personal experience backing up to, or restoring from, tape. I would think that it would be much slower than using disk, though.

    Mitchell and Gary, I haven’t considered a Windows-to-Linux migration, but it might be possible. I also have no experience with Oracle on windows, so I can’t really offer any solid view.

  • Lukas says:

    I’m facing a similar exercise, Windows 32-bit -> Linux 64-bit. I’m thinking about using Data Pump over a db link to do a full import on the target system, therefore getting the migration done in a single step (hopefully :).
    Are there any caveats one might have for or against that approach?
    I’ll post my experience once I have a chance to try it out.

  • Dear Don. Nice article!

    I’ve demonstrated Disaster Recovery process with video tutorial. You can get it from the following link:


  • oraboy says:

    very nice.. thanks for sharing.

    If the new database is to have a different block size (apart from all the others you have already covered), I am thinking it’s just taken care of during the pfile edit (after CREATE PFILE FROM SPFILE) stage. Would that bring the new database up with a different block size? Is that correct, or would it break anything?


  • Raj Laxman says:

    Very good information. Thanks for sharing.

  • Don Seiler says:

    oraboy: No, I don’t believe you can just edit the pfile to change the block size. To change the block size you’ll need to create a new database with the new block size and exp/imp the data.

  • Newbie says:

    Hello All,
    If I am doing a migration from HP-UX 11.11 to IBM AIX, would I be able to do an RMAN restore/recover of the database and just shut down and start the database on AIX? There is an endian difference, which is why I am asking. I have an approx 5 downtime window and a 2TB database size.

    Any and all input are appreciated.

    Thank you in advance.

  • DBRAY says:

    Great sharing!
    I am going to migrate a 10g RAC on ASM from Linux (little endian) to AIX (big endian), on the same Oracle and ASM versions.
    I wonder if the ASM diskgroup can be mounted on both platforms and opened by the AIX instances.
    Does anyone know if the endian info is written into the ASM files?
    In your case, there is an NFS mount that can be read by both platforms. Can you share how the archivelogs generated on the Solaris platform can be read on Linux? Is any conversion of the archivelogs needed?

  • uv says:

    Good info, but you haven’t shown that you ran the RMAN convert script to convert the UNDO tablespaces. Doesn’t UNDO need conversion even when the endian format is the same across platforms?

    So, should I deduce that you pre-created the UNDO tablespace in your target DB? Please clarify.


  • Rafael says:


    If I’m not wrong, it seems that in some cases it is possible to use Data Guard to change server hardware, as mentioned in this MetaLink note:

    Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration [ID 413484.1]

    If your combination is supported the downtime is the one related to switchover.

    What do you think about this? Has anyone tried it?


  • kjnaby says:

    Is it possible to migrate from AIX to Solaris? My disks are controlled by Oracle Grid (ASM disks).

  • Ruslan says:

    Great article… but that cross-platform migration is not possible this way. Replying to the last comment: check ML Note 1079563.1.


    […] Cross Platform Migration to AIX RMAN scripts for Oracle Database Cross Platform Migration HOWTO: Oracle Cross-Platform Migration with Minimal Downtime | The Pythian Blog Cross Platform Database Conversion with same Endian [ID 415884.1] Regards, Reply With […]

  • If you are considering migrating from Big Endian to Little Endian then GoldenGate is an option that provides for very little downtime. I have a customer that used GoldenGate for their migration from Sun SPARC E25K’s to RedHat Linux on vSphere and it worked a treat. Much faster than RMAN transportable table spaces or import/export, but also much more costly as GoldenGate licenses are not cheap.

  • honcho says:

    Hi Michael,

    I also want to know how you can recover using archivelogs from a different platform.

    • honcho says:

      Sorry! Should ask Don Seiler for help ~

    • Prosvet says:

      RMAN DUPLICATE/RESTORE/RECOVER Mixed Platform Support [ID 1079563.1]

      Mixed platforms are supported for:

      + Active Database DUPLICATE
      + Backup-based DUPLICATE using image copies or backup sets
      + RESTORE and RECOVER using image copies or backup sets

      For Oracle Database 10g Release 2 and above releases:

      Solaris x86-64 <-> Linux x86-64

  • Rob Ford says:


    Loved your blog.

    I am tasked with a platform migration from a 2-node RAC cluster to a 2-node Data Guard setup. Your approach above will be, I think, how I create a new primary on the new server. I would love to see any notes on how you did the upgrade and built the standby database.

    I am upgrading to which I have done on single instance clone and a dataguard setup … trying to firm up best approach and keep it simple and well thought out …

    cheers, rob

  • Ranjeet Patil says:

    Thank you for posting this.

  • Bartosz says:

    Hi Don,

    did you do a migration between Big endian (Solaris) and Little endian (Linux)?

    In the article you did mention about Oracle Metalink note 415884.1
    Unfortunately this note does not exist.


  • George says:

    I have license for Oracle 11g for IBM AIX. Can this licensing allow me to install a fresh version of Oracle 11g on Linux platform or Windows platform ?

    Is it okay to migrate schema from the AIX environment to either the Linux or Windows ?

    Thank you

  • Samir says:

    Looks very good!
    I have a question, though. When I restored and recovered the database, it did not let me open with RESETLOGS, complaining about the old online redo log entries in the restored controlfile, which still pointed at the old filesystem. I had to recreate the controlfile, since at mount stage Oracle does not let the DBA remove online redo logs; the database needs to be open for that. I’m wondering if you had a similar problem, and what you did? The document makes no mention of it, but I surely had this problem. I was migrating Oracle from REL 5 to REL 6.
