Speeding-Up Oracle Export/Import Migration

Nov 10, 2006 / By Shakir Sadikali

Or, How To Become an Export/Import Migration Superstar!

We’ve already established that I like broccoli. I get the taste from my dad, who takes it with a glob of salad dressing and munches on it. It beats munching on Doritos. He and I both also do this with cucumbers. I don’t get it, but it tastes good, and I know I’ll get a real kick out of it if I see my kids doing this someday too. In any case, one thing my dad doesn’t do is work with Oracle, and sadly, because of this, he’ll never get to add the Export/Import superstar trophy to his collection.

Over the last weekend I had the pleasure (I say pleasure because this was actually a very smooth operation) of doing an 8i -> 10g, Solaris -> Linux migration. Talk about going in head-first. Now, whenever I run into a situation like this, we generally recommend a two-step process so that we can iron out bugs and be able to isolate causes. However, we were under serious time constraints, so we decided to just go with the following simple plan and move straight there. I had plenty of salad dressing to keep me company overnight.

Step 1 – export
Step 2 – import data
Step 3 – import everything else

That was the plan. We tried this a few times, and once I finally had all the indexes and so on where I wanted them on the new server (we also did some minor storage rejigging), I took a rows=n (“norows”) export of it and used that for the object import. A rough sketch of the commands is below.
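
To make the plan concrete, here is a minimal sketch of what the exp/imp invocations might look like. The file names, credentials, and the exact parameter split are illustrative assumptions on my part, not the commands from this particular migration:

    # Step 1 - full direct-path export on the source (8i) side
    exp system/manager full=y direct=y file=full.dmp log=full_exp.log

    # Step 2 - data-only import on the 10g side; hold the indexes and constraints back
    imp system/manager full=y ignore=y buffer=10485760 indexes=n constraints=n file=full.dmp log=data_imp.log

    # Step 3 - everything else: index and constraint DDL, here taken from a rows=n
    # ("norows") export of the new server after the storage rejigging
    imp system/manager full=y ignore=y rows=n file=norows.dmp log=ddl_imp.log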

Now for the fun part. Typically, if you stick with a standard export/import, it’ll take you, say, 10 hours. That’s what it usually took me for 30GB of data and 30GB of indexes. Now, you may ask, what makes me a superstar? Well, the fact that I dropped this down to about 5 hours, that’s what. This assumes you have big pages set up and Oracle is able to address the space for a massive SGA.

The tricks:

  1. Assuming a full export, use direct=y if you’re not using any predicates in the export.
  2. Set your buffer to be big (10MB at least).
    Now the good stuff:
  3. Alter all your constraints to NOVALIDATE prior to the export, if you can bring the app down and take a consistent export. This lets the constraints be reinstated instantly on import, without forcing Oracle to validate them on massive tables.
  4. Set workarea_size_policy=manual.
  5. Set sort_area_size=6GB (yes, 6 GB). Combined with 4, this lets Oracle build indexes with enough sort space in memory that the sorts don’t need to spill to disk.
  6. Create massive online redo logs: 6 groups, 2 members each, 2GB per log. (A SQL sketch of items 3 to 6 follows this list.)
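
Here is a minimal sketch of what tricks 3 to 6 might look like in SQL. The table, constraint, and file names are hypothetical, and the sizes should obviously be tuned to your own server:

    -- 3. On the source, before the export: mark constraints ENABLE NOVALIDATE so the
    --    imported DDL brings them back without re-validating every row
    --    (table and constraint names are made up)
    ALTER TABLE orders ENABLE NOVALIDATE CONSTRAINT orders_fk;

    -- 4 and 5. On the target, for the duration of the import: manual workareas and a
    --    huge in-memory sort area for the index builds (reset both afterwards)
    ALTER SYSTEM SET workarea_size_policy = MANUAL;
    ALTER SYSTEM SET sort_area_size = 6442450944 DEFERRED;  -- ~6GB, applies to new sessions

    -- 6. Big online redo logs so log switches don't throttle the import
    --    (group numbers, paths, and sizes are illustrative)
    ALTER DATABASE ADD LOGFILE GROUP 4
      ('/u01/oradata/redo04a.log', '/u02/oradata/redo04b.log') SIZE 2048M;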

After the import, I reset everything back down to “normal”. The beauty of this migration was that the client gave us a significant window of downtime. I cannot stress enough how much this reduces the complexity of the whole operation, and how much it reduces the time needed to verify that everything works. Yay for clients who provide realistic migration windows, and yay for less complexity and effort (which equates to cost for all you business-y people) in migration windows.

I’m happy, and the client is happy, and I still have some cucumbers and broccoli left for breakfast.

P.S.: To add to the list of export/import deficiencies and 10.2.0.2 problems: note that function-based indexes need to be created manually.
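
If you run into this, recreating the function-based indexes by hand on the target is straightforward; a hypothetical example (the table, column, and index names are made up):

    -- Recreate a function-based index that imp skipped on the 10g side
    CREATE INDEX emp_upper_name_idx ON employees (UPPER(last_name));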

8 Responses to “Speeding-Up Oracle Export/Import Migration”

  • Maxim says:

    Should it not be 4) workarea_size_policy=auto?

    Regards

  • Maxim says:

    Sorry, misread your settings (you wrote sort_area_size, I imagined pga_aggregate_target; it is probably too strongly associated with workarea_size_policy in my head). I need one more cup of coffee ;-)

    Regards

  • Raj Jamadagni says:

    You could further bring down the time for imp by doing parallel schema-level imports from the same export file. I have done this in the past (8i to 9.2 migration) with a lot of success. Of course, this works best when you import data only; indexes are best left to the last stage.

    Raj

  • sadikali says:

    That’s true too! Unfortunately, this particular case involved everything being in one single schema, so I completely overlooked it. Thanks for bringing it up!

    Shakir

  • Dominic says:

    Hi,

    You say
    “Now, whenever I run into a situation like this, we generally recommend a two-step process so that we can iron out bugs and be able to isolate causes”.

    Without the time constraints mentioned above, can you detail your normally recommended two-step process?

    Thanks.
    Dominic

  • Shakir Sadikali says:

    Hi Dominic,

    In this type of situation, I like to do two separate upgrades: the first to move the DB to the new platform, and then a second to actually perform the upgrade. Changing platforms is a big step, and so is going from 8i to 9i. As such, to help isolate potential problems, it’s always a good idea to let a system burn in a bit so that you can catch bugs before you get too many variables into the mix.

  • Andrew Smith says:

    If you could extract the biggest tables into ASCII and then load them with SQL*Loader, your total time would be less than an hour. Or even use a pipe to simulate a cascading unload/load.

    This doc might help:
    http://www.wisdomforce.com/dweb/resources/docs/UnloadLoadUsingFastReader.pdf

  • lambu999 says:

    Is there a limit on sort_area_size in 11g? It does not take anything more than 1GB.
