Speeding-Up Oracle Export/Import Migration

Posted in: Technical Track

Or, How To Become an Export/Import Migration Superstar!

We’ve already established that I like broccoli. I get the taste from my dad, who takes it with a glob of salad dressing and munches on it. It beats munching on Doritos. He and I both also do this with cucumbers. I don’t get it, but it tastes good and I know I’ll get a real kick out of it if I see my kids doing this someday too. In any case, one thing my dad doesn’t do is work with Oracle, and sadly, because of this, he’ll never get to add the Export/Import superstar trophy to his collection.

Over the last weekend I had the pleasure (I say pleasure because this was actually a very smooth operation) of doing an 8i -> 10g, Solaris -> Linux migration. Talk about going in head-first. Now, whenever I run into a situation like this, we generally recommend a two-step process so that we can iron out bugs and be able to isolate causes. However, we were under serious time constraints, and we decided to just go with the following simple plan and move straight there. I had plenty of salad dressing to keep me company overnight.

Step 1 – export
Step 2 – import data
Step 3 – import everything else

That was the plan. We tried it a few times, and once I finally had all the indexes and other objects where I wanted them on the new server (we also did some minor storage rejigging), I took a rows=n (“norows”) export there and used that for the object import.
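For what it’s worth, the three-step plan boils down to something like the sketch below. These are not the verbatim commands from the migration: the connect strings, passwords, and file names are placeholders, and the direct=y and buffer flags are the ones explained in the tricks further down.

    # Step 1 - consistent, direct-path full export from the 8i source
    exp system/manager@src8i full=y direct=y consistent=y \
        file=full.dmp log=full_exp.log

    # rows=n ("norows") export taken on the new server after the trial runs
    # and the storage rejigging, so the object DDL reflects the new layout
    exp system/manager@tgt10g full=y rows=n \
        file=norows.dmp log=norows_exp.log

    # Step 2 - import the data only, holding off on indexes and constraints
    # (buffer matters here on the import side; a direct-path export ignores it)
    imp system/manager@tgt10g full=y ignore=y buffer=10485760 \
        indexes=n constraints=n grants=n file=full.dmp log=data_imp.log

    # Step 3 - import everything else (indexes, constraints, grants, etc.)
    # from the norows dump
    imp system/manager@tgt10g full=y ignore=y \
        file=norows.dmp log=obj_imp.log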

Now for the fun part. Typically, if you stick with a standard export/import, it’ll take you say… 10 hours. That’s what it usually took me: 30GB of data and 30GB of indexes. Now, you may ask, what makes me a superstar? Well, the fact that I dropped this down to about 5 hours, that’s what. This assumes you have big pages set up and that Oracle is able to address the space for a massive SGA.
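For reference, the big-pages prerequisite on Linux usually comes down to a couple of OS settings along these lines; the ~10GB figure is purely illustrative, not the size used in this migration, and you would size it to your own SGA:

    # /etc/sysctl.conf -- reserve enough 2MB huge pages to cover the SGA
    # (a ~10GB SGA needs roughly 5120 pages)
    vm.nr_hugepages = 5120

    # /etc/security/limits.conf -- let the oracle account lock that much memory (in KB)
    oracle soft memlock 10485760
    oracle hard memlock 10485760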

The tricks:

  1. Assuming a full export, use direct=y if you’re not using any predicates (a QUERY clause) in the export.
  2. Set your buffer to be big (10MB at least).
    Now the good stuff:
  3. Alter all your constraints to novalidate prior to the export, if you can bring the app down and take a consistent export. This lets the constraints be reinstated instantly on import, without forcing Oracle to validate them against massive tables.
  4. Set workarea_size_policy=manual.
  5. Set sort_area_size=6GB (yes, 6 GB). Combined with #4, this lets Oracle build the indexes with enough memory for all the sorts, without the need to spill to disk.
  6. Set up massive online redo logs: 2GB each, 2 members each, 6 groups. (A SQL sketch of tricks 3 through 6 follows this list.)
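As promised, here is a rough SQL sketch of tricks 3 through 6. The table, constraint, and logfile names below are placeholders rather than objects from the actual migration, and you would repeat the constraint and logfile statements for every object involved:

    -- 3. on the source, before the export: flip the big constraints to
    --    NOVALIDATE so the import re-creates them without re-validating
    ALTER TABLE orders MODIFY CONSTRAINT orders_fk1 ENABLE NOVALIDATE;

    -- 4. and 5. manual workareas plus a huge per-session sort area (in bytes);
    --    DEFERRED applies the value to new sessions, such as the one imp opens
    ALTER SYSTEM SET workarea_size_policy = MANUAL;
    ALTER SYSTEM SET sort_area_size = 6442450944 DEFERRED;

    -- 6. big online redo: 6 groups, 2 members each, 2GB per member
    ALTER DATABASE ADD LOGFILE GROUP 4
      ('/u01/oradata/orcl/redo04a.log',
       '/u02/oradata/orcl/redo04b.log') SIZE 2G;
    -- ...repeat for the remaining groups, then drop the original small groups
    --    once they go inactive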

After the import, I reset everything back down to “normal”. The beauty of this migration was that the client gave us a significant window of downtime. I cannot stress enough how much this reduces the complexity of the whole operation, and the time needed to verify that everything works. Yay! for clients who provide realistic migration windows, and yay! for less complexity and effort (which equates to cost for all you business-y people) in migration windows.

I’m happy, and the client is happy, and I still have some cucumbers and broccoli left for breakfast.

P.S.: To add to the list of export/import deficiencies and 10.2.0.2 problems: you should note that function-based indexes need to be created manually.
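In other words, plan on pulling those index definitions out yourself (DBA_IND_EXPRESSIONS on the source shows the expressions) and re-running the DDL by hand on the target. A made-up example of what that ends up looking like:

    -- hypothetical function-based index, recreated manually on the 10g side
    CREATE INDEX emp_upper_name_ix ON employees (UPPER(last_name));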


About the Author

Shakir Sadikali is a senior database specialist with The Pythian Group. His areas of interest are scalability, data-modeling, and performance tuning.

8 Comments

Should it not be workarea_size_policy=auto in 4)?

Regards


Sorry, I misread your settings (you wrote sort_area_size, but I imagined pga_aggregate_target, which is probably too strongly associated with workarea_size_policy in my head). I need one more cup of coffee ;-)

Regards


You could further bring down the time for imp by doing parallel schema-level imports from the same export file. I have done this in the past (an 8i to 9.2 migration) with a lot of success. Of course, this works best when you import data only; indexes are best left to the last stage.

Raj


That’s true too! Unfortunately, this particular case involved everything being in one single schema, so I completely overlooked it. Thanks for bringing it up!

Shakir


Hi,

You say
“Now, whenever I run into a situation like this, we generally recommend a two-step process so that we can iron out bugs and be able to isolate causes”.

Without the time constraints mentioned above, can you detail your normally recommended two-step process?

Thanks.
Dominic

Shakir Sadikali
January 9, 2007 9:57 am

Hi Dominic,

In this type of situation, I like to do two separate migrations. The first would be to move the DB to the new platform, and then a second to actually perform the upgrade. Changing platforms is a big step, and so is going from 8i to 10g. As such, to help isolate potential problems, it’s always a good idea to let a system burn in a bit so that you can catch bugs before you get too many variables into the mix.

Andrew Smith
March 13, 2007 3:47 pm

If you could extract the biggest tables to ASCII and then load them with SQL*Loader, your total time would be less than an hour. You could even use a pipe to simulate a cascading unload-and-load.

This doc might help:
https://www.wisdomforce.com/dweb/resources/docs/UnloadLoadUsingFastReader.pdf


Is there a limit on sort_area_size in 11g? It does not accept anything more than 1 GB.

