A List: Moving Data Between Databases
A simple one, this. But there are a lot of ways out there. Quick sketches of several of them follow the list.
- Spool to Flat File, Load with SQL*Loader or External Tables: SQL*Plus does a reasonable job of unloading, and Tom Kyte also has a C utility for unloading data.
- SQL*Plus COPY command: Sadly deprecated, and without support for all data types, but a handy tool for basic data transfers. Allows arbitrary SQL on the data selection.
- Database Links: Direct movement from one database to another. Aside from the network latency, this is basically the same as reading data from your local database. If you combine database links and materialized views, you can replicate remote data to your local database through the database link.
- DBMS_PIPE: I don't recall reading of anyone using this, but I expect that it's a theoretical possibility to send data from one instance to another through a pipe. scratch this one: see TK's comment
- Export/Import: Sometimes mistaken for part of a backup and recovery strategy!
- Transportable Tablespaces: Closely related to Export/Import, allowing movement of entire tablespaces without the need for unloading and reloading data.
- Data Pump: Export/Import for the new millennium :) and with many new features.
- Oracle Streams: Powerful functionality here. Non-trivial, but probably the modern tool of choice for the regular propagation of data around your corporate network.
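To make the first option concrete, here's a minimal sketch of unloading with SQL*Plus and reloading through an external table. The table, columns and paths (EMP, /tmp/emp.dat) are invented for illustration:

-- On the source: unload to a delimited flat file with SQL*Plus
set pagesize 0 linesize 1000 feedback off heading off trimspool on
spool /tmp/emp.dat
select empno || ',' || ename || ',' || sal from emp;
spool off

-- On the target: expose the file as an external table and reload
create directory ext_dir as '/tmp';
create table emp_ext (
  empno number,
  ename varchar2(10),
  sal   number
)
organization external (
  type oracle_loader
  default directory ext_dir
  access parameters (records delimited by newline
                     fields terminated by ',')
  location ('emp.dat')
);
insert into emp select * from emp_ext;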
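And a hedged sketch of the COPY command (connection strings and table names are made up; remember COPY only handles CHAR, DATE, LONG, NUMBER and VARCHAR2 columns):

set arraysize 1000
set copycommit 10
copy from scott/tiger@source_db to scott/tiger@target_db -
create emp_copy -
using select * from emp where deptno = 10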
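For the database link and materialized view combination, something along these lines (link name, TNS alias and table are assumptions; a fast refresh also needs a materialized view log on the remote master):

create database link remote_db
  connect to scott identified by tiger
  using 'REMOTE_TNS_ALIAS';

-- Ad hoc pull across the link
insert into emp select * from emp@remote_db;

-- Or keep a periodically refreshed local copy
create materialized view emp_mv
  refresh fast on demand
  as select * from emp@remote_db;

exec dbms_mview.refresh('EMP_MV')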
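Classic Export/Import, at its simplest, looks roughly like this (credentials and file names are placeholders):

exp scott/tiger@source_db tables=emp file=emp.dmp log=exp.log
imp scott/tiger@target_db tables=emp file=emp.dmp log=imp.log fromuser=scott touser=scott ignore=y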
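A transportable tablespace move is, roughly, a metadata export plus a datafile copy. A sketch for a tablespace called USERS (paths and passwords are invented; quote the SYSDBA login to suit your shell):

-- 1. Check the set is self-contained, then make it read only
exec dbms_tts.transport_set_check('USERS', true)
select * from transport_set_violations;
alter tablespace users read only;

-- 2. Export just the metadata, as SYSDBA
exp userid="sys/change_me as sysdba" transport_tablespace=y tablespaces=users file=tts.dmp

-- 3. Copy tts.dmp and the datafiles to the target, then plug in
imp userid="sys/change_me as sysdba" transport_tablespace=y file=tts.dmp datafiles='/u01/oradata/users01.dbf'

-- 4. Make the tablespace read write again on both sides
alter tablespace users read write;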
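And the Data Pump equivalent (10g; the directory, dump file and link names are assumptions). Note that the directory object lives on the database server, not the client:

create directory dp_dir as '/u01/export';

expdp scott/tiger@source_db schemas=scott directory=dp_dir dumpfile=scott.dmp logfile=exp.log
impdp scott/tiger@target_db schemas=scott directory=dp_dir dumpfile=scott.dmp logfile=imp.log

-- Or skip the dump file entirely and pull straight across a database link
impdp scott/tiger@target_db network_link=source_link schemas=scott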
12 Comments:
Database pipes are for interprocess communication inside of a single INSTANCE.
They would not be appropriate for moving data between two databases.
Materialized views...
dbms_file_transfer (with transportable tablespaces)
http://download-west.oracle.com/docs/cd/B14117_01/appdev.101/b10802/d_ftran.htm#996730
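For what it's worth, a hypothetical sketch of pushing a datafile with DBMS_FILE_TRANSFER (the directory objects and the REMOTE_DB link are assumptions; the file size must be a multiple of 512 bytes):

begin
  dbms_file_transfer.put_file(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'users01.dbf',
    destination_directory_object => 'DEST_DIR',  -- must exist on the remote database
    destination_file_name        => 'users01.dbf',
    destination_database         => 'REMOTE_DB');
end;
/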
Heard from a lot of well-known people who say that Export/Import is an absolute no-no for a backup and recovery strategy. How about the following applications?
(a) Applications which are updated/modified at specific periods (month ends / once-a-week uploads through EDI routines). The rest of the time the application is read only.
(b) Another situation I heard of was a disaster recovery site on a different platform than the production site. Production was a series of applications in 2 different environments (3 systems running on Windows and 1 on Unix, so the decision was taken to have the DR site server as a Windows box). If the platform is different, what kind of backup would be suggested? (If the application is not 24/7.)
Today I moved 13 million records from one database to another with export/import in 4 minutes (both databases were on the same box). That is impressive to me. Am I the only one? (I first tried an INSERT with the APPEND hint into a NOLOGGING table via a db link, but that took 2 hrs.)
Heard from a lot of well-known people who say that Export/Import is an absolute no-no for a backup and recovery strategy.
That is because export/import utilities are not backup utilities. These utilities are for movement of data from one Oracle database to another. You may use them as a secondary option, with precautions, but certainly not as a primary backup method.
How about the following applications?
(a) Applications which are updated/modified at specific periods (month ends / once-a-week uploads through EDI routines). The rest of the time the application is read only.
Cold backup after every update. But remember, with this option too, some day you will lose data.
(b) Another situation I heard of was a disaster recovery site on a different platform than the production site. Production was a series of applications in 2 different environments (3 systems running on Windows and 1 on Unix, so the decision was taken to have the DR site server as a Windows box). If the platform is different, what kind of backup would be suggested? (If the application is not 24/7.)
In this case I'm assuming that you are planning to send full database export dumps to your DR site. So how will you do the imports? Drop the database, create a new one, and then do the import every time a new dump arrives? And if the database size is huge, this can take days...
If the database is important enough to have a DR site, buy another Unix box, or you can keep struggling with the imports.
Well, in 9i I'd certainly pick exp/imp over Streams as the preferred tool to "move data between databases". Which I guess is a different proposition from backups.
If there is one thing I've learned recently, it is that Streams in 9i may have been introduced, but you'll be lucky to get them to do anything relevant or stressful. Treat with extreme care.
Do a few searches in Metalink if you feel doubtful. Like all new Oracle features, two releases later they *might* become usable.
Meanwhile, those who have to solve real-life problems have to use reliable technology. Streams in 9i certainly aren't.
Sorry, but it needs to be said, in the interest of keeping less experienced people from falling into disaster situations.
Exp/imp is a backup and recovery option for certain situations. So the flat statement that it is not B&R should be corrected to "by itself, it is not". Alongside other backup solutions, it complements the B&R strategy.
Here's a real-life example. Let's say we have a database (100GB) and 2200+ tables. One of the tables is called STATE_FEES. STATE_FEES has 50 records (50 states) with associated columns that record various fees per state, per category, etc. The records never change, except maybe once every 3 months. (This is documented by the business users.) Now let's pretend that a user deleted the table by mistake in the middle of the business day (perhaps a developer or DBA). This action does not trigger a recovery for Oracle. Oracle does not say, "Hey! Database needs recovery", but the application or users may. RMAN, a cold/hot backup, or archived logs are not an efficient option for restoring this table. To restore, I would use the previous night's exp.
Or let's pretend that a DBA drops a certain number of indexes and forgets what they were. EXP has those DDLs. I believe EXP/IMP is a backup & recovery measure to be included in the strategy.
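For the STATE_FEES scenario above, the safety net might look like this (credentials and file names are placeholders):

# Nightly, as a scheduled job
exp scott/tiger tables=STATE_FEES file=state_fees.dmp log=state_fees_exp.log

# After the accidental delete, pull just that table back
imp scott/tiger tables=STATE_FEES file=state_fees.dmp log=state_fees_imp.log ignore=y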
Currently we use a DB LINK to migrate many databases from 8.1.7.4 to 9.2.0.5 through a series of dynamically generated statements of the form:
CREATE TABLE table_name
NOLOGGING COMPRESS
AS SELECT * FROM table_name@DB_LINK;
The target database is initially created from a ROWS=N export and set to NOARCHIVELOG for the migration only.
The source and target databases are in different geographies.
We have migrated many databases using this method - typical performance is a 100GB database fully migrated in 6 hours (much less, of course, if the target tables are NOCOMPRESS).
We find that this approach offers the greatest flexibility to physically reorganize data as part of an easily restartable migration process - e.g. converting tables to partitioned or index-organized.
Of course, indexes, grants, constraints and other objects are migrated once the tables have been copied.
Just sharing our experience ...
David: Is RMAN database duplication another one for the list? Follow my thinking:
If you need to transfer all the data from production to development, you can ditch development and duplicate production to create a new development instance. The net product is similar to a data transfer, but on a bigger scale.
OK, it's not really a data transfer in the same sense you were discussing :)
Tim, I have to agree with you. We use RMAN to clone a database because it is a faster method of transferring the data when many schemas are involved.
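For reference, the core of an RMAN duplication is a single command, though the setup around it matters: the auxiliary (DEV) instance must already be started NOMOUNT with a password file, and the production backups must be readable from its host. The connection strings here are invented:

rman target sys/change_me@prod auxiliary sys/change_me@dev

RMAN> duplicate target database to dev nofilenamecheck;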
"have you used Streams in 10g at all, and if so is it any better (ie more mature) in 10g ?"
No, not at all. I have not used 10g other than casually, to learn it. I don't know how stable Streams are in it, and I won't be running anything in production until 10gR2 is well out, probably. Sorry, can't help.
I believe EXP/IMP is a backup & recovery measure to be included in the strategy.
You need to be more precise in your usage of the word "recovery."
It is good to have redundant data stores, especially for grabbing small amounts of data lost at unknown times through user error. This is data restoration, not transaction based recovery.
exp/imp is a logical backup; recovery in the Oracle sense works from a physical backup. They both have uses. The most important point is to clarify what data is important and how you will deal with problems.
Back to the subject: one can use a standby to move data between databases. Initially, anyway (aside from the normal standby usage). It can be helpful with a small window to change hardware, for example.
For small, stable data sets in an environment where tables are prone to being accidentally dropped or having their data removed, I guess exp/imp has a place -- it's certainly easier than a TSPITR.
However, I'd also put mechanisms in place to prevent the table from being dropped or truncated, for example "ALTER TABLE MY_TABLE DISABLE TABLE LOCK", which will raise an ORA-00069 on drop or truncate attempts, and maybe a trigger to prevent changes (optionally, to prevent changes outside of a particular time window). A sketch follows.
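Something like this, say (the table and trigger names are placeholders, and the Saturday-only window is just an example):

-- DROP and TRUNCATE attempts now raise ORA-00069
alter table my_table disable table lock;

-- A statement-level trigger rejecting changes outside the window
create or replace trigger my_table_guard
  before insert or update or delete on my_table
begin
  if to_char(sysdate, 'DY', 'nls_date_language=english') <> 'SAT' then
    raise_application_error(-20001,
      'MY_TABLE may only be changed during the Saturday window');
  end if;
end;
/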