DBAsupport.com Forums - Powered by vBulletin
Page 1 of 2
Results 1 to 10 of 13

Thread: Copy a very big table from one database to other

  1. #1
    Join Date
    Mar 2003
    Posts
    3

    Copy a very big table from one database to other

    Hi

    I have to copy a very big table from one database to another database. This table has around 74080422 rows and the AVG_ROW_LEN is 151.

    I have tried export/import: the export took 1 hour, but the import ran for 3 days and didn't finish.

    I have also tried a "create table as select * from cargos@db1", i.e. using a database link, but after 12 hours it still hadn't finished; the temp tablespace was full, which terminated the process. The temp tablespace has 14 GB.

    Does anyone know a different way to copy big tables?
    Does anyone have a suggestion?

    Regards
    Anabel

  2. #2
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    Spool the table to a flat file.

    Then load it with SQL*Loader, direct path if possible; if not, split the flat file into 5 or 6 files and load them with SQL*Loader.
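    A minimal sketch of the spool step, assuming a hypothetical cargos table and a pipe delimiter (table, column names and the file path are placeholders, not from the original post):

    ```sql
    -- Hypothetical sketch: spool the source table to a flat file in SQL*Plus.
    -- All object names and the path are placeholders.
    SET HEADING OFF FEEDBACK OFF PAGESIZE 0 TERMOUT OFF
    SET LINESIZE 500 TRIMSPOOL ON
    SPOOL /tmp/cargos.dat
    SELECT cargo_id || '|' || cargo_name || '|' || cargo_date
      FROM cargos;
    SPOOL OFF
    ```

    The file could then be loaded with something like "sqlldr user/pass control=cargos.ctl direct=true", where the control file maps the pipe-delimited fields onto the target table's columns.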

  3. #3
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    How many indexes and constraints are defined on the target table?

    If possible, drop the indexes and constraints first, precreate necessary extents, run the import.
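    As a sketch of that sequence (the index and constraint names here are hypothetical, just to show the pattern):

    ```sql
    -- Hypothetical sketch: strip indexes/constraints before the import,
    -- then recreate them afterwards. All object names are placeholders.
    ALTER TABLE cargos DISABLE CONSTRAINT cargos_pk;
    DROP INDEX cargos_idx1;
    -- ... run the import here ...
    CREATE INDEX cargos_idx1 ON cargos (cargo_id) NOLOGGING;
    ALTER TABLE cargos ENABLE CONSTRAINT cargos_pk;
    ```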

  4. #4
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439

    Re: Copy a very big table from one database to other

    Originally posted by Anabel Paredes
    I have tried to make a create table as select * from cargos@db1.
    I mean using a database link but we waited 12 hours and it didn't finish, the temp tablespace was full, this terminate the process. The temp tablespace has 14 gigas.
    Something can't be right with the above description. "create table as select * from cargos@db1" doesn't need to perform any sorting whatsoever, so it doesn't need any space in the temp tablespace. You couldn't have run out of temp tablespace; it must have been something else. Perhaps you ran out of space in your rollback tablespace?

    How about using SQL*Plus's COPY command? It was made available just for tasks like yours. It's surprisingly fast, and you can specify the frequency of commits to avoid RBS problems....
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?
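    The COPY approach above might look like this (connect strings and table names are placeholders; ARRAYSIZE sets the rows per fetch batch and COPYCOMMIT the number of batches per commit):

    ```sql
    -- Hypothetical sketch of SQL*Plus COPY with periodic commits,
    -- run from a session connected to the target database.
    SET ARRAYSIZE 1000
    SET COPYCOMMIT 10
    COPY FROM scott/tiger@db1 -
    CREATE cargos USING SELECT * FROM cargos
    ```

    With COPYCOMMIT set, the rollback segment only has to hold ARRAYSIZE * COPYCOMMIT rows at a time, which is what avoids the RBS problems mentioned above.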

  5. #5
    Join Date
    Jul 2000
    Posts
    521
    If the table export is taking just 1 hour, the table must not be too big. So, concentrate on the import; that should do the trick. Play with the import parameters: use a big BUFFER size, COMMIT=Y, and INDEXES=N.

    Export the table without constraints, and then put them back on the table once the import completes successfully.
    svk
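    A sketch of an import parameter file implementing those suggestions (the file names and buffer size are placeholders, not from the original post), to be run as "imp user/pass parfile=imp_cargos.par":

    ```
    # imp_cargos.par -- hypothetical parameter file for the import
    FILE=cargos.dmp
    TABLES=cargos
    BUFFER=10485760
    COMMIT=Y
    INDEXES=N
    CONSTRAINTS=N
    IGNORE=Y
    ```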

  6. #6
    Join Date
    Mar 2002
    Posts
    534
    Have a look at the Transportable Tablespace feature, which is available since Oracle8i. There are some limitations, but it may be a really fast way to duplicate data.

    For more details:
    http://otn.oracle.com/docs/products/...paces.htm#5757

    HTH
    Mike
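    A rough sketch of the transportable tablespace steps (tablespace, file and account names below are placeholders; check the documentation linked above for the exact restrictions):

    ```sql
    -- Hypothetical sketch of a transportable tablespace move.
    -- 1. On the source, make the tablespace read only:
    ALTER TABLESPACE cargos_ts READ ONLY;
    -- 2. Export only the metadata (run at the OS level):
    --    exp userid=sys/... transport_tablespace=y tablespaces=cargos_ts file=ts.dmp
    -- 3. Copy the datafiles to the target, then plug them in:
    --    imp userid=sys/... transport_tablespace=y datafiles='cargos01.dbf' file=ts.dmp
    -- 4. Make the tablespace read write on both databases:
    ALTER TABLESPACE cargos_ts READ WRITE;
    ```

    Because the datafiles are copied at the OS level, no row-by-row transfer happens at all, which is why this can beat both export/import and COPY for a table this size.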

  7. #7
    Join Date
    Aug 2001
    Posts
    267
    I have used the COPY command many times for tables of 1 to 5 GB. It is easy to use and fast. Anyway, that is my experience; I am not sure which way is fastest.

    Syntax
    COPY {FROM database | TO database | FROM database TO database}
    {APPEND|CREATE|INSERT|REPLACE} destination_table
    [(column, column, column, ...)] USING query
    where database has the following syntax:

    username[/password]@connect_identifier
    Raghu

  8. #8
    Join Date
    Aug 2001
    Posts
    267
    If you don't have many resources, it is better to run the COPY command more than once by splitting your query.

    Raghu
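    Splitting the copy could look like this, assuming a numeric key to partition the rows on (cargo_id and the range boundary are placeholders):

    ```sql
    -- Hypothetical sketch: copy the table in two slices.
    SET ARRAYSIZE 1000
    SET COPYCOMMIT 10
    COPY FROM scott/tiger@db1 -
    CREATE cargos USING SELECT * FROM cargos WHERE cargo_id < 40000000
    COPY FROM scott/tiger@db1 -
    APPEND cargos USING SELECT * FROM cargos WHERE cargo_id >= 40000000
    ```

    The first slice creates the table; each further slice uses APPEND, so the runs can also be spread over time if resources are tight.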

  9. #9
    Join Date
    Aug 2001
    Posts
    267
    I forgot to mention: for a faster load, it is better to use

    ALTER TABLE tab1 NOLOGGING;

    and to make all your indexes unusable, or make them NOLOGGING as well.
    Raghu

  10. #10
    Join Date
    Sep 2000
    Posts
    305
    I think export/import will work. Try exporting with INDEXES=N, give it a large buffer size, and use direct path, as svk told you. But before that, execute this command: "ALTER TABLE table_name NOLOGGING".

    Hope this will be a little bit faster.
    Shailendra
