Recently, I was involved in a discussion about import performance in this OTN forum, in which the original poster raised the issue of whether or not import resorts to single-row inserts for tables with date columns.
Import loads rows using array inserts, and the buffer parameter essentially specifies the size of that array. We know, however, that if a table has lob columns, the import buffer parameter is not honored, and the import utility uses single-row inserts for those tables. But tables with date columns, so the claim goes, must also suffer single-row inserts. In this blog, I will probe that claim further and put it to the test.
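If the buffer really drives the array size, a back-of-the-envelope estimate (my own assumption, not a documented formula) is the buffer size divided by the maximum possible row length; per-row overhead means the observed numbers will come out somewhat lower:

REM Back-of-the-envelope sketch (my assumption, not a documented formula):
REM rows per array insert ~= buffer / maximum row length.
REM For the test table below (number + varchar2(512) + date), the maximum
REM row length is roughly 22 + 512 + 7 bytes, ignoring per-row overhead.
select trunc(1048576 / (22 + 512 + 7)) est_rows_per_insert_1mb
from dual;   -- ~1938 rows for a 1MB buffer

Keep this estimate in mind; the 1MB test below lands close to it.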
Let’s create a table and populate it with 300K rows.
create table t1 (n1 number, v1 varchar2(512), d1 date);

insert into t1
select n1, lpad(n1, 500, 'x'), sysdate
from (select level n1 from dual connect by level <= 300003);
commit;

REM Creating an export file..
host exp userid=cbqt/cbqt file=exp_t1.dmp log=exp_t1.log tables=t1
The above code fragment created a table, inserted 300,003 rows, and exported that table to an export dump file. This dump file is ready to be imported. But we need to trace the import to measure the effect of the buffer parameter. The problem, though, is how to trace the import session alone without tracing every other session in the database. This can be achieved by creating a logon trigger, as below. Only sessions from a test user (the username is CBQT) will have trace enabled by this trigger.
REM I could use the "on schema" clause too, but this is part of generic code that I use.
REM Riyaj Shamsudeen - To trace a session through a logon trigger
create or replace trigger set_system_event
after logon on database
declare
  v_user dba_users.username%TYPE := user;
  sql_stmt1 varchar2(256) :=
    'alter session set events '||chr(39)||
    '10046 trace name context forever, level 12'||chr(39);
begin
  if (v_user = 'CBQT') then
    execute immediate sql_stmt1;
  end if;
end;
/
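Before running the import, it is worth verifying that the trigger actually fires. One quick check (my addition, assuming the 11g diag infrastructure this orcl11g instance uses) is to log on as CBQT and ask the session for the trace file it will write to:

REM Sanity check (my addition): after logging on as CBQT, locate the
REM trace file this session will write to.
select value
from   v$diag_info
where  name = 'Default Trace File';

The file named there is the one to feed to tkprof once the import completes.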
Let’s drop the table, then import with a default buffer size of 64K. Through the logon trigger, a new SQL trace file will be generated. That trace file will be analyzed with the tkprof utility as shown in the code fragment below:
drop table t1;

imp userid=cbqt/cbqt file=exp_t1.dmp log=imp_t1.log commit=Y full=Y

tkprof orcl11g_ora_3840.trc orcl11g_ora_3840.trc.out sort=execpu,fchcpu
The pertinent lines from the generated tkprof output file are printed below. The insert statement was executed 5455 times, which works out to an average array size of about 55 rows.
SQL ID : c9nv9yq6w2ydp
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "D1")
VALUES (:1, :2, :3)

call       count       cpu    elapsed       rows
-------  -------  -------- ----------  ---------
Parse          1      0.00       0.00          0
Execute     5455     15.06      20.10     300003
Fetch          0      0.00       0.00          0
-------  -------  -------- ----------  ---------
total       5456     15.06      20.10     300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=7 pr=0 pw=0 time=0 us)
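For the record, the array-size figure quoted above is just the rows loaded divided by the executions from this trace:

REM Average array size for the 64KB run (plain arithmetic on the trace counts):
select round(300003 / 5455) avg_rows_per_execute from dual;   -- ~55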
Let’s repeat this test case for a buffer size of 1MB.
sqlplus cbqt/cbqt <<EOF
drop table t1;
EOF

imp userid=cbqt/cbqt file=exp_t1.dmp log=imp_t1.log buffer=1048576 commit=Y full=Y

tkprof orcl11g_ora_3846.trc orcl11g_ora_3846.trc.out sort=execpu,fchcpu
The trace lines from the tkprof output file for the 1MB test case are shown below:
SQL ID : c9nv9yq6w2ydp
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "D1")
VALUES (:1, :2, :3)

call       count       cpu    elapsed       rows
-------  -------  -------- ----------  ---------
Parse          1      0.00       0.00          0
Execute      157     10.40      19.41     300003
Fetch          0      0.00       0.00          0
-------  -------  -------- ----------  ---------
total        158     10.40      19.41     300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=231 pr=0 pw=0 time=0 us)
The number of executions for the insert statement went down from 5455 to 157, and the average array size went up from about 55 rows to about 1910 rows, roughly in line with the back-of-the-envelope estimate earlier. Increasing the buffer from 64KB to 1MB increased the average array size accordingly.
Repeating this test with 10MB and 100MB buffer sizes shows the executions reduced to 16 and 10, respectively. This confirms that the import buffer parameter is honored for tables with date columns as well.
10MB:

SQL ID : c9nv9yq6w2ydp
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "D1")
VALUES (:1, :2, :3)

call       count       cpu    elapsed       rows
-------  -------  -------- ----------  ---------
Parse          1      0.00       0.00          0
Execute       16      9.50      17.78     300003
Fetch          0      0.00       0.00          0
-------  -------  -------- ----------  ---------
total         17      9.50      17.78     300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=2548 pr=0 pw=0 time=0 us)

100MB:

SQL ID : c9nv9yq6w2ydp
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "D1")
VALUES (:1, :2, :3)

call       count       cpu    elapsed       rows
-------  -------  -------- ----------  ---------
Parse          1      0.00       0.00          0
Execute       10      9.18      18.98     300003
Fetch          0      0.00       0.00          0
-------  -------  -------- ----------  ---------
total         11      9.18      18.98     300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=4269 pr=0 pw=0 time=0 us)
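A quick scaling check (my own arithmetic on the counts above): at 10MB the arrays grow roughly in proportion to the buffer, but at 100MB they clearly do not, which hints at some other internal limit; I have not chased down the cause.

REM Rows per execute at the larger buffer sizes (plain arithmetic):
select round(300003 / 16) rows_per_exec_10mb,    -- ~18750, roughly 10x the 1MB figure
       round(300003 / 10) rows_per_exec_100mb    -- ~30000, far less than 100x
from dual;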
Now let’s repeat a subset of this case for tables with lob columns.
REM At this point, the trigger is enabled, so performance might not be great.
REM Drop the trigger and recreate it later, if needed.

drop table t1;
create table t1 (n1 number, v1 varchar2(512), c1 clob);

REM Keeping the clob column small.
insert into t1
select n1, lpad(n1, 500, 'x'), lpad(n1, 10, 'x')
from (select level n1 from dual connect by level <= 300003);
commit;

REM Creating an export file...
host exp userid=cbqt/cbqt file=exp_t1.dmp log=exp_t1.log tables=t1

drop table t1;

imp userid=cbqt/cbqt file=exp_t1.dmp log=imp_t1.log buffer=104857600 commit=Y full=Y

tkprof orcl11g_ora_396.trc orcl11g_ora_396.trc.out sort=execpu,fchcpu
Lines from the tkprof output file for the above test case:
SQL ID : a92gcz9gxjuqh
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "C1")
VALUES (:1, :2, :3)

call       count       cpu    elapsed       rows
-------  -------  -------- ----------  ---------
Parse          1      0.00       0.00          0
Execute   300003    185.17     193.95     300003
Fetch          0      0.00       0.00          0
-------  -------  -------- ----------  ---------
total     300004    185.17     193.95     300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=3 pr=0 pw=0 time=0 us)
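Note that the executions count equals the rows loaded, despite buffer=100MB. To put a number on the cost (my arithmetic, comparing this trace with the 100MB date-column trace above):

REM CPU cost of single-row inserts vs. 100MB arrays (plain arithmetic
REM on the tkprof figures above):
select round(185.17 / 9.18, 1) cpu_ratio from dual;   -- ~20x more CPU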
It’s clear that for lob columns, single-row inserts are used; that is why import performance is poor for tables with lob columns. Date columns, however, do not suffer from any such performance issue.