
Oracle Sponge -- Now Moved To Wordpress

Please use http://oraclesponge.wordpress.com

Thursday, October 06, 2005

Testing the Trace Loader

As I said yesterday, I've been fiddling around with loading trace files into the database for analysis, and today I was looking for a good test case to see how the numbers are coming out. The obvious example to choose was the direct path enhancement to materialized view refresh that I also wrote about recently.

Here is my test script:

drop table master;
drop table master_test_data;
drop materialized view master_mv1;
drop materialized view master_mv2;

create table master
(
location_cd number not null,
tx_timestamp date not null,
product_cd number not null,
tx_qty number not null,
tx_cst number not null
)
pctfree 0
nologging
/

create materialized view log on master
with rowid
(
location_cd,
tx_timestamp,
product_cd,
tx_qty,
tx_cst
)
including new values
/

create materialized view master_mv1
Using No Index
Refresh Fast On Commit
Enable Query Rewrite
As
Select
location_cd,
trunc(tx_timestamp,'MM') tx_month,
sum(tx_qty) s_tx_qty,
sum(tx_cst) s_tx_cst,
count(*) c_star
from
master
group by
location_cd,
trunc(tx_timestamp,'MM')
/

create materialized view master_mv2
Using No Index
Refresh Fast On Commit
Enable Query Rewrite
As
Select
product_cd,
trunc(tx_timestamp,'MM') tx_month,
sum(tx_qty) sum_tx_qty,
sum(tx_cst) sum_tx_cst,
count(*) c_star
from
master
group by
product_cd,
trunc(tx_timestamp,'MM')
/

create table master_test_data
(
location_cd not null,
tx_timestamp not null,
product_cd not null,
tx_qty not null,
tx_cst not null
)
pctfree 0
nologging
as
select
trunc(dbms_random.value(1,10)),
to_date('01-jan-2005','DD-Mon-YYYY')
+ dbms_random.value(1,31),
trunc(dbms_random.value(1,1000)),
trunc(dbms_random.value(1,10)),
trunc(dbms_random.value(1,100),2)
from
dual
connect by
1=1 and
level < 1000001
/
commit
/
select count(*) from master_test_data
/
alter session set timed_statistics = true;
alter session set max_dump_file_size=1000000000;
alter session set events '10046 trace name context forever, level 8';

insert into master
select *
from master_test_data
where rownum < 100001
/
commit
/
exit

Rather straightforward stuff. The "select count(*)" against the master test data table is just there to eliminate the possibility of the test results being skewed by delayed block cleanout, by the way. The example above is the conventional path insert; the only change required for the direct path enhancement to kick in is an APPEND hint on the last insert statement.
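
In other words, the direct path version of the test simply changes that final statement to:

insert /*+ APPEND */ into master
select *
from master_test_data
where rownum < 100001
/
commit
/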

So, having identified the trace files, I issue the following command:

begin
trace.purge; -- purges all data from the trace file repository

trace.load_file
(directory =>'UDUMP',
file_name => 'slimdb_ora_4412.trc',
identify_as => 'direct path',
debug => 0,
log_extents => true,
log_files => true);

trace.load_file
(directory =>'UDUMP',
file_name => 'slimdb_ora_4600.trc',
identify_as => 'conv path',
debug => 0,
log_extents => true,
log_files => true);
end;
/
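
Because each file is loaded with log_files => true, the loads can be checked against the repository before reporting on them. This is only a sketch -- I'm assuming here that the trace_files table exposes the file#, file_name and identify_as columns that the reporting queries below reference:

-- sanity check: column names inferred from the reporting queries below
select
file#,
file_name,
identify_as
from
trace_files
order by
file#
/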

This allows me to query the trace file information directly, giving me output such as:

SQL> select
2 substr(identify_as,1,12),
3 op_name,
4 sum(op_c),
5 sum(op_e)/1000000 sec,
6 count(*)
7 from
8 (
9 select
10 identify_as,
11 cur_dep,
12 f.file#,
13 f.file_name,
14 c.cursor#,
15 o.op_name,
16 OP_C ,
17 OP_E ,
18 OP_P ,
19 OP_CR ,
20 OP_CU ,
21 OP_TIM
22 from
23 trace_ops o,
24 trace_cursors c,
25 trace_files f
26 where
27 f.file# = c.file# and
28 c.cursor# = o.cursor#
29 )
30 group by
31 substr(identify_as,1,12),
32 op_name
33 order by
34 1,2,4 desc
35 /

SUBSTR(IDENT OP_NAME     SUM(OP_C)        SEC   COUNT(*)
------------ ---------- ---------- ---------- ----------
conv path    EXEC         20875000  33.358519     100342
conv path    FETCH         2296875   3.026768        304
conv path    PARSE          156250    .661319        333
direct path  EXEC          2890625   3.476836        350
direct path  FETCH          140625    .406403        402
direct path  PARSE          859375   1.302354        224

6 rows selected.

Of course all the interesting work is going on behind the scenes in the form of recursive SQL:

SQL> select
2 substr(identify_as,1,12),
3 cur_dep,
4 op_name,
5 sum(op_c),
6 sum(op_e)/1000000 sec,
7 count(*)
8 from
9 (
10 select
11 identify_as,
12 cur_dep,
13 f.file#,
14 f.file_name,
15 c.cursor#,
16 o.op_name,
17 OP_C ,
18 OP_E ,
19 OP_P ,
20 OP_CR ,
21 OP_CU ,
22 OP_TIM
23 from
24 trace_ops o,
25 trace_cursors c,
26 trace_files f
27 where
28 f.file# = c.file# and
29 c.cursor# = o.cursor#
30 )
31 group by
32 substr(identify_as,1,12),
33 cur_dep,
34 op_name
35 order by
36 1,2,3,5 desc
37 /

SUBSTR(IDENT    CUR_DEP OP_NAME     SUM(OP_C)        SEC   COUNT(*)
------------ ---------- ---------- ---------- ---------- ----------
conv path             0 EXEC         14484375  22.637736         18
conv path             0 FETCH         2125000   2.342986         15
conv path             0 PARSE           93750     .56999         16
conv path             1 EXEC          6343750  10.694057     100188
conv path             1 FETCH          109375    .619292        147
conv path             1 PARSE           62500    .079386        187
conv path             2 EXEC            46875    .026135        128
conv path             2 FETCH           62500    .062152        134
conv path             2 PARSE               0    .009992        128
conv path             3 EXEC                0     .00006          1
conv path             3 FETCH               0    .002111          1
conv path             3 PARSE               0    .001875          1
conv path             4 EXEC                0    .000531          7
conv path             4 FETCH               0    .000227          7
conv path             4 PARSE               0    .000076          1
direct path           0 EXEC          1718750   2.218695          3
direct path           0 PARSE           31250     .17893          2
direct path           1 EXEC           750000    .841392        138
direct path           1 FETCH           15625    .128181         91
direct path           1 PARSE          343750     .47748        133
direct path           2 EXEC           140625     .14998        124
direct path           2 FETCH          109375    .168264        124
direct path           2 PARSE          421875     .52747         27
direct path           3 EXEC           281250    .260751         78
direct path           3 FETCH           15625    .068693        180
direct path           3 PARSE           46875    .109109         61
direct path           4 EXEC                0    .006018          7
direct path           4 FETCH               0    .041265          7
direct path           4 PARSE           15625    .009365          1

29 rows selected.

So what leaps out at us here (or it would do if Blogger didn't trim leading and multiple white spaces) is an enormous number of executions at a recursive depth (cur_dep) of 1 in the conventional path refresh -- coincidentally (well, actually not at all coincidentally) slightly more than the number of rows being inserted into the master table.
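
One way to pin down which cursor accounts for that flood of depth-1 executions is to count EXEC operations per cursor. This is just a sketch, and it assumes cur_dep is a column of trace_cursors -- it appears unqualified in the queries above, so that is a guess at where it lives:

select
c.cursor#,
count(*) exec_count -- number of EXEC lines recorded for each depth-1 cursor
from
trace_ops o,
trace_cursors c
where
c.cursor# = o.cursor# and
o.op_name = 'EXEC' and
cur_dep = 1
group by
c.cursor#
order by
2 desc
/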

This, of course, is the maintenance of the materialized view log being performed in an inefficient row-by-row manner:

SQL> select CUR_SQL_TEXT
2 from trace_cursors
3 where cursor# =
4 (
5 select cursor#
6 from
7 (
8 Select cursor#,count(*)
9 from trace_ops
10 where op_name = 'EXEC'
11 group by cursor#
12 order by 2 desc
13 )
14 where rownum = 1
15 )
16 /

CUR_SQL_TEXT
--------------------------------------------------------------------------------

INSERT /*+ IDX(0) */ INTO "DAVE"."MLOG$_MASTER" (dmltype$$,old_new$$,snaptime$$,
change_vector$$,m_row$$,"LOCATION_CD","TX_TIMESTAMP","PRODUCT_CD","TX_QTY","TX_C
ST") VALUES (:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c,:m,
:1,:2,:3,:4,:5)

Hey, at least it's using bind variables!

3 Comments:

At 4:32 PM, Blogger William Robertson said...

> or it would do if Blogger didn't trim leading and multiple white spaces)

No, it is the browser that does that, because that is the HTML standard.

Try <pre> tags for preformatted text.

 
At 5:40 PM, Blogger David Aldridge said...

Wheee!

thanks William.

 
At 11:51 AM, Anonymous Anonymous said...

David, have you looked at the oraSRP (free) tool for looking at trace files? Egorst does a great service with all his work on this resource profiler, based on the Hotsos publications.

http://forum.oracledba.ru/index.php

 
