Tuning 101, Part 1: Egads! Why is it So Slow?

Posted in: Technical Track

I’ve received the call more times than I can count. You know, the one that arrives ten minutes before you pack up for the trip home to see the kids, play ball, and generally not be at the office. The caller is normally your boss/administrator/guy in charge of all things not working. And when the caller reaches you, you’re the guy in charge of making it work again.

In my experience, the solutions to most problems (the ones the caller describes as “it’s running slow”) are not rooted in hardware, because hardware problems generally cause things not to run at all. It always baffles me when developers and architects prescribe hardware upgrades to make things run faster, because about 80% of the performance problems and subsequent solutions I’ve dealt with were resolved by tuning the application. In the hands of an experienced tuner, the cost of getting more out of the whole system does not compare with the cost of a hardware upgrade. It’s just apples and oranges… and my apples are much cheaper than their oranges.

This first installment deals with the biggest contributor to bad application performance — the dreaded bad query. I’ve actually had busy systems crash because a developer unleashed a query that performed one full tablescan too many.

This approach can be used either for troubleshooting in a crisis or, preferably, proactively, to help you identify potential bottlenecks before they cause problems. It might not be the most efficient method, and I can’t guarantee that the query is bug-free, but it works. If you find any problems, please let me know; I always appreciate efforts to make my work better.

You need to know what’s running on the system, and how “expensive” it is. That’s the only way to know what you should attack, and in my experience it’s the hardest part of this making-things-run-faster thing.

On Oracle systems we can identify the SQL that’s been running by querying the v$sqlarea and v$sqltext views. You can also do this with statspack using a level 6 or higher snapshot. These methods work superbly under ideal conditions. Real life, however, is rarely ideal: we flush the shared_pool every now and then; some systems have a cold backup taken; some of them crash; and last, some of them obfuscate the SQL by not using bind variables, making it difficult to isolate queries.

Running the following always works as a good starting point:
select executions, buffer_gets, sql_text from v$sqlarea;

74522 166474 INSERT INTO UNIQUE_OBJECT VALUES ( :b1,:b2,:b3,:b4,:b5,:b6,:b7  ) 
35263 71972 INSERT INTO UNIQUE_USER_ACCOUNT VALUES ( :b1,:b2,:b3  ) 
953 4105 INSERT INTO UQ_USER_SNAPSHOT VALUES ( SEQ_UQ_USER_SNAPSHOT.NEXTVAL,:b1,:b2,SYSDATE  ) 
953 16415 INSERT INTO UQ_USER_SNAPSHOT_ENTRY VALUES ( :b1,:b2,:b3  ) 
1 4 SELECT  MODIFIED_BY "modified_by", MODIFY_DT "modify_dt" FROM UQ_ANSWER where (ID=12899182) 
1 4 SELECT  MODIFIED_BY "modified_by", MODIFY_DT "modify_dt" FROM UQ_ANSWER where (ID=12899184) 
1 4 SELECT  MODIFIED_BY "modified_by", MODIFY_DT "modify_dt" FROM UQ_ANSWER where (ID=12899192) 
1 4 SELECT  MODIFIED_BY "modified_by", MODIFY_DT "modify_dt" FROM UQ_ANSWER where (ID=12899194)

You’ll notice that some of the queries — the last four SELECT statements — have literal values. These literal values get in the way of figuring out how expensive a query really is. For instance, I could try doing a group by summing the buffer_gets, but those pesky literals just get in the way: they prevent the underlying query from being grouped properly, because Oracle thinks each statement is different! Also, these buffer_gets and executions counts are cumulative only since the shared_pool was last flushed or the instance bounced, and only while the query hasn’t been aged out. Are we really getting the information we need to make a good judgement?

I found this function on Tom Kyte’s website, and I haven’t touched it in any way. It takes SQL laced with literal values and strips the literals out, replacing string constants with ‘#’ and digit runs with ‘@’. Thank you Tom, it was a life saver.

create or replace function remove_constants( p_query in varchar2 ) return varchar2
as
    l_query     long;
    l_char      varchar2(1);
    l_in_quotes boolean default FALSE;
begin
    for i in 1 .. length( p_query )
    loop
        l_char := substr( p_query, i, 1 );
        if ( l_char = '''' and l_in_quotes )
        then
            l_in_quotes := FALSE;
        elsif ( l_char = '''' and NOT l_in_quotes )
        then
            l_in_quotes := TRUE;
            l_query := l_query || '''#';
        end if;
        if ( NOT l_in_quotes )
        then
            l_query := l_query || l_char;
        end if;
    end loop;
    l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
    for i in 0 .. 8
    loop
        l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
        l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
    end loop;
    return upper( l_query );
end;
/
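If you want to sanity-check what the function does without touching a database, here is a rough Python port (my own sketch, not Tom’s code) that mirrors the logic above: string literals collapse to '#', digit runs to '@', runs of spaces to a single space, and the result is uppercased.

```python
def remove_constants(query: str) -> str:
    """Rough Python port of Tom Kyte's remove_constants PL/SQL function."""
    out = []
    in_quotes = False
    for ch in query:
        if ch == "'" and in_quotes:
            in_quotes = False          # closing quote: fall through and emit it
        elif ch == "'" and not in_quotes:
            in_quotes = True           # opening quote: emit '# placeholder
            out.append("'#")
        if not in_quotes:
            out.append(ch)             # skip everything inside the literal
    result = "".join(out)
    # digits -> '@', then collapse runs of '@' and runs of spaces
    result = result.translate(str.maketrans("0123456789", "@" * 10))
    for n in range(10, 1, -1):
        result = result.replace("@" * n, "@")
        result = result.replace(" " * n, " ")
    return result.upper()
```

With this, both `where (ID=12899182)` and `where (ID=12899184)` normalize to `WHERE (ID=@)`, which is exactly what lets them finally group together.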

The next section is where we get creative. Because we’re flushing the shared_pool, crashing, and taking cold backups, we need to keep track of the sqlarea ourselves. I do this by scheduling the following procedure, which simply stuffs the contents of v$sqlarea into a holding table. You can schedule it at whatever interval you want. Here’s the DDL for the holding table and the procedure.

drop table bob_sqlarea;

create table bob_sqlarea tablespace INDX as
select hash_value, sql_text, executions, buffer_gets, parsing_user_id
  from v$sqlarea
 where 1 = 0;   -- create it empty; the job below populates it

alter table bob_sqlarea add sql_text_wo_constants varchar2(4000);
alter table bob_sqlarea add timestamp date;

create or replace procedure pop_bob_sqlarea as
begin
    -- name the columns so sysdate lands in timestamp, not in hash_value
    insert into bob_sqlarea
        ( timestamp, hash_value, sql_text, executions, buffer_gets, parsing_user_id )
    select sysdate, hash_value, sql_text, executions, buffer_gets, parsing_user_id
      from v$sqlarea;
    commit;
end;
/

variable x number;
exec sys.dbms_job.submit(:x,'pop_bob_sqlarea;',sysdate+1/24,'sysdate + 1/24');

Now that we’re catching the entire shared_pool, we stand a chance of getting meaningful cumulative stats for the DB. Here I have scheduled my job to run every hour.

When you’re ready to look at the data after a few minutes/hours/days/years of gathering it, you simply do this:

update bob_sqlarea set sql_text_wo_constants = remove_constants(sql_text);
commit;

Now, run this query and look at the report. The query is ugly because it has to account for SQL getting flushed in and out of the pool as well as DB bounces. You do lose some data on a bounce, but if you’re taking snapshots every x minutes, it’s better than what statspack gives you.

select sum(total_execs), sum(total_bg)
     , case when sum(total_execs) > 0
            then round(sum(total_bg)/sum(total_execs),0)
            else sum(total_bg) end ratio
     , sql_text
  from (
    select username
         , substr(sql_text_wo_constants,0,900) sql_text
         , hash_value
         , executions
         , buffer_gets
         , count(*) over (partition by hash_value) num_hashes
         , executions - lag(executions,1,0)
             over (partition by hash_value order by timestamp) diff
         , case
             when count(*) over (partition by hash_value) = 1
               then executions
             when executions - lag(executions,1,0)
               over (partition by hash_value order by timestamp) = 0
               then 0
             when executions - lag(executions,1,0)
               over (partition by hash_value order by timestamp) > 0
               then executions - lag(executions,1,0)
                 over (partition by hash_value order by timestamp)
             when count(*) over (partition by hash_value) = 2
               then executions
             when executions - lag(executions,1,0)
               over (partition by hash_value order by timestamp) < 0
               then executions
             else executions - lag(executions,1,0)
               over (partition by hash_value order by timestamp)
           end total_execs
         , case
             when count(*) over (partition by hash_value) = 1
               then buffer_gets
             when buffer_gets - lag(buffer_gets,1,0)
               over (partition by hash_value order by timestamp) = 0
               then 0
             when buffer_gets - lag(buffer_gets,1,0)
               over (partition by hash_value order by timestamp) > 0
               then buffer_gets - lag(buffer_gets,1,0)
                 over (partition by hash_value order by timestamp)
             when count(*) over (partition by hash_value) = 2
               then buffer_gets
             when buffer_gets - lag(buffer_gets,1,0)
               over (partition by hash_value order by timestamp) < 0
               then buffer_gets
             else buffer_gets - lag(buffer_gets,1,0)
               over (partition by hash_value order by timestamp)
           end total_bg
      from bob_sqlarea, dba_users
     where bob_sqlarea.parsing_user_id = dba_users.user_id
  )
 group by sql_text
 order by case when sum(total_execs) > 0
               then round(sum(total_bg)/sum(total_execs),0)
               else sum(total_bg) end desc;
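The heart of those CASE expressions is simpler than the SQL makes it look: compare each snapshot of a cumulative counter to the previous one. If it grew, the delta is real work done; if it shrank, the counter was reset by a flush or a bounce, so the new value itself is the delta since the reset. Here’s a toy Python sketch of that idea (my own illustration, not part of the report):

```python
def total_from_snapshots(counters):
    """Sum deltas across periodic snapshots of a cumulative counter.
    A drop in the counter means a flush/bounce reset it, so the new
    value is itself the delta since the reset."""
    total = 0
    prev = 0
    for c in counters:
        if c >= prev:
            total += c - prev   # counter kept growing: normal delta
        else:
            total += c          # counter reset: count from zero again
        prev = c
    return total
```

For example, snapshots of [100, 250, 40, 90] mean 250 executions happened before the bounce and 90 after it, for a true total of 340 rather than the misleading final value of 90.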

With this query, you can see a truer picture of what SQL your database is actually running, despite the bounces, crashes, and general badness the app throws at it. It should save you some time in identifying heavy resource-hogging queries, because the queries are grouped together despite their unbound literals, and the numbers will make sense even if the DB has been bounced.

This is what you’ll see (indented for readability):

3 12 4 SELECT MUMP_CREATE_DT "CREATEDT", MUMP_MODIFY_DT "MODIFYDT"
    , MUMP_BREAKFAST "MUMP_BREAKFAST", MUMP_DAY_NUMBER "MUMP_DAY_NUMBER"
    , MUMP_DINNER "MUMP_DINNER", MUMP_ID "MUMP_ID"
    , MUMP_LUNCH "MUMP_LUNCH", MUMP_SNACK_@ "MUMP_SNACK_@"
    , MUMP_SNACK_@ "MUMP_SNACK_@", MUMP_SNACK_@ "MUMP_SNACK_@"
    , MUMP_STATUS "MUMP_STATUS", MUMP_USER_ID "MUMP_USER_ID"
    FROM MP_USER_MEAL_PLAN WHERE ((MUMP_ID=@))
11 44 4 SELECT MODIFY_DT "MODIFY_DT" FROM UT_DATAPOINT WHERE (ID=@)
139 558 4 SELECT MODIFIED_BY "MODIFIED_BY"
    , MODIFY_DT "MODIFY_DT" FROM UQ_ANSWER WHERE (ID=@)
1916 6792 4 SELECT ID FROM UQ_USER_SNAPSHOT WHERE USER_ID = :B@ AND QUESTIONNAIRE_ID = :B@
1449 5850 4 SELECT ID FROM UQ_RESPONSE_PROFILE WHERE USER_ID = @

In case you’re wondering who the “bob” is for “bob_sqlarea”, feel free to shoot me an email :-)

In the second part of this series, I’ll be highlighting techniques that use the Oracle Wait interface to identify performance issues.


About the Author

Shakir Sadikali is a senior database specialist with The Pythian Group. His areas of interest are scalability, data-modeling, and performance tuning.

3 Comments

“In the hands of an experienced tuner, the cost of getting more out of the whole system does not compare with the cost of a hardware upgrade. It’s just apples and oranges… and my apples are much cheaper than their oranges.”

This breaks down when the hardware is approximately 3 generations old, where generation is defined as the doubling of compute power per unit cost (commonly considered to be about 18 months). Now, if the experienced tuner only works on new systems where “I upgraded to the new version of Oracle and now it runs slower!” a tuner may be your sort of apple. In my orchard, systems running badly for years before I see them is pretty common. So the apples and oranges really means non-exclusivity – you can fix some stuff (tuning) before making a more global fix (capital expenditure). And doing it the other way around (or preferably, iteratively) can make for happy customers.

Let’s see… Apples are $60/hour and Oranges are $40,000… but a happy customer will buy both!


“It takes unbound SQL and strips away the bind variables.”

Am I missing something, or does this routine actually strip away literals, rather than bind variables?

Good Database Design is Mightier than Hardware
March 7, 2008 10:03 am

[…] one of my previous blog posts, I mentioned something along the lines of—well I’ll just cut and paste . . . In my […]

