Using Block Dumps to Read Uncommitted Transactions

Posted in: Technical Track

Or, Why is My Transaction So Big?

My team and I still use old-style rollback segments for one of my client’s 10g production databases; we never found the need to switch to automatic undo management. There are a number of 1GB rollback segments, sized that way to support large transactions. At the same time, since this is an OLTP system, we don’t want transactions bigger than 1GB.
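For context, such a segment might be defined along the following lines. The names and storage numbers are illustrative, not the client’s actual settings; the point is that MAXEXTENTS caps how much undo a single transaction can generate in the segment, roughly 1GB in this sketch.

-- Create and enable a fixed-ceiling rollback segment (illustrative sizes)
CREATE ROLLBACK SEGMENT rbs01
TABLESPACE rbs
STORAGE (INITIAL 10M NEXT 10M MINEXTENTS 20 MAXEXTENTS 100 OPTIMAL 200M);

ALTER ROLLBACK SEGMENT rbs01 ONLINE;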

For the past few weeks we’ve had a strange problem. One of the web calls would fill one of the rollback segments with a full 1GB of undo data. Eventually the session would roll back, but we couldn’t tell just what was filling up the transaction.

We could have installed LogMiner and analyzed the redo logs, but I had a better idea that required less time.
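For reference, the LogMiner route would have looked roughly like this; the log file path and the table filter below are placeholders:

-- Register a redo log and start LogMiner, using the online catalog as the dictionary
EXECUTE dbms_logmnr.add_logfile('/u01/oradata/prod/redo01.log', dbms_logmnr.new);
EXECUTE dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);

-- Mine the redo for the suspect table
SELECT operation, sql_redo FROM v$logmnr_contents WHERE table_name = 'SOME_TABLE';

EXECUTE dbms_logmnr.end_logmnr;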

First I looked at v$lock to see which (and how many) tables were being accessed. I got a list of about five tables. Not a big list, but we needed to know more than just the tables. We began to suspect a bug in the client’s system was causing a section of the code to loop endlessly, inserting or updating rows without committing. We wanted to know exactly which table was involved and, if possible, what data was in the inserts.
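A minimal sketch of that lookup, assuming the SID of the rolling-back session is known (the bind variable is a placeholder):

-- TM locks are table-level DML locks; for those, id1 is the object_id
SELECT o.owner, o.object_name, l.lmode
FROM v$lock l, dba_objects o
WHERE o.object_id = l.id1
AND l.sid = :sid
AND l.type = 'TM';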

I took samples of v$session_wait_history. There were lots of “db file sequential read” waits from the rollback process. Looking at the file numbers, I saw that some of them were on the rollback tablespace, while others were in other datafiles. It happened that most of the non-rollback reads were on a specific tablespace that held only one table. So now we knew which table it was (a logging table), but we still needed to find out what was being inserted (the table is insert-only, no updates).
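For “db file sequential read”, p1 is the file number and p2 the block number, so the waits map back to tablespaces with a simple join. A sketch, again with the session’s SID as a placeholder (the view only keeps the last 10 waits per session, hence the repeated sampling):

SELECT h.event, h.p1 AS file#, h.p2 AS block#, f.tablespace_name
FROM v$session_wait_history h, dba_data_files f
WHERE f.file_id = h.p1
AND h.sid = :sid
AND h.event = 'db file sequential read';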

Since this is a production database, we have a standby. I immediately stopped standby recovery and opened it read-only. Being a standby, it has no dirty blocks and needs no checkpoint. I dumped some of the blocks being rolled back, looked at their contents… and saw a large number of very similar records.
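The commands, roughly; the file and block numbers are placeholders for values taken from the wait samples above, and the dump lands in the session’s trace file:

-- On the standby, as SYSDBA
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;

-- Dump a suspect block
ALTER SYSTEM DUMP DATAFILE 42 BLOCK 12345;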

I took a few of them and decoded the hex data to extract the values of what was being inserted, and included that in an email to the client’s developers. I hope it will help them pinpoint the location of that loop.
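In a block dump, column values appear as hex byte strings (something like col 2: [5] 68 65 6c 6c 6f). A quick way to decode a VARCHAR2 value, with a made-up hex string as the example:

SELECT utl_raw.cast_to_varchar2(hextoraw('68656C6C6F')) AS decoded FROM dual;
-- returns 'hello'; for NUMBER columns, utl_raw.cast_to_number does the equivalent job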


About the Author

An Oracle ACE with a deep understanding of databases, application memory, and input/output interactions, Christo is an expert at optimizing the performance of the most complex infrastructures. Methodical and efficiency-oriented, he equates the role of an ATCG Principal Consultant in many ways to that of a data analyst: both require a rigorous sifting-through of information to identify solutions to often large and complex problems. A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.

4 Comments

Alex Gorbachev
June 21, 2007 10:36 am

Nice idea, but I think LogMiner would be a quick and easy method as well (and I suspect even easier, at least for me).
If I understood correctly, we couldn’t really trace it, right? It was already rolling back.

Alex Fatkulin
June 21, 2007 4:21 pm

And on 10g you don’t even need to set up anything:

select * from flashback_transaction_query;

Christo Kutrovsky
June 21, 2007 6:00 pm

Alex Fatkulin,

Although you can see uncommitted transactions in flashback_transaction_query, you can’t see what data they had. For example, on an insert you only see “insert” without the values. The view does have the “undo” SQL, but in the case of an insert that is only a “delete”.
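To illustrate, a query along these lines (the owner filter is a placeholder) shows the limitation; for an insert, undo_sql is just the compensating delete, with none of the inserted column values:

SELECT xid, operation, table_name, undo_sql
FROM flashback_transaction_query
WHERE table_owner = 'APP';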

Christo Kutrovsky
June 21, 2007 6:01 pm

Alex Gorbachev,

Yes, it was already rolling back, and we do not know the sequence of events that causes this to happen until it has happened.

Even though LogMiner sounds easy, this is an audited environment, and any new installations (even table creations) need to be approved.

