Does Oracle 11g’s Result Cache Scale Poorly?


In my previous blog entry, I explained why I would expect the Result Cache not to scale well. Unfortunately, at the time that entry was written, I had no access to hardware with more than two cores. That left me in an everything-but-the-proof state. "Theory without practice is sterile." (Albert Einstein)

Since then, I got a chance to re-run my test cases on a quad-core CPU, moving one step forward.

I re-executed my test cases with one to four concurrent processes against the Buffer Cache and the Result Cache, capturing the number of lookups per second. I also raised the number of iterations to 1M to make the results more stable.
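The original test cases are not reproduced in this post. As a rough sketch, with hypothetical table and column names, each process ran a tight lookup loop along these lines; the `/*+ RESULT_CACHE */` hint is what routes the query through the server-side result cache, while the un-hinted variant is served from the buffer cache as usual:

```sql
-- Hypothetical single-row lookup repeated 1M times per process.
-- Table T and bind :id are illustrative, not from the original test.
SELECT /*+ RESULT_CACHE */ padding
  FROM t
 WHERE id = :id;
```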

Here is what I got:

# of processes   Buffer Cache   % linear   Result Cache   % linear
1                33613          100%       35398          100%
2                65210          97.00%     68752          97.11%
3                96432          95.63%     99701          93.89%
4                124301         92.45%     127836         90.28%
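The "% linear" column is simply the measured throughput divided by ideal linear scaling, i.e. n times the single-process throughput. A small sketch reproducing it from the table's numbers:

```python
# Throughput (lookups/second) for 1..4 concurrent processes,
# taken from the table above.
buffer_cache = {1: 33613, 2: 65210, 3: 96432, 4: 124301}
result_cache = {1: 35398, 2: 68752, 3: 99701, 4: 127836}

def pct_linear(throughput):
    """Percentage of ideal linear scaling: measured / (n * single-process)."""
    base = throughput[1]
    return {n: 100.0 * t / (n * base) for n, t in throughput.items()}

for n, pct in pct_linear(result_cache).items():
    print(f"{n} processes: {pct:.2f}% of linear")
# 1 processes: 100.00% of linear
# 2 processes: 97.11% of linear
# 3 processes: 93.89% of linear
# 4 processes: 90.28% of linear
```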

Both approaches demonstrate almost linear scalability, with the Result Cache being slightly faster in every case. The single-latch problem is either non-existent, or four processes are not enough to saturate the latch. To clarify this, I also collected latch wait times:

# of processes   Buffer Cache: CBC latches (ms)   Result Cache: Latch (ms)   % per process
1                0                                0                          0
2                0                                421                        0.00036%
3                0                                22393                      0.01861%
4                0                                118454                     0.10214%
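These figures come from the latch statistics the instance keeps. A minimal way to pull the cumulative numbers for the result cache latch, assuming the 11gR1 latch name used in this post ('Result Cache: Latch'; note that `v$latch` reports `wait_time` in microseconds):

```sql
-- Cumulative gets, misses, sleeps, and wait time for the
-- result cache latch since instance startup.
SELECT name, gets, misses, sleeps, wait_time
  FROM v$latch
 WHERE name = 'Result Cache: Latch';
```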

The important data is in the Result Cache column. Although the Result Cache: Latch waits are still insignificant, they grow very rapidly, at a rate greater than factorial. The reason I didn't notice them before is that, on a quad-core box with four concurrent processes, these waits are still too small to have any major effect on the results.
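The faster-than-factorial claim can be sanity-checked against the table's own numbers by comparing the step-to-step growth of the wait times with the growth of n!:

```python
from math import factorial

# Result Cache latch wait times for 2..4 processes (from the table above).
waits = {2: 421, 3: 22393, 4: 118454}

# Step-to-step growth of the waits vs. the growth of n!.
for n in (3, 4):
    wait_ratio = waits[n] / waits[n - 1]
    fact_ratio = factorial(n) / factorial(n - 1)
    print(f"n={n}: waits grew {wait_ratio:.1f}x, n! grew {fact_ratio:.0f}x")
# n=3: waits grew 53.2x, n! grew 3x
# n=4: waits grew 5.3x, n! grew 4x
```

At each step the waits grow faster than the factorial does, even though the absolute numbers are still tiny relative to the total run time.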



