Why is Oracle Ignoring My Memory Parameters?

Posted in: Technical Track

I was recently asked by a colleague to look at why some memory parameters they had set were not being adhered to. Having done so, I thought a blog post was worthwhile on the subject. This is my first one (ever) so be gentle. :)

On the database in question (version 11.2.0.2), both JAVA_POOL_SIZE and LARGE_POOL_SIZE had been set to 256MB and 128MB respectively. This was clear from the spfile. However, the values were seemingly ignored when we started the database, and we ended up with the below values, taken from the alert log start-up sequence:

large_pool_size          = 768M
java_pool_size           = 768M
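
If you want to verify this for yourself, a quick sketch comparing what is stored in the spfile with the value the running instance is actually using (on RAC, v$spparameter returns one row per SID):

-- spfile setting vs. in-memory value for the two pools
SELECT sp.name, sp.display_value AS spfile_value, p.display_value AS runtime_value
FROM   v$spparameter sp
JOIN   v$parameter   p ON p.name = sp.name
WHERE  sp.name IN ('java_pool_size', 'large_pool_size');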

Why?

This is due to two factors:

  • The SGA granule size, which on this system is 134217728 bytes (128MB). This is calculated at start-up based on the total SGA size, and the value is visible through v$sgainfo or the hidden parameter "_ksmg_granule_size".
  • The number of sub-pools in the SGA. For this we can look at the hidden parameter "_kghdsidx_count", which is set to 6 on our system. This value is also calculated at instance start-up and is based on several factors, the most notable being CPU_COUNT: the more CPUs you have, the more sub-pools you get (to a maximum of 7). In my example, we have a CPU_COUNT of 24, giving 1 sub-pool for every 4 CPUs. There are other factors involved in calculating the number of sub-pools, and I have seen situations where this hidden parameter (by default) doesn't seem to accurately reflect the number of pools in use; I am still experimenting with this. If you do set "_kghdsidx_count" manually, that value will be used (but I would not recommend doing so without Oracle Support involvement). Both values can be checked with the queries sketched after this list.
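
The queries are a rough sketch; the X$ query has to be run as SYS, and hidden parameter names can change between versions:

-- Granule size as calculated at start-up
SELECT bytes FROM v$sgainfo WHERE name = 'Granule Size';

-- Hidden parameters (X$ views are only visible to SYS)
SELECT i.ksppinm AS parameter, v.ksppstvl AS value
FROM   x$ksppi i
JOIN   x$ksppcv v ON v.indx = i.indx
WHERE  i.ksppinm IN ('_ksmg_granule_size', '_kghdsidx_count');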

Now using the above information, we can see the calculation involved:

GranuleSize*#SubPools: 134217728*6 = 805306368 = 768M
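
Spelled out as a quick query (plain arithmetic, using the 256M and 128M we actually asked for):

-- Minimum size per pool, and the overhead against the requested values
SELECT 134217728 * 6 AS min_pool_bytes,                               -- 768M per pool
       (134217728 * 6) / 1024 / 1024 AS min_pool_mb,
       2 * (134217728 * 6) / 1024 / 1024 - (256 + 128) AS overhead_mb -- 1152M
FROM   dual;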

We therefore end up with a memory overhead of 1152M over what we asked for. On a large system like this, that's probably not too big a deal, I hear you say. Well, this time it may have contributed to an issue: we couldn't fit the SGA into the available huge pages. From the alert log:

***********************************************************
Huge Pages allocation failed (free: 4288 required: 4737)
Allocation will continue with default/smaller page size
***********************************************************

This could obviously have a big knock-on effect on performance with a large SGA (much documented; I won't go into that here).

Also worth noting: If you explicitly set either JAVA_POOL_SIZE or LARGE_POOL_SIZE to zero when using manual memory management, it does seem to be honored in the tests I have done.
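
For reference, the sort of test I mean is along these lines (the new values only take effect at the next restart):

-- Explicitly zero both pools in the spfile under manual memory management
ALTER SYSTEM SET java_pool_size  = 0 SCOPE = SPFILE;
ALTER SYSTEM SET large_pool_size = 0 SCOPE = SPFILE;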

The story doesn't stop there, though. STREAMS_POOL_SIZE was set explicitly to 67108864 (64M), but on start-up it actually ended up at 128M (i.e. one SGA granule). So the sub-pools do not seem to affect STREAMS_POOL_SIZE in the same manner, even with manual memory management. It is documented that rounding up to one granule will occur (but that is the case for all SGA areas).
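
A quick way to see the rounding for yourself is to compare the requested value with what was actually allocated; a sketch (component names are lower case in this view):

-- Requested value vs. allocated size after rounding to a granule
SELECT name, display_value FROM v$parameter WHERE name = 'streams_pool_size';

SELECT component, current_size
FROM   v$sga_dynamic_components
WHERE  component = 'streams pool';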

Automatic Memory Management

This database was using manual memory management. If you switch to using ASMM with SGA_TARGET (I haven’t tested behavior with MEMORY_TARGET), then the defaults seem to change again.

Some experiments on my own 11.2.0.3 database show that with SGA_TARGET set, JAVA_POOL_SIZE and LARGE_POOL_SIZE both default to the size of one SGA granule and do not seem to be affected by the number of sub-pools when ASMM is in use, even if you explicitly set them to zero. Again, STREAMS_POOL_SIZE behaves differently and will default to zero if you do not set any value; if you make use of Streams components, memory is transferred to it dynamically. If you set a minimum value for any of these pools, it will be rounded up to the nearest granule automatically.
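
As a rough sketch of that kind of setup (the sizes here are illustrative only, not a recommendation):

-- ASMM: SGA_TARGET drives the sizing; any pool you set explicitly becomes a
-- minimum and is rounded up to the nearest granule at the next start-up
ALTER SYSTEM SET sga_target      = 20G  SCOPE = SPFILE;
ALTER SYSTEM SET large_pool_size = 100M SCOPE = SPFILE;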

Worst Case…

In summary, when using manual memory management, remember that some of your memory pools could end up being much larger than you expect. This is particularly the case on systems with a very large SGA and CPU_COUNT. At 11gR2 you can end up with a granule size of 512M and a maximum of 7 sub-pools, which means the minimum size for each of the java and large pools is a whopping 3584M, for a combined total of 7168M. This is worth being aware of, I think, particularly since most people I speak with tend to use manual memory management for systems with a very large SGA. I have deliberately left out SHARED_POOL_SIZE because the calculations there get even messier. Another time maybe…

I would be very interested in other people’s findings on this. For instance, have you tried setting any of the hidden parameters manually to reduce the minimum sizes, and why did you need to do this?

Oracle Support References:

How To Determine Granule Size [ID 947152.1]
How To Determine The Default Number Of Subpools Allocated During Startup [ID 455179.1]


About the Author

Oracle Cluster Technical Lead for Pythian, follow me on Twitter @pauldtill

8 Comments

Hello Paul,

Nice post !

As you are interested to share experience around this (granule + number of subpool), here is mine:

Some times ago we got ORA-4031 during the startup of one of our database:

SQL>
ORACLE instance started.
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","GV$SGA","KGLS^d5bd6bd9","kglHeapInitialize:temp")

with the following parameter in place:

sga_target=51Gb
db_cache_size=48Gb
shared_pool_size=2Gb

Please don't ask why such values; they were explicitly requested by the customer ;-)

In our case the subpool and granule values:

Granule size: 128 MB
Number of subpools: 7

The issue is that there is not enough SGA space to allocate the shared pool (28 subpools * 128 MB according to MOS 947152.1).

Then we set "_ksmg_granule_size" to 64MB, and the instance was able to start without any ORA-4031.
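
For reference, the change was along these lines (and, as noted above, underscore parameters should only be touched with Oracle Support guidance):

-- 67108864 bytes = 64MB
ALTER SYSTEM SET "_ksmg_granule_size" = 67108864 SCOPE = SPFILE;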

Bertrand


Hi Bertrand,

Thanks for taking the time to reply on this. Did you consult with Oracle before changing the granule size or just do it based on the info in the note? Ever see any issues as a result of lowering it?

Thanks again,
Paul


Hello Paul,

You are welcome.

We did not consult with Oracle before lowering "_ksmg_granule_size", and we have not encountered any issues so far.

Bertrand


Hi Paul,

Great post and nice timing!
Just a few days ago I noticed a similar thing in one of the databases I manage and thought I had to investigate what it was… Not any more :)

Maris


Hi Maris,

Glad this helped a little and thanks for the feedback.

There is much more to figure out in this area I think, need to see what happens with memory_target too.

Paul


Hi Paul

Interesting first post…about time you took the plunge!

Timely too, given we're configuring memory on a 3-node RAC cluster for the next hardware cycle of the DW you and I were involved in putting together a few years back.

Take care
Jeff


Hi Jeff,

Good to hear from you and nice to see you follow our blog :)

I heard about the RAC project on the DW; sounds interesting. I bet there are some "size" challenges with the migration?

Cheers,
Paul


There are numerous challenges my friend, migration size being just one of them! RAC is a whole new world for us as well!

