Performance Tuning: HugePages in Linux

Posted in: Technical Track

Recently we resolved a major performance issue for one of our New York clients, quickly and efficiently. In this post, I will discuss the performance issue and its solution.

Problem statement

The client’s central database was intermittently freezing because of high CPU usage, and their business was severely affected. They had already worked with vendor support, but the problem remained unresolved.

Symptoms

The symptom was intermittent high kernel-mode CPU usage. The server had 4 dual-core CPUs with hyperthreading enabled and 20GB of RAM, running Red Hat Linux with a 2.6 kernel.

During these freezes, all CPUs were busy in kernel mode and the database was almost unusable. Even logins and simple SQL such as SELECT * FROM DUAL; took a few seconds to complete. A review of the AWR report did not help much, as expected, since the problem was outside the database.

Analyzing the situation with system activity reporter (sar) data, we could see that at 08:32 kernel-mode CPU usage spiked, and at 08:40 it reached almost 70%. It is also interesting to note that SADC (the sar data collector) itself suffered from this CPU spike: the collection scheduled for 08:30 completed two minutes late, at 08:32, as shown below.

A similar issue recurred at 10:50 AM:

07:20:01 AM CPU   %user     %nice   %system   %iowait     %idle
07:30:01 AM all    4.85      0.00     77.40      4.18     13.58
07:40:01 AM all   16.44      0.00      2.11     22.21     59.24
07:50:01 AM all   23.15      0.00      2.00     21.53     53.32
08:00:01 AM all   30.16      0.00      2.55     15.87     51.41
08:10:01 AM all   32.86      0.00      3.08     13.77     50.29
08:20:01 AM all   27.94      0.00      2.07     12.00     58.00
08:32:50 AM all   25.97      0.00     25.42     10.73     37.88 <--
08:40:02 AM all   16.40      0.00     69.21      4.11     10.29 <--
08:50:01 AM all   35.82      0.00      2.10     12.76     49.32
09:00:01 AM all   35.46      0.00      1.86      9.46     53.22
09:10:01 AM all   31.86      0.00      2.71     14.12     51.31
09:20:01 AM all   26.97      0.00      2.19      8.14     62.70
09:30:02 AM all   29.56      0.00      3.02     16.00     51.41
09:40:01 AM all   29.32      0.00      2.62     13.43     54.62
09:50:01 AM all   21.57      0.00      2.23     10.32     65.88
10:00:01 AM all   16.93      0.00      3.59     14.55     64.92
10:10:01 AM all   11.07      0.00     71.88      8.21      8.84
10:30:01 AM all   43.66      0.00      3.34     13.80     39.20
10:41:54 AM all   38.15      0.00     17.54     11.68     32.63 <--
10:50:01 AM all   16.05      0.00     66.59      5.38     11.98 <--
11:00:01 AM all   39.81      0.00      2.99     12.36     44.85
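For reference, the breakdown above is standard sar CPU output. If the collected sa files are still around, a command along these lines reproduces it (the file name saNN is illustrative; sysstat keeps the files under /var/log/sa on Red Hat):

sar -u -f /var/log/sa/saNN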

Performance forensic analysis

The client had access to a few tools, none of which were very effective. We knew there was excessive kernel-mode CPU usage; to understand why, we needed to look at various metrics around 08:40 and 10:10.

Fortunately, sar data was handy. Looking at free memory, we saw something odd. At 08:32, free memory was 86MB; by 08:40 it had climbed to 1.1GB. Similarly, free memory went from 78MB at 10:41 to 4.7GB at 10:50. So, within roughly ten minutes, free memory climbed to 4.7GB.

07:40:01 AM kbmemfree kbmemused  %memused kbbuffers  kbcached
07:50:01 AM    225968  20323044     98.90    173900   7151144
08:00:01 AM    206688  20342324     98.99    127600   7084496
08:10:01 AM    214152  20334860     98.96    109728   7055032
08:20:01 AM    209920  20339092     98.98     21268   7056184
08:32:50 AM     86176  20462836     99.58      8240   7040608
08:40:02 AM   1157520  19391492     94.37     79096   7012752
08:50:01 AM   1523808  19025204     92.58    158044   7095076
09:00:01 AM    775916  19773096     96.22    187108   7116308
09:10:01 AM    430100  20118912     97.91    218716   7129248
09:20:01 AM    159700  20389312     99.22    239460   7124080
09:30:02 AM    265184  20283828     98.71    126508   7090432
10:41:54 AM     78588  20470424     99.62      4092   6962732  <--
10:50:01 AM   4787684  15761328     76.70     77400   6878012  <--
11:00:01 AM   2636892  17912120     87.17    143780   6990176
11:10:01 AM   1471236  19077776     92.84    186540   7041712
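Likewise, the free/used memory figures above come from sar’s memory report, e.g.:

sar -r -f /var/log/sa/saNN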

This tells us that there is a correlation between the CPU usage and the increase in free memory. If free memory goes from 78MB to 4.7GB, then the paging and swapping daemons must be working very hard: releasing 4.7GB of memory to the free pool means a sharp increase in paging/swapping activity, and that in turn drives the massive kernel-mode CPU usage.

Most likely, many of the SGA pages can also be paged out, since the SGA is not locked in memory.

Memory breakdown

The client’s question was: if paging/swapping is indeed the issue, then what is using all the memory? It’s a 20GB server, the SGA is 10GB, and no other application is running. It gets a few hundred connections at a time, and pga_aggregate_target is set to 2GB. So why would it be suffering from memory starvation? And if memory is the issue, how can there be 4.7GB of free memory at 10:50 AM?

Modern OS architectures are designed to use all available memory, so the paging daemons don’t wake up until free memory falls below a certain threshold. It’s possible for free memory to drop near zero and then climb back quickly as the paging/swapping daemons work harder and harder. This explains why free memory went down to 78MB and rose to 4.7GB ten minutes later.
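The thresholds involved can be checked on the server itself; a quick peek, assuming a reasonably recent 2.6 kernel and sysstat install:

cat /proc/sys/vm/min_free_kbytes   # free-memory floor (in KB) below which reclaim kicks in
sar -B                             # paging activity for the day: pgpgin/s, pgpgout/s, faults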

What is using my memory though? /proc/meminfo is useful in understanding that, and it shows that the pagetable size is 5GB. How interesting!

Essentially, the page table is the mapping mechanism between virtual and physical addresses. With the default OS page size of 4KB and an SGA size of 10GB, there are about 2.6 million OS pages for the SGA alone. (Read Wikipedia’s entry on page tables for more background.) For this server’s 20GB of total memory, there are over 5 million OS pages. Managing all these pages is an enormous workload for the paging/swapping daemon.
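The back-of-the-envelope arithmetic behind those page counts, as plain shell arithmetic (4KB base pages):

echo $(( 10 * 1024 * 1024 / 4 ))   # 10GB SGA / 4KB page size = 2621440 pages
echo $(( 20 * 1024 * 1024 / 4 ))   # 20GB RAM / 4KB page size = 5242880 pages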

cat /proc/meminfo

MemTotal:     20549012 kB
MemFree:        236668 kB
Buffers:         77800 kB
Cached:        7189572 kB
...
PageTables:    5007924 kB  <--- 5GB!
...
HugePages_Total:     0
HugePages_Free:      0
Hugepagesize:     2048 kB

HugePages

Fortunately, we can use HugePages in this version of Linux. There are a couple of important benefits to HugePages:

  1. The page size is 2MB instead of 4KB.
  2. Memory used by HugePages is locked and cannot be paged out.

With a page size of 2MB, a 10GB SGA needs only about 5,000 pages, compared to 2.6 million pages without HugePages, which drastically reduces the page table size. Also, HugePages memory is locked, so the SGA can’t be swapped out, and the working set the paging/swapping daemon has to manage becomes much smaller.
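The same arithmetic with a 2MB page size shows the reduction:

echo $(( 10 * 1024 / 2 ))   # 10GB SGA / 2MB page size = 5120 pages, roughly 512x fewer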

To set up HugePages, the following changes must be completed:

  1. Set the vm.nr_hugepages kernel parameter to a suitable value. In this case, we decided to use 12GB and set the parameter to 6144 (6144*2M=12GB). You can run:
    echo 6144 > /proc/sys/vm/nr_hugepages

    or

    sysctl -w vm.nr_hugepages=6144

    Of course, you must make sure this is set across reboots too (see the consolidated configuration sketch after this list).

  2. The oracle userid needs to be able to lock a larger amount of memory, so /etc/security/limits.conf must be updated to increase the soft and hard memlock values for the oracle userid:
    oracle          soft    memlock        12582912
    oracle          hard    memlock        12582912
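A consolidated sketch of the persistent configuration (the values below are the ones chosen for this client; adjust them to your own SGA size and distribution):

# /etc/sysctl.conf -- reserve 6144 x 2MB HugePages at boot
vm.nr_hugepages = 6144

# /etc/security/limits.conf -- allow the oracle user to lock up to 12GB (values in KB)
oracle   soft   memlock   12582912
oracle   hard   memlock   12582912

Run sysctl -p to re-read /etc/sysctl.conf, and restart the instance so the SGA is allocated from the HugePages pool.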

After setting this up, we need to make sure that the SGA is indeed using HugePages. The value (HugePages_Total - HugePages_Free) * 2MB will be the approximate size of the SGA (it should roughly match the shared memory segment shown in the output of ipcs -ma).

cat /proc/meminfo |grep HugePages
HugePages_Total:  6144
HugePages_Free:   1655 <-- Free pages are less than total pages.
Hugepagesize:     2048 kB
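A one-liner that does the (HugePages_Total - HugePages_Free) * 2MB arithmetic directly (assuming a 2MB Hugepagesize):

awk '/HugePages_Total/ {t=$2} /HugePages_Free/ {f=$2} END {print (t-f)*2 " MB of HugePages in use"}' /proc/meminfo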

Summary

Using HugePages resolved our client’s performance issues. The page table size also went down to a few hundred MB. If your database runs on Linux and the kernel supports HugePages, there is no reason not to use them.

This can be read in a presentation format at Investigations: Performance and hugepages (PDF).

 




30 Comments

Listener Coredumps on heavy load system | ora-solutions.net - Martin Decker
November 10, 2008 4:21 pm

[…] After contacting Oracle Support with this stack, they confirmed it to be Bug #6752308 which was closed as Duplicate of Bug 6139856. There is patch for 10.2.0.3 available and they also recommend to implement hugepages. By the way, there is an interesting article on the effect of utilizing – or not utilizing – hugepage… […]


What you have found is that a badly tuned VM system can cause trouble. Your solution was to exempt a large part of the system memory from the paging system. Of course, there is a price to pay for that, too: you have to turn off dynamic SGA sizing, a very convenient feature in 10.2. In other words, you need to set up the shared pool, buffer cache, large pool and java pool and get them right based on rules of thumb. I have tested huge pages and found out that there is not much difference with the VM parameters set right. In other words, a hugepages setup is a crutch, and you pay a high price for using that crutch. I prefer doing things the right way, and that is to correctly set up the paging parameters.
Kindest regards,
Mladen Gogala

Polarski Bernard
November 12, 2008 2:51 am

Thanks for sharing this invaluable experience.

Riyaj Shamsudeen
November 12, 2008 11:06 am

Hi Mladen
Thanks for reading our blog.
I am afraid there is some nomenclature confusion here. dynamic_sga is a term associated with 9i. You probably are referring to ASMM (Automatic Shared Memory Management) or VLM. Are you saying that use of hugepages will exclude use of ASMM? I doubt that.
So, VLM [ _use_indirect_buffers] is what’s in question. Well, in a 64 bit software, there is no need for indirect buffers. In a 32 bit environment, I guess, it needs to be carefully considered: Effect of SGA size increase vs effect of excessive scanning for free pages. Either way, I am biased against indirect buffers due to its overhead.

But, specifically,
1. How would you control paging daemons from scanning 10GB SGA pages, looking for free memory?
2. How would you reduce size of paging tables using just vm setup?

Cheers
Riyaj


I’m just curious: I’ve got systems with Linux kernel 2.6 and SGAs between 4 and 10 GB, with 7 to 12 GB of RAM installed, but I’ve never seen such behaviour. What does PageTables: 5007924 kB mean?
On my systems I’ve never seen such an increase in free memory. Why?


Riyaj, your blog is great and I read it on a regular basis. Let me answer your questions:
1) ASMM and HugePages are mutually exclusive, at least in Oracle10g. Look at ML note 317141.1, which explicitly asks you to remove SGA_TARGET. And yes, I am a bit old, my nomenclature is from 9i. I do prefer descriptive names like “dynamic SGA management” to the alphabet soup like “ASMM”.
2) There is no scanning of 10GB of memory. The system is scanning page tables, not the pages themselves. Page tables are 4096 times smaller. The scanning, however, is not a problem. If you leave enough free memory, searching for free memory will not be a problem. In particular, setting vm.min_free_kbytes to 1048576 would make sure that the system always maintains 1GB of free memory. Also, setting vm.overcommit to 1 would eliminate the need for checking swap every time memory is allocated. The page cluster should be set to 5, to enable fast writing where possible. Also, you should turn off that pesky swappiness as it would devour resources needlessly.
Kindest regards,
Mladen Gogala


Hi,

Just curious, how will increasing vm.min_free_kbytes make any difference? Let’s say I set this to 5% of my memory, won’t the kernel still scan PageTables when memory usage hits 95%?

We’re experiencing an issue similar to this, where whenever a system releases a large amount of PageTables (1-2gig’s worth), we see a sharp spike in System CPU. Is there anything we can do to prevent this, or is hugepages the only option?


I am curious too. From what I read, this is somehow related to the Linux kernel page cache implementation; it may also be affected by NUMA if it’s enabled.

I would really like to see a more detailed example of VM tuning from Mladen.

Log Buffer #123: a Carnival of the Vanities for DBAs
November 14, 2008 12:54 pm

[…] on the Pythian Group Blog, Riyaj Shamsudeen contributed an item on performance tuning with HugePages in Linux, showing again the real advantages of knowing your way around the host […]

Riyaj Shamsudeen
November 17, 2008 12:21 pm

Hi Mladen

Thanks for your kind words.

1. I just tested it out on my Linux server running a 2.6 kernel. ASMM uses hugepages, as long as the available hugepages are greater than sga_max_size. The ML note you referred to is for 32-bit + use_indirect_buffers, and of course, use_indirect_buffers will not work with hugepages. But ASMM itself works fine with hugepages.

11g AMM will not work with hugepages though.

2. You are right, I should have said the "5GB page table" needs to be scanned. Nevertheless, scanning 5GB of page tables will consume an enormous amount of CPU.

Thanks for those paging parameters. I see your point that if all these parameters are set up optimally, we might be able to reduce this effect.
I would still prefer to keep the page table itself much smaller, for two reasons: 1) a bigger page table results in higher CPU usage by user processes due to more TLB misses, and 2) page table memory is unnecessarily wasted. For example, in this specific scenario, after setting up hugepages the page table size went down from 5GB to 400MB, a net gain of 4.4GB. We could allocate this memory to the SGA, allowing a further gain.

Hi Cristian
Thanks for reading our blog. We might need more data to understand your specific situation.

Cheers
Riyaj

Hugepages revisited II: Be aware of kernel bugs! | ora-solutions.net - Martin Decker
January 7, 2009 6:31 am

[…] can reduce the overhead of managing memory pages of Oracle SGA by the operating system thus leading to lower system cpu utilization. I have written two blog entries regarding this topic already: Listener Coredumps on heavy load […]


Riyaj,

“It gets a few hundred connections at a time” is key to this whole thread and deserves more attention. The multiplier effect of non-shared page tables is what was eating up so much memory. For every foreground process the system required 10MB of page tables. There seem to have been about 500 dedicated connections at the time meminfo was examined.

As for “Most likely, much of SGA pages also can be paged out, since SGA is not locked in memory”, it is true that SGA pages can be swapped out…*only if* they have been touched by only one process. Multiply referenced shared pages do not get swapped; allowing this sort of swapping would cause a horrible chain reaction. After all, there is more than one process using SGA pages. That aside, the page tables mapping the SGA are swappable. It is easy to account for the 4.7GB leaps in available memory you measured with sar by the simple swapping of the data, stack and page tables of just a percentage of the huge number of processes running on the system.

Mladen is right about the cost of losing AMM from a manageability perspective.

I blogged about 11g AMM quite a while ago as well:

https://kevinclosson.wordpress.com/2007/08/23/oracle11g-automatic-memory-management-and-linux-hugepages-support/

Kevin Closson's Oracle Blog: Platform, Storage & Clustering Topics Related to Oracle Databases
February 17, 2009 12:38 pm

Oracle11g Automatic Memory Management – Part I. Linux Hugepages Support….

I spent the majority of my time in the Oracle Database 11g Beta program testing storage-related aspects of the new release. To be honest, I didn’t even take a short peek at the new Automatic Memory Management feature. As I pointed out the other d…

Suraj Sharma
March 4, 2009 11:18 am

Hi,

Thanks a lot for such meaningful and rare information about HugePages. I have a question though (I may be confused or did not read your blog properly).

My question is:

How do we calculate the HugePages? Will it be like this:

Let’s assume my SGA is 4GB and my Hugepagesize is 2048KB; then my HugePages would be 4*1024*1024 (to convert it into KB) divided by 2048?

(4*1024*1024)/2048 = 2048 (round off to 3000)???

Please correct me if I am wrong. Also, let me know how the same will work on 32-bit Linux.


Suraj, here is a quick way to calculate the Hugepages.

Hugepages is not a derived value, but an optional setting if you want to use Hugepages. So, if the number of Hugepages is set to 2048, then you would have allocated 4GB (2048 * 2MB, assuming Hugepagesize = 2MB) of Hugepages space.

You can now create the SGA, which is allocated from the Hugepage space.


Excellent article – thanks.
We are implementing a number of recommendations from Oracle’s RAC Assessment, and HugePage support was one of the high impact recommendations.
..
Also – I love your redesigned WEB site.. I found the previous one confusing and “busy”


Great article. It explains use of HugePages in a way that is easy to understand. This is the best article that I’ve found on it.

Kenneth Holter
February 25, 2011 9:32 am

Hi and thanks for the great article on hugepages.

I’m learning about Linux memory and hugepages, and thought I’d experiment with it by replicating the issue you described in your post. First, I’ve written a small C program (https://pastie.org/1606226) that basically just eats 20 GB of memory. After running the program on my RHEL 5 server I was expecting the PageTable to be huge, but found that it was only about 43 MB. The page size on my RHEL box is 4 kB.

Can someone here maybe clear up why I’m not seeing such a major PageTable size issue like the one described in the post, and perhaps how to reproduce it (without actually running a database)?

Best regards,
Kenneth Holter


Nice post.


Hi Riyaz,

Great Article.
One question – in our environment we already have hugepages set up; however, when I’m running AWR and ADDM, their findings are “virtual memory paging” with 100% impact, and the OS is experiencing significant paging.

How do we resolve this ?

Regards
Syed


Note that on recent kernels (2.6.38, or RHEL/CentOS6), Transparent Hugepages should be automatically available. Zero config, all the fun.

See https://events.linuxfoundation.org/slides/2011/lfcs/lfcs2011_hpc_arcangeli.pdf

Quickcheck:
grep trans /proc/vmstat
grep AnonHuge /proc/meminfo

Kirill Loifman
November 1, 2016 4:44 am

I recommend disabling Transparent Hugepages and enabling the original HugePages mechanism, especially when you use RAC and/or Oracle IM.
— Kirill Loifman

HugePages Overhead « OraStory
May 30, 2012 3:08 pm

[…] Riyaj Shamsudeen – Performance tuning: Hugepages in Linux […]

Linux HugePages and virtual memory (VM) tuning | IT World
July 12, 2012 5:33 am

[…] Performance tuning: HugePages in Linux […]


I have a daemon with 90GB of memory used and only a 186MB page table. Is something not right in your calculations?

14:33 [email protected]:~ $ ssh ***
Last login: Wed Jul 25 10:28:44 2012 from 192.168.3.181
[email protected]:~$ ps -eF | grep mmd
marko 2616 2569 0 1898 892 8 10:33 pts/0 00:00:00 grep mmd
nobody 11401 1 42 23487968 93628400 3 Jul15 ? 4-07:58:51 mmd: France
[email protected]:~$ cat /proc/meminfo | grep -i page
AnonPages: 93876336 kB
PageTables: 186200 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB


> I have daemon with 90GB memory used and only 186MB page table.
Your OS uses transparent hugepages – see “AnonPages: 93876336 kB”.


Redhat Linux Oracle 11.2.0.3 & 10.2.0.5
—————————————

#############
Server 1 – NO Huge Pages
#############

This is a server where Huge Pages are not configured…
Server hosts both 11g and 10g Databases

HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 2048 kB

pick one 11g instance in server:

Starting ORACLE instance (normal)
****************** Large Pages Information *****************

Total Shared Global Region in Large Pages = 0 KB (0%)

Large Pages used by this instance: 0 (0 KB)
Large Pages unused system wide = 0 (0 KB) (alloc incr 4096 KB)
Large Pages configured system wide = 0 (0 KB)
Large Page size = 2048 KB

–OK, no Large Pages as none is available

tsopl11[uxarbuai006:/home/oracle]$ sysresv

IPC Resources for ORACLE_SID “tsopl11” :
Shared Memory:
ID KEY
81002502 0x00000000
81035271 0x00000000
81068066 0xee9b42e0
Semaphores:
ID KEY
1354104896 0x1b8640f8
Oracle Instance alive for sid “tsopl11”

–pmon (pmap -x)

tsopl11[uxarbuai006:/oracle/diag/rdbms/tsopl11_500/tsopl11/trace]$ e deleted <
0000000060000000 12288 – – – rw-s- [ shmid=0x4d40006 ]
0000000060c00000 528384 – – – rw-s- [ shmid=0x4d48007 ]
0000000081000000 2048 – – – rw-s- [ shmid=0x4d50022 ]

/proc/4103/maps

tsopl11[uxarbuai006:/oracle/diag/rdbms/tsopl11_500/tsopl11/trace]$ 3/maps <
60000000-60c00000 rw-s 00000000 00:06 81002502 /SYSV00000000 (deleted)
60c00000-81000000 rw-s 00000000 00:06 81035271 /SYSV00000000 (deleted)
81000000-81200000 rw-s 00000000 00:06 81068066 /SYSVee9b42e0 (deleted)

OK, so we have 3 segments of different sizes for the SGA, using standard pages…

Now for a 10g database in that same server we see:

0000000060000000 4196352 – – – rw-s- [ shmid=0x4f1001c ]

sactsar[uxarbuai006:/home/oracle]$ grep -e shmid -e deleted /proc/28087/maps
60000000-160200000 rw-s 00000000 00:06 82903068 /SYSVa149d940 (deleted)

We only have one segment…

Question:
Is this expected? Did the allocation change between 10g and 11g so that the latter attaches to 3 shared segments instead of 1?
If so, why does Oracle 11g use multiple segments?

################
Server 2 – Huge Pages
################

This server uses Huge Pages and hosts 11g databases

HugePages_Total: 9216
HugePages_Free: 0
Hugepagesize: 2048 kB

SGAs are larger than available Huge Pages, thus some instances have started up
using huge pages, some using standard and some both…

####
Instance with 0 Large Pages
####

Starting ORACLE instance (normal)
****************** Large Pages Information *****************

Total Shared Global Region in Large Pages = 0 KB (0%)

Large Pages used by this instance: 0 (0 KB)
Large Pages unused system wide = 0 (0 KB) (alloc incr 4096 KB)
Large Pages configured system wide = 9216 (18 GB)
Large Page size = 2048 KB

RECOMMENDATION:
Total Shared Global Region size is 534 MB. For optimal performance,
prior to the next instance restart increase the number
of unused Large Pages by atleast 267 2048 KB Large Pages (534 MB)
system wide to get 100% of the Shared
Global Region allocated with Large pages
***********************************************************

phamn[uxuselkg044:/home/oracle]$ sysresv

IPC Resources for ORACLE_SID "phamn" :
Shared Memory:
ID KEY
1576992843 0x00000000
1577058381 0x00000000
1577582685 0x00000000
..truncated
1583186184 0x00000000
1583251722 0x00000000
1583284491 0x00000000
1583317260 0x00000000
1583382798 0x00000000
1583481105 0x00000000
1583546643 0x2793854c
Semaphores:
ID KEY
1309212736 0xd423d1ac
Oracle Instance alive for sid "phamn"

phamn[uxuselkg044:/home/oracle]$ pmap -x 12341 | grep -e shmid -e deleted
0000000060000000 4096 – – – rw-s- [ shmid=0x5dff004b ]
0000000060400000 8192 – – – rw-s- [ shmid=0x5e00004d ]
0000000060c00000 4096 – – – rw-s- [ shmid=0x5e01004f ]
0000000061000000 4096 – – – rw-s- [ shmid=0x5e020051 ]
0000000064000000 4096 – – – rw-s- [ shmid=0x5e0e806a ]
0000000064400000 4096 – – – rw-s- [ shmid=0x5e0f806c ]
..truncated
0000000078c00000 4096 – – – rw-s- [ shmid=0x5e5f010b ]
0000000079000000 4096 – – – rw-s- [ shmid=0x5e5f810c ]
0000000079400000 4096 – – – rw-s- [ shmid=0x5e60810e ]
0000000079800000 126976 – – – rw-s- [ shmid=0x5e620111 ] <– ?? Why all 4k and this one larger?
0000000081400000 2048 – – – rw-s- [ shmid=0x5e630113 ] <– ?? Why some 8k, others 4k and others 2k? is this expected?

OK, it allocated all standard pages, yet way more than the server NOT configured to use Huge Pages

####
Instance with 100% Large Pages
####

Alert log reads:

****************** Large Pages Information *****************

Total Shared Global Region in Large Pages = 1074 MB (100%)

Large Pages used by this instance: 537 (1074 MB)
Large Pages unused system wide = 4118 (8236 MB) (alloc incr 16 MB)
Large Pages configured system wide = 9216 (18 GB)
Large Page size = 2048 KB
***********************************************************

pspcn[uxuselkg044:/home/oracle]$ sysresv

IPC Resources for ORACLE_SID "pspcn" :
Shared Memory:
ID KEY
1575813166 0x00000000
1575878704 0x00000000
1576599617 0x97335c80
Semaphores:
ID KEY
1308688444 0x15343d60
Oracle Instance alive for sid "pspcn"

phamn[uxuselkg044:/home/oracle]$ pmap -x 11565 | grep -e shmid -e deleted
0000000060000000 16384 – – – rw-s- 223 (deleted)
0000000061000000 1081344 – – – rw-s- 225 (deleted)
00000000a3000000 2048 – – – rw-s- 242 (deleted)

pspcn[uxuselkg044:/home/oracle]$ grep -e shmid -e deleted /proc/11565/maps
60000000-61000000 rw-s 00000000 00:0a 1575813166 /223 (deleted)
61000000-a3000000 rw-s 00000000 00:0a 1575878704 /225 (deleted)
a3000000-a3200000 rw-s 00000000 00:0a 1576599617 /242 (deleted)

OK, it seems the instance is indeed using 100% large pages… as we have no "shmid" entry

######
Database with "MIXED" pages
######

Starting ORACLE instance (normal)
****************** Large Pages Information *****************

Total Shared Global Region in Large Pages = 4000 MB (99%)

Large Pages used by this instance: 2000 (4000 MB)
Large Pages unused system wide = 0 (0 KB) (alloc incr 16 MB)
Large Pages configured system wide = 9216 (18 GB)
Large Page size = 2048 KB
..
***********************************************************

And we see:

phamn[uxuselkg044:/home/oracle]$ pmap -x 24895 | grep -e shmid -e deleted
0000000060000000 32768 – – – rw-s- 249 (deleted)
0000000062000000 4063232 – – – rw-s- 250 (deleted)
000000015a000000 2048 – – – rw-s- [ shmid=0x5e898110 ] <– ?? Why some 2k instead of 4k? is this expected?

Seems it's using both Large and standard

########
Another "MIXED" pages Instance
########

****************** Large Pages Information *****************

Total Shared Global Region in Large Pages = 32 MB (4%)

Large Pages used by this instance: 16 (32 MB)
Large Pages unused system wide = 0 (0 KB) (alloc incr 4096 KB)
Large Pages configured system wide = 9216 (18 GB)
Large Page size = 2048 KB
..
***********************************************************

And as expected…

pelnn[uxuselkg044:/home/oracle]$ pmap -x 11790 | grep -e shmid -e deleted
0000000060000000 12288 – – – rw-s- 245 (deleted)
0000000060c00000 12288 – – – rw-s- 246 (deleted)
0000000061800000 8192 – – – rw-s- 247 (deleted)
0000000062000000 4096 – – – rw-s- [ shmid=0x5dfd0048 ]
0000000062400000 4096 – – – rw-s- [ shmid=0x5dfd8049 ]
0000000062800000 4096 – – – rw-s- [ shmid=0x5dfe004a ]
0000000062c00000 4096 – – – rw-s- [ shmid=0x5dff804c ]
0000000063000000 4096 – – – rw-s- [ shmid=0x5e00804e ]
0000000063400000 4096 – – – rw-s- [ shmid=0x5e018050 ]
0000000063800000 4096 – – – rw-s- [ shmid=0x5e028052 ]
0000000063c00000 4096 – – – rw-s- [ shmid=0x5e038054 ]
truncated…
000000007a000000 4096 – – – rw-s- [ shmid=0x5e61010f ]
000000007a400000 286720 – – – rw-s- [ shmid=0x5e628112 ] <– ?? Why all 4k and this one larger?
000000008bc00000 2048 – – – rw-s- [ shmid=0x5e638114 ] <– ?? Why some 8k, others 4k and others 2k? is this expected?

Questions:
– Why do we see many 4k pages and one 126976k chunk when using STANDARD or MIXED pages?
– Is this the way it works when huge pages are enabled? See the differences between this server and the one that uses NO huge pages…
– Why, when using both large and standard, we see 2k, 4k, 8k and one large chunk instead of uniform sizes?


Hello.
I run a database farm of development servers, and we host around 700 databases from 10gR1 to 11gR2. Some servers are a mess and I have to improve their performance, among other things.
Now, I was trying to implement hugepages on it but I found this output from the hugepages_setting.sh script:

vm.nr_hugepages = 36569

However, physical memory is 48GB, and since this is x86_64, we are talking about 2MB hugepages.

36569 * 2 = Something bigger than 48gb :P

Thanks,
Alex.


Thank you Riyaj Shamsudeen for the great post…


TOP Article, nice to read and understand!!!

thx Riyaj!!!!

