How to Enable RAM Cache in Oracle Exadata


A brief background on RAM cache or in-memory OLTP acceleration

In this post I will present a feature — to simplify, let’s call it RAM cache — introduced in Oracle Exadata Storage Server image 18.1. You’ve probably heard a lot about the new Oracle Exadata X8M and its Intel Optane DC Persistent Memory (PMem). This combination of feature and architecture allows database processes running on the database servers to read from and write to the PMem cards in the storage servers remotely, through a protocol called Remote Direct Memory Access (RDMA). You can find the detailed Oracle Exadata X8M deployment process here.

It’s true that RDMA has existed in the Oracle Exadata architecture from the beginning, as Oracle points out in their blog post titled, Introducing Exadata X8M: In-Memory Performance with All the Benefits of Shared Storage for both OLTP and Analytics:

RDMA was introduced to Exadata with InfiniBand and is a foundational part of Exadata’s high-performance architecture.

What you may not know is that there’s a feature called in-memory OLTP acceleration (or simply RAM cache), introduced in Oracle Exadata Storage Server image 18.1.0.0.0 when Oracle Exadata X7 was released. This feature allows read access to the storage server RAM on any Oracle Exadata system (X6 or higher) running that image version or above. Although this is not the same as PMem, since RAM is not persistent, it is still very cool: it allows you to take advantage of the RAM available in the storage servers.

In-memory OLTP acceleration (or simply RAM cache).

Modern generations of Exadata storage servers come with a lot of RAM. For comparison, X7 and X8 come with 192GB of RAM by default, as opposed to the 128GB that came with X6.

Unfortunately, the RAM cache feature is only available on X6 or later storage servers, and these are the requirements (a quick way to check the image version of your cells is shown right after the list):

  • Oracle Exadata System Software 18c (18.1.0).
  • Oracle Exadata Storage Server X6, X7 or X8.
  • Oracle Database version 12.2.0.1 April 2018 DBRU, or 18.1 or higher.
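
If you want to confirm the image version your cells are actually running before going further, a quick check from a database server looks like this (assuming root SSH equivalence to the cells and an up-to-date cell_group file):

dcli -l root -g cell_group "cellcli -e list cell attributes name, releaseVersion"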

That large amount of RAM is rarely fully utilized by the Oracle Exadata storage servers. This RAM cache feature allows you to use all or part of the available RAM in the storage servers. Doing this extends your database buffer cache to the storage server’s RAM for read operations.

In the new Oracle Exadata X8M, I/O latency for read operations is under 19µs, thanks to the PMem cache combined with the RoCE (RDMA over Converged Ethernet) network. In Oracle Exadata X7/X8, read I/O latency with RAM cache, using RDMA over InfiniBand, is around 100µs. Without RAM cache the number goes up to 250µs, reading directly from the flash cache. The following information is from the Oracle Exadata Database Machine X8-2 data sheet:

For OLTP workloads Exadata uniquely implements In-Memory OLTP Acceleration. This feature utilizes the memory installed in Exadata Storage Servers as an extension of the memory cache (buffer cache) on database servers. Specialized algorithms transfer data between the cache on database servers and in-memory cache on storage servers. This reduces the IO latency to 100 us for all IOs served from in-memory cache. Exadata’s (sic) uniquely keeps only one in-memory copy of data across database and storage servers, avoiding memory wastage from caching the same block multiple times. This greatly improves both efficiency and capacity and is only possible because of Exadata’s unique end-to-end integration.

How I set up RAM cache in the Exadata storage servers

As I mentioned previously, recent generations of Oracle Exadata storage servers come with a lot of RAM, and it is normally not fully used by the cellsrv services and features. Having said that, I always take the amount of free memory (RAM) in the storage servers into consideration. First, I pick the storage server using the most RAM and do the math: freemem * 0.7 = RAM cache value. In other words, I set the RAM cache to 70 percent of the free memory on the busiest storage server. Note: I avoid using all the free memory for the RAM cache in case the storage server requires more memory for storage indexes or other needs in the future.

Let’s say my busiest storage server has 73GB of free memory. Applying the formula we get: 73 * 0.7 = 51.1GB.
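
If you prefer to script that calculation, here is a minimal bash sketch. The value 73 is just the free memory (in GB) from the example above; substitute whatever free -g reports on your busiest cell:

# Sketch: derive a ramCacheMaxSize target as ~70% of the busiest cell's free memory
free_gb=73                             # free memory (GB) reported by "free -g" on the busiest cell
target_gb=$(( free_gb * 70 / 100 ))    # integer math: 51
echo "Suggested setting: ramCacheMaxSize=${target_gb}G"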

Oracle Exadata architecture was built to spread the workload evenly across the entire storage grid, so you’ll notice that the storage servers use pretty much the same amount of memory (RAM).

Here comes the action and fun. We must first check how much memory is available in our storage servers by running this from dcli (make sure your cell_group file is up-to-date):

[[email protected] ~]# dcli -l root -g cell_group free -g

In my case, cel01 is the storage server using the most memory. Let’s check some details of this storage server:

[[email protected] ~]# cellcli
CellCLI: Release 19.2.7.0.0 - Production on Thu Aug 06 07:44:59 CDT 2020
Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.
CellCLI> LIST CELL DETAIL
name: exaceladm01
accessLevelPerm: remoteLoginEnabled
bbuStatus: normal
cellVersion: OSS_19.2.7.0.0_LINUX.X64_191012
cpuCount: 24/24
diagHistoryDays: 7
fanCount: 8/8
fanStatus: normal
flashCacheMode: WriteBack
flashCacheCompress: FALSE
httpsAccess: ALL
id: 1446NM508U
interconnectCount: 2
interconnect1: bondib0
iormBoost: 0.0
ipaddress1: 192.168.10.13/22
kernelVersion: 4.1.12-124.30.1.el7uek.x86_64
locatorLEDStatus: off
makeModel: Oracle Corporation SUN SERVER X7-2L High Capacity
memoryGB: 94
metricHistoryDays: 7
notificationMethod: mail,snmp
notificationPolicy: critical,warning,clear
offloadGroupEvents:
powerCount: 2/2
powerStatus: normal
ramCacheMaxSize: 0
ramCacheMode: Auto
ramCacheSize: 0
releaseImageStatus: success
releaseVersion: 19.2.7.0.0.191012
rpmVersion: cell-19.2.7.0.0_LINUX.X64_191012-1.x86_64
releaseTrackingBug: 30393131
rollbackVersion: 19.2.2.0.0.190513.2
smtpFrom: "exadb Exadata"
smtpFromAddr: [email protected]
smtpPort: 25
smtpServer: mail.loredata.com.br
smtpToAddr: [email protected]
smtpUseSSL: FALSE
snmpSubscriber: host=10.200.55.182,port=162,community=public,type=asr,asrmPort=16161
status: online
temperatureReading: 23.0
temperatureStatus: normal
upTime: 264 days, 8:48
usbStatus: normal
cellsrvStatus: running
msStatus: running
rsStatus: running

From the output above we can see that the parameter ramCacheMode is set to auto while ramCacheMaxSize and ramCacheSize are 0. These are the default values and mean the RAM cache feature is not enabled.
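
By the way, you don’t need the full LIST CELL DETAIL output to check this. To see just the three RAM cache attributes across every cell at once, you can wrap the same CellCLI command in dcli (again assuming root SSH equivalence and an up-to-date cell_group file):

dcli -l root -g cell_group "cellcli -e list cell attributes name, ramCacheMaxSize, ramCacheMode, ramCacheSize"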

This storage server has ~73GB of free / available memory (RAM):

[[email protected] ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:          96177       15521       72027        4796        8628       75326
Swap:          2047           0        2047

Now we can enable the RAM cache feature by changing the parameter ramCacheMode to “On”:

CellCLI> ALTER CELL ramCacheMode=on
Cell exaceladm01 successfully altered

Immediately after the change, we check the free / available memory (RAM) in the storage server operating system:

[[email protected] ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:          96177       15525       72059        4796        8592       75322
Swap:          2047           0        2047

Not much has changed. Enabling the RAM cache feature does not make the storage server allocate this memory right away; it remains free, and cellsrv will allocate it gradually as blocks are cached.

We can see that, for now, only about 10GB was set for the ramCacheMaxSize and ramCacheSize parameters:

CellCLI> LIST CELL DETAIL
name: exaceladm01
accessLevelPerm: remoteLoginEnabled
bbuStatus: normal
cellVersion: OSS_19.2.7.0.0_LINUX.X64_191012
cpuCount: 24/24
diagHistoryDays: 7
fanCount: 8/8
fanStatus: normal
flashCacheMode: WriteBack
flashCacheCompress: FALSE
httpsAccess: ALL
id: 1446NM508U
interconnectCount: 2
interconnect1: bondib0
iormBoost: 0.0
ipaddress1: 192.168.10.13/22
kernelVersion: 4.1.12-124.30.1.el7uek.x86_64
locatorLEDStatus: off
makeModel: Oracle Corporation SUN SERVER X7-2L High Capacity
memoryGB: 94
metricHistoryDays: 7
notificationMethod: mail,snmp
notificationPolicy: critical,warning,clear
offloadGroupEvents:
powerCount: 2/2
powerStatus: normal
ramCacheMaxSize: 10.1015625G
ramCacheMode: On
ramCacheSize: 10.09375G
releaseImageStatus: success
releaseVersion: 19.2.7.0.0.191012
rpmVersion: cell-19.2.7.0.0_LINUX.X64_191012-1.x86_64
releaseTrackingBug: 30393131
rollbackVersion: 19.2.2.0.0.190513.2
smtpFrom: "exadb Exadata"
smtpFromAddr: [email protected]
smtpPort: 25
smtpServer: mail.loredata.com.br
smtpToAddr: [email protected]
smtpUseSSL: FALSE
snmpSubscriber: host=10.200.55.182,port=162,community=public,type=asr,asrmPort=16161
status: online
temperatureReading: 23.0
temperatureStatus: normal
upTime: 264 days, 8:49
usbStatus: normal
cellsrvStatus: running
msStatus: running
rsStatus: running

To confirm, we can run the following command from CellCLI:

CellCLI> LIST CELL ATTRIBUTES ramCacheMaxSize,ramCacheMode, ramCacheSize
10.1015625G On 10.09375G

To reduce the memory used by the RAM cache feature we can simply change the ramCacheMaxSize parameter:

CellCLI> ALTER CELL ramCacheMaxSize=5G;
Cell exaceladm01 successfully altered

If we check the values of the RAM cache parameters we will see this:

CellCLI> LIST CELL ATTRIBUTES ramCacheMaxSize,ramCacheMode, ramCacheSize
5G On 0

As soon as the database blocks start being copied to the RAM cache we will see the ramCacheSize value increasing:

CellCLI> LIST CELL ATTRIBUTES ramCacheMaxSize,ramCacheMode, ramCacheSize
5G On 3.9250G

Let’s increase it a bit more:

CellCLI> ALTER CELL ramCacheMaxSize=15G;
Cell exaceladm01 successfully altered

When checking, you’ll notice it takes a while for the cellsrv to populate the RAM cache with blocks copied from the flash cache:

CellCLI> LIST CELL ATTRIBUTES ramCacheMaxSize,ramCacheMode, ramCacheSize
15G On 0
CellCLI> LIST CELL ATTRIBUTES ramCacheMaxSize,ramCacheMode, ramCacheSize
15G On 11.8125G
CellCLI> LIST CELL ATTRIBUTES ramCacheMaxSize,ramCacheMode, ramCacheSize
15G On 15G
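
Instead of re-running the LIST CELL ATTRIBUTES command by hand, a small loop on the cell lets you watch ramCacheSize grow (a quick sketch; adjust the sleep interval as needed):

# Run as root on the storage cell; Ctrl+C to stop
while true; do
  cellcli -e "list cell attributes ramCacheMaxSize, ramCacheMode, ramCacheSize"
  sleep 60
done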

Setting the mode back to auto clears everything again:

CellCLI> ALTER CELL ramCacheMode=Auto
Cell exaceladm01 successfully altered
CellCLI> LIST CELL ATTRIBUTES ramCacheMaxSize,ramCacheMode, ramCacheSize
0 Auto 0

Now we adjust to the value we got from our calculation of 70 percent of the free memory:

CellCLI> ALTER CELL ramCacheMode=On
Cell exaceladm01 successfully altered
CellCLI> ALTER CELL ramCacheMaxSize=51G
Cell exaceladm01 successfully altered
CellCLI> LIST CELL ATTRIBUTES ramCacheMaxSize,ramCacheMode, ramCacheSize
51G On 32.8125G
CellCLI> LIST CELL ATTRIBUTES ramCacheMaxSize,ramCacheMode, ramCacheSize
51G On 35.2500G
CellCLI> LIST CELL ATTRIBUTES ramCacheMaxSize,ramCacheMode, ramCacheSize
51G On 51G
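
As mentioned earlier, Exadata spreads the workload evenly across the storage grid, so you will normally want the same setting on every cell. Here is a sketch of the rollout with dcli, assuming the 51G we calculated is appropriate for all of your cells (always size against the busiest one):

dcli -l root -g cell_group "cellcli -e alter cell ramCacheMode=on"
dcli -l root -g cell_group "cellcli -e alter cell ramCacheMaxSize=51G"
dcli -l root -g cell_group "cellcli -e list cell attributes name, ramCacheMaxSize, ramCacheMode, ramCacheSize"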

With that configuration in place, if we want to be notified when a storage server is running out of memory, we can quickly create a threshold based on the cell memory utilization (CL_MEMUT) metric to notify us when memory utilization goes beyond 95 percent:

CellCLI> CREATE THRESHOLD CL_MEMUT.interactive comparison=">", critical=95
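
To confirm the threshold exists and to see what the metric currently reports, you can run:

CellCLI> LIST THRESHOLD DETAIL
CellCLI> LIST METRICCURRENT CL_MEMUT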

Conclusion

To sum up, RAM cache (aka, in-memory OLTP acceleration) is a feature available only on Oracle Exadata Database Machine X6 or higher with at least the 18.1 image. In addition, it’s available for the Oracle Database 12.2.0.1 with April 2018 DBRU or higher. This feature helps extend the database buffer cache to the free RAM in the storage servers, but only for read operations, since RAM is not persistent. For persistent memory, Oracle introduced the Persistent Memory Cache with Oracle Exadata Database Machine X8M.

It’s worth mentioning that a database will only leverage the RAM cache when there is pressure on the database buffer cache. The data blocks present in the RAM cache are also persistently stored in the storage server’s flash cache. When a server process on the database side requests a block that is no longer in the database buffer cache but is in the RAM cache, cellsrv sends this block from the RAM cache to the buffer cache for the server process to read. Reading from the RAM cache is faster than reading from the flash cache or disk.
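
If you want to check from the database side whether reads are actually being satisfied by the RAM cache, the statistic “cell ram cache read hits” (discussed in the comments below) is the one to watch. A minimal sqlplus sketch, assuming you run it from a database server with the environment set for the target instance:

sqlplus -s / as sysdba <<'EOF'
-- Pattern match so we don't depend on the exact capitalization of the statistic name
SELECT name, value
FROM   v$sysstat
WHERE  lower(name) LIKE '%ram cache read hits%';
EOF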

While the in-memory OLTP acceleration feature is not a magic solution, it is a plus for your Exadata system. Since we almost always see free memory in the storage servers, this is a way of optimizing resources you’ve already paid for. The feature is included with the Exadata licenses, so there is no extra-cost option to buy, and it is not related to the Database In-Memory option. Having Exadata is all you need.

Happy caching! See you next time!

Franky


About the Author

Senior Oracle Database Consultant
Franky works for Pythian as a Senior Oracle Database Consultant. He has extensive knowledge of Oracle Exadata and high availability technologies, as well as other databases such as MySQL, Cassandra and SQL Server, and he is always improving his skills with a focus on Oracle performance and HA research. Franky has been involved in several major implementations of multinode RAC on AIX, Linux and Exadata, and of multisite Data Guard environments. He is OCP 12c, OCE SQL, OCA 11g, OCS Linux 6 and RAC 12c certified, and was named an Oracle ACE in 2017. He is well known in the Brazilian community for his blog https://loredata.com.br/blog and for his contributions to the Oracle community in Brazil. Franky also writes frequently for OTN and speaks at Oracle and database conferences around the world. Feel free to contact him on social media.

4 Comments

Christo Kutrovsky
October 11, 2020 6:59 pm

Hi, great post – that is indeed a good use of potentially wasted resources on the cell nodes.

Do you know if the “cell single block physical read” event is replaced with another event when a cell RAM cache hit occurs?

If not, that may skew the overall metrics, as it will report much lower latency than reality.


Hi Christo! Nice to see you here.

Indeed it is a good way to use the resources.

Regarding the wait event: whether the block is read from the DB cache, RAM cache, flash cache or hard disk, it will be “cell single block physical read”. We already have skewed wait event metrics, since the event is the same no matter which layer the data block comes from.

If it is a single block read it will be “cell single block physical read” (on non-Exadata it would be “db file sequential read”); if it is a multiblock read of a segment smaller than 2% of the buffer cache it will be “cell multiblock physical read” (non-Exadata: “db file scattered read”); and if the segment is larger than 2% of the buffer cache it will be “cell smart table/index scan” or “direct path read”.

To know whether a specific data block came from the RAM cache during a read operation, you can query v$sysstat or v$sesstat for the statistic named “cell ram cache read hits”. This way you’ll know if that block or set of blocks came from the RAM cache during a read. That is the only statistic I’ve seen increasing during RAM cache reads that is related to this feature. You can break down the wait events for a period of time or for a specific session by matching the timeframe or session with the statistics.

Hope it helps. Sincerely,

Franky

Christo Kutrovsky
October 14, 2020 9:03 pm

Right .. forgot about that. V$SQL also has OPTIMIZED_PHY_READ_REQUESTS, which is supposed to track flash hits. I’ve used this in the past for some troubleshooting. I am going to guess that cell RAM reads also count as optimized.

Typically most reads come out of flash, because it’s so large, but PMEM and RAM Cache are significantly smaller and much faster – I think this pmem stuff will dilute the numbers much more. At least they added a session level stat – which is good!

I was looking at some stats: when everything is served from cell RAM, “cell single block physical read” averages 0.018 ms on X8M, which is very impressive.


Hey Frank,

Thank you for posting this very useful feature.

Is there a way to measure the efficiency of using this feature? Any insights are much appreciated.

