The history and details of why and how I got involved in a SLOB and ORION comparison exercise can be found here (see Status Update 1). In short, Kevin Closson says SLOB is superior to ORION from an IO testing point of view, and Alex Gorbachev states that ORION provided the same results with less setup time.
In this blog post, I will share the physical IO testing results I got by running SLOB and ORION on the same system and on the same disks. I will use those results as a reference in a few blog posts to come. As of now, I would like to make a few points based on the results below.
- With SLOB, it’s difficult to do full-scale physical IO testing unless you put a lot of effort into making sure it behaves very much like your application. The alternative is to make sure the application uses the same IO patterns as SLOB.
- ORION makes it much easier to get the worst-case physical IO results that an application can expect from the storage solution in place.
- For years, many sources have told us that placing data on the OUTER areas of disks should give significantly faster response times. Based on my testing results, this is either a myth or no longer applies to modern hard drives. What matters most is keeping your data in a single area of the disk; if you do, you may see IO response times twice as fast.
If after reading this post you would like to start arguing, please wait until I publish my follow-up blog posts, in which I will explain the statements above. To keep this post short, I don’t want to put all the details here.
ORION vs SLOB RESULTS
| Test # | Testing Tool | Data Placement | Latency  | IOPS | IO per Spindle |
|--------|--------------|----------------|----------|------|----------------|
| 1      | ORION        | Full 12 disks  | 11.35 ms | 2114 | 176.2          |
| 2      | SLOB         | OUTER          | 5.56 ms  | 4072 | 339.3          |
| 3      | SLOB         | INNER          | 6.00 ms  | 3757 | 313.1          |
| 4      | SLOB         | MIX            | 10.45 ms | 2243 | 186.9          |
“Data Placement” column values mean the following:
- Full 12 disks – I specified the device names of all 12 disks in ORION’s LUN file.
- OUTER – I created the SLOB tablespace at the very beginning of the disks (this is the default placement if you use empty disks; see the sketch after this list).
- INNER – Data located at the end of the disks. I first created a very big tablespace to use up the storage, so the SLOB tablespace sits at the end of each of the 12 disks.
- MIX – The SLOB tablespace was spread across 3 sections of the disks (outer, middle, inner).
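For those who want to reproduce the placements, the sketch below shows roughly how the OUTER and INNER layouts can be arranged. The disk group name (+DATA), tablespace names and sizes are my illustrative assumptions, not the exact commands used in these tests.

```
# Rough sketch of the OUTER/INNER placements; +DATA, the tablespace
# names and the sizes are assumptions for illustration only.
sqlplus / as sysdba <<'EOF'
-- OUTER: on empty disks the first tablespace created is allocated at
-- the beginning (outer tracks) of each disk.
CREATE TABLESPACE slob DATAFILE '+DATA' SIZE 10G;

-- INNER: first consume most of the free space with a big filler
-- tablespace, then create the SLOB tablespace so its extents land at
-- the end (inner tracks) of each disk.
CREATE BIGFILE TABLESPACE filler DATAFILE '+DATA' SIZE 5T;
CREATE TABLESPACE slob_inner DATAFILE '+DATA' SIZE 10G;

-- MIX: alternate SLOB and filler datafiles in the same way so the SLOB
-- extents end up spread across the outer, middle and inner regions.
EOF
```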
Note that during all tests, CPU utilization on the host was very low (as it should be in a physical IO testing scenario). Even the SLOB tests didn’t use much CPU for data processing (1-2% on average). This makes Kevin’s argument, that ORION doesn’t test the CPU spent on data block processing (instance load), less important in such a scenario.
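If you want to double-check this yourself, something as simple as the following, run on the database host during a test, is enough to confirm the CPU stays close to idle; the interval and sample count are arbitrary.

```
# Watch host CPU while a test is running; 5-second samples are arbitrary.
vmstat 5
# Or capture it for the whole run (here ~1 hour of 5-second samples):
sar -u 5 720
```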
DETAILED RESULTS
First of all, let me provide the ORION OLTP 8k test result for 24 processes. Note that I ran ORION once and spent the rest of the time trying to get SLOB as close as possible to the result below.
ran (small): my 24 oth 0 iops 2114 size 8 K lat 11.35 ms bw = 16.52 MBps dur 59.96 s READ
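For reference, an ORION OLTP run over the 12 disks is started along these lines; the test name and device paths below are placeholders of mine, not the literal LUN file used for this test.

```
# slobcmp.lun lists the 12 raw devices under test, one per line, e.g.:
#   /dev/sdb
#   /dev/sdc
#   ... (10 more devices)
./orion -run oltp -testname slobcmp -num_disks 12
```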
The following are the results from the 6 final SLOB test runs. I used the following command to extract the relevant lines from the AWR reports.
egrep "Physical reads|db file sequential read" awr_0w_24r* | egrep "Physical reads|User I/O"
All AWR reports are available in the following file – SLOB_test_run_files_01awr. Note the name of the 10046 trace file that corresponds to each individual test. The 10046 trace files are available here: SLOB_test_run_files_01.
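For completeness, a level-8 10046 trace of a session is typically switched on along these lines. I’m not claiming this is literally how the traces above were produced; it’s just the standard way to get wait events into the trace file.

```
sqlplus / as sysdba <<'EOF'
-- Tag the trace file so it is easy to find, then enable 10046 level 8
-- (wait events included) for the current session.
ALTER SESSION SET tracefile_identifier = 'SLOB_test';
ALTER SESSION SET events '10046 trace name context forever, level 8';
-- ... workload would run in this session here ...
ALTER SESSION SET events '10046 trace name context off';
EOF
```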
-1- First, I ran SLOB using the default data placement (OUTER). A 10GB SLOB tablespace was created at the very beginning of the 12 x 500GB hard drives (I used an 11.2.0.3 ASM-based instance). Note that if you start with empty disks, you will get a similar data placement:
awr_0w_24r.20120522_042001.txt (SLOB_ora_24806.trc):
Physical reads: 4,599.9 per second (2,806,087.0 per transaction)
db file sequential read: 30,864,917 waits, 156,266 s, 5 ms avg wait, 98.9% of DB time (User I/O)
-2- In the next run, I placed the SLOB data at the very end of each 500GB HDD. Notice that the IO response latency difference between this run and the previous one is within 20%.
awr_0w_24r.20120520_211609.txt (SLOB_ora_17895.trc):
Physical reads: 3,914.7 per second (2,566,852.8 per transaction)
db file sequential read: 30,800,251 waits, 171,763 s, 6 ms avg wait, 99.0% of DB time (User I/O)
-3- In this test, I created 3 data files for the SLOB tablespace, placing the first file at the very beginning of the disks, the second in the middle, and the third at the very end of each disk. Please note that this is where we see the worst performance so far. It is 100% slower than the previous 2 runs.
awr_0w_24r.20120521_053820.txt (SLOB_ora_15754.trc):
Physical reads: 2,246.1 per second (2,806,219.2 per transaction)
db file sequential read: 30,865,933 waits, 323,276 s, 10 ms avg wait, 99.3% of DB time (User I/O)
-4- In order to eliminate any storage cache impact on the results, I decided to increase the SLOB data set 4 times, from 80MB to 320MB. The following 3 runs are the OUTER/INNER/MIX runs with the bigger data set.
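The awr_0w_24r.* file names above come from runs with 0 writer and 24 reader sessions. A run with a bigger data set is prepared and started roughly like this; the user counts are illustrative, and the exact setup.sh/runit.sh arguments are my assumption about the SLOB version used, not quoted from these tests.

```
# Re-create the SLOB schemas in the target tablespace; the data set grows
# with the number of users, so raising the user count is one way to make
# the working set 4 times bigger (user counts here are illustrative).
./setup.sh SLOB 32
# 0 writer sessions, 24 reader sessions - matches the awr_0w_24r.* names.
./runit.sh 0 24
```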
awr_0w_24r.20120522_154303.txt (SLOB_ora_28449.trc):
Physical reads: 4,072.2 per second (1,120,329.3 per transaction)
db file sequential read: 12,322,724 waits, 68,512 s, 6 ms avg wait, 99.0% of DB time (User I/O)
-5- INNER
awr_0w_24r.20120524_025813.txt (SLOB_ora_1383.trc):
Physical reads: 3,757.1 per second (1,120,366.8 per transaction)
db file sequential read: 12,322,902 waits, 73,939 s, 6 ms avg wait, 99.0% of DB time (User I/O)
-6- MIX
awr_0w_24r.20120522_232058.txt (SLOB_ora_31173.trc):
Physical reads: 2,243.1 per second (1,120,386.1 per transaction)
db file sequential read: 12,322,886 waits, 128,778 s, 10 ms avg wait, 99.3% of DB time (User I/O)
I ran mrskew from Method R on each 10046 file generated for each test to verify the latencies. You can find the results in this file: SLOB_test_mrskew_01.
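If you want to repeat that check, running mrskew against the trace files and restricting it to the single-block read calls looks roughly like this; double-check the option name against your mrskew version, as I’m quoting it from memory.

```
# Profile only the single-block read waits across all 10046 trace files.
mrskew --name='db file sequential read' SLOB_ora_*.trc
```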
My conclusions based on the testing results above are available in the following blog post: “SLOB – ORION vs SLOB – 2:0“.
COMMENTS
Hi Yuriy!
Did you try comparing SLOB and ORION under Oracle VM? I think it would make sense.
Regards,
Kirill
Why do you think it would make a difference if we ran those tests on Oracle VM? I’m not saying that I question the general idea. I just want to understand what exactly you are thinking of testing that way and what the expected results would be.