NOTE: Other SLOB-related posts are listed in “My SLOB IO testing index“.
I think the results we got so far may surprise you. At least, they don't seem to be the results +Alex Gorbachev and +Kevin Closson expected to see. You can find the first related blog post here. It will give you the necessary context for further reading. Just to recap: +Kevin Closson says that "It's VERY easy to get huge Orion nums but reasonable SLOB" and +Alex Gorbachev says that "lots of the system IO bound below the CPU level so you should see similar number with Orion or SLOB". Let's see what the first results revealed.
I would say that the Orion results seem to be the expected numbers:
ran (small): my 12 oth 0 iops 1378 size 8 K lat 8.71 ms bw = 10.77 MBps dur 59.97 s READ
...
ran (small): my 24 oth 0 iops 2114 size 8 K lat 11.35 ms bw = 16.52 MBps dur 59.96 s READ
...
ran (small): my 240 oth 0 iops 4305 size 8 K lat 55.74 ms bw = 33.63 MBps dur 59.81 s READ
We ran the OLTP 8K Orion test in read-only mode against 12 x 500 GB disks. It executed 20 tests, increasing the number of parallel outstanding IOs from 12 to 240. The IOPS numbers grew slowly along with the increased response time. The best response time we got was 8.71 ms with 12 processes hammering the storage, and the best IOPS figure was 4,305 with a corresponding latency of 55.74 ms running 240 processes. That works out to between 115 and 359 IOPS per IO spindle.
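By the way, here is roughly how we pull the per-spindle numbers out of the Orion output. This is only a sketch: the summary file name is hypothetical (Orion timestamps its output files), and the awk assumes the "ran (small)" line format shown above.

grep 'ran (small)' data_dg_summary.txt | awk '
  { for (i = 1; i <= NF; i++) if ($i == "iops") iops = $(i+1);
    # $4 is the number of outstanding small IOs on these lines
    printf "outstanding=%s iops=%s per_spindle=%.0f\n", $4, iops, iops/12 }'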
The SLOB results surprised us a bit:
We ran the "runit.sh 0 24" test, i.e. 24 Oracle sessions executing single-block reads from a table via index access (db file sequential read). We made sure that most IOs issued by the foreground processes would be PIOs (physical IOs). AWR data confirmed that we succeeded: Logical reads Per Second 4,541.6, Physical reads Per Second 4,524.4. It looks like most block reads (99.6%) were served via storage requests. But hold on a second: 4,524.4 is nothing else but IOPS. If we compare it with the Orion numbers, we see that SLOB shows better IO performance than Orion does. On top of that, the SLOB latency is significantly lower than Orion's: 5 ms vs 55.74 ms. I don't think anybody expected that type of result. Just to give you a bit more input, here are the most relevant parts of the SLOB AWR report:
              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:       104 15-May-12 01:00:13        59       1.2
  End Snap:       105 15-May-12 01:43:04        36        .7
   Elapsed:               42.86 (mins)
   DB Time:              990.84 (mins)

Load Profile              Per Second    Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
      DB Time(s):               23.1            5,404.6       1.25     499.58
       DB CPU(s):                0.3               68.0       0.02       6.28
   Logical reads:            4,541.6        1,061,707.7
  Physical reads:            4,524.4        1,057,677.6

Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                            Avg
                                                           wait   % DB
Event                                 Waits      Time(s)   (ms)   time Wait Class
------------------------------ ------------ ------------ ------ ------ ----------
db file sequential read          11,630,768       58,928      5   99.1 User I/O
DB CPU                                                748          1.3
resmgr:cpu quantum                1,854,672           79      0     .1 Scheduler
resmgr:internal state change              1            0    101     .0 Concurrenc
undo segment extension                    2            0      5     .0 Configurat
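As a side note, if you want to eyeball the same logical-vs-physical ratio while a run is still in flight instead of waiting for the AWR report, a quick check like the following works. This is a minimal sketch, assuming SQL*Plus access as SYSDBA on the test box.

sqlplus -s / as sysdba <<'EOF'
-- cumulative counters since instance startup; sample twice and diff
-- the values to get per-second rates comparable to the AWR load profile
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('physical reads', 'session logical reads');
EOF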
The status is still a work in progress. We are running some other tests now, and a "runit.sh 0 128" SLOB test is on the way. Surprise, surprise! The server didn't melt down with 128 processes running at the same time :). Speaking of the results we have gotten so far, I suspect that SLOB doesn't have objects big enough to get past the HDDs' caches, so some of the SLOB requests are served from the drives' caches. If this is correct, then in this round Orion beats SLOB in terms of providing "cleaner" IO test results.
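Here is a minimal sketch of how we plan to check the data-set-size theory, assuming a stock SLOB schema (users USER1..USERn owning the SLOB tables; adjust the OWNER filter if your setup.sh run used different names):

sqlplus -s / as sysdba <<'EOF'
-- total size of the SLOB segments; compare it with the combined on-board
-- cache of the 12 drives (typically a few tens of MB per drive)
SELECT ROUND(SUM(bytes)/1024/1024) AS slob_mb
FROM   dba_segments
WHERE  owner LIKE 'USER%';
EOF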
+Kevin Closson said "It's VERY easy to get huge Orion nums but reasonable SLOB" – kind of FALSE in this case.
+Alex Gorbachev said “lots of the system IO bound below the CPU level so you should see similar number with Orion or SLOB.” – FALSE again in this case. :)
Stay tuned folks,
More results on the way…
Yury & Co
Well, right after I published this blog post, while on the way to my English pronunciation course, I thought about another probable reason for SLOB giving better results than Orion. It is still related to the size of the SLOB data set. Since we are using 12 x 500 GB HDDs for testing and the SLOB data set is relatively small, all the data is located close to the fastest (outer) edge of the disks, and the heads only have to seek across a small fraction of the platters. Orion, however, reads from the devices' entire address space. This may be a good explanation for the discrepancy between the SLOB and Orion results. We will test it and update you on the results.
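One way we could test the "outer edge" theory outside of Oracle is to run the same 8K random read workload twice against a single LUN: once confined to the first ~50 GB and once spread across the whole device, then compare the latencies. A rough sketch with fio (assuming fio is available on the box; the device name and sizes are illustrative only):

DEV=/dev/mapper/disk_0_y00_532421940p1
# 8K random reads confined to the first 50 GB (roughly where a small SLOB
# data set would live: outer tracks, short seeks)
fio --name=outer_50g --filename=$DEV --direct=1 --rw=randread --bs=8k \
    --offset=0 --size=50G --runtime=60 --time_based --group_reporting
# the same workload spread over the full 500 GB device, Orion-style
fio --name=full_span --filename=$DEV --direct=1 --rw=randread --bs=8k \
    --runtime=60 --time_based --group_reporting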
13 Comments
Great post, could you please post the Orion command line parameters as well?
Regards
GregG
Sure I can :)
nohup time ./run_orion_oltp.sh 2>&1 1>./run_orion_oltp.01.log &
[[email protected] orion]$ cat ./run_orion_oltp.sh
./orion_linux_x86-64 -run oltp -testname data_dg -num_disks 12 -write 0
[[email protected] orion]$
[[email protected] orion]$ cat data_dg.lun
/dev/mapper/disk_0_y00_532421940p1
/dev/mapper/disk_0_y01_532341712p1
/dev/mapper/disk_0_y04_532337776p1
/dev/mapper/disk_0_y05_532378236p1
/dev/mapper/disk_0_y08_532384844p1
/dev/mapper/disk_0_y09_532341692p1
/dev/mapper/disk_1_y02_532253988p1
/dev/mapper/disk_1_y03_532384572p1
/dev/mapper/disk_1_y06_532385008p1
/dev/mapper/disk_1_y07_532341764p1
/dev/mapper/disk_1_y10_532385560p1
/dev/mapper/disk_1_y11_531870356p1
[[email protected] orion]$
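And for completeness, here is what those switches mean, as far as I understand the Orion documentation (the 12-to-240 range of outstanding IOs in the results above comes from the -num_disks setting):

# -run oltp          predefined profile of small (8K) random IOs
# -testname data_dg  read the LUN list from data_dg.lun and prefix the
#                    output files with data_dg
# -num_disks 12      number of spindles; drives the range of outstanding IOs
#                    tested (num_disks up to 20 x num_disks, i.e. 12 to 240 here)
# -write 0           percentage of writes, i.e. a read-only run
./orion_linux_x86-64 -run oltp -testname data_dg -num_disks 12 -write 0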
Love the testing, Yury. Your work will help others, so thanks.
A couple of comments:
1. Regarding the HDD cache hypothesis. You have 12 disks attached to your server, so I'll just presume you have an LSI controller. Depending on the vintage, that will provide either 256 or 512 MB of cache on the PCI card. If you run more than 8 reader.sql users you will blow out that cache. The cache on an LSI card is generally a <200us service time PIO. You are seeing 5 ms, which is physically acceptable for fast drives with short seeks. You cite the capacity and count of your disks but not the performance attributes (e.g., RPM, track buffer size). Your 5 ms I/O is quite likely due to short-stroke seeks, because you are in the neighborhood of 370 IOPS/drive, which is quite possible with short seeks.
2. I'll be gentle on this comment. Your testing is at the extreme low end. Modern Xeons (e.g., WSM, SNB) can handle on the order of 20,000 IOPS/core. Since Orion uses no user-mode CPU it has the propensity to get to high-performance numbers easier than SLOB. SLOB has Oracle latches to contend with. Modern QPI servers handle that better, sure, but at your PIO rate there simply is no contention.
All that aside, nice work. Thanks for posting for us too.
Thanks for the comments and attention Kevin,
I think I do not understand your second comment:
2. I’ll be gentle on this comment. Your testing is at the extreme low end. Modern Xeons (e.g., WSM, SNB) can handle on the order of 20,000 IOPS/core. Since Orion uses no user-mode CPU it has the propensity to get to high-performance numbers easier than SLOB. SLOB has Oracle latches to contend with. Modern QPI servers handle that better, sure, but at your PIO rate there simply is no contention.
I thought IO performance is limited by the number of HDD spindles. I have 12 HDDs in my test. If each HDD has an average latency of 5 ms, then each spindle (HDD) can handle ~200 IOPS (minus CPU overhead time). 12 x 200 => 2,400 IOPS is the theoretical maximum I can reach here. Any higher number is either HW or SW "cheating".
To reach the 20,000 theoretical IOPS you mention, we would need 100 HDDs in the system. IMHO: 100 HDDs is normal for a high-end system. However, I would bet that 80% of Oracle RDBMS clients don't have even 50 HDDs in their systems (I am not talking about licence $ %, I am talking about systems and DBAs :)
Yury
Yury,
Orion gets raw disks. How does SLOB access disk? Is it through a file system with direct I/O?
ASM
Sorry, Yury, one more comment. Do you intend to analyze the behavior difference between SLOB and Orion for writes? I’ve been pointing out the difference in CPU profile for the two kits. DBWR is different than Orion. Perhaps some runit.sh N 0 ?
IMHO: writes are the area where SLOB is superior to Orion in most cases. To test writes with Orion we need a device dedicated to testing only. That is a luxury that exists only during the initial build of a system (and not always even then, e.g. ODA). I may put some effort into deallocating a device from ASM and testing writes, but most likely I will not have enough time to do it.
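If I do find the time, the write-only run Kevin suggests would look something like this (32 writers is an arbitrary pick; the argument order is <writers> <readers>, the mirror image of the "runit.sh 0 24" run above):

cd ~/SLOB           # wherever the kit is installed; the path is illustrative
./runit.sh 32 0     # 32 writer sessions, 0 readers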
[…] quick exchange of ideas set into motion some Pythian testing by Yury. As it turns out I think the goal of that test was to prove parity between SLOB and Orion […]
Hi Yury,
Are you using Resource Manager on your testing DB? As far as I remember, the event "resmgr:cpu quantum" is related to it, and it can impact your tests as well.
If you are looking for the disk cache impact on your test, check the "db file sequential read" histogram. I have tested SLOB on an ext3 file system with filesystemio_options set to all, and here are my results (wait time vs % of waits):
under 1 ms 5.1 %
under 2 ms 12.7 %
under 4 ms 35.8 %
under 8 ms 35.9 %
under 16 ms 9.7 %
regards,
Marcin
>> Are you using resource manager on your testing DB ?
Let me check. I didn't switch it on. It may be that in 11.2.0.3 it is enabled by default.
This is a very good point. Thank you for the hint!
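For the record, here is roughly how I plan to check both points. A minimal sketch, assuming SQL*Plus access as SYSDBA:

sqlplus -s / as sysdba <<'EOF'
-- is a resource manager plan active? (11.2 switches to DEFAULT_MAINTENANCE_PLAN
-- during the maintenance windows, which is enough to produce resmgr waits)
show parameter resource_manager_plan

-- wait time distribution for single-block reads, as Marcin suggests
SELECT wait_time_milli, wait_count
FROM   v$event_histogram
WHERE  event = 'db file sequential read'
ORDER  BY wait_time_milli;
EOF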
Thank you for sharing your results. I am still working on getting consistent results. I hope to publish them as soon as they are ready.
Yury
>getting consistent results
I’ve never seen SLOB return inconsistent results with read-only. Are you doing write-only (REDO model) or mixed? I’d like to understand your scenario.
[…] My First Experience Running SLOB – Status Update 2 (first results) – first conclusions Orion vs SLOB – 1:0 […]