To be precise, I wonder if the OUTERmost tracks of a spinning HDD are faster than the INNERmost tracks. Should we put IO-performance-sensitive data on the OUTERmost parts of the disk and less critical data on the INNERmost parts, as several vendors suggest? Well, I couldn’t find a better approach than grabbing all the HDDs I have and starting to test :) Yes! It is a work in progress project ….
Disclaimer
Before you start criticizing my testing results I would like to make a few points clear:
- I am not an expert in the hardware space.
- I am just a curious Oracle Administrator, open to any suggestions on how to improve testing results and get closer to clarifying things.
- Some of you will say that this is a useless exercise because nowadays we don’t have any control over which areas of a single HDD data is placed on.
- Well, in some rare configurations like Exadata or Oracle Database Appliance, we actually have this power and can possibly impact the IO performance.
- In other cases, it may be helpful to understand how things work in order to explain why there is a certain performance impact.
- As I am an Oracle DBA and Oracle databases are most often random-IO bound, I have focused my attention on random (8k) IO testing.
- I do believe there are better ways to test HDDs. Unfortunately, I don’t have enough knowledge about other options. I am open to your suggestions on how to do it in a better way.
- Just keep in mind that I have Windows 7 (64 bit), Dell Latitude E6410 for this testing.
- At the moment, I am waiting on a SATA adapter to arrive. Once it does, I will re-run some of the tests to confirm or adjust the results.
- This is a work in progress project, and I am not ready to make final conclusions (if I will ever be ready to make those at all ;) )
My expectations
Based on previous experiences, I expect that:
- OUTERmost tracks will not be much faster than INNERmost tracks.
- The worst performance should be when data is accessed from both OUTERmost and INNERmost tracks at the same time.
I have focused my attention on 3 tests:
- Data on OUTERmost tracks
- Data on INNERmost tracks
- Data distributed equally through full HDD surface
Note: I also ran a fourth test, where random IOs accessed data from the OUTERmost and INNERmost tracks at the same time. The results were very close to the full-surface tests, therefore I do not provide them in this blog post.
How and what did I test
Hardware
To start, I took 7 HDDs that I happened to have and 2 SATA-to-USB adapters.
Software
I have used 2 options to test and confirm IOPS results:
- Windows 7 has a nice little utility called winsat. It didn’t take me long to figure out how to make it do random 8k reads.
winsat disk -ran -ransize 8192 -read -drive E
- Oracle 11gR2 comes with the Oracle-native orion (ORacle IO Numbers) utility. I just installed it and used the command below to test random IOs. Note that orion reads the list of devices to test from a file named after the test; here, the e_hdd.lun file contains a single line, \\.\e:, pointing it at the E: drive.
orion -testname e_hdd -duration 20 -cache_size 0 -write 0 -num_disks 1 -run advanced -size_small 8 -matrix row -num_large 0
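For readers who want to see what such a random 8k read test boils down to, here is a minimal sketch in Python. It is illustrative only: it reads from a plain file rather than a raw device (as winsat and orion do), so on a real run the OS page cache would inflate the numbers unless you use direct IO against the device; the function name, block size, and duration are my own choices, not from the tools above.

```python
# Hedged sketch of a random 8 KiB read test: issue reads at random
# offsets within a file for a fixed duration and report IOPS.
import os
import random
import time

def random_read_iops(path, block=8192, duration=2.0):
    """Issue random `block`-sized reads against `path`; return IOPS."""
    size = os.path.getsize(path)
    blocks = size // block
    # O_BINARY only exists on Windows; fall back to 0 elsewhere.
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    try:
        reads = 0
        deadline = time.monotonic() + duration
        while time.monotonic() < deadline:
            # Read one 8 KiB block at a random aligned offset.
            os.pread(fd, block, random.randrange(blocks) * block)
            reads += 1
        return reads / duration
    finally:
        os.close(fd)
```

Against a raw device and with caching disabled, the reported IOPS would be dominated by seek and rotational latency, which is exactly what the tests in this post measure.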
Assumption
To make things simple I have assumed that the HDD controller (or Windows, or whoever… Remember that I am not an expert!) allocates space starting from the OUTERmost (fastest) tracks of the disk. Therefore, I allocated the first 1GB of unformatted space to an E: drive, then filled all the space but the last 1GB with an empty partition, and created a 1GB G: drive from the remainder. My assumption is that partition/drive E: is located on the OUTERmost tracks and partition/drive G: on the INNERmost (slowest) tracks. As you will see from the results below, this assumption isn’t correct for all HDDs.
To test full surface, I have deleted all partitions and created one big partition.
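For reference, the partition layout described above can be scripted with diskpart. This is a hedged sketch, not a tested script: the disk number, drive letters, and the size of the middle filler partition depend on your system, and <MIDDLE_MB> is a placeholder for the disk size minus 2048 MB.

```
rem Recreate the layout: 1GB E: first, filler in the middle, 1GB G: last.
select disk 1
clean
create partition primary size=1024
format fs=ntfs quick
assign letter=E
create partition primary size=<MIDDLE_MB>
create partition primary
format fs=ntfs quick
assign letter=G
```

The full-surface test then corresponds to `clean` followed by a single `create partition primary` with no size argument.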
Please let me know in the comment section below if there is a better way to ensure that a partition is created on OUTERmost tracks of HDD or at least how to check what tracks the partition is created on (if it is possible).
Results
“Good” results
The following reflects IOPS testing for the 3 HDDs that confirmed my expectations. It is clear that there is less than a 10% gain between the OUTERmost and INNERmost tracks. However, there is a significant performance impact if a HDD’s head has to move across the whole HDD surface to access data.
| HDD Name | Outer | Inner | Full | Outer vs Inner | Outer vs Full |
|---|---|---|---|---|---|
| WD 2.5″ 1TB 5400RPM / WDBBEP0010BRD | 127 | 121 | 64 | 4.96% | 98.44% |
| Hitachi 3.5″ 320GB 7200RPM / HDT725032VLA380 | 133 | 124 | 58 | 7.26% | 129.31% |
| WD 2.5″ 160GB 5400RPM / WD1600BEVT | 112 | 103 | 61 | 8.74% | 83.61% |
“Other” results
A careful reader will have noticed that I haven’t provided all the test results so far, and you are right. The reason is that the rest of the results don’t confirm my theory :). Have a look at the other 3 HDDs’ test results below. (Note that one of the 7 HDDs has data on it; therefore, I excluded it from the OUTERmost and INNERmost tracks testing.)
| HDD Name | Outer | Inner | Full | Outer vs Inner | Outer vs Full |
|---|---|---|---|---|---|
| HGST 2.5″ 1TB 7200RPM / AT-0J22423 | 107 | 132 | 69 | -18.94% | 55.07% |
| Seagate 2.5″ 250GB 7200RPM / ST9250410AS | 88 | 71 | 63 | 23.94% | 39.68% |
| Seagate 3.5″ 1TB 7200RPM / ST31000333AS | 141 | 92 | 69 | 53.26% | 104.35% |
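For clarity, the percentage columns in both tables are plain relative IOPS differences. A few lines of Python (the function name is mine) reproduce them from the raw numbers:

```python
# "Outer vs Inner" and "Outer vs Full" columns: relative IOPS
# difference in percent, rounded to two decimals.
def pct_gain(a, b):
    return round((a - b) / b * 100, 2)

# e.g. the Seagate ST31000333AS row: Outer=141, Inner=92, Full=69
print(pct_gain(141, 92))   # 53.26  (Outer vs Inner)
print(pct_gain(141, 69))   # 104.35 (Outer vs Full)
```

A negative value, as in the HGST row, means the INNERmost partition was actually faster.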
The 3 HDDs above either showed results that I can’t explain as of now (AT-0J22423), or a performance difference between the OUTERmost and INNERmost tracks that is much more significant than for the first set of HDDs. However, it is clear that in both sets there is a significant performance penalty if a HDD moves its head across the whole surface. Those are expected results, aren’t they?
Intermediate conclusions
There are some HDD models in which there is no significant performance difference between accessing data located on the OUTERmost or the INNERmost tracks. However, in some cases, IOPS can drop by more than half (a ~130% difference in my tests) when the HDD’s head has to travel across the whole surface to return data.
This has 2 possible practical implications:
- If someone states that a partition is created on the OUTERmost tracks of a HDD, it doesn’t necessarily mean that IOs from that partition are significantly faster than from any other region of the HDD.
- IO operations could slow down significantly if data is accessed from both the OUTERmost and INNERmost tracks at the same time (e.g. if DATA is located on the OUTERmost tracks but the FRA on the INNERmost tracks).
- You may find that your storage performance degrades the more “active” data you put on your hard drives (i.e. the fuller the drives get).
Keep in mind that there are possible exceptions. Based on my initial tests, there are some HDD models in which the difference between OUTERmost and INNERmost tracks is significant.
Please help improve the test results
As I stated at the beginning of this post: a) I am not an expert in this space, b) this is a work in progress project and I am looking for better ways to test random IOs, and c) I need your help to understand why there are exceptions, and I need suggestions on how to adjust the HDD testing process to improve the test results and get closer to good conclusions.
Yury
9 Comments
Great post.. :) Good to see how an Apps DBA should also have in-depth knowledge to sort out performance issues related to IO. Great experiment… :) Keep improving with different options..
You are testing the wrong thing. Random IO is not where outer tracks are faster. They have faster bandwidth, which cannot be accurately measured with random IO – you need sequential (large number of *consecutive* 8K access) to see any difference. Random IO measures mostly the speed of arm movement which varies only with the distance between start and target tracks, not their location. Which is exactly why you see a big difference when there is large arm movement. You need to measure the bandwidth, not the IOPS. IOPS are relevant for random IO, but not really as much for sequential.
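To illustrate Nuno’s point, here is a minimal sketch of a sequential bandwidth measurement, reading a target start to end in large consecutive chunks and reporting MB/s. It is illustrative only (the function name is mine): against a plain file the OS cache will inflate the numbers, so a real run would target the raw device with direct IO, as orion’s large-read test does.

```python
# Hedged sketch: measure sequential read bandwidth in MB/s by
# reading consecutive 1 MB chunks from start to end.
import os
import time

def sequential_mb_per_s(path, block=1024 * 1024):
    """Read `path` sequentially in `block`-sized chunks; return MB/s."""
    fd = os.open(path, os.O_RDONLY)
    try:
        total = 0
        start = time.monotonic()
        while True:
            chunk = os.read(fd, block)
            if not chunk:  # end of file
                break
            total += len(chunk)
        elapsed = time.monotonic() - start
        return (total / (1024 * 1024)) / elapsed
    finally:
        os.close(fd)
```

Run against the OUTERmost and INNERmost partitions, this kind of test should show the bandwidth gap that random IOPS cannot, since outer tracks pass more sectors under the head per revolution.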
Good point Nuno :). In the next test iteration I will run the orion throughput test too; it does 1MB reads.
It may explain why the inner tests are slightly slower, as the head needs to be moved a bit more across the surface there.
Hi Yury,
you may try with Intelligent Data Placement + SLOB/Orion:
https://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm#CHDICBEB
– create a tbsp with the hot attribute
– run the test
– modify the datafile’s attribute to cold
– rebalance (should work with a single disk using IDP)
– re-run the same test
:-)
—
Ludovico
Hi Yuri,
Great post as usual! We did similar tests back in November 2010 on an EMC Symmetrix VMAX storage with 146GB 15K FC disks. Since Intelligent Data Placement was something new and I didn’t want to risk using new features :), we did it the old way like you did. After many tests and different slicings of the disks, the best results we came up with were when we used the OUTER part of the disks by creating a slice of 80GB. Believe it or not, for the sake of performance we wasted and did not use the inner 50+GB of each disk.
Best regards,
Lazar
Lazar, no need to waste that! Simply use a logical volume manager to string those inner slices together into one or two filesystems and load both your OS and Oracle software there, as well as logs, etc.
80GB is plenty for that, and performance-wise, once the code is loaded in memory it’s rarely if ever read from disk again, so there! (Hey, it’s what I do and it works a treat!)
Hi,
Was the on-disk cache equal for all disks? How do you think cache size and (possibly) different caching algorithms from different vendors impact the results?
Good luck,
Jānis
Teradata pay a great deal of attention to this detail to optimise the performance of their servers. They also use a technique called short stroking (see: https://www.techarp.com/showarticle.aspx?artno=691), which reduces maximum track-to-track movement and places the most critical data on the fastest-to-retrieve sectors of the drive. It might be worth investigating their techniques. Also, staggering the first block of each track to decrease rotational latency is worth researching; this will affect large sequential reads, though it may be masked by caching if the cache is of a sufficient size.
You give numbers, but you don’t say what the numbers mean. What’s an “IOPS”? Are higher numbers faster, or slower?