
Starting with 11gR2 Grid Infrastructure, Oracle changed the default striping method for ONLINELOG from fine to coarse.  This change surprised me, as I found it quite by accident.  The reason it surprised me is that it is not documented in the 11gR2 new features manual.  I actually have not been able to find it documented anywhere.  Even worse than that, I have upgraded many of my ASM instances from 11gR1 to 11gR2, and the upgrade process silently changed my existing redo logs from fine to coarse.  At first, I thought there must be some explanation, such as I must have recreated my redo logs after the upgrade, or maybe they were always coarse to begin with.  I checked many databases: the ones that were still on 11gR1 ASM were all using fine-grained striping, and all of the databases that had been upgraded to 11gR2 ASM were using coarse.
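If you want to check this yourself, the striping attribute of each disk group's ONLINELOG template is visible in V$ASM_TEMPLATE.  A minimal query, run while connected to the ASM instance (for example, sqlplus / as sysasm):

    -- Striping attribute of the ONLINELOG template, per disk group.
    -- On 11gR1 ASM this reported FINE; after the 11gR2 upgrade it reports COARSE.
    SELECT g.name AS diskgroup,
           t.name AS template,
           t.stripe
    FROM   v$asm_template t
           JOIN v$asm_diskgroup g
             ON g.group_number = t.group_number
    WHERE  t.name = 'ONLINELOG';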


Prior to 11gR2, the default for redo logs (the ONLINELOG template) had been fine-grained striping.  Fine-grained striping writes 128 KB of data to each ASM disk in the diskgroup in round-robin fashion: 128 KB goes to the first disk, the next 128 KB goes to the next disk, and so on.  With coarse-grained striping, ASM writes data to each disk in the same round-robin fashion, but in chunks the size of the disk group's allocation unit (AU), which defaults to 1 MB.  According to the Overview of Oracle Automatic Storage Management manual, "Coarse-grained striping provides load balancing for disk groups while fine-grained striping reduces latency for certain file types by spreading the load more widely."  So fine-grained striping reduces latency for some file types.  The manual explains which types: "The fine-grained stripe size always equals 128 KB in any configuration; this provides lower I/O latency for small I/O operations."  Small I/O operations sure sound like a good description of redo logs to me.
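The striping each existing file was actually created with can be checked in V$ASM_FILE; this is how the silently converted redo logs show up.  Again, a sketch run against the ASM instance:

    -- STRIPED shows the striping in effect for each existing ASM file.
    -- Redo logs created under the old default show FINE; logs created or
    -- recreated after the upgrade show COARSE.
    SELECT g.name AS diskgroup,
           f.file_number,
           f.type,
           f.striped
    FROM   v$asm_file f
           JOIN v$asm_diskgroup g
             ON g.group_number = f.group_number
    WHERE  f.type = 'ONLINELOG';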


The Testing

Fortunately, I have access to a lab that is a complete replica of EMC's core mission-critical CRM system.  We have developed a method of replicating a typical peak load from our busiest period, end of quarter.  This performance testing method utilizes three different sources for reproducing the typical workload.  The first is end-user activity.  This workload is reproduced using a tool that actually exercises the application by performing the same actions as a real user.  In this part of the test, user response times are tracked across more than 50 different transactions, from logging in to browsing customers and work queues to actually configuring and quoting our arrays.


The second method simulates the batch workload by replaying the exact batch jobs that were executed during an actual peak in our busiest period.  The last method is custom code that executes the worst ad-hoc queries we could find, plus specific DML that introduces load across the interconnect, contention on our hottest tables, large amounts of I/O, buffer cache hits and shared pool parses.

Together, these three methods produce database statistics that almost identically match our production CRM system.  We use this testing to validate major code changes, database versions, eBusiness Suite patches, hardware and OS upgrades and, most recently, virtualization of the database tier.


The Result

Using this same performance testing suite, I reverted the ONLINELOG template to fine-grained striping and recreated the redo logs to convert them back.  I then ran the test and, not surprisingly, got a 7% performance improvement across all transactions.  Looking at how the I/O behaved on the storage array, it made even more sense: I/Os were much better balanced among all of the disks in the redo log diskgroup, which showed up as roughly a 12% reduction in latency.  In addition, for Symmetrix-based systems like the VMAX that use SRDF synchronous replication on the redo logs for a zero-data-loss DR solution, it helped performance there as well.  I also tested both traditional storage allocations and virtual pool-based storage allocations, and both performed about the same.
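For reference, the revert is a two-step process: change the template back to fine-grained, then recreate the logs, since a template only applies to files at creation time.  A sketch of the commands, with the DATA disk group name, group numbers and log size as placeholders for your own layout:

    -- On the ASM instance: set the ONLINELOG template back to fine-grained.
    -- This affects newly created files only; existing logs keep their striping.
    ALTER DISKGROUP data ALTER TEMPLATE onlinelog ATTRIBUTES (FINE);

    -- On the database: add replacement groups, then switch and drop the old
    -- coarse-striped groups once they go INACTIVE (repeat per group/thread).
    ALTER DATABASE ADD LOGFILE GROUP 5 ('+DATA') SIZE 1G;
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER DATABASE DROP LOGFILE GROUP 1;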


The Summary

The performance of redo writes is much better with fine-grained striping: 7% faster overall on a busy system, with write latency improvements of around 12%.  Why would Oracle quietly make this switch and impose it on all databases that reside on an upgraded ASM instance?  Most storage arrays (Exadata excepted) use DIMM-based cache in front of the disks for writes, and spreading writes across multiple buffers improves throughput, so why quietly enforce this kind of change?  One can only wonder.  My suggestion: revert the redo logs back to fine-grained striping when you upgrade your ASM instance to 11gR2, and keep an eye on it with every future upgrade.