

Everything Oracle at Dell EMC

44 Posts authored by: Sam Lucido

IT Organizations are spending too much time and budget manually performing tasks. For example, DBAs find themselves provisioning databases to support change management activities like testing patches, updates, and new development. Provisioning is just one example as DBA teams are looking for ways to automate and orchestrate routine tasks. See figure 8.


Figure 8: DBA activities costing excessive time and budget


The Ready Bundle for Oracle is designed to streamline database activities by enabling the DBA to do more. No time is spent building the Ready Bundle for Oracle, as it is delivered as a unified system that can go from zero to 100 in less time. Servers, storage, and networking are all tested and optimized for Oracle databases: read Part One of my blog for more information.


Provisioning databases can involve a large number of steps and multiple IT teams. It’s common for Storage Administrators to have to refresh a database from production. Using AppSync, the Storage Administrator can hand control to the Oracle DBA to refresh their own databases. AppSync is a Dell EMC product designed to automate lifecycle management activities for Oracle, SAP, and Microsoft databases. For example, an Oracle DBA can schedule the refresh of a database over the weekend and AppSync will automate the entire process.


AppSync deserves its own blog because there is no way to cover it all here, but its core value is integrating with Dell EMC storage to off-load refresh and copy activities to storage arrays like VMAX, XtremIO, and Unity. For example, the VMAX 250F all-flash array can make a copy of production in a matter of seconds. This benefits the DBA who has to refresh a database, as most of the work can now be completed in minutes rather than days or weeks. It also brings the DBA and Storage Administration teams into a DevOps-like delivery model: two teams working together, sharing responsibility for the delivery of a database.

The VMAX 250F storage array comes with the Data Storage Analyzer (DSA) at no additional cost. Using DSA, the DBA can review storage performance, so the classic question of storage performance is no longer a challenge for the DBA. Oracle Enterprise Manager combined with Data Storage Analyzer gives the DBA a comprehensive view into database and storage performance. The combined power of OEM plus DSA can save hours in performance tuning efforts and provide better reporting to the business. It’s worth repeating: DSA is free with the VMAX and is part of the Ready Bundle for Oracle.


In Tuesday’s Oracle OpenWorld keynote, Larry Ellison talked about the importance of security. In my humble opinion, Dell EMC RSA offers industry-leading security solutions, and for data protection there are solutions like storage replication and Data Domain. Dell EMC enables DBAs to use storage replication through AppSync. The DBA can control how frequently the database and applications are replicated and where the data is replicated to. Using the Ready Bundle for Oracle, the DBA has more control and works more closely with the IT Operations team.


Data Domain with DD Boost is one of the most popular data protection solutions for Oracle. DD Boost integrates with RMAN and deduplicates backups to Data Domain, decreasing backup times, reducing network utilization, and enabling more online backups. Additionally, the DBA controls DD Boost and can even configure database backups to be replicated to another Data Domain platform in a secondary datacenter.


The result is greater control for DBA teams, faster time-to-value because the Ready Bundle is a tested and optimized database platform, and strong data protection. See figure 9 for a summary of the benefits discussed in this blog.

Figure 9: Summary of Ready Bundle for Oracle in this blog


To learn more about the Ready Bundle for Oracle visit:

  Or email me at and I’ll be happy to connect you with our Ready Bundle for Oracle Specialists.

In part one of the Ready Bundle for Oracle blog we explored how the combination of the PowerEdge R940 and VMAX 250F All Flash array accelerates database performance at every layer in the stack. Fast performance is critical for databases, but consolidation enables greater scalability and lowers the Total Cost of Ownership (TCO). In part two of this blog we explore how DBA teams can benefit from advanced storage features like compression and snapshots that take no initial space when making copies of a database.


Consolidation is the process of combining a number of things into a single, more effective and coherent system. As we have seen, the PowerEdge R940 is faster than the prior generation of servers, and the VMAX 250F all-flash array delivers a high density of IOPS at ultra-low response times. There is a third dimension to consolidation: storage capacity savings. The VMAX uses three features to save disk space:

  • Thin provisioning: ensures that regardless of the LUN size, only the capacity that has been written to is allocated
  • VMAX Adaptive Compression Engine (ACE): uses a versatile architecture to achieve high storage compression ratios
  • SnapVX Nocopy: creates copies using no initial disk space and without impacting performance of the source data
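To make the thin-provisioning bullet concrete, here is a minimal Python sketch; the class and the 128 KB extent size are illustrative, not actual VMAX internals:

```python
# Sketch of thin provisioning: capacity is allocated per extent only when
# a write lands, so a large LUN consumes no space until data arrives.
# The ThinLun class and 128 KB extent size are illustrative only.

EXTENT = 128 * 1024  # illustrative extent size in bytes

class ThinLun:
    def __init__(self, size_bytes):
        self.size = size_bytes          # advertised (logical) size
        self.extents = {}               # extent index -> data, allocated lazily

    def write(self, offset, data):
        idx = offset // EXTENT
        self.extents[idx] = data        # allocation happens only here

    def allocated_bytes(self):
        return len(self.extents) * EXTENT

lun = ThinLun(2 * 1024**4)              # a 2 TB thin LUN
lun.write(0, b"redo")
lun.write(10 * EXTENT, b"datafile")
print(lun.allocated_bytes())            # 262144 bytes (2 extents), not 2 TB
```

The point of the sketch: the LUN advertises 2 TB, but only the two extents that received writes consume physical capacity.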


Let’s discuss thin provisioning and ACE, as they are storage features that work seamlessly in the background on the storage array and are transparent to the database. Thin provisioning ensures only the space needed is allocated on the storage array. Compression then works on the allocated capacity to further reduce the Oracle database size. Testing was done in this white paper to show the value of compressing Oracle. See figure 4.


Figure 4: Before and after VMAX Compression in (GB) on an Oracle database


What figure 4 shows is that prior to enabling VMAX compression, the allocated capacity was 5,727 GB; afterward, the database size was reduced to 721 GB, a massive savings of 5,006 GB. In this test the VMAX achieved a data compression ratio of 7.9 to 1. Compression will vary from database to database because data differs, but this test case shows that significant space savings are possible using VMAX compression. Performance testing showed CPU utilization was the same before and after compression, and IOPS were reduced by 1% with compression enabled.
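For readers who like to check the math, the figure-4 numbers reduce to two lines of arithmetic (the GB values are taken directly from the test above):

```python
# Reproducing the figure-4 arithmetic: allocated capacity before and
# after enabling VMAX compression, from the white paper's test.
before_gb = 5727
after_gb = 721

savings_gb = before_gb - after_gb
ratio = before_gb / after_gb

print(savings_gb)            # 5006 GB reclaimed
print(round(ratio, 1))       # 7.9 : 1, matching the reported ratio
```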


Provisioning copies of a production database is a common change management activity for most Oracle DBAs. On average, DBA teams have to support between four and seven copies of production, sometimes more. If each database copy were a full copy of production without compression, a substantial amount of disk space would be used. For example, if a DBA team has to maintain three copies of a 5 TB production database, the total amount of space used, including production, would be 20 TB. See figure 5.


Figure 5: Total TB used in creating 3 full copies of production without compression
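The arithmetic behind figure 5 is simple enough to sketch in a few lines of Python (the 5 TB size and three-copy count come from the example above):

```python
# Space consumed when every copy is a full, uncompressed clone of
# production: the example uses a 5 TB database and three copies.
prod_tb = 5
copies = 3

total_tb = prod_tb * (1 + copies)   # production itself plus each full copy
print(total_tb)                     # 20 TB, as shown in figure 5
```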


Using VMAX snapshots (called SnapVX), all database copies initially take no space. A storage snapshot is a set of reference markers pointing back to the source data at a particular point in time. Any change to the source data triggers a write that preserves the original data as it existed at the point in time of the snapshot. That’s the long way of saying production data is protected and cannot be impacted by snapshots. Saving disk space with SnapVX is done using the “nocopy” option, which is the recommended method for creating snapshots on all-flash VMAX arrays. Let’s look at figure 6 to see the disk space savings.

Figure 6: SnapVX /nocopy initial and 10% change in data over 60 days


Looking at the column “SnapVX /nocopy initial copy,” we see how the DBA team can save a substantial amount of space by creating three copies of production (QA, Test, and Sandbox) that initially take no additional space. Saving 15 terabytes of disk space means fewer storage dependencies, greater consolidation, and faster provisioning of databases for the IT organization. Of course, data changes over time, and in the last column, “10% change over 60 days,” we see that production and the copies all hold unique data.


The production database has 0.5 TB of unique data since the initial snapshots, making its total size 5.5 TB. The Quality Assurance, Test, and Sandbox databases each also hold 0.5 TB of changed production data plus 0.5 TB of their own growth, for a total of 1 TB of unique data apiece. Even in this scenario of 10% data change, the entire database ecosystem takes just 8.5 TB. Having a bit of fun: if we factor in a conservative 2:1 compression ratio, the entire ecosystem shrinks to 4.25 TB.
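A quick way to sanity-check the scenario is to model it in Python; the sizes and the 10% change rate come from the text, while the 2:1 compression ratio is the assumed, conservative figure:

```python
# Modeling the figure-6 scenario: three SnapVX nocopy snapshots take no
# initial space; after a 10% change rate each database holds unique data.
prod_tb = 5.0
change = 0.10                      # 10% change over 60 days

prod_total = prod_tb * (1 + change)          # production: approx. 5.5 TB
copy_unique = prod_tb * change * 2           # each copy: 0.5 TB preserved
                                             # production data + 0.5 TB growth
ecosystem = prod_total + 3 * copy_unique     # production + QA/Test/Sandbox
print(ecosystem)                             # approx. 8.5 TB in total

compressed = ecosystem / 2                   # assumed 2:1 compression ratio
print(compressed)                            # approx. 4.25 TB
```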


Dell EMC is so confident customers will achieve an overall data reduction ratio of 4:1 using the VMAX All Flash that it offers a guarantee: the Flash Storage Efficiency Guarantee. “Dell EMC promises our flash arrays will provide a logical usable capacity at least four times the usable physical capacity of your purchased drives — or we’ll give you more drives at no charge.” See figure 7 for how the VMAX 250F All Flash array increases consolidation and provides greater storage efficiency.


Figure 7: Achieve 4X Logical Capacity with the VMAX 250F All Flash array
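To put the guarantee in concrete terms, here is a small sketch; the purchased capacity and price are illustrative placeholders, not Dell EMC figures:

```python
# The Flash Storage Efficiency Guarantee in numbers: logical usable
# capacity is promised to be at least 4x the usable physical capacity.
# Both the 10 TB capacity and the $/TB price below are illustrative.
physical_tb = 10
price_per_physical_tb = 2000

logical_tb = 4 * physical_tb
effective_price = price_per_physical_tb * physical_tb / logical_tb

print(logical_tb)         # 40 TB of logical usable capacity
print(effective_price)    # effective cost per logical TB drops to 1/4
```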


For DBA teams, the Ready Bundle for Oracle enables greater consolidation efficiency and, with the guarantee from Dell EMC, less risk and more protection for your investment. I like to think of the guarantee as Dell EMC’s way of partnering with DBAs and other IT teams to make sure they are successful.

There was a lot of excitement and energy in the air on Monday morning at Oracle OpenWorld. The Dell EMC booth looked great and we were all ready to talk with customers.


In the booth we had a new solution that combines the PowerEdge R940 with the VMAX 250F all-flash array for Oracle databases. The big question was, “Will customers be interested?”


As people started walking by the booth, they would stop and ask about the Ready Bundle for Oracle. We had a lot of great discussions with customers interested in a unified solution like the Ready Bundle. I want to take a moment to give you my take on representing Dell EMC at a show like Oracle OpenWorld. In my opinion, customers go to learn, and the Dell EMC team goes to learn from customers. Every question is important and every discussion is an opportunity to learn what customers are truly interested in.


Here are a few questions and discussions I had on day one. “What is the Ready Bundle for Oracle?”

It’s a solution that includes servers, storage, and networking that has been tested with different database workloads like OLTP and OLAP to optimize performance and provide the best platform for your database ecosystem. People were very interested in how the Ready Bundle was optimized for Oracle. We talked about how our testing captured best practices and performance metrics that enable us to work with you to accurately size the Ready Bundle for your business. For example, customers were interested to learn that the database storage layout is very simple, and following a few recommendations means improved management and protection of the database.

Customer conversation at the show:


Performance is always at the top of minds for people so there were a few questions like, “How is performance?”

Well, in a recent study a two-node Oracle RAC system generated over 275,000 IOPS on a VMAX all-flash array, and the average response times were 0.8 ms for reads and 0.3 ms for writes! We also tested the PowerEdge R940 servers with an OLAP workload: the server was 52% faster for the light workload and 9% faster for the heavy workload than the prior generation of servers… with 16 fewer processor cores. People picked up on the benefit right away. The Ready Bundle for Oracle is very fast, and there is the potential for greater consolidation and thus a TCO savings. If you are interested in learning more, read this blog that was just posted, as it goes deeper into Oracle performance.

There were a few questions about comparing the Ready Bundle for Oracle to Exadata. My response was simple: The Ready Bundle for Oracle has been designed to maximize performance and return on your investment. There is no extra database licensing, hardware licensing, or features that add to the complexity of the database system. Instead, the Ready Bundle for Oracle provides a clear path to a platform with over 20 years of engineering in every layer of the solution.


Right up there with performance were questions about protection. The VMAX 250F all-flash array is a leader in protecting data, with proven features like replication (SRDF) and snapshots (SnapVX). Using replication, the DBA can protect against a disaster and bring up the database in a secondary datacenter. Normally, replication is the responsibility of the storage administrator, but AppSync enables the DBA to configure and manage storage-based replication. Snapshots can be used to quickly copy the production database to a backup server, which has the advantage of off-loading backups from the production server.


We also had a great Oracle protection team at the show. There were many times when customers would start a conversation at one place in the booth and travel over to the data protection experts to talk about Data Domain with DD Boost for Oracle.


There were some fun questions too! “What is Dell EMC doing at Oracle OpenWorld?” I thought it was a great question. We talked about how Oracle and Dell EMC have worked together for over 20 years and have over 80,000 mutual customers. Dell EMC is also a Platinum Partner of Oracle.


Please join us at the Dell EMC booth as we have a great team to answer your questions!


And get a chance to win a Google Home device!

The Ready Bundle for Oracle is a solution designed to boost performance and operational agility for your database ecosystem. Oracle databases run the most complex and critical applications used by a company, often supporting ERP and CRM systems responsible for all back office processes. The pressure to modernize the database infrastructure means businesses are looking for solutions that offer greater agility, operational efficiencies, and resiliency in a single solution.


The Ready Bundle for Oracle integrates Dell EMC PowerEdge R940 servers, networking, and the enterprise VMAX 250F all-flash array into a Ready Solution. There is little, if any, of the risk associated with the best-of-breed approach, whose design, test, and release phases can cumulatively take weeks or months to complete. All the engineering is included in the Ready Bundle, providing the business with a faster time-to-value in reaching operational readiness.


There is less risk too, as the entire solution is supported by a single vendor, full service company. Most anyone in the IT Organization understands the complexity of coordinating multiple vendors to resolve an issue impacting operations. Dell EMC solves this support challenge with three years of award-winning ProSupport services and a three-year extended hardware warranty. Application owners and DBAs call one number for server, networking, and storage support with the Ready Bundle for Oracle.


Check out the performance

One of the key advantages of a Ready Solution like the Ready Bundle is the Oracle performance testing we complete in our labs to ensure accurate sizing for your workloads. Testing starts with an Online Transaction Processing (OLTP) battery of tests to validate the solution. This test is optimized for ERP, CRM, and other system-of-record applications. The goal of these tests is to evaluate system performance under a high volume of transactions in which response times remain under 1 millisecond.


In a recent VMAX Engineering white paper called Dell EMC VMAX all-flash storage for mission-critical Oracle database, the first test generated 275,626 IOPS across 32 SSD drives in a RAID 5 configuration while response times remained under the gold standard of 1 millisecond. See figure 1.


Figure 1: Two-node Oracle RAC configuration driving over 275K IOPS with sub-millisecond latencies


Notice the speed of the average write times for data and log write operations. This is because the 1 TB VMAX storage array cache is designed to immediately acknowledge all write requests; thus, both data writes and log writes come in at an outstanding 0.3 millisecond average response time.


What about data read response times? The massive number of IOPS generated in this test was purposely intended to force the VMAX all-flash array to read from the flash drives. This means most reads were not satisfied by the VMAX cache, which is the worst-case scenario. Even with the majority of reads served from the flash drives, the average read response time was 0.8 milliseconds.

Typically, production Oracle RAC systems do not generate nearly as many IOPS as this test did, but this leads to another advantage of the Ready Bundle for Oracle: it can easily support large database ecosystems and meet performance Service Level Agreements (SLAs).


Let’s use a scenario of a database ecosystem consisting of production, quality assurance, test and sandbox. See figure 2 for the daily average IOPS for each of the databases.


Figure 2: Scenario of Oracle database ecosystem


Figure 2 shows the four databases account for 65,000 IOPS, which is a fraction of what was achieved in our testing. Looking at it another way, it would take roughly four to five times the number of databases (17 to 20 databases) to match the IOPS generated in our test. This means the Ready Bundle for Oracle offers exceptional scalability for database workloads.

OLAP test results

The second type of database workload tested was Online Analytical Processing (OLAP), used for business intelligence, reporting, and data mining. OLAP uses de-normalized data, fewer tables, and longer queries that depend more on throughput than on response times. Let’s focus on the PowerEdge R940 server in terms of OLAP performance to provide a more thorough overview of the entire solution. The new 14th-generation PowerEdge R940 server was tested using OLAP TPC-H-like tests with Oracle’s in-memory database option to evaluate processor performance. See figure 3 for test results.


Figure 3: Prior generation PowerEdge R930 compared to PowerEdge R940 in Oracle OLAP Tests


Looking at this graph, the lower the bar, the faster the average execution time of the queries. In the light OLAP test with one user and 22 queries, the R940 completed queries 52% faster than the R930! In the heavy OLAP test with seven users and 154 queries, the R940 completed 9% faster. There is another factor to this testing: the PowerEdge R940 was faster even though the server had 16 fewer processor cores than the R930! Dell EMC estimates that using a new 14th-generation PowerEdge R940 server with fewer, faster processor cores could save customers on Oracle licensing.
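As a hedged sketch of why fewer cores can translate into licensing savings: Oracle Enterprise Edition processor licensing is commonly computed as cores times a core factor (0.5 for most x86 processors per Oracle’s public core factor table). The list price below is an illustrative placeholder, not a quoted figure:

```python
# Rough licensing arithmetic: processor licenses = cores x core factor.
# The 0.5 core factor applies to most x86 processors per Oracle's public
# core factor table; the per-license list price here is illustrative.
core_factor = 0.5
fewer_cores = 16              # cores saved by the R940 in the OLAP tests
list_price = 47500            # illustrative per-processor-license price

licenses_saved = fewer_cores * core_factor
print(licenses_saved)                  # 8 fewer processor licenses
print(licenses_saved * list_price)     # illustrative license cost avoided
```

Actual savings depend on edition, options, and negotiated pricing, so treat this purely as a back-of-the-envelope model.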


It’s the comprehensive performance testing of the Ready Bundle for Oracle that makes this optimized database solution ideal for ecosystems of all sizes. The PowerEdge R940 accelerates database workloads using fewer processor cores, and the VMAX 250F provides a wealth of IOPS and, more importantly, response times under one millisecond. There is no integration work or risk, as the Ready Bundle is delivered to the datacenter as a unified solution that DBA teams can start using very quickly.

Tomorrow I will focus on the value of consolidation for databases and the big disk space savings you can experience using the Ready Bundle for Oracle. If you have any questions, I’ll be in the Dell EMC booth at Oracle OpenWorld 2017, or email me at




Tweet this document:

#EMC #XtremIO all flash array uses intelligent software to deliver unparalleled #Oracle database performance New blog


#XtremIO X-Brick can lose 1 storage controller and multiple SSD and still have your #Oracle database up and running:



Related content:


EMC Optimized Flash Storage for Oracle databases






Click to learn more about XtremIO in the EMC Store

The EMC XtremIO storage array is an all-flash system that uses proprietary intelligent software to deliver unparalleled levels of performance. XtremIO’s inline data reduction stops duplicated write I/Os from being written to disk, which improves application response time. XtremIO is highly scalable: performance, memory, and capacity increase linearly. XtremIO has its own data protection algorithm dedicated to fast rebuilds and all-around protection, and it performs better than the traditional RAID types. The application I/O load is balanced across the XtremIO system.

XtremIO provides native thin provisioning. All volumes are thin provisioned as they are created. Since XtremIO dynamically calculates the location of the 4 KB data blocks, it never pre-allocates or thick provisions storage space before writing the actual data. Thin provisioning is not a configurable property; it is always enabled. There is no performance loss or capacity overhead. Furthermore, no volume defragmentation is necessary, since all blocks are distributed over the entire array by design.
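As a conceptual illustration of inline deduplication (a toy model, not the actual XIOS implementation), content-addressing means a duplicate 4 KB block is never written twice:

```python
# Simplified sketch of content-addressed inline deduplication: each 4 KB
# block is fingerprinted, and a block whose fingerprint already exists is
# never written to flash again. Structures are illustrative, not XIOS.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}          # fingerprint -> physical block payload
        self.volume = {}          # logical address -> fingerprint

    def write(self, addr, block_4k):
        fp = hashlib.sha1(block_4k).hexdigest()
        if fp not in self.blocks:       # only unique content is stored
            self.blocks[fp] = block_4k
        self.volume[addr] = fp          # logical map is always updated

store = DedupStore()
store.write(0, b"A" * 4096)
store.write(1, b"A" * 4096)    # duplicate write: no new physical block
store.write(2, b"B" * 4096)
print(len(store.volume), len(store.blocks))   # 3 logical, 2 physical
```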


There are many environments, applications, and solutions that would benefit from the addition of an XtremIO storage array, including Virtual Desktop Infrastructure (VDI), server virtualization, and database analytics and testing. The idea is to implement XtremIO in an environment where there is a high number of small random I/O requests, low latency is required, and data has a high rate of deduplication. These features are very beneficial for Oracle database performance; hence, Oracle DBAs will love working on these all-flash storage arrays.


The benefits of XtremIO extend across multiple audiences in the IT organization.


Application owners benefit from accelerating performance resulting in faster transactions, scaling more end-users and improving efficiency.


Infrastructure owners can now drive consolidation of database infrastructure even across mixed database workload environments, whether physical or virtual, and service all environments with all flash.


DBAs can now eliminate the need for constant database tuning and chasing hot spots. They can provision new databases in less time and reduce downtime for capacity planning and growth management.


CIOs can improve overall database infrastructure economics through consolidating databases and storage and controlling costs even as multiple databases are deployed and copied over time.




XtremIO supports both 8 Gb/s Fibre Channel (FC) and 10 Gb/s iSCSI with SFP+ optical connectivity to the hosts. Each X-Brick provides four FC and four iSCSI front-end ports. Access to the XtremIO Management Server (XMS) or to the Storage Controllers in each X-Brick is provided via Ethernet. XtremIO can also use LDAP to provide user authentication.


Fibre Channel (FC) is a serial data transfer protocol and standard for high-speed, enterprise-grade storage networking. It supports data rates up to 10 Gbps and delivers storage data over fast optical networks. Basically, FC is the language through which storage devices such as HBAs, switches, and controllers communicate. The FC protocol helps clear I/O bottlenecks and makes the database faster.



XtremIO offers storage connectivity via Fibre Channel and iSCSI; therefore, the proper cables must be supplied and correctly configured in order to successfully present storage. XtremIO also requires Ethernet connectivity for management. An additional RJ45 port is required if a physical XMS is used.




As in every SAN storage environment, a highly available environment requires at least two HBA connections per host, with each HBA connected to a separate Fibre Channel switch, as shown here. Connecting the XtremIO cluster to the FC switches is also straightforward. Each Storage Controller has two FC ports, so you should connect each Storage Controller to each Fibre Channel switch.


Each X-Brick of an XtremIO system can lose one Storage Controller and still remain fully functional. In general, every host should be connected to as many Storage Controllers as possible on an XtremIO cluster, as long as the host and multipathing software support that number of connections. Best practice is up to four paths per host for single X-Brick clusters, four to eight paths for two X-Brick clusters, and up to 16 paths for a four X-Brick cluster, and never to include more than one host initiator in a zone. To avoid multipathing performance degradation, do not use more than 16 paths per device. All volumes are accessible via any and all of the front-end ports. To get optimal performance from an Oracle database on an XtremIO storage array, these best practices need to be followed.
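Those path-count recommendations can be captured in a small helper for pre-deployment checks; the per-cluster figures come straight from the guidance above, and the function is an illustrative sketch, not an official tool:

```python
# Encoding the multipathing guidance as a quick pre-deployment check:
# maximum recommended FC paths per host by X-Brick cluster size, with a
# hard ceiling of 16 paths per device. Illustrative helper, not a tool.
MAX_PATHS = {1: 4, 2: 8, 4: 16}   # X-Bricks -> max paths per host

def paths_ok(x_bricks, paths):
    """True if the path count respects the stated best practice."""
    return 0 < paths <= min(MAX_PATHS[x_bricks], 16)

print(paths_ok(2, 8))    # True: within the 4-8 guidance for two X-Bricks
print(paths_ok(1, 6))    # False: a single X-Brick tops out at four paths
```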




When performing the cabling for iSCSI connectivity, the ideal configuration is to have redundant paths and redundant switches as well. General best practice for highly available iSCSI environments is for every host to have two physical adapters and for these adapters to be connected to separate VLANs, as shown here. As with FC connectivity, connecting the XtremIO iSCSI ports is easy. Since each Storage Controller has two iSCSI ports, simply connect each SC to a separate iSCSI subnet or VLAN.



In this blog, I have tried to explain the architecture of XtremIO with special reference to the Oracle database. Considering XtremIO and Oracle databases in unison, below are the top five features of running an Oracle database on top of an XtremIO storage array.


  1. Predictable Oracle Performance
    • All DBAs get all flash all the time allowing dramatic improvement in database IOPS with sub-millisecond response times. XtremIO removes nearly all database tuning required for OLTP, data warehouses or analytics.
  2. Oracle Scalability with Endurance
    • Storage capacity is thin provisioned all the time, allocating capacity only when data is written. There is no need for overprovisioning of capacity, no fragmentation and no need for database block space reclamation as you scale. XtremIO also deduplicates Oracle data blocks and the remaining unique data blocks are compressed inline, delivering 17X more capacity access for all of your DBAs.
  3. Amazingly Simple Oracle Provisioning
    • DBAs simply need to request capacity, storage teams define volume sizes, map to Oracle hosts and go. The XtremIO operating system XIOS eliminates the complex configuration steps required by traditional storage—there is no need to set RAID levels, determine drive group sizes, set stripe widths or caching policies, or build aggregates. XIOS automatically configures every volume, every time.
  4. More Oracle Copies without Penalties
    • Storage snapshots are core to enabling DBAs to use multiple copies of production for multiple tasks.  Unlike other storage snapshots, XtremIO snapshots are fingerprinted in memory and deduplicated inline all the time allowing more snapshot capacity for DBAs. Test, development, maintenance and reporting can be deployed in minutes for DBAs all on the same platform as production without impacting production.
  5. Continuous Oracle Operations
    • DBAs no longer need to worry about storage as a source of database downtime. XtremIO Data Protection (XDP) delivers in-memory data protection while exceeding the performance of RAID 1. Combined with active-active hardware, non-disruptive software and firmware upgrades, and hot-pluggable components, XDP removes storage as a source of downtime for DBAs.

N.B.: The above discussion is with respect to XtremIO version 2.4. For the latest features of XtremIO 3.0, please click here.




Tweet this document:

#EMC flash strategy can now be used to power performance across the full spectrum of an #Oracle DBA's area: New blog!


Flash is fast, but it introduces caveats that need to be addressed to take advantage of the potential of flash #EMC


Related content:


EMC VSPEX Design Guide for Virtualized Oracle 11g OLTP - Part 1






Click to learn more about XtremIO solutions in the EMC Store



Over the last 3-5 years, flash technology has become a standout performance-enhancing storage option for the most demanding and mission-critical enterprise applications and databases.  With an industry-leading full range of flash options – from the storage array to your database and application servers – we believe this makes EMC your most powerful partner in flash technology!



Given that flash technology can now be used to power performance improvement across the full spectrum of an Oracle DBA’s areas of responsibility, this article is intended to introduce you to EMC’s flash strategy.






EMC’s flash strategy is “flash everywhere”—flash based on your application needs and the specific considerations of your workload. EMC’s flash storage architecture allows additional data services to provide functionality that’s just not possible with spinning disks. EMC provides a full portfolio of solutions to directly address your specific workload and application needs.






XtremIO All-Flash Array


Get breakthrough shared-storage benefits for Oracle database acceleration, consolidation, and agility, plus scale-out, consistently low latency and high IOPS, data services like deduplication, compression, space-efficient snapshots, and encryption, and an amazing system administration experience.



Flash is fast, but it introduces caveats that need to be addressed with the right architecture so that organizations can take advantage of the potential of flash. EMC XtremIO enterprise storage, EMC’s purpose-built all-flash storage array, is built from the ground up to take full advantage of flash.



XtremIO’s scale-out architecture is designed to deliver maximum performance and consistent low-latency and response times. More importantly, XtremIO’s architecture delivers in-line, always-on data services that just aren’t possible with other architectures.



XtremIO’s in-line data services bring out the value of flash: always-on thin provisioning, in-line data deduplication, in-line data compression, flash-optimized data protection, in-line data at rest encryption, and instantly available snapshots—features that are always-on and architected for performance at scale with no additional overhead.

All this comes while achieving a competitive cost of ownership: XtremIO’s architecture addresses all the requirements for flash-based storage, including achieving longevity of flash media, lowering the effective cost of flash capacity, delivering performance and scalability, and providing operational efficiency and advanced storage array functionality.




Explore XtremIO All-Flash Array »




VMAX, VNX and Isilon Hybrid-Flash Arrays


Get advantages like Fully Automated Storage Tiering (FAST) software that automatically copies frequently accessed data to Flash, and moves other data across your existing storage tiers.



Hybrid storage arrays allow a little flash to go a long way, but flash alone isn’t enough. You can use software like EMC FAST to change the economics of deploying storage. EMC FAST automatically moves the most performance-hungry applications to a tier of flash while allowing lesser referenced data to move to a lower cost spinning disk tier of storage, providing a blended mix of performance and low cost.
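Conceptually, FAST’s tiering decision can be sketched as ranking extents by access frequency and promoting the hottest ones to flash; the structures and thresholds below are illustrative, not the actual FAST policy engine:

```python
# Toy version of the FAST idea: promote the most frequently accessed
# extents to the flash tier and leave the rest on spinning disk.
# Names, counts, and the ranking rule are illustrative only.
def tier(extent_heat, flash_slots):
    """extent_heat: {extent_id: access_count}; returns (flash, hdd) sets."""
    ranked = sorted(extent_heat, key=extent_heat.get, reverse=True)
    hot = set(ranked[:flash_slots])      # hottest extents go to flash
    cold = set(ranked[flash_slots:])     # everything else stays on HDD
    return hot, cold

heat = {"idx1": 900, "tmp": 40, "archive": 2, "redo": 700}
flash, hdd = tier(heat, flash_slots=2)
print(sorted(flash))   # ['idx1', 'redo'] land on the flash tier
print(sorted(hdd))     # ['archive', 'tmp'] stay on spinning disk
```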



While EMC VNX, EMC VMAX, and EMC Isilon have historically been HDD-based solutions, you can now deploy these powerful storage platforms in hybrid configurations.


Explore Hybrid Arrays »






ScaleIO, XtremSF and XtremCache Server Flash


Take advantage of flash in your data center. ScaleIO provides convergence, scale, and elasticity, while maximizing performance. EMC XtremCache delivers turbo-charged performance for individual database servers and their applications.



You can also implement flash at the server level via Peripheral Component Interconnect Express (PCIe) or server solid-state disk (SSD) as local storage devices. This reduces latency and increases input/output operations per second (IOPS) to accelerate a specific application component.

EMC XtremSF’s PCIe flash hardware delivers sub-100-microsecond response times. Configured as local storage, you can leverage XtremSF to accelerate specific workloads or workload components such as database indexes or temp space.




By adding software, you can also use server flash as a caching device. EMC XtremCache software complements hybrid storage arrays. By integrating XtremCache with EMC’s fully automated storage tiering (FAST), organizations can increase performance without sacrificing the advanced data services of a hybrid storage deployment.



Explore Server Flash - XtremCache »

Explore Server SAN - ScaleIO »




As you can see, EMC gives you the flexibility to deploy flash where needed across your Oracle database environments, depending on your performance, cost, capacity, and protection requirements.



In a future article, I will consider specific use cases and solutions for Oracle DBAs to further understand how EMC’s industry-leading flash technology described in this article can be used to dramatically improve your Oracle database performance.


Related content:


EMC VSPEX Design Guide for Virtualized Oracle 11g OLTP - Part 1





Network-Attached Storage (NAS) systems have become commonplace in enterprise data centers. This widespread adoption can be credited in large part to simple storage provisioning and an inexpensive connectivity model compared to block-protocol Storage Area Network technologies (e.g., FC SAN, iSCSI SAN). EMC Unified Storage products offer a flexible architecture and multi-protocol connectivity, enabling connectivity over IP/Ethernet, iSCSI, and Fibre Channel SAN environments. Multi-protocol functionality is available on integrated EMC Unified Storage arrays at very low cost.


NAS appliances and their client systems typically communicate via the Network File System (NFS) protocol. NFS allows client systems to access files over the network as easily as if the underlying storage was directly attached to the client. Client systems use the operating system provided NFS driver to facilitate the communication between the client and the NFS server. While this approach has been successful, drawbacks such as performance degradation and complex configuration requirements have limited the benefits of using NFS and NAS for database storage.


Oracle Database Direct NFS Client integrates the NFS client functionality directly in the Oracle software. Through this integration, Oracle is able to optimize the I/O path between Oracle and the NFS server providing significantly superior performance. In addition, Direct NFS Client simplifies, and in many cases automates, the performance optimization of the NFS client configuration for database workloads.


Direct NFS Client Overview


Standard NFS client software, provided by the operating system, is not optimized for Oracle Database file I/O access patterns. With Oracle Database 11g or above, you can configure Oracle Database to access NFS V3 NAS devices directly using Oracle Direct NFS Client, rather than using the operating system kernel NFS client. Oracle Database will access files stored on the NFS server directly through the integrated Direct NFS Client eliminating the overhead imposed by the operating system kernel NFS. These files are also accessible via the operating system kernel NFS client thereby allowing seamless administration.


Benefits of Direct NFS Client


Direct NFS Client overcomes many of the challenges associated with using NFS with the Oracle Database. Direct NFS Client outperforms traditional NFS clients, is simple to configure, and provides a standard NFS client implementation across all hardware and operating system platforms.

  • Performance, Scalability, and High Availability
  • Cost Savings
  • Administration Made Easy

With EMC Unified Storage, Oracle DNFS connectivity and configuration can be used to deploy a NAS architecture at lower cost and with less complexity than direct-attached storage (DAS) or a storage area network (SAN). EMC Unified Storage can be used with DNFS to:

  • Simplify network setup and management by taking advantage of DNFS automated management of tasks such as IP port trunking and tuning of Linux NFS parameters
  • Increase the capacity and throughput of the existing networking infrastructure


Configure the Oracle DNFS client with EMC VNX Unified Storage


Configure oranfstab:  When you use DNFS, you must create a new configuration file, oranfstab, to specify the options, attributes, and parameters that enable the Oracle database to use DNFS.
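As a sketch of the file's shape, an oranfstab entry names the NFS server, its network paths, and the export-to-mount mappings. The server name, IP addresses, and paths below are hypothetical placeholders:

```
server: vnx-dm2
local: 192.168.1.10
path: 192.168.1.20
local: 192.168.2.10
path: 192.168.2.20
export: /oradata_fs mount: /u02/oradata
```

Listing two local/path pairs is how DNFS load-balances I/O across multiple network interfaces without any OS-level bonding.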


Apply the ODM NFS library:  To enable DNFS, the Oracle database uses a special ODM NFS library. You must replace the standard ODM library with the ODM NFS library.
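As an illustrative sketch (paths assume a typical Linux Oracle home; shut the database down first), Oracle 11.2 and later provide a make target that performs the library swap for you:

```
# Relink with the DNFS ODM library (Oracle 11.2+)
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

# To revert to the standard ODM library:
# make -f ins_rdbms.mk dnfs_off
```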


Enable transChecksum on the VNX Data Mover:  EMC recommends enabling transChecksum on the Data Mover that serves the Oracle DNFS clients. This avoids the likelihood of TCP port and XID (transaction identifier) reuse by two or more databases running on the same physical server, which could possibly cause data corruption.


DNFS network setup:  The network setup can now be managed by an Oracle DBA through the oranfstab file. This frees up the database sysdba from the specific bonding tasks previously necessary for OS LACP-type bonding.


Mounting DNFS:  Add oranfstab to the $ORACLE_HOME/dbs directory. For Oracle RAC, replicate the oranfstab file on all nodes and keep the copies synchronized. When oranfstab is placed in $ORACLE_HOME/dbs, the entries in the file are specific to a single database. The DNFS client searches for mount point entries in the order they appear in oranfstab and uses the first matching entry as the mount point.


Finally, verify that DNFS has been enabled by checking the available DNFS storage paths, the data files configured under DNFS, and the servers and directories configured under DNFS.
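One way to make that check, sketched against the DNFS dynamic views available in Oracle 11g and later:

```sql
-- Servers and exported directories the DNFS client has opened
SELECT svrname, dirname FROM v$dnfs_servers;

-- Database files currently served through DNFS
SELECT filename, filesize FROM v$dnfs_files;

-- Network channels in use per server
SELECT svrname, path FROM v$dnfs_channels;
```

If v$dnfs_servers returns no rows after instance startup and file access, DNFS is not in use and the kernel NFS client is servicing the I/O.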


The EMC Storage Integrator (ESI) for Oracle VM version 3.3 is a plug-in that enables Oracle VM to discover and provision EMC storage arrays. The integration module is built upon the Oracle VM Storage Connect (OSC) framework. The framework provides a set of storage discovery and provisioning Application Programming Interfaces (APIs) that enhance the ability to manage and provision EMC storage in an Oracle VM environment.




Prerequisites:

  • EMC SMI-S Provider server to provision and manage VMAX arrays

These are the steps for installing the ESI for Oracle VM plug-in. We encourage you to visit EMC online support for updates to the ESI installation steps.

  1. Download the installation package file from EMC online support
  2. Log on as the root user to the Oracle VM server
  3. Type the following command to install the package
    # rpm -ivh ./emc-osc-isa-1.0.0.1-1.el6.x86_64.rpm


Edit the ESI for Oracle VM configuration file:


The ESI for Oracle VM uses isa.conf to define its runtime behavior. For example:





  • A string composed of alphanumeric characters, “_”, or “-”: a user-defined prefix for the OSC-managed initiator group, port group, device group, and masking view on VMAX arrays. The prefix makes it easier to differentiate OSC-managed groups from other managed groups.

  • True or False: specifies whether VMAX auto meta is enabled (true) or disabled (false).

  • A setting that enables verbose logging for troubleshooting.


The properties above can be set in the isa.conf file. The file is manually created in plain text under the directory:



The root user must have permissions to the isa.conf file.
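As a minimal sketch of what such a file might look like: only AutoMetaEnabled is named in this blog, so the other key names shown here are hypothetical placeholders for the properties listed above:

```
# Illustrative isa.conf - key names other than AutoMetaEnabled are hypothetical
GroupPrefix=OVM_        # user-defined prefix for OSC-managed groups (hypothetical key)
AutoMetaEnabled=True    # let the VMAX array create meta devices automatically
VerboseLogging=True     # verbose logging for troubleshooting (hypothetical key)
```

Consult the ESI for Oracle VM release notes on EMC online support for the authoritative property names.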

Register a VMAX storage Array

The Oracle VM administrator will need to work with the EMC storage administrator for this one-time registration of the storage array. Because the list below contains sensitive information, the recommendation is to have the EMC storage administrator enter it.

  • Name of the storage array
  • Storage Plugin – Select EMC FC SCSI Plugin
  • Plugin Private Data – Storage array type and ID for example: SYMMETRIX+000195900311
  • Admin Host – Host name or IP address of the host where SMI-S provider is installed
    • Admin Username – to connect to the SMI-S provider
    • Admin Password – to connect to the SMI-S provider

Repeat these steps for each VMAX array that supports Oracle VM.

LUN Discovery


Once the VMAX storage array has been registered, the Oracle VM administrator can refresh the list of available storage arrays and see the newly added EMC array(s). Clicking on a storage array shows the administrator the storage available for use.

Create and remove thin LUNs


Provisioning storage is easy: the Oracle VM administrator follows the normal steps to create a storage repository and present it to the Oracle VM server(s) as a storage pool. Once the storage repository is available, the administrator can add storage resources to virtual machines.

It is important to understand how storage is created on the EMC arrays as part of preparing and configuring storage resources. The parameter “AutoMetaEnabled” directs the EMC storage array to create a thin device in one of two ways:

  • If “AutoMetaEnabled=False”, the plug-in creates a concatenated meta device from newly created member devices and presents it to the Oracle VM administrator. For example, a request for 500 GB of space results in the plug-in using 240 GB of one LUN, 240 GB of another, and 20 GB of a third. A concatenated meta device is a group of storage devices concatenated together to form a larger LUN; when filling one, the first device fills first, followed by the second, and so on.
  • If “AutoMetaEnabled=True”, the plug-in requests that the storage array create the LUN. This offloads provisioning of the disk space to the VMAX array, which automatically creates one concatenated 500 GB (240+240+20) meta device (reusing the example above).

Tip: Planning thin devices for Oracle databases. The maximum size of a standard thin device in a Symmetrix VMAX is 240 GB. If a larger size is needed, a metavolume comprised of thin devices can be created. When host striping is used, as with Oracle ASM, it is recommended that the metavolume be concatenated rather than striped, since the host provides a layer of striping and the thin pool is already striped across its data devices.


After the repository is created all the Oracle VM administrator needs to do is present the repository. Presenting the repository involves selecting which Oracle VM servers can use the current storage repository. It’s that easy!

Use Auto-Provisioning (LUN masking)


Making the discovered array usable by your Oracle VM servers involves access groups. An access group has a name and sometimes a description, but most importantly the administrator can select from a list of available storage initiators and assign which of them belong to the access group. A storage initiator is similar to an email address in that it enables communication between two endpoints. Adding a storage initiator to an access group grants access to the underlying storage; withholding or removing one is called “masking” and is a way to prevent access to the storage array.


Create Clones


Creating clones is a simple four-step process that enables the Oracle VM administrator to quickly create point-in-time copies of virtual machines. A newly created clone is immediately accessible to the host, even while data copying continues in the background. Here are the steps:

  • Select the EMC storage as the Clone Target Type
  • Type the source device name as the Clone Target
  • Select Thin Clone as the Clone Type
  • Click on OK


The ability to create very quick clones of virtual machines enables DBA teams and application owners to save time in activities like patching and functional testing. At EMC we have embraced virtualization and automation to drive Database-as-a-Service within the company and can now provision Oracle, SQL Server, and other databases in one hour. To learn more, I recommend reading the “EMC IT’s Database-as-a-Service” paper and viewing the video “EMC IT’s eCALM Demo.”




What I found amazing about writing this blog is how easy it is to start using the EMC Storage Integrator for Oracle VM and the immediate benefits of auto discovery, fast storage provisioning and ease of management. I was able to summarize the installation steps in about half a page! EMC is integrating up the Oracle stack enabling the Oracle VM administrator and DBA to do more with our storage arrays and this ESI integration is a strong example. Other points of integration include our FREE Plug-in for Oracle Enterprise Manager 12c and the new application (Oracle database) awareness in Unisphere 8.0. Hope you enjoyed reading this blog and let us know if you are using the ESI storage integrator.


Related content:

EMC XtremIO Snapshots are Different!


Virtual Storage Zone: Getting the Best Oracle Performance on XtremIO


XtremIO for Oracle -- Low Latencies & Better Use of Datacenter Resources


Features and Benefits of Using Oracle in XtremIO Array Part 1


Features and Benefits of Using Oracle in XtremIO Array Part 2


Features and Benefits of Using Oracle in XtremIO Array Part 3


EMC XtremIO Introduction - Scalable Performance for Oracle Database



XtremIO Best Practices for Oracle using Advanced Format

Architecting a database on an All-Flash Array (AFA) like EMC’s XtremIO is best done by reviewing practices that optimize I/O performance. One consideration is the use of Advanced Format and how it affects the performance of the database redo logs. Advanced Format refers to a new physical sector size of 4096 bytes (4 KB), replacing the original 512-byte standard. The larger 4 KB physical sector size has these benefits:

  • Greater storage efficiency for larger files (but conversely less efficiency for smaller files)
  • Enablement of improved error correction algorithms to maintain data integrity at higher storage densities [1]


A DBA might be very interested in using a 4 KB physical sector size to gain the efficiencies and improved error correction, but there are a few considerations to review. For example, some applications and databases do not recognize the newer 4 KB physical sector size. At EMC we have extensively tested Oracle on XtremIO following the recommendations in Oracle Support Note “4K ASM” (1133713.1). To address the problem of a database or application not recognizing the new 4 KB physical sector size, there is the option to use 512-byte emulation (512e) mode.



Emulation mode uses a 4 KB physical sector containing eight 512-byte logical sectors. A database expecting to update (read and write) a 512-byte sector can do so by using the logical block address (LBA) to address the logical sector. The 4 KB physical sector size is thus transparent to the database, which writes to the 512-byte logical sector, and backwards compatibility is maintained.


Picture 1: 512-Byte Emulation Mode


Unfortunately, there is the possibility of misaligned 4 KB operations: one 512-byte update causing two 4 KB physical sectors to be updated. Before exploring the impact of misaligned operations on the Oracle database, we need to understand how writes are managed in emulation mode.

Picture 2: Writes in 512-Byte Emulation Mode


As shown above, writes are managed as follows:

  • The entire 4 KB physical sector is read from disk into memory
  • Using the LBA, the 512-byte logical sector is modified in memory
  • The entire 4 KB physical sector is written back to disk


The key point of this read-modify-write process is that the entire 4 KB physical sector is modified: a request to modify one 512-byte logical sector means reading 4 KB into memory and writing the 4 KB physical sector back to disk. For optimal efficiency it would be ideal to update multiple logical sectors belonging to one physical sector in a single operation. When properly aligned, writes to logical sectors map one-to-one to physical sectors and do not cause excessive I/O.


Misalignment is caused by incorrectly partitioning the LUN. To quote Thomas Krenn’s wiki on partition alignment:

  • Partitioning beginning at LBA address 63 as such is a problem for these new hard disk and SSDs[2]
  • If partitions are formatted with a file system with a typical block size of four kilobytes, the four-kilobyte blocks for the file system will not directly fit into the four-kilobyte sectors for a hard disk or the four-, or eight-, kilobyte pages for an SSD. When a four-kilobyte file system block is written, two four-kilobyte sectors or pages will have to be modified. The fact that the respective 512-byte blocks must be maintained simply adds to the already difficult situation, meaning that a Read/Modify/Write process will have to be performed. [2]
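The arithmetic behind that quote can be sketched in a few lines of shell (purely illustrative, not part of any tool mentioned here): a write is misaligned whenever its starting byte offset (LBA × 512) is not a multiple of 4096, so a single 4 KB file-system block can straddle two physical sectors and force two read-modify-write cycles.

```shell
# How many 4096-byte physical sectors does a write of N 512-byte logical
# sectors, starting at a given LBA, have to modify?
physical_sectors_touched() {
  start=$(( $1 * 512 ))           # starting byte offset
  end=$(( start + $2 * 512 ))     # exclusive end offset
  echo $(( (end - 1) / 4096 - start / 4096 + 1 ))
}

physical_sectors_touched 64 8   # aligned 4 KB write at LBA 64 -> 1
physical_sectors_touched 63 8   # misaligned 4 KB write at LBA 63 -> 2
```

The LBA 63 case is exactly the legacy partition-start offset the quote warns about: every 4 KB block write doubles into two physical-sector updates.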

Picture 3: Negative Impact of Misalignment


Quick side note:

I enjoyed reading the blogs “4k Sector Size” and “Deep Dive: Oracle with 4k Sectors” by flashdba. Although flashdba works for a competitor, many of the recommendations apply to all Oracle users.


The solution is not to partition the LUN but to present the unpartitioned device to ASM. There is an excellent blog by Bart Sjerps (Dirty Cache) called “Fun with Linux UDEV and ASM: UDEV to create ASM disk volumes” that provides steps for using unpartitioned devices with ASM. In the blog, Linux UDEV is reviewed as a solution to eliminate the misalignment:

  • We completely bypassed the partitioning problem, Oracle gets a block device that is the whole LUN and nothing but the LUN[3]
  • We assigned the correct permissions and ownership and moved to a place where ASM only needs to scan real ASM volumes (not 100s of other thingies) [3]
  • We completely avoid the risk of a rookie ex-Windows administrator to format an (in his eyes) empty volume (that actually contains precious data). An admin will not look in /dev/oracleasm/ to start formatting disks there[3]
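As a hedged illustration of the approach Bart describes (the WWID, symlink name, and ownership below are hypothetical; see his blog and the asmdisks RPM for a maintained implementation):

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules (illustrative)
# Match the whole unpartitioned LUN by its SCSI WWID and expose it for ASM.
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="36001405abcdef123", \
  SYMLINK+="oracleasm/data01", OWNER="oracle", GROUP="dba", MODE="0660"
```

ASM then scans only /dev/oracleasm/, and because the device is the whole LUN there is no partition offset to misalign.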

Reading through the blog, Bart points out that using UDEV and maintaining the ASM rules can be hard work, so he created a script called “asm” and an RPM called “asmdisks” to automate the use of UDEV with ASM. Highly recommended reading; look at the bottom of the blog for the link to download the RPM. Why not use ASMLib? Bart goes into detail on some of the challenges of ASMLib in the same blog, so rather than list them here I encourage you to review the ASMLib section.


Here are a few examples of how to determine whether you are using emulation mode, as detailed on the Unix & Linux website under “How can I find the actual size of a flash disk?”

Using sgdisk:

sgdisk --print <device>


Disk /dev/sdb: 15691776 sectors, 7.5 GiB

Logical sector size: 512 bytes

The output shows the number of sectors and the logical sector size.


Using the /sys directly:
For the number of sectors:

cat /sys/block/<device>/size


For the sector size:

cat /sys/block/<device>/queue/logical_block_size


Using udisks:

udisks outputs the information directly.

udisks --show-info <device> | grep size

Using blockdev:

Get physical block sector size:

blockdev --getpbsz <device>


Print sector size in bytes:

blockdev --getss <device>


Print device size in bytes:

blockdev --getsize64 <device>


Print the size in 512-byte sectors:

blockdev --getsz <device>
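Pulling those probes together, a small helper can classify the mode from the two sysfs values (a sketch; the device name in the usage comment is illustrative):

```shell
# Classify a device's sector mode from its logical and physical block sizes.
sector_mode() {
  lbs=$1; pbs=$2   # logical and physical block sizes in bytes
  if [ "$lbs" -eq 512 ] && [ "$pbs" -eq 4096 ]; then
    echo "512e (emulation)"
  elif [ "$lbs" -eq 4096 ] && [ "$pbs" -eq 4096 ]; then
    echo "4Kn (native)"
  else
    echo "512n (legacy)"
  fi
}

# Usage against sysfs (device name is illustrative):
# sector_mode "$(cat /sys/block/sdb/queue/logical_block_size)" \
#             "$(cat /sys/block/sdb/queue/physical_block_size)"
sector_mode 512 4096   # prints: 512e (emulation)
```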


Beyond using unpartitioned devices for ASM to bypass the misalignment issue, are there any other recommendations? Interestingly, the database online redo log files have a default block size of 512 bytes. For optimal online redo log efficiency it would be ideal to change from a 512-byte to a 4096-byte block size. As of version 11.2 this can be done by specifying the BLOCKSIZE clause with values of 512 (default), 1024, or 4096 bytes.

Picture 4: Online Redo Log Blocksize Recommendation


For example, in recent XtremIO testing we used:



Before creating new online redo logs with the 4 KB block size, be aware of a known issue with emulation mode. Emulation mode makes the 4 KB physical sector size transparent to the database, so when creating online redo log files the database checks the sector size and finds 512-byte sectors. Unfortunately, discovering 512-byte sectors when attempting to write 4096-byte blocks results in an error like:


ORA-01378: The logical block size (4096) of file +DATA is not compatible with the disk sector size (media sector size is 512 and host sector size is 512) [4]


The solution is to set the hidden database parameter _DISK_SECTOR_SIZE_OVERRIDE to TRUE. This parameter overrides the sector-size check performed when creating optimally sized redo log files, and it can be changed dynamically.




If you create new online redo logs with the 4 KB block size, you might have to drop the original 512-byte redo log file groups.
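Put together, the change might look like the following sketch (group numbers, sizes, and the +DATA disk group are illustrative; test on a non-production copy first):

```sql
-- Override the sector-size check (hidden parameter; can be set dynamically)
ALTER SYSTEM SET "_disk_sector_size_override" = TRUE;

-- Add new groups with the 4 KB block size (11.2+)
ALTER DATABASE ADD LOGFILE GROUP 4 ('+DATA') SIZE 1G BLOCKSIZE 4096;
ALTER DATABASE ADD LOGFILE GROUP 5 ('+DATA') SIZE 1G BLOCKSIZE 4096;
ALTER DATABASE ADD LOGFILE GROUP 6 ('+DATA') SIZE 1G BLOCKSIZE 4096;

-- Switch logs until each old 512-byte group is INACTIVE, then drop it
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
```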



Summary of 512e emulation mode best practices:

Below is a summary of the recommendations in this blog. Time for a disclaimer: I encourage you to review Oracle Support Note 1681266.1, “4K redo logs and SSD based storage,” as a good place to start in determining whether these recommendations are a good fit for your databases. Test these recommendations on a copy of production and validate the impact. Now that the disclaimer is over, here are the steps.

  • Create your LUNs
  • Validate the use of emulation mode
    • Example: blockdev --getss <device>
  • Do NOT partition the LUNs
  • Use UDEV to create ASM disk volumes (see the Dirty Cache blog)
  • Set the database initialization parameter “_DISK_SECTOR_SIZE_OVERRIDE”=“TRUE”
  • Create online redo logs using the BLOCKSIZE clause with 4096 bytes

4KB Native Mode

In native mode the physical sector size and the logical sector size are the same: 4 KB. If planning to use Advanced Format native mode, the DBA will have to create 4 KB block size redo logs. Outside of the redo logs there are a few other considerations for 11gR2 and higher.

Picture 5: 4K Native Mode


Unfortunately, I don’t have the time to fully explore 4K native mode here, but I promise a follow-up in my next blog. I did want to provide the summary table below because it highlights Oracle’s recommendation to use 4 KB online redo logs for both emulation mode and native mode. In native mode there is no option to use 512-byte redo logs, so in a good way Oracle automatically directs the DBA to the optimal 4 KB block size for the redo logs.

Summary table: Supported and Preferred Modes

Mode Type           512-Byte Redo Logs      4 KB Redo Logs

Emulation Mode      Supported               Preferred

Native Mode         Not supported           Supported


In the above summary table we see that emulation mode supports both 512-byte and 4 KB redo log block sizes, but 4 KB is preferred. The overall recommendation is to use a 4 KB block size for your redo logs.


Next Blog: Exploring the 4K native mode and insights into XtremIO testing using Advanced Format.


Table of References

[1] “Advanced Format,” Wikipedia, URL:

[2] Thomas Krenn wiki, “Partition Alignment,” URL:

[3] Bart Sjerps (Dirty Cache), “Fun with Linux UDEV and ASM: UDEV to create ASM disk volumes,” URL:

[4] Oracle Support Note 1681266.1, “4K redo logs and SSD based storage”



Just announced, the new VMAX3 delivers more performance, simplifies storage management, and is the optimum route to the hybrid cloud. In today’s Redefine Possible mega event you might have heard about the new capabilities of the VMAX3, but perhaps you didn’t catch the part about Oracle application awareness. In the interest of full disclosure, I missed the part about how, using the VMAX3, BOTH the Oracle DBA and the storage administrator can monitor and tune database performance. That means it’s time to blog!

Driving Awareness

At the time of writing this blog it might be a challenge to find information on VMAX3 application awareness, with the exception of this 6-minute video: Application Awareness Demo. In the video we learn that there is a “dedicated URL for DBAs” to access DBclassify in Unisphere 8.0. This means the DBA can independently access the DBclassify web page without having to contact the storage administrator and can gain immediate insight into database storage performance.

Picture 1: DBclassify Dashboard


In the picture above we see the DBclassify dashboard and several statistics: IO Wait vs. Non IO Wait, average active session waits, response time, IOPS, and throughput. The solid lines in the graph denote performance as reported by the database, and the dashed lines show storage performance. In this way it is very easy to investigate physical reads, writes, and redo writes and see the delta between database and storage metrics. This eliminates the blame storms that sometimes occur between the database and storage administrators.


Clicking on the ‘Analytics’ tab up top brings the DBA to a management page that shows IO Wait over time and which database objects were active during that time. This gives the DBA the ability to investigate historical performance and find which database objects were used during high IO wait times.

Picture 2: DBclassify Analytics


Looking to the right in the list of database objects, you will see a bar that indicates the type of activity for the object: random read, sequential read, direct I/O, system I/O, commit I/O, and other I/O. This is important because moving database objects to enterprise flash drives is best for objects that are weighted towards random reads. For example, given a choice between an object with mostly random reads (purple) and another with direct I/O (green), the best opportunity to improve performance is the object with the purple bar.

Picture 3: DBclassify Hinting


Sometimes it’s not what you hear that matters but what you see. This picture was taken at approximately 3 minutes and 24 seconds into the video, and the objects selected are very important: all three show a healthy amount of random reads. The selected objects then become part of a hinting wizard in which the DBA can move the objects to the flash drives.


Picture 4: DBclassify Hinting Wizard


In the hinting wizard the DBA can:

  • Assign a name to the hint: for example, “Billing” to identify objects related to billing activity
  • Priority: hint priority represents how important the objects are
    • 1: Top
    • 2: Very high
    • 3: High
    • 4: Above average
  • Hint Scheduling
    • One time: a one-time promotion of database objects
    • Ongoing: keep the database objects on the faster tier of storage
    • Recurrence: schedule a time to promote database objects to accelerate recurring workload cycles

Once a hint has been created, the DBA can monitor its effectiveness. There is also a hint management tab that shows all the hints created (not shown) and allows the DBA to enable, disable, edit, and remove hints. As you can see, using the hinting wizard the DBA can improve database and application performance at a very granular level by selecting database objects to be promoted to flash drives. EMC is truly enabling the Oracle DBA to use VMAX storage arrays!


Stay tuned, more is coming!

Oracle DBAs and EMC Storage Administrators working together


Using tools like the OEM 12c Plug-in & DBClassify




Customer facing slides attached at the bottom of the blog.

Perhaps you have been in one of those meetings: the Oracle DBA believes storage is causing a performance problem and the Storage Administrator insists there are no storage bottlenecks. You might have even said to yourself, “If there was one tool showing both Oracle database and storage metrics together this might be resolved very quickly.” And you would be correct! That unified vision showing in real-time both Oracle and storage performance together could really shorten the time of remediation and strengthen collaboration.



The good news is that EMC is working closely with Oracle to continue development of our Oracle Enterprise Manager plug-in. This FREE plug-in enables the DBA to view EMC storage configuration and performance metrics (for both VMAX and VNX) within OEM 12c. On the other side, Storage Administrators can use DBClassify, a service that involves setting up monitoring software and training. With DBClassify, the Storage Admin and DBA both have insight into database and storage performance, with the ability to pin blocks in storage cache to assure performance.



I would maintain that applications like DBClassify and the OEM plug-in are about bridging the divide between databases and storage. When I started using the OEM 12c plug-in, I had to climb a learning curve to understand the configuration and metrics presented on screen. Thanks to some outstanding storage administrators, I quickly learned the configuration differences between traditional RAID groups and storage pools. This is a key point: both Oracle DBAs and Storage Administrators are critically important in architecting and supporting enterprise systems, but we have different expertise and terminology that require us to collaborate to drive success. Most Oracle DBAs use AWR reports and view performance in terms of wait times, latencies, and end-user experience. Storage Administrators view performance in terms of IOPS, storage configuration, and advanced features like FAST Cache or FAST VP. Our performance worlds are different yet highly dependent upon each other.



Collaborating means we learn more about each other’s area of expertise using tools that let us form a common analysis. After learning to use the OEM plug-in, I was able to work closely with the Storage Administrator to accurately identify the storage pools on the VNX array supporting my databases and review whether those pools were FAST Cache enabled. Great from an enablement and collaboration standpoint, right? Just as important, we were able to turn on FAST Cache for my databases after a few minutes of discussion. Better collaboration can mean faster action and, hopefully, time saved not having to attend long meetings.



If you enjoyed reading this blog, please join my session at EMC World, where we will explore in detail how EMC Storage Administrators and Oracle DBAs can collaborate, remediate, and architect storage together.


VNX Multicore FAST Cache Improves Oracle Database Performance


Comparing Papers: VNX7500 to the new VNX8000



Customer facing slides attached at the bottom of the blog.

It’s 2014 and time for a fresh approach to looking at Oracle storage performance. Let’s compare two EMC dNFS proven solutions to see the performance benefits the new VNX offers Oracle DBAs. I’ll be comparing some of the findings in EMC VNX7500 Scaling Performance for Oracle 11gR2 RAC on VMware vSphere 5.1 (published in December 2012) with EMC VNX Scaling Performance for Oracle 12c RAC on VMware vSphere 5.5 (published in December 2013). I was going to add NetApp to the mix, but unfortunately finding a recent performance paper was difficult. Perhaps the best place to start is with a comparison of the two studies.


The table below shows that the major difference between the VNX7500 and VNX8000 papers is in the database versions: 11gR2 versus 12cR1. Reading through both papers, there were no findings suggesting one database version was faster than the other; to put it another way, the focus of the papers was not to compare the performance of 11gR2 to 12c. Looking over the Oracle stacks across both papers, we have a close apples-to-apples comparison.

Table 1: Comparing software and network stacks
Oracle Stack on VNX7500        | Oracle Stack on VNX8000
Oracle RAC 11g Release 2       | Oracle 12c Release 1
Oracle Direct NFS (dNFS)       | Oracle Direct NFS (dNFS)
Interconnect networks: 10 GbE  | Interconnect networks: 10 GbE
Oracle Enterprise Linux 6.3    | Red Hat Enterprise Linux 6.3
Swingbench 2.4                 | Swingbench 2.4

Below is a table comparing the storage configuration of the VNX7500 to that of the VNX8000. The differences between the two arrays include:


Table 2: Comparing storage array configuration

VNX7500 Array Configuration                  | VNX8000 Array Configuration
2 storage processors, each with 24 GB cache  | 2 storage processors, each with 128 GB cache
75 x 300 GB 10K SAS drives                   | 75 x 300 GB 10K SAS drives
4 x 300 GB 15K SAS drives (vault disks)      | 4 x 300 GB 15K SAS drives (vault disks)
11 x 200 GB flash drives                     | 5 x 200 GB flash drives
4 x data movers (2 primary & 2 standby)      | 4 x data movers (2 primary & 2 standby)
(none)                                       | 9 x 3 TB 7.2K NL-SAS drives


The processors

The VNX7500 (specifications) uses Xeon 5600-series 8-core CPUs with 24 GB of cache, while the VNX8000 uses Xeon E5-2600-series (specifications) 8-core CPUs with 128 GB of cache. Referencing the “CPU benchmarks” here, we find the newer Xeon E5-2680 is at least 200% faster than the Xeon 5600. A faster processor improves nearly everything in the storage array, most notably the MCx multi-core storage operating environment.


FAST Cache Comparison

In the table it is interesting to note the primary difference in the number of flash (SSD) drives used: the VNX7500 paper used 11 x 200 GB flash drives while the VNX8000 paper used only 5 x 200 GB flash drives. Why did EMC engineering decide to use fewer than half the flash drives in the new VNX8000 paper? At this point I decided to investigate the difference in FAST Cache between the two studies. Below are my findings after doing some research. Note that my initial focus is on read I/O performance; I start with the legacy FAST Cache process and build up to how the new Multicore FAST Cache works.


Legacy FLARE FAST Cache

It was helpful for me to do a quick refresher on how the legacy FLARE FAST Cache worked. Many FLARE-based VNX storage arrays are still in use by customers, so modeling how FAST Cache works will help identify what has changed and how the changes improved performance.


Walking through the above picture, on the left side you see the legacy FAST Cache process flow:

  1. Incoming read requests are serviced by FAST Cache first; this improves performance, but because the DRAM cache is faster it is not the ideal performance path
  2. The read request is serviced by FAST Cache: in this case, 5.61 ms
  3. READ MISS: a check is performed against the FAST Cache memory map to determine whether the I/O can be serviced from DRAM cache
  4. If the I/O can be serviced from DRAM cache, the Policy Engine redirects the I/O to the DRAM cache: the best performance and lowest latency
    1. If the data is NOT in DRAM cache, the I/O is serviced by reading from high-capacity disk
  5. Frequently used data is promoted from HDDs into FAST Cache, and subsequent read I/Os are satisfied from FAST Cache
  6. Frequently used data can also be promoted to DRAM cache
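To make the ordering concrete, the legacy read path can be sketched as a toy model. The cache structures and latency values below are illustrative assumptions, not FLARE internals; only the 5.61 ms FAST Cache figure comes from the VNX7500 paper.

```python
# Toy model of the legacy FLARE read path: FAST Cache (SSD) is consulted
# before the DRAM cache, so even DRAM-resident data pays the SSD-first ordering.
# Latencies are illustrative; only 5.61 ms (FAST Cache) comes from the paper.

LATENCY_MS = {"fast_cache": 5.61, "dram_cache": 0.5, "hdd": 96.0}

def legacy_read(block, fast_cache, dram_cache):
    """Service a read in the legacy order: FAST Cache -> DRAM cache -> HDD."""
    if block in fast_cache:              # step 2: serviced by FAST Cache
        return LATENCY_MS["fast_cache"]
    if block in dram_cache:              # steps 3-4: memory-map check, DRAM hit
        return LATENCY_MS["dram_cache"]
    fast_cache.add(block)                # step 5: hot data promoted into FAST Cache
    return LATENCY_MS["hdd"]             # step 4.1: read from high-capacity disk

print(legacy_read("blk", {"blk"}, set()))  # FAST Cache hit: 5.61
```

Note the asymmetry the model exposes: a DRAM-resident block still goes through the FAST Cache lookup first, which is exactly the ordering problem the new design fixes.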


On the right side you see how the read I/O is not ordered for optimal performance. The absolute fastest performance is obtained when the read I/O is serviced by the DRAM cache; however, with the legacy FLARE version of FAST Cache, the initial read I/O request is serviced by FAST Cache. Having read I/Os initially serviced by FAST Cache provides good performance, as seen in the paper VNX7500 Scaling Performance for Oracle 11gR2 on VMware vSphere 5.1, just not the best possible performance. For example, the baseline ‘db file sequential read’ average wait time was 96+ ms, and with FAST Cache it dropped significantly to 5+ ms, a performance improvement of 85%. So FAST Cache provides a strong performance boost, but there is certainly an opportunity for even more performance by simply changing the order in which a read I/O is serviced.


On the right side it is very easy to see how the read I/O order is not optimal for performance:


  1. SSD is the first tier of performance for read I/Os but is slower than DRAM
  2. DRAM is the fastest tier of performance but is referenced after FAST Cache
  3. The HDD tier services the read I/O if both the FAST Cache and DRAM cache lookups result in read misses

New Multicore FAST Cache on the VNX8000

DRAM is used for the Multicore Cache, which means the lowest response times (latencies) achievable will be for host I/Os serviced by the Multicore Cache in the new VNX storage array.


Looking at the above picture, on the left you can see the new Multicore FAST Cache process flow:


  1. Incoming read requests are serviced by the Multicore Cache first, providing the lowest latency and fastest response times.
  2. The read request is serviced by the Multicore Cache with no need to read the FAST Cache memory map (step 3), saving cycles.
  3. READ MISS: a check is performed against the FAST Cache memory map to determine whether the I/O can be serviced by Multicore FAST Cache.
    1. If the I/O can be serviced from Multicore FAST Cache (MFC), the Policy Engine redirects the I/O to the MFC: the second-best performance and latency.
  4. The data is then copied from the Multicore FAST Cache to the Multicore Cache.
    1. If the data is NOT in the MFC, the I/O is serviced by reading from high-capacity disk.
  5. The data is copied from the HDDs into the Multicore Cache, and the read I/O is satisfied from the Multicore Cache.
  6. Frequently used data is promoted to the Multicore FAST Cache by the Policy Engine.
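The same kind of toy model shows how the new ordering changes the service path. Again, the cache structures and latencies are illustrative assumptions, not MCx internals; the only change from the legacy sketch is which cache is consulted first.

```python
# Toy model of the new Multicore FAST Cache read path: the DRAM-based
# Multicore Cache is consulted first, removing the memory-map lookup from
# the DRAM hit path. Latencies are illustrative assumptions.

LATENCY_MS = {"multicore_cache": 0.5, "mfc": 1.53, "hdd": 18.0}

def mfc_read(block, multicore_cache, mfc):
    """Service a read in the new order: Multicore Cache -> MFC -> HDD."""
    if block in multicore_cache:     # steps 1-2: DRAM hit, no memory-map read
        return LATENCY_MS["multicore_cache"]
    if block in mfc:                 # step 3: Multicore FAST Cache (SSD) hit
        multicore_cache.add(block)   # step 4: copy into the Multicore Cache
        return LATENCY_MS["mfc"]
    multicore_cache.add(block)       # step 5: copy from HDD into Multicore Cache
    return LATENCY_MS["hdd"]

print(mfc_read("blk", {"blk"}, set()))  # Multicore Cache hit: 0.5
```

The design point is that DRAM-resident data now takes the shortest possible path, and every miss warms the Multicore Cache for the next read.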


In the EMC proven solution entitled VNX Scaling Performance for Oracle 12c RAC on VMware vSphere 5.5, the baseline performance of the database wait event ‘db file sequential read’ without Multicore FAST Cache was compared to performance with Multicore FAST Cache. The following ‘db file sequential read’ wait times were observed:


  • Baseline average (no FAST Cache): 18.88 milliseconds
  • With Multicore FAST Cache: 1.52 milliseconds, a latency reduction of 91% for database reads
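The quoted reduction follows directly from the two wait times:

```python
baseline_ms = 18.88  # 'db file sequential read', no Multicore FAST Cache
mfc_ms = 1.52        # with Multicore FAST Cache

reduction_pct = (baseline_ms - mfc_ms) / baseline_ms * 100
print(f"{reduction_pct:.1f}% lower latency")  # 91.9% lower latency
```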


The new I/O path of Multicore Cache, then Multicore FAST Cache, yields performance gains for your databases and applications through an improved I/O service path. It is interesting that a simple change in which cache services the host I/O can have such a profound impact on performance. On the right side of the picture it is easy to see how the read I/O order is optimal for performance:


  1. DRAM is the fastest tier of performance and is referenced first
  2. SSD is the second tier of performance for read I/Os
  3. The HDD tier services the read I/O if both the Multicore Cache and Multicore FAST Cache lookups result in read misses


To see the performance deltas more clearly, let’s compare the wait times between the two studies in more detail.


Comparing Overall Performance Improvements


Comparing the VNX7500 study to the new VNX8000 study, we see a major overall performance improvement across all “db file sequential read” requests.



In the picture above, the baseline value for the VNX7500 was 96 ms while the VNX8000 came in at only 18 ms, a 5X improvement in baseline performance for ‘db file sequential reads’. This baseline improvement is significant because databases NOT using Multicore FAST Cache will most likely experience a performance boost when migrated to the new VNX8000. At most companies the Multicore FAST Cache (MFC) will be used for production databases and perhaps TEST copies of production. Development databases typically will not use MFC but can represent the majority of capacity on the storage array. Increasing performance by 5 times for databases not using MFC could be very beneficial to the DBA team.


The Multicore FAST Cache (MFC) design uses the Multicore Cache as the first performance tier for read I/Os. This reordering has increased performance 3X when comparing the VNX7500 with FAST Cache (5.61 ms) to the VNX8000 with MFC (1.53 ms). This is a substantial read performance improvement for Oracle databases, meaning faster response times and lower latencies.
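These speedup ratios are easy to verify from the wait times quoted in the two papers:

```python
# Baseline (no FAST Cache) and cached 'db file sequential read' wait times, in ms.
vnx7500 = {"baseline": 96.0, "fast_cache": 5.61}
vnx8000 = {"baseline": 18.0, "mfc": 1.53}

print(round(vnx7500["baseline"] / vnx8000["baseline"], 1))  # 5.3 -> the "5X" claim
print(round(vnx7500["fast_cache"] / vnx8000["mfc"], 1))     # 3.7 -> reported conservatively as "3X"
```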


In summary, Oracle DBAs can substantially improve performance for their Oracle databases by moving to a new VNX storage array. The performance gains are not limited to databases using the new MFC design but apply to databases not using MFC too. In comparing the two studies we found that non-MFC databases could run 5X faster, and MFC databases 3X faster, on the new VNX storage arrays. Perhaps the most important point is that the VNX8000 was faster while using 50% fewer flash drives. This means Oracle DBAs can drive greater database performance at lower cost (fewer flash drives), providing an excellent TCO to the business. Now that is cool!


I hope you enjoyed reading this first blog, and look for Part 2, in which we explore more points of comparison between the two studies.

Sam Lucido

The 3 E's in EMC Elect

Posted by Sam Lucido Jan 16, 2014

You might have noticed lots of tweets on Twitter with the hashtag #EMCElect2014. What is EMC Elect? It is a group of people that enjoy socializing EMC solutions. We are now part of a team called “EMC Elect,” and as a team we will collaborate to turn the volume up on EMC solutions. A big part of this is listening to you, the customer, to learn, collaborate, create, and have fun with all we do together. It’s this collaboration with you that enables us to socialize the best message and ensure our efforts make an impact for our customers.


Most everyone has heard there is no "I" in TEAM! In "EMC Elect" we have three amazing E's:

  • Engagement
  • Energy
  • Energizers





The secret is that the EMC Elect team is all about you, but we call it “Engagement.” Having a bit of fun, I Googled “engagement” and found:


  1. A formal agreement to get married
  2. An arrangement to do something or go somewhere at a fixed time




The first definition is not going to work, as I have a wife and kids; much more interesting is the second definition. Using the second meaning, I’m going to take some creative license and apply it to the EMC Elect program:




“We promise to listen to you across all types of media (Twitter, Facebook, Communities, and more) and events like EMC World to collaborate and socialize EMC solutions for Oracle in 2014 and beyond.” (Swap “Oracle” for the application or technology you are most interested in.)




So much depends upon collaboration, as no one knows better than you how to make EMC products and solutions the best for all things Oracle. A complete list of all the EMC Elect members is in the blog “The EMC Elect of 2014 - Official List,” but if you are interested in the Oracle members on the team, here is the short list:


Allan Robertson | EMC | @dba_hba
Jeff Browning | EMC | @OracleHeretic
Sam Lucido | EMC | @Sam_Lucido


Disclaimer: in case someone was missed, please let me know and I’ll add them to the list.




Listening to customers is the best way of understanding how we can mutually develop architectures to provide the best value to your business. 2014 is going to be a great year of engaging with you to socialize EMC solutions!






It is a great honor to be working with the people on the EMC Elect team and, more importantly, you the customer. There is so much positive energy from the team on performance, protection, continuous availability, and many other solutions. It’s that boundless energy that is going to make this year so exciting. The thing about energy is that it can come from anywhere, and anyone can create the new trend that electrifies the community. What is truly exciting is the EMC Elect team working together with customers so we can move from creating a few thunderbolts to a magnificent display of lightning that is inspiring.




It’s time to ride the 2014 technology highway and push the pedal to the metal: can’t drive 55.





An Energizer could be anybody that sparks the community. It’s fun to be part of a new idea or cause and become the crusader blogging about it. Below is a fun picture taken at VMworld 2013 showing the “Monster VM.”


Want to have some fun? Try Googling “monster vm vmworld” to see all the related content that came out of VMworld. Energizers don’t have to be risk takers (though many times they are); more importantly, they generate enthusiasm, excitement, and interest, and are quick to give others credit. How awesome would it be to have a huge community of Energizers!




I’m looking forward to being part of the EMC Elect team and even more to working with you, the customer. Let’s engage, get energized, and become Energizers of EMC.

Continuous Availability with Extended Oracle RAC and EMC VPLEX Metro

Continuous Availability for Oracle


Over the years we have presented at shows like IOUG COLLABORATE, EMC World, and Oracle OpenWorld on how VPLEX Metro and Extended Oracle RAC can together provide a zero-downtime, continuous-uptime architecture. Those 60-minute sessions are important, as we have to cover the foundation of how to architect this continuous-uptime solution. Here, let’s explore a technical tip and show where you can learn more about this Oracle solution.


Logging Volume Considerations


Benefits of reading this configuration tip include:


  • Increased performance for applications
  • More granular monitoring capabilities
  • Faster sync restores after a failed WAN connection


The content for this tip can be found on the ECN VPLEX community in a blog called “Logging Volume Considerations.” A logging volume is dedicated capacity for tracking any blocks written to a cluster. To use an Oracle analogy, the logging volume for VPLEX is similar to the online redo logs for a database. A logging volume is a required prerequisite to creating distributed devices and remote devices. The default configuration is a one-to-many relationship, meaning many distributed devices use the same logging volume.


The above picture shows how the logging volume is used for tracking written blocks. Several components of the VPLEX Metro architecture are illustrated in the picture, and defining them is important to our understanding:

Storage Volumes, shown at the bottom of the picture next to the VMAX and VNX, are LUNs. These LUNs are presented to the back-end ports of VPLEX and are therefore visible and available for use.

Extents are created from the storage volumes. The general recommendation is to have one extent per storage volume, but multiple extents per storage volume are supported if necessary. For example, if you plan to use VPLEX for a database requiring 1 TB of capacity, create one extent of the same size (1 TB).

Devices are created from extents. Multiple extents can be used to create one device. When configuring devices the administrator specifies the RAID type: for example, RAID 0 (no mirroring of devices), RAID 1 (mirroring of devices), or RAID-C (concatenating devices). As DBAs we write scripts to concatenate files together, so we are familiar with the concept. From the storage perspective, RAID-C is the ability to create devices that span multiple extents. One tip: avoid mixing RAID 0 and RAID-C devices within the same virtual volume. A homogeneous RAID 1 or RAID-C configuration for the virtual volume improves responsiveness and reduces complexity.

In a VPLEX Metro configuration, devices are referred to as distributed devices, meaning they are mirrored across two VPLEX clusters. As you might have guessed, VPLEX Metro requires that distributed devices be configured using RAID 1.

Virtual Volumes are built from devices. It is the virtual volume that is presented to the Oracle database server. Because virtual volumes appear as normal storage to the Oracle database, the VPLEX Metro configuration is transparent and adds no complexity or management overhead for the DBA. In a VPLEX Metro configuration, the virtual volume is referred to as a distributed virtual volume.
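The hierarchy described above can be summarized in a small sketch. The class names and attributes here are my own illustration of the layering, not actual VPLEX CLI objects.

```python
# Sketch of the VPLEX provisioning hierarchy described above:
# storage volume (LUN) -> extent -> device (with a RAID type) -> virtual volume.
# Class names are illustrative only, not actual VPLEX objects.
from dataclasses import dataclass
from typing import List

@dataclass
class StorageVolume:        # a LUN presented to the VPLEX back-end ports
    name: str
    size_tb: float

@dataclass
class Extent:               # recommendation: one extent per storage volume, same size
    source: StorageVolume
    size_tb: float

@dataclass
class Device:               # VPLEX Metro requires RAID 1 for distributed devices
    extents: List[Extent]
    raid: str

@dataclass
class VirtualVolume:        # what the Oracle database server actually sees
    device: Device

lun = StorageVolume("vmax_lun_01", 1.0)   # hypothetical 1 TB LUN for the database
ext = Extent(lun, lun.size_tb)            # one extent, same size as the LUN
dev = Device([ext], raid="RAID 1")        # mirrored, as Metro requires
vvol = VirtualVolume(dev)                 # presented to the database server
print(vvol.device.raid)  # RAID 1
```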

Recommendations for building a dedicated logging volume:

Oracle DBAs working in a physical (non-virtualized) infrastructure like dedicating one server to one database, because that guarantees the database will not have to compete for server resources. Most of the time this 1-to-1 architecture is reserved for production, and its benefits are consistent performance and more granular monitoring. Building a dedicated logging volume for your production database on VPLEX offers similar advantages. The best practices for creating logging volumes can be found in the paper “Vblock Data Protection Best Practices for EMC VPLEX with Vblock Systems.” Below are some of the guidelines for building the logging volume:
  • Create one logging volume for each cluster
  • Use RAID 1 for logging volumes
  • Configure at least 1 GB (preferably more) of logging volume space for every 16 TB of distributed device space.
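The sizing guideline above reduces to simple arithmetic; for example (the capacities below are hypothetical):

```python
import math

def min_logging_volume_gb(distributed_device_tb):
    """At least 1 GB of logging volume per 16 TB of distributed device space."""
    return max(1, math.ceil(distributed_device_tb / 16))

print(min_logging_volume_gb(48))   # 3 GB minimum for 48 TB of distributed devices
print(min_logging_volume_gb(100))  # 7 GB minimum for 100 TB
```

Remember the guideline says "preferably more," so treat this as a floor, not a target.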


Planning for resynchronization in the case of a failure

Most likely your company is using VPLEX Metro with Extended Oracle RAC to create a continuous availability architecture in which the loss of a storage array or data center does not impact the availability of your enterprise applications. When architecting for an unplanned outage, the infrastructure team should consider dependencies related to recovery. In this case, the logging volumes will be subject to high levels of I/O when resynchronizing the local and remote devices. Having a dedicated logging volume for your production database(s) means resynchronization I/O will be for your database and not other applications, translating into faster recovery. When a database shares a logging volume with other applications, resynchronization involves the database and all the other applications, lengthening the time for the devices to reach a synchronous state. Our objective is to avoid this situation by having a dedicated logging volume for the database.



Can you create more than one logging volume to use with the same device? The answer is yes, as this enables the business to grow the logging volume capacity with the growth of the database. The part to be mindful of is the default behavior: “if no dedicated logging volume is specified, a logging volume is automatically selected from any available logging volume that has sufficient space for the requested entries. If no available logging volume exists, an error message is returned.” The quote was taken from the blog “Logging Volume Considerations.”


For more on Extended Oracle RAC with VPLEX Metro, I recommend:


Interested in seeing a live demonstration? Use the Oracle Solution Center by completing this form: Booking request form.

By all appearances, Oracle has made big moves toward embracing a hybrid cloud strategy. Oracle’s most recent press release, entitled “Oracle Licenses VMware vSphere Storage APIs for Oracle Storage,” is very positive news. In this press release, Oracle has licensed VMware Storage APIs to enable customers using VMware virtualization to more effectively manage Oracle on Pillar Axiom and ZFS Storage. This means Oracle storage solutions join EMC and other vendors in offering integration with VMware vSphere. What might customers expect from Oracle using VMware APIs?


vSphere API for Array Integration (VAAI): offloads traditionally expensive resource management of clones and snapshots from the hypervisor to the storage array. Let’s say you are ready to upgrade from 11gR2 to 12c (check out this EMC proven solution for upgrading Oracle) and you have three recovery points built into the upgrade plan. Through VAAI these snapshots will take much less time, as the storage array will do the job! Faster clones and snapshots will reduce the database upgrade time.


VMware vSphere API for Storage Awareness (VASA): This enables Oracle and other storage vendors to provide vSphere with information from the storage array. Information around disk array features like snapshot, replication, thin provisioning and RAID levels represents some of the configuration and status information presented up to vSphere. Having the storage information in vSphere can mean the VMware administrator can more easily use Oracle storage for virtualized databases.


Site Recovery Manager (SRM): automates recovery plans from vCenter Server. Using SRM the VMware administrator can collaborate with the Oracle DBA to include databases and applications in the automation plan. This means with some scripting the databases and applications can start-up at a secondary site. This is very important as all the manual steps can be scripted and coordinated with interconnected systems for a holistic disaster recovery plan.


Most importantly, this gives customers choice; no lock-in! It seems a positive step in the direction of enabling customers to build the infrastructure they choose to run their Oracle databases and applications. Adding VMware to the list of supported vendors also has value for Oracle. Now when working with customers, Oracle Sales doesn’t have to explain “why not VMware”; rather, the conversation takes a much more positive “we work with VMware” tone. The press release included some positive comments, such as “expanded support of VMware environments” and “deepening the integration of VMware infrastructure with Oracle storage systems.” Hopefully this is the beginning of continued collaboration.


Optimistically, this is also the end of any Fear, Uncertainty, and Doubt (FUD) relating to using VMware to virtualize Oracle databases. I’ll provide this link, “EMC & Oracle Customer References Virtual Rolodex,” to show how many customers use EMC and Oracle together. Here are some highlights:


Seacore on Virtualizing Oracle with EMC: watch Ben Marino, Director of Technology, talk about virtualizing Oracle. In this video, virtualization improved Oracle database provisioning from 2 weeks to about 2 days.


AAR Corp. on the Private Cloud with EMC: AAR Corp. is an aerospace company, and in the video Jim Gross, Vice President of IT, talks about performance gains with the VMAX 10K, RecoverPoint for a lower RPO, and Avamar in its VMware and Oracle environment.


Zebra Technologies paper: Zebra Technologies is a global leader known for its printing technologies, including RFID and real-time location solutions. A great quote: “All of Zebra’s storage resides on VMAX 10K, as well as EMC VNX unified storage. Zebra uses VMware vSphere to virtualize its server environment.”


You might ask why I’m talking about Oracle storage and its integration with VMware on the EMC Oracle community. After all, Oracle storage competes with EMC, right? In my opinion, EMC storage solutions are best in class, and customers stand to benefit from more competition. Did you see the new VNX? Here is the press release, “Accelerates Virtual Applications and File Performance Up-To 4X; New Multi-Core Optimized VNX with MCx Software Unleashes The Full Power of Flash,” and some metrics:

  • More than the performance of 4 previous-generation systems combined
  • More than 3X the performance for transactional NAS applications (such as VMware over NFS), with 60% faster response times than previous VNX systems (I’m thinking dNFS is going to rock on the new VNX)
  • More than 735K concurrent Oracle and SQL OLTP IOPS, 4X more than previous VNX systems
  • More than 6,600 virtual machines, a 6X improvement over the previous generation
  • More than 3X the bandwidth of the previous generation, up to 30 GB/second for Oracle and SQL data warehousing


It’s time to bring the “virtualizing Oracle” FUD to the curb for garbage collection and focus on customer value: broad integration and performance. Oracle using VMware Storage APIs is awesome and gives the customers more choices. Well done Oracle!


