
Everything Oracle at Dell EMC


Bitly URL:

http://bit.ly/1pxxZZD

 

 

Tweet this document:

#EMC #XtremIO all flash array uses intelligent software to deliver unparalleled #Oracle database performance http://bit.ly/1pxxZZD New blog

 

#XtremIO X-Brick can lose 1 storage controller and multiple SSD and still have your #Oracle database up and running: http://bit.ly/1pxxZZD

 

 

Related content:

 

EMC Optimized Flash Storage for Oracle databases

 

Follow us on Twitter:


Click to learn more about XtremIO in the EMC Store

The EMC XtremIO storage array is an all-flash system that uses proprietary intelligent software to deliver unparalleled levels of performance. XtremIO's inline data reduction stops duplicated write I/Os from being written to disk, which improves application response time. XtremIO is highly scalable: performance, memory, and capacity increase linearly. XtremIO has its own data protection algorithm dedicated to fast rebuilds and all-around protection, and it performs better than the traditional RAID types. The application I/O load is balanced across the XtremIO system. XtremIO provides native thin provisioning; all volumes are thin provisioned as they are created. Since XtremIO dynamically calculates the location of the 4 KB data blocks, it never pre-allocates or thick provisions storage space before writing the actual data. Thin provisioning is not a configurable property; it is always enabled, with no performance loss or capacity overhead. Furthermore, no volume defragmentation is necessary, since all blocks are distributed over the entire array by design.

 

There are many environments, applications, and solutions that would benefit from the addition of an XtremIO storage array, including Virtual Desktop Infrastructure (VDI), server virtualization, and database analytics and testing. The idea is to implement XtremIO in an environment with a high number of small random I/O requests, a requirement for low latency, and data with a high rate of deduplication. These characteristics map very well to Oracle database workloads, so Oracle DBAs will enjoy working with this all-flash storage array.

 

The benefits of XtremIO extend across multiple audiences in the IT organization.

 

Application owners benefit from accelerated performance, resulting in faster transactions, the ability to scale to more end users, and improved efficiency.

 

Infrastructure owners can now drive consolidation of database infrastructure even across mixed database workload environments, whether physical or virtual, and service all environments with all flash.

 

DBAs can now eliminate the need for constant database tuning and chasing hot spots. They can provision new databases in less time and reduce downtime for capacity planning and growth management.

 

CIOs can improve overall database infrastructure economics through consolidating databases and storage and controlling costs even as multiple databases are deployed and copied over time.

 

bb1.png

 

XtremIO supports both 8 Gb/s Fibre Channel (FC) and 10 Gb/s iSCSI with SFP+ optical connectivity to the hosts. Each X-Brick provides four FC and four iSCSI front-end ports. Access to the XtremIO Management Server (XMS) or to the Storage Controllers in each X-Brick is provided via Ethernet. XtremIO can also use LDAP to provide user authentication.

 

Fibre Channel (FC) is a serial data transfer protocol and the standard for high-speed, enterprise-grade storage networking. It supports data rates up to 10 Gbps and delivers storage data over fast optical networks. Essentially, FC is the language through which storage devices such as HBAs, switches, and controllers communicate. The FC protocol helps clear I/O bottlenecks between the host and the array, which benefits database performance.

bb2.png

 

XtremIO offers storage connectivity via Fibre Channel and iSCSI, so the proper cables must be supplied and correctly configured in order to successfully present storage. XtremIO also requires Ethernet connectivity for management. An additional RJ45 port is required if a physical XMS is being used.

 

bb3.png

 

As in every SAN storage environment, a highly available environment requires at least two HBA connections per host, with each HBA connected to a separate Fibre Channel switch, as shown here. Connecting the XtremIO cluster to the FC switches is also straightforward: each Storage Controller has two FC ports, so connect each Storage Controller to both Fibre Channel switches.

 

Each X-Brick of an XtremIO system can lose one Storage Controller and still remain fully functional. In general, every host should be connected to as many Storage Controllers as possible on an XtremIO cluster, as long as the host and its multipathing software support that number of connections. Best practice is up to four paths per host for a single X-Brick cluster, four to eight paths per host for a two X-Brick cluster, and up to 16 paths for a four X-Brick cluster, and never to include more than one host initiator in a zone. To avoid multipathing performance degradation, do not use more than 16 paths per device. All volumes are accessible via any and all of the front-end ports. Follow these best practices to get optimal performance for Oracle databases on an XtremIO storage array.
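For Linux hosts using native device-mapper multipathing, the path policy is typically set in /etc/multipath.conf. The stanza below is an illustrative sketch based on commonly published XtremIO recommendations (spread I/O across all paths with a queue-length selector); treat the exact values as assumptions and confirm them against the EMC host configuration guide for your host and XtremIO versions.

devices {
    device {
        vendor                "XtremIO"
        product               "XtremApp"
        path_grouping_policy  multibus          # place all paths in a single path group
        path_selector         "queue-length 0"  # send I/O to the path with the shortest queue
        rr_min_io_rq          1
        path_checker          tur
        failback              immediate
    }
}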

 

bb4.png

 

When performing the cabling for iSCSI connectivity, the ideal configuration is to have redundant paths and redundant switches as well. General best practice for highly available iSCSI environments is for every host to have two physical adapters and for these adapters to be connected to separate VLANs, as shown here. As with FC connectivity, connecting the XtremIO iSCSI ports is easy. Since each Storage Controller has two iSCSI ports, simply connect each SC to a separate iSCSI subnet or VLAN.
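On a Linux host, discovering and logging in to the XtremIO iSCSI targets on each subnet can be done with the standard open-iscsi tools; the portal IP addresses below are placeholders for your own iSCSI VLANs.

iscsiadm -m discovery -t sendtargets -p 10.10.1.20:3260   # discover targets on the first iSCSI VLAN
iscsiadm -m discovery -t sendtargets -p 10.10.2.20:3260   # discover targets on the second iSCSI VLAN
iscsiadm -m node -L all                                   # log in to all discovered targets
iscsiadm -m session                                       # verify the active iSCSI sessions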

 

Summary

In this blog I have tried to explain the architecture of XtremIO with special reference to the Oracle database. Considering XtremIO and the Oracle database together, below are the top five reasons to run Oracle databases on top of an XtremIO storage array.

 

  1. Predictable Oracle Performance
    • All DBAs get all flash all the time allowing dramatic improvement in database IOPS with sub-millisecond response times. XtremIO removes nearly all database tuning required for OLTP, data warehouses or analytics.
  2. Oracle Scalability with Endurance
    • Storage capacity is thin provisioned all the time, allocating capacity only when data is written. There is no need for overprovisioning of capacity, no fragmentation and no need for database block space reclamation as you scale. XtremIO also deduplicates Oracle data blocks and the remaining unique data blocks are compressed inline, delivering 17X more capacity access for all of your DBAs.
  3. Amazingly Simple Oracle Provisioning
    • DBAs simply need to request capacity, storage teams define volume sizes, map to Oracle hosts and go. The XtremIO operating system XIOS eliminates the complex configuration steps required by traditional storage—there is no need to set RAID levels, determine drive group sizes, set stripe widths or caching policies, or build aggregates. XIOS automatically configures every volume, every time.
  4. More Oracle Copies without Penalties
    • Storage snapshots are core to enabling DBAs to use multiple copies of production for multiple tasks.  Unlike other storage snapshots, XtremIO snapshots are fingerprinted in memory and deduplicated inline all the time allowing more snapshot capacity for DBAs. Test, development, maintenance and reporting can be deployed in minutes for DBAs all on the same platform as production without impacting production.
  5. Continuous Oracle Operations
    • DBAs no longer need to worry about storage as a source of database downtime. XtremIO Data Protection (XDP) delivers in-memory data protection while exceeding the performance of RAID 1. Combined with active-active hardware, non-disruptive software and firmware upgrades, and hot-pluggable components, this removes storage as a source of downtime for DBAs.

N.B.: The above discussion applies to XtremIO version 2.4. For the latest features of XtremIO 3.0, please click here.

Bitly URL:

http://bit.ly/1vMsBaA

 

 

Tweet this document:

#EMC flash strategy can now be used to power performance across the full spectrum of an #Oracle DBA's area: http://bit.ly/1vMsBaA New blog!

 

Flash is fast, but it introduces caveats that need to be addressed to take advantage of the potential of flash http://bit.ly/1vMsBaA #EMC

 

Related content:

 

EMC VSPEX Design Guide for Virtualized Oracle 11g OLTP - Part 1

 

Follow us on Twitter:


Click to learn more about XtremIO solutions in the EMC Store

INTRODUCTION

 

Over the last 3-5 years, flash technology has become a standout performance-enhancing storage option for the most demanding and mission-critical enterprise applications and databases.  With an industry-leading full range of flash options – from the storage array to your database and application servers – we believe this makes EMC your most powerful partner in flash technology!

 

 

Given that flash technology can now be used to power performance improvement across the full spectrum of an Oracle DBA’s areas of responsibility, this article is intended to introduce you to EMC’s flash strategy.

 

 

 

WHERE TO USE FLASH?

 

EMC’s flash strategy is “flash everywhere”—flash based on your application needs and the specific considerations of your workload. EMC’s flash storage architecture allows additional data services to provide functionality that’s just not possible with spinning disks. EMC provides a full portfolio of solutions to directly address your specific workload and application needs.

 

 

 

THE ALL-FLASH ARRAY

1flash-hl-all-flash.jpg

XtremIO All-Flash Array

 

Get breakthrough shared-storage benefits for Oracle database acceleration, consolidation, and agility, plus a scale-out design, consistently low latency and high IOPS, data services like deduplication, compression, space-efficient snapshots, and encryption, and an amazing system administration experience.

 

 

Flash is fast, but it introduces caveats that need to be addressed with the right architecture so that organizations can take advantage of the potential of flash. EMC XtremIO enterprise storage, EMC’s purpose-built all-flash storage array, is built from the ground up to take full advantage of flash.

 

 

XtremIO’s scale-out architecture is designed to deliver maximum performance and consistent low-latency and response times. More importantly, XtremIO’s architecture delivers in-line, always-on data services that just aren’t possible with other architectures.

 

 

XtremIO’s in-line data services bring out the value of flash: always-on thin provisioning, in-line data deduplication, in-line data compression, flash-optimized data protection, in-line data at rest encryption, and instantly available snapshots—features that are always-on and architected for performance at scale with no additional overhead.


All this is achieved at a competitive cost of ownership. XtremIO's architecture addresses all the requirements for flash-based storage, including longevity of flash media, a lower effective cost of flash capacity, performance and scalability, and operational efficiency with advanced storage array functionality.

 

 

 

Explore XtremIO All-Flash Array »

 

 


THE HYBRID-FLASH ARRAY


VMAX, VNX and Isilon Hybrid-Flash Arrays

2flash-hl-hybrid-array.jpg

Get advantages like Fully Automated Storage Tiering (FAST) software that automatically copies frequently accessed data to Flash, and moves other data across your existing storage tiers.

 

 

Hybrid storage arrays allow a little flash to go a long way, but flash alone isn’t enough. You can use software like EMC FAST to change the economics of deploying storage. EMC FAST automatically moves the most performance-hungry applications to a tier of flash while allowing lesser referenced data to move to a lower cost spinning disk tier of storage, providing a blended mix of performance and low cost.

 

 

While EMC VNX, EMC VMAX, and EMC Isilon were historically HDD-based solutions, you can now deploy these powerful storage platforms in hybrid configurations.


 

Explore Hybrid Arrays »

 

 

 


FLASH IN THE SERVER

3flash-hl-in-the-server.jpg

ScaleIO, XtremSF and XtremCache Server Flash

 

Take advantage of flash in your data center. ScaleIO provides convergence, scale, and elasticity, while maximizing performance. EMC XtremCache delivers turbo-charged performance for individual database servers and their applications.

 

 

You can also implement flash at the server level via Peripheral Component Interconnect Express (PCIe) cards or server solid state disks (SSDs) used as local storage devices. This reduces latency and increases input/output operations per second (IOPS) to accelerate a specific application component.

EMC XtremSF's PCIe flash hardware delivers sub-100 microsecond response times. Configured as local storage, you can leverage XtremSF to accelerate specific workloads or components of workloads, such as database indexes or temp space.

 

 

 

By adding software, you can also use server flash as a caching device. EMC XtremCache software complements hybrid storage arrays. By integrating XtremCache with EMC’s fully automated storage tiering (FAST), organizations can increase performance without sacrificing the advanced data services of a hybrid storage deployment.

 

 

Explore Server Flash - XtremCache »

Explore Server SAN - ScaleIO »

 

 

 

As you can see, EMC gives you the flexibility to deploy flash where needed across your Oracle database environments, depending on your performance, cost, capacity, and protection requirements.

 

 

In a future article, I will consider specific use cases and solutions for Oracle DBAs to further understand how EMC’s industry-leading flash technology described in this article can be used to dramatically improve your Oracle database performance.


Bitly URL:

http://bit.ly/1qtv7lH

 

 

Tweet this document:

Improve #oracle database performance on #EMC using DNFS http://bit.ly/1qtv7lH New blog talks #VNX best practices

 

Simplify network setup & management by taking advantages of #oracle dNFS with #EMC infrastructure http://bit.ly/1qtv7lH New #VNX blog

 

Related content:

 

EMC VSPEX Design Guide for Virtualized Oracle 11g OLTP - Part 1

 

Follow us on Twitter:


Click to learn more about other solutions in the EMC Store

INTRODUCTION

 

Network-Attached Storage (NAS) systems have become commonplace in enterprise data centers. This widespread adoption can be credited in large part to simpler storage provisioning and a less expensive connectivity model compared to block-protocol Storage Area Network technology (e.g., FC SAN, iSCSI SAN). EMC Unified Storage products offer a flexible architecture and multi-protocol connectivity, enabling access over IP/Ethernet, iSCSI, and Fibre Channel SAN environments. Multi-protocol functionality is available on integrated EMC Unified Storage arrays at very low cost.

 

NAS appliances and their client systems typically communicate via the Network File System (NFS) protocol. NFS allows client systems to access files over the network as easily as if the underlying storage was directly attached to the client. Client systems use the operating system provided NFS driver to facilitate the communication between the client and the NFS server. While this approach has been successful, drawbacks such as performance degradation and complex configuration requirements have limited the benefits of using NFS and NAS for database storage.

 

Oracle Database Direct NFS Client integrates the NFS client functionality directly in the Oracle software. Through this integration, Oracle is able to optimize the I/O path between Oracle and the NFS server providing significantly superior performance. In addition, Direct NFS Client simplifies, and in many cases automates, the performance optimization of the NFS client configuration for database workloads.

 

Direct NFS Client Overview

 

Standard NFS client software, provided by the operating system, is not optimized for Oracle Database file I/O access patterns. With Oracle Database 11g or above, you can configure Oracle Database to access NFS V3 NAS devices directly using Oracle Direct NFS Client, rather than using the operating system kernel NFS client. Oracle Database will access files stored on the NFS server directly through the integrated Direct NFS Client eliminating the overhead imposed by the operating system kernel NFS. These files are also accessible via the operating system kernel NFS client thereby allowing seamless administration.

 

Benefits of Direct NFS Client

 

Direct NFS Client overcomes many of the challenges associated with using NFS with the Oracle Database. Direct NFS Client outperforms traditional NFS clients, is simple to configure, and provides a standard NFS client implementation across all hardware and operating system platforms.

  • Performance, Scalability, and High Availability
  • Cost Savings
  • Administration Made Easy

With EMC Unified Storage, Oracle dNFS connectivity and configuration can be used to deploy a NAS architecture at lower cost and with less complexity than direct-attached storage (DAS) or a block-based storage area network (SAN). EMC Unified Storage can be used with dNFS to:

  • Simplify network setup and management by taking advantage of dNFS automated management of tasks such as IP port trunking and tuning of Linux NFS parameters
  • Increase the capacity and throughput of the existing networking infrastructure

 

Configure the Oracle DNFS client with EMC VNX Unified Storage

 

Configure oranfstab:  When you use dNFS, you must create a new configuration file, oranfstab, to specify the options, attributes, and parameters that enable the Oracle database to use dNFS.
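As a minimal sketch, an oranfstab for a VNX Data Mover might look like the example below; the server name, IP addresses, and export/mount paths are placeholders for your environment, and the path/local pairs illustrate how dNFS load-balances across network interfaces without OS-level bonding.

server: vnx_dm2
path:   192.168.10.21
path:   192.168.11.21
local:  192.168.10.11
local:  192.168.11.11
export: /oradata01  mount: /u02/oradata
export: /oraredo01  mount: /u03/oraredo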

 

Apply the ODM NFS library:  To enable dNFS, the Oracle database uses an ODM library called libnfsodm11.so. You must replace the standard ODM library, libodm11.so, with the ODM NFS library libnfsodm11.so.
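One common way to do this on Linux for 11gR2, shown here as a sketch assuming a default $ORACLE_HOME layout, is to keep a copy of the stub library and point libodm11.so at the dNFS ODM library (shut the database down first); Oracle also ships a make target for this, so check the documentation for your release.

cd $ORACLE_HOME/lib
cp libodm11.so libodm11.so_stub      # keep a copy of the standard ODM library
ln -sf libnfsodm11.so libodm11.so    # switch the ODM library to the Direct NFS ODM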

 

Enable transChecksum on the VNX Data Mover:  EMC recommends enabling transChecksum on the Data Mover that serves the Oracle dNFS clients. This reduces the likelihood of TCP port and XID (transaction identifier) reuse by two or more databases running on the same physical server, which could otherwise cause data corruption.

 

dNFS network setup:  The network setup can now be managed by an Oracle DBA through the oranfstab file. This frees the database sysdba from the bonding tasks previously necessary for OS LACP-type bonding.

 

Mounting dNFS:  Add oranfstab to the $ORACLE_HOME/dbs directory. For Oracle RAC, replicate the oranfstab file on all nodes and keep them synchronized. When oranfstab is placed in the $ORACLE_HOME/dbs directory, the entries in this file are specific to a single database. The dNFS client searches the mount point entries in the order they appear in oranfstab and uses the first matched entry as the mount point.

 

Verify that dNFS has been enabled by checking the available dNFS storage paths, the data files configured under dNFS, and the servers and directories configured under dNFS.
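As a quick sketch, these checks can be run as SYSDBA against the dNFS dynamic views after restarting the database with the dNFS ODM library in place:

-- NFS servers and exported directories in use by Direct NFS
SELECT svrname, dirname FROM v$dnfs_servers;
-- Database files being served through Direct NFS
SELECT filename FROM v$dnfs_files;
-- Open network channels (paths) to each NFS server
SELECT * FROM v$dnfs_channels;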

Bitly URL:

http://bit.ly/1wse8Vr

 

Tweet this document:

New #EMC #Oracle VM blog about #VMAX integration: http://bit.ly/1wse8Vr Automate LUN discovery, create LUNs, and clone VMs with FREE plugin

 

Are you using #Oracle VM? #EMC enables administrators to discover and manage VM storage on #VMAX: http://bit.ly/1wse8Vr More detail in blog

 

Create and remove LUNs using #EMC integration with #Oracle VM: http://bit.ly/1wse8Vr Makes provisioning VMs easy and fast!

Follow us on Twitter:


The EMC Storage Integrator (ESI) for Oracle VM version 3.3 is a plug-in that enables Oracle VM to discover and provision EMC storage arrays. The integration module is built upon the Oracle VM Storage Connect (OSC) framework. The framework provides a set of storage discovery and provisioning Application Programming Interfaces (APIs) that enhance the ability to manage and provision EMC storage in an Oracle VM environment.

 


Installation

Pre-requisites

  • EMC SMI-S Provider server to provision and manage VMAX arrays

These are the steps for installing the ESI for Oracle VM plug-in. We encourage you to visit EMC Online Support for updates to the ESI installation steps.

  1. Download the installation package file from EMC online support
  2. Log on as the root user to the Oracle VM server
  3. Type the following command to install the package
    # rpm -ivh ./emc-osc-isa1.0.0.1-1.el6.x86_64.rpm
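After the installation completes, you can confirm that the plug-in package is registered with RPM (the package name pattern is taken from the install command above):

rpm -qa | grep -i emc-osc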

 

Edit the ESI for Oracle VM configuration file:

 

The ESI for Oracle VM uses isa.conf to define its runtime behavior. For example:

Properties | Values | Descriptions
AccessGroupPrefix | A string composed of alphanumerical characters, "_" or "-" | User-defined prefix for the OSC-managed initiator group, port group, device group, and masking view on VMAX arrays. This prefix makes it easier to differentiate between OSC-managed groups and other managed groups.
AutoMetaEnabled | True or False | Specifies whether VMAX auto meta is enabled (true) or disabled (false).
LogLevel | Debug | Sets verbose logging for troubleshooting.

 

The properties in the above table can be set in the isa.conf file. This file is manually created in plain text under the directory:

/opt/storage-connect/plugins/emc/isa/

 

The root user must have permissions to the isa.conf file.
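As an illustrative sketch, a minimal isa.conf might contain one line per property; the key=value layout and the prefix value shown here are assumptions, so check the ESI for Oracle VM documentation for the exact syntax.

# /opt/storage-connect/plugins/emc/isa/isa.conf (illustrative)
AccessGroupPrefix=OVM-
AutoMetaEnabled=True
LogLevel=Debug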


Register a VMAX storage Array


The Oracle VM administrator will have to work with the EMC storage administrator for this one-time registration of the storage array. Because the list below contains sensitive information, the recommendation is to have the EMC storage administrator enter the information.

  • Name of the storage array
  • Storage Plugin – Select EMC FC SCSI Plugin
  • Plugin Private Data – Storage array type and ID, for example: SYMMETRIX+000195900311
  • Admin Host – Host name or IP address of the host where the SMI-S provider is installed
    • Admin Username – used to connect to the SMI-S provider
    • Admin Password – used to connect to the SMI-S provider

Repeat these steps for each VMAX array that supports Oracle VM.


LUN Discovery

 

Once the VMAX storage array has been registered, the Oracle VM administrator can refresh the list of available storage arrays and see the newly added EMC storage array(s). Clicking on a storage array, the Oracle VM administrator will see the storage available for use.


Create and remove thin LUNs

 

Provisioning storage is very easy as the Oracle VM administrator would follow the normal steps in creating a storage repository to present to the Oracle VM server(s) as a storage pool. Once the storage repository is available the administrator can add the storage resources to virtual machines.

It is important to know how storage is created on the EMC arrays as part of preparing and configuring storage resources. The parameter "AutoMetaEnabled" directs the EMC storage array to create a thin device in one of two ways:

  • If "AutoMetaEnabled=False", then the plug-in creates a concatenated meta device from newly created member devices and presents it to the Oracle VM administrator. For example, a request for 500 GB of space will result in the plug-in using 240 GB of one LUN, 240 GB of another, and 20 GB of a third LUN. A concatenated meta device is a group of storage devices concatenated together to form a larger LUN. When filling a concatenated meta device, the first device fills first, followed by the second, and so on until the end.
  • If "AutoMetaEnabled=True", then the plug-in requests that the storage array create the LUN. This offloads the provisioning of the disk space to the VMAX array, which will automatically create one concatenated 500 GB (240+240+20) meta device (reusing the same example as above).


Tip: Planning thin devices for Oracle databases. The maximum size of a standard thin device in a Symmetrix VMAX is 240 GB. If a larger size is needed, a metavolume comprised of thin devices can be created. When host striping is used, as with Oracle ASM, it is recommended that the metavolume be concatenated rather than striped, since the host provides a layer of striping and the thin pool is already striped based on data devices.

 

After the repository is created all the Oracle VM administrator needs to do is present the repository. Presenting the repository involves selecting which Oracle VM servers can use the current storage repository. It’s that easy!


Use Auto-Provisioning (LUN masking)

 

Making the discovered array usable by your Oracle VM servers involves access groups. An access group has a name and sometimes a description, but most importantly the administrator can select from a list of available storage initiators and assign which storage initiators belong to the access group. A storage initiator is similar to an email address in that it enables communication between two parties. Adding a storage initiator to an access group grants access to the underlying storage. Not adding, or removing, a storage initiator from an access group is called "masking" and is a way to prevent access to the storage array.

 

Create Clones

 

Creating clones is a simple four-step process that enables the Oracle VM administrator to very quickly create point-in-time copies of virtual machines. A newly created clone is immediately accessible to the host, even while data copying is occurring in the background. Here are the steps:

  • Select the EMC storage as the Clone Target Type
  • Type the source device name as the Clone Target
  • Select Thin Clone as the Clone Type
  • Click on OK

 

The ability to create very quick clones of virtual machines enables DBA teams and application owners to save time in activities like patching and functional testing. At EMC we have embraced the use of virtualization and automation to drive Database-as-a-Service within the company and can now provision Oracle, SQL Server and other databases in one hour. To learn more I recommend reading, “EMC IT’s Database-as-a-Service” paper and viewing the video, “EMC IT’s eCALM Demo.”

 

Summary

 

What I found amazing about writing this blog is how easy it is to start using the EMC Storage Integrator for Oracle VM and the immediate benefits of auto discovery, fast storage provisioning and ease of management. I was able to summarize the installation steps in about half a page! EMC is integrating up the Oracle stack enabling the Oracle VM administrator and DBA to do more with our storage arrays and this ESI integration is a strong example. Other points of integration include our FREE Plug-in for Oracle Enterprise Manager 12c and the new application (Oracle database) awareness in Unisphere 8.0. Hope you enjoyed reading this blog and let us know if you are using the ESI storage integrator.

Bitly URL:

http://bit.ly/XtremIOAdvFormat

 

Tweet this document:

New #XtremIO blog: Using Adv. Format 4 #Oracle databases. Great overview sure 2 improve the performance of database http://bit.ly/XtremIOAdvFormat

 

Related content:

EMC XtremIO Snapshots are Different!

 

Virtual Storage Zone: Getting the Best Oracle Performance on XtremIO

 

XtremIO for Oracle -- Low Latencies & Better Use of Datacenter Resources

 

Features and Benefits of Using Oracle in XtremIO Array Part 1

 

Features and Benefits of Using Oracle in XtremIO Array Part 2

 

Features and Benefits of Using Oracle in XtremIO Array Part 3

 

EMC XtremIO Introduction - Scalable Performance for Oracle Database

 

Follow us on Twitter:


XtremIO Best Practices for Oracle using Advanced Format


Architecting a database on an All Flash Array (AFA) like EMC's XtremIO is best done by reviewing practices to optimize I/O performance. One consideration is the use of Advanced Format and how it impacts the performance of the database redo logs. Advanced Format refers to a new physical sector size of 4096 bytes (4 KB), replacing the original 512-byte standard. The larger 4 KB physical sector size has these benefits:

  • Greater storage efficiency for larger files (but conversely less efficiency for smaller files)
  • Enablement of improved error correction algorithms to maintain data integrity at higher storage densities [1]

 

A DBA might be very interested in using a 4 KB physical sector size to gain the efficiencies and improved error correction, but there are a few considerations to review. For example, some applications and databases do not recognize the newer 4 KB physical sector size. At EMC we have extensively tested Oracle on XtremIO following the recommendations in Oracle Support Note "4K ASM" (1133713.1). To address the problem of a database or application not recognizing the new 4 KB physical sector size, there is the option to use 512-byte (512e) emulation mode.

 

 

Emulation mode uses a 4 KB physical sector with eight 512-byte logical sectors. A database expecting to update (read and write) a 512-byte sector can accomplish this by using the logical block address (LBA) to update the logical sector. This means the 4 KB physical sector size is transparent to the database: it can write to the 512-byte logical sector, and backwards compatibility is maintained.
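The mapping is simple arithmetic: eight 512-byte logical sectors fit in each 4 KB physical sector, so a logical block address resolves to a physical sector and a byte offset within it. A small shell illustration (the LBA value is just an example):

lba=1000                                               # example 512-byte logical block address
echo "physical sector: $(( lba / 8 ))"                 # 1000 / 8 = 125
echo "offset in sector: $(( (lba % 8) * 512 )) bytes"  # (1000 % 8) * 512 = 0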

 


Picture 1: 512-Byte Emulation Mode

Slide3.PNG.png


Unfortunately, there is the possibility of misaligned 4 KB operations: one 512-byte update causing two 4 KB physical sectors to be updated. Before exploring the impact of misaligned operations on the Oracle database, we need to understand how writes are managed in emulation mode.


Picture 2: Writes in 512-Byte Emulation Mode

Slide4.PNG.png


As shown above, writes are managed as follows:

  • The entire 4KB physical sector is read from disk into memory
  • Using the LBA the 512 byte logical sector is modified
  • The entire 4KB physical sector is written to disk

 

The key point of this read-modify-write process is that the entire 4 KB physical sector is modified. A request to modify one 512-byte logical sector means reading 4 KB into memory and writing the 4 KB physical sector back to disk. For optimal efficiency it would be ideal to update multiple logical sectors belonging to one physical sector in a single operation. When properly aligned, writes to logical sectors are a one-to-one match to the physical sector and do not cause excessive I/O.

 

Misalignment is caused by incorrectly partitioning the LUN. To quote Thomas Krenn's wiki on partition alignment:

  • Partitioning beginning at LBA address 63 as such is a problem for these new hard disk and SSDs[2]
  • If partitions are formatted with a file system with a typical block size of four kilobytes, the four-kilobyte blocks for the file system will not directly fit into the four-kilobyte sectors for a hard disk or the four-, or eight-, kilobyte pages for an SSD. When a four-kilobyte file system block is written, two four-kilobyte sectors or pages will have to be modified. The fact that the respective 512-byte blocks must be maintained simply adds to the already difficult situation, meaning that a Read/Modify/Write process will have to be performed. [2]


Picture 3: Negative Impact of Misalignment

Slide5.PNG.png


Quick side note:

I enjoyed reading the blogs “4k Sector Size” and “Deep Dive: Oracle with 4k Sectors” by flashdba. Although flashdba works for a competitor many of the recommendations apply to all Oracle users.

 

The solution is not to partition the LUN but to present the unpartitioned device to ASM. There is an excellent blog by Bart Sjerps (Dirty Cache) called "Fun with Linux UDEV and ASM: UDEV to create ASM disk volumes" that provides the steps for using unpartitioned devices with ASM. In the blog, Linux UDEV is reviewed as a solution to eliminate the misalignment:

  • We completely bypassed the partitioning problem, Oracle gets a block device that is the whole LUN and nothing but the LUN[3]
  • We assigned the correct permissions and ownership and moved to a place where ASM only needs to scan real ASM volumes (not 100s of other thingies) [3]
  • We completely avoid the risk of a rookie ex-Windows administrator to format an (in his eyes) empty volume (that actually contains precious data). An admin will not look in /dev/oracleasm/ to start formatting disks there[3]


Reading through the blog, Bart points out that using UDEV and maintaining the ASM rules can be hard work, so he created a script called "asm" and an RPM called "asmdisks" to automate the use of UDEV with ASM. Highly recommended reading; look to the bottom of the blog for the link to download the RPM. Why not use ASMlib? Bart goes into detail on some of the challenges of using ASMlib in the same blog, so I am not going to list them here, but rather encourage you to review the ASMlib section.
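For reference, a single udev rule of the kind Bart describes might look like the sketch below; the WWID returned by scsi_id and the symlink name are placeholders, and the scsi_id path varies by distribution (/lib/udev/scsi_id on RHEL/OEL 6, /usr/lib/udev/scsi_id on RHEL/OEL 7).

# /etc/udev/rules.d/99-oracle-asm.rules (illustrative)
KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="3514f0c5d1a000123", SYMLINK+="oracleasm/data01", OWNER="oracle", GROUP="dba", MODE="0660"

# Reload the rules and re-trigger block device events
udevadm control --reload-rules
udevadm trigger --type=devices --action=change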

 

Here are a few examples of how to determine whether you are using emulation mode, as detailed on the Unix & Linux website under "How can I find the actual size of a flash disk?"


Using sgdisk:

sgdisk --print <device>

[…]

Disk /dev/sdb: 15691776 sectors, 7.5 GiB

Logical sector size: 512 bytes

The output shows the number of sectors and the logical sector size.

 

Using the /sys directly:
For the number of sectors:

cat /sys/block/<device>/size

 

For the sector size:

cat /sys/block/<device>/queue/logical_block_size

 

Using udisks:

udisks outputs the information directly.

udisks --show-info <device> | grep size


Using blockdev:

Get physical block sector size:

blockdev --getpbsz <device>

 

Print sector size in bytes:

blockdev --getss <device>

 

Print device size in bytes:

blockdev --getsize64 <device>

 

Print the size in 512-byte sectors:

blockdev --getsz <device>

 

Beyond using unpartitioned devices for ASM to bypass the misalignment issue, are there any other recommendations? Interestingly, the database online redo log files by default have a block size of 512 bytes. For optimal online redo log efficiency it would be ideal to change from a 512-byte block size to a 4096-byte block size. As of version 11.2 this can be changed by specifying the BLOCKSIZE clause with a value of 512 (the default), 1024, or 4096 bytes.


Picture 4: Online Redo Log Blocksize Recommendation

Slide10.PNG.png


For example, in recent XtremIO testing we used:

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 ('+REDODG') SIZE 8G BLOCKSIZE 4096;

 

Before creating new online redo logs with the 4K blocksize, be aware of a known issue with emulation mode. Emulation mode makes the 4K physical sector size transparent to the database, so when creating online redo log files the database checks the sector size and finds 512-byte sectors. Unfortunately, discovering 512-byte sectors when attempting to write 4096-byte blocks results in an error like:

 

ORA-01378: The logical block size (4096) of file +DATA is not compatible with the disk sector size (media sector size is 512 and host sector size is 512) [4]

 

The solution is to set the hidden database parameter _DISK_SECTOR_SIZE_OVERRIDE to TRUE. This parameter overrides the sector size check performed when creating optimally sized redo log files, and it can be changed dynamically.

 

ALTER SYSTEM SET "_disk_sector_size_override"=TRUE;

 

If you create new online redo logs with the 4K blocksize, you might have to drop the original 512-byte redo log file groups.

 

ALTER DATABASE DROP LOGFILE GROUP 1;
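To confirm the result, the redo log groups and their block sizes can be checked from v$log (the BLOCKSIZE column is available in 11.2 and later):

SELECT group#, thread#, blocksize, bytes/1024/1024 AS size_mb, status FROM v$log;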


Summary of 512e emulation mode best practices:

Below is a summary of the recommendations in this blog. Time for a disclaimer: I would also encourage you to review Oracle Support Note 1681266.1, "4K redo logs and SSD based storage," as a good place to start in determining whether these recommendations are a good fit for your databases. Test these recommendations on a copy of production and validate the impact. Now that the disclaimer is over, here are the steps.

  • Create your LUNs
  • Validate use of emulation mode
    • Example: blockdev --getss <device>
  • Do NOT partition the LUNs
  • Use UDEV to create ASM disk volumes (See Dirty Cache blog)
  • Set the database initialization parameter “_DISK_SECTOR_SIZE_OVERRIDE”=”TRUE”
  • Create online redo logs using the BLOCKSIZE clause with a value of 4096 bytes


4KB Native Mode

In native mode the physical sector size and the logical sector size are the same: 4 KB. If planning to use Advanced Format native mode, the DBA will have to create redo logs with a 4 KB block size. Outside of the redo logs there are a few other considerations for 11gR2 and higher.


Picture 5: 4K Native Mode

Slide7.PNG.png



Unfortunately, I have not had the time to fully explore 4K native mode, but I promise a follow-up in my next blog. I did want to provide the summary table below because it highlights Oracle's recommendation to use 4 KB online redo logs for both emulation mode and native mode. In native mode there is no option to use 512-byte redo logs, so in a good way Oracle automatically directs the DBA to the optimal 4 KB blocksize for the redo logs.


Summary table: Supported and Preferred Modes

Mode Type | 512-Byte Redo Logs | 4 KB Redo Logs (Preferred)
Emulation Mode | Supported | Supported
Native Mode | Not supported | Supported

In the above summary table we see that emulation mode supports both 512-byte and 4 KB redo log block sizes, but 4 KB is preferred. The overall recommendation is to use a 4 KB block size for your redo logs.

 

Next Blog: Exploring the 4K native mode and insights into XtremIO testing using Advanced Format.

 

Table of References

[1] Advanced Format from Wikipedia, URL: http://en.wikipedia.org/wiki/Advanced_Format


[2] Thomas Krenn Wiki on Partition Alignment, URL: http://www.thomas-krenn.com/en/wiki/Partition_Alignment

 

[3] Bart Sjerps (Dirty Cache), "Fun with Linux UDEV and ASM: UDEV to create ASM disk volumes," URL: http://bartsjerps.wordpress.com/2014/07/01/linux-udev-create-asm-disk-volumes/#more-1534

 

[4] Oracle Support Note 1681266.1, "4K redo logs and SSD based storage"

 

Research


Just announced: the new VMAX3 delivers more performance, simplifies storage management, and is the optimum route to the hybrid cloud. In today's Redefine Possible mega event you might have heard about the new capabilities of the VMAX3, but perhaps you didn't catch the part about Oracle application awareness. In the interest of full disclosure, I missed the part about how, using the VMAX3, BOTH the Oracle DBA and the storage administrator can monitor and tune database performance. That means it's time to blog!


Driving Awareness

At the time of writing this blog it might be a challenge to find information on VMAX3 application awareness with the exception of this 6 minute video: Application Awareness Demo. In the video we learn that there is a “dedicated URL for DBAs” to access DBclassify in Unisphere 8.0. This means the DBA can independently access the DBclassify web page without having to contact the storage administrator and can gain immediate insight into database storage performance.


Picture 1: DBclassify Dashboard

Unisphere8dbclassify.png

In the picture above we see the DBclassify dashboard and several statistics: IO wait vs. non-IO wait, average active sessions waiting, response time, IOPS, and throughput. The solid lines in the graph denote performance as reported by the database and the dashed lines show storage performance. In this way it is very easy to investigate physical reads, writes, and redo writes and see the delta between database and storage metrics. This eliminates the blame storms that sometimes occur between the database and storage administrators.


Analytics

Clicking on the 'Analytics' tab up top brings the DBA to a management page that shows IO wait over time and which database objects were actively used during that time. This gives the DBA the capability to investigate historical performance and find which database objects were used during periods of high IO wait.


Picture 2: DBclassify Analytics

Unishpere8dbclassify2.png


Looking to the right in the list of database objects, you will see a bar that indicates the type of activity for each object: random read, sequential read, direct I/O, system I/O, commit I/O, and other I/O. This is important because moving database objects to enterprise flash drives is best for objects that are weighted towards random reads. For example, given a choice between an object with mostly random reads (purple) and another object with mostly direct I/O (green), the best opportunity to improve performance is with the object that has the purple bar.


Picture 3: DBclassify Hinting

Unisphere8dbclassify3.png

Sometimes it's not what you hear that matters but what you see. This picture was taken at approximately 3 minutes and 24 seconds into the video, and the objects selected are very important: all three objects show a healthy amount of random reads. The selected objects then become part of a hinting wizard in which the DBA can move the objects to the flash drives.

 

Picture 4: DBclassify Hinting Wizard

Unisphere8dbclassify5.png

In the hinting wizard the DBA can:

  • Assign a name to the hint: for example, “Billing” to identify objects related to billing activity
  • Priority: hint priority represents how important the objects are
    • 1: Top
    • 2: Very high
    • 3: High
    • 4: Above average
  • Hint Scheduling
    • One time: a one-time promotion of database objects
    • Ongoing: keep the databases objects on the faster tier of storage
    • Recurrence: schedule a time to promote database objects to accelerate cycles of workloads


Once a hint has been created, the DBA can then monitor the effectiveness of the hint. There is also a hint management tab that shows all the hints created (not shown) and allows the DBA to enable, disable, edit, and remove hints. As you can see, using the hinting wizard the DBA can improve database and application performance at a very granular level by selecting database objects to be promoted to flash drives. EMC is truly enabling the Oracle DBA to use VMAX storage arrays!

 

Stay tuned, more is coming!

Oracle DBAs and EMC Storage Administrators working together

 


Using tools like OEM 12c Plug-in & DBClassify

Bitly URL: http://bit.ly/1jOgqH2

 

Tweet this document:

Bridging the Gap between the #Oracle DBA and #EMC Storage Administrator: http://bit.ly/1jOgqH2 #OEM12c #dbclassify

 

Follow us on Twitter:


 

Customer facing slides attached at the bottom of the blog.

Perhaps you have been in one of those meetings: the Oracle DBA believes storage is causing a performance problem and the Storage Administrator insists there are no storage bottlenecks. You might have even said to yourself, “If there was one tool showing both Oracle database and storage metrics together this might be resolved very quickly.” And you would be correct! That unified vision showing in real-time both Oracle and storage performance together could really shorten the time of remediation and strengthen collaboration.

 

 

The good news is EMC is working closely with Oracle to continue development of our Oracle Enterprise Manager plug-in. This FREE plug-in enables the DBA to view EMC storage configuration and performance metrics (for both VMAX and VNX) within OEM 12c. On the other side, Storage Administrators can use DBClassify, a service that involves setting up monitoring software and training. With DBClassify, the Storage Admin and DBA both have insight into database and storage performance, with the ability to pin blocks in storage cache to assure performance.


BridgingTheGap.png

I would maintain applications like DBClassify and OEM Plug-in are about bridging the divide between databases and storage. When I started using the OEM 12c plug-in I had to undergo a learning curve to understand the configuration and metrics presented on screen. Thanks to some outstanding storage administrators I quickly learned the configuration differences between traditional RAID groups and storage pools. This is a key point: both Oracle DBAs and Storage Administrators are critically important in architecting and supporting enterprise systems but we have different expertise and terminology that require us to collaborate to drive success. Most Oracle DBAs use AWR reports and view performance in terms of wait times, latencies and end user experience. Storage Administrators view performance in terms of IOPS, storage configuration, and advanced features like FAST Cache or FAST VP. Our performance worlds are different yet highly dependent upon each other.

 

 

Collaborating can mean we learn more about each other's area of expertise using tools that enable us to form a common analysis. After learning to use the OEM Plug-in, I was able to work closely with the Storage Administrator to accurately identify the storage pools on the VNX array that were supporting my databases and review whether those storage pools had FAST Cache enabled. Great from an enablement and collaboration standpoint, right? But just as important, we were able to turn on FAST Cache for my databases after a few minutes of discussion. Better collaboration can mean faster action and, hopefully, time saved by not having to go to long meetings.

 

 

If you enjoyed reading this blog, then please join my session at EMC World where we will explore in detail how EMC Storage Administrators and Oracle DBAs can collaborate, remediate, and architect storage together.

EMC-VNX.jpg

VNX Multicore FAST Cache Improves Oracle Database Performance

 


Comparing Papers: VNX7500 to the new VNX8000

Bitly URL: http://bit.ly/1cR6jZJ

 

Tweet this document:

New #VNX blog http://ctt.ec/ze4n4+ showing how the new Multicore FAST Cache improves performance 5X and 3X for #Oracle Databases #EMC

 

Follow us on Twitter:


 

Customer facing slides attached at the bottom of the blog.

It's 2014 and time for a fresh approach to looking at Oracle storage performance. Let's compare two EMC dNFS proven solutions to see the performance benefits the new VNX offers to Oracle DBAs. I'll be comparing some of the findings in EMC VNX7500 Scaling Performance for Oracle 11gR2 RAC on VMware vSphere 5.1 (published in December 2012) with EMC VNX Scaling Performance for Oracle 12c RAC on VMware vSphere 5.5 (published in December 2013). I was going to add NetApp to the mix, but unfortunately finding a recent performance paper was difficult. Perhaps the best place to start is by doing a comparison between the two studies.

 

The table below shows that the major difference between the VNX7500 and the VNX8000 papers is in the database versions: 11gR2 versus 12cR1. Reading through both papers, there were no findings suggesting one database version was faster than the other; put another way, the focus of the papers was not to compare the performance of 11gR2 to 12c. Looking over the Oracle stacks across both papers, we have a close apples-to-apples comparison.


Table 1: Comparing software and network stacks
Oracle Stack on VNX7500 | Oracle Stack on VNX8000
Oracle RAC 11g Release 2 (11.2.0.3) | Oracle 12c Release 1 (12.1.0.1.0)
Oracle Direct NFS (dNFS) | Oracle Direct NFS (dNFS)
Interconnect Networks: 10 GbE | Interconnect Networks: 10 GbE
Oracle Enterprise Linux 6.3 | Red Hat Enterprise Linux 6.3
Swingbench 2.4 | Swingbench 2.4

Below is a table comparing the storage configuration of the VNX7500 to the VNX8000. The differences between the storage arrays include the storage processor cache size, the number of flash drives, and the addition of NL-SAS drives on the VNX8000:

 

Table 2: Comparing storage array configuration

VNX7500 Array Configuration | VNX8000 Array Configuration
2 storage processors, each with 24 GB cache | 2 storage processors, each with 128 GB cache
75 x 300 GB 10k SAS drives | 75 x 300 GB 10k SAS drives
4 x 300 GB 15k SAS drives (vault disks) | 4 x 300 GB 15k SAS drives (vault disks)
11 x 200 GB Flash drives | 5 x 200 GB Flash drives
4 x Data Movers (2 primary & 2 standby) | 4 x Data Movers (2 primary & 2 standby)
(none) | 9 x 3 TB 7.2K NL SAS drives

 

The processors

The VNX7500 (specifications) uses Xeon 5600 8-core CPUs with 24 GB of cache, and the VNX8000 uses Xeon E5-2600 (specifications) 8-core CPUs with 128 GB of cache. Referencing the CPU benchmarks here, "CPU benchmarks," we find the newer Xeon E5-2680 is at least 200% faster than the Xeon 5600. A faster processor improves almost everything in the storage array, most notably the MCx multi-core storage operating environment.

 

FAST Cache Comparison

In the table it is interesting to note the primary difference in the number of flash (SSD) drives used: the VNX7500 paper used 11 x 200 GB flash drives and the VNX8000 paper only 5 x 200 GB flash drives. Why did EMC engineering decide to use less than half the flash drives in the new VNX8000 paper? At this point I decided to investigate the difference in FAST Cache between the two studies. Below are my findings after doing some research. Note that my initial focus is on read I/O performance; I start with the legacy FAST Cache process and build up to how the new Multicore FAST Cache works.

 

Legacy FLARE FAST Cache

It was helpful for me to do a quick refresher on how the legacy FLARE FAST Cache worked. There are many FLARE VNX storage arrays used by customers so modeling how FAST Cache works will assist in identifying what has changed and how the changes improved performance.

Slide1.PNG.png

Walking through the above picture, on the left side you see the legacy FAST Cache process flow:

  1. Incoming read requests will be serviced by the FAST Cache, improving performance, but because the DRAM Cache is faster this is not the ideal performance path
  2. The read request is serviced by the FAST Cache: in this case 5.61 ms
  3. READ MISS: A check is performed against the FAST Cache Memory Map to determine if the I/O can be serviced using the DRAM Cache
  4. If the I/O can be serviced from the DRAM Cache, then the Policy Engine redirects the I/O to the DRAM Cache: best performance and lowest latency times
    1. If NOT in the DRAM Cache, then the I/O is serviced by reading from high-capacity disk
  5. Frequently used data is promoted from HDDs into the FAST Cache, and the subsequent read I/O is satisfied from the FAST Cache
  6. Frequently used data can also be promoted to the DRAM Cache

 

On the right side you see how the read I/O path is not ordered for optimal performance. For example, the absolute fastest performance is obtained by having the read I/O serviced by the DRAM cache; however, with the legacy FLARE version of FAST Cache the initial read I/O request is serviced by FAST Cache. Having read I/Os initially serviced by FAST Cache provides good performance, as seen in the paper VNX7500 Scaling Performance for Oracle 11gR2 on VMware vSphere 5.1, just not the best possible performance. For example, the baseline 'db file sequential read' average wait time was 96+ ms, and with FAST Cache it dropped significantly to a 5+ ms average wait time, a performance improvement of 85%. So FAST Cache provides a strong performance boost, but there is certainly the opportunity for even more performance by simply changing the order in which a read I/O is serviced.

 

On the right side it is very easy to see how the read I/O order is not optimal for performance:

 

  1. SSD is the first tier of performance for read I/Os but is slower than DRAM
  2. DRAM is the fastest tier of performance but is referenced after FAST Cache
  3. The HDD tier replies to the read I/O if both the FAST Cache and the DRAM Cache result in read misses

New Multicore FAST Cache on the VNX8000


DRAM is used for the Multicore Cache, which means the lowest response times (latency) achievable will be for host I/O serviced by the Multicore Cache in the new VNX storage array.

Slide2.PNG.png

Looking at the above picture to the left you can see the new Multicore FAST Cache process flow:

 

  1. Incoming read requests will be serviced by the multicore cache first providing the lowest latency and fastest response times.
  2. The read request is serviced by the multicore cache with no need to read the FAST Cache memory map (3) saving cycles.
  3. READ MISS: A check is performed with the FAST Cache Memory Map to determine if the I/O can be serviced using Multicore FAST Cache.
    1. If the I/O can be serviced from Multicore FAST Cache (MFC) then the Policy Engine redirects the I/O to the MFC: 2nd best performance and latency times.
  4. The data then is copied from the Multicore FAST Cache to the Multicore Cache.
    1. If NOT in the MFC then I/O is serviced reading from high capacity disk.
  5. The data is copied from HDDs into the Multicore Cache then the read I/O is satisfied from the Multicore Cache.
  6. Frequently used data is promoted to the Multicore FAST Cache by the Policy engine.

 

In the EMC proven solution entitled, VNX Scaling Performance for Oracle 12c RAC on VMware vSphere 5.5 the baseline performance of the database wait event ‘db file sequential read’ was compared using no Multicore FAST Cache to performance with Multicore FAST Cache. The following ‘db file sequential read’ wait times were observed:

 

  • Baseline average (no FAST Cache): 18.88 milliseconds
  • With Multicore FAST Cache: 1.52 milliseconds, a latency reduction of 91% for database reads.

 

The new I/O path of Multicore Cache first, then Multicore FAST Cache, yields performance gains for your databases and applications through an improved I/O service path. This is interesting, as a simple change in which cache services the host I/O can have a profound impact on performance. On the right side of the picture it is very easy to see how the read I/O order is optimal for performance:

 

  1. DRAM is the fastest tier of performance but is referenced first
  2. SSD is the second tier of performance for read I/Os
  3. The HDD tier replies to the read I/O if both the FAST Cache and the DRAM Cache result in read misses

 

To see more clearly the performance deltas let’s compare the wait times between the two studies in more detail.

 

Comparing Overall Performance Improvements

 

Comparing the VNX7500 study to the new VNX8000 study we see a major overall performance improvement across all “db file sequential read” requests.

Slide3.PNG.png

 

In the picture above, the baseline value for the VNX7500 was 96 ms and for the VNX8000 only 18 ms, a 5X improvement in baseline performance for 'db file sequential reads'. This baseline performance improvement is significant because databases NOT using Multicore FAST Cache will most likely experience a performance boost when migrated to the new VNX8000. At most companies the Multicore FAST Cache (MFC) will be used for production databases and perhaps TEST copies of production. Typically, development databases will not use MFC but can represent the majority of capacity on the storage array. Increasing performance by 5 times for databases not using MFC could be very beneficial to the DBA team.

 

The Multicore FAST Cache (MFC) design uses the Multicore Cache as the first performance tier for read I/Os. This reordering to use the Multicore Cache has increased performance 3X when comparing the VNX7500 with FAST Cache (5.61 ms) to the VNX8000 with MFC (1.53 ms). This is a substantial read performance improvement for Oracle databases, meaning faster response times and lower latency.

 

In summary, Oracle DBAs can substantially improve performance for their Oracle databases by moving to a new VNX storage array. The performance gains are not limited to databases using the new MFC design but apply to databases not using MFC too. In comparing the two studies we found that non-MFC databases could run 5X faster and MFC databases 3X faster on the new VNX storage arrays. Perhaps the most important point is that the VNX8000 was faster and used 50% fewer flash drives. This means Oracle DBAs can drive greater database performance at lower cost (fewer flash drives), providing an excellent TCO to the business. Now that is cool!

 

I hope you enjoyed reading this first blog; look for part 2, in which we explore more points of comparison between the two studies.

Sam Lucido

The 3 E's in EMC Elect

Posted by Sam Lucido Jan 16, 2014

You might have noticed on Twitter lots of tweets with the hashtag #EMCElect2014. What is EMC Elect? It is a group of people that enjoy socializing EMC solutions. We are now part of a team called "EMC Elect," and as a team we will collaborate to turn up the volume on EMC solutions. A big part of this is us listening to you, the customer, to learn, collaborate, create, and have fun with all we do together. So it's the collaboration with you that enables us to socialize the best message and ensure our efforts make an impact for our customers.

 

Most everyone has heard there is no "I" in TEAM! In "EMC Elect" we have three amazing E's:

  • Engagement
  • Energy
  • Energizers

 

 

Engagement

 

The secret is that the EMC Elect team is all about you, but we call it “Engagement.” Having a bit of fun, I Googled “engagement” and found:

 

  1. A formal agreement to get married
  2. An arrangement to do something or go somewhere at a fixed time

 

 

 

The first definition is not going to work, as I already have a wife and kids, but the second definition is much more interesting. Using the second meaning, I’m going to take some creative license and apply it to the EMC Elect program:

 

 

 

“We promise to listen to you across all types of media (Twitter, Facebook, Communities, and more) and events like EMC World to collaborate and socialize EMC solutions for Oracle in 2014 and beyond.” (Swap Oracle for whichever application or technology you are most interested in.)

 

 

 

So much depends upon collaboration, as no one knows better than you how to make EMC products and solutions the best for all things Oracle. A complete list of all the EMC Elect members appears in the blog, "The EMC Elect of 2014 - Official List," but if you are interested in the Oracle members on the team, here is the short list:

 

  • Allan Robertson, EMC, @dba_hba
  • Jeff Browning, EMC, @OracleHeretic
  • Sam Lucido, EMC, @Sam_Lucido

 

Disclaimer: In case someone was missed, please let me know and I'll add them to the list.

 

 

 

Listening to customers is the best way of understanding how we can mutually develop architectures to provide the best value to your business. 2014 is going to be a great year of engaging with you to socialize EMC solutions!

 

 

 

Energy

 

It is a great honor to be working with the people on the EMC Elect team and, more importantly, with you the customer. There is so much positive energy on performance, protection, continuous availability and many other solutions from the team. It’s that boundless energy that is going to make this year so exciting. The thing about energy is that it can come from anywhere, and anyone can create the new trend that electrifies the community. What is truly exciting is working with the EMC Elect team together with customers so we can move from creating a few thunderbolts to a magnificent, inspiring display of lightning.

 

 

 

It’s time to ride the 2014 technology highway and push the pedal to the metal: can’t drive 55.

 

 

Energizers

 

This could be anybody who sparks the community. It’s fun to be part of a new idea or cause and become the crusader blogging about it. Below is a fun picture taken at VMworld 2013 showing the “Monster VM.”

 

Want to have some fun? Try googling “monster vm vmworld” to see all the related content that came out of VMworld. Energizers don’t have to be risk takers (though many times they are); more importantly, they generate enthusiasm, excitement, and interest, and they are quick to give others credit. How awesome would it be to have a huge community of Energizers!

 

VMworldEMEA_2.jpg

 

I’m looking forward to being part of the EMC Elect team and even more to working with you, the customer. Let’s engage, get energized and become Energizers of EMC.

VPLEX RAC.png

Continuous Availability with Extended Oracle RAC and EMC VPLEX Metro


Continuous Availability for Oracle

Bitly URL: http://bit.ly/1cjvd47

 

Tweet this blog post:

#EMC VPLEX #Oracle stretched RAC provides zero downtime. #EMCElect http://bit.ly/1cjvd47


Follow us on Twitter:

EMCOracle.jpeg.jpg

Over the years we have presented at shows like IOUG COLLABORATE, EMC World, and Oracle OpenWorld on how VPLEX Metro and Extended Oracle RAC can together provide a zero-downtime, continuous-availability architecture. Those 60-minute sessions are important, as we have to cover the foundation of how to architect this continuous-uptime solution. Let’s explore a technical tip and show where you can learn more about this Oracle solution.

 

Logging Volume Considerations

 

Benefits of reading this configuration tip include:

 

  • Increased performance for applications
  • More granular monitoring capabilities
  • Faster sync restores after a failed WAN connection

 

The content for this tip can be found on the ECN VPLEX community in a blog called “Logging Volume Considerations.” A logging volume is dedicated capacity for tracking any blocks written to a cluster. To use an Oracle analogy, the logging volume for VPLEX is similar to the online redo logs for a database. A logging volume is a required prerequisite to creating distributed devices and remote devices. The default configuration is a one-to-many relationship, meaning many distributed devices use the same logging volume.

Sams Slide 1.PNG.png

In the above picture we show how the logging volume is used for tracking written blocks. Several components of the VPLEX Metro architecture are illustrated in the picture, and defining those components is important to our understanding:


Storage Volumes, shown at the bottom of the picture next to the VMAX and VNX, are LUNs. These LUNs are presented to the back-end ports of VPLEX and are therefore visible and available for use.

Extents are created from the storage volumes. The general recommendation is to have one extent per storage volume, but multiple extents per storage volume are supported if necessary. For example, if you plan to use VPLEX for a database requiring 1 TB of capacity, create one extent of the same size (1 TB).


Devices are created from extents. Multiple extents can be used to create one device. When configuring devices the administrator specifies the RAID type: for example, RAID 0 (no mirroring of devices), RAID 1 (mirroring of devices), and RAID-C (concatenating devices). As DBAs we write scripts to concatenate files together, so we are familiar with the concept. From the storage perspective, RAID-C is the ability to create devices that span multiple extents. One tip to mention is to avoid mixing RAID 0 and RAID-C devices within the same virtual volume. Having a homogeneous RAID 1 or RAID-C configuration for the virtual volume improves responsiveness and reduces complexity.


In a VPLEX Metro configuration, devices are referred to as distributed devices, meaning they are mirrored across two VPLEX clusters. As you might have guessed, using VPLEX Metro requires that the distributed devices be configured as RAID 1.

Virtual Volumes are built from devices. It is the virtual volume that is presented to the Oracle database server. Because virtual volumes appear as normal storage to the Oracle database, the VPLEX Metro configuration is transparent and adds no complexity or management overhead for the DBA. In a VPLEX Metro configuration the virtual volume is referred to as a distributed virtual volume.

Recommendations for building a dedicated logging volume:

Oracle DBAs working in a physical (non-virtualized) infrastructure like to dedicate one server to one database, because a dedicated server guarantees the database will not have to compete for server resources. Most of the time this 1-to-1 architecture is reserved for production, and its benefits are consistent performance and more granular monitoring. Building a dedicated logging volume for your production database on VPLEX offers similar advantages. Below are some of the guidelines for building the logging volume (a worked sizing example follows the list). The full best practices for creating logging volumes can be found in the paper, “Vblock Data Protection Best Practices for EMC VPLEX with Vblock Systems.”
  • Create one logging volume for each cluster
  • Use RAID 1 for logging volumes
  • Configure at least 1 GB (preferably more) of logging volume space for every 16 TB of distributed device space.
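As a worked sizing example (the 64 TB figure is purely hypothetical, used only to illustrate the guideline above): for a VPLEX Metro configuration with 64 TB of distributed device capacity, the minimum logging volume per cluster would be

  (64 TB / 16 TB) × 1 GB = 4 GB

and following the "preferably more" guidance you would round that up, for example to 10 GB, to leave headroom for growth.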

 

Planning for resynchronization in the case of a failure


Most likely your company is using VPLEX Metro with Extended Oracle RAC to create a continuous availability architecture in which the loss of a storage array or data center does not impact the availability of your enterprise applications. When architecting for an unplanned outage, the infrastructure team should consider dependencies related to recovery. In this case the logging volumes will be subject to high levels of I/O when resynchronizing the local and remote devices. Having a dedicated logging volume for your production database(s) means the resynchronization I/O is for your database alone and not other applications, which translates into faster recovery. When a database shares a logging volume with other applications, resynchronization involves the database and all the other applications, lengthening the time needed for the devices to reach a synchronous state. Our objective is to avoid this situation by having a dedicated logging volume for the database.

 

Slide2.PNG

Can you create more than one logging volume to use with the same device? The answer is yes, as this enables the business to grow logging volume capacity along with the growth of the database. The part to be mindful of is the default behavior: “if no dedicated logging volume is specified, a logging volume is automatically selected from any available logging volume that has sufficient space for the requested entries. If no available logging volume exists, an error message is returned.” The quote was taken from the blog, “Logging Volume Considerations.”

 


For more on Oracle Extended RAC with VPLEX Metro, I recommend:

 

Interested in seeing a live demonstration? Book time at the Oracle Solution Center by completing this form: Booking request form.

By all appearances Oracle has made big moves toward embracing a hybrid cloud strategy. Oracle’s most recent press release, entitled “Oracle Licenses VMware vSphere Storage APIs for Oracle Storage,” is very positive news. In this press release Oracle has licensed VMware Storage APIs to enable customers using VMware virtualization to more effectively manage Oracle on Pillar Axiom and ZFS Storage. This means Oracle storage solutions join EMC and other vendors in offering integration with VMware vSphere. What might customers expect from Oracle using VMware APIs?


OracleVMwareStorageAPI.png

vSphere API for Array Integration (VAAI): Offloads traditionally expensive resource management of clones and snaps from the hypervisor to the storage array. Let’s say that you are ready to upgrade from 11gR2 to 12c (check out this EMC proven solution for upgrading Oracle) and you have three recovery points built into the upgrade plan. Through VAAI these snapshots will take much less time, as the storage array does the job! Faster clones and snaps reduce the database upgrade time.

 

VMware vSphere API for Storage Awareness (VASA): This enables Oracle and other storage vendors to provide vSphere with information from the storage array. Information about disk array features like snapshots, replication, thin provisioning and RAID levels represents some of the configuration and status information presented up to vSphere. Having the storage information in vSphere means the VMware administrator can more easily use Oracle storage for virtualized databases.

 

Site Recovery Manager (SRM): Automates recovery plans from vCenter Server. Using SRM, the VMware administrator can collaborate with the Oracle DBA to include databases and applications in the automation plan. This means that with some scripting the databases and applications can start up at a secondary site. This is very important, as all the manual steps can be scripted and coordinated with interconnected systems for a holistic disaster recovery plan.

 

Most importantly, this gives customers choice; no lock-in! It seems a positive step in the direction of enabling customers to build the infrastructure they choose to run their Oracle databases and applications. Adding VMware to the list of supported vendors also has value for Oracle. Now when working with customers, Oracle Sales doesn’t have to explain “why not VMware”; instead the conversation takes the much more positive tone of “we work with VMware.” The press release included positive comments such as “expanded support of VMware environments” and “deepening the integration of VMware infrastructure with Oracle storage systems”; hopefully this is the beginning of continued collaboration.

 

Optimistically, this is also the end of any Fear, Uncertainty and Doubt (FUD) relating to using VMware to virtualize Oracle databases. I’ll provide this link, “EMC & Oracle Customer References Virtual Rolodex,” to see how many customers use EMC and Oracle together. Here are some highlights:

 

Seacore on Virtualizing Oracle with EMC: Watch Ben Marino, Director of Technology, talk about virtualizing Oracle. In this video, virtualization improved Oracle database provisioning from 2 weeks to about 2 days.

 

AAR Corp. on the Private Cloud with EMC: AAR Corp. is an aerospace company, and in the video Jim Gross, Vice President of IT, talks about performance gains with the VMAX 10K, RecoverPoint for lower RPO, and Avamar in its VMware and Oracle environment.

 

Zebra Technologies paper: Zebra Technologies is a global leader known for its printing technologies, including RFID and real-time location solutions. A great quote: “All of Zebra’s storage resides on VMAX 10K, as well as EMC VNX unified storage. Zebra uses VMware vSphere to virtualize its server environment.”

 

You might ask why I’m talking about Oracle storage and its integration with VMware on the EMC Oracle community. After all, Oracle storage competes with EMC, right? In my opinion EMC storage solutions are best in class, and customers stand to benefit from more competition. Did you see the new VNX? Here is the press release, “Accelerates Virtual Applications and File Performance Up-To 4X; New Multi-Core Optimized VNX with MCx Software Unleashes The Full Power of Flash,” and some metrics:


  • More than the performance of 4 previous-generation systems combined
  • More than 3X the performance for transactional NAS applications (such as VMware over NFS) with 60% faster response time than previous VNX systems (thinking dNFS is going to rock on the new VNX)
  • More than 735K concurrent Oracle and SQL OLTP IOPS, 4X more than previous VNX systems
  • More than 6,600 virtual machines, a 6X improvement over the previous generation
  • More than 3X the bandwidth, up to 30 GB/second for Oracle and SQL data warehousing, compared with the previous generation

 

It’s time to bring the “virtualizing Oracle” FUD to the curb for garbage collection and focus on customer value: broad integration and performance. Oracle using VMware Storage APIs is awesome and gives the customers more choices. Well done Oracle!

 

Tweet this blog!

Bitly URL: http://bit.ly/18dEcIA

Sample tweets:


#Oracle Embraces Cloud Strategy (Finally!). http://bit.ly/18dEcIA

#Oracle has licensed #VMware Storage APIs. http://bit.ly/18dEcIA

#Oracle is licensing #VMware vSphere Storage APIs for Oracle Storage. http://bit.ly/18dEcIA

#Oracle embraces VMware Storage APIs. http://bit.ly/18dEcIA

In many of our presentations we have a slide or two related to Moore’s Law. The following quote is from Wikipedia:

 

“Moore’s law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years.”

 

The doubling of CPU performance approximately every two years has benefited databases both in terms of speed and cost. In the picture below we show the ‘gap challenge’ as storage performance hasn't doubled every two years.

Performance Gap.png

 

In my interactions with customers, CPU is rarely the bottleneck; the storage performance gap, however, means Oracle DBAs are looking for solutions to improve the I/O response times of their databases. Looking to improve the performance of part of a system generally relates to Amdahl’s law:

 

“…is used to find the maximum expected improvement to an overall system when part of the system is improved.”
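For reference, Amdahl’s law is usually written as follows, where p is the fraction of total time spent in the part being improved and s is the speedup of that part:

  Overall speedup = 1 / ((1 - p) + p / s)

As a hypothetical illustration (these fractions are assumptions, not measurements): if I/O accounts for 60% of a database’s response time and flash makes that I/O 5 times faster, the overall speedup is 1 / (0.4 + 0.6/5) = 1 / 0.52 ≈ 1.9X. Speeding up only the CPU side, by contrast, can never deliver more than 1 / 0.6 ≈ 1.7X no matter how fast the processors get, which is the point the research article below makes.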

 

This quote from the research article entitled, An Efficient Schema for Cloud Systems Based on SSD Cache Technology pulls the concepts together:

 

“For some applications which have quite high requirements on response time, simply to improve CPU performance cannot improve the overall performance of a system: hence, it is necessary to improve the I/O performance and reduce the speed difference between storage system and CPU.”

 

Oracle databases fall into the category of high requirements on response time, with the goal of reducing the speed difference between the storage system and CPU. The research article mentioned above describes the characteristics of caching technology as:


  • The goal is to place frequently accessed data in a fast-access device, speeding up access and reducing latency [4]
  • Most current caching algorithms take into account locality of reference and access frequency [3]

 

The strength of solid-state drives, commonly called flash drives, is that they can significantly improve I/O performance. EMC has two proven technologies that reduce the speed difference between the storage system and CPU:

 

  • Fully Automated Storage Tiering for Virtual Pools (FAST VP) is available on both the VMAX and VNX storage arrays and identifies frequently accessed data for promotion to flash drives. Employing the principles of locality of reference and frequency of access enables the intelligent movement of hot blocks from high-capacity drives to much lower-latency flash drives.
  • FAST Cache on the VNX similarly identifies frequently accessed data and copies the hot blocks to flash drives. The ‘copy’ part is the differentiator, as the original block remains on the high-capacity drives. By copying the hot blocks to flash drives, read performance is accelerated.

 

Caching technology is challenged in situations that involve access to blocks that are not frequently accessed, and therefore not in cache, but still require the same level of performance. These seldom-accessed, low-latency data requirements occur, for example, when a report runs only once a week, or during end-of-month activities that fall outside the cache’s working set. Unfortunately, this could mean a report that needs to take less than 10 minutes might take significantly longer. The question is: how do we solve this caching technology challenge and meet the requirements of seldom-accessed, low-latency data?

 

An all-flash array like EMC’s newly released XtremIO offers low latency across both types of data: frequently accessed (cache friendly) and seldom accessed (cache challenged). Because all the data resides on flash drives, performance predictability is improved and latency is often in the sub-millisecond range. XtremIO is targeted at mission-critical databases and applications that require extreme performance, where the loss of milliseconds can mean loss of profit to the business. I’m including a link to the XtremIO data sheet for Oracle DBAs and everyone else who has mission-critical applications requiring all-flash performance.

 

XtremIO storage does take into account the difference in performance between reads and writes on flash drives. If you are interested in how this works, please comment below; if you would like another topic covered, I’ll be interested to read your ideas.

 

References:

  1. Moore’s Law on Wikipedia
  2. Amdahl’s Law on Wikipedia
  3. An Efficient Schema for Cloud Systems Based on SSD Cache Technology, a research article by Jinjiang Liu, Yihua Lan, Jingjing Liang, Quanzhou Cheng, Chih Cheng Hung, Chao Yin, and Jiadong Sun
  4. Optimal Multistream Sequential Prefetching in a Shared Cache, ACM Transactions on Storage, vol. 3, no. 3 article 10, 2007

 

XtremIO Links:

There seems to be a trend in the Oracle database community: the use of current commodity hardware enables a large System Global Area (SGA) and a correspondingly large database buffer cache. The SGA is a group of shared memory structures that contain data and control information for an Oracle database. The database buffer cache is a component of the SGA that holds copies of frequently used data blocks (a quick query for inspecting these components follows the list below). For more information, see Oracle’s documentation library. In this study by Principled Technologies, each virtualized RAC node was allocated 148 GB of virtual memory. Why did we include so much memory for a virtualized Oracle RAC node?

  1. This acknowledges a trend in servers having more main memory (RAM), as the additional cost is relatively minimal. This additional memory is therefore available for Database Administrators (DBA) to allocate to the SGA.
  2. VMware’s vMotion is a foundational technology that enables the business to non-disruptively move a Virtual Machine (VM) from one server to another. As the study indicates, this can include vMotion of a virtualized production Oracle RAC node.
  3. The goal of the study was to demonstrate the simultaneous vMotion of multiple large virtual memory VMs which are running an OLTP workload. We proved that these vMotion operations could be done efficiently, quickly and without data loss or other instability. Since Oracle RAC is specifically designed to fail a node and fence it from the cluster in the event of the slightest hint of instability, this is an amazing accomplishment.
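For DBAs sizing a large SGA like the ones in this study, a quick way to see how that memory is actually carved up on a running instance is to query V$SGAINFO. This is a generic sketch, not part of the Principled Technologies test procedure:

  -- Show the major SGA components, largest first, in GB
  SELECT name,
         ROUND(bytes / 1024 / 1024 / 1024, 1) AS size_gb
    FROM v$sgainfo
   ORDER BY bytes DESC;

The "Buffer Cache Size" row is the one that grows the most when a DBA allocates generous memory to the SGA on commodity servers.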

 

DBAs manage the enterprise’s most mission critical applications. Thus, DBAs want clear and convincing evidence of the benefits virtualization offers before adding another layer to the Oracle stack. In this recent study by IOUG entitled, “From Database Clouds to Big Data: 2013 IOUG Survey on Database Manageability” the 4th greatest challenge of database administrators surveyed was, “Managing a larger number of databases with the same resources.” The value of the Principled Technologies research study is that it demonstrates that Oracle DBAs can virtualize their Oracle database servers including production RAC nodes, and then easily move those VMs without any loss of availability. Oracle RAC is a cluster of many servers which access one database, a many-to-one architecture. This provides resiliency: If one server fails the surviving servers continue to provide database availability. Another interesting finding from this study is, “We have already consolidated our database and infrastructure onto a single technology platform for our critical business applications”. The study adds more color by noting, “A developing trend may be taking hold, as many IT organizations move towards database consolidation onto a shared and/or cloud environment.” Virtualization is the platform which enables both consolidation and manageability for Oracle DBAs. In terms of manageability, virtualization offers the DBA:

  • The ability to proactively move database servers in preparation for planned outages: If a server, network switch, or other hardware within the infrastructure requires maintenance the virtualized database server / RAC node can be moved off of that hardware.
  • Consolidation: Multiple databases can efficiently and securely share the same hardware as they are isolated to individual VMs.
  • Improved Service Level Agreements (SLA): Improvements in proactive and automated reactive manageability will improve the SLAs for the business by improving database uptime.
  • Strong Storage Integration: VMware integrates with EMC storage and other vendors for increased manageability, performance and availability.

 

There is another part of the manageability picture that is equally important: quality of service (QoS). For example, if a given vMotion event takes too long and during that time database performance is significantly degraded, then the QoS is low. The Principled Technologies study punches this up by saying, “Quick and successful virtual machine migrations are a key part of running a virtualized infrastructure.” So, if vMotion is to add value in terms of manageability, then maintaining QoS requires the following:

  • vMotion windows should be kept to a minimum
  • The vMotion end-user performance impact should be minimal

 

The Principled Technologies study was designed to validate both ease of manageability and QoS. In three different tests, multiple virtualized Oracle RAC nodes under a significant OLTP workload are simultaneously moved with vMotion. Diving into the details of the architecture:

 

The software technology stack includes:

  • Oracle 11g Release 2 (11.2.0.3)
  • Oracle Enterprise Linux 6.4 x86_64 Red Hat compatible kernel
  • VMware vSphere 5.1
    • VMware vCenter Server 5.1

MonsterVM.png

 

  • Cisco UCS 5108 Blade Server Chassis
    • 3 x Cisco UCS B200 M3 Blade server
      • Each with two Xeon E5-2680 processors
      • Each with 384 GB RAM
  • 2 x Cisco UCS 6248UP Fabric Interconnects
  • 2 x Cisco Nexus 5548UP switches
  • EMC VMAX Cloud Edition
    • 2,994 GB in the Diamond level
    • 18,004 GB in the Gold level
    • 15,308 GB in the Silver level

 

MonsterVMArch.png

 

Workload Software

 

Most of the software and hardware above is self-explanatory, but more detail is needed for the EMC VMAX Cloud Edition. This storage array is unique, as it enables self-service provisioning of storage based upon performance levels. The concept is simple and powerful: create a storage catalog of pre-engineered storage levels designed to provide different levels of performance. For example, the fastest storage level is Diamond, offering a greater number of flash drives to ensure low latency and high IOPS. Below Diamond are the Platinum, Gold, Silver and Bronze storage performance levels. Let’s look at how the Oracle RAC database was easily configured on the EMC VMAX Cloud Edition:

newletterslide-Jeff.png

The picture above shows how simple it was to provision the Oracle 11gR2 database used in this study. The datafiles and tempfiles were placed on the Diamond level, as this level offered the best latency and IOPS. In the study, the goal was to place a significant workload on the virtualized Oracle RAC nodes and configure the storage to support that workload. One notable metric: The average core utilization prior to vMotion of the RAC nodes was approximately 33% with no significant I/O waits. This is important: Core utilization was used to define the workload parameters, not storage performance. As noted in the study, “In the configuration we tested, Transactions Per Second (TPS) from the hardware and software stack reached a maximum of 3,654 TPS.” In terms of storage performance, there was plenty of TPS to spare and by no means was the VMAX CE maxed out.

 

Everything else, i.e. the VM’s guest operating system, Oracle home, redo and undo, was placed on the Gold catalog level, as these files didn’t require the same level of performance. Does this database layout seem relatively simple and straightforward? The strength of the VMAX CE is that it enables a simplified database layout, as performance has been pre-engineered for the business. But there is another key consideration that some DBAs might not think of when working with storage administrators: shared disk performance. It is entirely possible for the storage administrator to have multiple applications share the same disks, meaning shared latency and IOPS. This creates the potential for one application to impact another application using the same disks, and this is sometimes difficult to diagnose. The EMC VMAX CE isolates logical resources to secure the appropriate performance for each tenant. It is this isolation that prevents applications from impacting each other on the same storage array. For a DBA managing multiple mission-critical Oracle database servers, this means he or she can reserve resources for CPU, RAM and now storage. This in turn provides an improved SLA to the business.
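To make that layout concrete, here is a minimal SQL sketch of how a DBA might express the placement described above. The ASM disk group names (+DATA_DIAMOND, +REDO_GOLD), tablespace names and sizes are hypothetical stand-ins for disk groups carved from the Diamond and Gold catalog levels; they are not the names used in the study:

  -- Datafiles and tempfiles on the Diamond level (lowest latency, highest IOPS)
  CREATE BIGFILE TABLESPACE soe_data
    DATAFILE '+DATA_DIAMOND' SIZE 500G;

  CREATE TEMPORARY TABLESPACE soe_temp
    TEMPFILE '+DATA_DIAMOND' SIZE 100G;

  -- Redo logs on the Gold level, which did not need Diamond-class performance
  ALTER DATABASE ADD LOGFILE ('+REDO_GOLD') SIZE 4G;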

 

A very recent video by AAR Corp. has been posted at EMC in which Vice President of IT Operations Jim Gross discusses, “Why AAR is using the VMAX 10K for improved availability and decreased maintenance windows for the company.” The core of the VMAX CE storage array is the VMAX 10K, so the AAR video is a good example of how performance, availability and maintenance can be improved using EMC VMAX. For Oracle DBAs interested in an EMC VMAX / virtualized Oracle customer case study, Jim Gross reviews their use of VMware vSphere and Oracle on EMC VMAX.

 

Getting back to the Principled Technologies study, we tested the ability to use vMotion to move multiple VMs configured with Oracle Enterprise Linux (OEL), 156 GB of virtual RAM, and 16 vCPUs. For the Oracle DBA reading this article, the SGA was sized at 145 GB with 3,686 MB for the Program Global Area (PGA). The PGA is a memory area that stores user and control data for the exclusive use of a given Oracle server process; there is thus a PGA memory area for each server process, and the sum of all the PGA memory is referred to as the total instance PGA. Several best-practice techniques (e.g., huge pages) were used in the study to optimize database performance. In Appendix A, “System Configuration Information,” the study provides great detail on most of the steps used to configure the tests, including the implementation of huge pages. The first of the three tests involved a single vMotion of one virtualized RAC node from server A to server B.
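For readers who want to see what those memory settings look like in practice, here is a minimal sketch using the values quoted from the study (145 GB SGA, 3,686 MB PGA). The exact commands and the USE_LARGE_PAGES setting are my illustration of the huge pages best practice, not a copy of the study's appendix:

  -- Size the SGA and PGA to the values used in the study (takes effect after restart)
  ALTER SYSTEM SET sga_target = 145G SCOPE=SPFILE;
  ALTER SYSTEM SET pga_aggregate_target = 3686M SCOPE=SPFILE;

  -- Refuse to start unless the whole SGA fits in huge pages (11.2.0.2 and later);
  -- the matching OS-side setting is vm.nr_hugepages, configured outside of SQL
  ALTER SYSTEM SET use_large_pages = ONLY SCOPE=SPFILE;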

 

Figure 1: Diagram of our single vMotion test, where we migrated a single VM from host A to host B

PTStudyTest1.png

 

As you can see from Figure 1 above, the final state was server A with no virtual machines and server B with two virtual machines. The results for this test included:

 

Core Utilization for VMware vSphere host:

MonsterVMTable1.png

Looking at the above table we can see the vMotion of the RAC node took 130 seconds and added approximately 14% more CPU utilization on Host A and 10% more on Host B. Quick definition: bandwidth is the maximum throughput of a logical or physical communication path. During the 130 seconds of vMotion, Host A transmitted 10,618 Megabits per second (Mbps) over the network to Host B. This test showed that a single very large virtualized RAC node supporting a significant workload was moved in just over 2 minutes with no data loss, with CPU utilization peaking at around 14%. There was additional load on the servers during the vMotion window, but this settled back down 90 seconds after the vMotion had completed.
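As a rough sanity check (treating the reported 10,618 Mbps as a sustained average rather than a peak, which is an assumption on my part):

  10,618 Mbps × 130 s ≈ 1,380,000 megabits ≈ 172,000 MB, i.e. roughly 170 GB transferred

That is in the right ballpark for a VM configured with 156 GB of virtual RAM plus vMotion overhead, which suggests the 10 Gb network was the pacing factor for the migration.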

 

This test is a good example of using a rolling maintenance window to work on servers. The scenario involves moving a virtualized RAC node to a standby server so that maintenance can be performed on the production server. Once that is completed, the same process would be followed for the other two RAC nodes. Using the rolling maintenance window approach would mean approximately 260 seconds of vMotion per node: to the standby server and back to the production server. Across all three servers, this totals 780 seconds, or 13 minutes, of vMotion time without any loss of availability of the RAC nodes to the business. Very impressive!

 

The next test in the study vMotioned two of the three virtual machines as shown in the Figure below.

 

Figure 2: Diagram of our double vMotion test, where we swapped two VMs from host B to host C

PTStudyTest2.png

 

As shown in the above picture, the VM on server B was moved to server C and the VM on server C was moved to server B. The final state was that server B hosted the VM originally from server C and server C hosted the VM from server B. The results from the second test include:

 

Core Utilization for VMware vSphere host:

MonsterVMChart2a.png

The above table shows that the simultaneous vMotion of the two VMs took 155 seconds and added approximately 22% more CPU utilization on Host B and 21% more on Host C. Let’s use a table to look at the network throughput in Mbps.

MonsterVMChart2b.png

There seems to be a trend between test one and test two in that the throughput peaks at approximately 10,500 Mbps. More bandwidth would increase throughput and hopefully reduce the time needed for these vMotion operations. This test showed that moving two very large virtualized RAC nodes supporting significant OLTP workloads could be completed in just over 2.5 minutes with no data loss or unavailability. CPU utilization peaked at 22% additional load and settled back down 90 seconds after the vMotion had completed.

 

Moving two virtualized Oracle RAC nodes would reduce a rolling window maintenance process from three steps down to two. In a two-step rolling window upgrade the following process is proposed:

  • Move two VMs simultaneously to the standby servers and back to the production servers, which would total approximately 310 seconds
  • Move the final VM to the standby server and back for 260 seconds

 

Using this two-step rolling window approach reduces the total vMotion time to 570 seconds, or 9.5 minutes. Compare this to the three-step rolling window approach of moving each virtual machine individually, which was estimated to take 13 minutes. Moving multiple virtual machines could thus offer substantial time savings to the business.

 

The final test involves moving all three virtual machines as shown below.

 

Figure 3: Diagram of our triple vMotion test, where we migrated the VMs from one host to another

PTStudyTest3.png

 

As Figure 3 shows, the VM on server C moves to server A, the VM on server A moves to server B, and the VM on server B moves to server C. We have nicknamed this the “merry-go-round” scenario. The results of this test are below:

 

Core Utilization for VMware vSphere host:

MonsterVMChart3a.png

The above table shows that simultaneous vMotion of the three VMs took 180 seconds, and added a maximum of 21% more CPU utilization across all the servers. Let’s use a table to look at the network throughput in Mbps.

 

MonsterVMChart3b.png

The trend continues as the final triple vMotion test did not break the 10,500 Mbps ceiling. Moving all the VMs at once could apply to proactively avoiding a disaster or doing maintenance without deferring to the rolling window approach. In this case moving all three virtualized RAC nodes to three standby servers and back to production would require approximately 360 seconds of vMotion time or just 6 minutes. The advantage to the business is very little time is spent using vMotion and there is no loss of availability or data.

 

Summary

 

Managing a larger number of databases with the same resources continues to be a challenge for Oracle database administrators, and for other DBAs too. Selecting a virtualization and storage platform that is open and supports any application unifies manageability, availability and performance for the business. The VMAX Cloud Edition adds a new dimension to the virtualization study, as it enables the DBA to provision storage and isolates logical resources, meaning one database will not impact another on the storage array. Using the VMAX Cloud Edition (CE) means accelerated database provisioning, as this task is transformed into a self-service, DBA-managed function. It also means performance protection, as the CE has pre-engineered catalog levels that simplify database layout and isolate latency and IOPS. Overall, the VMAX Cloud Edition offers a faster time to market and better SLAs to the business.

 

The cornerstone of this study was proving how VMware virtualization can offer greater agility to DBAs managing Oracle RAC databases by adding vMotion capabilities that improve application availability. Using a table we can see the agility virtualization adds to Oracle RAC databases.

 

MonsterVMChart4.png

As the number of simultaneous vMotions increases, the number of minutes needed to move the RAC nodes in a rolling window scenario decreases, without additional core utilization beyond 22%.

PTStudySummarySlide.png

This research study is critically important as it demonstrated that three significantly loaded and very large virtualized Oracle RAC nodes can be vMotioned to other servers in 180 seconds. In an emergency situation with little time to evaluate options, a DBA can move all three RAC nodes in 3 minutes with no loss of availability to the business. When minutes translate into thousands or perhaps tens of thousands of dollars, the ability to proactively vMotion Oracle RAC nodes justifies the investment in moving mission-critical applications to a virtualized infrastructure. From the perspective of the DBA managing the mission-critical database, this means no actionable work on your part, as the vMotion of the database is done transparently in the background. Certainly, monitoring the move and watching the database is warranted, but beyond those precautions not much is left to do.

 

After the original production servers have been upgraded or replaced, another scheduled vMotion can be completed in 3 minutes, for a total of 6 minutes of vMotion time. Acknowledging the trend of larger databases, and more of them to manage, this research study by Principled Technologies shows that VMware, Cisco, and EMC can effectively address the challenges DBAs and the business have in maintaining availability, improving performance and, most importantly, enabling the teams responsible for the mission-critical application.

The new Vblock Specialized System for High Performance Databases (SSHPD) was announced on September 17, 2013 and gives customers interested in open performance architectures a strong choice for Oracle databases. As I’m an Oracle DBA I’ll focus on Oracle, but this Vblock system will accelerate all kinds of other databases too. Open hardware architectures like the Vblock have an advantage over specialized systems in that they support the acceleration of almost any database configuration. With very specialized systems, the Oracle DBA, and correspondingly the business, can boost performance only for a narrow range of that vendor’s databases. By contrast, a converged open architecture like the Vblock enables the Oracle DBA and the business to accelerate a wide range of databases with mission-critical applications. The importance of an open architecture extends to considerations like integrating with existing systems, attaching 3rd-party systems and reusing existing operational procedures. A good example of reusing operational procedures is the use of storage-based snaps and clones: many Oracle DBAs depend upon automated snaps and clones of production databases for off-loading backups and creating test environments. To use one more example, the ability to create a consistent clone of a mission-critical application means the business can recover the databases, applications and integrated 3rd-party systems faster, as everything is consistent, meaning at the same point in time.

 

In a recent study published by IOUG and produced by Unisphere Research, the top customer challenge was, “As demand for IT services and data volumes grow, so do the challenges with managing databases. Diagnosing ongoing performance issues remains the topmost reported challenge, while lifecycle changes are starting to be dealt with through automation. Overall data environments are not consolidated – enterprises are running many separate databases for applications.” This is an important statement, as customers are communicating that they have to manage performance across many separate databases, indicating a need for a high-performance open architecture to support their heterogeneous database and application environments. Following up on the theme, the study mentions, “The largest segment of respondents, 43%, report they have many separate and individual databases for each of their critical business applications.” This is not surprising, as many critical business applications have dedicated infrastructure for deterministic performance and for prevention of the extraneous problems sometimes linked to consolidated environments. Virtualization has been a disruptive technology wave, as it enables the business to dedicate infrastructure (limits, reservations) and isolate applications (virtual machines) to achieve consolidation. The Vblock SSHPD delivers a transformational path by supporting heterogeneous physical database environments and migration to virtualization on the same open consolidation infrastructure.

 

In the top customer challenge quoted above, “diagnosing ongoing performance issues remains the topmost reported challenge,” but for more context the study says, “Diagnosing performance and identifying SQL statements that may be taxing systems remains an ongoing challenge.” It is very true that tuning SQL can significantly improve overall system performance. I have been in consulting engagements in which SQL tuning improved overall system performance by 30%, and in others in which tuning stabilized database performance; customers considered both outcomes accomplishments. The advantages of SQL tuning include retaining the investment in existing infrastructure or gaining significant improvements when upgrading to new infrastructure. Unfortunately, there are situations in which SQL tuning might not be an option: too few developers and too much SQL, no time and too much SQL, and/or a 3rd-party application that prohibits modifying SQL except through approved patches. High-performance open infrastructures can significantly improve performance for a diverse portfolio of databases and applications. Exploring the Vblock SSHPD, one of the key performance components is the XtremSF (Server Flash) card together with XtremSW (Server softWare) Cache. The XtremSF card fits into a PCIe slot in the server and works with the XtremSW Cache to buffer reads. Writes are not cached on the card; they pass through to the storage array to ensure data integrity. The XtremSF cards have two performance characteristics: capacity and type of memory. Some of the cards available for servers are 350 GB SLC, 700 GB SLC, 550 GB MLC, 700 GB MLC, 1.4 TB MLC and 2.2 TB MLC. Using Wikipedia to define two terms:

 

SLC or Single-level Cell stores data in individual memory cells. Traditionally, each cell had two possible states, so one bit of data was stored in each cell in so-called single-level cells. SLC memory has the advantage of faster wire speeds, lower power consumption and higher cell endurance. However, because it stores less data per cell, it costs more per megabyte of storage. [2]

 

MLC or Multi-level Cell is a technology using multiple levels per cell to allow more bits to be stored using the same number of transistors. The primary benefit of MLC flash memory is its lower cost per unit of storage due to the higher data density. [2]

 

I summarized the SLC and MLC explanations and encourage you to explore the differences using the Wikipedia link below. The table pictured below is from the “EMC VSPEX with EMC XtremSF and EMC XtremSW Cache Design Guide” and shows the differences in memory capacity and latency between MLC and SLC cards. The abbreviation eMLC means “enterprise MLC” and indicates the memory is designed for low error rates. Using XtremSF cards in servers with Oracle databases means the DBA can accelerate a great deal of reads (capacity) at very low latency (microseconds).

 

Table 3. Performance characteristics of selected XtremSF cards [3]

flashtable.png

 

A microsecond, equal to one millionth of a second, is a thousand times shorter than a millisecond, equal to one thousandth of a second, which is the latency typically offered by storage arrays. Depending upon the capacity of the XtremSF card, the majority of read IOPS will be satisfied by the combination of the database buffer cache and the memory on the PCIe card(s). Not to be understated, all database reads fulfilled by the XtremSF card will have microsecond latencies. Recent advances in XtremSW Cache mean these cards will work with Oracle RAC.[4] While not an Oracle RAC example, this quote from a recent case study shows the extreme performance databases gain with this technology: “The addition of XtremSW Cache deployed to the SQL servers (half of our test servers) provided a total of more than 950,000 total IOPS, an increase of 30% over the same configuration without XtremSW Cache.” [5] Other findings in the same study:

 

  • “XtremSW Cache reduced the storage processor utilization from 98% without XtremSW Cache to 84% with XtremSW Cache while providing the increased IOPS performance noted above” [5] This is good for the business and makes the DBA a team player as the XtremSW Cache offloads work from the storage processor.
  • “SQL Server average transaction latency was reduced from 1.26 milliseconds (1260 microseconds) without XtremSW Cache to 0.44 milliseconds (440 μs) with XtremSW Cache deployed in the SQL Server hosts – a 65% improvement” [5] As DBAs, performance is what we are most interested in! (A quick check of that 65% figure follows this list.) Having sub-millisecond latency for our physical reads can mean reducing our control file sequential read, db file sequential read and similar system I/O wait events in our Automatic Workload Repository (AWR) reports.
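A quick check of the 65% figure quoted above, using only the two latencies in the quote:

  (1.26 ms - 0.44 ms) / 1.26 ms ≈ 0.65, i.e. the quoted 65% reduction, or equivalently about a 2.9X improvement in average transaction latency.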

 

Recent proven solutions and case studies have been communicating a performance trend of combining acceleration technologies. The Vblock SSHPD combines the XtremSF cards with flash in the storage array to accelerate both physical reads and writes. As noted above, writes pass through the XtremSW Cache to the array to preserve data integrity; however, within the EMC VNX8000, writes can be immediately acknowledged by the RAM (4 x Intel Xeon E5-2600 8-core 2.7 GHz / 256 GB) [6] in the storage array, and if FAST Cache is used, up to 4.2 TB [6] can be supported. Writes to the database will typically range in latency between 1 and 4 ms, and with the large amount of both physical RAM and FAST Cache, a large volume of writes can be supported. In the picture below we see that the throughput for an Oracle Data Warehouse (DW) only configuration is very good at 11,074 MBPS. As we have been discussing, XtremSW Cache does not accelerate writes (they pass through to the array), so there is only a minor performance gain from the cache in this configuration. [5]

 

demartek.png

 

On a side note, the Demartek case study used only one EMC XtremSF card per server; however, multiple cards are supported. For example, two 2.2 TB XtremSF cards (4+ TB of read cache) might, depending upon your data warehouse, accelerate those very large read operations. The Vblock for High Performance Databases is a performance-driven infrastructure based on open standards that supports heterogeneous applications at extreme speeds and feeds.

 

I hope you enjoyed this blog and found some of the information useful. I’m including additional links to make finding the information you need easy.

 

[1] IOUG Study “FROM DATABASE CLOUDS TO BIG DATA, 2013 IOUG Survey on Database Manageability” by Joseph McKendrick, Research Analyst. Produced by Unisphere Research.

 

[2] Wikipedia: article entitled, “Multi-level cell

 

[3] EMC VSPEX with EMC XtremSF and XtremSW Cache

 

[4] Press Release “New EMC VNX Shatters the Definition and Economics of Midrange Storage

 

[5] Demartek “Evaluation of EMC VNX8000 and EMC XtremSW Cache”, September 2013

 

[6] VNX8000 Specifications

 

Other supporting links:

In this blog I’m going to highlight a session at Oracle OpenWorld 2013. Everyone attending session CON10911, Oracle Database 12c Unplugged: Rapid Provisioning and Cloning of Oracle Databases, will get an insider’s view into the collaboration between Oracle and EMC. You might be interested to know that Jeff is co-presenting with Margaret Susairaj of Oracle on how EMC is integrating to off-load Pluggable Database (PDB) snaps to the VNX storage arrays for very fast clones. Oracle 12c gives the DBA more control in provisioning databases, as they now have the capability to very quickly create databases using simple SQL statements.[1] See the picture below from Jeff Browning’s presentation for an idea of how this is going to work.

Slide20.PNG

Without integration between EMC and Oracle, the process of creating PDBs would be host based, meaning the copy process would be managed from the server operating system. When the copy process is managed from the server, extra load is created over the network or HBAs. This challenge is not new to Oracle DBAs and storage administrators, who understand it could take hours to copy large databases. For many years storage administrators have used array-based technology to create snaps and clones of databases, as this functionality enables quick copies with little to no impact on the source database. The integration between 12c and EMC VNX storage arrays opens storage-based functionality to the Oracle DBA, who can use the same technology to quickly clone a PDB from a source PDB or from the seed. This is so cool: the Oracle DBA, using SQL, is off-loading the copy to the EMC storage array the same way the storage administrator did prior to 12c. The DBA is now empowered to create databases using the same storage snap and clone capabilities as the storage administrators.
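Here is a minimal sketch of what that looks like from the DBA's chair. The PDB names and paths are hypothetical, whether the copy is actually off-loaded to the array depends on the storage integration described above, and in 12.1 the source PDB typically needs to be opened read-only before cloning:

  -- Clone an existing PDB using a storage-level snapshot instead of a host-based copy
  CREATE PLUGGABLE DATABASE devpdb FROM srcpdb
    FILE_NAME_CONVERT = ('/oradata/srcpdb/', '/oradata/devpdb/')
    SNAPSHOT COPY;

  ALTER PLUGGABLE DATABASE devpdb OPEN;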

 

I hope you join us for this session and learn how EMC and Oracle are working together on 12c.

 

[1] Oracle® Database Administrator's Guide 12c Release 1 (12.1) E17636-18: http://docs.oracle.com/cd/E16655_01/server.121/e17636/cdb_plug.htm#ADMIN13556
