
Enterprises today need to create a competitive advantage for their IT infrastructure by placing increasing value on both the location and the availability of their Oracle infrastructure. The data needs to be highly available, in the right place, at the right time, and at the right cost to the enterprise.

 

As part of an overall Oracle data movement process, you can use the many migration tools EMC VMAX offers, including but not limited to SRDF, PowerPath Migration Enabler (PPME), and Open Replicator.

 

SRDF/Data Mobility (DM) permits operation in SRDF adaptive copy mode only and is designed for data replication or migration between two or more VMAX3 arrays. SRDF/DM transfers data from primary volumes to secondary volumes, permitting information to be shared, content to be distributed, and access to be local to additional processing environments. Adaptive copy mode enables applications using the primary volumes to avoid propagation delays while data is transferred to the remote site. SRDF/DM can be used for local or remote transfers.
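
As a rough illustration, here is a minimal SYMCLI sketch of an SRDF/DM-style transfer, assuming an SRDF device group named ora_dg has already been created and paired; option names and behavior vary by Solutions Enabler version:

    # Put the SRDF pairs in the group into adaptive copy disk mode
    symrdf -g ora_dg set mode acp_disk

    # Start the bulk copy from the R1 (source) to the R2 (target) devices
    symrdf -g ora_dg establish -full

    # Monitor the progress of the transfer
    symrdf -g ora_dg query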

 

 

PowerPath Migration Enabler (PPME) is a host-based migration product that lets you migrate data between storage systems. It works in conjunction with underlying technologies, such as Open Replicator, TimeFinder/Clone, and Host Copy. To use PPME, you'll need to install a licensed copy of PowerPath Multipathing on the host machine.
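
For reference, here is a hedged sketch of a PPME migration driven by the powermig CLI, assuming source and target PowerPath pseudo devices emcpowera and emcpowerb and Open Replicator as the underlying technology; subcommand and option names should be verified against the PPME documentation for your release:

    # Set up the migration session; setup returns a handle used below
    powermig setup -techType OR -src /dev/emcpowera -tgt /dev/emcpowerb

    # Start copying data to the target, then check progress
    powermig sync -handle 1
    powermig query -handle 1

    # Switch I/O to the target, commit, and clean up the session
    powermig selectTarget -handle 1
    powermig commit -handle 1
    powermig cleanup -handle 1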

 

 

One rapidly growing data migration demand is to perform a disk-to-disk copy from third-party arrays to EMC VMAX3, giving the database better availability, performance, and manageability. EMC Solutions Enabler Open Replicator software (ORS) can be used to perform this disk-to-disk copy from any of the third-party arrays listed in the EMC Support Matrix to EMC storage arrays.

 

In VMAX3, ORS has undergone an architectural change to be sensitive to host I/O, which results in minimal impact on overall response time. ORS has also been restructured to be more flexible and dynamic in its port assignments. This new underlying support has resulted in changes to Solutions Enabler (SE) ORS functionality in several areas.

 

ORS enables remote point-in-time copies to be used for data mobility, remote vaulting, and migration between VMAX3 arrays and qualified storage arrays with full or incremental copy capabilities.  ORS can:

 

  • Pull from source volumes on qualified remote arrays to a VMAX volume
  • Perform online data migrations from qualified storage to VMAX with minimal disruption to host applications

 

Oracle Database migration solution from HP 3PAR to EMC VMAX3

 

This solution uses an HP 3PAR E200 array as the source for ORS. An Oracle database residing on the 3PAR array can be transferred to VMAX3 using the latest ORS, which moves the data dynamically between 3PAR and VMAX3 using its pull capability.

 

 

Open Replicator Hot Pull migration from HP 3PAR to VMAX3 steps:

 

 

Step 1:  Create VMAX target LUNs

Step 2:  Create VMAX device file

Step 3:  Create ORS hot pull sessions with donor update

Step 4:  Connect to the Oracle database on source server and create a table in user schema

Step 5:  Stop Oracle database and ASM instance on source server

Step 6:  Activate ORS sessions and query sessions

Step 7:  Bring up the Oracle database on the target server

Step 8:  Connect to the Oracle database on target server and check the test table data

Step 10:  Turn off donor update using the -consistent option

Step 10:  Terminate ORS session

Step 11:  Bring up the Oracle database on the source server
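
Below is a hedged, command-level sketch of these steps using Solutions Enabler SYMCLI and SQL*Plus. The device pair file format, symrcopy options, array/device IDs, WWNs, and the schema and host names are illustrative assumptions; check the Solutions Enabler Open Replicator documentation for the exact syntax in your release.

    # Step 2: build the device file pairing each VMAX3 target device with
    # its 3PAR source LUN (identified here by WWN) -- format is illustrative
    echo "symdev=000196700123:00AB wwn=50002AC000123456"  > ors_pull.txt
    echo "symdev=000196700123:00AC wwn=50002AC000123457" >> ors_pull.txt

    # Step 3: create hot pull sessions with donor update enabled
    symrcopy create -file ors_pull.txt -copy -pull -hot -donor_update -name ora_mig

    # Steps 4 and 5: on the source server, create a marker table, then
    # stop the database and the ASM instance
    echo "create table scott.mig_check as select sysdate stamp from dual;" | sqlplus -s "/ as sysdba"
    echo "shutdown immediate" | sqlplus -s "/ as sysdba"
    srvctl stop asm -n source_host

    # Step 6: activate the sessions and watch the copy progress
    symrcopy activate -file ors_pull.txt
    symrcopy query -file ors_pull.txt

    # Steps 7 and 8: bring the database up on the target server and verify
    echo "startup" | sqlplus -s "/ as sysdba"
    echo "select * from scott.mig_check;" | sqlplus -s "/ as sysdba"

    # Steps 9 and 10: turn off donor update consistently, then terminate
    symrcopy set donor_update off -file ors_pull.txt -consistent
    symrcopy terminate -file ors_pull.txt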

 

This migration completes without interruption or manual intervention from the customer’s application perspective.  All commands were issued using Solutions Enabler ORS.  This type of data relocation offers a significant advantage in 24x7 production-level environments.

 

For more detailed information on this solution, please refer to:

EMC Open Replicator Migration from HP 3PAR to EMC VMAX3 Using Oracle Database

In the past I was kind of depending on the service times reported for db file parallel write waits via AWR reports. The reason for looking at db file parallel write service times was to help determine whether there were write I/O service time issues.

 

I brought up looking at db file parallel write times in a conversation with Kevin Closson, and he gave me his “Two Cents Worth” on the topic and suggested I read Frits Hoogland’s blog, accessible via: https://fritshoogland.wordpress.com/2013/09/06/oracle-io-on-linux-database-writer-io-and-wait-events/

 

So I read Frits’ blog, which was nicely written!


The bottom line, from Frits’ blog, for sync I/O: basically, db file parallel write waits are not timed at all, and the I/Os are executed sequentially.

 

The bottom line, from Frits’ blog, for async I/O: db file parallel write waits show the response time for a minimal number of I/O requests reaped from the I/O completion queue. For example, via the io_submit system call, 32 async write I/Os are submitted. Via the io_getevents system call, the database writer then tries to reap at least a minimum number of I/O events (e.g., 2) from the completion queue, not necessarily all 32 write requests. So if it reaps completion times for only 2 of the write I/Os, what is the value of the I/O timing provided? Well, really none.
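
If you want to see this behavior for yourself, one rough approach (a sketch only, assuming Linux, async I/O in use, a test system you can safely attach a tracer to, and the standard ora_dbw0_<SID> process name for the first database writer) is to trace the database writer’s async I/O system calls:

    # Attach to the first database writer and trace only io_submit/io_getevents;
    # -T shows the time spent inside each system call
    strace -T -e trace=io_submit,io_getevents -p "$(pgrep -f ora_dbw0_ | head -1)"

In the output you can compare the number of I/Os handed to each io_submit call with the min_nr argument of the following io_getevents call and the number of events it actually returns.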

 

Yet another lesson learned.

A recent conversation with an Oracle ERP customer led to some interesting use-cases for EMC XtremIO.

 

When running Oracle databases on XtremIO it is typical to see 2:1 database compression, so, for example, a 10 TB database sitting on XtremIO will only require 5 TB of storage. Slightly higher and slightly lower ratios have been reported.

 

There is about a 25% storage overhead with XtremIO, used for XtremIO’s XDP RAID (8%), metadata destaging, and vault space, leaving roughly 75% usable capacity. That is certainly much lower than the 67% overhead of Exadata’s three-way mirroring, which leaves only 33% usable capacity.

 

Plus, a snapshot of a source database taken on the same XtremIO cluster is initially just a set of in-memory metadata pointers requiring no storage space; it only consumes storage as deltas are generated against the read/write snapshot. (OK, the metadata has to be stored somewhere, and it is part of the 25% overhead mentioned above.)

 

So this customer is facing two challenges with their 15 TB Oracle R12 ERP environment.

 

  1. They are in the middle of an upgrade to the system, integrating new business units after some M&A activity.
  2. They are looking for a more robust DR strategy.

 

The upgrade is generating the multiple environments for dev/test/QA/patch/training typical of ERP upgrades and implementations. But note that with a 15 TB source database that is three-way mirrored on Exadata, each environment consumes 45 TB of storage. Not pretty.

 

What is proposed is to deploy a few appropriately sized commodity servers as their Oracle RAC hosts and run a Data Guard physical standby residing on XtremIO, fed from their prod R12 ERP residing on Exadata. They immediately reduce the footprint from the 15 TB (45 TB mirrored) that another Exadata appliance for DR would have consumed to about 7.5 TB with XtremIO’s compression. But here is the kicker: once on XtremIO, snapshots can be taken off the Data Guard standby to provision the dev/test/QA/patch/training environments requested by the implementation team. And they are “free” from a storage footprint perspective, except for the deltas generated on the read/write snapshots and some additional metadata. But even the deltas and metadata are compressed. Pretty.
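
As a rough sketch of the snapshot-based provisioning flow (the XtremIO snapshot step is left as a placeholder because the exact XMS CLI/REST syntax depends on the XtremIO release, and the Data Guard commands assume a physical standby administered with SQL*Plus):

    # Briefly pause redo apply on the Data Guard physical standby
    echo "alter database recover managed standby database cancel;" | sqlplus -s "/ as sysdba"

    # Take an XtremIO snapshot of the standby's volumes/consistency group
    # (placeholder -- use the XMS CLI or REST call appropriate to your release)

    # Resume redo apply on the standby
    echo "alter database recover managed standby database disconnect from session;" | sqlplus -s "/ as sysdba"

The snapshot volumes can then be presented to the dev/test hosts and the cloned database opened there, consuming space only for the deltas it generates.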

 

And should they need to fail over to DR, with an IOPS sizing and design exercise based on their prod implementation, they are assured the XtremIO storage subsystem will support the IOPS required by production usage. Pretty Polly.
