This week I handled an issue related to the configuration of an Oracle standalone DR database server. The DR database was replicated from a two-node RAC database. In this blog, I will first introduce EMC RecoverPoint and then walk through the Oracle configuration changes that need to be made after the RecoverPoint replication to bring up an Oracle DR box.
EMC RecoverPoint is an enterprise-scale solution designed to protect application data on heterogeneous SAN-attached servers and storage arrays. RecoverPoint runs on a dedicated appliance and combines industry-leading continuous data protection technology with a bandwidth-efficient, no-data-loss replication technology, allowing it to protect data both locally and remotely.
Innovative data-change journaling and application integration capabilities enable organizations to address their pressing business, operations, and regulatory data protection concerns. Organizations that implement RecoverPoint see dramatic improvements in application protection and recovery times compared with traditional host and array snapshots or disk-to-tape backup products.
RecoverPoint provides the following benefits:
- Provides continuous remote replication (CRR) with block-level remote replication between LUNs in two different SANs using technology that journals groups of writes for recovery to a significant point in time.
- Provides the ability to map RTOs and RPOs to policies assigned to consistency groups (CGs).
- Allows a granular and flexible approach to assigning different levels of priority to the data being replicated.
- Is a formidable solution to protect the environment. RecoverPoint protects and supports replication of data that applications are writing to local SAN-attached storage.
- Uses existing FC infrastructure to integrate seamlessly with existing host applications and SATA storage subsystems. For long distances, RecoverPoint uses either FC for metropolitan area networks (MANs) or IP for wide area networks (WANs) to send data.
RecoverPoint provides local and remote replication with any-point-in-time recovery using CDP, CRR, or CLR technology. EMC RecoverPoint provides DVR-like point-in-time recovery with three topologies:
- Local continuous data protection (CDP)
- Synchronous or asynchronous continuous remote replication (CRR)
- A combination of both (CLR)
RecoverPoint is the offering that simplifies continuous data protection and replication by using EMC Symmetrix VMAX with an Enginuity-based write splitter. RecoverPoint is an appliance-based, out-of-band data protection solution designed to ensure the integrity of production data at local and/or remote sites. It enables customers to centralize and simplify their data protection management, and allows for the recovery of data to nearly any point in time.

RecoverPoint provides continuous replication of every write between a pair of local volumes residing on one or more arrays, and remote replication between pairs of volumes residing at two different sites. For local replication and remote synchronous replication, every write is collected, written to a local and remote journal, and then distributed to the target volumes. The figure below depicts the RecoverPoint configuration for local and remote replication.
RecoverPoint appliance (RPA)
The RPA is a server that runs RecoverPoint software and includes four 4 Gb FC connections and two 1 Gigabit Ethernet connections. For fault tolerance, a minimum of two RPAs are needed per site; this can be extended up to eight RPAs. RPAs are connected to the SAN for receiving split writes and updating the journal volumes. RPA ports are zoned to the same Symmetrix VMAX front-end adapters (FAs) that are zoned to the production host, which gives the RPAs access to all writes originating from the production host.
Symmetrix VMAX write splitter for RecoverPoint
Symmetrix VMAX write splitter for RecoverPoint is an enhanced implementation of Open Replicator that sends all incoming host writes from the VMAX array to a local RPA cluster for use in CDP local replication, CRR-based remote replication, or CLR, which is a combination of CDP and CRR.
RecoverPoint source volumes
RecoverPoint source volumes are the production volumes that are protected using RecoverPoint.
RecoverPoint replica volumes
RecoverPoint replica volumes are the target RecoverPoint volumes, on any heterogeneous storage array, containing a full copy of the production volumes. The replica volumes are normally write-disabled, but through its image-access functionality RecoverPoint enables direct read/write access to a replica volume from a secondary or standby host, allowing easy access to data at any point in time in conjunction with the available journal. This any-point-in-time image of the production data can be used for test/development systems, reporting, backup, and many other use cases. Optional features include the ability to swap the roles of the secondary (or standby) and primary hosts and to reverse the direction of replication.
RecoverPoint journal volumes
RecoverPoint journals store block-level changes to the source volumes and they are used in conjunction with the replica volumes to enable any-point-in-time recovery. RecoverPoint journal volumes are the Symmetrix devices visible only to the RPA cluster. Because all writes are journaled, the size of the journal depends on the desired period of protection and change rate at the production site.
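As a rough illustration of how the protection window and change rate drive journal sizing, the required capacity is approximately the production write rate multiplied by the desired rollback window, plus some headroom. All of the numbers below (5 MB/s change rate, a 24-hour window, 20% headroom) are hypothetical values for the sake of the example, not EMC sizing guidance:

```shell
#!/bin/sh
# Hypothetical journal-sizing sketch: capacity = change rate x protection window + headroom.
CHANGE_RATE_MB_S=5        # average production write rate in MB/s (assumption)
PROTECTION_WINDOW_H=24    # desired any-point-in-time rollback window in hours (assumption)
HEADROOM_PCT=20           # safety margin for bursts and metadata (assumption)

# Total data written during the protection window
JOURNAL_MB=$(( CHANGE_RATE_MB_S * 3600 * PROTECTION_WINDOW_H ))
# Add the headroom percentage on top
JOURNAL_MB=$(( JOURNAL_MB + JOURNAL_MB * HEADROOM_PCT / 100 ))

echo "Journal capacity needed: ${JOURNAL_MB} MB"
```

With these sample inputs the sketch yields 518400 MB (roughly 507 GB); in practice the change rate should be measured at the production site before sizing the journal devices.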
RecoverPoint repository volumes
Repository volumes are very small devices visible to the RPA cluster. They store management information required for RecoverPoint replication operations.
RecoverPoint Consistency Groups
RecoverPoint consistency groups allow the creation of a write-order-consistent copy of a set of production volumes. A consistency group can be disabled at any time for maintenance operations on production volumes, and RecoverPoint will resynchronize the replica volumes once the consistency group is re-enabled.

A consistency group consists of one or more replication sets, each of which consists of a production volume and the replica volumes to which it is replicating. The consistency group ensures that updates to the replicas are always consistent and in correct write order; that is, the replicas can always be used as a working set of data, or to restore the production source in case it is damaged or destroyed. The consistency group monitors all the volumes added to it to ensure consistency and write-order fidelity. If two data sets are dependent on one another (for example, a database and a database log), they must be included in the same consistency group.

A RecoverPoint consistency group consists of settings and policies, a replication set, and journals that receive changes to data. For the purposes of this project, a RecoverPoint consistency group was created that contained 16 replication sets. Below is a sample diagram of RecoverPoint replication sets.
The diagram below depicts the replication that happens between the primary and the DR box via RecoverPoint.
We will now discuss the steps required to configure the Oracle non-RAC DR database once replication completes and the database needs to be started again. The steps below apply to both ASM and non-ASM storage configurations of the database.
- Log into the primary DB box and generate the pfile from the spfile.
- Transfer the pfile from the primary to the DR box using a transfer utility such as scp (on Unix).
- Set *.cluster_database=false.
- Change the instance alias to an appropriate name.
- Place the pfile in the appropriate location.
- Start the grid stack (crsctl start has on a standalone server) and verify its status with crs_stat -all.
- Start the listener for both the DB and the grid (if not already started by the previous step).
- Check if the ASM instance is up and mounted.
- If ASM is up, then start the DB using the modified pfile from above.
- If the database needs to operate as a DR standby, copy the standby control file from the primary and start the DB in mount state.
- Put the DR DB into managed recovery mode.
- Check that the archived redo logs are in sync between the primary and the DR.
- If the steps above are okay and the DR still does not start or apply logs automatically, further troubleshooting is required based on the alert log file of the DR DB.
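The steps above can be sketched as a command sequence. This is a minimal illustration only: the instance name ORCLDR, the host name drhost, and the file paths are placeholders, and the exact parameters to strip from the RAC pfile vary per environment.

```
-- On the primary, in SQL*Plus: generate a pfile from the spfile
CREATE PFILE='/tmp/initORCLDR.ora' FROM SPFILE;

# Copy it to the DR box (host and paths are placeholders)
scp /tmp/initORCLDR.ora oracle@drhost:$ORACLE_HOME/dbs/

# Edit the pfile on the DR box for a single instance:
#   set *.cluster_database=false
#   remove instance-specific thread/instance_number entries as appropriate

# Start the standalone grid stack, check ASM, and start the listener
crsctl start has
crs_stat -t            # verify the ASM instance and listener resources are ONLINE
srvctl start listener

-- In SQL*Plus on the DR box: mount the standby and enable managed recovery
STARTUP MOUNT PFILE='?/dbs/initORCLDR.ora';
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Compare the latest archived log sequence on each side to confirm sync
SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG;
```

If the sequence numbers diverge or managed recovery stalls, the alert log of the DR DB is the first place to look, as noted above.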