
The newly announced VMAX3 delivers more performance, simplifies storage management, and is the optimum route to the hybrid cloud. In today’s Redefine Possible mega event you might have heard about the new capabilities of the VMAX3, but perhaps you didn’t catch the part about Oracle application awareness. In the interest of full disclosure, I missed the part about how, using the VMAX3, BOTH the Oracle DBA and the storage administrator can monitor and tune database performance. That means it’s time to blog!


Driving Awareness

At the time of writing this blog it might be a challenge to find information on VMAX3 application awareness, with the exception of this six-minute video: Application Awareness Demo. In the video we learn that there is a “dedicated URL for DBAs” to access DBclassify in Unisphere 8.0. This means the DBA can independently access the DBclassify web page without having to contact the storage administrator and can gain immediate insight into database storage performance.


Picture 1: DBclassify Dashboard


In the picture above we see the DBclassify dashboard and several statistics: IO Wait vs. Non IO Wait, average active sessions wait, response time, IOPS, and throughput. The solid lines in the graph denote performance as reported by the database and the dashed lines show storage performance. In this way it is very easy to investigate physical reads, writes, and redo writes and see the delta between database and storage metrics. This eliminates the blame storms that sometimes occur between the database and storage administrators.


Analytics

Clicking on the ‘Analytics’ tab up top brings the DBA to a management page that shows IO Wait over time and which database objects were actively used during that time. This gives the DBA the capability to investigate historical performance and find which database objects were in use during high IO wait times.


Picture 2: DBclassify Analytics



Looking to the right in the list of database objects you will see a bar that indicates the type of activity for the object: random read, sequential read, direct I/O, system I/O, commit I/O, and other I/O. This is important because moving database objects to enterprise flash drives is best for objects that are weighted towards random reads. For example, given a choice between an object with mostly random reads (purple color) and another object with direct I/O (green color), the best opportunity to improve performance is with the object that has the purple bar.


Picture 3: DBclassify Hinting


Sometimes it’s not what you hear that matters but what you see. This picture was taken at approximately 3 minutes and 24 seconds into the video and the objects selected are very important: all three objects show a healthy amount of random reads. The selected objects then become part of a hinting wizard in which the DBA can move the objects to the flash drives.

 

Picture 4: DBclassify Hinting Wizard


In the hinting wizard the DBA can:

  • Assign a name to the hint: for example, “Billing” to identify objects related to billing activity
  • Priority: hint priority represents how important the objects are
    • 1: Top
    • 2: Very high
    • 3: High
    • 4: Above average
  • Hint scheduling
    • One time: a one-time promotion of database objects
    • Ongoing: keep the database objects on the faster tier of storage
    • Recurrence: schedule a time to promote database objects to accelerate cyclical workloads


Once a hint has been created the DBA can then monitor its effectiveness. There is also a hint management tab that shows all the hints created (not shown) and allows the DBA to enable, disable, edit, and remove hints. As you can see, using the hinting wizard the DBA can improve database and application performance at a very granular level by selecting database objects to be promoted to flash drives. EMC is truly enabling the Oracle DBA to use VMAX storage arrays!

 

Stay tuned, more is coming!

In my last blog I mentioned some of the best features and benefits of deploying and running Oracle on an XtremIO array. In this blog, I would like to discuss some best practices that can be implemented to exploit those features and benefits.

 

Tuning I/O Block Size, DB_FILE_MULTIBLOCK_READ_COUNT

 

Oracle’s default block size of 8k works just fine with XtremIO. This setting provides a great balance between IOPS and bandwidth, but it can be improved on under the right conditions. If the data rows fit into a 4k block size, one can see an IOPS improvement of over 20% by using a 4KB request size. If the rows don’t fit nicely in 4k blocks, it is better to stick with the default setting. For data files, I/O requests will be a multiple of the database block size: 4k, 8k, 16k, etc. If the starting addressable sector is aligned to a 4k boundary, the optimal condition is met.
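For the cases where rows do fit in 4k, selected tablespaces can use a nonstandard block size. A minimal sketch, with a hypothetical tablespace name and the +DATA disk group; note that a 4k buffer cache must be configured before a 4k tablespace can be created:

  -- Reserve a buffer cache for 4k blocks (the size shown is illustrative).
  ALTER SYSTEM SET db_4k_cache_size = 256M SCOPE=BOTH;
  -- Create a tablespace whose rows fit the 4KB request size.
  CREATE TABLESPACE orders_4k DATAFILE '+DATA' SIZE 10G BLOCKSIZE 4096;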

 

The default block size for Oracle redo logs is 512 bytes, and I/O requests to the redo log are a multiple of the block size. Redo entries encapsulated in large-block I/O requests are therefore very likely not to start and end on a 4k-aligned boundary, resulting in extra computational work and I/O sub-routines on the XtremIO back end. To avoid this extra work on the array back end, set the redo log block size to 4k. In order to create redo logs with a non-default block size you’ll need to add the option “_disk_sector_size_override=TRUE” to the parameter file of the database instance. It is also recommended to create a separate, stand-alone disk group for data files.
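Putting that together, a hedged sketch of moving redo onto 4k blocks; the group numbers, sizes, and the +REDO disk group name are illustrative:

  -- Allow a non-default redo block size; requires an instance restart.
  ALTER SYSTEM SET "_disk_sector_size_override" = TRUE SCOPE=SPFILE;
  -- After the restart, add 4k-block redo groups (then drop the 512-byte ones).
  ALTER DATABASE ADD LOGFILE GROUP 4 ('+REDO') SIZE 1G BLOCKSIZE 4096;
  ALTER DATABASE ADD LOGFILE GROUP 5 ('+REDO') SIZE 1G BLOCKSIZE 4096;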

 

Oracle controls the maximum number of blocks read in one I/O operation during a full scan using the DB_FILE_MULTIBLOCK_READ_COUNT parameter. This parameter is specified in blocks, and its default generally works out to 1MB of I/O. Generally we set this value to the maximum effective I/O block size divided by the database block size. If there are a lot of tables with a parallel degree set, we may want to drop the effective I/O size to 64k or 128k. If we are running with the default block size of 8k, DB_FILE_MULTIBLOCK_READ_COUNT will then need to be 8 or 16.
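As a worked example of that arithmetic, a 128k effective I/O size with the default 8k block size gives 128 / 8 = 16 (a sketch):

  -- 128KB effective I/O size / 8KB database block size = 16 blocks
  ALTER SYSTEM SET db_file_multiblock_read_count = 16 SCOPE=BOTH;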

 

 

During performance benchmarking, the XtremIO array has proven capable of over 200K fully random read IOPS in SQL-driven OLTP benchmarks against a single X-Brick, with the application reporting sub-millisecond latency. During bandwidth testing, again SQL-driven, the XtremIO array sustained 2.5 GB/s against a single X-Brick. As more hosts were added to the mix and tested against an expanded XtremIO array with two X-Bricks, performance doubled: over 400K IOPS during the OLTP testing and over 5 GB/s in the bandwidth test.

 

Arguably, the 8k DB block size is ideal for most workloads. It strikes a very good balance between IOPS and bandwidth. However, a very strong case can be made for 4k in extreme circumstances where rows fit nicely in a 4k block size, the buffer cache is not effective due to the randomness of the application access, and the speed of the storage becomes the most determining factor for a successful deployment. When using a 4KB request size, the XtremIO array can service approximately 20-30% more I/Os per second than with 8KB requests.

 

ASM Disk Group Layout and ASM Disks per Disk Group

 

Oracle recommends separating disk groups into three parts: Data, FRA/Redo, and System. Due to the nature of redo, a disk group can be dedicated to it. While the XtremIO array will perform well using a single LUN in a single disk group, it is better to use multi-threading and parallelism to maximize performance for our database. The best practice is to use 4 LUNs for the Data disk group; this allows the hosts and applications to use simultaneous threads at various queuing points to extract the maximum performance from the XtremIO array. That means the RAC system will have 4 LUNs dedicated to control files and data files; 1 for redo; 1 for archive logs, flashback logs, and RMAN backups; and one for your system files. The number of disk groups should be 10 or less for optimum performance.
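As a concrete illustration, such a Data disk group could be created from the ASM instance as follows. This is a minimal sketch: the disk paths and disk group name are hypothetical, and external redundancy is chosen because XtremIO's XDP already protects the drives:

  CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
    DISK '/dev/oracleasm/disks/DATA1',
         '/dev/oracleasm/disks/DATA2',
         '/dev/oracleasm/disks/DATA3',
         '/dev/oracleasm/disks/DATA4'
    ATTRIBUTE 'au_size' = '1M', 'compatible.asm' = '11.2';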

 

Modify /etc/sysconfig/oracleasm

 

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=dba

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dm"

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
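Once the configuration is saved, the ASMLIB driver can be initialized and the XtremIO LUNs labeled for ASM. A minimal sketch; the multipath device names here are hypothetical:

# /usr/sbin/oracleasm init
# /usr/sbin/oracleasm createdisk DATA1 /dev/mapper/mpatha
# /usr/sbin/oracleasm createdisk DATA2 /dev/mapper/mpathb
# /usr/sbin/oracleasm scandisks
# /usr/sbin/oracleasm listdisks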

 

Allocation Unit Size

 

The default AU size (1 MB) for coarse-grained striping and 128KB for fine-grained striping work well on the XtremIO array for the various database files. There is no need to modify the striping recommendations provided by the default templates for the various Oracle DBMS file types.

 

File Type     Striping
CONTROLFILE   FINE
DATAFILE      COARSE
ONLINELOG     FINE
ARCHIVELOG    COARSE
TEMPFILE      COARSE
PARAMETER     COARSE
FLASHBACK     FINE
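If you want to confirm these defaults in your own environment, the template settings are visible from the ASM instance (a quick check; V$ASM_TEMPLATE is a standard view):

  SELECT name, stripe FROM v$asm_template ORDER BY name;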

 

In order for ASM disk groups with different sector size attribute values (512, 4096) to be mounted by the same ASM instance, the parameter “_disk_sector_size_override=TRUE” has to be set in the parameter file of the ASM instance. Consider setting ORACLEASM_USE_LOGICAL_BLOCK_SIZE=true in /etc/sysconfig/oracleasm; this sets the logical block size to what is reported by the disk, which is 512 bytes.
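On the ASM instance, that parameter can be set with a one-liner (a sketch; the instance must be restarted for it to take effect):

  ALTER SYSTEM SET "_disk_sector_size_override" = TRUE SCOPE=SPFILE SID='*';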

 

The minimum I/O request size for database files residing in an ASM disk group is dictated by the sector size (an ASM disk group attribute). For ease of deployment, the recommendation is to keep the logical sector size at 512 bytes to ensure that the minimum I/O block size can be met for all types of database files.

 

Consider skipping (not installing) ASMLIB entirely. By skipping ASMLIB, you can create an ASM disk group with a 512-byte sector size and direct the default redo log files (512-byte block size) to that disk group, at least in the interim, so that DBCA can complete the database creation.
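For illustration, such an interim 512-byte-sector disk group for the default redo logs might look like this (a sketch; the disk path and disk group name are hypothetical):

  CREATE DISKGROUP REDO EXTERNAL REDUNDANCY
    DISK '/dev/mapper/mpathr'
    ATTRIBUTE 'sector_size' = '512', 'compatible.asm' = '11.2', 'compatible.rdbms' = '11.2';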

 

HBA Settings, Multipath Software Settings

 

A single XtremIO X-Brick has two storage controllers, and each storage controller has two Fibre Channel ports. The best practice is to have two HBAs in the host and to zone each initiator to all targets in the fabric. The maximum recommended number of paths to the storage ports per host is 16. For the storage ports, the recommendation is to utilize all of them evenly amongst all hosts and clusters to ensure balanced utilization of the XtremIO’s resources.

 

The recommended LUN queue depth setting is the maximum supported per HBA when a single host connects to the XtremIO array: 256 for QLogic HBAs, 128 for Emulex HBAs. With 2 hosts, it is recommended to reduce that to half the maximum: 128 for QLogic, 64 for Emulex. As the number of hosts increases, decrease the LUN queue depth setting proportionately. The minimum recommended setting as the number of hosts in the SAN grows is 32 for both QLogic and Emulex HBAs.
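On Linux these LUN queue depths are typically applied through HBA driver module parameters. A hedged sketch for a two-host configuration, assuming the standard qla2xxx and lpfc module parameters (verify the parameter names against your driver version); the new values take effect after the driver is reloaded or the host is rebooted:

# /etc/modprobe.d/hba-queue-depth.conf
options qla2xxx ql2xmaxqdepth=128
options lpfc lpfc_lun_queue_depth=64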

 


 

It is a good practice to use a dynamic multipathing tool, such as PowerPath, to help distribute the I/O across the multiple HBAs, as well as to help cope with HBA failures. As more systems are added to the shared infrastructure, performance with static multipathing tools may suffer, causing additional management overhead and possible application availability implications as data needs to be redistributed across other paths. Dynamic multipathing will continue to adjust to changes in I/O response times from the shared infrastructure as the needs of the application and the usage of the shared infrastructure change over time.

 


Socialize this blog using bitly URL: http://bit.ly/1lcHCvQ


Click to Tweet:

New blog on tuning #Oracle databases on #XtremIO: http://bit.ly/W70Q0i Learn I/O blocksize, db_file_multiblock_read_count & other stuff #EMC

In any production environment, Oracle DBAs are invariably concerned about transaction processing speed, database performance, and backup and restore time and efficacy. Oracle Database running on an XtremIO array is a strong answer to these challenges. Here I am consolidating some of the great benefits and features of using Oracle DB on an XtremIO array:

 

Cost Savings

 

XtremIO is an all-flash storage array with deduplication technology that is extremely cost competitive with performance disk: 10X faster, 10X more space and power efficient, more reliable, and 10X simpler, with the lowest database TCO. The entire gamut of cost savings can be consolidated under the following heads:

  1. Database consolidation: with unmatched performance and free, space-efficient cloning
  2. License savings: XtremIO solves the single biggest drag on CPU, which is I/O. When your expensive database is not I/O-bound, dramatically more host CPU is available for transactions, lowering your database licensing costs so you get more productivity out of fewer processor cores
  3. Storage capacity: with inline data reduction, thin provisioning, and the lowest data protection overhead in the industry
  4. OPEX: with the simplest management, deployment, and test/dev agile data management in the industry

 

Much Improved I/O Performance and Other Benefits

 

XtremIO is a scale-out, multi-controller design, capable of delivering linear performance increases as capacity in the array grows. It has sub-millisecond latency and high IOPS, and the I/O performance is stable and predictable. The array is “inherently balanced” and optimized to within a fraction of a percent across all SSDs and array controllers. In order to balance across a multitude of controllers, and to avoid hot-spotting issues and back-end rebalancing processes, XtremIO uses a new mechanism for determining where to place data: it looks at the data itself with a sophisticated content fingerprinting process. This not only delivers better performance, better flash endurance, and an easier-to-use array, but also enables the array to deduplicate data inline in the data path at full speed. XtremIO leverages both the random-access nature of flash and the unique XtremIO dual-stage metadata engine to deliver much lower capacity overhead, better data protection, and much better flash endurance and performance compared with RAID. XtremIO delivers consistent and predictable performance because all metadata is stored in memory, and XtremIO arrays are designed to be impervious to changes in workload, thereby maintaining reliable DB I/O performance.

Every volume on an XtremIO storage system inherently gets the full performance of the array. We need not think about how to map indexes, tables, redo logs, journals, temporary space, and archives to appropriate RAID levels and spindle counts in order to drive performance. Just create a volume with a few clicks and put the entire database structure in it, with good I/O throughput and performance.

 

Fast and Free Dev/Test Copies

 

In an XtremIO environment, SCSI EXTENDED COPY (XCOPY) can be used to copy data from the LUNs of the production database to newly created LUNs used for test/dev environment provisioning. The XCOPY utility is a small tool developed by EMC to send read/write SCSI commands from the host to the XtremIO storage array without increasing space utilization. The XtremIO array automatically deduplicates data as the database is copied, which reduces the amount of data written to flash and extends the flash lifetime. XtremIO data reduction is always inline and never negatively affects the performance of the array. Data deduplication occurs across the array. There is no performance difference between accessing the primary volumes and accessing the cloned volumes; the metadata resolves to the same set of actual physical data blocks. This is preferred in situations where performance validation is required against the replica.

 

Simplified Database Layout

 

With traditional storage design for Oracle Database, multiple RAID groups of different drive types are created, each with different levels of protection and distributed across multiple controllers. With XtremIO, all drives are under XDP protection, and data in the array is distributed across the X-Bricks to maintain consistent performance and equivalent flash wear levels. In XtremIO, both random and sequential I/O are treated equally, as data is randomized and distributed in a balanced fashion throughout the array. In ASM nomenclature, this feature has effectively blurred the distinction between fine-grained and coarse-grained striping. In short, the DBA need not worry about the ASM striping type, as it is taken care of by XtremIO.

 

XtremIO Snapshots

 

XtremIO snapshots are implemented in a unique way that, for the first time, maintains space efficiency on writeable snapshots for both metadata and user data. In combination with XtremIO's unique in-memory metadata architecture, XtremIO snapshots allow for large numbers of high-performance, low-latency, read/writeable snapshots. XtremIO snapshots are easy to use, and appear and are managed as standard volumes in the cluster.

 


XtremIO snapshots are efficient in metadata and physical space, can be created instantaneously, have no performance impact, and have the same data services as any other volume in the cluster (for example, thin provisioning and Inline Data Reduction).

 

XtremIO snapshots can be used in a variety of use cases, including:

  • Near Continuous Data Protection (CDP) to protect against Logical corruption
  • Backup

It is possible to create snapshots to be presented to a backup server/agent. This can be used in order to offload the backup process from the production server.

 

  • Development and test

It is possible to create snapshots of the production data, create multiple (space-efficient, high-performance) copies of the production system, and present them for development and testing purposes.

 

  • Offload processing

It is possible to use snapshots as a means to offload the processing of data from the production server. For example, if there is a need to run a heavy process on the data (which can affect the production server's performance), it is possible to use snapshots to create a recent copy of the production data and mount it on a different server. This process can then be run on the other server without consuming the production server's resources.

  • Bulk provisioning of VMs
  • Logical corruption protection

It is possible to create frequent snapshots (based on the desired RPO intervals) and use them to recover from any logical data corruption. A snapshot can be kept in the system for as long as it is needed. If a logical data corruption occurs, it is possible to use a snapshot of an earlier application state (prior to the logical data corruption occurrence) to recover the application to a known good point in time.

 

XtremIO snapshots are easy to use and manage, and leverage a sophisticated metadata management engine that provides superior support for flash media, enabling high-performance snapshotting. XtremIO snapshots can be used for development and test, backups, protection against logical corruption, offload processing (real-time analytics, SAP landscapes, and more), and consolidation of testing and development with production.

 

In the next blog, I am going to discuss some best practices for deploying and running Oracle DB on an XtremIO array in order to take advantage of the above features and benefits.

 

References:

  • For details regarding the XtremIO array, please visit the site here.
  • Please visit the EMC Store for the XtremIO array datasheet here.

 

 

Socialize this blog with bitly URL: http://bit.ly/W70Q0i

 

 

Click to Tweet:

#XtremIO is an all flash storage array with inline data reduction and great speed providing the business a great TCO: http://bit.ly/W70Q0i

 



Introducing EMC ProtectPoint, with Oracle RMAN Integration!


 



In my previous blog post Storage Integrated Data Protection, for Oracle Databases too!, I mentioned that on Tuesday, July 8th, EMC announced ProtectPoint, an industry-first data protection offering that provides direct backup from primary storage to protection storage. It delivers the performance of snapshots with the functionality of backups, while eliminating the impact of backup on the application environment and ensuring consistent application performance.

 

Further, EMC ProtectPoint enables fast recovery and instant access to protected data for simplified granular recovery, while removing the need for excess backup infrastructure, reducing overall cost and complexity.

 

ProtectPoint will initially support backing up the new VMAX3 storage systems to EMC Data Domain systems DD4500, DD7200, and DD990 running DD OS 5.5.

 

 

In this blog post, let's examine how EMC ProtectPoint performs backup and recovery operations.

At a high level, this is how EMC ProtectPoint works to back up directly from primary storage to Data Domain. The first step is for the storage administrator to make a point-in-time copy of the LUN(s) to be protected and seed the initial blocks on the Data Domain system; the environment is then ready for its first full backup via ProtectPoint.

 

 


 

  1. As shown above, an application owner, such as an Oracle DBA using the new ProtectPoint script integration for Oracle RMAN, triggers a backup at an application consistent checkpoint
  2. This triggers the primary storage, leveraging new primary storage changed block tracking, to send only the changed blocks (since the last backup/initial copy) directly to Data Domain
  3. The Data Domain system will receive and deduplicate the changed blocks, using them to create an independent full backup in native format, which enables greatly simplified recovery


With ProtectPoint, you perform a full RMAN backup every time, but primary storage only sends unique blocks, so the full backup comes at the cost of an incremental.

Let’s take a look at how a recovery works with ProtectPoint. First we’ll review a full recovery, which would be recovering an entire LUN.

 

 


 

  1. First, the app owner will trigger the recovery …
  2. Then, the primary storage reads the full backup image from Data Domain
  3. The primary storage will then replace the production LUN with the recovered copy

 

In comparison, here’s how a granular recovery works. For an Oracle database environment this might be recovering a specific database, table, or record, as opposed to the entire LUN.

 

 


 

 

  1. First, the app owner or DBA triggers the recovery
  2. Then, primary storage connects to the backup image on the Data Domain system
  3. This gives the DBA instant access to their protected data, which will still be on the Data Domain system but will appear as if it’s on primary storage.


At this point, the DBA can use the LUN as they would a snapshot and, for example, perform normal steps to open a database and recover a specific object to the production database.

 

 


 

 

Overall, with EMC ProtectPoint you can reduce the time, cost, and complexity of managing application backups by eliminating the impact of backups on application servers with non-intrusive data protection, since no data flows through the application server. This ensures you will maintain consistent application performance while still gaining application-consistent backups for simple recovery.


Further, you’ll finally be able to meet stringent protection SLAs, since only changed blocks are sent directly across the network. And because all backups are stored in native format, you’ll gain much faster backup, faster recovery, and instant access to protected data for simplified granular recovery.


EMC ProtectPoint is simple and efficient, and requires no additional infrastructure beyond your new VMAX3 and Data Domain appliance.

 


In a future blog post, I will examine how EMC ProtectPoint provides script integration with Oracle Recovery Manager (RMAN) for Oracle 11g and 12c databases running on Unix (Solaris, AIX, HP-UX) and Linux.


Comments / Questions?


 

 


Storage Integrated Data Protection, for Oracle Databases too!

 



 

 

On Tuesday, July 8th, EMC announced XtremIO 3, VMAX 3 and GA for ViPR 2 and ECS; along with the VMAX3 announcement came EMC ProtectPoint – an industry-first data protection offering that provides direct backup from primary storage to protection storage.

 

Let's first consider traditional backups. The issues faced range from the complexity of the required backup infrastructure and the unpredictable impact on application and database servers, to a lack of control for application owners and DBAs.

 

 


 

EMC ProtectPoint provides a new and revolutionary approach, delivering the performance of snapshots with the functionality of backups while eliminating the impact of backup on the application environment and ensuring consistent application performance. Further, ProtectPoint enables fast recovery and instant access to protected data for simplified granular recovery, while removing the need for excess backup infrastructure, reducing overall cost and complexity.

 

 


 

 

For more background and a high-level view of where ProtectPoint fits in EMC’s Data Protection Continuum, EMC’s Ashish Yanik, Senior Director of Product Management, Data Protection and Availability Division, has written an EMC Pulse blog: “Best of Both Worlds Data Protection? Now it’s possible…”

 

As mentioned towards the end of Ashish’s blog, ProtectPoint will initially support backing up the new VMAX3 storage systems to EMC Data Domain systems DD4500, DD7200, and DD990 running DD OS 5.5. But before going further, we should first mention the new VMAX3!

 

Introducing VMAX3: the industry’s first purpose-built data platform for hybrid clouds, delivering up to 3X performance and redefining the data center to deliver the agility and economics of cloud with control and trust.

 

The new VMAX3 delivers always-on, six-nines enterprise availability with fault isolation, data integrity, and non-disruptive upgrades, completely protected with the industry’s premier disaster recovery / business continuity architecture, providing:

  • Rapid, reliable recovery: Directly back up to Data Domain with new EMC ProtectPoint to eliminate backup impact on application and database servers.
  • Cloud agility and economics: Using simplified provisioning with automated service level delivery across mixed workloads.
  • Workload mobility: Moving enterprise and modern application data within the system and across hybrid cloud to optimize performance and cost.

 

Redefining Data Protection for Oracle, with EMC ProtectPoint

 

By itself, although very compelling, enabling direct backup from primary storage to protection storage didn't sound like something DBAs could take advantage of. My own first thought was “that sounds great for Storage Administrators, but what good will it do for me as a DBA?”

 

I could not have been more wrong! From its first release, EMC ProtectPoint provides script integration with Oracle Recovery Manager (RMAN) for Oracle 11g and 12c databases running on Unix (Solaris, AIX, HP-UX) or Linux, eliminating backup impact from Oracle database servers and enabling the fastest possible backup and recovery.

 

EMC ProtectPoint does not require a database to be in ‘backup mode’ for the length of time it takes to protect the data. In addition, with ProtectPoint, backups are significantly faster since only unique blocks are sent from the primary storage to Data Domain. Recovery is also faster because every backup is stored as a full backup, which removes the overhead of recovering from full backups and their incrementals.

 

Much like Data Domain Boost for Enterprise Applications, ProtectPoint empowers application owners and database administrators to control their own backup and recovery. To achieve this, all cataloging and policy management is done via the application’s native utilities. For Oracle RMAN, EMC ProtectPoint provides the same control to Oracle DBAs as DD Boost for RMAN: full control of backup, recovery, and replication natively from RMAN.

 

To complete the picture of what was announced on July 8th for EMC ProtectPoint, the initial release is unique and exclusive to the new VMAX3 systems (VMAX 100K, 200K and 400K), with plans to provide support on VNX and XtremIO in 2015.

 

In a follow-up blog post (Introducing EMC ProtectPoint, with Oracle RMAN Integration!), I describe how EMC ProtectPoint works and explain what is meant by “enables … instant access to protected data for simplified granular recovery”. Intrigued?

 

Comments / Questions?

 

