
AppSync is a GUI-driven management interface to EMC array-based cloning, snapshot and remote replication technologies, and to RecoverPoint local and remote replication technologies. With release 2.0 of AppSync, intelligence around Oracle databases has been added. For those of you familiar with EMC Replication Manager (RM) and its use for Oracle databases, AppSync now supports similar functionality.

 

Some GUI-driven scenarios:

1) For Oracle backups and restores, users can subscribe their databases to a customizable service plan that, for example, (i) creates both local and remote clones, (ii) creates only a local clone, or (iii) creates only a remote clone. Service plans can operate through AppSync's scheduler: the database is placed in hot backup mode before the clone is snapped off, and the clone can then be mounted to a backup host from which to run an RMAN backup job (a minimal sketch of the hot backup bracket follows these scenarios). Alternatively, the clone can be mounted to the production host and registered with the RMAN catalog, to be used for RMAN restore operations at the level of granularity that RMAN supports.

 

2) Repurposing copies of a production source, or copies of copies, can be used for life-cycle management of test, development, training, patching, reporting and similar environments. These copies can be mounted on a schedule, on alternative hosts, and with changes to, for example, the database name and SID. On-demand refreshes of the copies are also possible, so that control of the refresh can be passed to the relevant teams using the clones or snaps.
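
As promised in scenario 1, here is a minimal sketch of what the hot backup bracket that AppSync automates amounts to. This is not AppSync's own code; the sqlplus invocation and the assumption of a local bequeath connection as SYSDBA are illustrative only.

```python
import subprocess

def run_sql(statements: str) -> None:
    """Run SQL*Plus as SYSDBA on the local database (bequeath connection assumed)."""
    result = subprocess.run(
        ["sqlplus", "-S", "/ as sysdba"],
        input=statements.encode(),
        capture_output=True,
        check=True,
    )
    print(result.stdout.decode())

# Pre-copy step: place the database in hot backup mode.
run_sql("ALTER DATABASE BEGIN BACKUP;\nEXIT;\n")

# ... the array-based clone or snapshot is taken here (by AppSync) ...

# Post-copy step: take the database out of hot backup mode.
run_sql("ALTER DATABASE END BACKUP;\nEXIT;\n")
```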

 

At a more general level, AppSync allows for:

  • Mounting copies to alternative hosts
  • Mounting a copy with a new database name and SID, for example renaming from prod to test
  • Mounting to alternative mount points or renamed ASM disks
  • Making clones or snaps where the database is either placed in hot backup mode or shut down first, or where the copy is created as a write-ordered restartable image
  • Running pre- or post-clone/snap scripts, for example post-clone RMAN scripts (see the sketch after this list)
  • Scheduling clone and snapshot jobs
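
As an illustration of the post-clone RMAN scripts bullet above, the following sketch shows how a post-mount script might drive RMAN to register a clone that has been mounted back on a host. It is not code generated by AppSync; the mount point and the specific RMAN commands are assumptions for the example.

```python
import subprocess
import textwrap

# Hypothetical mount point where the clone has been presented to the host.
CLONE_MOUNT = "/appsync/clone/PROD"

rman_script = textwrap.dedent(f"""
    CATALOG START WITH '{CLONE_MOUNT}' NOPROMPT;
    LIST DATAFILECOPY ALL;
    EXIT;
""")

# Run RMAN against the target database using OS authentication.
subprocess.run(["rman", "target", "/"], input=rman_script.encode(), check=True)
```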

 

Supported Oracle database environments:

  • Oracle 11.2.x, Oracle 12c (does not include new 12c container features)
  • Supported on Linux - physical, virtual (pRDM, vDisk)
  • Supported on AIX - physical, NPIV
  • Supported with VMAX, VNX, RecoverPoint
  • Standalone, RAC, ASM

 

Support for VMAX

  • Oracle, File-Systems, Microsoft Exchange, Microsoft SQL Server, VMware DataStores
  • Local snaps, clones
  • Remote snaps, clones
  • Local/remote snaps of clones, clones of clones

 

Repurposing: for Oracle databases residing on VMAX or VNX, AppSync supports the ad hoc creation of Oracle database copies, followed by the creation of copies of those copies; this practice is referred to as repurposing. The repurposed copies can serve many useful functions, including test/dev, break-fix, data mining and reporting.

  • VMAX -- Local/remote snaps of clones, clones of clones
  • VNX -- Snap of snap

 

Support for generic filesystems includes:

  • Dynamic discovery of filesystems during Service Plan run
  • Protection, Mount and Restore
  • Supported on Windows, Linux, AIX


AppSync Backup Assistant - backs up and restores (rolls back) the AppSync database.

AppSync host plug-in can coexist with NetWorker or Replication Manager agents.

Snapshots are instantaneous copy images of volume data, with the state of the data captured exactly as it appeared at the specific point in time that the snapshot was created. They enable users to save the volume data state and then access the specific volume data whenever needed, including after the source volume has changed.

 

EMC XtremIO snapshots can be created at any time without affecting system performance, and a snapshot can be taken either directly from a source volume or from other snapshots within a source volume's group (Volume Snapshot Group).

 

The original copy of the data remains available without interruption, while the snapshot can be used to perform other functions on the data. Changes made to the snapshot's source do not change or impact the snapshot data.

 

XtremIO's snapshot technology is implemented by leveraging the content-aware capabilities of the system (Inline Data Reduction), optimized for SSD media, with a unique metadata tree structure that directs I/O to the right timestamp of the data. This allows efficient creation of snapshots that can sustain high performance, while maximizing the media endurance, both in terms of the ability to create multiple snapshots and the amount of I/O that a snapshot can support.

 

When creating a snapshot, the system generates a pointer to the ancestor metadata (of the actual data in the system). Therefore, creating a snapshot is a very quick operation that does not have any impact on the system and does not consume any capacity. Snapshot capacity consumption occurs only if a change requires writing a new unique block.

 

XtremIO snapshots are space-efficient both in terms of additional metadata consumed and physical capacity. Snapshots are implemented using a redirect-on-write methodology, where new writes to the source volume (or snapshot) are redirected to new locations, and only metadata is updated to point to the new data location. This method guarantees no performance degradation while snapshots are created.
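
The redirect-on-write behaviour described above can be pictured with a small, purely conceptual sketch: the snapshot shares the source's block map at creation time, and a later write to the source lands in a new location while only the source's map entry is updated. This is an illustration of the idea, not XtremIO's actual metadata structures.

```python
class Volume:
    """Toy model of a volume as a map from logical block address to data location."""
    def __init__(self, block_map=None):
        self.block_map = dict(block_map or {})

    def snapshot(self):
        # Creating a snapshot only copies pointers (metadata), no data blocks.
        return Volume(self.block_map)

    def write(self, lba, new_location):
        # Redirect-on-write: new data goes to a new location; only this
        # volume's pointer is updated. Snapshots keep their original pointers.
        self.block_map[lba] = new_location


source = Volume({0: "loc-A", 1: "loc-B"})
snap = source.snapshot()          # instant, metadata-only
source.write(1, "loc-C")          # new write redirected to a new location

print(source.block_map)           # {0: 'loc-A', 1: 'loc-C'}
print(snap.block_map)             # {0: 'loc-A', 1: 'loc-B'}  (unchanged)
```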

 

Snapshots can be accessed like any other volume in the cluster, in read/write mode, to enable a wide range of uses, including:

  • Logical corruption protection — Taking frequent snapshots at a defined recovery point objective (RPO) interval lets you use snapshots to recover from logical data corruption. The snapshots are retained in the system for as long as deemed necessary and remain available for recovery use, should logical data corruption occur, enabling recovery of an earlier application state from a known point in time prior to the corruption of the data (a small retention-scheduling sketch follows this list).
  • Backups — Presenting snapshots to a backup server (or agent) enables offloading the backup process from the production server.
  • Development and testing — Taking snapshots of production data enables you to create multiple space-efficient, high-performance copies (snapshots) of the production system for the purpose of development and testing.
  • Clones — Using persistent writable snapshots enables achieving clone-like capabilities. The snapshots can act as clones of the production volume to multiple servers. Clone performance is identical to that of the production volume.
  • Offline processing — Use snapshots as a means of offloading data processing from the production server. For example, if you need to run a heavy process on data (which may be detrimental to the production server's performance), you can use snapshots to create a recent copy of the production data and then mount it on a different server. The process can then be run (on the other server), without consuming the production server's resources.
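
For the logical corruption protection use case, the scheduling logic behind "snapshot every RPO interval, keep for the retention period" is simple enough to sketch. The snapshot-creation call itself is left as a placeholder, since the real operation would go through the array's management interface; the RPO and retention values are example assumptions.

```python
import time
from collections import deque

RPO_SECONDS = 15 * 60           # take a snapshot every 15 minutes (example RPO)
RETENTION_SECONDS = 24 * 3600   # keep snapshots for 24 hours (example retention)

snapshots = deque()             # (timestamp, snapshot_id) pairs, oldest first

def take_snapshot(ts):
    # Placeholder: in practice this would call the array's management interface.
    return f"snap-{int(ts)}"

def run_once(now):
    snapshots.append((now, take_snapshot(now)))
    # Prune snapshots older than the retention window.
    while snapshots and now - snapshots[0][0] > RETENTION_SECONDS:
        snapshots.popleft()

# Example driver (one iteration shown; a scheduler would call this every RPO interval).
run_once(time.time())
print(snapshots)
```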

 

In summary, it is the highly efficient use of shared in-memory metadata that makes creating an EMC XtremIO snapshot a very quick operation, with no impact on production systems and without consuming any capacity. Snapshots can then be mounted to other servers to enable offline processing, such as analytics and reporting.

 

Further details can be found in the EMC XtremIO Storage Array User Guide, available on EMC Support.


Questions / Comments?


 

I know I should not be surprised by the recent unbelievable 5x/10x claims comparing Oracle's Sun ZFS Storage Appliance with EMC and other vendors' "legacy" products, but they really were not comparing similar-purpose products to their own.

 

I firmly believe EMC prides itself on providing informed customer choice and a breadth of products that most often exceed customer requirements. With that in mind, the right way for technical people like us to respond to this form of old-style marketing is either to present our strengths and leave the customer to decide, or, preferably, to become sufficiently familiar with Oracle's competing products to compare them fairly with our own, ensuring the right questions are asked of both our competitors and our own specialists, to the benefit of our customers.

 

Having recently come across references on EMC Support and ECN describing VNX, Isilon and XtremIO simulators available for download, I began to think how cool it would be if Oracle had a simulator for their ZFS appliance that everyone could use for testing. After a quick search on Google, below is an introduction to what I found.

 

Oracle’s Sun ZFS Storage Appliance Simulator

 

You must be registered with an account on Oracle’s Technology Network (OTN) and then perform the following steps to download, install and configure the Sun ZFS Storage Appliance Simulator.

 

Hardware requirements for the Oracle VM are a healthy 2GB of RAM, 1 CPU and 125GB of disk space; this is a storage simulator with a 50GB system disk and 15 x 5GB disks for testing purposes. In the GUI provided, you actually get a view of the storage array and can click on individual disks for information.

 

Follow these steps to get the Oracle ZFS Storage Simulator:

    

1. Install and start VirtualBox 3.1.4 or later.

 

2. Download the simulator and uncompress the archive file.


3. Select "File - Import Appliance" in VirtualBox or simply double-click the file Oracle_ZFS_Storage.ovf


4. Select the VM labeled "Oracle_ZFS_Storage" and check Network: Adapter 1; I had to change this to "Bridged Adapter".

 

5. Start the VM.

 

When the simulator initially boots, you will be prompted for some basic network settings; this is apparently exactly the same as if you were using an actual Oracle ZFS Storage Appliance:

 

  • Host Name: oraclezfs1
  • DNS Domain: "localdomain"
  • Host IP Address: 192.168.0.101
  • Netmask: 255.255.255.0
  • Default Router: 192.168.0.1
  • DNS Server: 192.168.0.1
  • Password: @@@@@@@@
  • Password: @@@@@@@@ (re-entered to confirm)

 

After you enter this information, press ESC-1 and wait; you will see two URLs to use for subsequent configuration and administration, one with the IP address and one with the hostname you specified.

 

Use the URL with the IP address (for example https://192.168.0.101:215/) to complete appliance configuration.

 

The login name is 'root' and the password is what you entered as @@@@@@@@ above.
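
If you want a quick scripted check that the simulator's management interface is up before opening the browser, something like the sketch below works. It only verifies that the BUI URL responds (the simulator presents a self-signed certificate, hence verify=False); the IP address is the one from the example settings above.

```python
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

BUI_URL = "https://192.168.0.101:215/"   # IP address from the example settings above

resp = requests.get(BUI_URL, verify=False, timeout=10)
print(BUI_URL, "->", resp.status_code)   # expect HTTP 200 with the login page
```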

 

It is best now to download the Oracle ZFS Storage Appliance Simulator Quick Start Guide and, starting on page 5, follow the steps to confirm the settings screens, check the values entered and click COMMIT to continue.

 

Page 14 gives an example filesystem setup, with further information available in the product documentation:

 

Sun ZFS Storage Appliance Product Documentation

Oracle Snap Management Utility for Oracle Database User Guide

Oracle Enterprise Manager Plug-in for Oracle ZFS Storage Appliance User Guide

 

With this simulator, you can get a feel for the GUI and storage features available in a Sun ZFS Storage Appliance, and have a better-informed conversation with our customers about the features and functionality available.

 

Comments/Questions?


As an attorney, I am frequently asked this question: "Is blah, blah, blah illegal?" To which I have a standard response: "The question of whether a given activity is illegal is uninteresting. A more interesting question is: What bad thing happens to you when you do blah, blah, blah?"


The question of whether a given application is mission critical is similarly uninteresting. A more interesting question is: "What bad thing happens to the business if the application fails?" And another interesting question: "How can we protect the business from the bad consequence of the application failing?"


In my experience, this varies dramatically based upon the nature of the application. For example, the failure of a typical Oracle application will result in severe consequences to the business. This is because of the nature of the beast: Oracle is typically used to manage the primary business data of the enterprise. Thus, loss of even a single Oracle transaction (say, the trading instructions of a customer of a stock broker) would result in severe legal consequences.


In this context (i.e. a traditional 2nd platform application like Oracle), concepts like backup, clustering, and remote replication all make perfect sense, and EMC has exceptional products to supply those needs.


A 3rd platform application is typically very different. Take MongoDB, an application with which I am fairly familiar. Mongo folks will consistently tell you: "You are going to lose some data. Get over it!" Thus, Mongo is not used for any purpose where transactional consistency is required. Usually, the customer will implement Mongo for an intermediate stage, scratchpad type of function.


Also, Mongo datastores are often astronomically large. (Petabytes are common.) It is simply not possible to back up something that big.


Further, Mongo implements sharding with replication, distributing redundant copies of data across geographically dispersed servers. For this reason, the loss of a single Mongo server is simply uninteresting. No consequences occur at all from this, other than possibly a minor, temporary performance blip.


For these reasons, clustering, backup and remote replication are not very interesting for a 3rd platform application like Mongo (although there is some variability in that).


And therein lies the challenge for a company like EMC, which has traditionally dominated in the mission-critical 2nd platform types of applications, similar to Oracle. But then again, EMC has a long and storied tradition of reinvention. I have no doubt that EMC will eventually become one of the dominant players in the 3rd platform.

In the previous blog, I discussed two important features of vSphere, viz. RDM and vMotion Migration. In this blog I will be discussing a couple more interesting features:

 

vSphere Data Protection (VDP) is a robust, simple to deploy, disk-based backup and recovery solution. VDP is fully integrated with the VMware vCenter Server and enables centralized and efficient management of backup jobs while storing backups in deduplicated destination storage locations.

 

The VMware vSphere Web Client interface is used to select, schedule, configure, and manage backups and recoveries of virtual machines. During a backup, VDP creates a quiesced snapshot of the virtual machine. Deduplication is automatically performed with every backup operation.

At every level of the datacenter, from individual components all the way up to the entire site, VMware vSphere 5.x provides protection against both planned and unplanned downtime. All these features combine to provide greater availability to all supported operating systems and applications, as you can see in the illustration below.

zz4.png

 

Benefits

 

  • Protection against hardware failures
  • Planned maintenance with zero downtime
  • Protection against unplanned downtime and disasters
  • Fast, efficient backup and recovery for vSphere virtual machines
  • Significantly reduced backup data disk space requirements, with a patented, variable-length deduplication technology across all backup jobs
  • Use of vSphere Storage APIs – Data Protection and Changed Block Tracking (CBT) to reduce load on the vSphere host infrastructure and minimize backup window requirements (a conceptual CBT sketch follows this list)
  • Full virtual machine restore—or “image-level” restore—and File Level Restore (FLR), without the need for an agent to be installed in every virtual machine
  • Simplified deployment and configuration using a virtual appliance form factor
  • Administration through vSphere Web Client
  • Appliance and data protection via a checkpoint-and-rollback mechanism
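
The Changed Block Tracking idea referenced in the list above is easy to picture: after a full backup, only the blocks the hypervisor reports as changed need to be read and transferred on the next run. The sketch below is a conceptual illustration, not the vSphere Storage APIs – Data Protection themselves.

```python
def incremental_backup(disk_blocks, changed_block_ids, previous_backup):
    """Copy only blocks flagged as changed since the last backup.

    disk_blocks:        current contents of the virtual disk, indexed by block id
    changed_block_ids:  block ids the hypervisor reports as changed (CBT)
    previous_backup:    the last backup image, indexed by block id
    """
    backup = dict(previous_backup)             # start from the previous image
    for block_id in changed_block_ids:         # read and transfer only changed blocks
        backup[block_id] = disk_blocks[block_id]
    return backup


disk = {0: "boot", 1: "app-v2", 2: "data"}
previous = {0: "boot", 1: "app-v1", 2: "data"}
changed = {1}                                  # CBT reports that only block 1 changed

print(incremental_backup(disk, changed, previous))
```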

 

 

vSphere HA and Fault Tolerance

Many methods ensure high availability in a virtualized environment. vSphere 5.x uses technologies like the following to ensure that virtual machines running in the environment remain available:

  • Virtual machine migration
  • Multiple I/O adapter paths
  • Virtual machine load balancing
  • Fault tolerance
  • Disaster recovery tools

 

High availability and fault tolerance offerings are different from other business continuity offerings because:

  • They exist in a single physical datacenter. Other solutions, such as VMware vCenter Site Recovery Manager (SRM), can operate across physical locations.
  • They use shared storage for holding the data of the machines. Other solutions use multiple copies of the data, which are regularly replicated.

VMware vSphere vMotion and VMware vSphere Storage vMotion keep virtual machines available during a planned outage, for example when hosts or storage must be taken offline for maintenance. System recovery from unexpected storage failures is simple, quick and reliable thanks to the encapsulation property of virtual machines. Storage vMotion can be used to support planned storage outages resulting from upgrades of storage arrays to newer firmware or technology, and from VMware vSphere® VMFS upgrades.

 

Let us look at the basic HA architecture.

 

zz5.png

 

 

To configure high availability, a number of ESXi hosts are grouped into an object called a cluster. When vSphere HA is enabled, the Fault Domain Manager (FDM) service starts on the member hosts. After the FDM agents have started, the cluster hosts are said to be in a fault domain. Hosts cannot participate in a fault domain if they are in maintenance mode, standby mode, or disconnected from vCenter Server. A host can be in only one fault domain at a time.
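
For completeness, enabling vSphere HA on a cluster can also be scripted. The following pyVmomi sketch shows the general shape of that call; the vCenter address, credentials and cluster name are placeholders, certificate checking is disabled for brevity, and error handling is omitted.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Find the cluster by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster01")

# Turn on vSphere HA; the FDM agents start on the member hosts once this applies.
spec = vim.cluster.ConfigSpecEx(dasConfig=vim.cluster.DasConfigInfo(enabled=True))
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Disconnect(si)
```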

vSphere HA provides a base level of protection for your virtual machines by restarting them in the event of a host failure. Fault Tolerance provides a higher level of availability, allowing users to protect any virtual machine from a host failure with no loss of data, transactions, or connections. Fault Tolerance provides zero downtime, zero data loss, and continuous availability for your applications.

Fault Tolerance is used for mission-critical applications that can tolerate no downtime or data loss. Fault Tolerance can be used for applications that must be available at all times, especially those that have long-lasting client connections. Fault Tolerance can also be used for custom applications that have no other way of doing clustering because, for example, they are not running in an operating system that has cluster capabilities. Fault Tolerance is a less complex alternative to using third-party applications for failover clustering. Fault Tolerance can be used with DRS when Enhanced vMotion Compatibility (EVC) is enabled. When a cluster has EVC enabled, DRS:

 

  • Makes the initial placement recommendations for fault-tolerant virtual machines
  • Moves them during cluster load rebalancing
  • Allows you to assign a DRS automation level to the primary virtual machine; the secondary virtual machine assumes the same setting

 

When DRS is used with fault-tolerant virtual machines, a primary virtual machine can be placed on any suitable host in the cluster. The host chosen for the secondary virtual machine must have the same processor type as the primary virtual machine's host.

In the previous blog, I talked about different features of VMware vSphere. In this blog, I would like to discuss the following couple of vSphere features:

 

  • Raw Device Mapping (RDM)
  • vMotion Migration

 

Raw Device Mapping (RDM): Introduced with ESX Server 2.5, raw device mapping allows a special file in a VMFS volume to act as a proxy for a raw device.

 

An RDM is a file stored in a VMFS volume that acts as a proxy for a raw physical device. Instead of storing virtual machine data in a virtual disk file on a VMFS datastore, we can store the guest operating system data directly on a raw LUN. Storing the data this way is useful if we are running applications in our virtual machines that must know the physical characteristics of the storage device, and mapping a raw LUN allows us to use existing SAN commands to manage storage for the disk. The mapping file contains metadata used to manage and redirect disk accesses to the physical device. The mapping file gives us some of the advantages of a virtual disk in the VMFS file system, while keeping some advantages of direct access to physical device characteristics. In effect, it merges VMFS manageability with raw device access.
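
To make the "mapping file as a proxy" idea concrete, the snippet below builds the device specification a vSphere client could use to add an RDM disk to a VM with pyVmomi. The LUN device path, compatibility mode and controller placement are assumptions for illustration, and actually applying the spec to a VM (via a reconfigure task) is not shown.

```python
from pyVmomi import vim

def build_rdm_disk_spec(lun_device_name, controller_key, unit_number):
    """Build (but do not apply) a device spec that adds an RDM disk to a VM.

    lun_device_name is the raw LUN's device path as seen by the host,
    e.g. '/vmfs/devices/disks/naa.xxxx...' -- a placeholder here.
    """
    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
    backing.deviceName = lun_device_name
    backing.compatibilityMode = "physicalMode"   # or "virtualMode"
    backing.diskMode = "independent_persistent"
    backing.fileName = ""                        # mapping file placed with the VM

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.controllerKey = controller_key
    disk.unitNumber = unit_number

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    spec.device = disk
    return spec

# Example usage (placeholders): the spec would go into a vim.vm.ConfigSpec
# and be applied with the VM's ReconfigVM_Task.
rdm_spec = build_rdm_disk_spec("/vmfs/devices/disks/naa.placeholder",
                               controller_key=1000, unit_number=1)
```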

 

An RDM is recommended when a virtual machine must interact with a real disk on the SAN. This is the case when we take disk array snapshots, or have a large amount of data that we do not want to move onto a virtual disk as part of a physical-to-virtual conversion.

 

RDM can be used in the following scenarios:

 

  • As a backup drive only
  • When a VMFS virtual disk would become too large
  • To utilize native SAN tools, such as SAN snapshots
  • For disaster recovery – connect the RDM to another physical host
  • For vMotion activity – the VM is registered to the destination host

   

In the diagram below, we can observe that:

 

  • RDM enables us to store virtual machine data directly on a designated LUN.
  • The mapping file, which points to the raw LUN, is stored on a VMware vSphere VMFS datastore.

  zz1.png

 

vMotion Migration: VMware® VMotion™ enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity. VMotion is a key enabling technology for creating the dynamic, automated, and self-optimizing datacenter.

zz3.png

vMotion migrates running virtual machines from one server to another with no disruption or downtime. vMotion enables VMware vSphere® Distributed Resource SchedulerTM (DRS) to migrate running virtual machines from one host to another to balance the load.

With vMotion, the entire state of the virtual machine is moved from one host to another while the data storage remains in the same datastore.

The state information includes the current memory content and all the information that defines and identifies the virtual machine. The memory content includes transaction data and whatever bits of the operating system and applications are in memory. The definition and identification information stored in the state includes all the data that maps to the virtual machine hardware elements, such as:

  • BIOS
  • Devices
  • CPU
  • MAC addresses for the Ethernet cards

   

The source and destination hosts must meet the following requirements for a vMotion migration to be successful:

  • SAN visibility of virtual disks.
  • Gigabit Ethernet (or greater) interconnection
  • Consistent network configuration, both physical and virtual
  • Source and destination server CPUs from the same compatibility group

 

The VM requirements for a vMotion migration are the following:

 

  • If the VM uses an RDM, the RDM must be accessible to the destination host.
  • vMotion must be able to create a swap file if the VM’s swap file is not accessible to the destination host.
  • A VM must not have CPU affinity configured.
  • A VM must not have a connection to an internal-only standard virtual switch, or to a virtual device such as a CD-ROM or floppy drive with a local image mounted.

   

Working Mechanism of vMotion Migration

 

Live migration of a virtual machine from one physical server to another with VMware VMotion is enabled by three underlying technologies.

 

First, the entire state of a virtual machine is encapsulated by a set of files stored on shared storage such as Fibre Channel or iSCSI Storage Area Network (SAN) or Network Attached Storage (NAS). VMware vStorage VMFS allows multiple installations of VMware ESX® to access the same virtual machine files concurrently.

 

Second, the active memory and precise execution state of the virtual machine is rapidly transferred over a high-speed network, allowing the virtual machine to instantaneously switch from running on the source ESX host to the destination ESX host. VMotion keeps the transfer period imperceptible to users by keeping track of ongoing memory transactions in a bitmap. Once the entire memory and system state has been copied over to the target ESX host, VMotion suspends the source virtual machine, copies the bitmap to the target ESX host, and resumes the virtual machine on the target ESX host. This entire process takes less than two seconds on a Gigabit Ethernet network.
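
The pre-copy behaviour just described can be illustrated with a toy simulation: memory pages are copied while the guest keeps dirtying some of them, the dirty set is tracked in a "bitmap", and the VM is only paused for the final, small transfer. This is a conceptual sketch with made-up parameters, not VMotion's implementation.

```python
import random

def vmotion_precopy(memory_pages, dirty_rate=0.05, max_rounds=10, cutover_threshold=8):
    """Toy simulation of vMotion's iterative pre-copy with a dirty-page bitmap."""
    target = {}
    to_copy = set(memory_pages)                      # first round: all pages
    for round_no in range(1, max_rounds + 1):
        for page in to_copy:                         # copy this round's pages
            target[page] = memory_pages[page]
        # While copying, the guest keeps running and dirties some pages;
        # the hypervisor records them in a bitmap (here: a set).
        dirty = {p for p in memory_pages if random.random() < dirty_rate}
        print(f"round {round_no}: copied {len(to_copy)} pages, {len(dirty)} dirtied")
        if len(dirty) <= cutover_threshold:          # small enough: quiesce and cut over
            for page in dirty:
                target[page] = memory_pages[page]    # final copy during the brief pause
            print("VM quiesced, final dirty pages copied, resumed on target host")
            return target
        to_copy = dirty
    return target

source_memory = {page_id: f"contents-{page_id}" for page_id in range(1000)}
vmotion_precopy(source_memory)
```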

 

Third, the networks being used by the virtual machine are also virtualized by the underlying ESX host, ensuring that even after the migration, the virtual machine's network identity and network connections are preserved. VMotion manages the virtual MAC address as part of the process. Once the destination machine is activated, VMotion pings the network router to ensure that it is aware of the new physical location of the virtual MAC address. Let us now look at the technical implications with the diagram below:

 

zz2.png

 

In the above diagram, the source host is esx01 and the target host is esx02. The source host and the target host have access to the shared datastore holding the virtual machine’s files.

 

A vMotion migration consists of the following steps:

  1. The virtual machine’s memory state is copied over the vMotion network from the source host to the target host. Users continue to access the virtual machine and, potentially, update pages in memory. A list of modified pages in memory is kept in a memory bitmap on the source host.
  2. After most of the virtual machine’s memory is copied from the source host to the target host, the virtual machine is quiesced. No additional activity occurs on the virtual machine. In the quiesce period, vMotion transfers the virtual machine device state and memory bitmap to the destination host.
  3. Immediately after the virtual machine is quiesced on the source host, the virtual machine is initialized and starts running on the target host. A Reverse Address Resolution Protocol (RARP) request notifies the subnet that the virtual machine's MAC address is now on a new switch port.
  4. Users access the virtual machine on the target host instead of the source host.
  5. The memory pages that the virtual machine was using on the source host are marked as free.

  

Benefits of using vMotion Migration

  • Improve availability by conducting maintenance without disrupting business operations
  • Move virtual machines within server resource pools to continuously align the allocation of resources to business priorities

 

In the next blog, I will try to discuss some more features of vSphere.

Just a quick note to let everyone know that EMC is hosting two webinar sessions on XtremIO for databases. Both are on Thursday, June 12th: one at 8 am PST and the other at 7 pm PST. The webinar will be posted afterwards for on-demand viewing.

 

To register go to http://xtremio.com/webinars

 

The posted description is:

 

Transform Your Database Infrastructure with EMC XtremIO

EMC XtremIO all-flash arrays are significantly changing the way database applications are deployed. This webinar will provide insight into how flash can be used beyond boosting performance, for things like reducing over-provisioning and reducing overall costs. You will hear about different options to deploy streamlined, on-demand and virtualized data warehouse/analytics and test/dev infrastructure, all while maintaining OLTP and other production SLAs by harnessing the capabilities of an XtremIO all-flash array. Learn how our customers are accelerating and scaling databases while consolidating the broader database infrastructure to achieve efficiency, agility and flexibility, all with XtremIO.
