Welcome back! Today it's part 7 of 'An Inside Look' and we are going to be looking at volume replication using SRDF with VMAX & OpenStack Ocata. Before we get stuck in, if you would like to revisit last week's article on Consistency Groups using VMAX & OpenStack, click here. This week's article will get fairly heavy and technical in places, so I will do my best to break it down into logical, easy-to-digest chunks of information so as not to overwhelm you too much!

 

What is SRDF?

Symmetrix Remote Data Facility (SRDF) is the gold standard for providing disaster recovery (DR) and data mobility solutions for VMAX arrays.  SRDF provides external protection of data at a remote site over synchronous and asynchronous distances, and also in an Active/Active model.  SRDF has been around a long time, continuously improving and protecting the world's most mission-critical systems with disaster recovery and disaster avoidance solutions. There is a lot of reading material out there if you are not familiar with it, and the SRDF Family CLI User Guide goes through all of this in detail, so I won't attempt to cover it all off here, only what is relevant to using VMAX with OpenStack.  The latest 8.4 SRDF Family CLI User Guide can be found here.

 

VMAX, SRDF & OpenStack

The VMAX drivers for OpenStack use synchronous SRDF (SRDF/S) as the replication type. At present the VMAX Cinder drivers support a single replication target per back end; concurrent SRDF and cascaded SRDF are unsupported. Also, in order for replication to work with VMAX & OpenStack you will need the correct licenses added to your VMAX, namely the Advanced Suite and Remote Replication Suite (more information can be found in the introductory article of this series).

 

What is Synchronous SRDF (SRDF/S)?

SRDF synchronous (SRDF/S) mode maintains a real-time copy at arrays generally located within 200 kilometres of each other (dependent upon application workload, network latency, and block size). Writes from the production host are acknowledged by the local array once they have been written to cache at the remote array, creating a real-time mirror of the primary devices.

 

SRDF disaster recovery solutions, including SRDF synchronous, traditionally use active, remote mirroring and dependent-write logic to create consistent copies of data. Dependent-write consistency ensures transactional consistency when the applications are restarted at the remote location.

 

An SRDF device is a logical device paired with another logical device that resides in a second array. The arrays are connected by SRDF links. R1 devices are the members of the device pairs at the primary (production) site. R1 devices are generally read/write accessible to the host. R2 devices are the members of the device pairs at the secondary (remote) site. During normal operations, host I/O writes to the R1 device are mirrored over the SRDF links to the R2 device.
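
If you have Solutions Enabler installed against your arrays you can see this pairing from the CLI. This is only a rough sketch: the serial number and SRDF group number below are placeholder assumptions, and the exact options may vary slightly with your Solutions Enabler version, so check the SRDF Family CLI User Guide linked earlier.

$ symcfg list -sid 000197800123 -rdfg all        # list the configured SRDF (RA) groups on the array
$ symrdf list -sid 000197800123 -rdfg 28         # list the R1/R2 device pairs in SRDF group 28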

 

 

[Figure: SRDF links between the primary (R1) and secondary (R2) arrays]

 

Traditionally, data on R2 devices is not available to the host while the SRDF relationship is active. In SRDF synchronous mode, an R2 device is typically write disabled (read only), which allows a remote host to read from the R2 devices. In a typical open systems host environment, the production host has read/write access to the R1 device, and a host connected to the R2 device has read only access to it. To access the R2 device of a traditional synchronous relationship, a manual failover command must be performed to write enable the R2 site so it can accept host writes.
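
To make that last point a little more concrete, a traditional failover from the Solutions Enabler CLI looks roughly like the sketch below. The device pair file, serial number and group number are placeholder assumptions, so treat this purely as an illustration; as we will see later, OpenStack drives the equivalent of this for you via the failover host command.

# rdf_pairs.txt is an assumed file listing the R1:R2 device pairs, one pair per line
$ symrdf -f rdf_pairs.txt -sid 000197800123 -rdfg 28 failover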

 

If you would like to find out about this process in much finer detail, the SRDF CLI user guide linked earlier has great sections on full & incremental SRDF pairings and the failover procedure from a VMAX perspective. This information can be found in the section 'Basic SRDF Control Operations'. There is too much to include in this article, so I would recommend that section as a good starting point to read up more on the topic.

 

Disaster Recovery Considerations with VMAX & OpenStack Environments

When preparing your environment for SRDF replication and DR, there are a number of things which must be taken into consideration should the situation arise where your local environment becomes inaccessible.

  • For full failover functionality, the source and target VMAX arrays must be discovered and managed by the same SMI-S/ECOM server, that is, locally connected. This SMI-S/ECOM server cannot be the embedded instance; it can only be installed on a physical server or on a VM hosted by an ESX server.
  • With both arrays being managed by the one SMI-S server, it is the cloud storage administrator's responsibility to account for a DR scenario where the management (SMI-S) server goes down as well as the primary array. In that event, the details and credentials of a back-up SMI-S server can be passed into the XML file and the VMAX Cinder driver restarted. It would be advisable to have the SMI-S server at a third location (separate from both arrays) if possible.
  • If the source and target arrays are not managed by the same management server (that is, the target array is remotely connected to the server), then in the event of a full disaster scenario (for example, the primary array is completely lost and all connectivity to it is gone), the SMI-S server would no longer be able to contact the target array. In this scenario, the volumes would be automatically failed over to the target array, but administrator intervention would be required to either configure the target (remote) array as local to the current SMI-S server, or enter the details of a second SMI-S server (one locally connected to the target array) into the XML file and restart the cinder volume service (a minimal example of this swap-and-restart follows this list).
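
As a rough illustration of that last point, the swap usually amounts to editing the back end's XML file and bouncing the cinder-volume service. The file name below is the one used in the example later in this article, and the service name is an assumption that varies by distribution (RDO/Packstack installs typically use openstack-cinder-volume, while devstack uses devstack@c-vol):

$ vi /etc/cinder/cinder_emc_config_VMAX_FC_REPLICATION.xml    # point the Ecom* entries at the back-up SMI-S server
$ systemctl restart openstack-cinder-volume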

 

Configuring your Environment for Volume Replication

The first step in configuring your OpenStack environment with volume replication abilities is to set up the associated SRDF group that will be used by OpenStack to replicate volumes between the source and target VMAXs. To create an SRDF group:

  1. Select your source VMAX in Unisphere and navigate to 'Data Protection' > 'Create SRDF Group'
  2. Select your communication protocol type and give the group a label (name)
  3. It is possible to configure multiple SRDF groups on the same source and target, so to differentiate between them a group number is assigned. Select an unused group number and enter it into the dialogue box
  4. Select the director ports on the primary VMAX, holding CTRL to select multiple director:port combinations
  5. In the remote section, click 'scan' to show all available VMAXs and pick your target
  6. Enter your SRDF group number and select the target director ports
  7. Click 'OK' to create the SRDF group

[Screenshot: Creating an SRDF group in Unisphere for VMAX]
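
If you prefer the Solutions Enabler CLI over the Unisphere workflow above, the equivalent operation is a symrdf addgrp. The serial numbers, group numbers, label and director:port values below are purely illustrative placeholders, and the exact director notation differs between platforms, so treat this as a sketch and check the SRDF Family CLI User Guide for your environment:

$ symrdf addgrp -label os_repl -sid 000197800123 -rdfg 28 -dir 1E:4 \
  -remote_sid 000197811111 -remote_rdfg 28 -remote_dir 1E:4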

 

Once the SRDF group has been created, that is it; there is no more configuration required on the VMAX side of things. The VMAX Cinder drivers will handle the rest of the setup on the first volume replication task.

 

With your SRDF group configured, the next step is to create a replication-enabled volume type in OpenStack. As with any volume type, this involves configuring a back end stanza for it in /etc/cinder/cinder.conf along with an associated XML configuration file. When configuring your replication back end stanza in cinder.conf, the following rules apply:

  • The source array is defined in the XML configuration file, which uses the same format as any other VMAX back end (a sketch of this file follows this list)
  • The target array is defined in the replication_device parameter
  • Only one target array can be defined per back end
  • If SSL is required it will also need to be configured in the back end stanza, along with any other back end specific configurations as standard
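
For reference, here is a minimal sketch of what that XML file might look like, along the lines of the standard VMAX back end format. The IP address, port, credentials, port group name, array serial and pool are placeholder values, and I have left out any service level/workload tags you might normally include:

<?xml version="1.0" encoding="UTF-8"?>
<EMC>
  <EcomServerIp>10.10.10.10</EcomServerIp>
  <EcomServerPort>5989</EcomServerPort>
  <EcomUserName>admin</EcomUserName>
  <EcomPassword>password</EcomPassword>
  <PortGroups>
    <PortGroup>OS-FC-PG</PortGroup>
  </PortGroups>
  <Array>000197800123</Array>
  <Pool>SRP_1</Pool>
</EMC>

With the XML file in place, the replication back end stanza in cinder.conf looks like this: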

 

[DEFAULT]
enabled_backends = VMAX_FC_REPLICATION
[VMAX_FC_REPLICATION]
volume_driver = cinder.volume.drivers.dell_emc.vmax.fc.EMCVMAXFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_VMAX_FC_REPLICATION.xml
volume_backend_name = VMAX_FC_REPLICATION
replication_device = target_device_id:000197811111, remote_port_group:os-failover-pg,\
remote_pool:SRP_1, rdf_group_label:28_11_07, allow_extend:False
 

Note: The replication_device parameter must be defined all on one line; it is only split across two lines in the example above for ease of reading.

 

  • target_device_id is the unique VMAX array serial number of the target array. For full failover functionality, the source and target VMAX arrays must be discovered and managed by the same SMI-S/ECOM server, that is, locally connected. Follow the instructions in the SMI-S release notes.
  • remote_port_group is the name of a VMAX port group that has been pre-configured to expose volumes managed by this back end in the event of a failover. Make sure that this port group contains either all FC or all iSCSI ports, as appropriate for the configured driver (iSCSI or FC).
  • remote_pool is the unique pool name for the given target array.
  • rdf_group_label is the name of a VMAX SRDF group (Synchronous) that has been pre-configured between the source and target arrays.
  • allow_extend is a flag for allowing the extension of replicated volumes. To extend a volume in an SRDF relationship, the relationship must first be broken, the source and target volumes are then independently extended, and finally the replication relationship is re-established. As the SRDF link must be severed, due caution should be exercised when performing this operation. If not explicitly set, this flag defaults to False.

 

The last step is to create a replication-enabled volume type. Once the replication_device parameter has been entered in the VMAX back end entry in cinder.conf, a corresponding volume type needs to be created with the replication_enabled property set.

$ openstack volume type create VMAX_FC_REPLICATION

$ openstack volume type set --property volume_backend_name=VMAX_FC_REPLICATION VMAX_FC_REPLICATION

$ openstack volume type set --property replication_enabled='<is> True' VMAX_FC_REPLICATION

 

[Screenshot: Creating the replication-enabled volume type in OpenStack]

 

With the volume type created for use in OpenStack, that is all the steps complete; any volumes created using this new replication-enabled volume type will be automatically replicated between your source and target VMAXs. If you look in Unisphere at the SRDF group you created in the first step, all volumes created using the replication volume type will show as source volume and replicated target volume pairs.
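
For example, creating a replicated volume needs nothing beyond the new type; the volume name here is just an arbitrary example:

$ openstack volume create --size 10 --type VMAX_FC_REPLICATION repl_demo_vol
$ openstack volume show repl_demo_vol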

 

[Screenshot: The SRDF group in Unisphere for VMAX showing synchronised source and target volumes]

 

An attempt will be made to create a storage group on the target array with the same service level (SL) and workload combination as the primary. However, if this combination is unavailable on the target (for example, in a situation where the source array is a Hybrid, the target array is an All Flash, and an All Flash incompatible SL like Bronze is configured), no SL will be applied.

 

Volume replication interoperability with other features

Most features are supported except for the following:

  • There is no OpenStack Consistency Group support for replication-enabled VMAX volumes.
  • Storage-assisted retype operations on replication-enabled VMAX volumes (moving from a non-replicated type to a replicated type and vice-versa, or moving to another SLO/workload combination, for example) are not supported.
  • The image volume cache functionality is supported (enabled by setting image_volume_cache_enabled = True), but because the initial boot volume is created at the minimum required size for the requested image and then extended to the user-specified size, one of two actions must be taken when creating the cached volume (a configuration sketch for the second option follows this list):
    • The first boot volume created on a back end (which will trigger the cached volume to be created) should be the smallest necessary size. For example, if the minimum size disk to hold an image is 5GB, create the first boot volume as 5GB.
    • Alternatively, ensure that the allow_extend option in the replication_device parameter is set to True.
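
Sticking with the example back end from earlier, enabling the second option would look something like the sketch below (replication_device on a single line, values as before):

[VMAX_FC_REPLICATION]
# ... other back end settings as shown earlier ...
image_volume_cache_enabled = True
replication_device = target_device_id:000197811111, remote_port_group:os-failover-pg, remote_pool:SRP_1, rdf_group_label:28_11_07, allow_extend:True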

 

Failover host

In the event of a disaster, or where there is planned downtime (an upgrade of the primary array, for example), the administrator can issue the failover host command to fail over to the configured target. When making these changes to or from failover back ends using the CLI you will get no confirmation that the switch was successful; however, if you monitor the Cinder volume logs you will see the change being carried out there, with the remote array becoming the source and vice-versa.

$ cinder failover-host cinder_host@VMAX_FC_REPLICATION

[Screenshot: Cinder volume logs during failover to the target array]
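
If you would rather not trawl the logs, newer versions of the cinder client can also report the replication state of each cinder-volume service. The option below is my assumption of what is available on an Ocata-era client, so confirm with cinder help service-list on your install; the output should show the replication status and active back end ID for each service:

$ cinder service-list --withreplication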

 

If the primary array becomes available again, you can initiate a failback using the same command, this time specifying --backend_id default:

$ cinder failover-host cinder_host@VMAX_FC_REPLICATION --backend_id default 

[Screenshot: Cinder volume logs during failback to the primary array]

 

Troubleshooting issues with VMAX Replication in OpenStack

As with the potential issues faced with the features discussed up until this point, the most likely place to look when something goes wrong is the configuration and setup of replication for use in OpenStack. There are a few moving parts which must come together in order for replication to work as intended. I will not cover the ECOM-related considerations surrounding DR here, as they are discussed earlier in this article.

 

If you are having issues with a VMAX replication-enabled volume type in OpenStack, check the following (a couple of quick CLI checks are sketched after this list):

  • Is the backend stanza in cinder.conf configured correctly?
  • Is the OpenStack volume type correctly configured and extra spec for replication added?
  • Is the RDF group correctly configured in Unisphere or SE?
  • Are you trying an unsupported operation?
    • There is no CG support for replicated volumes
    • There is no volume retype support
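
For the first two checks, a couple of commands can save some digging. The log path below is an assumption, as it varies by distribution and by how Cinder was deployed:

$ openstack volume type show VMAX_FC_REPLICATION
$ grep -iE "rdf|replication" /var/log/cinder/volume.log

The volume type output should show both the volume_backend_name extra spec matching your cinder.conf stanza and replication_enabled set to '<is> True'.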

 

Coming up next time...

Next time in 'VMAX & OpenStack Ocata: An Inside Look' we will be looking at storage-assisted volume migration, otherwise known as volume retype!  As always, thanks for reading, and if you have any comments, suggestions, document fixes, or questions, feel free to contact me directly or via the comments section below!