The VMAX drivers for OpenStack Cinder support migrating volumes between multiple VMAX back-ends (volume types). Migrating a volume in this way moves it from its current back-end to a new one. As noted in the official OpenStack documentation, 'this is an administrator function, and can be used for functions including storage evacuation (for maintenance or decommissioning), or manual optimizations (for example, performance, reliability, or cost)'.


This feature is supported by VMAX-3 series arrays and gives the user the option to switch volumes between various Service Level & Workload combinations. For example, if a VMAX-backed volume in OpenStack has a Diamond & OLTP combination, it can be migrated to a volume type with a Silver & OLTP combination.


There are 3 workflows for volume migration in OpenStack:

  1. If the VMAX is able to migrate the volume between volume types on its own, it will do so. If not, one of the two following workflows is used:
  2. If the volume is not attached to an instance, Cinder creates a volume using the target volume type and copies the data from the source volume to the new volume.
  3. If the volume is attached to an instance, Cinder creates a volume using the target volume type but this time calls Nova (the Compute Service) to copy the data from the source volume to the new volume. You can check a volume's attach status beforehand, as shown below.
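Which of the last two workflows applies depends on whether the volume is attached, so it can be worth checking the volume's state before retyping. A quick check with the standard OpenStack client (vmax_vol1 is an illustrative volume name):

$ openstack volume show vmax_vol1 -c status -c attachments

If the array cannot perform the migration itself, a status of 'available' means Cinder will copy the data itself, while 'in-use' (with attachments listed) means Nova will be called on to do the copy.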

 

Configuring your environment for volume retype

To support volume retype in your VMAX & OpenStack environment, there are some changes which need to be made to the Cinder configuration file and the back-end XML configuration file.

 

1.  Add the parameter multi_pool_support to the configuration group in the /etc/cinder/cinder.conf file and set it to True

 

[CONF_GROUP_VMAX]
volume_driver = cinder.volume.drivers.dell_emc.vmax.fc.EMCVMAXFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_VMAX.xml
volume_backend_name = VMAX_backend
multi_pool_support = True
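
For the Cinder scheduler to pick up this back-end, it also needs to be listed in the enabled_backends option in the [DEFAULT] section of cinder.conf (shown here with just the one back-end; append it to any existing list in your environment):

[DEFAULT]
enabled_backends = CONF_GROUP_VMAX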

The next step is to configure a single back-end per VMAX Storage Resource Pool (SRP). This is different from the regular configuration where one back-end is configured per service level/workload combination.


2. Create the file /etc/cinder/cinder_emc_config_CONF_GROUP_VMAX.xml (as linked to by the cinder_emc_config_file parameter in step 1) and add the following lines. The difference from the previous XML file created during initial setup is the removal of the <ServiceLevel> and <Workload> tags; with retype, these values are no longer required here. Note: the XML filename still follows the naming convention cinder_emc_config_[CONF_GROUP].xml

 

<?xml version = "1.0" encoding = "UTF-8" ?>
<EMC>
   <EcomServerIp>1.1.1.1</EcomServerIp>
   <EcomServerPort>00</EcomServerPort>
   <EcomUserName>user1</EcomUserName>
   <EcomPassword>password1</EcomPassword>
   <PortGroups>
      <PortGroup>OS-PORTGROUP1-PG</PortGroup>
      <PortGroup>OS-PORTGROUP2-PG</PortGroup>
   </PortGroups>
   <Array>111111111111</Array>
   <Pool>SRP_1</Pool>
</EMC>
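
Optionally, you can confirm the file is well-formed XML before restarting anything, since a malformed config file can stop the back-end from initialising. One quick way, assuming xmllint is available on the host:

$ xmllint --noout /etc/cinder/cinder_emc_config_CONF_GROUP_VMAX.xml

No output means the file parsed cleanly.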
 

3. With cinder.conf updated and the new VMAX back-end config file created, restart the Cinder volume service for the changes to take effect:

Ubuntu: $ sudo service cinder-volume restart

RedHat/CentOS/SLES/openSUSE: $ sudo systemctl restart cinder-volume (on some distributions the systemd unit is named openstack-cinder-volume)
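
To confirm the service came back up cleanly before continuing, you can check its status (same service name caveat as above) and watch the volume log while the back-end initialises; the log location varies by installation, with /var/log/cinder/volume.log being a common default:

$ sudo systemctl status cinder-volume

$ tail -f /var/log/cinder/volume.log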

4. Now that the VMAX back-end is configured for volume retype, we can proceed with creating our new retype-supported volume types. The example below demonstrates creating a volume type for the Diamond Service Level and OLTP Workload

$ openstack volume type create VMAX_FC_DIAMOND_OLTP

$ openstack volume type set --property volume_backend_name=VMAX_backend VMAX_FC_DIAMOND_OLTP

$ openstack volume type set --property pool_name=Diamond+OLTP+SRP_1+111111111111 VMAX_FC_DIAMOND_OLTP

Repeat step 4 to create volume types for as many Service Level/Workload combinations as your environment needs; one more example follows below. The additional property set in the last command, pool_name, is where you define your Service Level/Workload combination. It uses the following format: <ServiceLevel>+<Workload>+<SRP>+<Array ID>
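
For example, to create the Silver/OLTP volume type used in the retype example later in this post (the values mirror the Diamond example above and are illustrative):

$ openstack volume type create VMAX_SILVER_OLTP

$ openstack volume type set --property volume_backend_name=VMAX_backend VMAX_SILVER_OLTP

$ openstack volume type set --property pool_name=Silver+OLTP+SRP_1+111111111111 VMAX_SILVER_OLTP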

 

5. Once you have created all of the volume types required, you can check they were added successfully with the command:

$ cinder type-list
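
With multi_pool_support enabled, each Service Level/Workload combination is also reported to the Cinder scheduler as a separate pool on the one back-end. As an admin you can list these to confirm they were discovered:

$ cinder get-pools

The pool portion of each entry should match the <ServiceLevel>+<Workload>+<SRP>+<Array ID> format used for the pool_name property above.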

 

[Screenshot: cinder type-list output showing the new volume types]

 

Retyping your volume from one volume type to another

To migrate a volume from one Service Level/Workload combination to another, use volume retype with the migration-policy set to on-demand. The target volume type should have the same volume_backend_name configured and should have the desired pool_name to which you are retyping.

Command Structure

$ cinder retype --migration-policy on-demand <volume> <volume-type>

Command Example

$ cinder retype --migration-policy on-demand vmax_vol1 VMAX_SILVER_OLTP
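
While the retype is running the volume's status changes to 'retyping', and an administrator can follow progress from the volume's details:

$ cinder show vmax_vol1

Keep an eye on the status and os-vol-mig-status-attr:migstat fields; once the operation completes, volume_type should report the new type.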

 

[Screenshot: volume details after the retype, showing the host attribute and new volume type]

 

The example above shows a volume which has just been retyped to a new volume type. You will notice that the host attribute still retains the name of the back end with which all of the service level/workload pools are associated. Because the back end now uses multi-pool, multiple volume types can be created that all use the same back end. This differs from previous examples, where one back end was associated with exactly one volume type; with multi-pool, the relationship between back ends and volume types is 1-to-many instead of 1-to-1.
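
Concretely, the host attribute follows Cinder's host@backend#pool convention, so after the retype above you would expect something along these lines (the host name myhost is illustrative):

$ cinder show vmax_vol1 | grep host
| os-vol-host-attr:host | myhost@CONF_GROUP_VMAX#Silver+OLTP+SRP_1+111111111111 |

Only the pool segment after the '#' changes with the retype; the host and back-end segments stay the same.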

 

Troubleshooting retyping VMAX volumes

There are only a few checks to carry out if you find that VMAX volume retype is not working as expected:

  • Is the back-end stanza in cinder.conf for your VMAX back end correctly configured, with the additional 'multi_pool_support = True' parameter included?
  • Did you restart all Cinder services after making the change in the previous point?
  • Is the XML configuration file associated with the back end in cinder.conf correct? Did you remove the <ServiceLevel> and <Workload> tags from it?
  • Are the OpenStack volume types correctly set up? Is the pool_name property correct? (A few quick checks follow below.)
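
A couple of quick commands cover most of these checks (the config group and type names are from the examples above; adjust them for your environment):

$ grep -A 4 'CONF_GROUP_VMAX' /etc/cinder/cinder.conf

$ openstack volume type show VMAX_FC_DIAMOND_OLTP

The first shows whether the stanza includes multi_pool_support = True; the second lists the type's properties so you can verify volume_backend_name and pool_name.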

 

Coming up next time on 'VMAX & Ocata: An Inside Look'...

Next time around we are going to take a look at live migration in OpenStack, where running instances using VMAX block storage volumes are migrated from one compute host to another without any breaks to operations or services, pretty nifty! As always, if you have any comments, recommendations, or questions, leave them in the comments section below or send me a private message! See you next time!