Welcome back to part 5 of VMAX & OpenStack Ocata: An Inside Look! In my last post I covered snapshots and backups in Cinder using VMAX; if you would like to revisit that article, please click here. Today we will look at managing and unmanaging VMAX volumes in OpenStack Ocata.

 

Manage Volumes

Managing volumes in OpenStack is the process whereby a volume which exists on the storage device is imported into OpenStack to be made available for use in the OpenStack environment.  For a volume to be valid for managing into OpenStack, the following prerequisites must be met:

  • The volume exists in a Cinder managed pool
  • The volume is not part of a Masking View
  • The volume is not part of an SRDF relationship
  • The volume is configured as a TDEV (thin device)
  • The volume is set to FBA emulation
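
If you have Solutions Enabler available, a couple of these prerequisites can be sanity-checked from the command line before attempting the manage operation. The sketch below is illustrative only: the array ID (111111111111) and device ID (031D8) are placeholders from the examples later in this post, and the exact field names in the symdev output may vary slightly between Solutions Enabler versions. Masking view and SRDF membership are easiest to confirm against the device in Unisphere.

# Confirm the device is configured as a TDEV with FBA emulation
$ symdev -sid 111111111111 show 031D8 | grep -iE "emulation|configuration"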

 

For a volume to exist in a Cinder managed pool, it must reside in the same Storage Resource Pool (SRP) as the back end which is configured for use in OpenStack. For the purposes of this article, my configured back end will be using the Diamond service level with no workload type specified. The pool name can be entered manually, as it always follows the same format:

Pool format: <service_level>+<workload_type>+<srp>+<array_id>

Pool example 1: Diamond+DSS+SRP_1+111111111111

Pool example 2: Diamond+SRP_1+111111111111

Values

service_level - The service level of the volume to be managed

workload_type - The workload type of the volume to be managed (omitted when the back end has no workload specified, as in example 2)

srp - The Storage Resource Pool configured for use by the back end

array_id - The numerical VMAX ID

 

It is also possible to get the pool name using the Cinder CLI command cinder get-pools. Running this command will return the available pools from your configured back ends. Each of the pools returned will also have the host name and back end name specified; these values are needed in the next step.


[Screenshot: output of cinder get-pools]
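
For reference, the output will look something like the following; the host, back end, and pool details below are illustrative of my environment rather than definitive output:

$ cinder get-pools
+----------+----------------------------------------------------+
| Property | Value                                              |
+----------+----------------------------------------------------+
| name     | demo@VMAX_ISCSI_DIAMOND#Diamond+SRP_1+111111111111 |
+----------+----------------------------------------------------+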


With your pool name defined you can now manage the volume into OpenStack using the CLI command cinder manage. The --bootable parameter is optional; if the volume to be managed into OpenStack is not bootable, leave this parameter out. OpenStack will also determine the size of the volume when it is managed, so there is no need to specify the volume size.

Command Format:

$ cinder manage --name <new_volume_name> --volume-type <vmax_vol_type> --availability-zone <av_zone> [--bootable] <host> <identifier>

Command Example:

$ cinder manage --name vmax_managed_volume --volume-type VMAX_ISCSI_DIAMOND \
  --availability-zone nova demo@VMAX_ISCSI_DIAMOND#Diamond+SRP_1+111111111111 031D8

[Screenshot: output of cinder manage]


After the above command has been run, the volume will be available for use in the same way as any other OpenStack VMAX volume.
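
If you want to confirm the result, a quick check with the Cinder CLI should show the newly managed volume as available; the volume name below matches the earlier example and only a subset of the output fields is shown:

$ cinder show vmax_managed_volume
# Look for status 'available' and the expected host, e.g.
# os-vol-host-attr:host | demo@VMAX_ISCSI_DIAMOND#Diamond+SRP_1+111111111111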


Managing Volumes with Replication Enabled

Whilst it is not possible to manage volumes into OpenStack that are part of an SRDF relationship, it is possible to manage a volume into OpenStack and enable replication at the same time. This is done by having a replication-enabled VMAX volume type (discussed in part 7 of this series); during the manage volume process you specify that replication-enabled volume type as the chosen volume type. Once managed, replication will be enabled for that volume.
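
As a rough sketch, assuming a replication-enabled volume type called VMAX_ISCSI_DIAMOND_REP has already been created (as covered in part 7) and is associated with the same back end, the command is the same as before with only the volume type and device ID swapped out; both names here are placeholders:

$ cinder manage --name vmax_managed_rep_volume --volume-type VMAX_ISCSI_DIAMOND_REP \
  --availability-zone nova demo@VMAX_ISCSI_DIAMOND#Diamond+SRP_1+111111111111 031D9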


Unmanaging a Volume

Unmanaging a volume is not the same as deleting a volume. When a volume is deleted from OpenStack, it is also deleted from the VMAX at the same time. Unmanaging a volume is the process whereby a volume is removed from OpenStack but remains available for further use on the VMAX. The volume can also be managed back into OpenStack at a later date using the process discussed in the previous section. Unmanaging a volume is carried out using the cinder unmanage CLI command:

Command Format:

$ cinder unmanage <volume_name/volume_id>

Command Example:

$ cinder unmanage vmax_test_vol

[Screenshot: output of cinder unmanage]


Once unmanaged from OpenStack, the volume can still be retrieved using its device ID or OpenStack volume ID. Within Unisphere you will also notice that the 'OS-' prefix has been removed; this is another visual indication that the volume is no longer managed by OpenStack.
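
Since the device remains intact on the array, it can be brought back under OpenStack control later with the same manage command shown earlier. Purely as an illustration, re-importing the example device would look like this (the new volume name is arbitrary):

$ cinder manage --name vmax_reimported_volume --volume-type VMAX_ISCSI_DIAMOND \
  --availability-zone nova demo@VMAX_ISCSI_DIAMOND#Diamond+SRP_1+111111111111 031D8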


[Screenshot: unmanaged volume in Unisphere with the 'OS-' prefix removed]

Troubleshooting issues with Managing & Unmanaging Volumes

When managing & unmanaging volumes there are a number of things which may contribute to the operation not being carried out successfully. Unmanaging volumes is the more straightforward of the two operations, so I will cover that first. When a volume is unmanaged from OpenStack, only two things happen: the volume is removed from the Cinder database (just an SQL command run in the background against the required tables) and the volume is renamed on the array to remove the 'OS-' prefix. The only thing that can go wrong here from a VMAX perspective is if the connection to the ECOM server goes down during the rename operation. Check the status of your ECOM server and try the operation again, restarting the ECOM if necessary.
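
How you check the ECOM server will depend on your environment, but as a minimal sketch, simply confirming that the ECOM listener is reachable from the Cinder volume node is often enough to rule out a connectivity problem. The ports below are the usual ECOM HTTP/HTTPS defaults; substitute your own ECOM server IP:

# Check that the ECOM ports respond from the Cinder volume node
$ nc -zv <ecom_server_ip> 5988
$ nc -zv <ecom_server_ip> 5989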


Managing a volume into OpenStack is trickier, as there are more moving parts and more prerequisites which must be met for it to work successfully. The first thing to check is that your volume meets all of the following requirements:

  • The volume exists in a Cinder managed pool
  • The volume is not part of a Masking View
  • The volume is not part of an SRDF relationship
  • The volume is configured as a TDEV (thin device)
  • The volume is set to FBA emulation

 

If all of the above requirements are met, then you need to make sure that the host is properly specified in the manage command itself. The host parameter takes the format:

Format: <hostname>@<volume_type>#<service_level>+<srp_id>+<array_id>

Example: demo@VMAX_ISCSI_DIAMOND#Diamond+SRP_1+111111111111

 

If the host is defined correctly, is the device identifier correct? You can double-check this easily by looking at the volume in Unisphere or through Solutions Enabler. With everything correctly configured and checked up to this point, you should have no problems managing the volume into OpenStack.
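
As a quick illustration of the Solutions Enabler route, listing the thin devices on the array will confirm whether the device ID you are passing to cinder manage actually exists; the array ID here is the same placeholder used throughout this post:

$ symdev list -sid 111111111111 -tdev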

 

There may be some confusion over the replication-enabled ability for VMAX volumes managed into OpenStack. What is meant here is that the volume being managed must not be part of an SRDF relationship before it is managed; after being managed, it is assigned an OpenStack volume type with replication enabled. The newly managed volume is therefore added to a replication-enabled volume type and is replicated from that point forward.

 

What's coming up in part 6 of 'VMAX & OpenStack Ocata: An Inside Look'...

In the next installment of 'VMAX & OpenStack Ocata: An Inside Look' we will be looking at OpenStack Consistency Groups. Not to be confused with VMAX Consistency Groups (which are an SRDF feature), OpenStack Consistency Groups allow snapshots of multiple volumes in the same consistency group to be taken at the same point in time to ensure data consistency. See you then!