
Michael McAleer's Blog


Welcome back to VMAX & OpenStack Ocata: An Inside Look! Although we are on to part 2 of our multi-part series, this piece can be seen as more of an extension of what we covered in part 1, where we went through the basic setup of your VMAX & OpenStack environment.  This time we are going to take your environment setup that bit further and talk about the areas of over-subscription, quality of service (QoS), and compression.

 

Again, and as always, if you have any feedback, comments, spot any inconsistencies, want something covered, or just have a question answered, please feel free to contact me directly or leave a comment in the comments section below!

 

Over-Subscription

OpenStack Cinder enables you to choose a volume back-end based on virtual capacities for thin provisioning using the over-subscription ratio.  To support over-subscription in thin provisioning, a flag max_over_subscription_ratio is introduced into cinder.conf and the existing flag reserved_percentage must be set. These flags are both optional and do not need to be included if over-subscription is not required for the back end.

 

The max_over_subscription_ratio flag is a float representation of the over-subscription ratio when thin provisioning is involved. The list below illustrates how the float value maps to over-subscribed provisioned capacity:

 

  • 20.0 (default): 20x total physical capacity
  • 10.5: 10.5x total physical capacity
  • 1.0: no over-subscription
  • 0.9 or lower: ignored

 

Note: max_over_subscription_ratio can be configured for each back end when multiple-storage back ends are enabled. For a driver that supports multiple pools per back end, it can report this ratio for each pool.


The existing reserved_percentage flag is used to prevent over-provisioning. This flag represents the percentage of the back-end capacity that is reserved: a buffer of physical space that will never be reported as free. For example, if there is only 4% of physical space left and the reserved percentage is 5, the reported free space will equate to zero. This is a safety mechanism to prevent a scenario where a provisioning request fails due to insufficient raw space.

 

Note: There has been a change in how reserved_percentage is used. In the past it was measured against the free capacity; it is now measured against the total capacity.


Example VMAX Configuration Group

The code snippet below demonstrates the settings configured in a VMAX backend configuration group within cinder.conf:

 

[CONF_GROUP_ISCSI]
cinder_emc_config_file = /etc/cinder/cinder_emc_config_VMAX_ISCSI_SILVER.xml
volume_driver = cinder.volume.drivers.dell_emc.vmax.iscsi.VMAXISCSIDriver
volume_backend_name = VMAX_ISCSI_SILVER
max_over_subscription_ratio = 2.0
reserved_percentage = 10

 

Over-subscription considerations and troubleshooting

There is very little required in terms of configuration for over-subscription; if the key/value pairs are set correctly as demonstrated above, that is it. If you do run into problems with over-subscription, it will most likely be running out of space when you hit the reserved percentage. If this happens, you will need to either decrease the reserved percentage or look at creating a new volume type with new limits.
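If you want to see exactly what the scheduler is working with, the pool statistics that the VMAX driver reports (total, free, and provisioned capacity, plus the over-subscription ratio and reserved percentage per pool) can be viewed from the CLI; a quick check, assuming you are authenticated as an admin user:

$ cinder get-pools --detail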


Quality of Service (QoS)

Quality of Service (QoS) is the measurement of the overall performance of a service, particularly the performance seen by the users of a given network.  To quantitatively measure QoS, several related aspects of the network service are often considered, but for VMAX & OpenStack environments we are going to focus on three:

  • I/O limit per second (IOPS) - The number of read/write operations per second. In the context of QoS, this value specifies the maximum IOPS; valid values range from 100 to 100,000 IOPS (in increments of 100)
  • Throughput per second (MB/s) - The amount of bandwidth in MB per second. Similar to IOPS, this value designates the maximum allowed MB/s; valid values range from 1 MB/s to 100,000 MB/s.
  • Dynamic Distribution - The automatic load balancing of I/O across configured ports. There are two types of Dynamic Distribution, Always and OnFailure:
    • Always - Enables full dynamic distribution mode. When enabled, the configured host I/O limits will be dynamically distributed across the configured ports, thereby allowing the limits on each individual port to adjust to fluctuating demands
    • OnFailure - Enables port failure capability. When enabled, the fraction of configured host I/O limits available to a configured port will adjust based on the number of ports currently online.

 

For more information on setting host IO limits for VMAX please refer to the 'Unisphere for VMAX Online Guide' section called 'Setting Host I/O Limits'.

 

Configuring QoS in OpenStack for VMAX

In OpenStack, we create QoS settings for volume types so that all volumes created with a given volume type have the respective QoS settings applied. There are two steps involved in creating the QoS settings in OpenStack:

  • Creating the QoS settings
  • Associating the QoS settings with a volume type

 

When specifying the QoS settings, they are added in key/value pairs. The (case-sensitive) keys for each of the settings are:

  • maxIOPS
  • maxMBPS
  • DistributionType


As with most things in Openstack, there is more than one way to do this, and QoS is no different: you can configure QoS via the CLI or via the Horizon web dashboard. The CLI is the quicker of the two, but if you are not comfortable with CLI commands, or with QoS itself, I would recommend sticking with the web dashboard method. You can find the CLI example below, but if you would like the UI step-by-step guide with screenshots, you can read the DECN hosted document created for this article 'QoS for VMAX on OpenStack - A step-by-step guide'.

 

Setting QoS Spec

1. Create the QoS spec. It is important to note that the QoS key/value pairs here are optional; you need only include a pair if you want to set a value for that specific key. {QoS_spec_name} is the name you want to assign to this QoS spec:

Command Structure:

$ cinder qos-create {QoS_spec_name} maxIOPS={value} maxMBPS={value} DistributionType={Always|OnFailure}

Command Example:

$ cinder qos-create FC_NONE_QOS maxIOPS=4000 maxMBPS=4000 DistributionType=Always

[Screenshot: CLI output confirming creation of the QoS spec]

 

2. Associate the QoS spec from step 1 with a pre-existing VMAX volume type:

Command Structure:

$ cinder qos-associate {QoS_spec_id} {volume_type_id}

Command Example:   

$ cinder qos-associate 0b473981-8586-46d5-9028-bf64832ef8a3 7366274f-c3d3-4020-8c1d-c0c533ac8578

[Screenshot: CLI output confirming the QoS spec to volume type association]
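If you are unsure of the IDs to use in the qos-associate command, both can be retrieved from the CLI; a quick sketch, assuming you are already authenticated:

$ cinder qos-list

$ cinder type-list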

QoS Use-Case Scenarios

When using QoS to set specs for your volumes, it is important to know how the specs behave when set at the Openstack level, the Unisphere level, or both. The following use-cases aim to clarify the expected behaviour, leaving you in complete control of your environment!

 

Use-Case 1 - Default Values

Settings:

SG QoS specs in Unisphere (before change):
  • Host I/O Limit (MB/Sec) = No Limit
  • Host I/O Limit (IO/Sec) = No Limit
  • Set Dynamic Distribution = N/A

QoS specs set in Openstack:
  • maxIOPS = 4000
  • maxMBPS = 4000
  • DistributionType = Always

Outcome:

SG QoS specs in Unisphere (after change):
  • Host I/O Limit (MB/Sec) = 4000
  • Host I/O Limit (IO/Sec) = 4000
  • Set Dynamic Distribution = Always

Outcome - Block Storage (Cinder): Volume is created against the volume type and QoS is enforced with the parameters specified in the Openstack QoS spec.

 

Use-Case 2 - Preset Limits

Settings:

SG QoS specs in Unisphere (before change):
  • Host I/O Limit (MB/Sec) = 2000
  • Host I/O Limit (IO/Sec) = 2000
  • Set Dynamic Distribution = Never

QoS specs set in Openstack:
  • maxIOPS = 4000
  • maxMBPS = 4000
  • DistributionType = Always

Outcome:

SG QoS specs in Unisphere (after change):
  • Host I/O Limit (MB/Sec) = 4000
  • Host I/O Limit (IO/Sec) = 4000
  • Set Dynamic Distribution = Always

Outcome - Block Storage (Cinder): Volume is created against the volume type and QoS is enforced with the parameters specified in the Openstack QoS spec.

 

Use-Case 3 - DistributionType Only

Settings:

SG QoS specs in Unisphere (before change):
  • Host I/O Limit (MB/Sec) = No Limit
  • Host I/O Limit (IO/Sec) = No Limit
  • Set Dynamic Distribution = N/A

QoS specs set in Openstack:
  • DistributionType = Always

Outcome:

SG QoS specs in Unisphere (after change):
  • Host I/O Limit (MB/Sec) = No Limit
  • Host I/O Limit (IO/Sec) = No Limit
  • Set Dynamic Distribution = N/A

Outcome - Block Storage (Cinder): Volume is created against the volume type and there is no change to the volume or its storage group.

 

QoS considerations and troubleshooting

When associating QoS in OpenStack with a volume type, the QoS specs are applied to the group of volumes as a whole, because QoS is set at the storage group level in Unisphere. If you set maxIOPS for a volume type to, say, 5000 and have only one volume of that volume type, the maximum IOPS achievable by that volume is 5000. If at a later point you have 10 volumes using the same volume type, all performing the same workload, each of the 10 volumes will be able to achieve a maximum of around 500 IOPS, so that collectively they never exceed the 5000 IOPS upper limit set on the volume type's QoS spec.

 

To determine if the QoS specs are being set correctly in Unisphere when they are applied to a volume type in OpenStack, you only need to check the properties of the associated storage group in Unisphere. The QoS specs set in OpenStack for the volume type should be reflected exactly in the QoS properties of the storage group. If these settings are not correct, you can check the following:

  1. If the QoS values are incorrect, check the values you have set in OpenStack for the QoS specification; these values are passed directly to Unisphere so they should not differ
  2. If there are no QoS specs on the storage group in Unisphere, check the spelling and case of the key in each key/value pair entered in the QoS specification in OpenStack (the CLI checks below can help)
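The checks above can also be made from the CLI; a hedged example using the QoS spec ID from earlier, where qos-show confirms the key/value pairs and associations, and qos-key corrects or adds a key:

$ cinder qos-show {QoS_spec_id}

$ cinder qos-key {QoS_spec_id} set maxIOPS=4000

Depending on the driver, changes to an already-associated QoS spec may only take effect the next time a volume is created against the volume type.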

 

Compression

If you are using a VMAX All-Flash (250F, 450F, 850F, 950F) in your environment, you can avail of inline compression in your OpenStack environment. By default compression is enabled, so if you want it right now you don't even have to do a thing!

 

VMAX All Flash delivers a net 4:1 overall storage efficiency benefit for typical transactional workloads when inline compression is combined with snapshots and other HYPERMAX OS space saving capabilities. VMAX inline compression minimizes footprint while intelligently optimizing system resources to ensure the system is always delivering the right balance of performance and efficiency. VMAX All Flash inline compression is:

  • Granular: VMAX All Flash compression operates at the storage group (application) level so customers can target those workloads that provide the most benefit.
  • Performance optimized: VMAX All Flash is smart enough to make sure very active data is not compressed until it becomes less active. This allows the system to deliver maximum throughput leveraging cache and SSD technology, and ensures that system resources are always available when required.
  • Flexible: VMAX All Flash inline compression works with all data services, including SnapVX & SRDF

 

Compression, VMAX & OpenStack

As mentioned previously, on an All Flash array any storage group is created with a compression attribute, and compression is enabled by default. Setting compression on a volume type does not mean that all the devices associated with that type will be immediately compressed; it means that compression will be considered for all incoming writes. Setting compression off on a volume type does not mean that all the devices will be uncompressed; it means that writes to compressed tracks will make those tracks uncompressed.

 

Controlling compression for VMAX volume types is handled through the extra specs of the volume type itself. Up until now, the only extra spec we have set for a volume type is volume_backend_name; compression requires an additional extra spec on the volume type, storagetype:disablecompression=[True/False].

 

Note: If the extra spec storagetype:disablecompression is set on a VMAX-3 Hybrid array, it is ignored because compression is not a feature of the VMAX-3 Hybrid.

 

Using Compression for VMAX

Compression is enabled by default on all All-Flash arrays, so you do not have to do anything to enable it for storage groups created by OpenStack. However, there are occasions where you may want to disable compression, or retype a volume from an uncompressed to a compressed volume type (don't worry, retype will be discussed in detail later in this series!).  Before working through the use-cases outlined below, complete the following steps; the corresponding commands are shown after the list:

  1. Create a new volume type called VMAX_COMPRESSION_DISABLED
  2. Set an extra spec volume_backend_name
  3. Set a new extra spec storagetype:disablecompression=True
  4. Create a new volume with the VMAX_COMPRESSION_DISABLED volume type

$ openstack volume type create VMAX_COMPRESSION_DISABLED

$ openstack volume type set --property volume_backend_name=VMAX_COMPRESSION_DISABLED VMAX_COMPRESSION_DISABLED

$ openstack volume type set --property storagetype:disablecompression=True VMAX_COMPRESSION_DISABLED
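For step 4, the volume create command might look like the following; the volume name and the 1GB size are just placeholders for this example:

$ openstack volume create --type VMAX_COMPRESSION_DISABLED --size 1 vmax_vol_nocompress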

 

Use-Case 1: Compression disabled - create, attach, detach, and delete volume

  1. Check in Unisphere or SYMCLI to see if the volume exists in storage group OS-<srp>-<servicelevel>-<workload>-CD-SG, and that compression is disabled on that storage group
  2. Attach the volume to an instance. Check in Unisphere or SYMCLI to see if the volume exists in storage group OS-<shorthostname>-<srp>-<servicelevel>-<workload>-CD-SG, and that compression is disabled on that storage group
  3. Detach the volume from the instance. Check in Unisphere or SYMCLI to see if the volume exists in storage group OS-<srp>-<servicelevel>-<workload>-CD-SG, and that compression is disabled on that storage group.
  4. Delete the volume. If this was the last volume in the OS-<srp>-<servicelevel>-<workload>-CD-SG storage group, the storage group should also be deleted (example CLI calls for these steps are sketched below).
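For reference, steps 2 to 4 map to standard OpenStack CLI calls; a minimal sketch with the instance and volume names as placeholders:

$ openstack server add volume myinstance vmax_vol_nocompress

$ openstack server remove volume myinstance vmax_vol_nocompress

$ openstack volume delete vmax_vol_nocompress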

 

Use-Case 2: Compression disabled - create, delete snapshot and delete volume

  1. Check in Unisphere or SYMCLI to see if the volume exists in storage group OS-<srp>-<servicelevel>-<workload>-CD-SG, and compression is disabled on that storage group
  2. Create a snapshot. The volume should now exist in OS-<srp>-<servicelevel>-<workload>-CD-SG
  3. Delete the snapshot. The volume should be removed from OS-<srp>-<servicelevel>-<workload>-CD-SG
  4. Delete the volume. If this volume is the last volume in OS-<srp>-<servicelevel>-<workload>-CD-SG, it should also be deleted.
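The snapshot operations in steps 2 to 4 can be driven from the CLI as follows; the names are placeholders:

$ openstack volume snapshot create --volume vmax_vol_nocompress vmax_snap_nocompress

$ openstack volume snapshot delete vmax_snap_nocompress

$ openstack volume delete vmax_vol_nocompress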

 

Use-Case 3: Retype from compression disabled to compression enabled

Note: Retype will be discussed in more detail in another article later in this series

  1. Create a new volume type, for example VMAX_COMPRESSION_ENABLED
  2. Set the extra spec volume_backend_name as before
  3. Either set the extra spec storagetype:disablecompression=False, or simply do not set this extra spec at all
  4. Retype from volume type VMAX_COMPRESSION_DISABLED to VMAX_COMPRESSION_ENABLED
  5. Check in Unisphere or SYMCLI to see if the volume exists in storage group OS-<srp>-<servicelevel>-<workload>-SG, and compression is enabled on that storage group
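The retype in step 4 can be performed with the Cinder CLI; a hedged example using the volume created earlier, where --migration-policy on-demand simply allows Cinder to migrate the volume if the driver cannot retype it in place:

$ cinder retype --migration-policy on-demand vmax_vol_nocompress VMAX_COMPRESSION_ENABLED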

 

What's coming up in part 3 of 'VMAX & OpenStack Ocata: An Inside Look'...

With the setup out of the way and the extra functionality taken into consideration, we can now begin to get into the fun stuff: block storage functionality! Next time we will start with the fundamentals, going through all of the basic operations that the VMAX driver supports in OpenStack.

In my last post I went over what you should consider before setting up VMAX with Openstack; if you would like to see that blog article again, click here (TL;DR: we assume that everything is set up - hardware, networking, base operating system, Openstack, Cinder, etc. - meaning we can concentrate on the VMAX-specific tasks with Openstack). Otherwise, let's keep moving forward!

 

Today we are going to be looking at the actual setup and installation of VMAX storage arrays with Openstack, namely the VMAX-3 series & Openstack Ocata. I have numbered each section individually here so as to represent the order in which they should be carried out during configuration.

 

1. Sourcing the VMAX Openstack Drivers

VMAX drivers for Openstack Ocata are currently hosted upstream in the official Openstack repository, meaning that when you install Cinder for Ocata, the most recent drivers as of the date of download are included as standard. To view the drivers online you can follow this link; to download the drivers you will need to download the entire Cinder repository from here and extract the drivers from it.

 

It is always recommended to make sure that you are using the most up-to-date version of the VMAX drivers so you don't miss out on any new features or bug fixes.  If you are updating the VMAX drivers for Cinder, delete all files in the VMAX driver folder apart from '__init__.py', including those with the '.pyc' extension, and then copy the new drivers into the VMAX-specific folder, which can be found at:

${installation_directory}/cinder/cinder/volume/drivers/dell_emc/vmax/

Once you have copied the new drivers into the VMAX folder, make sure you restart all Cinder services (volume, scheduler, api) so that the new drivers take effect.
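As a rough sketch of the update procedure described above (the source path for the new drivers is an assumption, and the Cinder service names vary by distro and deployment method):

$ cd ${installation_directory}/cinder/cinder/volume/drivers/dell_emc/vmax/

$ find . -maxdepth 1 -name '*.py[co]' -delete                       # remove compiled driver files

$ find . -maxdepth 1 -name '*.py' ! -name '__init__.py' -delete     # remove the old drivers, keeping __init__.py

$ cp ~/cinder/cinder/volume/drivers/dell_emc/vmax/*.py .            # copy in the newly downloaded drivers

$ sudo systemctl restart cinder-volume cinder-scheduler cinder-api  # restart the Cinder services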

 

2. Installing PyWBEM

The Cinder drivers perform volume operations by communicating with the back-end VMAX storage. They use a CIM client in Python called PyWBEM to perform CIM operations over HTTP. PyWBEM is a WBEM (Web-Based Enterprise Management) client written in pure Python, supporting both Python 2 and Python 3. A WBEM client allows issuing operations to a WBEM server using the CIM operations over HTTP (CIM-XML) protocol. The CIM/WBEM infrastructure is used for a wide variety of systems management tasks supported by systems running WBEM servers.

 

The required PyWBEM version varies depending on the version of Python you are using. If you are using Python 2 in your environment, please install PyWBEM 0.7.0 natively using the command for your distribution:

Ubuntu: $ sudo apt-get install python-pywbem=0.7.0

RHEL/CentOS/Fedora: $ sudo yum install pywbem-0.7.0

OpenSUSE: $ sudo zypper install python-pywbem=0.7.0

If you are using Python 3, please install PyWBEM version 0.8.4 or 0.9.0 using pip, or 0.7.0 using the native package installation shown above:

All: $ sudo pip install pywbem=={0.9.0/0.8.4}

Ubuntu: $ sudo apt-get install python-pywbem=0.7.0

RHEL/CentOS/Fedora: $ sudo yum install pywbem-0.7.0

OpenSUSE: $ sudo zypper install python-pywbem=0.7.0


Known issues surrounding PyWBEM

On occasion when installing PyWBEM, you may encounter an issue where your system tells you that PyWBEM isn't installed when you know for a fact that you have installed it. The main cause of this problem is a dependency of PyWBEM called 'm2crypto'; if it is missing from the installation, then even though PyWBEM installs it is marked as incomplete/uninstalled. Luckily the fix is a simple one: completely remove the previous PyWBEM/m2crypto packages and reinstall natively:

 

Ubuntu:

$ sudo apt-get remove --purge -y python-m2crypto
$ sudo pip uninstall pywbem
$ sudo apt-get install python-pywbem

RHEL/CentOS/Fedora:

$ sudo yum remove python-m2crypto
$ sudo pip uninstall pywbem
$ sudo yum install pywbem

OpenSUSE:

$ sudo zypper remove --clean-deps python-m2crypto
$ sudo pip uninstall pywbem
$ sudo zypper install python-pywbem
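A quick way to confirm that PyWBEM is importable after reinstalling, using whichever Python interpreter Cinder runs under:

$ python -c "import pywbem; print(pywbem)"

If the module and its install path are printed without error, the m2crypto issue has been resolved.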

 

3. Install iSCSI Utilities (for iSCSI environments only!)

Internet SCSI (iSCSI) is a network protocol that allows use of the SCSI protocol over TCP/IP networks. It is a good alternative to Fibre Channel-based SANs, giving access to SAN storage over Ethernet, and iSCSI volumes can easily be managed, mounted, and formatted under Linux. If iSCSI is the chosen transport medium for your environment, it is necessary to install the supporting iSCSI utilities.

 

The open-iscsi package provides the initiator daemon for the iSCSI protocol, as well as the utility programs used to manage it. This package is available under multiple Linux distributions:

Ubuntu: $ sudo apt-get install open-iscsi

RHEL/CentOS/Fedora: $ sudo yum install iscsi-initiator-utils

OpenSUSE: $ sudo zypper install open-iscsi
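Once installed, it is worth confirming that the initiator daemon is running and noting the initiator IQN that the VMAX will see; a rough check (service names can differ slightly between distros):

$ sudo systemctl status iscsid

$ cat /etc/iscsi/initiatorname.iscsi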

 

4. Solutions Enabler, SMI-S & ECOM Set-up

For this section I will give a brief overview of the steps that should be taken to install Solutions Enabler (SE) 8.3.0.11 or newer along with the SMI-S & ECOM components. There are already comprehensive guides for the installation & configuration of these components, so I will redirect you to those if you want any further information: for detailed installation & configuration instructions please see the 'Solutions Enabler 8.3.0 Installation & Configuration Guide' and the 'ECOM Deployment and Configuration Guide'.  However... if there is enough demand or requests to have this area covered in more detail with regards to its integration into Openstack environments, let me know in the comments or via mail and I will see what I can put together for another article in the VMAX & Openstack blog.

 

Download Solutions Enabler (SE) 8.3.x from support.emc.com and install it; SE comes with the SMI-S & ECOM components included by default. If you have already installed SE on the target system, you will be prompted with the following option during installation:

What would you like to do: install a new Feature [F|f], or eXit [X|x]?: F

From here select 'F' to install a new feature. One of the subsequent options during the installation process allows the addition of the SMI-S & ECOM components to the environment:

Install EMC Solutions Enabler SMIS Component? [N]: Y

You can install SMI-S on a non-OpenStack host. Supported platforms include different flavours of Windows, Red Hat, and SUSE Linux. SMI-S can be installed on a physical server or a VM hosted by an ESX server. Note that the supported hypervisor for a VM running SMI-S is ESX only.

 

The ECOM is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure the ECOM, go to that directory and run TestSmiProvider.exe on Windows or ./TestSmiProvider on Linux.

 

Use addsys in TestSmiProvider to add an array. Use dv and examine the output after the array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC VMAX drivers.

 

Note: You must discover storage arrays on the SMI-S server before you can use the VMAX drivers. Follow instructions in the SMI-S release notes.  For detailed installation & configuration instructions please see the ‘Solutions Enabler 8.3.0 Installation & Configuration Guide’ and the ‘ECOM Deployment and Configuration Guide’.


5. Add VMAX details to Cinder Configuration

To use the VMAX Cinder block storage drivers it is necessary to make changes to the Cinder configuration file; by default this is /etc/cinder/cinder.conf.

 

Firstly, it is necessary to add a configuration group for each back-end configuration. In the example below there are two configuration groups, one for FC and the other for iSCSI. These configuration groups can be placed anywhere within the cinder.conf file:

 

[CONF_GROUP_ISCSI]
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
volume_backend_name = ISCSI_backend

[CONF_GROUP_FC]
volume_driver = cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
volume_backend_name = FC_backend

 

In this example, two back-end configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Repeat this process for every back-end type required for Cinder volumes.  Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the specific configuration file containing additional settings. Note that the XML file name must be in the format /etc/cinder/cinder_emc_config_[conf_group].xml.


With the backend configuration groups defined, these backends need to be enabled through the ‘enabled_backends’ setting in the [DEFAULT] configuration group also within cinder.conf:

enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC

To set the default Cinder volume type, change the 'default_volume_type' setting, also in the [DEFAULT] configuration group within cinder.conf, to the name of a volume type (such as the VMAX_ISCSI type created in step 6):

default_volume_type = VMAX_ISCSI

 

6. Create Volume Types & Associate Back-End Names

Once you have created & edited the cinder.conf file and have added the most up-to-date VMAX drivers to your installation, the next step is to create the Openstack volume types that will be selected when you provision any VMAX storage from within Openstack.

 

These commands are entered using the Openstack Cinder CLI, so it will be necessary to authenticate yourself as an Openstack user. The most common way to do this is to source the 'openrc' file supplied by Openstack, which is specific to your installation. As there are multiple ways to do this, we are going to skip it here and assume that you are already authenticated for Openstack CLI usage.

 

The following Openstack commands need to be issued in order to create the Openstack volume types and associate them with the declared volume_backend_name:

$ openstack volume type create VMAX_ISCSI

$ openstack volume type set --property volume_backend_name=ISCSI_backend VMAX_ISCSI

$ openstack volume type create VMAX_FC

$ openstack volume type set --property volume_backend_name=FC_backend VMAX_FC

 

Breaking the above commands down: the first, 'openstack volume type create VMAX_ISCSI', creates our volume type within Openstack. The second, 'openstack volume type set --property volume_backend_name=ISCSI_backend VMAX_ISCSI', applies an extra property to our volume type called volume_backend_name, which is associated with the back-end name specified in cinder.conf in step 5. This is how Openstack knows to use the properties specified in our configuration files when we select VMAX_ISCSI as the volume type for a given volume. We then do the same for our FC back end: create the volume type and associate it with the volume_backend_name specified in cinder.conf in step 5.
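To confirm that the types were created and the property applied, the volume type details can be checked from the same CLI; a quick sketch:

$ openstack volume type list --long

$ openstack volume type show VMAX_ISCSI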

 

7. Create your VMAX volume type XML configuration file

For each VMAX volume type created for use in Openstack it is necessary to create an accompanying XML configuration file.  Create the /etc/cinder/cinder_emc_config_[CONF_GROUP].xml file, where [CONF_GROUP] is the same name as the configuration group specified in cinder.conf - in this case our two configuration groups are called 'CONF_GROUP_ISCSI' & 'CONF_GROUP_FC'.


The following example is for the CONF_GROUP_ISCSI back-end, and will be named 'cinder_emc_config_CONF_GROUP_ISCSI.xml'

 

<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
  <EcomServerIp>1.1.1.1</EcomServerIp>
  <EcomServerPort>00</EcomServerPort>
  <EcomUserName>user1</EcomUserName>
  <EcomPassword>password1</EcomPassword>
  <PortGroups>
    <PortGroup>OS-PORTGROUP1-PG</PortGroup>
    <PortGroup>OS-PORTGROUP2-PG</PortGroup>
  </PortGroups>
  <Array>111111111111</Array>
  <Pool>SRP_1</Pool>
  <ServiceLevel>Diamond</ServiceLevel>
  <Workload>OLTP</Workload>
</EMC>

 

Where...

 

  • EcomServerIp: IP address of the ECOM server which is packaged with SMI-S.
  • EcomServerPort: Port number of the ECOM server which is packaged with SMI-S.
  • EcomUserName and EcomPassword: Credentials for the ECOM server.
  • PortGroups: Supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this backend. Each supplied port group should have a sufficient number and distribution of ports (across directors and switches) to ensure adequate bandwidth and failure protection for the volume connections. Port Groups can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the Port Group list, to evenly distribute load across the set of groups provided. Make sure that the Port Groups set contains either all FC or all iSCSI port groups (for a given back end), as appropriate for the configured driver (iSCSI or FC).
  • Array: Unique VMAX array serial number.
  • Pool: Unique pool name within a given array. For back ends not using FAST automated tiering, the pool is a single pool that has been created by the administrator. For back ends exposing FAST policy automated tiering, the pool is the bind pool to be used with the FAST policy.
  • ServiceLevel: VMAX All Flash and Hybrid only. The Service Level manages the underlying storage to provide expected performance. Omitting the ServiceLevel tag means that non-FAST storage groups will be created instead (storage groups not associated with any service level).
  • Workload: VMAX All Flash and Hybrid only. When a workload type is added, the latency range is reduced due to the added information. Omitting the Workload tag means the latency range will be the widest for its SLO type.

 

Note: VMAX Hybrid supports Optimized, Diamond, Platinum, Gold, Silver, Bronze, and NONE service levels. VMAX All Flash supports Diamond and NONE. Both support DSS_REP, DSS, OLTP_REP, OLTP, and NONE workloads.

 

Interval and Retries

By default, Interval and Retries are 10 seconds and 60 retries respectively. These determine how long (Interval) and how many times (Retries) a user is willing to wait for a single SMI-S call: 10*60 = 600 seconds. Depending on usage, these may need to be overridden by the user in the XML file. For example, if performance is a factor, the Interval should be decreased to check the job status more frequently, and if multiple concurrent provisioning requests are issued then Retries should be increased so calls will not time out prematurely.

 

In the example below, the driver checks every 5 seconds for the status of the job. It will continue checking for 120 retries before it times out.

Add the following lines to the XML file:

 

<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
  <EcomServerIp>1.1.1.1</EcomServerIp>
  <EcomServerPort>00</EcomServerPort>
  <EcomUserName>user1</EcomUserName>
  <EcomPassword>password1</EcomPassword>
  <PortGroups>
    <PortGroup>OS-PORTGROUP1-PG</PortGroup>
    <PortGroup>OS-PORTGROUP2-PG</PortGroup>
  </PortGroups>
  <Array>111111111111</Array>
  <Pool>SRP_1</Pool>
  <Interval>5</Interval>
  <Retries>120</Retries>
</EMC>

 

8. SSL Configuration

SSL (Secure Sockets Layer) is the standard security technology for establishing an encrypted link between a server and a host. This link ensures that all data passed between the server and host remains private and integral. SSL is an industry standard and is used by millions of servers/hosts in the protection of their communications.

 

Prior to version 8.3.0.1, SSL was optional for users, but in 8.3.0.1 and later the ECOM component in Solutions Enabler enforces SSL; by default this secure port is set to 5989.

 

To be able to create an SSL connection, a web server requires an SSL certificate. We will walk through the steps required to obtain the certificate from your ECOM server and either add it to your host for automatic inclusion in all future requests, or manually specify its location on your host.

 

1. Get the CA certificate of the ECOM server. This pulls the CA cert file and saves it as a .pem file. The ECOM server IP address or host name is my_ecom_host. The sample name of the .pem file is ca_cert.pem:

$ openssl s_client -showcerts -connect {my_ecom_host}:5989 </dev/null 2>/dev/null|openssl x509 -outform PEM >ca_cert.pem

2. Copy the pem file to the system certificate directory:

Ubuntu: $ sudo cp ca_cert.pem /usr/share/ca-certificates/ca_cert.crt

RedHat/CentOS/SLES/openSUSE:  $ sudo cp ca_cert.pem /etc/pki/ca-trust/source/anchors/ca_cert.crt

3. Update CA certificate database with the following commands (note: check that the new ca_cert.crt will activate by selecting ask on the dialog. If it is not enabled for activation, use the down and up keys to select, and the space key to enable or disable):

Ubuntu/SLES/openSUSE: $ sudo update-ca-certificates

RedHat/CentOS: $ sudo update-ca-trust extract

4. Update /etc/cinder/cinder.conf to reflect SSL functionality by adding the following to the back end block:

driver_ssl_cert_verify = True

driver_use_ssl = True

5. (Optional) If you skipped steps 2 & 3, you must add the location of your .pem file to cinder.conf also:

driver_ssl_cert_verify = True

driver_use_ssl = True

driver_ssl_cert_path = /my_location/ca_cert.pem

6. If the EcomServerIp value in your volume type XML file is set to an IP address, you will need to change it to the ECOM host name. Additionally, the EcomServerPort must also be set to the secure port (by default this is 5989)
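Before restarting the Cinder services it can be worth confirming that the certificate is now trusted; a hedged check against the ECOM secure port using the .pem file pulled in step 1:

$ echo | openssl s_client -connect {my_ecom_host}:5989 -CAfile ca_cert.pem 2>/dev/null | grep 'Verify return code'

A result of 'Verify return code: 0 (ok)' indicates the certificate chain is being accepted.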

 

9. FC Zoning with VMAX (Optional)

Zone Manager is required when there is a fabric between the host and array. This is necessary for larger configurations where pre-zoning would be too complex and open-zoning would raise security concerns. Setting up Zone Manager is outside of the scope of this blog as it is networking related, but for further information you can refer to the Official Openstack Ocata documentation on Zone Manager.

 

10. iSCSI Multipath with VMAX (Optional)

With iSCSI storage you can take advantage of the multipathing support that an IP-based network offers.  Openstack can use iSCSI multipathing through dynamic discovery, allowing its iSCSI initiators to obtain a list of target addresses that they can use as multiple paths to iSCSI LUNs for fail-over/redundancy purposes.  Before setting up multipathing in your environment, there are some requirements which must be met in advance:

  • Install open-iscsi on all nodes on your system
  • Do not install EMC PowerPath, as it cannot co-exist with native multipath software
  • Multipath tools must be installed on all nova compute nodes

 

In addition to the open-iscsi package required for iSCSI support, a number of additional packages are needed if iSCSI multipathing support is required in your environment. Install the packages below on all nodes in your environment (including nova compute). The required iSCSI packages are installed natively, i.e. sudo apt-get install <package>, sudo yum install <package>:

 

  • Open-iSCSI: open-iscsi (Ubuntu), iscsi-initiator-utils (RHEL/CentOS/Fedora), open-iscsi (OpenSUSE/SLES)
  • Multipath modules: multipath-tools (Ubuntu), device-mapper-multipath (RHEL/CentOS/Fedora), multipath-tools (OpenSUSE/SLES)
  • File system utils: sysfsutils and sg3-utils (all distros)
  • SCSI utils: scsitools (all distros)

 

Multipath Configuration File

The multi-path configuration file may be edited for better management and performance. Log in as a privileged user and make the following changes to /etc/multipath.conf on the Compute (Nova) node(s).

 

devices {
# Device attributes for EMC VMAX
       device {
            vendor "EMC"
            product "SYMMETRIX"
            path_grouping_policy multibus
            getuid_callout "/lib/udev/scsi_id --page=pre-spc3-83 --whitelisted --device=/dev/%n"
            path_selector "round-robin 0"
            path_checker tur
            features "0"
            hardware_handler "0"
            prio const
            rr_weight uniform
            no_path_retry 6
            rr_min_io 1000
            rr_min_io_rq 1
       }
}

 

Openstack Multipath Configuration Settings

On the Compute (Nova) node, add the following flag in the [libvirt] section of /etc/nova/nova.conf:

iscsi_use_multipath = True

On the Cinder controller node, set the multipath flag to True in the [DEFAULT] section of /etc/cinder/cinder.conf:

use_multipath_for_image_xfer = True
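After these changes and a test volume attach, multipathing can be verified from the compute node; a rough check:

$ sudo multipath -ll

$ sudo iscsiadm -m session

The first command should show an EMC SYMMETRIX device with multiple active paths, and the second should show one iSCSI session per target portal/path.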

 

Restarting Services

With all of the necessary changes made to both the environment and configuration files, the last step is to restart the necessary services:

 

Ubuntu:

$ service open-iscsi restart
$ service multipath-tools restart
$ service nova-compute restart
$ service cinder-volume restart

RHEL/CentOS/SLES/openSUSE:

$ systemctl restart open-iscsi
$ systemctl restart multipath-tools
$ systemctl restart nova-compute
$ systemctl restart cinder-volume

 

11.  Workload Planner - WLP (Optional)

Workload Planner (WLP) is a FAST component used to display performance metrics to calculate VMAX component utilization and storage group Service Level (SL) compliance. It allows for more informed workload monitoring and planning by up-stream components with respect to current VMAX performance capacity. When a storage group (workload) is said to be compliant, it means that it is operating within the associated response time band.

 

VMAX-3 series arrays allow you to manage application storage using SLs and policy-based automation, rather than the tiering used in the VMAX-2. The VMAX Hybrid comes with up to 6 SL policies defined, and the VMAX All-Flash with 2 SL policies (the policies for both are detailed in part 7 of this blog). Each has a set of workload characteristics that determine the drive types and mixes which will be used for the SL.

 

The SL capacity is retrieved by interfacing with Unisphere's Workload Planner (WLP). If you do not set up this relationship then the capacity retrieved is that of the entire Storage Resource Pool (SRP). This can cause issues as it can never be an accurate representation of what storage is available for any given SLO and Workload combination.

 

Enabling WLP in Unisphere

  1. To enable WLP in Unisphere, from the main Unisphere dashboard select Performance>Settings>System Registrations
  2. Click to highlight the VMAX for which you want to enable WLP and click 'Register'
  3. In the dialogue box which opens, select both 'Real Time' and 'Root Cause Analysis' then click OK

 

Note: This should be set up ahead of time (allowing for several hours of data collection), so that the Unisphere for VMAX Performance Analyzer can collect rated metrics for each of the supported element types.


Using TestSmiProvider to add statistics access point

After enabling WLP you must then enable SMI-S to gain access to the WLP data:

1. Connect to the SMI-S Provider using TestSmiProvider:

Linux: /opt/emc/ECIM/ECOM/bin

Windows: C:\Program Files\EMC\ECIM\ECOM\bin

2. Navigate to the active menu

3. Type reg and enter the noted responses to the questions:

(EMCProvider:5989) ? reg

Current list of statistics Access Points:

Note: The current list will be empty if there are no existing Access Points.

Add Statistics Access Point {y|n} [n]: y

HostID [openstack_host.localdomain]: [enter]

Note: Enter the Unisphere for VMAX location using a fully qualified Host ID.

Port [8443]: [enter]

Note: The Port default is the Unisphere for VMAX default secure port. If the secure port is different for your Unisphere for VMAX setup, adjust this value accordingly.

User [smc]: [enter]

Note: Enter the Unisphere for VMAX username.

Password [smc]: [enter]

Note: Enter the Unisphere for VMAX password.

4. Type reg again to view the current list:

(EMCProvider:5988) ? reg

Current list of statistics Access Points:

HostIDs:

openstack_host.localdomain

PortNumbers:

8443

Users:

smc

Add Statistics Access Point {y|n} [n]: n


Troubleshooting your setup & configuration

The majority of the time, when something isn't working as expected in OpenStack with VMAX as the storage back end, the cause can be found in misconfiguration of the drivers themselves. The first port of call when determining if your configuration is correct is the Cinder volume log; by default this log file is located in /var/log/cinder/. If your configuration is correct you should see output similar to the sample below.

 

[Screenshot: Cinder volume log output reporting VMAX back-end capacity statistics]


I will go over indicators of the various problems in the configuration in the troubleshooting article later in this series but for now this checklist of aspects of your configuration will give you an idea of what to look for.

  1. Is your back end stanza in cinder.conf correctly configured?
  2. Is your XML file for the back end stanza correctly configured?
  3. Is your VMAX volume type in OpenStack correctly set up with required associations to the back end?
  4. Is your SSL certificate valid and loaded into your system? If not, you can specify the path to the certificate in the back end stanza in cinder.conf under driver_ssl_cert_path
  5. Is your SMI-S/ECOM server correctly configured?
  6. Is PyWBEM working as intended?
  7. Is your FC or iSCSI networking setup correctly configured?
    1. If FC, is the environment correctly zoned?
  8. If you are using iSCSI multipath, are all paths valid and active? Are the additional multipath dependencies installed and the configuration correct?
  9. Did you restart all required OpenStack services (Cinder/Nova) after making changes?
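A couple of generic CLI checks that complement the list above; note that the exact log file name can differ depending on distro and deployment method:

$ openstack volume service list

$ tail -f /var/log/cinder/cinder-volume.log

The first command should show the cinder-volume service for each VMAX back end as 'up'; the second lets you watch the driver output while retrying the failing operation.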


Next time in 'VMAX & Openstack Ocata: An Inside Look'...

Next time in 'VMAX & Openstack Ocata: An Inside Look' we will be looking in-depth at over-subscription, QoS, compression, and retype! We will go through each area and how to set them up with your VMAX & Openstack environment. As always thanks for reading and if you have any comments, suggestions, document fixes, or questions, feel free to contact me directly or via the comments section below!

Hi There!

 

Welcome to the first in the series of 'VMAX & Openstack Ocata: An Inside Look', where we will be taking an in-depth look at the setup and configuration of the VMAX drivers for Openstack Ocata.  Before we begin looking at setting up the VMAX drivers, there are some requirements which must be met in advance (and in advance of working through these guides in general) and a few finer details to be aware of:

 

  • You have all required hardware set up and networks (FC/iSCSI) configured in advance of working through this guide
  • Openstack has already been deployed with a properly configured Cinder block storage service
  • You meet the base system requirements (outlined below)
  • You have the required VMAX software suites necessary for Openstack to run (outlined below)

 

It is assumed that there are no other back-ends configured at this stage and you have credentials for an admin account for the Openstack deployment. Apart from that we can take it from here ourselves!

 

Supported VMAX & Openstack Versions

VMAX has been supported in Openstack as far back as the Grizzly release for the VMAX-2 series. As time has progressed, we have added support for the VMAX-3 series (Hybrid) and the VMAX-3 series (All-Flash). The table below outlines VMAX support across the more recent Openstack releases:

 

  • Liberty: VMAX-2 Series - Yes, VMAX-3 Series (Hybrid) - Yes, VMAX-3 Series (All-Flash) - No
  • Mitaka: VMAX-2 Series - Yes, VMAX-3 Series (Hybrid) - Yes, VMAX-3 Series (All-Flash) - No
  • Newton: VMAX-2 Series - Yes, VMAX-3 Series (Hybrid) - Yes, VMAX-3 Series (All-Flash) - Yes
  • Ocata: VMAX-2 Series - No, VMAX-3 Series (Hybrid) - Yes, VMAX-3 Series (All-Flash) - Yes

 

Supported Openstack Distributions

As of the Ocata release we support Ubuntu, RHEL/CentOS/Fedora, and OpenSUSE. As mentioned previously, for this set of guides I will be using the Ubuntu Openstack Ocata distro, but where the commands vary depending on the OS, I will also provide the alternatives.

 

A Brief Overview

Both of our VMAX drivers (FC & iSCSI) support the use of VMAX storage arrays, providing equivalent functionality and differing only in their supported host attachment methods.


The drivers perform volume operations by communicating with the back-end VMAX using a CIM client in Python called PyWBEM. This client performs all CIM operations over HTTP in Ocata.  The CIM server that enables CIM clients to perform CIM operations over HTTP is called the ECOM server (EMC CIM Object Manager); the ECOM server is packaged along with the Dell EMC SMI-S provider when installing Solutions Enabler.  The Dell EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management.


System Requirements

There are a number of requirements which must be taken into consideration when preparing to set up VMAX with Openstack Ocata:

  • The Cinder driver for Openstack Ocata supports the VMAX-3 series storage arrays
  • Solutions Enabler 8.3.0.11 or later is required (note: this is SSL only, please refer to section below 'SSL Support')
    • You can download Solutions Enabler 8.3.0.11 here (login is required)
  • When installing Solutions Enabler, make sure you explicitly add the SMI-S component
  • Ensure that there is only one SMI-S (ECOM) server active on any given VMAX


Minimum Required VMAX Software Suites for Openstack

There are five Software Suites available for the VMAX All Flash and Hybrid:

  • Base Suite
  • Advanced Suite
  • Local Replication Suite
  • Remote Replication Suite
  • Total Productivity Pack

 

Of the five suites listed above, Openstack requires either:

  • The Advanced Suite, Local Replication Suite, and Remote Replication Suite, or
  • The Total Productivity Pack (it includes the suites listed in the previous point)


Each of the software suites is licensed separately.  To activate your software suites and obtain your VMAX license files, visit the Service Center on https://support.emc.com/ as directed on your License Authorisation Code (LAC) email. For help with missing or incorrect entitlements after activation (that is, expected functionality remains unavailable because it is not licensed), contact your Dell EMC account rep or authorised re-seller.  For any help with errors applying license files through Solutions Enabler, contact Dell EMC's Customer Support Center.  If you are missing a LAC letter or require further instructions on license activation, contact Dell EMC's Licensing team at licensing@emc.com or alternatively call:

  • North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.
  • EMEA: +353 (0) 21 4879862 and follow the voice prompts.

 

 

VMAX & Openstack Supported Operations

To keep things simple and straightforward, a list of all supported features and operations can be found below; if it isn't below then it isn't supported!


VMAX & Openstack supported features:

  • Create, list, delete, attach, and detach volumes
  • Manage & unmanage volumes
  • Create, list, and delete volume snapshots
  • Copy an image to a volume
  • Copy a volume to an image
  • Clone a volume
  • Extend a volume
  • Retype a volume (Host and storage assisted volume migration)
  • Create a volume from a snapshot
  • Create and delete consistency group
  • Create and delete consistency group snapshot
  • Modify consistency group (add and remove volumes)
  • Create consistency group from source
  • Create and delete generic volume group
  • Create and delete generic volume group snapshot
  • Modify generic volume group (add and remove volumes)
  • Create generic volume group from source
  • Over-subscription
  • Live Migration
  • Attach and detach snapshots
  • Volume replication
  • Dynamic masking view creation
  • Dynamic determination of the target iSCSI IP address
  • iSCSI multipath support
  • Service Level support
  • SnapVX support
  • Compression support


It should be pointed out here that VMAX All Flash arrays running Solutions Enabler 8.3.0.11 or later have compression enabled by default when associated with the 'Diamond' Service Level. This means volumes added to any newly created storage groups will be compressed.


Up Next...

OK! That's the formalities and necessities out of the way; now we can get into the fun part, using VMAX with Openstack! The first in-depth guide to feature in this series is the setup & installation of VMAX with Openstack - click here to go there straight away.