In my last post I went over what you should consider before setting up VMAX with Openstack; if you would like to see that article again, click here (TL;DR: we assume that everything is already set up, including hardware, networking, the base operating system, Openstack, and Cinder, so we can concentrate on the VMAX-specific tasks). Otherwise, let's keep moving forward!

 

Today we are going to look at the actual setup and installation of VMAX storage arrays with Openstack, specifically the VMAX-3 series and Openstack Ocata. Each section is numbered to represent the order in which the steps should be carried out during configuration.

 

1. Sourcing the VMAX Openstack Drivers

VMAX drivers for Openstack Ocata are hosted upstream in the official Openstack repository, meaning that when you install Cinder for Ocata the most recent drivers as of the date of download are included as standard. To view the drivers online you can follow this link; to download them you will need to download the entire Cinder repository from here and extract the drivers from there.

 

It is always recommended to make sure that you are using the most up-to-date version of the VMAX drivers so you don't miss out on any new features or bug fixes. If you are updating the VMAX drivers for Cinder, delete everything in the VMAX driver folder apart from the '__init__.py' file, including files with the '.pyc' extension, and then copy the new drivers into the VMAX-specific folder, which can be found at:

${installation_directory}/cinder/cinder/volume/drivers/dell_emc/vmax/

Once you have copied the new drivers into the VMAX folder, restart all Cinder services (volume, scheduler, api) so that the new drivers take effect.
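For reference, a restart might look something like the commands below; the exact service or unit names depend on your distribution and how Openstack was packaged, so treat these as an illustration rather than a definitive list:

Ubuntu: $ sudo service cinder-volume restart; sudo service cinder-scheduler restart; sudo service cinder-api restart

RHEL/CentOS/Fedora: $ sudo systemctl restart openstack-cinder-volume openstack-cinder-scheduler openstack-cinder-api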

 

2. Installing PyWBEM

The Cinder drivers perform volume operations by communicating with the back-end VMAX storage. They use a Python CIM client called PyWBEM to perform CIM operations over HTTP. PyWBEM is a WBEM (Web-Based Enterprise Management) client, written in pure Python, that supports both Python 2 and Python 3. A WBEM client allows operations to be issued to a WBEM server using the CIM operations over HTTP (CIM-XML) protocol. The CIM/WBEM infrastructure is used for a wide variety of systems management tasks supported by systems running WBEM servers.

 

The required PyWBEM version varies depending on the version of Python you are using. If you are using Python 2 in your environment, install PyWBEM 0.7.0 natively using the appropriate command for your distribution:

Ubuntu: $ sudo apt-get install python-pywbem=0.7.0

RHEL/CentOS/Fedora: $ sudo yum install pywbem-0.7.0

OpenSUSE: $ sudo zypper install python-pywbem=0.7.0

If you are using Python 3, install PyWBEM version 0.8.4 or 0.9.0 using pip, or 0.7.0 using native package installation:

All: $ sudo pip install pywbem=={0.9.0/0.8.4}

Ubuntu: $ sudo apt-get install python-pywbem=0.7.0

RHEL/CentOS/Fedora: $ sudo yum install pywbem-0.7.0

OpenSUSE: $ sudo zypper install python-pywbem=0.7.0
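As a quick sanity check (not an official step), you can confirm which PyWBEM version Python actually picks up:

All: $ python -c "import pywbem; print(pywbem.__version__)"

This should print the version you just installed, e.g. 0.7.0 or 0.9.0.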


Known issues surrounding PyWBEM

On occasion when installing PyWBEM you may find that your system reports PyWBEM as not installed even though you know for a fact that you have installed it. The usual cause is a PyWBEM dependency called 'm2crypto'; if it is missing, PyWBEM installs but is marked as incomplete/uninstalled. Luckily the fix is simple: completely remove the previous PyWBEM/m2crypto packages and reinstall natively:

 

Ubuntu

$ sudo apt-get remove --purge -y python-m2crypto

$ sudo pip uninstall pywbem

$ sudo apt-get install python-pywbem

RHEL/CentOS/Fedora

$ sudo yum remove python-m2crypto

$ sudo pip uninstall pywbem

$ sudo yum install pywbem

OpenSUSE

$ sudo zypper remove --clean-deps python-m2crypto

$ sudo pip uninstall pywbem

$ sudo zypper install python-pywbem

 

3. Install iSCSI Utilities (for iSCSI environments only!)

Internet SCSI (iSCSI) is a network protocol that allows the SCSI protocol to be used over TCP/IP networks, providing access to SAN storage over Ethernet. It is a good alternative to Fibre Channel-based SANs, and iSCSI volumes can easily be managed, mounted and formatted under Linux. If iSCSI is the chosen transport for your environment, it is necessary to install the supporting iSCSI utilities.

 

The Open-iSCSI package provides the iSCSI initiator daemon (iscsid), as well as the utility programs used to manage it. This package is available under multiple Linux distributions:

Ubuntu: $ sudo apt-get install open-iscsi

RHEL/CentOS/Fedora: $ sudo yum install iscsi-initiator-utils

OpenSUSE: $ sudo zypper install open-iscsi
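Once open-iscsi is installed, you can do a quick discovery against one of your VMAX iSCSI director IP addresses to confirm the initiator tooling is working; the address below is purely a placeholder for your own environment:

$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260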

 

4. Solutions Enabler, SMI-S & ECOM Set-up

For this section I will give a brief overview of the steps required to install Solutions Enabler (SE) 8.3.0.11 or newer along with the SMI-S & ECOM components. There are already comprehensive guides for the installation and configuration of these components, so I will redirect you to those for further information: please see the ‘Solutions Enabler 8.3.0 Installation & Configuration Guide’ and the ‘ECOM Deployment and Configuration Guide’. However, if there is enough demand to have this area covered in more detail with regards to Openstack environments, let me know in the comments or via mail and I will see what I can put together for another article in the VMAX & Openstack Blog.

 

Download Solutions Enabler (SE) 8.3.x from support.emc.com and install it; SE includes the SMI-S & ECOM components by default. If you have already installed SE on the target system, you will be prompted with the following option during installation:

What would you like to do: install a new Feature [F|f], or eXit [X|x]?: F

From here select ‘F’ to install a new feature. One of the subsequent options during the installation process allows the addition of the SMI-S & ECOM components to the environment:

Install EMC Solutions Enabler SMIS Component? [N]: Y

You can install SMI-S on a non-OpenStack host. Supported platforms include different flavours of Windows, Red Hat, and SUSE Linux. SMI-S can be installed on a physical server or a VM hosted by an ESX server. Note that the supported hypervisor for a VM running SMI-S is ESX only.

 

ECOM is usually installed in /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure ECOM, go to that directory and run TestSmiProvider.exe on Windows or ./TestSmiProvider on Linux.

 

Use addsys in TestSmiProvider to add an array. Use dv and examine the output after the array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC VMAX drivers.
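In outline, the TestSmiProvider session looks something like the following; the interactive prompts are abbreviated here, so simply follow them and supply your own array and connection details:

(EMCProvider:5989) ? addsys

... follow the prompts to add the VMAX array ...

(EMCProvider:5989) ? dv

... confirm that the VMAX serial number appears in the output ...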

 

Note: You must discover storage arrays on the SMI-S server before you can use the VMAX drivers; follow the instructions in the SMI-S release notes.


5. Add VMAX details to Cinder Configuration

To use the VMAX Cinder block storage drivers it is necessary to make changes to the Cinder configuration file within the Cinder install directory; by default this is /etc/cinder/cinder.conf.

 

First, it is necessary to add a configuration group for each back-end configuration. In the example below there are two configuration groups, one for FC and the other for iSCSI. These configuration groups can be placed anywhere within the cinder.conf file:

 

[CONF_GROUP_ISCSI]
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
volume_backend_name = ISCSI_backend

[CONF_GROUP_FC]
volume_driver = cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
volume_backend_name = FC_backend

 

In this example, two back-end configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Repeat this process for every back-end type required for Cinder volumes.  Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the specific configuration file containing additional settings. Note that the XML file name must be in the format /etc/cinder/cinder_emc_config_[conf_group].xml.


With the backend configuration groups defined, these backends need to be enabled through the ‘enabled_backends’ setting in the [DEFAULT] configuration group also within cinder.conf:

enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC

To change the default Cinder volume type, set ‘default_volume_type’, also in the [DEFAULT] configuration group within cinder.conf:

default_volume_type = CONF_GROUP_ISCSI
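Putting these together, the relevant part of the [DEFAULT] section would look something like this minimal sketch, based on the example values above (default_volume_type simply names the type Cinder should use when a request doesn't specify one):

[DEFAULT]
enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC
default_volume_type = CONF_GROUP_ISCSI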

 

6. Create Volume Types & Associate Back-End Names

Once you have created & edited the cinder.conf file and have added the most up-to-date VMAX drivers to your installation, the next step is to create the Openstack volume types that will be selected when you provision any VMAX storage from within Openstack.

 

These commands are entered using the Openstack CLI, so it is necessary to authenticate yourself as an Openstack user. The most common way to do this is by sourcing the 'openrc' file supplied by Openstack, which is specific to your installation. As there are multiple ways to do this, we are going to skip it here and assume that you are already authenticated for Openstack CLI usage.

 

The following Openstack commands need to be issued to create the Openstack volume types and associate them with the declared volume_backend_name:

$ openstack volume type create VMAX_ISCSI

$ openstack volume type set --property volume_backend_name=ISCSI_backend VMAX_ISCSI

$ openstack volume type create VMAX_FC

$ openstack volume type set --property volume_backend_name=FC_backend VMAX_FC

 

Breaking the above commands down: the first, 'openstack volume type create VMAX_ISCSI', creates our volume type within Openstack. The second, 'openstack volume type set --property volume_backend_name=ISCSI_backend VMAX_ISCSI', applies an extra property to our volume type called volume_backend_name, which matches the back-end name specified in cinder.conf in step 5. This is how Openstack knows to use the properties specified in our configuration files when we select VMAX_ISCSI as the volume type for a given volume. We do the same for our FC back end: create the volume type and associate it with the volume_backend_name specified in cinder.conf in step 5.
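If you want to exercise the new types straight away, a small test volume against each type will confirm the association is working; the volume name and size here are arbitrary:

$ openstack volume create --type VMAX_ISCSI --size 1 vmax_iscsi_test

$ openstack volume show vmax_iscsi_test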

 

7. Create your VMAX volume type XML configuration file

For each VMAX volume type created for use in Openstack it is necessary to create an accompanying XML configuration file. Create the /etc/cinder/cinder_emc_config_[CONF_GROUP].xml file, where [CONF_GROUP] is the same name as the configuration group specified in cinder.conf; in this case our two configuration groups are called 'CONF_GROUP_ISCSI' & 'CONF_GROUP_FC'.


The following example is for the CONF_GROUP_ISCSI back-end, and will be named 'cinder_emc_config_CONF_GROUP_ISCSI.xml'

 

<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
  <EcomServerIp>1.1.1.1</EcomServerIp>
  <EcomServerPort>00</EcomServerPort>
  <EcomUserName>user1</EcomUserName>
  <EcomPassword>password1</EcomPassword>
  <PortGroups>
    <PortGroup>OS-PORTGROUP1-PG</PortGroup>
    <PortGroup>OS-PORTGROUP2-PG</PortGroup>
  </PortGroups>
  <Array>111111111111</Array>
  <Pool>SRP_1</Pool>
  <ServiceLevel>Diamond</ServiceLevel>
  <Workload>OLTP</Workload>
</EMC>

 

Where...

 

EcomServerIp: IP address of the ECOM server which is packaged with SMI-S.

EcomServerPort: Port number of the ECOM server which is packaged with SMI-S.

EcomUserName and EcomPassword: Credentials for the ECOM server.

PortGroups: Supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this backend. Each supplied port group should have a sufficient number and distribution of ports (across directors and switches) to ensure adequate bandwidth and failure protection for the volume connections. Port Groups can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the Port Group list, to evenly distribute load across the set of groups provided. Make sure that the Port Groups set contains either all FC or all iSCSI port groups (for a given back end), as appropriate for the configured driver (iSCSI or FC).

Array: Unique VMAX array serial number.

Pool: Unique pool name within a given array. For back ends not using FAST automated tiering, the pool is a single pool that has been created by the administrator. For back ends exposing FAST policy automated tiering, the pool is the bind pool to be used with the FAST policy.

ServiceLevel: VMAX All Flash and Hybrid only. The Service Level manages the underlying storage to provide expected performance. Omitting the ServiceLevel tag means that non-FAST storage groups will be created instead (storage groups not associated with any service level).

Workload: VMAX All Flash and Hybrid only. When a workload type is added, the latency range is reduced due to the added information. Omitting the Workload tag means the latency range will be the widest for its SLO type.

 

Note: VMAX Hybrid supports Optimized, Diamond, Platinum, Gold, Silver, Bronze, and NONE service levels. VMAX All Flash supports Diamond and NONE. Both support DSS_REP, DSS, OLTP_REP, OLTP, and NONE workloads.

 

Interval and Retries

By default, Interval and Retries are 10 seconds and 60 retries respectively. These determine how long (Interval) and how many times (Retries) a user is willing to wait for a single SMI-S call: 10 * 60 = 600 seconds. Depending on usage, these may need to be overridden by the user in the XML file. For example, if performance is a factor, the Interval should be decreased to check the job status more frequently, and if multiple concurrent provisioning requests are issued then Retries should be increased so calls do not time out prematurely.

 

In the example below, the driver checks every 5 seconds for the status of the job. It will continue checking for 120 retries before it times out.

Add the following lines to the XML file:

 

<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
  <EcomServerIp>1.1.1.1</EcomServerIp>
  <EcomServerPort>00</EcomServerPort>
  <EcomUserName>user1</EcomUserName>
  <EcomPassword>password1</EcomPassword>
  <PortGroups>
    <PortGroup>OS-PORTGROUP1-PG</PortGroup>
    <PortGroup>OS-PORTGROUP2-PG</PortGroup>
  </PortGroups>
  <Array>111111111111</Array>
  <Pool>SRP_1</Pool>
  <Interval>5</Interval>
  <Retries>120</Retries>
</EMC>

 

8. SSL Configuration

SSL (Secure Sockets Layer) is the standard security technology for establishing an encrypted link between a server and a host. This link ensures that all data passed between the server and host remains private and integral. SSL is an industry standard and is used by millions of servers/hosts in the protection of their communications.

 

Prior to Unisphere 8.3.0.1, SSL was optional, but the ECOM component in Solutions Enabler enforces SSL in 8.3.0.1 and later; by default the secure port is 5989.

 

To be able to create an SSL connection a web server requires an SSL certificate. We will walk through the steps required to obtain the certificate from your ECOM server, and how to either add it to your host for automatic inclusion in all future requests or manually specify its location on your host.

 

1. Get the CA certificate of the ECOM server. This pulls the CA cert file and saves it as .pem file. The ECOM server IP address or host name is my_ecom_host. The sample name of the .pem file is ca_cert.pem:

$ openssl s_client -showcerts -connect {my_ecom_host}:5989 </dev/null 2>/dev/null|openssl x509 -outform PEM >ca_cert.pem
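If you want to double-check what was pulled down before trusting it (an optional verification, not part of the official procedure), you can inspect the certificate's subject, issuer and validity dates:

$ openssl x509 -in ca_cert.pem -noout -subject -issuer -dates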

2. Copy the pem file to the system certificate directory:

Ubuntu: $ sudo cp ca_cert.pem /usr/share/ca-certificates/ca_cert.crt

RedHat/CentOS/SLES/openSUSE:  $ sudo cp ca_cert.pem /etc/pki/ca-trust/source/anchors/ca_cert.crt

3. Update the CA certificate database with the following commands (note: check that the new ca_cert.crt will activate by selecting 'ask' in the dialog; if it is not enabled for activation, use the down and up keys to select it, and the space key to enable or disable):

Ubuntu/SLES/openSUSE: $ sudo update-ca-certificates

RedHat/CentOS: $ sudo update-ca-trust extract

4. Update /etc/cinder/cinder.conf to reflect SSL functionality by adding the following to the back end block:

driver_ssl_cert_verify = True

driver_use_ssl = True

5. (Optional) If you skipped steps 2 & 3, you must add the location of your .pem file to cinder.conf also:

driver_ssl_cert_verify = True

driver_use_ssl = True

driver_ssl_cert_path = /my_location/ca_cert.pem

6. If the EcomServerIp value in your volume type XML file is set to an IP address, you will need to change it to the ECOM host name. Additionally, the EcomServerPort must be set to the secure port (by default this is 5989).
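Pulling the SSL settings together, an iSCSI back-end stanza from step 5 would end up looking something like this sketch, based on the example values used earlier in this post:

[CONF_GROUP_ISCSI]
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
volume_backend_name = ISCSI_backend
driver_ssl_cert_verify = True
driver_use_ssl = True
# Only needed if you skipped steps 2 & 3 and are pointing at the .pem file directly:
# driver_ssl_cert_path = /my_location/ca_cert.pem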

 

9. FC Zoning with VMAX (Optional)

Zone Manager is required when there is a fabric between the host and array. This is necessary for larger configurations where pre-zoning would be too complex and open-zoning would raise security concerns. Setting up Zone Manager is outside of the scope of this blog as it is networking related, but for further information you can refer to the Official Openstack Ocata documentation on Zone Manager.

 

10. iSCSI Multipath with VMAX (Optional)

With iSCSI storage you can take advantage of the multipathing support that an IP-based network offers. Openstack can use iSCSI multipathing through dynamic discovery, allowing its iSCSI initiators to obtain a list of target addresses that they can use as multiple paths to iSCSI LUNs for fail-over/redundancy purposes. Before setting up multipathing in your environment, there are some requirements which must be met in advance:

  • Install open-iscsi on all nodes on your system
  • Do not install EMC PowerPath, as it cannot co-exist with native multipath software
  • Multipath tools must be installed on all nova compute nodes

 

In addition to the open-iscsi package required for iSCSI support, if iSCSI multipathing is required in your environment a number of additional packages are needed. Install the packages below on all nodes in your environment (including nova compute). The required packages are installed natively, i.e. sudo apt-get install <package>, sudo yum install <package>:

 

Package            | Ubuntu               | RHEL/CentOS/Fedora      | OpenSUSE/SLES
Open-iSCSI         | open-iscsi           | iscsi-initiator-utils   | open-iscsi
Multipath modules  | multipath-tools      | device-mapper-multipath | multipath-tools
File system utils  | sysfsutils sg3-utils | sysfsutils sg3-utils    | sysfsutils sg3-utils
SCSI utils         | scsitools            | scsitools               | scsitools
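As an example, on an Ubuntu node the packages above could be pulled in with a single command (package names assumed from the table; adjust for your distribution):

$ sudo apt-get install open-iscsi multipath-tools sysfsutils sg3-utils scsitools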

 

Multipath Configuration File

The multi-path configuration file may be edited for better management and performance. Log in as a privileged user and make the following changes to /etc/multipath.conf on the Compute (Nova) node(s).

 

devices {
# Device attributes for EMC VMAX
       device {
            vendor "EMC"
            product "SYMMETRIX"
            path_grouping_policy multibus
            getuid_callout "/lib/udev/scsi_id --page=pre-spc3-83 --whitelisted --device=/dev/%n"
            path_selector "round-robin 0"
            path_checker tur
            features "0"
            hardware_handler "0"
            prio const
            rr_weight uniform
            no_path_retry 6
            rr_min_io 1000
            rr_min_io_rq 1
       }
}

 

Openstack Multipath Configuration Settings

On the Compute (Nova) node, add the following flag in the [libvirt] section of /etc/nova/nova.conf:

iscsi_use_multipath = True

On the Cinder controller node, set the multipath flag to True in the [DEFAULT] section of /etc/cinder/cinder.conf:

use_multipath_for_image_xfer = True

 

Restarting Services

With all of the necessary changes made to both the environment and configuration files, the last step is to restart the necessary services:

 

Ubuntu

$ service open-iscsi restart

$ service multipath-tools restart

$ service nova-compute restart

$ service cinder-volume restart

RHEL/CentOS/SLES/openSUSE

$ systemctl restart open-iscsi

$ systemctl restart multipath-tools

$ systemctl restart nova-compute

$ systemctl restart cinder-volume
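Once the services are back up, a couple of quick checks will confirm that multipathing is in effect; the exact output will vary with your environment:

$ sudo iscsiadm -m session

$ sudo multipath -ll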

 

11.  Workload Planner - WLP (Optional)

Workload Planner (WLP) is a FAST component used to display performance metrics to calculate VMAX component utilization and storage group Service Level (SL) compliance. It allows for more informed workload monitoring and planning by up-stream components with respect to current VMAX performance capacity. When a storage group (workload) is said to be compliant, it means that it is operating within the associated response time band.

 

VMAX-3 series arrays allow you to manage application storage using SLs and policy-based automation rather than the tiering used in the VMAX2. The VMAX Hybrid comes with up to 6 SL policies defined, and the VMAX All-Flash with 2 SL policies (the policies for both are detailed in part 7 of this post). Each has a set of workload characteristics that determine the drive types and mixes which will be used for the SL.

 

The SL capacity is retrieved by interfacing with Unisphere's Workload Planner (WLP). If you do not set up this relationship then the capacity retrieved is that of the entire Storage Resource Pool (SRP). This can cause issues as it can never be an accurate representation of what storage is available for any given SLO and Workload combination.

 

Enabling WLP in Unisphere

  1. To enable WLP in Unisphere, from the main Unisphere dashboard select Performance>Settings>System Registrations
  2. Click to highlight the VMAX for which you want to enable WLP and click 'Register'
  3. In the dialogue box which opens, select both 'Real Time' and 'Root Cause Analysis' then click OK

 

Note: This should be set up ahead of time (allowing for several hours of data collection), so that the Unisphere for VMAX Performance Analyzer can collect rated metrics for each of the supported element types.


Using TestSmiProvider to add statistics access point

After enabling WLP you must then enable SMI-S to gain access to the WLP data:

1. Connect to the SMI-S Provider using TestSmiProvider

Linux: /opt/emc/ECIM/ECOM/bin

Windows: C:\Program Files\EMC\ECIM\ECOM\bin

2. Navigate to the active menu

3. Type reg and enter the noted responses to the questions:

(EMCProvider:5989) ? reg

Current list of statistics Access Points:

Note: The current list will be empty if there are no existing Access Points.

Add Statistics Access Point {y|n} [n]: y

HostID [openstack_host.localdomain]: [enter]

Note: Enter the Unisphere for VMAX location using a fully qualified Host ID.

Port [8443]: [enter]

Note: The Port default is the Unisphere for VMAX default secure port. If the secure port

is different for your Unisphere for VMAX setup, adjust this value accordingly.

User [smc]: [enter]

Note: Enter the Unisphere for VMAX username.

Password [smc]: [enter]

Note: Enter the Unisphere for VMAX password.

4. Type reg again to view the current list:

(EMCProvider:5988) ? reg

Current list of statistics Access Points:

HostIDs:

openstack_host.localdomain

PortNumbers:

8443

Users:

smc

Add Statistics Access Point {y|n} [n]: n


Troubleshooting your setup & configuration

The majority of the time, when something isn't working as expected in OpenStack with VMAX as the storage back end, the cause is a misconfiguration of the drivers themselves. The first port of call when determining whether your configuration is correct is the Cinder volume log; by default this log file is located in /var/log/cinder/. If your configuration is correct you should see output similar to the sample below.

 

[Screenshot: cinder-volume log sample showing VMAX back-end capacity statistics]
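If the log is busy, a quick scan for obvious problems can narrow things down; the exact log file name varies by distribution, so cinder-volume.log below is an assumption:

$ grep -iE "error|traceback" /var/log/cinder/cinder-volume.log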


I will go over indicators of the various problems in the configuration in the troubleshooting article later in this series, but for now this checklist will give you an idea of what to look for.

  1. Is your back end stanza in cinder.conf correctly configured?
  2. Is your XML file for the back end stanza correctly configured?
  3. Is your VMAX volume type in OpenStack correctly set up with required associations to the back end?
  4. Is your SSL certificate valid and loaded into your system? If not, you can specify the path to the certificate in the back end stanza in cinder.conf under driver_ssl_cert_path
  5. Is your SMI-S/ECOM server correctly configured?
  6. Is PyWBEM working as intended?
  7. Is your FC or iSCSI networking setup correctly configured?
    1. If FC, is the environment correctly zoned?
  8. If you are using iSCSI multipath, are all paths valid and active? Are the additional multipath dependencies installed and correctly configured?
  9. Did you restart all required OpenStack services (Cinder/Nova) after making changes?


Next time in 'VMAX & Openstack Ocata: An Inside Look'...

Next time in 'VMAX & Openstack Ocata: An Inside Look' we will be looking in-depth at over-subscription, QoS, compression, and retype! We will go through each area and how to set them up with your VMAX & Openstack environment. As always thanks for reading and if you have any comments, suggestions, document fixes, or questions, feel free to contact me directly or via the comments section below!