Welcome to the first in the series of 'VMAX & Openstack Ocata: An Inside Look', where we will be taking an in-depth look at the setup and configuration of VMAX drivers for Openstack Ocata. Before we begin setting up the VMAX drivers, there are some requirements which must be met in advance (both of this guide and of the series in general), and a few finer details to be aware of:
- You have all required hardware set up and networks (FC/iSCSI) configured in advance of working through this guide
- Openstack has already been deployed with a properly configured Cinder block storage service
- You meet the base system requirements (outlined below)
- You have the required VMAX software suites necessary for Openstack to run (outlined below)
It is assumed that there are no other back-ends configured at this stage and that you have credentials for an admin account on the Openstack deployment. Apart from that, we can take it from here ourselves!
Supported VMAX & Openstack Versions
VMAX has been supported in Openstack going as far back as the Grizzly release for the VMAX-2 Series. As time has progressed, we have added support for the VMAX-3 Series (Hybrid) and VMAX-3 Series (All-Flash). The table below outlines support across the more recent Openstack releases:
| Openstack Distro | VMAX-2 Series | VMAX-3 Series (Hybrid) | VMAX-3 Series (All-Flash) |
| --- | --- | --- | --- |
Supported Openstack Distributions
As of the Ocata release we support Ubuntu, RHEL/CentOS/Fedora, and OpenSUSE. As mentioned previously, for this set of guides I will be using the Ubuntu Openstack Ocata distro, but where the commands vary depending on the OS, I will also provide the alternatives.
A Brief Overview
Both of our VMAX drivers (FC & iSCSI) support VMAX storage arrays, providing equivalent functionality and differing only in their supported host attachment methods.
The drivers perform volume operations by communicating with the back-end VMAX using a Python CIM client called PyWBEM. In Ocata, this client performs all CIM operations over HTTP. The CIM server that enables CIM clients to perform CIM operations over HTTP is called the ECOM server (EMC CIM Object Manager). The ECOM server is packaged along with the Dell EMC SMI-S provider when installing Solutions Enabler. The Dell EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management.
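For concreteness, the connection details the driver uses to reach the ECOM server are supplied in an XML file referenced from cinder.conf. The sketch below is illustrative only: the IP, port, credentials, array ID, port group, and pool names are all placeholder values, with element names following the Ocata VMAX driver's documented format:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only: all values below are placeholders -->
<EMC>
  <EcomServerIp>10.10.10.10</EcomServerIp>
  <EcomServerPort>5989</EcomServerPort>
  <EcomUserName>admin</EcomUserName>
  <EcomPassword>password</EcomPassword>
  <PortGroups>
    <PortGroup>OS-PORTGROUP1-PG</PortGroup>
  </PortGroups>
  <Array>000123456789</Array>
  <Pool>SRP_1</Pool>
  <ServiceLevel>Diamond</ServiceLevel>
  <Workload>OLTP</Workload>
</EMC>
```

Port 5989 here reflects the SSL-only requirement noted below; we will walk through this file properly in the setup guide.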
There are a number of requirements which must be taken into consideration when preparing to set up VMAX with Openstack Ocata:
- The Cinder driver for Openstack Ocata supports the VMAX-3 series storage arrays
- Solutions Enabler 8.3.0.11 or later is required (note: this is SSL only, please refer to the 'SSL Support' section below)
- You can download Solutions Enabler here (login is required)
- When installing Solutions Enabler, make sure you explicitly add the SMI-S component
- Ensure that there is only one SMI-S (ECOM) server active on any given VMAX
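Once the requirements above are in place, the back end is ultimately wired into Cinder through cinder.conf. As a hedged sketch of what that looks like (the back-end name and XML file path are placeholders of my choosing; the driver class path is from the Ocata Cinder tree):

```ini
# Illustrative cinder.conf fragment; back-end name and file path are placeholders
[DEFAULT]
enabled_backends = VMAX_ISCSI_DIAMOND

[VMAX_ISCSI_DIAMOND]
volume_driver = cinder.volume.drivers.dell_emc.vmax.iscsi.VMAXISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_VMAX_ISCSI_DIAMOND.xml
volume_backend_name = VMAX_ISCSI_DIAMOND
```

For FC, the equivalent driver class is `cinder.volume.drivers.dell_emc.vmax.fc.VMAXFCDriver`. We will cover this configuration step by step in the setup guide.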
Minimum Required VMAX Software Suites for Openstack
There are five Software Suites available for the VMAX All Flash and Hybrid:
- Base Suite
- Advanced Suite
- Local Replication Suite
- Remote Replication Suite
- Total Productivity Pack
Of the five suites listed above, Openstack requires either:
- The Advanced Suite, Local Replication Suite, and Remote Replication Suite, or
- The Total Productivity Pack (it includes the suites listed in the previous point)
Each of the software suites is licensed separately. To activate your software suites and obtain your VMAX license files, visit the Service Center at https://support.emc.com/ as directed in your License Authorisation Code (LAC) email. For help with missing or incorrect entitlements after activation (that is, expected functionality remains unavailable because it is not licensed), contact your Dell EMC account representative or authorised reseller. For help with errors when applying license files through Solutions Enabler, contact the Dell EMC Customer Support Center. If you are missing a LAC letter or require further instructions on license activation, contact Dell EMC's Licensing team at email@example.com or alternatively call:
- North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.
- EMEA: +353 (0) 21 4879862 and follow the voice prompts.
VMAX & Openstack Supported Operations
To keep things simple and straightforward, a list of all supported features and operations can be found below; if it isn't below, then it isn't supported!
VMAX & Openstack supported features:
- Create, list, delete, attach, and detach volumes
- Manage & unmanage volumes
- Create, list, and delete volume snapshots
- Copy an image to a volume
- Copy a volume to an image
- Clone a volume
- Extend a volume
- Retype a volume (Host and storage assisted volume migration)
- Create a volume from a snapshot
- Create and delete consistency group
- Create and delete consistency group snapshot
- Modify consistency group (add and remove volumes)
- Create consistency group from source
- Create and delete generic volume group
- Create and delete generic volume group snapshot
- Modify generic volume group (add and remove volumes)
- Create generic volume group from source
- Live Migration
- Attach and detach snapshots
- Volume replication
- Dynamic masking view creation
- Dynamic determination of the target iSCSI IP address
- iSCSI multipath support
- Service Level support
- SnapVX support
- Compression support
It should be pointed out here that VMAX All Flash arrays running Solutions Enabler 8.3.0.11 or later have compression enabled by default when associated with the Diamond Service Level. This means volumes added to any newly created storage groups will be compressed.
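If you need uncompressed volumes on an All Flash array, one approach is a volume type carrying a disable-compression extra spec, which the driver checks when placing volumes. A sketch, assuming the `storagetype:disablecompression` extra spec from the Ocata driver documentation (the type name here is an arbitrary placeholder):

```shell
# Create a volume type and flag it so the VMAX driver skips compression
# (type name is a placeholder; extra spec per the Ocata VMAX driver docs)
cinder type-create VMAX_NO_COMPRESSION
cinder type-key VMAX_NO_COMPRESSION set storagetype:disablecompression=True
```

Volumes created with this type will then land in an uncompressed storage group; we will return to volume types and extra specs later in the series.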
OK! That's the formalities and necessities out of the way; now we can get into the fun part, using VMAX with Openstack! The first in-depth guide to feature in this series is the setup & installation of VMAX with Openstack; click here to go there straight away.