This white paper explains new functionality in ViPR 1.1 that allows discovery and use of pre-existing Masking Views on a VMAX for hosting VPLEX volumes.
Thomas Lee Watson
This document describes new functionality in ViPR® 1.1 that allows discovery and use of pre-existing masking views on a VMAX® or VNX® for hosting VPLEX® volumes. This work is most beneficial on the VMAX, which is described in detail here. There is some benefit to applying a similar technique to the VNX, but that is not the focus of this document. Supporting other arrays as a backend to the VPLEX is not discussed here.
The motivations for providing this support include:
- Some customers desire explicit control of how many masking views are to be used between the VPLEX and a VMAX.
- The existing code automatically generates only one Masking View, which limits the number of volumes that can be created.
- Some customers desire to configure VPLEX meta-data in a redundant configuration where the same two arrays supply meta-data and logging volumes to both VPLEX clusters.
The details of how to set up such a configuration are described in this document. There is a substantial amount of manual configuration required to enable support for Multiple Masking Views. You should read this document completely and adhere to all its recommendations and requirements to ensure a successful configuration.
This white paper is written for data center administrators and ViPR system administrators.
There are two principal scalability reasons that more than one Masking View is required. These are:
- The VMAX has a limit that only 4096 volumes can exist in an Initiator Group. If cascaded Initiator Groups are used with separate Masking Views, this limit can be avoided.
- There is a limit of 4096 volumes that can be processed by a single VMAX CPU. Each pair of ports (such as FA7E:0 and FA7E:1) shares a single VMAX CPU. Further, this CPU limitation is affected by whether or not an exported volume is a meta volume; if so, each meta component of the exported volume counts toward the VMAX CPU volume limit. Also, if the same volume is referenced from both ports on a CPU, it counts only once toward the limit.
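The arithmetic behind these limits can be sketched in a short Python snippet. The 4096-volume constants come from the limits above; the function and constant names are illustrative, not part of any VMAX API:

```python
# Capacity math for the VMAX limits described above. The 4096-volume
# limits come from the text; the helper is purely illustrative.

IG_VOLUME_LIMIT = 4096   # max volumes in an Initiator Group
CPU_VOLUME_LIMIT = 4096  # max volumes processed by one VMAX CPU

def max_exported_volumes_per_cpu(meta_members: int = 1) -> int:
    """Each meta component of an exported volume counts toward the
    CPU limit, so a 4-member meta volume consumes four slots."""
    return CPU_VOLUME_LIMIT // meta_members

print(max_exported_volumes_per_cpu())   # 4096 non-meta volumes
print(max_exported_volumes_per_cpu(4))  # 1024 four-member meta volumes
```

In other words, exporting four-member meta volumes cuts the effective per-CPU capacity to one quarter of the nominal limit.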
When ViPR receives a request to provision a VPLEX Local virtual volume, the following operations happen:
- ViPR creates a volume on a storage array to hold the data for the virtual volume. This volume could be termed the “backing volume”, and the array the “backing array”, as they provide the storage for the virtual volume.
- ViPR will read the existing masking views off the backing array that contain the initiators which are the VPLEX back-end ports. (The VPLEX back-end ports are used exclusively to handle traffic to backing arrays. The backing arrays see the VPLEX back-end ports as initiators.)
- If one or more existing masking views can be validated to meet ViPR requirements, the volume(s) are added to the validated masking view with the lowest volume count.
- Otherwise ViPR automatically creates a single cascaded Storage Group and Masking view to hold backing volumes for a particular VPLEX on the backing array.
- If ViPR created the Masking View that was used, ViPR ensures that zones are created for the Initiator-to-Target mappings in the Masking View when the ViPR auto_san_zoning boolean in the Varray is true. If ViPR used an existing Masking View on the backing array that it did not create, no zoning is attempted (because the zoning should also have been manually configured).
- ViPR discovers and claims the backing volume in the VPLEX cluster and uses it to build a VPLEX virtual volume.
In more complicated VPLEX provisioning cases (as determined by the virtual pool parameters), ViPR may create more than one backing volume per virtual volume. For example, backing volumes on two different arrays, one within each VPLEX cluster, are used to construct a distributed virtual volume.
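The selection step of the algorithm above can be sketched as follows. This is a simplified illustration, not ViPR source code; the dictionary fields and function names are assumptions:

```python
# Simplified sketch of ViPR's export-mask selection: prefer a validated
# existing masking view with the fewest volumes; if none qualifies, the
# caller falls back to creating a new cascaded Storage Group and
# Masking View. Field names are hypothetical.

def select_masking_view(existing_masks, is_valid):
    candidates = [m for m in existing_masks if is_valid(m)]
    if not candidates:
        return None  # triggers automatic creation of a new Masking View
    return min(candidates, key=lambda m: m["volume_count"])

masks = [
    {"name": "VPLEX154A", "volume_count": 3},
    {"name": "VPLEX154_no_vipr", "volume_count": 0},
]
# Example validation rule: exclude masks whose name contains NO_VIPR.
chosen = select_masking_view(masks, lambda m: "NO_VIPR" not in m["name"].upper())
print(chosen["name"])  # VPLEX154A
```

Note that the NO_VIPR mask is excluded even though it holds the fewest volumes; only validated masks compete on volume count.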
If a user wants to pre-define multiple masking views on a VMAX, this must be done before using ViPR to create a VPLEX virtual volume on the specified backend array. If creation of a virtual volume is attempted before the predefined masking views are in place, the algorithm above shows that ViPR will automatically create a masking view, which will likely conflict with any predefined masking views the administrator subsequently creates.
If a volume is inadvertently created on the VPLEX that automatically creates an undesired VMAX Masking View, we recommend removing the volume; when the last volume is deleted, the automatically created masking view should be removed as well. Then proceed with the manual configuration of the desired Masking Views. When complete, you may attempt creation of the VPLEX volume(s) again.
Planning the VMAX Masking Views
You should carefully plan the layout of your VMAX Masking Views that are to be pre-created. The number of Masking Views (MVs) that can be created depends on several factors, including:
- How many VPLEX back ports are on fabrics/vsans that are connected to the VMAX
- How many VMAX ports are on those same fabrics
- How many director CPUs are used for the ports
- Redundancy considerations for the VMAX ports, i.e. we prefer ports on different directors or engines to be used together for a MV.
There are a few basic rules that must be satisfied for a viable VPLEX configuration, according to the VPLEX best practices:
- Every director must have at least two paths to all storage.
- No director should have more than four paths to any storage. Having more than four paths causes issues with timeouts taking too long before switching to alternate directors. This can cause connectivity loss.
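A hypothetical check of these two path rules might look like the following sketch (the director names and input mapping are illustrative):

```python
# Check the VPLEX best-practice path rules: every director needs at
# least two, and no more than four, paths to any storage.

def check_director_paths(paths_per_director):
    problems = []
    for director, paths in paths_per_director.items():
        if paths < 2:
            problems.append(f"{director}: only {paths} path(s); at least two required")
        elif paths > 4:
            problems.append(f"{director}: {paths} paths; more than four risks timeout issues")
    return problems

for issue in check_director_paths({"dir-1A": 4, "dir-1B": 1, "dir-2A": 5}):
    print(issue)
```

In this example, dir-1A passes, dir-1B is flagged for too few paths, and dir-2A for too many.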
The VPLEX back-end ports are used as Initiators to the VMAX. In this document, the term Initiators refers to the VPLEX back end ports that are used for communication with the VMAX.
You must create an Initiator Group on the VMAX consisting of VPLEX initiators (VPLEX back-end ports) from one of the VPLEX clusters. The Initiator Group should consist of at least two initiators from each VPLEX director on either cluster-1 or cluster-2 (but not both clusters). Ideally, the initiators are split across two different networks for redundancy.
If all four back-end ports on every director in a VPLEX cluster can be connected to a specific VMAX, it is possible to split the initiators into two groups, one containing two ports from each VPLEX director, and the other containing the other two ports from each VPLEX director. Within each group, a VPLEX director’s two ports should be on different networks so as to avoid failure caused by a network outage. However, in the example configurations below all the VPLEX initiators from one cluster are included in a single Initiator Group.
For each Masking View you want to create, set up a separate cascaded Initiator Group (parent) that includes as its only member the Initiator Group (child) containing the VPLEX initiators. This parent cascaded Initiator Group is associated with the Masking View. The Initiator Group containing the VPLEX initiators should not be directly associated with any Masking Views. Following this strategy allows each Masking View to have a separate HLU space of 4096 LUNs.
As an example, with four ports on the VMAX that can be connected to the VPLEX initiators, you could set up the following Masking Views:
For these examples, assume there are two Networks, NetA and NetB, and that all the even VPLEX and VMAX ports are on NetA, and all the odd VPLEX and VMAX ports are on NetB. (Many other valid configurations are possible. This is one example.)
With eight ports, you can increase scalability by utilizing additional VMAX director CPUs:
So with eight ports, each MV uses a disjoint set of director CPUs. MV1 uses the CPUs 7E and 10E, and MV2 uses the CPUs 7F and 10F. Now each Masking View can scale to 4096 (non-meta) volumes. Since volumes will be split evenly across the MVs, more total volumes (8192) can be supported.
With even more ports available, and four Cascading Initiator Groups, you can create four Masking Views, while still using a unique set of director CPUs for each MV. Consider sixteen ports:
If you double the number of ports again to thirty-two, you can support eight masking views in a similar configuration. The following are some observations from these simple examples:
- You should use a separate Cascading Initiator Group parent for each Masking View. The child Initiator Group(s) (containing the VPLEX initiators) should not be directly associated with a Masking View.
- If you want four paths per director, you need a minimum of four ports in each masking view. This assumes that all VPLEX directors share the same four ports. This means an upper bound on the number of MVs is the number of ports divided by four, assuming the ports are evenly split across networks and each VMAX CPU has one port connected to each network.
- You could potentially get more overall bandwidth from a MV by assigning more than four ports per MV, but each director can only use a maximum of four ports.
- You can use the two ports from a single VMAX CPU on different networks within the same masking view without suffering any scalability loss.
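These observations reduce to simple arithmetic. The following sketch (a hypothetical planning helper, not a ViPR or VMAX tool) computes the MV upper bound and total non-meta volume capacity for the port counts used in the examples above:

```python
# Planning arithmetic from the observations above: four ports per
# Masking View gives the MV upper bound, and each MV scales to 4096
# non-meta volumes.

def plan_masking_views(connected_ports: int, volumes_per_mv: int = 4096):
    mv_count = connected_ports // 4  # upper bound: four ports per MV
    return mv_count, mv_count * volumes_per_mv

for ports in (4, 8, 16, 32):
    mvs, capacity = plan_masking_views(ports)
    print(f"{ports} ports -> {mvs} Masking View(s), up to {capacity} volumes")
```

This reproduces the progression in the examples: eight ports support two MVs (8192 volumes), sixteen support four, and thirty-two support eight.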
When ViPR receives a request to create virtual volumes using storage provided by a VMAX, it reads the existing Masking Views on the VMAX and determines if there are any suitable Masking Views in which it can place the volume. This determination is made each time a backing volume is created on the VMAX and needs to be exported to the VPLEX.
Although ViPR reads and checks the Masking View, no attempt is made to read the zoning information that would map the VPLEX back-end ports to the VMAX ports unless ViPR created the Masking View.
ViPR imposes certain restrictions on what it considers a valid Masking View, based on the best practices and the observations above. Here are the validations it performs for each Masking View:
Restriction: A masking view must contain at least two initiators from each VPLEX director.
Reason: VPLEX best practice dictates this for redundancy.

Restriction: A masking view must have at least two usable array ports. A warning is issued if there are fewer than four usable ports. For a port to be usable, it must be:
- On a Network that connects the VPLEX initiators and the storage array, and
- Assigned to the Virtual Array in which the volume(s) are being created.
Reason: If there are fewer than two ports, there is no redundancy. If there are fewer than four ports, the redundancy is sub-optimal because the MV cannot provide the optimum of four paths per director.

Restriction: A masking view must not have initiators from both VPLEX clusters. Only initiators from directors on one of the VPLEX clusters are allowed.
Reason: If both clusters have initiators in the masking view, volumes created on the VMAX will be visible to both VPLEX clusters, which will cause ViPR provisioning errors.

Restriction: The masking view name should not contain the characters "NO_VIPR" in either upper or lower case. Masking views with NO_VIPR in their name are interpreted to mean that the administrator wants them to be ignored.
Reason: This allows the administrator to set up special masking views for cross-connected VPLEX metadata or logging volumes.
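These validations can be summarized in a short Python sketch. This is illustrative only, with assumed field names, and is not ViPR source code; errors disqualify a mask while warnings do not:

```python
# Illustrative version of ViPR's masking-view validations (field names
# are assumptions). Errors disqualify the mask; warnings do not.

def validate_masking_view(mask):
    errors, warnings = [], []
    if "NO_VIPR" in mask["name"].upper():
        errors.append("name contains NO_VIPR; mask is excluded")
    if len(mask["vplex_clusters"]) > 1:
        errors.append("contains initiators from both VPLEX clusters")
    for director, count in mask["initiators_per_director"].items():
        if count < 2:
            errors.append(f"{director} has fewer than two initiators")
    if mask["usable_ports"] < 2:
        errors.append("fewer than two usable array ports")
    elif mask["usable_ports"] < 4:
        warnings.append("fewer than four usable ports (sub-optimal redundancy)")
    return errors, warnings

errors, warnings = validate_masking_view({
    "name": "VPLEX154BadMixedClusters",
    "vplex_clusters": {"cluster-1", "cluster-2"},
    "initiators_per_director": {"dir-1A": 2, "dir-1B": 2},
    "usable_ports": 2,
})
print(errors)    # the mixed-cluster error
print(warnings)  # the port-count warning
```

This mirrors the log excerpts later in this document, where the mixed-cluster mask is disqualified and masks with only two target ports draw a warning.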
There are other restrictions on Masking Views that ViPR does not enforce. These restrictions must be obeyed by the administrator:
Restriction: There must not be more than four paths from any director to the backing array. This is VPLEX best practice.
Reason not validated: ViPR does not read zoning for manually created Masking Views. This allows the administrator the most freedom in how zones are created, but places more responsibility for verifying configuration correctness on the administrator.

Restriction: The Masking View must provide redundant connectivity between every director and the back-end array. If volumes are added to manually created masking views that do not provide the required connectivity, provisioning of the virtual volume will fail.
Reason not validated: Since ViPR does not read the zoning information for manually created masking views, this cannot be validated. The administrator is responsible for ensuring connectivity before attempting provisioning.

Restriction: A cascading set of Storage Groups must be created for each manually created masking view. The parent Storage Group can be named xxx (where xxx is any acceptable name), but the child Storage Group must be called xxx_SG_NonFast. This allows the ViPR FAST processing logic to put volumes without a FAST policy in the "NonFast" Storage Group. ViPR will add additional child Storage Groups for each FAST policy that is applied to volumes. If you do not use this naming convention, ViPR may not be able to properly provision FAST virtual volumes using the backing array.
Reason not validated: Not validated at this time.

Restriction: An Initiator should not be in multiple Initiator Groups.
Reason not validated: Not validated at this time.

Restriction: The same array port should not be used in multiple masking views.
Reason not validated: Not validated at this time.
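Of these unvalidated rules, the Storage Group naming convention is the easiest to get wrong. A small sketch of the rule (the helper names are hypothetical):

```python
# The cascaded Storage Group naming rule described above: for a parent
# Storage Group named xxx, the child must be named xxx_SG_NonFast.

def nonfast_child_name(parent_sg: str) -> str:
    return f"{parent_sg}_SG_NonFast"

def follows_convention(parent_sg: str, child_sg: str) -> bool:
    return child_sg == nonfast_child_name(parent_sg)

print(nonfast_child_name("VPLEX154A"))                  # VPLEX154A_SG_NonFast
print(follows_convention("VPLEX154A", "VPLEX154A_SG"))  # False
```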
Example Validation Messages
When ViPR validates existing Export Masks, it logs details about each mask and its validity in the controllersvc.log. Here are some sample messages with explanations:
controllersvc.log.20140208-122359:2014-02-08 10:57:24,346 [pool-5- Searching for existing ExportMasks between VPLEX VPLEX _device (VPLEX +FNM00114300288:FNM00114600001) and Array SYMMETRIX+000195701573 (SYMMETRIX+000195701573) in Varray urn:storageos:VirtualArray:93b108fd-816e-4660-8810-f0ebf64f7a4c: (indication it is searching for existing masks, which are listed just below)
Mask VPLEX 154_no_vipr (urn:storageos:ExportMask:127ac0e3-cf89-465c-a536-903318f8b821:) Externally created
Mask VPLEX 154BadMixedClusters (urn:storageos:ExportMask:0b093342-854d-452d-9e39-620096356f83:) Externally created
Mask VPLEX 154A (urn:storageos:ExportMask:c549d5f9-e172-4a32-b06f-eb6afb15edbc:) Externally created
Mask VPLEX 154C (urn:storageos:ExportMask:8c86bb7e-6326-4a7b-8bf0-14a876461050:) Externally created
Mask Vpex154B (urn:storageos:ExportMask:e8782b18-1894-4c8c-bda6-c8a2da60cdaf:) Externally created
Validating ExportMask VPLEX 154_no_vipr (indicates validating a specific mask)
Warning: ExportMask VPLEX 154_no_vipr has only 2 target ports (best practice is at least four)
ExportMask VPLEX 154_no_vipr disqualified because the name contains NO_VIPR (in upper or lower case) to exclude it (indicates validation failed and why)
Validating ExportMask VPLEX 154BadMixedClusters
Warning: ExportMask VPLEX 154BadMixedClusters has only 2 target ports (best practice is at least four)
ExportMask VPLEX 154BadMixedClusters disqualified because it contains wwns from both VPLEX clusters
Validating ExportMask VPLEX 154A
Warning: ExportMask VPLEX 154A has only 2 target ports (best practice is at least four)
Validation of ExportMask VPLEX 154A passed; it has 3 volumes (indicates validation of an Export Mask succeeded)
Validating ExportMask VPLEX 154C
Warning: ExportMask VPLEX 154C has only 2 target ports (best practice is at least four)
Validation of ExportMask VPLEX 154C passed; it has 0 volumes
Validating ExportMask Vpex154B
Warning: ExportMask Vpex154B has only 2 target ports (best practice is at least four)
Validation of ExportMask Vpex154B passed; it has 1 volumes
Returning new ExportGroup VPLEX _FNM00114300288:FNM00114600001_000195701573_f92d981d
Returning ExportMask VPLEX 154C (urn:storageos:ExportMask:8c86bb7e-6326-4a7b-8bf0-14a876461050:)
(indicates which ExportMask was selected for use)
This section shows how to set up a simple multiple Masking View configuration on a VMAX. Instructions are provided in a step-by-step sequence. You must use values and configuration parameters appropriate to the specific VPLEX and VMAX you are configuring.
If you have not planned your configuration of Port Groups, Initiator Groups, and Masking Views, do so before proceeding.
Step 1: Configure the Zoning
In the simple configuration described in this section, there are two VPLEX directors in a VPLEX cluster. Each director has two back-end ports connected to the backing array. There is only a single Network. There are six ports on the backing array in that Network. There is one zone containing the four VPLEX back-end ports and the six array ports:
Zoning must be successfully completed for the VMAX to “see” the Initiators on the fabric, allowing you to easily create the Initiator groups in Unisphere.
Step 2: Configure the Initiator Groups
Configure Initiator Groups for the VPLEX back-end ports. In this example configuration, there are four ports per VPLEX director connected to the array (using two Networks). Only one Initiator Group is built.
Step 3: Create the Cascaded Initiator Groups
For each Masking View to be created, create a Cascaded Initiator Group parent that holds the above Initiator Group:
Repeat as necessary to have a Cascaded Initiator Group for each Masking View:
Step 4: Configure the Port Groups
This example configuration has a very limited number of ports. The administrator can only afford two ports per Masking View using the six available ports. This does not allow for an ideal level of redundancy.
Create three Port Groups:
Step 5: Create the Cascaded Storage Groups
This step is repeated once for each Masking View. This procedure shows the creation steps necessary for one Masking View.
This step creates a child Storage Group with an unused, arbitrary volume. (Unisphere does not allow you to create a Storage Group without a volume.)
Create an arbitrarily small (unused) volume for this Storage Group:
Click Finish to complete the request.
Verify that the new Storage Group is created:
Now create a cascading Storage Group to hold the previously created Storage Group which becomes the child:
Select the previously created Storage Group as the child:
Click Finish on this screen:
Verify the configuration of the cascaded Storage Groups:
Repeat Step 5 as necessary so as to create a pair of cascaded Storage Groups for each Masking View that is planned.
Step 6: Create the Masking Views
Using the components you have created, you may now assemble the Masking Views. This example shows creating a single Masking View for the cascaded Storage Groups created above. Repeat this step for each Masking View.
Create the Masking View using the Parent Storage Group, with the correct Initiator Group and Port Group:
You cannot be certain your manually created Export Masks are valid until you create at least as many volumes using ViPR as the number of Export Masks. ViPR should round-robin the assignment of volumes to Export Masks.
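The expected round-robin behavior follows from the lowest-volume-count selection rule described earlier. A quick simulation (illustrative only, not ViPR source):

```python
# Simulate volume placement: each new backing volume goes to the valid
# Export Mask with the fewest volumes, so volumes spread evenly over time.

def assign_volumes(mask_counts, n_volumes):
    counts = dict(mask_counts)
    for _ in range(n_volumes):
        target = min(counts, key=counts.get)
        counts[target] += 1
    return counts

print(assign_volumes({"MV1": 0, "MV2": 0, "MV3": 0}, 6))
# each mask ends up with two volumes
```

Creating at least as many volumes as there are Export Masks therefore exercises every mask at least once, which is why that is the minimum test.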
This section describes special situations that might arise in the design or provisioning of Export Masks on the VMAX for use by a VPLEX.
The only reason to cross-connect VPLEX Back-End Ports to VMAX Arrays at different sites (using different clusters) is so that in situations with very few arrays, the VPLEX Metadata and logging volumes can be protected using mirroring.
In this configuration, zoning and masking are set up so that Initiators from both VPLEX clusters can access targets on the array(s). ViPR does not support this configuration for provisioning and must not use Export Masks configured this way. There are two ways this is prevented:
- ViPR performs an explicit validation check so that masking views containing Initiators from both clusters of a VPLEX are not used.
- The administrator should include “NO_VIPR” or “no_vipr” in the Export Mask name so that ViPR will not attempt to use it and it’s clear that it was not intended for ViPR.
Copyright © 2014 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
VMware is a registered trademark of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners.
Part Number: H12994