VPLEX: Virtual Volume with service status inactive

Article Number: 537392    Article Version: 2    Article Type: Break Fix

Product:

VPLEX VS2, VPLEX Series, VPLEX Metro, VPLEX VS6, VPLEX GeoSynchrony

Issue:

The virtual volume built on a distributed device shows a service-status of running on one cluster and a service-status of inactive on the other cluster, as seen below.

    VPlexcli:/clusters/cluster-1/virtual-volumes/device_problem_volume_vol> ll

    Name                        Value
    --------------------------  ----------------------------------------
    block-count                 104857920
    block-size                  4K
    cache-mode                  synchronous
    capacity                    400G
    consistency-group           Consistency_Group_x
    expandable                  true
    expandable-capacity         0B
    expansion-method            storage-volume
    expansion-status            -
    health-indications          []
    health-state                ok
    locality                    distributed
    operational-status          ok
    recoverpoint-protection-at  []
    recoverpoint-usage          -
    scsi-release-delay          0
    service-status              running
    storage-array-family        symmetrix
    storage-tier                -
    supporting-device           device_C1_problem_volume
    system-id                   device_problem_volume_vol
    thin-capable                true
    thin-enabled                disabled
    volume-type                 virtual-volume
    vpd-id                      VPD83T3:6000144000000010604145145xxxxxx

    VPlexcli:/clusters/cluster-2/virtual-volumes/device_problem_volume_vol> ll

    Name                        Value
    --------------------------  ----------------------------------------
    block-count                 104857920
    block-size                  4K
    cache-mode                  synchronous
    capacity                    400G
    consistency-group           Consistency_Group_x
    expandable                  true
    expandable-capacity         0B
    expansion-method            storage-volume
    expansion-status            -
    health-indications          []
    health-state                ok
    locality                    distributed
    operational-status          ok
    recoverpoint-protection-at  []
    recoverpoint-usage          -
    scsi-release-delay          0
    service-status              inactive
    storage-array-family        symmetrix
    storage-tier                -
    supporting-device           device_C2_problem_volume
    system-id                   device_problem_volume_vol
    thin-capable                true
    thin-enabled                disabled
    volume-type                 virtual-volume
    vpd-id                      VPD83T3:6000144000000010604145145xxxxxx

    All other health indicators for the virtual volume are good.
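
    The same check can be run against both clusters at once by using a cluster wildcard in the path, the same form the Cause section below uses for storage views:

    VPlexcli:/> ll /clusters/cluster-*/virtual-volumes/device_problem_volume_vol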

Cause:

The volume is exported via a storage view on both cluster-1 and cluster-2. The inactive state seen on cluster-2 in this example occurs because no front-end ports are assigned to the storage view on cluster-2, while front-end ports are declared in the storage view on cluster-1, as shown below:

    VPlexcli:/> ll /clusters/cluster-*/exports/storage-views/SV_problem_volume   
   
    /clusters/cluster-1/exports/storage-views/SV_problem_volume:
      Name                      Value     
      ------------------------  ------------------------------------------------------------------------------     
      caw-enabled               true     
      controller-tag            -     
      initiators                [A_host_HBA0, B_host_HBA1]     
      operational-status        ok     
      port-name-enabled-status  [P0000000047A0038A-A0-FC00,true,ok, P0000000047B00458-B0-FC01,true,ok]     
      ports                     [P0000000047A0038A-A0-FC00, P0000000047B00458-B0-FC01]     
      virtual-volumes           [(0,device_problem_volume_vol,VPD83T3:6000144000000010604145145xxxxxx,400G)]
      write-same-16-enabled     true     
      xcopy-enabled             true
   
   
   
    /clusters/cluster-2/exports/storage-views/SV_problem_volume:     
      Name                      Value     
      ------------------------  ------------------------------------------------------------------------------     
      caw-enabled               true     
      controller-tag            -     
      initiators                [A_host_HBA0, B_host_HBA1]     
      operational-status        ok     
      port-name-enabled-status  []     
      ports                     []     
      virtual-volumes           [(0,device_problem_volume_vol,VPD83T3:6000144000000010604145145xxxxxx,400G)]
      write-same-16-enabled     true     
      xcopy-enabled             true
   
   
    Note that the storage view on cluster-2 has no front-end ports declared in the port-name-enabled-status and ports fields.
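
    Before adding ports, the front-end ports available on cluster-2 can be reviewed from the cluster's exports ports context. A minimal sketch, assuming a standard GeoSynchrony CLI layout (the column output varies by release):

    VPlexcli:/> ll /clusters/cluster-2/exports/ports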

Resolution:

This is a configuration issue, not a fault or failure on the VPLEX. To bring the virtual volume into a running state, add front-end ports to both storage views where the virtual volume is exported, as in the sketch below.
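
A minimal sketch of the command follows. The port names are placeholders (elided with x characters, in the style used above) and must be replaced with actual front-end ports on cluster-2; verify the exact syntax in the CLI guide for the running GeoSynchrony release:

    VPlexcli:/> export storage-view addport -v /clusters/cluster-2/exports/storage-views/SV_problem_volume -p P000000004xxxxxxx-A0-FC00,P000000004xxxxxxx-B0-FC01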
   
   
For full instructions on adding or removing front-end ports in a storage view, consult the VPLEX Administration Guide for the applicable GeoSynchrony release, available on www.support.dell.com.
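
After the ports are added, re-running the long listing against the virtual volume on cluster-2 should show the service-status change from inactive to running:

    VPlexcli:/clusters/cluster-2/virtual-volumes/device_problem_volume_vol> ll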