VMware vSphere Virtual Volumes is an integration and management framework for external storage (SAN and NAS). This framework allows customers to easily assign and manage storage capabilities on a per-application (per-VM) basis at the hypervisor level using Storage Policy-Based Management (SPBM).
Virtual Volumes is an industry-wide initiative that allows customers to leverage the unique capabilities of their existing storage investments and transition, without disruption, to a simpler and more efficient operational model that is optimized for virtual environments and works across all storage types.
Figure 1: Virtual Volumes Partner Ecosystem
Virtual Volumes enables application-specific requirements to drive storage provisioning decisions while leveraging the rich set of capabilities provided by existing storage arrays. Some of the primary benefits delivered by Virtual Volumes are focused around operational efficiencies and flexible consumption models.
Virtual Volumes simplifies storage operations by automating manual tasks and eliminating operational dependencies between the vSphere Admin and the Storage Admin. Provisioning is faster, and change management is simpler as the new operational model is built upon policy-driven automation.
Virtual Volumes simplifies the delivery of storage service levels to applications by providing administrators with finer control of storage resources and data services at the VM level that can be dynamically adjusted in real time.
Virtual Volumes improves resource utilization by enabling more flexible consumption of storage resources, when needed and with greater granularity. This precise consumption of storage resources eliminates overprovisioning. The Virtual Datastore defines capacity boundaries and access logic, and exposes a set of data services accessible to the virtual machines provisioned in the pool.
Virtual Datastores are purely logical constructs that can be configured on the fly, when needed, without disruption and don’t require formatting with a file system.
Historically, vSphere storage management has been based on constructs defined by the storage array: LUNs and filesystems. A storage administrator would configure array resources to present large, homogeneous storage pools that would then be consumed by the vSphere administrator. Since a single, homogeneous storage pool could contain many different applications and virtual machines, this approach resulted in needless complexity and inefficiency, and vSphere administrators could not easily specify requirements on a per-VM basis.
Also, changing service levels for a given application usually meant relocating the application to a different storage pool. Storage administrators had to forecast well in advance what storage services might be needed in the future, usually resulting in the overprovisioning of resources.
With Virtual Volumes, this approach is fundamentally changed. vSphere administrators use policies to express application requirements to a storage array. The storage array responds with an individual storage container that precisely maps to application requirements and boundaries.
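The policy-to-container interaction above can be sketched in a few lines. This is an illustrative model only, not the SPBM or VASA API: the container names, capability keys, and matching rules are all assumptions made up for the example.

```python
# Hypothetical capability profiles that storage containers advertise to vSphere.
containers = {
    "gold":   {"replication": True,  "snapshots": True,  "max_latency_ms": 5},
    "silver": {"replication": False, "snapshots": True,  "max_latency_ms": 10},
}

def is_compliant(policy, capabilities):
    """A container is compliant when it satisfies every requirement in the policy."""
    for key, required in policy.items():
        offered = capabilities.get(key)
        if isinstance(required, bool):
            if offered != required:
                return False
        else:  # numeric requirement: the container must meet or beat it
            if offered is None or offered > required:
                return False
    return True

def place_vm(policy, containers):
    """Return the first container whose advertised capabilities satisfy the policy."""
    for name, caps in containers.items():
        if is_compliant(policy, caps):
            return name
    return None  # no compliant storage: provisioning fails fast, by policy

# An application needing replication and <= 5 ms latency lands on "gold".
oracle_policy = {"replication": True, "snapshots": True, "max_latency_ms": 5}
print(place_vm(oracle_policy, containers))  # -> gold
```

The key point the sketch captures is the direction of the conversation: the administrator states requirements, and the array (not the administrator) determines which storage can deliver them.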
Typically, the datastore is the lowest level of granularity at which data management occurs from a storage perspective. However, a single datastore contains multiple virtual machines, which might have different requirements. With the traditional approach, differentiation at the per-virtual-machine level is difficult.
The Virtual Volumes functionality allows virtual machine services to be differentiated at the per-application level by offering a new approach to storage management. Rather than arranging storage around the features of a storage system, Virtual Volumes arranges storage around the needs of individual virtual machines, making storage virtual-machine centric.
Virtual Volumes maps virtual disks and their respective components directly to objects, called virtual volumes, on a storage system. This mapping allows vSphere to offload intensive storage operations such as snapshotting, cloning, and replication to the storage system.
vSphere Virtual Volume Architecture
Virtual Volumes creates a significantly different and improved logical storage architecture that allows operations to be conducted at the virtual machine level, while still using native capabilities.
There are five types of Virtual Volumes objects, and each maps to a specific virtual machine file:
Figure 3: vSphere Virtual Volumes Object Types
Config – VM Home, Configuration files, logs
Data – Equivalent to a VMDK
Memory – Memory snapshots
SWAP – Virtual machine memory swap
Other – vSphere solution-specific object
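The mapping of VM files to the five object types above can be illustrated with a small classifier. This is a simplified sketch, not ESXi logic; the function name is an assumption, though the file extensions (.vmx, .vmdk, .vmem, .vswp) are the standard VM file types.

```python
# The five Virtual Volumes object types, keyed by what they hold.
VVOL_TYPES = {
    "config": "VM home, configuration files, logs",
    "data":   "equivalent to a VMDK",
    "memory": "memory snapshots",
    "swap":   "virtual machine memory swap",
    "other":  "vSphere solution-specific object",
}

def classify_vm_file(filename):
    """Pick the VVol object type a given VM file would map to (illustrative)."""
    if filename.endswith((".vmx", ".log", ".nvram")):
        return "config"   # VM home and configuration files
    if filename.endswith(".vmdk"):
        return "data"     # each virtual disk becomes its own data VVol
    if filename.endswith(".vmem"):
        return "memory"   # memory state captured with a snapshot
    if filename.endswith(".vswp"):
        return "swap"     # created only while the VM is powered on
    return "other"

print(classify_vm_file("oracle12c.vmdk"))  # -> data
print(classify_vm_file("oracle12c.vswp"))  # -> swap
```

Because each file maps to its own array-side object, operations like snapshots can target exactly one virtual disk rather than an entire LUN.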
The Vendor Provider (VP), also known as the VASA provider, is a storage-side software component that acts as a storage awareness service for vSphere and mediates out-of-band communication between vCenter Server and ESXi hosts on one side and the storage system on the other. Vendor providers are developed exclusively by storage vendors.
Figure 4: Vendor (VASA) Provider
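The role of the vendor provider can be sketched as a small control-path service. The class and method names here are hypothetical; the real VASA interface is a web-service API defined by VMware, not shown here.

```python
class VendorProvider:
    """Array-side awareness service queried out-of-band by vCenter and ESXi.

    Illustrative model only: it answers control-path questions about the
    array's capabilities, while VM I/O flows separately on the data path.
    """

    def __init__(self, array_name, capabilities):
        self.array_name = array_name
        self.capabilities = capabilities

    def query_capabilities(self):
        # Out-of-band communication: metadata only, never VM I/O.
        return {"array": self.array_name, "capabilities": sorted(self.capabilities)}

# A hypothetical provider registered for an EMC VMAX3 array.
vp = VendorProvider("VMAX3", ["snapshots", "replication", "deduplication"])
print(vp.query_capabilities()["array"])  # -> VMAX3
```

The separation matters: if the vendor provider is briefly unavailable, running VMs keep doing I/O, because the data path does not pass through it.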
Unlike traditional LUN- and NFS-based vSphere storage, the Virtual Volumes functionality does not require preconfigured volumes on the storage side. Instead, Virtual Volumes uses a Storage Container, which is a pool of raw storage capacity and/or an aggregation of storage capabilities that a storage system can provide to virtual volumes.
Figure 5: Storage Containers
A Virtual Datastore represents a storage container in a vCenter Server instance and the vSphere Web Client, with a one-to-one mapping to the storage system's storage container. The storage container (Virtual Datastore) is the logical pool in which individual virtual volumes, including VMDKs, are created.
Figure 6: Virtual Datastore
Because storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the Protocol Endpoint, to communicate with virtual volumes and the virtual disk files those volumes encapsulate.
ESXi uses Protocol Endpoints to establish a data path on demand from virtual machines to their respective virtual volumes.
Protocol Endpoints are compatible with the industry-standard SAN/NAS protocols:
Fibre Channel (FC)
Fibre Channel over Ethernet (FCoE)
iSCSI
NFS
Figure 7: Protocol Endpoint
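The proxy behavior can be sketched as follows. This is a conceptual model with made-up class and method names, not an ESXi interface: the point is that the host binds a virtual volume through the endpoint on demand and routes all I/O through that binding.

```python
class ProtocolEndpoint:
    """Logical I/O proxy between an ESXi host and array-side virtual volumes
    (illustrative sketch, not an ESXi API)."""

    def __init__(self, protocol):
        self.protocol = protocol   # e.g. "FC", "FCoE", "iSCSI", "NFS"
        self.bindings = set()      # virtual volumes currently bound

    def bind(self, vvol_id):
        # The data path is established on demand, when the VM needs I/O.
        self.bindings.add(vvol_id)

    def write(self, vvol_id, data):
        # All I/O is routed through the endpoint, never sent to a VVol directly.
        if vvol_id not in self.bindings:
            raise RuntimeError(f"{vvol_id} is not bound through this endpoint")
        return f"{self.protocol}: wrote {len(data)} bytes to {vvol_id}"

pe = ProtocolEndpoint("iSCSI")
pe.bind("vvol-data-001")
print(pe.write("vvol-data-001", b"redo log"))  # -> iSCSI: wrote 8 bytes to vvol-data-001
```

One consequence of this design is scale: a few protocol endpoints can front thousands of virtual volumes, instead of the host addressing each volume individually.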
In Part III of this blog series, I will introduce how vSphere Virtual Volumes helps implement Oracle Database 12c with the best performance, protection, and recovery on EMC VMAX3…
Software-defined storage (SDS) is an evolving approach in which software manages policy-based provisioning and management of data storage independently of the underlying hardware. Software-defined storage definitions typically include a form of storage virtualization to separate the storage hardware from the software that manages the storage infrastructure. The software enabling a software-defined storage environment may also provide policy management for feature options such as deduplication, replication, thin provisioning, snapshots, and backup.
VMware’s Software-Defined Storage vision and strategy is to drive transformation through the hypervisor, bringing to storage the same operational efficiency that server virtualization brought to compute. As the abstraction between applications and available resources, the hypervisor can balance all IT resources – compute, memory, storage, and networking – needed by an application.
VMware vSphere Virtual Volumes™ is an integration and management framework for external storage. Virtual Volumes streamlines storage operations through policy-driven automation, enabling more agile storage consumption for VMs. Virtual Volumes simplifies the delivery of storage service levels to individual applications by providing finer control of storage resources at VM granularity. Overprovisioning is eliminated, as each VM consumes exactly the resources it needs – nothing less, nothing more.
By transitioning from the legacy storage model to Software-Defined Storage (SDS) with Virtual Volumes, customers will gain the following benefits:
Automation of storage ‘class-of-service’ at scale
Simple change management using policies
Finer control of storage class of service
Effective monitoring/troubleshooting with per VM visibility
Safeguard existing investment
The goal of Software-Defined Storage (SDS) is to introduce a new approach that enables a more efficient and flexible operational model for storage in virtual environments, as shown in the diagram below:
The virtual data plane is responsible both for storing data and for applying data services. Unlike the legacy physical storage model, the virtual data plane abstracts physical hardware resources and aggregates them into logical pools of capacity that can be flexibly consumed and managed, making the virtual disk the fundamental unit of management around which all storage operations are controlled. As a result, the exact combination of data services can be instantiated and controlled independently for each VM. This allows per-application storage policies, ensuring simpler yet individualized management of applications without the need to map applications to broad constructs such as a physical datastore.
In the Software-Defined Storage (SDS) environment, the storage infrastructure expresses its available data services and capabilities (compression, replication, caching, snapshots, deduplication, availability, etc.) to the control plane, enabling automated provisioning and dynamic control of storage service levels through programmatic APIs. These storage services may come from many different sources: directly from a storage array, from a software solution within vSphere itself, or from a third party via API. This gives administrators the ability to create a unique policy for each VM in accordance with its business requirements, consuming data services from a different provider in each case.
Policy-Driven Control Plane
In the VMware Software-Defined Storage (SDS) model, the control plane acts as the bridge between applications and storage infrastructure, providing a standardized management framework for provisioning and consuming storage across all tiers, whether on external arrays, x86 server storage, or cloud storage. It is also the management layer responsible for controlling and monitoring storage operations.
Unlike the legacy storage model, where the control plane is typically tied to each storage device and resources and data services are static pre-allocations bound to the infrastructure, in the Software-Defined Storage (SDS) model classes of service become logical entities controlled entirely by software and interpreted through policies. This makes the SDS model able to adapt more flexibly to ongoing changes in application requirements. Policies become the control mechanism used to automate monitoring and to ensure compliance with storage service levels throughout the lifecycle of an application. This policy-driven control plane is delivered through Storage Policy-Based Management (SPBM).
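The lifecycle-compliance idea can be sketched briefly. This is a hypothetical model, not the SPBM API: assigned policies are periodically re-checked against what the storage currently delivers, flagging any VM that has drifted out of compliance.

```python
def check_compliance(assignments, delivered):
    """Compare each VM's assigned policy against the currently delivered
    service level (illustrative sketch of policy-driven monitoring)."""
    report = {}
    for vm, policy in assignments.items():
        actual = delivered.get(vm, {})
        # Compliant only if every policy requirement is still being met.
        report[vm] = all(actual.get(key) == value for key, value in policy.items())
    return report

# Hypothetical policy assignments and the service levels currently delivered.
assignments = {
    "oracle-vm": {"replication": True,  "snapshots": True},
    "dev-vm":    {"replication": False, "snapshots": True},
}
delivered = {
    "oracle-vm": {"replication": True,  "snapshots": True},
    "dev-vm":    {"replication": False, "snapshots": False},  # drifted
}
print(check_compliance(assignments, delivered))
# -> {'oracle-vm': True, 'dev-vm': False}
```

Because the check runs against policies rather than devices, the same mechanism works regardless of which array, server, or cloud tier backs each VM.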
In Part II of this blog series, I will introduce the concept and architecture of vSphere Virtual Volumes in detail, and how it transforms legacy storage operations.