Storage Pools

Within OneFS, Storage Pools define subsets of hardware within a single cluster, allowing file layout to be aligned with specific sets of nodes through the configuration of storage pool policies. Storage Pools are an abstraction that encompasses disk pools, node pools, and tiers, all described below.


Disk Pools

Disk Pools are the smallest unit within the Storage Pools hierarchy, as illustrated in the figure below. OneFS provisioning divides similar nodes' drives into sets, or disk pools, with each pool representing a separate failure domain.


These disk pools are protected by default at +2d:1n (the ability to withstand two drive failures or one entire node failure), typically comprise six drives per node, and span from three to forty nodes. Each drive may belong to only one disk pool, and data protection stripes or mirrors do not extend across disk pools (the exception being a Global Namespace Acceleration extra mirror, described below). Disk pools are managed by OneFS and are not user configurable.

  

[Figure: storage_pools_fig1.png]
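
As a rough illustration of this provisioning model, the following Python sketch (not OneFS code; the function name and data layout are hypothetical) splits each similar node's drives into groups of six and combines the matching group from every node into a single disk pool, i.e. a single failure domain.

    from collections import defaultdict

    def build_disk_pools(nodes, drives_per_pool=6):
        """Hypothetical sketch: split each node's drives into groups of
        `drives_per_pool` and combine the n-th group from every node into
        one disk pool (a separate failure domain spanning those nodes)."""
        pools = defaultdict(list)
        for node_id, drives in nodes.items():
            for i in range(0, len(drives), drives_per_pool):
                pools[i // drives_per_pool].extend(
                    (node_id, drive) for drive in drives[i:i + drives_per_pool]
                )
        return dict(pools)

    # Three similar nodes with twelve drives each yield two disk pools,
    # each containing six drives from every node (18 drives per pool).
    cluster = {n: [f"bay{b}" for b in range(1, 13)] for n in ("node1", "node2", "node3")}
    print({pool_id: len(members) for pool_id, members in build_disk_pools(cluster).items()})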


Node Pools

Node Pools are groups of Disk Pools spread across similar Isilon storage nodes (equivalence classes), as illustrated in the figure below. Multiple groups of different node types can work together in a single, heterogeneous cluster: for example, one Node Pool of S Series nodes typically used for IOPS-intensive applications, one Node Pool of X Series nodes primarily used for high-concurrency and sequential workloads, one Node Pool of NL Series nodes primarily used for archive purposes, and another Node Pool of HD Series nodes primarily used for deep archive workloads.


This allows OneFS to present a single storage resource pool comprising multiple drive media types (SSD, high-speed SAS, large-capacity SATA, and so on), providing a range of performance, protection, and capacity characteristics. This heterogeneous storage pool can in turn support a diverse range of applications and workload requirements under a single, unified point of management. It also facilitates mixing older and newer hardware, allowing simple investment protection across product generations and seamless hardware refreshes.


Each Node Pool contains only disk pools from the same type of storage node, and a disk pool may belong to exactly one node pool. For example, S Series nodes with 300 GB SAS drives and one 400 GB SSD per node would be in one node pool, whereas NL Series nodes with 4 TB SATA drives would be in another. Today, a minimum of three nodes is required per Node Pool. Nodes are not provisioned (not associated with each other and not writable) until at least three nodes from the same equivalence class are assigned to a node pool. If nodes are removed from a Node Pool, that pool becomes under-provisioned. In this situation, if two like nodes remain, they are still writable; if only one remains, it is automatically set to read-only.
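
These membership rules can be summarized in a short sketch (hypothetical Python with an illustrative function name; it is not how OneFS implements provisioning):

    def pool_status(like_node_count, was_provisioned=False):
        """Hypothetical summary of the membership rules described above."""
        if like_node_count >= 3:
            return "provisioned: writable"
        if not was_provisioned:
            return "un-provisioned: waiting for three nodes of the same equivalence class"
        if like_node_count == 2:
            return "under-provisioned: still writable"
        return "under-provisioned: read-only"

    print(pool_status(2))                        # new nodes, not yet provisioned
    print(pool_status(2, was_provisioned=True))  # a pool that shrank to two nodes
    print(pool_status(1, was_provisioned=True))  # a pool that shrank to one node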


Once node pools are created, they can be easily modified to adapt to changing requirements. Individual nodes can be reassigned from one node pool to another.  Node Pool associations can also be discarded, releasing member nodes so they can be added to new or existing pools. Node Pools can also be renamed at any time without changing any other settings in the Node Pool configuration.


Any new node added to a cluster is automatically allocated to a Node Pool and then subdivided into Disk Pools without any additional configuration steps, inheriting the SmartPools configuration properties of that Node Pool. This means the configuration of Disk Pool data protection, layout, and cache settings only needs to be completed once per node pool, and can be done at the time the node pool is first created. Automatic allocation is determined by the attributes the new node shares with the closest matching Node Pool (for example, an S node with 600 GB SAS drives joins a Node Pool of S nodes with 600 GB drives). If the new node is not a close match to the nodes of any existing Node Pool, it remains un-provisioned until the minimum Node Pool membership for like nodes is met (three nodes of the same or similar storage and memory configuration).

[Figure: storage_pools_dwg2.jpg]
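
A minimal sketch of this matching idea, assuming each node advertises its equivalence-class attributes (the attribute names and pool records below are hypothetical):

    def assign_node(new_node, node_pools):
        """Hypothetical matcher: join the node pool whose equivalence-class
        attributes match the new node; otherwise stay un-provisioned."""
        for pool in node_pools:
            if pool["attributes"] == new_node["attributes"]:
                return pool["name"]
        return None  # remains un-provisioned until three like nodes exist

    pools = [
        {"name": "s_600gb_sas", "attributes": {"series": "S", "drive": "600GB SAS", "ram_gb": 48}},
        {"name": "nl_4tb_sata", "attributes": {"series": "NL", "drive": "4TB SATA", "ram_gb": 24}},
    ]
    new_node = {"attributes": {"series": "S", "drive": "600GB SAS", "ram_gb": 48}}
    print(assign_node(new_node, pools))  # -> "s_600gb_sas"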

When a new Node Pool is created and nodes are added, SmartPools associates those nodes with an ID. That ID is also used in File Pool policies and file attributes to dictate file placement within a specific Disk Pool.


By default, a file that is not covered by a specific File Pool policy will go to the default Node Pool or pools identified during setup. If no default is specified, SmartPools writes that data to the pool with the most available capacity.

[Figure: storage_pools_fig3.jpg]
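
The placement logic described in the last two paragraphs can be sketched as follows (hypothetical Python; the pool records and field names are made up for illustration):

    def choose_target_pool(pools, policy_target=None, default_pool=None):
        """Hypothetical placement decision: a matching File Pool policy wins,
        then the configured default pool, then the pool with the most
        available capacity."""
        if policy_target is not None:
            return policy_target
        if default_pool is not None:
            return default_pool
        return max(pools, key=lambda name: pools[name]["available_tb"])

    pools = {"s_pool": {"available_tb": 12}, "nl_pool": {"available_tb": 240}}
    print(choose_target_pool(pools))  # no policy match and no default -> "nl_pool"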


Tiers

Tiers are groups of node pools combined into a logical superset to optimize data storage according to OneFS platform type, as illustrated in the figure below. For example, NL Series node pools are often combined into a single tier. This tier could incorporate different styles of NL Series node pools (e.g., NL400 nodes with 1 TB SATA drives and NL400 nodes with 3 TB SATA drives) into a single, logical container. This is a significant benefit because it allows customers who consistently purchase the highest-capacity nodes available to consolidate a variety of node styles within a single group, or tier, and manage them as one logical group.

[Figure: storage_pools_fig4.jpg]
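
A tier can be pictured simply as a named container of node pools that is targeted and managed as one unit (a hypothetical sketch; the names are illustrative only):

    # Hypothetical representation of a tier: one logical container grouping
    # different styles of NL Series node pools, managed as a single target.
    archive_tier = {
        "name": "archive",
        "node_pools": ["nl400_1tb_sata", "nl400_3tb_sata"],
    }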

SmartPools users typically deploy two to four tiers, with the fastest tier usually containing SSD-based nodes for the most performance-demanding portions of a workflow, and the lowest, capacity-biased tier comprising nodes with multi-TB SATA drives.


[Figure: storage_pools_fig5.jpg]

 

Data Spillover

If a Node Pool fills up, writes to that pool will automatically spill over to the next pool. This default behavior ensures that work can continue even if one type of capacity is full. There are some circumstances in which spillover is undesirable, for example when different business units within an organization purchase separate pools, or when data location has security or protection implications. In these circumstances, spillover can simply be disabled. Disabling spillover ensures that a file exists in one pool and will not move to another. Keep in mind that reservations for virtual hot spares (VHS) affect spillover: if, for example, VHS is configured to reserve 10% of a pool's capacity, spillover will occur when the pool is 90% full.
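
The interaction between VHS and spillover is simple arithmetic; a minimal sketch (hypothetical function and field names) is shown below.

    def spills_over(used_tb, total_tb, vhs_reserve=0.10, spillover_enabled=True):
        """Hypothetical check: with a 10% virtual hot spare reservation, the
        usable capacity is 90% of the pool, so writes spill over (if enabled)
        once usage reaches that threshold."""
        usable_tb = total_tb * (1.0 - vhs_reserve)
        return spillover_enabled and used_tb >= usable_tb

    print(spills_over(used_tb=90, total_tb=100))                           # True at 90% full
    print(spills_over(used_tb=90, total_tb=100, spillover_enabled=False))  # False when disabled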


Protection settings can be configured outside SmartPools and managed at the cluster level, or within SmartPools at either the Node Pool or File Pool level.  Wherever protection levels exist, they are fully configurable and the default protection setting for a Node Pool is +2d:1n.
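
As a concrete reading of the +2d:1n default, the sketch below (hypothetical and illustrative only) states the failure scenarios it tolerates: up to two simultaneous drive failures, or the loss of one entire node.

    def tolerated_by_2d_1n(failed_drives, failed_nodes):
        """Hypothetical reading of +2d:1n: data stays protected through two
        drive failures or one whole-node failure (not both at once)."""
        return (failed_nodes == 0 and failed_drives <= 2) or (
            failed_drives == 0 and failed_nodes <= 1
        )

    print(tolerated_by_2d_1n(failed_drives=2, failed_nodes=0))  # True
    print(tolerated_by_2d_1n(failed_drives=0, failed_nodes=1))  # True
    print(tolerated_by_2d_1n(failed_drives=3, failed_nodes=0))  # False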

 

[Figure: storage_pools_fig6.jpg]