There have been a couple of recent questions on the node quorum requirement differences for entire clusters versus OneFS nodepools.
“If I set protection on a six node X410 nodepool in a twenty node cluster to N+4, why does the data get 5x mirrored rather than erasure code parity protected? Clearly the cluster has far more than the nine nodes total which are needed for cluster quorum?”
In order for OneFS to function properly and accept data writes, a quorum of nodes must be active and responding. A quorum is defined as a simple majority: a cluster with x nodes must have ⌊x/2⌋+1 nodes online in order to allow writes. This same quorum requirement also applies to individual nodepools within a heterogeneous cluster. A minimum of three nodes of a specific hardware configuration is needed to create a new nodepool. So, in this case, you’d need at least nine X410 nodes in the nodepool to allow for four node failures and still satisfy quorum for that pool.
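The majority rule above is easy to sanity-check. Here’s a minimal sketch (illustrative only, not OneFS code) of the quorum calculation:

```python
def quorum(node_count: int) -> int:
    """Simple majority: floor(x / 2) + 1 nodes must stay online for writes."""
    return node_count // 2 + 1

# A nine-node nodepool keeps quorum with five nodes online, so it can
# tolerate four simultaneous node failures; a six-node pool cannot.
print(quorum(9))   # -> 5
print(quorum(6))   # -> 4 (only two failures tolerated)
print(quorum(20))  # -> 11
```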
OneFS clustering is informed by the CAP theorem, and does not compromise on consistency or availability. Instead, it uses a quorum to prevent partitioning, or “split-brain” conditions, which can arise if the cluster temporarily divides into two independent clusters. The quorum rule guarantees that, regardless of how many nodes fail or come back online, if a write takes place it can be made consistent with any previous writes that have ever taken place.
So quorum dictates the minimum number of nodes required for a given data protection level. For an erasure-code (FEC) based protection level of N+M, the cluster or nodepool must contain at least 2M+1 nodes. For example, a minimum of nine nodes is required for a +4n configuration; this allows for the simultaneous loss of four nodes while still maintaining a quorum of five nodes, so the cluster remains fully operational.
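The 2M+1 rule falls straight out of the majority requirement: after M failures, the 2M+1 − M = M+1 survivors are exactly a simple majority. A quick sketch (again illustrative, not OneFS code) verifies this for the common protection levels:

```python
def quorum(n: int) -> int:
    # Simple majority: floor(n / 2) + 1 nodes must stay online.
    return n // 2 + 1

def min_pool_size(m: int) -> int:
    # For +Mn FEC protection, a pool needs 2M + 1 nodes: after M
    # failures, the surviving M + 1 nodes still form a simple majority.
    return 2 * m + 1

for m in range(1, 5):
    n = min_pool_size(m)
    survivors = n - m
    assert survivors == quorum(n)
    print(f"+{m}n: {n} nodes minimum; {survivors} survivors meet quorum of {quorum(n)}")
```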
If a cluster does drop below quorum, the file system will automatically be placed into a protected, read-only state, denying writes, but still allowing read access to the available data.
In instances where a protection level is set too high for OneFS to achieve with FEC, the default behavior is to protect that data using mirroring instead. This obviously has a negative impact on space utilization. Here’s how that works in practice:
Note that the protection overhead % (in brackets) is a very rough guide and will vary across different datasets, depending on quantities of small files, etc.
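To make the fallback concrete, here’s a rough, hypothetical model (not OneFS code) of the FEC-versus-mirroring decision, using the 2M+1 rule above. The overhead figures are the same coarse guide, ignoring small-file effects:

```python
def protection_layout(pool_nodes: int, m: int):
    """Illustrative model: which layout a pool gets at requested protection +M.

    A pool needs at least 2M + 1 nodes for +Mn FEC; below that, the data
    falls back to (M + 1)x mirroring. Overhead is a rough guide only.
    """
    if pool_nodes >= 2 * m + 1:
        # One stripe unit per node: (pool_nodes - M) data units, M parity units.
        return f"+{m}n FEC", m / pool_nodes
    # Only one of the M + 1 mirrored copies is usable data.
    return f"{m + 1}x mirror", m / (m + 1)

# The scenario from the question: a six-node pool at +4 protection
# falls back to 5x mirroring at ~80% overhead, whereas nine nodes
# would allow +4n FEC at roughly 44% overhead.
print(protection_layout(6, 4))
print(protection_layout(9, 4))
```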
More information on OneFS storage protection and overhead can be found in the following blog article: