The following tables summarize important behavioral and operational differences, as well as new features and enhancements, between OneFS 6.5 and OneFS 7.1.1. To fully understand these differences, review the referenced documentation before you upgrade.
Behavioral and operational differences between OneFS 6.5 and OneFS 7.1.1
The following table highlights some of the major behavioral and operational changes between OneFS 6.5 and OneFS 7.1.1.
|OneFS 6.5 Behavior||OneFS 7.x Behavior|
|The root account is required for full administrator access through SSH, providing a higher level of access than most operations require.||Role-based access control (RBAC) replaces the requirement to use the root account for administrator tasks. Instead of granting root access to users who perform administrative tasks, you assign roles that delegate administrative tasks to selected users. For more information, see "Authentication and Access Control" in the OneFS 7.1.1 Web Administration Guide.|
|Auditing refers to monitoring disk usage.||Auditing refers to monitoring disk usage and auditing activities such as system configuration changes and SMB protocol activity. Audit logs that are over 1 GB are automatically compressed into gzip files. The audit forwarder functionality is unaffected. For more information, see "Auditing" in the OneFS 7.1.1 Web Administration Guide.|
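The automatic compression of oversized audit logs described above can be illustrated with a short sketch. This is a hypothetical stand-in, not OneFS code (OneFS applies the policy internally); the threshold constant and function name are assumptions.

```python
import gzip
import os
import shutil

# Hypothetical threshold mirroring the documented 1 GB limit.
MAX_LOG_BYTES = 1 * 1024 ** 3

def compress_if_oversized(log_path: str, max_bytes: int = MAX_LOG_BYTES) -> bool:
    """Gzip-compress a log file once it exceeds max_bytes.

    Returns True if the file was compressed and removed, False otherwise.
    """
    if os.path.getsize(log_path) <= max_bytes:
        return False
    # Stream the log into a .gz sibling, then remove the original.
    with open(log_path, "rb") as src, gzip.open(log_path + ".gz", "wb") as dst:
        shutil.copyfileobj(src, dst)
    os.remove(log_path)
    return True
```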
|Access to cluster resources is controlled by the available access protocols, such as SMB, NFS, and SSH, and the configured authentication providers.||OneFS 7.1.1 introduces access zones, which control user access based on the IP address through which users connect. For more information, see Access Zones.|
|You can target data to disk pools and manually create and manage disk pools.||OneFS 7.0 introduces the logical storage types node pools and tiers. A tier is a user-defined collection of node pools that can be used as a target for a file pool policy. OneFS 7.0 also introduces autoprovisioning: instead of requiring you to manually create and manage disk pools, autoprovisioning automatically assigns nodes and drives to node pools and disk pools. For more information, see "Storage Pools" in the OneFS 7.1.1 Web Administration Guide.|
|You can include mixed node types in disk pools and node pools.||OneFS 7.0 and later does not support mixed node types in disk pools and node pools, though you can combine mixed node types in tiers.|
|Clients connect to L1 cache, also known as front-end cache. L1 cache holds copies of file system metadata and data requested by the front-end network and communicates with L2 cache. L2 cache, also known as back-end cache, holds copies of file system metadata and data on the node that owns the data. When L2 cache reaches capacity, OneFS discards the oldest cached data and processes new data requests by accessing the storage drives.||In OneFS 7.1.1, L1 and L2 cache function as in OneFS 6.5, and you can designate SSD drives as an additional cache level: L3 cache. Designating SSD drives as L3 cache effectively increases L2 cache capacity and improves file access performance. When L2 cache reaches capacity, it releases the oldest cached data to L3 cache rather than discarding it, effectively increasing the size of cache memory and improving file access speeds. For more information, see "L3 cache overview" in the OneFS 7.1.1 CLI Administration Guide.|
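As a conceptual illustration of the L2-to-L3 demotion described above, consider the following sketch. This is hypothetical Python, not OneFS code; the capacities, promotion-on-hit behavior, and class name are assumptions made for illustration.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Conceptual sketch: L2 cache demotes its oldest entries to L3."""

    def __init__(self, l2_capacity: int, l3_capacity: int):
        self.l2 = OrderedDict()  # most recently used entries at the end
        self.l3 = OrderedDict()
        self.l2_capacity = l2_capacity
        self.l3_capacity = l3_capacity

    def put(self, key, value):
        self.l2[key] = value
        self.l2.move_to_end(key)
        while len(self.l2) > self.l2_capacity:
            # L2 is full: release the oldest entry to L3 instead of discarding it.
            old_key, old_value = self.l2.popitem(last=False)
            self.l3[old_key] = old_value
            while len(self.l3) > self.l3_capacity:
                # L3 full: oldest data falls back to the storage drives.
                self.l3.popitem(last=False)

    def get(self, key):
        if key in self.l2:
            self.l2.move_to_end(key)
            return self.l2[key]
        if key in self.l3:
            # Promote back into L2 on a hit (an assumed policy).
            value = self.l3.pop(key)
            self.put(key, value)
            return value
        return None  # miss: would be read from the storage drives
```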
|The OneFS command-line interface (CLI) enables you to manage an Isilon cluster outside of the web administration interface or LCD panel. You can access the command-line interface by opening a secure shell (SSH) connection to any node in the cluster. You can run isi commands to configure, monitor, and manage Isilon clusters and the individual nodes in a cluster.||The OneFS 7.1.1 CLI includes new commands and enhancements. For more information, see OneFS CLI Mappings and the OneFS 7.1.1 CLI Administration Guide.|
|You can compile code on the cluster using gcc.||The gcc compiler is no longer supported.|
New Features in OneFS 7.1.1
The following table summarizes major new features in OneFS 7.1.1 and lists references for more information about them.
|Administration||Role-based access control (RBAC)||A role is a collection of OneFS privileges that are granted to members of that role when they log in to the cluster. There are built-in roles for security, auditing, and system administration, and you can create custom roles with their own sets of privileges. Only root and admin user accounts can perform administrative tasks and add members to roles.||"Managing Roles" in the OneFS 7.1.1 Web Administration Guide.|
|Backup administrator role||You can assign the role of Backup Administrator (BackupAdmin) to user accounts, which enables those users to back up and restore data on the cluster through native Microsoft Windows tools, such as Robocopy.||"Managing Roles" in the OneFS 7.1.1 Web Administration Guide.|
|Audit||You can forward audited system-configuration information to the syslog, and you can audit access zone configuration activities.||"Auditing" in the OneFS 7.1.1 Web Administration Guide.|
|Management||Command-line interface (CLI) commands||There are new isi commands for managing authentication, NFS exports, SMB shares, quotas, and snapshots.||OneFS 7.1.1 CLI Administration Guide.|
|Platform API||The Isilon Platform API (PAPI) is a RESTful programmatic interface that you can use to automate cluster management, configuration, and monitoring. The PAPI integrates with role-based authentication to mediate all access to the cluster, including access for InsightIQ, vCenter, the OneFS web administration interface and command-line interface, and customer applications that programmatically call the Isilon Platform API to access the cluster.||OneFS 7.1.1 Platform API Reference.|
|Access zones||Access zones control user access to a cluster. Users from different authentication providers can access different cluster resources based on the IP address through which they connect. In OneFS 7.1.1, access zones require a root directory. When you upgrade to OneFS 7.1.1, all previously created access zones are assigned a base path of /ifs. Before you can create any new access zones, you must specify new base paths for the existing access zones, and the new base paths must not overlap. If you plan to use multiple access zones, alter your directory layouts so that the root of one access zone is not nested within another.||Article 192266: OneFS 7.1.1 and later: Upgrade Prevents Access Zones from Being Configured|
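The no-overlap requirement on access zone base paths can be checked ahead of an upgrade with a small sketch. This is a hypothetical helper, not an OneFS tool; it assumes normalized, /ifs-rooted POSIX paths.

```python
import posixpath

def overlapping_zone_paths(base_paths):
    """Return pairs of access zone base paths where one is nested in the other,
    or the two paths are identical. An empty result means the layout satisfies
    the no-overlap rule described above.
    """
    norm = [posixpath.normpath(p) for p in base_paths]
    conflicts = []
    for i, a in enumerate(norm):
        for b in norm[i + 1:]:
            # b nested under a, a nested under b, or duplicate roots.
            if a == b or b.startswith(a + "/") or a.startswith(b + "/"):
                conflicts.append((a, b))
    return conflicts
```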
|LDAP provider||OneFS 7.0 and later releases are not compatible with the legacy LDAP provider. Before you upgrade, you must migrate clusters that use the legacy LDAP provider to the new LDAP provider.|
|SmartPools/Storage Pools||Disk pools, node pools, and tiers||In OneFS 7.0 and later, a group of nodes is called a node pool, and a group of disks in a node pool is called a disk pool. Node pools are groups of three or more equivalent-class nodes that are associated in a single storage pool. A tier is a user-defined collection of node pools that can be used as a target for a file pool policy. OneFS 7.1 and later includes new commands for managing file pool policies. In the OneFS 7.1.1 web administration interface, the feature is named Storage Pools, and SmartPools is a tab within Storage Pools. In OneFS 7.1.1 and later, file pool policy names cannot begin with a digit.||"Storage Pools" in the OneFS 7.1.1 Web Administration Guide|
|Autoprovisioning||Beginning with OneFS 7.0, you cannot view or target disk pools directly. Instead, the OneFS autoprovisioning process automatically assigns nodes to node pools and disks to disk pools, which optimizes the performance and reliability of the file system. The smallest unit of storage that can be administered is a node pool. Nodes are automatically assigned to node pools in the cluster based on node type, and are autoprovisioned when at least three nodes of an equivalence class are added to the cluster.||"Storage Pools" in the OneFS 7.1.1 Web Administration Guide. For information about equivalent node types, see "OneFS 7.0 Provisioning Equivalencies" in the
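The autoprovisioning rule above (a node pool forms only when at least three nodes of one equivalence class are present) can be sketched as follows. This is illustrative Python, not OneFS code; the node names and class labels are hypothetical.

```python
from collections import defaultdict

def autoprovision(nodes, minimum=3):
    """Group nodes into node pools by equivalence class.

    `nodes` is a list of (node_name, equivalence_class) pairs. A node pool
    is formed only when at least `minimum` nodes of one class are present;
    the remaining nodes stay unprovisioned until more of their class arrive.
    """
    by_class = defaultdict(list)
    for name, node_class in nodes:
        by_class[node_class].append(name)
    pools, unprovisioned = {}, []
    for node_class, members in by_class.items():
        if len(members) >= minimum:
            pools[node_class] = members
        else:
            unprovisioned.extend(members)
    return pools, unprovisioned
```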
|Global namespace acceleration (GNA)||GNA enables data stored on node pools without SSDs to use SSDs elsewhere in the cluster to store extra metadata mirrors, which accelerate metadata read operations. In OneFS 7.0, you can enable GNA if 20% or more of the nodes in the cluster contain at least one SSD and if 1.5% or more of total cluster storage is SSD-based. For best results, make sure that at least 2% of total cluster storage is SSD-based. If fewer than 20% of the nodes in the cluster contain at least one SSD, GNA is inactive until the ratio is corrected.||"SSD Pools" in the OneFS 7.1.1 Web Administration Guide.|
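The GNA thresholds above reduce to simple arithmetic. The following sketch checks them; it is illustrative only (OneFS performs this check itself), and the function name and parameters are assumptions.

```python
def gna_eligible(nodes_with_ssd, total_nodes, ssd_bytes, total_bytes):
    """Check the documented GNA thresholds: at least 20% of nodes must
    contain an SSD, and at least 1.5% of total cluster storage must be
    SSD-based. (At least 2% is recommended for best results.)
    """
    node_ratio = nodes_with_ssd / total_nodes
    ssd_ratio = ssd_bytes / total_bytes
    return node_ratio >= 0.20 and ssd_ratio >= 0.015
```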
|L3 cache||OneFS 7.1.1 introduces L3 cache. You can deploy SSD drives as L3 cache to increase the size of cache memory and improve file access speeds. L3 cache is enabled by default for all new node pools, including new SSD node pools, added to a OneFS 7.1.1 cluster. SSD node pools that are enabled for L3 cache cannot participate in GNA because their SSDs are dedicated to L3 cache.||"Storage Pools" in the OneFS 7.1.1 Web Administration Guide.|
|SmartLock||Enterprise and compliance modes||SmartLock is the Isilon implementation of write once, read many (WORM) data storage. Enterprise mode, the default, provides less restrictive archiving and is not SEC-compliant. Compliance mode provides immutable archiving and complies with SEC rule 17a-4; it is recommended only for customers who are required by law to adhere to SEC 17a-4 requirements for archiving data. In compliance mode, root access to the cluster is disabled, and an administrator must log in using the compliance administrator account (compadmin), which has limited privileges. SmartLock files cannot be altered in any way or erased until the retention period has expired.|
|SyncIQ||SyncIQ compatibility between source and target clusters||A source cluster running OneFS 7.1.1 can synchronize with a target cluster that is running OneFS 7.0 or later.||"Data replication with SyncIQ" in the OneFS 7.1.1 Web Administration Guide.|
|Automated data failover and failback||Automated failover and failback provide continuous data availability in the event of a cluster outage. Failover is the process that allows clients to modify data on a secondary cluster. Failback is the process that allows clients to access data on the primary cluster again and resumes replication of data back to the secondary cluster; data that was modified on the secondary cluster during the failover is synchronized back to the primary cluster as part of the failback process. Note: To use automated data failover between clusters, both the source and target cluster must be running OneFS 7.1.1. Automated data failover and failback are not supported for SmartLock directories; however, you can manually fail over and fail back SmartLock directories.||"Data replication with SyncIQ" in the OneFS 7.1.1 Web Administration Guide.|