After several recent inquiries from the field on how to effectively gather performance statistics on a cluster, it seemed like this might be a useful topic for a blog article:

 

Before planning or undertaking any performance tuning on a cluster (or its attached clients):

  • First, record the original cluster settings before making any configuration changes to OneFS or its data services.
  • Next, measure and analyze how the various workloads in your environment interact with and consume storage resources.


Performance measurement involves gathering statistics on the common file sizes and I/O operations, plus CPU and memory load, network traffic utilization, and latency. To obtain key metrics and wall-clock timing data for delete, renew lease, create, remove, set userdata, get entry, and other file system operations, connect to a node via SSH and run the following command as root to enable the vopstats ‘record_timings’ sysctl:

 

# sysctl efs.util.vopstats.record_timings=1
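
Note that this sysctl only affects the node you are connected to. As a minimal sketch, assuming the standard ‘isi_for_array’ utility is available on your cluster, timing collection can be switched on across all nodes in one step:

# isi_for_array -s 'sysctl efs.util.vopstats.record_timings=1'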

 

After enabling vopstats, they can be viewed by running the ‘sysctl efs.util.vopstats’ command as root:

 

Here is an example of the command’s output:

 

# sysctl efs.util.vopstats

efs.util.vopstats.ifs_snap_set_userdata.initiated: 26
efs.util.vopstats.ifs_snap_set_userdata.fast_path: 0
efs.util.vopstats.ifs_snap_set_userdata.read_bytes: 0
efs.util.vopstats.ifs_snap_set_userdata.read_ops: 0
efs.util.vopstats.ifs_snap_set_userdata.raw_read_bytes: 0
efs.util.vopstats.ifs_snap_set_userdata.raw_read_ops: 0
efs.util.vopstats.ifs_snap_set_userdata.raw_write_bytes: 0
efs.util.vopstats.ifs_snap_set_userdata.raw_write_ops: 0
efs.util.vopstats.ifs_snap_set_userdata.timed: 0
efs.util.vopstats.ifs_snap_set_userdata.total_time: 0
efs.util.vopstats.ifs_snap_set_userdata.total_sqr_time: 0
efs.util.vopstats.ifs_snap_set_userdata.fast_path_timed: 0
efs.util.vopstats.ifs_snap_set_userdata.fast_path_total_time: 0
efs.util.vopstats.ifs_snap_set_userdata.fast_path_total_sqr_time: 0

 

The timing data captures the number of operations that cross the OneFS clock tick, which is 10 milliseconds. Because of this granularity, the total_sqr_time value provides no actionable information, regardless of the number of events. To analyze the operations, use the total_time value instead. The following example shows only the non-zero total_time records in the vopstats:

 

# sysctl efs.util.vopstats | grep -e "total_time: [^0]"

efs.util.vopstats.access_rights.total_time: 40000
efs.util.vopstats.lookup.total_time: 30001
efs.util.vopstats.unlocked_write_mbuf.total_time: 340006
efs.util.vopstats.unlocked_write_mbuf.fast_path_total_time: 340006
efs.util.vopstats.commit.total_time: 3940137
efs.util.vopstats.unlocked_getattr.total_time: 280006
efs.util.vopstats.unlocked_getattr.fast_path_total_time: 50001
efs.util.vopstats.inactive.total_time: 100004
efs.util.vopstats.islocked.total_time: 30001
efs.util.vopstats.lock1.total_time: 280005
efs.util.vopstats.unlocked_read_mbuf.total_time: 11720146
efs.util.vopstats.readdir.total_time: 20000
efs.util.vopstats.setattr.total_time: 220010
efs.util.vopstats.unlock.total_time: 20001
efs.util.vopstats.ifs_snap_delete_resume.timed: 77350
efs.util.vopstats.ifs_snap_delete_resume.total_time: 720014
efs.util.vopstats.ifs_snap_delete_resume.total_sqr_time: 7200280042
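
As a rough sketch (not an official OneFS tool, and assuming the standard awk utility on the node), the average total_time per timed operation for a given vop can be approximated by dividing its total_time counter by its timed counter. Using the ifs_snap_delete_resume vop from the output above:

# sysctl efs.util.vopstats.ifs_snap_delete_resume | awk -F': ' '/\.timed:/ {t=$2} /\.total_time:/ {s=$2} END {if (t > 0) printf "avg total_time per timed op: %.1f\n", s/t}'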

 

The ‘isi statistics’ CLI command is also a great tool for the task here, and its output is current (i.e. real time). Isi statistics is a versatile utility, providing the following subcommand-level syntax:

 

Statistics Category   Details
Client                Display cluster usage statistics organized according to cluster hosts and users
Drive                 Show performance by drive
Heat                  Identify the most accessed files/directories
List                  List valid arguments to a given option
Protocol              Display cluster usage statistics organized by communication protocol
Pstat                 Generate detailed protocol statistics along with CPU, OneFS, network & disk stats
Query                 Query for specific statistics, in either current or history mode
System                Display general cluster statistics (op rates for protocols, plus network & disk traffic in kB/s)
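
As an aside, assuming your OneFS release supports the ‘keys’ argument to the list subcommand, the available statistics key names (the same names used by the query subcommand later in this article) can be discovered and filtered with grep. For example:

# isi statistics list keys | grep clientstats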

 

 

Full command syntax and a description of the options can be accessed via isi statistics --help or via the man page (man isi-statistics).

 

The ‘isi statistics pstat’ command provides per-operation protocol statistics, plus client connection and file system stats. For example, for NFSv3:

 

# isi statistics pstat --protocol=nfs3
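
To watch these statistics over time rather than as a single snapshot, the general --interval and --repeat options can be used, assuming your OneFS release supports them for this subcommand. For example, the following sketch samples NFSv3 protocol stats every five seconds, twelve times:

# isi statistics pstat --protocol=nfs3 --interval=5 --repeat=12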

 

The ‘isi statistics client’ CLI command provides I/O and timing data by client name and/or IP address, depending on the options used, plus the username if it can be determined. For example, to generate a list of the top NFSv3 clients on a cluster, the following command can be used:

 

# isi statistics client --protocols=nfs3 --format=top

 

Or, for SMB clients:

 

# isi statistics client --protocols=smb2 --format=top
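
If the data needs to be kept for offline analysis or graphing, the output can be switched from the live ‘top’ view to CSV (assuming the csv format option is available in your OneFS release) and redirected to a file. The path below is just an example:

# isi statistics client --protocols=smb2 --format=csv > /ifs/data/smb2_clients.csv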

 

Current SMB2 and SMB3 connections are both reported by the following stats command:

 

# isi statistics query current --stats node.clientstats.active.smb2


Or for SMB2 + 3 historical connection data:

 

# isi statistics query history --stats node.clientstats.active.smb2


The following command totals by user, rather than by node. This can be helpful when investigating HPC workloads, or other workflows involving compute clusters:

 

# isi statistics client --protocols=nfs3 --format=top --numeric --totalby=username --sort=ops,timemax
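
Where individual usernames cannot be resolved (NFS clients all mapping to the same user, for instance), a similar breakdown can be produced per client address instead, assuming remote_addr is among the valid --totalby values on your OneFS release:

# isi statistics client --protocols=nfs3 --format=top --numeric --totalby=remote_addr --sort=ops,timemax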

 

The ‘heat’ subcommand can be useful for viewing which files are being most heavily accessed:

 

# isi statistics heat --long --classes=read,write,namespace_read,namespace_write | head -10

 

The following command shows the amount of contention, where operations from parallel users are targeting the same object:

 

# isi statistics heat --long --classes=read,write,namespace_read,namespace_write --events=blocked,contended,deadlocked | head -10

 

It can also be useful to constrain statistics reporting to a single node. For example, the following command will show the fifteen hottest files on node 3.

 

# isi statistics heat --limit=15 --nodes=3

 

It’s worth noting that isi statistics doesn’t directly tie a client to a file or directory path. Both isi statistics heat and isi statistics client provide some of this information, but not together. The only directory/file related statistics come from the ‘heat’ stats, which track the hottest accesses in the filesystem.
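
While there is no single view combining client and path, the heat output can at least be narrowed to a directory tree of interest with grep and then correlated manually with the client view captured over the same interval. The path below is purely an example:

# isi statistics heat --long --classes=read,write | grep /ifs/data/project1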

 

The system and drive statistics can also be useful for performance analysis and troubleshooting purposes. For example:


# isi statistics system --nodes=all --oprates --nohumanize


This output gives the per-node operation rates across protocol, network, and disk. On the disk side, the sum of DiskIn (writes) and DiskOut (reads) gives the total IOPS for all the drives in each node.
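
To track how these rates change over time, the same view can be sampled repeatedly (again assuming the general --interval and --repeat options are available), for example every five seconds for a minute:

# isi statistics system --nodes=all --oprates --nohumanize --interval=5 --repeat=12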

 

For the next level of granularity, the drive statistics command provides individual disk info.


# isi statistics drive -nall --long --type=sata --sort=busy | head -20


If most or all the drives have high busy percentages, this indicates a uniform resource constraint, and there is a strong likelihood that the cluster is spindle bound. If, say, the top five drives are much busier than the rest, this would suggest a workflow hot-spot.
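
The same view can also be re-sampled periodically to confirm whether the busy percentages are sustained or just a transient spike, assuming --interval and --repeat are supported for the drive subcommand:

# isi statistics drive -nall --long --type=sata --sort=busy --interval=5 --repeat=6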


We’ll look at this topic in more depth in the next article.