
Martin Libich's EMC Isilon M&E Blog

October 2014

Jperf is a Java-based graphical interface for iperf, the network performance tool we rely on for validating network throughput on Isilon installations.

On OS X, installing jperf can require multiple steps:

The jperf executable for OS X can be downloaded from:

http://jesterpm.net/downloads

 

Once you’ve downloaded the software and copied it to your /Applications/Utilities folder, you will receive this message if you’re on OS X Mavericks or later:

"To open “jperf” you need to install the legacy Java SE 6 runtime."

Download from:

http://support.apple.com/kb/DL1572?viewlocale=en_US&locale=en_US

 

Java for OS X 2014-001


Java for OS X 2014-001 includes installation improvements, and supersedes all previous versions of Java for OS X. This package installs the same version of Java 6 included in Java for OS X 2013-005.

Please quit any Java applications before installing this update.

See http://support.apple.com/kb/HT6133 for more details about this update.

See http://support.apple.com/kb/HT1222 for information about the security content of this update.

 

Once you have installed JavaForOSX2014-001.dmg and attempt to open jperf again, you will receive a message saying:

“jperf” can’t be opened because it is from an unidentified developer.

Instead of double-clicking, Control-click (or right-click) the app and choose Open, and you will receive this message:

“jperf” is from an unidentified developer. Are you sure you want to open it?

Opening “jperf” will always allow it to run on this Mac.
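As an alternative to clicking through the Gatekeeper dialog, the quarantine attribute can be cleared from Terminal. This is my own addition, not from the dialog flow above; the path assumes you copied jperf to /Applications/Utilities as described earlier:

```
# Remove the quarantine flag Gatekeeper checks (path is an example):
xattr -d com.apple.quarantine /Applications/Utilities/jperf.app
```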

You may then receive an error message that requests that you place iperf in your PATH. Since PATH entries must be directories, add the directory that contains the iperf binary (not the binary itself). Open a Terminal window and execute the following:

export PATH="$PATH:/path to iperf/iperf-2.0.5-i686-apple-darwin10.5.0"

If the error persists, launch the Java stub inside the app bundle directly from Terminal by issuing:

/path to/jperf.app/Contents/MacOS/JavaApplicationStub

With OneFS 7.1.1 (“Jaws”) we now have limited access to the SMB 3 feature set. In particular, Multi-Channel is now a native option when connecting to an Isilon cluster. Here’s a look at how SMB 3 MC support is described in the documentation[1] and how this relates to a real-world scenario:

  • Key benefits of SMB 3.0 MC:
    • Built-in failover.
      • Compared to link aggregation, which requires the entire network environment to support the feature and which can lead to performance issues with certain applications over 10 Gb network connections, the SMB 3.0 MC connection has built-in failover that works “out of the box”.
    • Load balancing between NICs.
      • For standard file-system operations that aren’t driving massively parallel throughput, all traffic is load-balanced between the NICs, allowing for lower overhead.
        • Keep in mind that standard operations, for example Windows file copies, are subject to the limitations the operating system imposes on such functionality and may not see noticeably higher throughput.
      • For multi-threaded and/or massively parallel file operations, e.g. 4K Digital Cinema file playback through a dedicated multi-threaded application or multi-threaded replication tools such as Robocopy[2], the full throughput of the cluster is available to the operation and can exceed the port speed of a single 10 Gb link. (See also Throughput Considerations below.)
        • The EMC Community article Uncompressed 4K and Isilon's SMB3 multichannel updates in OneFS v7.1.1[3] gives a detailed update on how to achieve truly impressive performance leveraging the SMB 3.0 MC feature for M&E applications.
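As a concrete illustration of the multi-threaded replication case, Robocopy exposes its parallelism through the /MT switch. The paths and share name below are hypothetical, not from the original post:

```
:: Multi-threaded copy to an Isilon SMB share (example paths):
:: /E copies subdirectories, /MT:32 uses 32 copy threads (the default is 8).
robocopy D:\media \\isilon-cluster\media /E /MT:32
```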

Where the OneFS 7.1.1 Release Notes and the Technical Overview of New and Improved Features of EMC Isilon OneFS 7.1.1 state:

 

  • With SMB 3.0 multi-channel support, appropriately hardware-configured Windows 8, Windows Server 2012, or later clients can connect to an Isilon cluster and take advantage of the performance and availability features.
    • In practice this means that any Windows 8 client or Windows Server 2012 machine (acting as a client to the OneFS SMB mount) will natively connect to an Isilon cluster via SMB 3 MC when making an SMB connection, provided all the prerequisites mentioned below are fulfilled.
    • There is currently no SMB 3 support (and consequently no Multi-Channel support) on versions of Windows prior to Windows 8 or Windows Server 2012, including Windows 7, and there is currently no SMB 3 or SMB 3 Multi-Channel support on any version of Mac OS. While there is basic SMB 3 support in Samba, there is no conclusive sign that full Multi-Channel functionality is coming to Samba in the immediate future.[4][5]
      • Notwithstanding, significant single-stream performance improvements in OneFS 7.1.1 over SMB 2 bring advanced functionality to these other platforms for applications like uncompressed stereo 2K and limited 4K playback, as well as multi-threaded replication via rsync or similar.
  • No manual SMB configuration is needed on the Windows machine or on Isilon to enable this support.
    • No manual configuration also means no immediate feedback on whether the client is connected via SMB MC.
      • Ways of identifying a successful SMB MC connection include:
        • Opening Task Manager on Windows 8 or Windows Server 2012 and verifying under Performance that both interfaces are active, sending and receiving during transfers.
        • On the cluster CLI, running isi statistics client --numeric and verifying that you indeed see two connections from the client, one to each of the node’s 10 Gb NICs.
  • SMB 3.0 multi-channel offers simultaneous SMB client connections within a single Isilon session, providing increased throughput, connection-failure tolerance, and automatic discovery.
    • Because this is a mostly automatic process, all precautions for optimal network performance should be taken before leveraging this functionality in production. This includes configuring both network interfaces on the client and running iperf between the client and BOTH interfaces on the Isilon node that will be leveraged for SMB 3 MC:
      • Ensure that both interfaces on the client NIC are active:
        • Enable both interfaces of the NIC in question in the Network Control Panel. (The on-board NIC will often still appear; since most on-board NICs offer only a single network port, they are not relevant for this feature, and the expectation is that the customer will install a PCI card with at least a two-port NIC in the client workstation.)
        • If the customer environment leverages DHCP, no additional IPv4 configuration is expected. If the client environment relies on fixed IPs, ensure that both interfaces are configured with (preferably consecutive) fixed IPs, routes, and DNS servers.
      • Run iperf on the Windows 8 or Windows Server 2012 client against the cluster node running in -s (server) mode, making sure you test against both interfaces on the cluster node. Then run iperf from the cluster node in -c <host> (client) mode against the Windows client, again testing against both of its interfaces.
        • An example of the paired commands for client-to-cluster testing:
          • On the cluster node:
            • iperf -s -w 2M -l 1M
          • On the client:
            • iperf -c 192.168.xxx [node interface 1 IP] -w 2M -l 1M -t 30
          • Repeat for the second node interface, and vice versa for the test from cluster to client.
      • An overall throughput result of 80% of network bandwidth or higher, in all directions, is generally considered indicative of a healthy network.
      • Especially for SMB 3.0 MC functionality, results significantly lower than 80% of available bandwidth are indicative of network performance issues that warrant addressing before putting SMB 3.0 MC into production.
        • Standard troubleshooting steps apply. These can include:
          • Running network traces and analytics to identify and remedy bottlenecks.
          • Connecting the SMB 3.0 MC client directly to the switch, or directly to the node being configured.
            • Depending on the environment, this troubleshooting step may require data-center access.
  • The Windows client must have two or more network cards, or one or more network cards that support Receive Side Scaling (RSS), or one or more network cards with link aggregation enabled.
    • Generally, 10 Gb connections see more benefit from SMB 3.0 MC than 1 Gb connections. For the purposes of this document, the focus is on 10 Gb NICs.
    • The reference frequently used internally for NIC tuning and NIC options is the Intel X520 NIC. Tuning options mentioned in this text refer to this NIC.[6][7]
      • Other 10 Gb NICs known to work successfully with SMB 3.0 MC without extensive tuning include the Chelsio family of 10 Gb NICs.
    • Ensure that the card has its native driver installed, not the Microsoft default driver.
    • A number of performance options can be configured on the network card of the Windows 8 or Windows Server 2012 client, especially once the native driver has been installed:
      • As indicated, RSS is the key functionality required for SMB 3.0 MC to work. Most modern 10 Gb NICs support it.
      • Ensure that “Interrupt Moderation” is disabled.
      • Unless it is confirmed that the entire environment is fully optimized and configured for jumbo frames (MTU >= 9000), keeping everything at 1500 MTU is the best practice and recommendation for most environments that will leverage SMB 3 MC. (This includes verifying that the switch ports the traffic passes through are configured for the same MTU.)
      • If the environment requires jumbo frames, they should be configured as the last step in preparing for SMB 3 MC, so that the integrity of the network environment can be verified before putting the functionality into production.
      • While a number of additional tuning options are available for the NIC, it is generally recommended to leave them at their defaults, unless the driver includes a “Low Latency” or similar performance-optimized preset.
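The 80% rule of thumb above is easy to script when batch-checking iperf results. The numbers below are illustrative example values, not measurements from this post:

```shell
#!/bin/sh
# Compare a measured iperf result against 80% of nominal line rate.
# Both values are in Mbit/s; MEASURED_MBPS is a hypothetical example value.
LINK_MBPS=10000        # nominal 10 Gb link
MEASURED_MBPS=9200     # e.g. iperf reported "9.2 Gbits/sec"

THRESHOLD_MBPS=$(( LINK_MBPS * 80 / 100 ))

if [ "$MEASURED_MBPS" -ge "$THRESHOLD_MBPS" ]; then
  RESULT=PASS          # network looks healthy for SMB 3 MC
else
  RESULT=FAIL          # investigate before enabling SMB 3 MC in production
fi
echo "$RESULT: ${MEASURED_MBPS} Mbit/s measured, threshold ${THRESHOLD_MBPS} Mbit/s"
```

With a 10 Gb link the computed threshold is 8000 Mbit/s, so the example measurement passes.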

 

  • The SMB connection is limited to a single node; connections aren’t shared across nodes.
    • This means the most reliable way to leverage SMB 3.0 MC against an Isilon cluster is to either use fixed IP mappings or to build a dedicated SmartConnect pool for the node’s IP range to ensure consistent connectivity.
  • Throughput considerations: SMB 3.0 multi-channel can significantly increase read throughput for the appropriate workload running on the Windows machine.
    • This also means that overall aggregate bandwidth and cluster load can be taxed when running high-throughput applications (e.g. 4K playback).
    • Lab results indicate single-stream throughput over SMB 3.0 MC can reach 1.4 GByte/s.
      • At that performance level, the equivalent of approximately two entire nodes in a given cluster is driving the throughput required to sustain this bandwidth, and that bandwidth is not available to other applications on the cluster.
    • In a real-world scenario, leveraging SMB 3.0 multi-channel with OneFS can result in better *read* throughput from an Isilon cluster; aggregate write throughput will not increase.
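Putting the validation steps above together, one full round of iperf testing looks like the sketch below. The interface addresses are placeholders; run the client step once per node interface, then swap roles for the reverse direction:

```
# Step 1 - on the cluster node (server side):
iperf -s -w 2M -l 1M

# Step 2 - on the Windows client, against the node's first interface:
iperf -c <node-interface-1-IP> -w 2M -l 1M -t 30

# Step 3 - repeat Step 2 against <node-interface-2-IP>, then swap roles:
# run iperf -s on the client and iperf -c <client-interface-IP> from the node.
```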


 


[1] http://www.emc.com/collateral/white-papers/h13173-new-improved-features-isilon-onefs-7-1-1.pdf

[2] http://technet.microsoft.com/en-us/library/cc733145.aspx

[3] https://community.emc.com/people/RobertMcNeal/blog/2014/08/18/uncompressed-4k-and-isilons-smb3-multichannel-updates-in-onefs-v711

[4] http://www.snia.org/sites/default/files2/SDC2013/presentations/SMB3/MichaelAdam-2013-obnox-presentation.pdf

[5] https://www.linkedin.com/today/post/article/20140603075104-28399909-status-quo-of-smb3-multichannel-adoption-in-linux

[6] http://www.intel.com/support/network/adapter/pro100/sb/CS-031949.htm

[7] http://www.intel.com/support/network/adapter/pro100/sb/CS-029402.htm