EMC IsilonSD Edge Prerequisites - IsilonSD Management Server 1.0.0

Note: This topic is part of the EMC IsilonSD Edge - Isilon Info Hub.

 

IsilonSD cluster deployment requirements when using IsilonSD Management Server 1.0.0

 

These prerequisites apply to the production version of EMC IsilonSD Edge with IsilonSD Management Server 1.0.0. Before you deploy IsilonSD Management Server and an IsilonSD Edge cluster, verify that the requirements listed here are satisfied.

 

Software and hardware requirements

 

VMware vCenter
  • Requirement: Version 5.5 Update 2
  • Recommendation: Access VMware vCenter through the VMware vSphere Web Client (browser-based client).

Host
  • VMware ESXi 5.5
  • Minimum number of hosts: 3, with 1 node per ESXi host per cluster
  • Maximum number of hosts: 6, with 1 node per ESXi host per cluster

Web browser
  • Mozilla Firefox version 39.0 or later, or Google Chrome version 42.0 or later
  • Recommendation: Use the latest versions of these web browsers.

Note: The VMware vSphere Web Client Integration Plug-in does not work with Chrome version 42.0 and later or Mozilla Firefox version 39.0 and later. Refer to the VMware knowledge base articles on this issue to address it.

RAM
  • Minimum unused RAM: 6 GB per node

vCPU
  • Minimum vCPUs: 2 per node

Drive type
  • SATA, SAS, or SSD

Virtual infrastructure
  • IsilonSD Edge is supported on systems that meet the minimum deployment requirements and are built with Virtual SAN-certified components. For more information, see Identify IsilonSD Edge compatible systems.
  • We highly recommend the following hardware:
      • Dell PowerEdge R730
      • HP ProLiant DL380p
      • Supermicro XPRO-2887R

 

Storage requirements

Data disks
  • Number of data disks per node: either 6 or 12 defined data disks
  • Minimum size of each data disk: 64 GB
  • Maximum size of each data disk: 2 TB
  • Minimum cluster capacity: 1152 GB, calculated as minimum disk size * number of data disks per node * minimum number of nodes (64 GB * 6 * 3); see the sketch after this table
  • Maximum cluster capacity: varies depending on your licenses and the resources available on your system

 

Journal disk
  • Number of journal disks per node: 1
  • SSD with at least 1 GB free space per node

 

Boot disk
  • Number of boot disks per node: 1
  • SSD or HDD with at least 20 GB of free space

Physical disks, typically mirrored SSDs, are used for creating boot disks.

Total disks (data, journal, and boot disks combined)

  • Minimum number of disks per node: 8 (6 data disks, 1 journal disk, and 1 boot disk)
  • Maximum number of disks per node: 14 (12 data disks, 1 SSD journal disk, and 1 boot disk)
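
The minimum-capacity arithmetic above is easy to check in code. The following is a minimal sketch in Python; the script and function names and the default values are illustrative, not part of IsilonSD Edge:

# cluster_capacity.py - worked example of the minimum cluster capacity
# formula from the storage requirements table above. The names and
# defaults here are illustrative, not product values.

def min_cluster_capacity_gb(min_disk_gb=64, data_disks_per_node=6, min_nodes=3):
    """Minimum disk size * data disks per node * minimum number of nodes."""
    return min_disk_gb * data_disks_per_node * min_nodes

if __name__ == "__main__":
    # 64 GB * 6 disks * 3 nodes = 1152 GB, matching the table above.
    print(min_cluster_capacity_gb())  # prints 1152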

 

Note: In some cases, either the ESXi host or vCenter is unable to recognize the SSDs, and the SSDs are incorrectly displayed as HDDs. To fix this issue, connect to the ESXi host or vCenter through an SSH client and run the following commands:

esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device <disk-naa> --option "enable_ssd"
esxcli storage core claiming unclaim --type=device --device <disk-naa>
esxcli storage core claimrule load
esxcli storage core claimrule run
esxcli storage core claiming reclaim -d <disk-naa>
esxcli storage core device list -d <disk-naa> | grep SSD

 

Where disk-naa is the Network Address Authority identifier for the disk. For example, you can specify naa.6b8ca3a0f14b6f001c26242406e51fcf for disk-naa. See http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2013188 for more information.
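
If several disks need to be re-tagged, the same command sequence can be scripted over SSH. The following is a minimal sketch, assuming Python with the third-party paramiko library; the host name, credentials, and script name are hypothetical placeholders, not product values:

# retag_ssd.py - run the SSD re-tag sequence on an ESXi host over SSH.
# A minimal sketch: the host, credentials, and disk identifier below
# are hypothetical placeholders.
import paramiko

ESXI_HOST = "esxi01.example.com"  # hypothetical ESXi host
ESXI_USER = "root"
ESXI_PASS = "changeme"            # prefer key-based auth in practice
DISK_NAA = "naa.6b8ca3a0f14b6f001c26242406e51fcf"  # example identifier from the text

COMMANDS = [
    'esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device {d} --option "enable_ssd"',
    "esxcli storage core claiming unclaim --type=device --device {d}",
    "esxcli storage core claimrule load",
    "esxcli storage core claimrule run",
    "esxcli storage core claiming reclaim -d {d}",
    "esxcli storage core device list -d {d} | grep SSD",
]

def main():
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(ESXI_HOST, username=ESXI_USER, password=ESXI_PASS)
    try:
        for template in COMMANDS:
            cmd = template.format(d=DISK_NAA)
            _, stdout, stderr = client.exec_command(cmd)
            print(f"$ {cmd}\n{stdout.read().decode()}{stderr.read().decode()}")
    finally:
        client.close()

if __name__ == "__main__":
    main()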

 

Networking requirements

Internal network (back-end)
  • Requirement: 1 GbE or 10 GbE Ethernet
  • We recommend 10 GbE for back-end networking.
  • Isolate the back-end network, ideally by routing it through a dedicated VLAN or physical switch.
  • Configure LACP or a port channel group to improve back-end network reliability and to increase intracluster traffic throughput.

Internal IP addresses
  • One IP address per node
  • The IP addresses that you configure for the nodes must be contiguous.

 

External network (front-end)
  • Requirement: 1 GbE or 10 GbE Ethernet
  • We recommend 10 GbE for front-end networking.
  • The front-end network should be a separate Ethernet network from the back-end network. If that is not possible, make sure that the front-end network is on a different IP subnet than the back-end network.

External IP addresses
  • One IP address per node
  • One IP address per SmartConnect zone; at least one SmartConnect zone is required for a cluster
  • Allocate the IP address range based on the maximum number of nodes that you plan to deploy in the cluster, taking future requirements into consideration. For example, a six-node cluster requires six IP addresses for the nodes and one IP address for SmartConnect (see the sketch after this list).
  • The IP addresses that you configure for the nodes must be contiguous.
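
When allocating the external range, it can help to enumerate the node addresses and the SmartConnect address up front and confirm they are contiguous. A minimal sketch, using Python's standard ipaddress module; the function name and sample addresses are illustrative only:

# plan_ips.py - enumerate a contiguous block of node IP addresses plus
# one SmartConnect address. A minimal sketch; the names and the sample
# starting address are illustrative only.
import ipaddress

def plan_node_ips(first_ip, node_count):
    """Return node_count contiguous addresses starting at first_ip,
    plus the next address for the SmartConnect zone."""
    start = ipaddress.ip_address(first_ip)
    nodes = [start + i for i in range(node_count)]
    smartconnect = start + node_count
    return nodes, smartconnect

if __name__ == "__main__":
    # Six-node cluster, as in the example above: six contiguous node
    # IPs followed by one SmartConnect IP.
    nodes, sc = plan_node_ips("192.0.2.10", 6)
    print("Nodes:", [str(ip) for ip in nodes])
    print("SmartConnect:", sc)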

 

Other networking recommendations

A few additional networking requirements and recommendations follow:

  • Nodes must be on the same Ethernet network.
  • The Ethernet network must allow broadcasts to be propagated between the nodes.
  • IsilonSD Management Server supports the vSPC (Virtual Serial Port Concentrator) service to provide serial console access to nodes. The vSPC service listens on port 8080, so make sure that port 8080 is open and available on the ESXi host for vSPC connections. ESXi firewall settings can sometimes block these transmissions. To avoid this issue, we recommend that you add a firewall rule set for the serial port network connections before you connect network-backed virtual serial ports. Connect to the serial port output through a network with the virtual serial port concentrator option enabled so that only outgoing communication from the host is allowed.
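
Before you connect network-backed virtual serial ports, it can be worth confirming that port 8080 is actually reachable on the ESXi host. A minimal sketch in Python; the host name below is a hypothetical placeholder:

# check_vspc_port.py - verify that TCP port 8080 is reachable on an
# ESXi host for vSPC connections. A minimal sketch; the host name is
# a hypothetical placeholder.
import socket

def vspc_port_open(host, port=8080, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "esxi01.example.com"  # hypothetical ESXi host
    state = "open" if vspc_port_open(host) else "blocked or closed"
    print(f"Port 8080 on {host} is {state}.")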