17 Replies | Latest reply: Feb 11, 2014 3:08 PM by mb_live

Queue Depth of FA Port


I've searched and searched and can't find any answer.


Does anyone know how to find the queue depth for a given frame's FA ports? I've seen websites saying it is typically 4096 per port, and on here I saw someone reference a CX-4 as having a queue depth of 1600. I'm just trying to find out whether you can actually tell: is there a command, or any vendor documentation, that lists this number?



  • 1. Re: Queue Depth of FA Port
    Clay Isaacs

    What platform?  And are you looking for the actual queue depth or the maximum queue depth?  For VMAX, you can find the actual queue depths in the Performance section, under FE Director metrics: select ALL metrics instead of KPIs, and you'll see the Queue Depth ranges and Avg Queue Depth ranges. The ranges represent ranges of queued IOs. As an IO enters the queue, it first checks how deep the queue is; based on that depth, the applicable queue-depth bucket is incremented by the value the IO saw. For example, an IO that encounters a queue depth of 7 will increment bucket #2 (depth 5-9 for OS, or 7-14 for MF) by 7. The intent of these buckets is to identify IO bursts, which in turn generate large queues and long response times.
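That bucket accounting can be sketched roughly as below. Only the 5-9 boundary of bucket #2 comes from the description above; the other bucket boundaries are my assumptions for illustration, and the code is just a model of the "increment the matching bucket by the depth seen" behavior, not anything from Unisphere.

```python
# Rough sketch of the queue-depth bucket accounting described above.
# Only the 5-9 range (bucket #2, open systems) comes from the post;
# the remaining boundaries are assumed for illustration.
OS_BUCKETS = [(0, 4), (5, 9), (10, 19), (20, 39), (40, float("inf"))]

def record_io(buckets, queue_depth):
    """An arriving IO checks the current queue depth, finds the bucket
    that depth falls into, and increments that bucket BY the depth."""
    for i, (lo, hi) in enumerate(OS_BUCKETS):
        if lo <= queue_depth <= hi:
            buckets[i] += queue_depth
            return i

buckets = [0] * len(OS_BUCKETS)
bucket_hit = record_io(buckets, 7)  # depth 7 lands in bucket #2 (index 1)
```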


  • 2. Re: Queue Depth of FA Port

    Looking for the maximum on a Fibre Port

  • 3. Re: Queue Depth of FA Port
    Clay Isaacs

    What platform?  But more importantly, what's driving the question?

  • 4. Re: Queue Depth of FA Port

    on a VMAX 20k

  • 5. Re: Queue Depth of FA Port
    Clay Isaacs

    Can I ask what's driving the question?

  • 6. Re: Queue Depth of FA Port

    If I know the max of the FA port, that will help in setting the host queue depth.

  • 7. Re: Queue Depth of FA Port
    Clay Isaacs

    Well, technically, the DMX and VMAX both support a maximum of 12,288 queue records per FA slice/CPU, and two FA ports share these queues. Enginuity limits the number that any single device can use: each LUN is guaranteed at least 32, but can dynamically borrow up to 384. An FA, though, is just a pathway for your data down to your volume/LUN. So a volume QD of 384 is possible (queue records can be borrowed from non-busy volumes), but that doesn't mean you'll ever see such a deep queue; it rarely even makes sense to queue that much data against one volume, not to mention the hardware you'd need to make use of so deep a queue.
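As a back-of-the-envelope model of those limits (the numbers come from the reply above; the function and its names are my sketch, not an Enginuity API):

```python
# Per-LUN queue-record limits as described above: every LUN is
# guaranteed 32 records and can borrow up to a cap of 384, out of the
# 12,288 records an FA slice shares between its two ports.
SLICE_QRECS = 12_288
GUARANTEED_PER_LUN = 32
MAX_PER_LUN = 384

def grantable_qrecs(free_on_slice, lun_in_use):
    """How many additional queue records a LUN could claim right now:
    limited by its 384 cap and by what is free on the slice."""
    headroom = MAX_PER_LUN - lun_in_use
    return max(0, min(headroom, free_on_slice))
```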


    Example: you would need a LUN QD of 256 to keep a 32-way RAID 5 (7+1) striped meta busy on every spindle (provided your I/O pattern is random and you have enough threads).
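One way to read that arithmetic (my interpretation of the example, assuming each of the 32 meta members sits on its own 7+1 group):

```python
# Where the 256 in the example can come from: 32 meta members striped
# across RAID 5 (7+1) groups touch 32 * 8 = 256 spindles, so a LUN QD
# of 256 allows one outstanding IO per spindle under a random,
# well-threaded workload. The per-group layout here is an assumption.
meta_members = 32
spindles_per_group = 7 + 1  # 7 data + 1 parity
required_lun_qd = meta_members * spindles_per_group
```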


    With that said, I would refer to the EMC Host Connectivity Guide for your OS of choice; those documents contain the best-practice settings for that particular operating system. There are also documents from Emulex and QLogic that cover queue depth settings in more detail. Setting queue depth properly often requires trial and error for a given environment as well. However, below is an excerpt from white paper docu6351, "Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment":

    In order to avoid overloading the storage array's ports, you can calculate the maximum queue depth using a combination of the number of initiators per storage port and the number of LUNs ESX uses. Other initiators are likely to be sharing the same SP ports, so these will also need to have their queue depths limited. The math to calculate the maximum queue depth is:

    QD = Maximum Port Queue Length / (Initiators * LUNs)

    For example, there are 4 servers with single HBA ports connected to a single port on the storage array, with 5 LUNs masked to each server. The storage port's maximum queue length is 1600 outstanding commands. This leads to the following queue depth calculation:

    HBA Queue Depth = 1600 / (4 * 20)

    In this example, the calculated HBA queue depth would be 20. A certain amount of over-subscription can be tolerated because all LUNs assigned to the servers are unlikely to be busy at the same time, especially if additional HBA ports and load balancing software are used. So in the example above, a queue depth of 32 should not cause queue full. However, a queue depth value of 256 or higher could cause performance issues.
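The excerpt's sizing rule is easy to sanity-check in code; the function name is mine, and the inputs mirror the quoted example (a 1600-deep port queue, with 4 initiators and 20 LUNs in the divisor):

```python
# Sizing rule from the excerpt:
#   QD = Maximum Port Queue Length / (Initiators * LUNs)
def hba_queue_depth(max_port_queue, initiators, luns):
    return max_port_queue // (initiators * luns)

qd = hba_queue_depth(1600, 4, 20)  # the excerpt's example works out to 20
```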


    I have found success starting at 32, establishing a baseline performance profile, then adjusting and comparing against that baseline. Raise the QD in small increments; I use increments of 32. I usually only change QDs when I can pinpoint for certain that I have a QD problem; otherwise you end up spending an enormous amount of time fiddling with QDs that have no impact on performance until you change the value.
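That trial-and-error process can be outlined as a loop; measure_throughput is a stand-in for whatever benchmark you trust in your environment, and the ceiling of 256 echoes the excerpt's warning about very high values.

```python
# Outline of the tuning approach described above: start at QD 32,
# baseline, then raise in increments of 32 and keep the last setting
# that actually improved on the baseline. measure_throughput is a
# stand-in for a real benchmark run against your own workload.
def tune_qd(measure_throughput, start=32, step=32, ceiling=256):
    best_qd, best = start, measure_throughput(start)
    for candidate in range(start + step, ceiling + 1, step):
        result = measure_throughput(candidate)
        if result <= best:  # no improvement: stop and keep what worked
            break
        best_qd, best = candidate, result
    return best_qd
```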


    Hope that helps!

  • 8. Re: Queue Depth of FA Port

    This is great info, thanks!!


    Where did you get maximum queue records per VMAX FA?

  • 9. Re: Queue Depth of FA Port
    Clay Isaacs

    From the "smart" people

  • 10. Re: Queue Depth of FA Port

    So EMC doesn't have this info for the general public?

  • 11. Re: Queue Depth of FA Port
    Clay Isaacs

    Probably located somewhere. Most likely on support.emc.com in the knowledgebase.

  • 12. Re: Queue Depth of FA Port

    Symmetrix - QFULL limit


    They weren't able to produce the document either

  • 13. Re: Queue Depth of FA Port

    Looking at the above post, the question goes back to the equation: where do you find the "Maximum Port Queue Length" per storage port? In that example it is listed as 1600.

  • 14. Re: Queue Depth of FA Port
    Clay Isaacs

    The calculations are the same, but the max QD differs by platform. 1600 is for VNX, as that quote was taken from a VNX-focused white paper. The BP recommendation for VMAX is 32 as the QD starting point per HBA, which works well in the majority of environments.
