I know the recommended best practice for NL SAS drives is 6+2. So you could create a storage pool that would consist of:
SAS RAID 5 (4+1)
NL SAS RAID 6 (6+2)
With 5.32 you can mix RAID levels in the same storage pool. Then use FAST Cache automatically, or assign LUNs as you wish depending on performance requirements.
I read that RAID 6 seems useful with 2 TB drives and above, since in case of a failure the rebuild time can be very long and RAID 5 becomes risky.
As I plan on using 1 TB disks, the main question for me is whether a 6+2 RAID 6 would be faster than a 4+1 RAID 5 (read AND write). But I will use FAST Cache, so maybe my question is moot for the write part?
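For the write side of that question, a back-of-the-envelope comparison may help. The sketch below assumes the commonly cited small-random-write penalties (RAID 5 = 4 backend I/Os per host write, RAID 6 = 6, because RAID 6 must update two parity blocks); actual behavior depends on full-stripe writes and caching, so treat it as a rough model only:

```python
# Rough model: backend drive I/Os generated per host write.
# Assumed standard write penalties (small random writes, no full-stripe
# optimization): RAID 5 = 4, RAID 6 = 6.
RAID5_WRITE_PENALTY = 4
RAID6_WRITE_PENALTY = 6

def backend_write_iops(host_write_iops, penalty):
    """Backend IOPS generated by small random host writes."""
    return host_write_iops * penalty

# For the same 1000 host write IOPS:
r5 = backend_write_iops(1000, RAID5_WRITE_PENALTY)  # 4000 backend IOPS
r6 = backend_write_iops(1000, RAID6_WRITE_PENALTY)  # 6000 backend IOPS
print(r5, r6)
```

So for random writes a 6+2 RAID 6 group generates roughly 50% more backend I/O than a 4+1 RAID 5 group for the same host load, which is why RAID 6 is usually reserved for the large NL-SAS drives where rebuild risk dominates.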
max IOPS : 2500 <<-- is that host IOPS or backend IOPS? If that's host IOPS, I am not sure you have enough spindles to handle the workload.
0.5*2500 + 4*0.5*2500 = 6250 backend drive IOPS (RAID 5, assuming a 50/50 read/write mix and a write penalty of 4)
6250 / 180 = 35 drives (15k SAS)
6250 / 150 = 42 drives (10k SAS), rounded up to 45 to fit evenly into a 4+1 RAID 5 config
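The spindle math above can be written out as a small sketch. The `drives_needed` helper and its rounding to whole RAID groups are my own illustration of the steps in this post, not an EMC tool:

```python
import math

def backend_iops(host_iops, read_ratio=0.5, write_penalty=4):
    """Split host IOPS into reads and writes; writes are multiplied
    by the RAID write penalty (4 for RAID 5)."""
    reads = read_ratio * host_iops
    writes = (1 - read_ratio) * host_iops
    return reads + write_penalty * writes

def drives_needed(host_iops, iops_per_drive, raid_width=5):
    """Drive count, rounded up to whole 4+1 RAID 5 groups (width 5)."""
    raw = math.ceil(backend_iops(host_iops) / iops_per_drive)
    return math.ceil(raw / raid_width) * raid_width

print(backend_iops(2500))        # 6250.0 backend IOPS
print(drives_needed(2500, 180))  # 35 x 15k SAS drives
print(drives_needed(2500, 150))  # 45 x 10k SAS drives (42 rounded up)
```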
You guessed right, this is host IOPS. I know I don't have enough spindles; that's why I plan to have FAST Cache.
The max IOPS (2500) occurs during backups and is read-only.
So I guessed the 99th percentile might be the right goal: 1500 IOPS,
which gives me, according to your formula:
0.5 * 1500 + 4 * 0.5 * 1500 = 3750 backend IOPS
3750 / 180 = 21 drives (15k SAS)
3750 / 150 = 25 drives (10k SAS)
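Rerunning the same arithmetic at the 99th-percentile target, as a standalone check (same assumptions as above: 50/50 read/write mix, RAID 5 write penalty of 4):

```python
import math

host_iops = 1500          # 99th-percentile host IOPS target
read_ratio = 0.5
raid5_penalty = 4

# Reads pass through 1:1; each host write costs 4 backend I/Os on RAID 5.
backend = read_ratio * host_iops + raid5_penalty * (1 - read_ratio) * host_iops
print(backend)                    # 3750.0 backend IOPS
print(math.ceil(backend / 180))   # 21 drives of 15k SAS
print(math.ceil(backend / 150))   # 25 drives of 10k SAS
```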
17 SAS 10k + 9 NL-SAS (Q2) seems better than 9 SAS 15k + 9 NL-SAS,
but I don't have a lot of margin for the future.
I did not expect FAST Cache to help with backups; I have already read that FAST Cache does not help sequential reads.
I just hope that using FAST Cache will improve performance even though the storage is not well configured (in terms of number of spindles).
Actually, I consider backup out of scope for sizing the storage. I know that including it would be the best approach, but I have to make compromises.
Discussions and reading in this forum help me decide.
FAST VP is useful for SAS and NL-SAS combinations.
The absolute performance (IOPS per disk), and even more so the per-TB performance, is still quite different between these two tiers.
Of course, like any tiering/caching technology, it works best when the working set size / locality of reference fits within the size of the fastest tier.
If this application is currently running on a CX4/VNX, then EMC can capture I/O traces and run them through a modeling application to estimate the effectiveness of both FAST Cache and FAST VP.
I thought FAST VP was useful with 2 or 3 tiers, with the first tier on SSD. I never thought about having the first tier on SAS and the second on NL-SATA.
The "application" is VM servers, currently running on a BladeCenter S (with local shared storage), plus a bunch of physical servers I have to virtualize.
The overall IOPS requirement is in the initial thread.
The VM dataset can be split into two kinds of VM:
- mail / file servers: 3-4 servers with 1 TB each, serving email and files (the email app is Kerio, based on flat files). FAST VP seems OK for this.
- SQL servers (a few Windows SQL servers for prod, but the DBs are quite small): FAST Cache, except for the VMDKs where the logs are allocated.
I have already read some white papers, but I have to extract the right info for my case:
- I will mainly use block mode (FC)
- I will use the storage for VMs only