Written By: Anthony Cinelli, Sr. Manager Enterprise Sales, ScaleIO
The EMC ScaleIO 2.0 release is a big one, and the publicity has been huge. The official tagline is "Delivering Public Cloud Agility with Private Cloud Performance and Resiliency." A lot has been said about how Software-Defined Storage and hyper-convergence technologies, ScaleIO in particular, enable levels of IT agility well beyond what traditional infrastructure stacks can deliver. So instead, I want to focus this blog on the other side of the equation: Private Cloud Performance and Resiliency.
ScaleIO was never focused on being the first software-defined and/or hyper-converged enabler to market. The early market products were very cool and slick technologies built primarily for density, ease of use, and the IT generalist. The primary target markets where those first-to-market technologies were embraced were SMB and ROBO sites. At the time, this was for good reason -- the value prop of those early software-defined and hyper-converged players was a home run for the SMB and ROBO use cases (and continues to be). ScaleIO, however, was built with a very different focus in mind -- how do you deliver all the benefits that come with hyper-convergence and software-defined storage, while eliminating proprietary hardware, in the enterprise CORE DATA CENTER?
When it comes to block workloads, upon which many Enterprise customers literally run their business, it is important to note that the world of Enterprise Applications is starting to change. To truly make customers feel comfortable in their investment, showing the ability to run traditional apps faster/better/cheaper isn't enough. The platform also has to be designed to deliver results for the new world of modern distributed applications. No easy task -- but something ScaleIO is uniquely designed to accomplish in a way that no other technology in its class can. This is only enhanced further with 2.0.
Every introductory conversation I have with customers on ScaleIO focuses on the architecture. Setting that baseline of how uniquely ScaleIO is architected compared to its peer group is critical to understanding why it is so successful at delivering against that core data center value prop while others have struggled to move from ROBO to core. What are those architectural differences? There are four primary ones that enable ALL the goodness that is ScaleIO:
- ScaleIO is BLOCK. It has a super simple I/O stack. Pretty much every other player in the space delivers block that is back-ended by a file system or object store, which adds a great deal of overhead to the I/O stack. This is one of the things that allows ScaleIO to be so darn fast, with super low latency. Performance isn't everything, though -- there is another advantage to all this speed. More on that further down.
- ScaleIO decouples compute and storage under the covers. Early hyper-convergence was all about coupling compute and storage together as tightly as possible. That is great for ROBO and SMB needs, but bad for the Enterprise Core Data Center. Why? Cost. The enterprise core data center has unpredictable needs: compute and storage requirements do not always grow linearly. Sometimes you need one, sometimes the other, sometimes both. ScaleIO allows you to add only what you need, when you need it. Have plenty of compute but need a bunch more storage performance/capacity? No problem -- add a few storage-only nodes running on bare metal Linux to the cluster. Need a ton of compute, but OK on storage capacity for now? No problem -- drop in blades with super dense CPU and throw your OS and/or hypervisor on them, no ScaleIO license needed. You will never be forced to buy a resource you don't need -- and that applies not just to the physical resources of compute and storage, but to licensing across the stack too. The cost savings delivered by this flexibility, at the scale that enterprise data centers require, are HUGE. ScaleIO was designed with these core data center needs in mind.
- ScaleIO delivers performance through I/O parallelism. The majority of software-defined and/or hyper-converged technologies in the market focus on keeping application data local to the server that runs that application in order to deliver the best performance. Makes sense, right? In the SMB and ROBO space -- sure. In the enterprise data center -- not so much. All workloads are not created equal. Often the resources in a single server are not enough to give the workloads living on it what they need. When trying to scale beyond a handful of nodes, hotspots crop up quickly and performance bottlenecks appear everywhere. Pretty much a deal breaker if you are thinking of running your business-critical block database application on it. ScaleIO does the exact opposite. It takes application data and spreads it as wide as possible across ALL the storage media in a particular pool. Have 100 SSDs across 10 nodes? All 100 SSDs work in parallel to deliver I/O. No hotspots. No tuning. As you add more nodes and/or storage, the data keeps distributing wider and wider. This is what delivers linear scale and massive I/O performance. Concerned about the latency of all this distributed data? This is where another cool part of ScaleIO comes in: the data map. Every SDC (a kernel driver that lives on any host running applications) keeps a map in memory of where all the distributed data is for the blocks on that host. It is super lightweight (think 4-8MB for an 8PB data set). That data map gives each host direct access to its data as if it were local to that server, with no seeking required. So you get the best of both worlds -- direct access to data as if it were local, with the I/O benefit of parallelism on the backend. You can have your cake and eat it too. And, by the way, no bottleneck exists here: every host that runs an app gets an SDC. This is what allows you to scale compute and storage separately or together.
Want to provide ScaleIO storage to a handful of physical blades running a critical SQL database? No problem -- just put the SDC on those physical boxes and voila, they now get access to super fast, super low latency ScaleIO storage. But as stated above, it's not just about performance. There is a bigger reason.
- ScaleIO is VERY lightweight. In the enterprise core data center, resource usage matters. Most software-defined storage and hyper-converged technologies use ~20%+ of a server's CPU and 20-50GB of RAM -- PER SERVER. How do you feel about having that running side-by-side with your CPU-intensive SQL servers? Or better yet, how do you feel about running Oracle (and paying for it!) on a server where 20%+ of the CPU is not running Oracle? Not cool. ScaleIO uses 5-10% of the CPU (and only when running all-SSD and pushing tons of I/O) and a negligible amount of RAM. In small 3- and 4-node clusters, this isn't a big deal. In an enterprise core data center, where nodes are measured in 10's, 100's, and potentially 1000's, the $$$ impact is big. ScaleIO drives out cost at scale. Resource usage matters.
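To make the parallelism and data-map ideas above concrete, here is a minimal Python sketch. It is an illustration only -- the chunk size, hashing scheme, and every name in it are my own assumptions, not ScaleIO's actual layout or API -- but it shows the shape of the idea: data spread evenly across every device in a pool, with a tiny client-side map that lets each host jump straight to the device owning any given block.

```python
import hashlib
from collections import Counter

# Hypothetical chunk granularity -- purely for illustration.
CHUNK_SIZE = 1 << 20  # 1 MiB

def build_data_map(volume_id: str, volume_bytes: int, devices: list) -> dict:
    """Deterministically map every chunk of a volume to a device.

    Hashing (volume, chunk index) spreads chunks roughly evenly across
    all devices in the pool, so every device serves I/O in parallel.
    """
    n_chunks = (volume_bytes + CHUNK_SIZE - 1) // CHUNK_SIZE
    data_map = {}
    for chunk in range(n_chunks):
        digest = hashlib.sha256(f"{volume_id}:{chunk}".encode()).digest()
        data_map[chunk] = devices[int.from_bytes(digest[:8], "big") % len(devices)]
    return data_map

def locate(data_map: dict, offset: int) -> str:
    """Client-side lookup: one in-memory read, then straight to the owner."""
    return data_map[offset // CHUNK_SIZE]

# 100 SSDs across 10 nodes, as in the example above (names are made up).
devices = [f"node{n}/ssd{s}" for n in range(10) for s in range(10)]
data_map = build_data_map("vol1", 10 * (1 << 30), devices)  # 10 GiB volume

# The 10,240 chunks land roughly evenly on all 100 devices: no hotspots,
# and every host holding this map reaches any chunk without an extra hop.
counts = Counter(data_map.values())
print(len(counts), min(counts.values()), max(counts.values()))
```

Note how small the map is relative to the data it describes -- one entry per chunk -- which is the same property that lets the real SDC keep its map in a few megabytes of memory.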
So that takes me to my biggest point of this post. ScaleIO is architected for the core data center. And the big reason why we see ScaleIO as the first true enabler of Software-Defined Storage and Hyper-convergence for the Enterprise Core Data Center is not just because it delivers massive performance, it’s because it delivers MASSIVE levels of availability.
That is the real secret. Customers are not just concerned about getting the performance they need for their business-critical applications; they need those applications to be highly available and always on. ScaleIO is unique because it delivers the super high availability customers are used to with traditional Tier 1 infrastructure stacks. What's more, you can realize huge savings in CAPEX and OPEX because ScaleIO does all of this on industry-standard x86 hardware that you can buy anywhere, from anyone.
So how does it deliver such massive levels of resiliency and availability? Easy..... Performance! (Confusing, huh?) Because ScaleIO can harness the full I/O and bandwidth of every resource in the cluster, it not only delivers great application performance -- it can also rebuild itself when hardware breaks in a CRAZY FAST way. That is the true secret sauce of ScaleIO: delivering the availability and resiliency you are used to in your enterprise core data center, but doing so on standard x86 hardware. Tier 1 availability with no specialized hardware.
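A quick back-of-the-envelope shows why that parallelism matters for rebuilds. The numbers below are hypothetical, not ScaleIO benchmarks: the point is simply that when every surviving node re-protects a slice of the failed node's data, rebuild bandwidth scales with cluster size instead of being capped by a single spare disk.

```python
# Illustrative rebuild-time math (made-up figures, not vendor specs):
# total rebuild bandwidth = per-node bandwidth x number of contributing nodes.

def rebuild_hours(data_tb: float, per_node_mbps: float, nodes: int) -> float:
    """Hours to re-protect data_tb terabytes, given `nodes` nodes each
    contributing per_node_mbps megabytes/second of rebuild bandwidth."""
    total_mbps = per_node_mbps * nodes
    return data_tb * 1_000_000 / total_mbps / 3600

failed_node_tb = 10.0  # data that was on the failed node

# Traditional rebuild onto one spare disk (~150 MB/s sustained write):
print(round(rebuild_hours(failed_node_tb, 150, 1), 1))   # -> 18.5 hours

# Many-to-many rebuild across 99 surviving nodes at ~200 MB/s each:
print(round(rebuild_hours(failed_node_tb, 200, 99), 1))  # -> 0.1 hours
```

The window of reduced protection shrinks from most of a day to minutes, and it keeps shrinking as the cluster grows -- which is exactly the "performance equals availability" argument.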
Want even more availability? VxRack System 1000 with FLEX Nodes, powered by ScaleIO, is built using hardware designs that have been completely pre-tested, designed, and engineered to deliver the maximum availability from each individual hardware component. Combine that with what ScaleIO natively delivers and you have a platform with ALL the agility you are looking for in the cloud -- pay-as-you-go, add-on-demand, scale-out -- while giving you the comfort of traditional enterprise availability. Add the VCE and EMC support teams on top, and you finally have an avenue to the modern, scale-out, software-defined, agile data center you dream about: one that runs yesterday's monolithic applications alongside tomorrow's distributed ones, without sacrificing the performance and availability you are used to today.
Want to see this in action? Go download ScaleIO -- free and frictionless. Then come back and share your feedback -- we love hearing it! :-)