
Creating a private cloud to manage your Microsoft application workloads can be quite compelling. Yes, it may even be more compelling than a comparable public cloud solution such as Microsoft Office 365 or Azure.  Organizations are using Exchange, SharePoint, and SQL Server to help them generate revenue and reduce costs, and at the same time IT is being asked to deliver those systems faster, more efficiently, and at lower cost.  Moving to either cloud option can help IT achieve those goals.  The choice between public and private cloud usually comes down to IT's ability to deliver and whether a public cloud provider can offer the service more cheaply.  If an organization has the IT prowess, creating its own private cloud can be just as beneficial. In fact, a recent Wikibon study indicates that a 10,000-person, $2.5B organization with a $40M IT budget can save $27M over 5 years by implementing its own private cloud. Pretty impressive. Let's discuss the benefits in more detail and clear the skies a bit. (If you missed part 1, click here.)


 

First, let's start by defining a private cloud.  A private cloud is similar to a public cloud in that it virtualizes the computing environment and automates the provisioning of computing resources based on organizational demand.  The major difference is that it is a single-tenant environment owned, deployed, and managed by an organization's own IT department rather than by a public cloud provider. Implementing a private cloud allows IT to meet the organization's system requirements exactly when needed, not before and not after.  More importantly, IT can align system costs with the actual usage of the system.

 

When an organization implements its own private cloud, it can reap all of the cloud benefits while still maintaining control of its Microsoft application workload data and providing a high quality of service in supporting the business's data demands.  This combination of data control and quality of service is extremely powerful.  When an organization's Microsoft data is in the hands of a public cloud provider, several potential issues arise:  1) the security of the data is no longer within the organization's control; 2) having someone else manage the data may actually violate local governmental regulations; and 3) relinquishing control of the data may eliminate any proprietary benefit derived from it, because the cloud provider is granted access (the provider will know all of the organization's secrets!).  Quality of service is often overlooked when considering cloud options. Let's face it: a public cloud provider has standard service level agreements (SLAs) that apply to all of its customers, and those SLAs may not meet the needs of your organization.  How long are you willing to wait while your SQL Server data is unavailable to support your organization's order processing system?  The longer you wait, the longer your customers wait, and you may lose millions as your public provider's cloud gets darker and darker.

 

Just remember what is most important to you for your Exchange, SharePoint and SQL Server application workloads.  If it is critical to maintain control of your data and provide the right quality of service to meet your organization's requirements, then implementing your own private cloud will best meet your needs.  In part three of this blog series, we will clear the skies at last as I discuss how you can utilize the best features of both public and private clouds by creating one seamless cloud fabric for Microsoft application workloads!

Bryan Walsh

XtremPLEX for SQL Server?

Posted by Bryan Walsh Oct 23, 2014

No… I'm not here to announce EMC's next great product, but rather to show how two existing products can be leveraged together to provide performance and peace of mind. In today's demanding business environments, high performance and continuous availability are always top of mind. This blog post is here to tell you that you don't need to sacrifice one for the other.

 

One of the most interesting EMC product synergies is XtremIO and VPLEX. Together they let customers realize the performance and local high-availability benefits of XtremIO combined with VPLEX's virtualized storage, which provides high availability across a multi-site cluster. System upgrades, software patching, and hardware faults can cause host-level or even site-level outages. Leveraging this powerful combination for SQL Server allows for instant failover between datacenters in all of these scenarios.
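To make the failover behavior concrete, below is a minimal monitoring sketch, not part of the published solution, that polls SERVERPROPERTY('ComputerNamePhysicalNetBIOS') to see which cluster node is currently serving the SQL Server instance and times how long connections fail while the instance moves. The server name, credentials, and polling interval are illustrative assumptions.

```python
# Hypothetical monitoring sketch: watch which node hosts a SQL Server
# failover cluster instance and time the interruption during a failover.
# Connection details below are placeholders, not from the EMC solution.
import time
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlfci.example.com;DATABASE=master;"
    "UID=monitor;PWD=example"            # placeholder credentials
)

def current_node():
    """Return the physical node currently hosting the instance, or None if unreachable."""
    try:
        conn = pyodbc.connect(CONN_STR, timeout=2)
    except pyodbc.Error:
        return None                      # instance unreachable, e.g. mid-failover
    try:
        row = conn.execute(
            "SELECT CAST(SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS nvarchar(128))"
        ).fetchone()
        return row[0]
    except pyodbc.Error:
        return None
    finally:
        conn.close()

last_node, outage_started = None, None
while True:                              # run this alongside the failover test
    node = current_node()
    now = time.time()
    if node is None:
        if outage_started is None:
            outage_started = now         # connectivity just dropped
    else:
        if outage_started is not None:
            print(f"Service restored on {node} after {now - outage_started:.1f}s")
            outage_started = None
        elif last_node is not None and node != last_node:
            print(f"Instance moved from {last_node} to {node}")
        last_node = node
    time.sleep(1)
```

Run it against the clustered instance's virtual network name; the node name changing while the outage window stays in the low seconds is what "instant failover" looks like from the application's point of view.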

 

XtremIO is able to handle extremely high I/O through its balanced architecture, in-line data reduction, and virtually provisioned storage, eliminating the need for many traditional storage best practices and for complex, time-consuming fine tuning.

 

VPLEX Metro is a storage virtualization appliance that sits between the hosts and the back-end arrays. It delivers application and data availability within a data center and over distance with full infrastructure utilization and zero downtime.

 

In a recently posted solution, we look at how to leverage XtremIO and VPLEX together to provide a high-performance solution for SQL Server in which a single copy of a database can be shared and accessed in multiple locations over distance. While the solution also shows the high performance achieved by XtremIO, the test objectives were the following:

 

  • Show how VPLEX Metro, XtremIO, and a SQL Server cluster provide an efficient solution
  • Demonstrate the minimal impact of any storage, host, or site failure
  • Show consistent or better performance in the event of a failover

 


So, what did we test?

We simulated an active/active OLTP workload on both SQL Server 2012 and SQL Server 2014 and conducted failover testing that covered planned and unplanned host failures as well as a full site failure with automatic failover. For this solution, a two X-Brick XtremIO cluster was used in conjunction with VPLEX Metro.

 

The configuration was pushed to nearly 200,000 IOPS from the SQL Server side and still maintained low latency with multiple concurrent SQL Server database workloads in a VPLEX environment.
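For readers who want to reproduce that kind of latency measurement, here is a rough sketch (my own, not from the solution guide) that samples SQL Server's sys.dm_io_virtual_file_stats DMV twice and reports average read and write latency over the interval. The connection string and sample window are assumptions.

```python
# Illustrative latency sampler: sys.dm_io_virtual_file_stats holds cumulative
# counters, so the delta between two snapshots gives average I/O latency for
# the window. Connection details are placeholders.
import time
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlfci.example.com;DATABASE=master;Trusted_Connection=yes"
)

QUERY = """
SELECT SUM(num_of_reads)      AS reads,
       SUM(io_stall_read_ms)  AS read_stall_ms,
       SUM(num_of_writes)     AS writes,
       SUM(io_stall_write_ms) AS write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL);
"""

def snapshot(conn):
    return conn.execute(QUERY).fetchone()

with pyodbc.connect(CONN_STR) as conn:
    before = snapshot(conn)
    time.sleep(60)                       # sample window while the OLTP load runs
    after = snapshot(conn)

d_reads, d_writes = after[0] - before[0], after[2] - before[2]
read_lat  = (after[1] - before[1]) / d_reads  if d_reads  else 0.0
write_lat = (after[3] - before[3]) / d_writes if d_writes else 0.0
print(f"avg read latency:  {read_lat:.2f} ms over {d_reads} reads")
print(f"avg write latency: {write_lat:.2f} ms over {d_writes} writes")
```

Repeating the sample during steady state and again during a failover gives a simple before-and-after view of both throughput and latency.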

 

[Figure: latency results]

 

What did we find?

  • Disk I/O performance was very similar for SQL Server 2012 and 2014.
  • Setup of the environment was fast and simple, with little to no storage tuning.
  • Management and monitoring of the environment are simple and straightforward.
  • XtremIO, virtualized by VPLEX, works seamlessly for virtualized SQL Server environments.
  • The solution supports even the most demanding transactional workloads.
  • All client activity was kept at the same level during the instant failover process.
  • Performance was sustained, with minimal disruption, during host or site failovers.

 

Remember, though, that for SQL Server there are multiple benefits with XtremIO beyond just performance. Yes, there is a blog post on some of these additional capabilities. I'm so glad you asked.

My fellow blogger @Noel Wilson recently posted part one of a three-part series that explains how EMC views the Private, Hybrid, and Public cloud scenarios that our customers deal with every day.  Unlike Noel's excellent overview of cloud scenarios, this blog entry is part two of my multipart series that focuses not on constructs, concepts, and scenarios, but on the actual tires on the proverbial road.  Back on the 11th of September, I posted a blog about "Corporate Availability."  I mentioned that the white paper would be published soon.  Well, it's out, and it's more detailed than even I expected.  It's so detailed that we will be releasing a series of video demos to explain all of the aspects the white paper covers. Just in case someone forwarded this blog to you and the links got stripped out, the paper is on emc.com (http://www.emc.com/collateral/white-paper/h13360-cont-avail-ms-apps-wp.pdf).  So here is a brief summary of what you'll find in this mammoth white paper:

 

Microsoft Business Applications
  • Microsoft Exchange Server 2013
  • Microsoft SharePoint Server 2013
  • Microsoft SQL Server 2012

EMC Technologies
  • EMC VNX
  • EMC RecoverPoint
  • EMC VPLEX Metro

VMware Technologies
  • VMware vSphere

Here is the basic visual representation of what the paper explains:

[Diagram: three-site data center architecture]

The idea is that a total of three data center (DC) locations will be deployed: two within 60 miles of each other (the Primary pair), with the third (Tertiary) as far away as is feasible; a typical separation between Primary and Tertiary data centers would be 600-900 miles (about 1000-1500 km).  The two "near proximity" data centers handle the production load in a shared-resources model: either data center can use the resources of the other, and workloads can migrate at a moment's notice from one DC to the other.  The Tertiary data center is only put into production when the pair of Primary DCs is offline for network, electrical, or physical reasons.  Please note that the Tertiary data center will be only "moments behind" (in replication terms) the two Primary data centers, so reporting, test, development, analytics, cube builds, and the like could all be housed and operational at the Tertiary location.  There is one additional workload that would be running at the Tertiary data center by default: backup!  Operational backups should always be made in the building that DID NOT just catch fire, and if backups can be taken AT the offsite location, all the better.  Why backup, then replicate, when you can replicate, then backup?

In this three-site configuration, the two Primary sites are coupled via Ethernet, and the Fibre Channel fabric is extended between them; there are several ways to accomplish this (none of which are expensive or complex here in the year 2014...).  The third site, however, is connected via Ethernet only, as there is no need to extend Fibre Channel over a distance of 900 miles!  Your exact bandwidth requirements will be set by the traffic that is actually "write oriented": under normal operations, only the changed blocks ever travel to the third site.  Only when the third site is brought into production does read traffic ever "back flow" through the Ethernet to either Site A or Site B, unless of course you have a tertiary Internet point of presence at that third site for use during failures.  In that case, the Tertiary site can become the sole remaining resource, furnishing both the Primary workloads and all backup infrastructure.
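Because only the write-oriented, changed-block traffic travels to the third site, a back-of-the-envelope calculation is usually enough for an initial bandwidth estimate. The sketch below uses made-up numbers; the IOPS, write size, and data-reduction factor are assumptions you would replace with measured values from your own environment.

```python
# Illustrative WAN sizing for the replication link to the Tertiary site.
# All inputs are placeholder assumptions, not figures from the white paper.
avg_write_iops   = 5_000     # assumed steady-state write operations per second
avg_write_kb     = 16        # assumed average write size in KB
reduction_factor = 0.6       # assumed fraction of bytes actually sent after
                             # compression/deduplication in the replication layer

write_mb_per_s = avg_write_iops * avg_write_kb / 1024
wan_mbit_per_s = write_mb_per_s * reduction_factor * 8

print(f"host write throughput: {write_mb_per_s:.1f} MB/s")
print(f"estimated WAN need:    {wan_mbit_per_s:.0f} Mbit/s (plus headroom for bursts)")
```

The point of the exercise is simply that the link to the Tertiary site is sized for writes, not for the full read-plus-write workload running at the Primary pair.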

The two Primary data centers would be connected like this:

[Diagram: connectivity between the two Primary data centers]

Anyway… the scenarios that the paper explains are:

  • A single VM failure (and its recovery)
  • A vSphere host failure (and the resulting recovery)
  • An entire Storage Array outage (like it got hit by a swinging gorilla or backhoe — hey, it could happen!!) and the resulting recovery
  • An entire site failure (like it got taken away by an alien space ship — or backhoe) and the resulting recovery
  • An entire regional outage that takes both primary data centers offline — like a massive grid failure (unfortunately these actually do happen) and the resulting recovery

In all but the last scenario, recovery of services happens within minutes — actually, that's not true… in ALL scenarios, recovery happens within minutes — it's just that the dual DC failure takes about an hour to restart everything at the tertiary site… so "minutes" is more like "tens of minutes"… BUT, in all cases, recovery is COMPLETELY AUTOMATED.  I'm serious.  No hot-line, no midnight con calls with 47 people from four states and nine cities.  It's all automated — and wonderfully dynamic.

 

Please scamper (ok, don't scamper)… just click.  The white paper is at http://www.emc.com/collateral/white-paper/h13360-cont-avail-ms-apps-wp.pdf, and demos will follow in the next few weeks.  We'll do SQL Server first!

 

Please — comment below if you like it — and please submit your criticisms if you can muster any — seriously.  This is the first paper of this magnitude to be published by a major Cloud enabler — we want your feedback so we can continue to bring you the solutions you've asked for!

 

Cheers!

Cloud computing options for managing your Microsoft application workloads are becoming ubiquitous. IDC estimates that total public cloud investment is roughly $46 billion USD per year.  Furthermore, Technology Business Research (TBR) estimates that Microsoft alone is already providing nearly $5 billion USD per year in cloud-related services. Even with the prevalence of cloud computing options today, choosing the right one (public, private, or hybrid) and ensuring it meets your data management needs is so cloudy that it has most of us in a fog!


This three-part blog series should help clear the skies by defining each option and describing its benefits.  Part 1 focuses on the public cloud. A public cloud is a multi-tenant IT environment provided by a third party. The management and provisioning of system resources within the public cloud are virtualized and automated to dynamically meet the needs of your organization and enable self-service capability.

 

Public cloud is best used when an organization is seeking the following benefits:

 

  1. Greater predictability in costs associated with implementing and managing a system. Public cloud providers typically charge a monthly subscription fee for the service based on usage. This model eliminates massive capital expenditures when purchasing a new system. Instead, one only pays for the system when it is used. Transitioning these costs to operating expenses is preferred by most financial managers as it provides greater predictability for expenses and their impact on net income.
  2. Focusing IT resources on more strategic work core to the organization. Outsourcing the less strategic work to a public cloud provider shifts resource focus to activities more aligned with the organization’s goals and core competencies.
  3. Improved performance, management, or security.  Often a public cloud provider can help an organization improve all three, leading to faster delivery of more IT services, which can dramatically help the organization achieve its overall goals.


When contemplating a public cloud option, consider the following:

 

  1. Are you comfortable relinquishing control of your data?  Exchange, SharePoint and SQL Server have become mission critical applications generating data vital to an organization.  Allowing a third party to manage the data may expose it to new security risks, violate governmental regulations, or eliminate any proprietary advantage derived from the data by granting the cloud provider access to it.
  2. Thoroughly examine the new operational costs of the public cloud.  While you may not expend significant capital to start the service, the subscription fees over time may actually exceed the initial capital outlay and the operational costs of deploying and managing the system yourself (a simple illustrative comparison follows this list).
  3. Finally, the quality of service you receive with public cloud offerings for Exchange, SharePoint, and SQL Server may be different from what you are accustomed to when you manage those systems yourself.  Microsoft Office 365, Microsoft Azure, and other service provider offerings come with standard service level agreements that offer little flexibility to customize them to your organization's needs.  Ask yourself how long you are willing to wait in a queue with the provider's other customers when you are trying to retrieve critical data.
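To illustrate the second point above, here is a deliberately simple comparison with made-up numbers; the subscription fee, capital outlay, and operating costs are placeholders you would replace with real quotes and internal figures.

```python
# Illustrative only: cumulative public-cloud subscription spend versus an
# on-premises purchase over the same period. All figures are placeholders.
years       = 5
monthly_fee = 40_000         # assumed per-month subscription
capex       = 1_200_000      # assumed up-front hardware/software purchase
annual_opex = 150_000        # assumed yearly cost to run it yourself

cloud_total  = monthly_fee * 12 * years
onprem_total = capex + annual_opex * years

print(f"public cloud over {years} years: ${cloud_total:,}")
print(f"on-premises over {years} years:  ${onprem_total:,}")
print("cheaper option:", "public cloud" if cloud_total < onprem_total else "on-premises")
```

Whichever way the numbers fall for your organization, run them over the full expected life of the system rather than just the first year.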

 

There certainly are benefits to moving your Microsoft applications to a public cloud.  However, the public cloud may not be the panacea for all of your challenges.  The key is to choose the right cloud option to meet your needs. Don't worry: it may seem cloudy now, but the long-term forecast is a bit clearer as you learn all about your cloud options for Microsoft application workloads by reading this three-part blog series.

 

Part 2 of this blog series is now available.
