
Everything Microsoft at Dell EMC


Introduction


In 2011, Microsoft introduced Office 365, forever changing how organizations look at deploying and using collaborative applications like Exchange, SharePoint, and Skype for Business.


Although many organizations like what an online, cloud-based solution offers, many are still hesitant to move to a vendor-controlled platform.

 

Here are some reasons cited by on-premises collaborative application users for not moving to a cloud-based service:

  • Office 365 doesn’t support all our customized application integration.
  • We have users in remote offices where Internet access is limited. 
  • Giving users 50 GB mailboxes encourages them to keep stuff they should delete immediately. 
  • There is no guarantee that Microsoft won’t increase the cost of the service over time.
  • When a problem happens in Office 365, it seems like no one knows what is really happening, and you must fall back on Twitter and Facebook for insight into when problems might be resolved.
  • Our on-premises Exchange deployment delivers better availability. Uptime reports by Microsoft are accurate for the service as a whole but don’t reflect the experience of individual users.

With shrinking budgets, IT departments face unprecedented pressure to improve efficiency and lower costs.  The standard operational model of procuring technology from multiple vendors and managing each independently is problematic and increases cost and complexity.

 

VxRack 1000 FLEX System Components

 

The Dell EMC VxRack™ System 1000 consists of hyper-converged rack-scale engineered systems, with integrated networking, that achieve the scalability and management requirements of traditional and cloud-native workloads. The VxRack family is purpose-built to enable customers to quickly deploy Infrastructure-as-a-Service and/or private cloud architectures. The VxRack System tightly integrates the hardware with the software and management layer using Dell EMC’s ECS, ScaleIO, and CloudArray solutions along with VMware’s vSphere 6.0.


The result is a fully tested, pre-configured, hyper-converged system with automated provisioning, simplified management and robust reporting capabilities at data center and service provider scale. The VxRack System supports the deployment of a variety of application workloads such as Exchange, SharePoint and Skype for Business, allowing organizations to rapidly deliver new services while improving overall agility and efficiency.


These self-contained units of servers and networking are well suited for use cases that require a highly scalable infrastructure. The flexible, modular design of VxRack Systems meets the scalability, performance, and efficiency requirements of modern data centers, enabling IT organizations to deploy a wide range of both traditional business applications and cloud-native applications.


ECS

The Dell EMC ECS platform is a software-defined cloud-storage system that supports the storage, manipulation, and analysis of unstructured data on a massive scale on commodity hardware. The ECS system supports communication, collaboration, and messaging applications. It can be deployed as a turn-key storage appliance or as a software product that can be installed on a set of qualified commodity servers and disks.

 

 

CloudArray

Dell EMC CloudArray cloud-integrated storage extends high-performance storage arrays with cost-effective cloud capacity. By providing access to a private or public cloud storage tier through standard interfaces, CloudArray technology simplifies storage management for inactive data and offsite protection.


Designed to combine the resource efficiency of the cloud with traditional, on-premises storage, CloudArray technology enables you to scale your SAN and network-attached storage (NAS) with on-demand cloud capacity. You can easily adjust for future data growth by expanding existing cloud volumes or creating new ones. CloudArray policy-driven caching capabilities determine, based on the type of data stored, the level of accessibility and performance. In the background, the CloudArray system encrypts and compresses the data before sending it to the cloud.
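As a rough illustration of that background write path (a toy sketch, not CloudArray's actual implementation: the dictionary "object store", zlib compression, and the omission of the encryption step are all assumptions made here for brevity):

```python
import zlib

class CloudTierSketch:
    """Toy model of cloud-integrated storage: data is compressed
    before being sent to a cloud object store (a plain dict here).
    A real product would also encrypt before upload."""

    def __init__(self):
        self.cloud = {}  # stand-in for a cloud object store

    def write(self, key: str, data: bytes) -> None:
        # Background path: reduce the data before "uploading" it
        self.cloud[key] = zlib.compress(data)

    def read(self, key: str) -> bytes:
        return zlib.decompress(self.cloud[key])

tier = CloudTierSketch()
payload = b"inactive archive data " * 1000   # highly compressible
tier.write("vol1/block42", payload)
assert tier.read("vol1/block42") == payload  # lossless round trip
print(len(payload), len(tier.cloud["vol1/block42"]))
```

A real tier would additionally cache hot data locally and push only cold data to the cloud, per the policy-driven caching described above.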

 

 

VMware vSphere 6.0

VMware vSphere 6.0 transforms a computer’s physical resources by virtualizing the CPU, RAM, hard disk, and network controller. This transformation creates fully functional VMs that run isolated and encapsulated operating systems and applications, just like physical computers. vSphere provides a highly available, resilient, on-demand infrastructure that is the ideal foundation of any cloud environment.

 

ScaleIO

Dell EMC ScaleIO storage is a software-only solution. It uses existing servers' local disks and local area network (LAN) to create a virtual SAN that has all the benefits of external storage but at a fraction of the cost and complexity. The ScaleIO solution turns the existing local internal storage into internal shared block storage. For many workloads, ScaleIO storage is comparable to, or better than, external shared block storage.

 

 

 

VxRack 1000 FLEX System Benefits

 

The Dell EMC VxRack 1000 FLEX System for Microsoft collaboration provides:

 

  • A proven reference architecture that supports mixed Microsoft messaging and collaboration applications running on the VxRack System with ECS for storage extension.
  • Application test results that show that the reference architecture satisfies all recommended performance guidelines for Skype for Business Server 2015, SharePoint Server 2016, and Exchange Server 2016.
  • The ability to start with a small hyper-converged platform and grow to extreme scale easily, providing long-term scalability for the applications.
  • Design considerations and best practices for running Microsoft messaging and collaboration applications in a VxRack and ECS environment.
  • An easy and efficient way to use Microsoft messaging and collaboration applications deployed on VxRack Systems, eliminating lengthy infrastructure scoping, sizing, and testing, and reducing TCO.
  • A pre-integrated hyper-converged infrastructure system that uses the vSphere 6.0 compute infrastructure and ScaleIO software-defined storage. ScaleIO can achieve extreme scale by starting small and growing to more than 1,000 nodes.

 

 

SUMMARY

 

The VxRack System 1000 FLEX hyper-converged system satisfies the scalability and management requirements of Microsoft SharePoint, Microsoft Skype for Business, and Microsoft Exchange applications. IT organizations that are moving toward a software-defined architecture can deploy the VxRack 1000 FLEX for a diverse set of use cases, especially in situations where application growth is unpredictable.  Please read the Dell EMC VxRack solution for Microsoft messaging and collaboration applications white paper and share it with others in your organization who want to join the move to converged infrastructure.

 

 

Resources:

VxRack Solution for Microsoft Messaging and Collaboration Applications White Paper

 

 


In the Pipeline

We're hiring Microsoft Cloud subject matter experts and leaders. As part of the Microsoft Hybrid Cloud Team, you’ll help transform the hybrid cloud market by bringing together the best technologies from Microsoft and Dell EMC. This team is responsible for the Dell Azure Stack integrated systems, the cloud applications designed to run on Azure, along with the Dell technologies that will deliver the best customer experience with Azure Stack. More Details Here>>

VMAX White Paper Update  Look for an updated section related to SQL Server compression in the currently published  Dell EMC VMAX All Flash Storage For Mission-critical SQL Server Databases.  Download Current Version>>
Microsoft Collaboration, Messaging and UC on VxRack  This solution addresses the growing and disruptive trend of hyper-converged infrastructure for Microsoft messaging, collaboration, and unified communications applications, based on the VxRack System 1000 with ScaleIO for storage.

Microsoft SQL Consolidation on VxRail  This solution highlights proven scalability for combined SQL Server workloads using software-defined storage co-developed with VMware.  Additional benefits include a description of the flexible configuration options and a single point of contact for support.

Recently Released
EMC Storage Integrator (ESI) version 5.0 for Windows   EMC Storage Integrator (ESI) for Windows Suite is a set of tools for Microsoft Windows and storage administrators. The suite includes the ESI System Center Operations Manager (SCOM) Management Packs, which provide monitoring capabilities for Converged Systems VCE Vblock and VxBlock, Software Defined Storage ScaleIO, EMC Symmetrix VMAX series, EMC VNX series, EMC VNXe series, EMC Unity series, and EMC XtremIO series, as well as VPLEX. The ESI PowerShell Toolkit enables you to automate view and provision operations for block and file storage for Microsoft Windows and Microsoft SQL Server. ESI also supports storage provisioning and discovery for Windows virtual machines running on Microsoft Hyper-V and VMware vSphere.   Download here>> (requires support login)

Everything Microsoft and VMAX  This live document captures all Microsoft integration and solutions for EMC VMAX. Feel free to bookmark this document so you can come back to it anytime.  Read More Here>>

Global Services News
Reduce Technical Debt and Invest in New Applications Assess your Application Portfolio to drive out costs by retiring, migrating to the optimum cloud computing model, or replatforming. Create continuous delivery processes. Build a new digital business model. Architect, upgrade, or migrate business critical Microsoft systems, including Exchange, SharePoint, Windows, System Center and SQL Server. Learn More Here >>
Microsoft extends support for Windows Server and SQL Server Microsoft recently announced an extended support program for Windows Server and SQL Server that will provide a total of 16 years of product support coverage from Microsoft.  Dell EMC Global Services can help replatform legacy EOL platforms (such as Windows Server 2003 and SQL Server 2005) that are still at risk; newer versions will enjoy extended support as organizations adopt the 3rd platform and hybrid cloud infrastructures.  Read More Here >>
Professional Services for Microsoft Exchange and Office 365 Designed to provide the best messaging solution—whether on-premises, Office 365-based, or a hybrid—Dell EMC is uniquely qualified to modernize, automate, and transform messaging infrastructure and maximize investments in hybrid cloud and converged infrastructure. Learn More Here >>

Canadian global information technology company adopts Microsoft Hybrid Cloud CGI sought a way to dramatically cut project complexities and both deployment time and risks when building private and hybrid clouds for clients. It also wanted to streamline the maintenance of the clouds it deploys. CGI clients require flexible, modular hybrid cloud systems that grow with their business and application needs. Its clients also value the ability to extend data center capacity with Microsoft Azure cloud services for flexible cloud backup, disaster recovery, and infrastructure as a service. The Dell EMC Hybrid Cloud System for Microsoft is used by CGI along with its CGI Unify360 hybrid IT management offering to help simplify hybrid cloud deployments, cut project time by more than half, and reduce risks — leading to faster time-to-value for its clients. Learn More Here >>

Industry News and Events
Microsoft explains its plan to win the 'battle for the future' against Amazon's Alexa and Google Assistant  Much of that strategy hinges on Windows 10's Cortana personal assistant. Full article on Business Insider website.  Read more>>
Microsoft Azure ExpressRoute Now Live in Cologix's Montreal Data Centre  ExpressRoute provides key benefits to enterprises looking to build hybrid cloud environments, including:
  • Private connections that bypass the public Internet.
  • Lower latency to the Microsoft Cloud.
  • Scalable, densely connected and customizable colocation opportunities.

Full article on IT Business Net website. Read More Here>>

Contact Us

This newsletter is brought to you monthly by Dell EMC's Microsoft community.  The editor is Phil_Hummel.  Please leave questions and/or comments in the space below and let us know what other topics you would like us to include in upcoming issues.  You can also follow this community site using the button above to get email notifications when new content is posted.

 

Thanks for reading!

Phil Hummel @GotDisk

In the Pipeline

Understanding the Dell EMC Modern Data Center for Office 365  In Q1 2017, look for this training course, designed to help you understand the integration of Office 365 within the modern data center, along with market trends and drivers, the Dell EMC approach, planning and implementation, and the benefits of other Dell EMC products and service offerings for Office 365.

Recently Released

Slide Deck: VMAX Tech-for-the-Trenches with VMAX All Flash and Microsoft SQL Server - November 2016    Covers the top 10 reasons for using VMAX All Flash, an SRDF/Metro deep dive, VMAX Host IO Limits best practices, and PowerShell for VMAX and the REST API. View here>>

White Paper: Microsoft SQL Server Consolidation Solution for Dell EMC Unity  Describes the Microsoft SQL Server database consolidation solution with the Dell EMC Unity storage platform. The solution focuses on the consolidated environment, high-performance databases, copy management, disaster recovery, and data backup. September 2016  Read more>>

Research Report: SQL Server Transformation: Toward Agility & Resiliency  To explore the trends shaping SQL Server data environments, Unisphere Research fielded a survey among the members of the Professional Association for SQL Server (PASS), the leading organization for SQL Server professionals. Pre-order your copy today to receive the full report upon release and join our insiders list for ongoing tips, trends reports and special offers from DBTA, PASS, Dell EMC and VMware.  Read more>>
Recorded Webcast: Virtualizing Your SQL Server Databases and Moving to The Cloud   Learn how other SQL Server sites are leveraging virtualization and cloud, the key benefits and challenges experienced, and best practices to keep in mind as you plan for the future from Unisphere Analyst Joe McKendrick, VMware vExpert Don Sullivan and Dell EMC Database Technical Marketers Sam Lucido and Phil Hummel.  View here>>
Global Services News

New Dell EMC Professional Services Battlecard  Did you know? Our Microsoft services teams have been named Microsoft Partner of the Year nearly 50 times and hold Microsoft gold competencies in over two dozen categories. This sales battlecard for Dell EMC Microsoft Professional Services provides a quick, concise, high-level overview of many Microsoft service offerings.  Start the conversation with your clients today!  Learn More Here >>

Azure Cloud and Enterprise Onboarding Boot Camp

The Dell EMC team recently attended the Azure Cloud Architect Boot Camp in Bellevue, Washington.  This event was specific to Microsoft-badged employees as well as elite Microsoft partners such as Dell EMC.  The Boot Camp was an immersive learning experience that included general lecture sessions, case study workshops, executive panels, and hackathons.  Rob Sonders, Dell EMC PS leader, brought back some great information from the event.  Learn More Here >>

TransVault Products Suite and Demo for Dell EMC  As customers transition to Office 365 and migrate from legacy email archiving platforms, the everlasting problem of PST files continues to challenge organizations today.  Join us on Dec 16th at 12PM EST to learn more about TransVault and how they can help with PST eradication projects through features such as discovery, PST identification, deduplication, consolidation, and migration. Learn More Here >>

Industry News and Events
Manage Azure policy using the Azure Stack Policy Module  The Azure Stack Policy module allows you to configure an Azure subscription with the same versioning and service availability as Azure Stack. Once complete, you can use your Azure subscription to develop apps for Azure Stack.  Read more>>
Deploy Pivotal Greenplum on Azure Greenplum is regarded as the most scalable mission-critical analytical database and is in use by a large number of leading enterprises worldwide. In addition, all major third-party analytic and administration tools are supported through standard client interfaces. Read more>>
What You Should Know About Hyper-Converged Infrastructure In a recent survey of VMware vSAN customers, 64% of the respondents indicate using vSAN to run their most critical business applications including Microsoft SQL Server, Microsoft Exchange Server, Oracle, MySQL, and Microsoft SharePoint.  For customers looking for speed of deployment, the VxRail appliance, co-engineered by VMware and Dell EMC, is a great way to get started with HCI.  Read more>>
Recently Released
EMC Storage Integrator (ESI) version 4.1 for Windows  New features include more than 20 new ESI PowerShell Toolkit cmdlets, XtremIO support enhancements, Unity VVOL management using the ESI GUI and PowerShell, VMAX3/VMAX All Flash snapshot management using SnapVX, and much more.  Read more>
BLOG Recoverpoint for Virtual Machines: DR for Microsoft vApps  Answers the question “How can an application consisting of multiple VMware VMs be continuously replicated to a DR site?”  RP4VMs facilitates DR and operational recovery of virtual machines of all types. Read more>
BLOG Discovering Unity with SQL Server  Describes the 300F All-Flash storage system’s single integrated architecture for block, file, and VMware Virtual Volumes, which increases its applicability in consolidation scenarios.  Covers topics including performance, backup/restore, copy management, and DR protection. Read more>
BLOG  TempDB on PCIe flash or not?  PCIe flash is fast. It is located in the server and is just inches away from the CPU.  But there are limitations.  Read the results from a recent POC comparing PCIe flash to an XtremIO all-flash array for SQL Server TempDB performance and management.  Read more>
Global Services News
What Influences Your Cloud Strategy for Microsoft Apps? For most organizations, some form of transition to cloud is inevitable. As you’re defining your cloud strategy, you’ll either choose to adopt, plan for future transition or justify why you’re not doing so. Learn more about who is moving Microsoft applications to the cloud and some key considerations in doing so. Learn More and Share Here >>
Preparing Microsoft Applications for Transition to Cloud. You may, as the majority of enterprise customers already have, choose a hybrid implementation of Office 365 for your Microsoft applications. But there are many variables which will factor into the decision making process including backup & recovery, functional parity and migration. Learn More and Share Here >>
Professional Services for Microsoft Exchange and Office 365. Designed to provide the best messaging solution—whether on-premises, Office 365-based, or a hybrid—Dell EMC is uniquely qualified to modernize, automate, and transform messaging infrastructure and maximize investments in hybrid cloud and converged infrastructure. Learn More Here >>
Industry News and Events
Microsoft's Massive Azure Extension into the Enterprise   IT pros and developers can build and extend containerized workloads and applications to its network- and compute-enhanced Azure public cloud -- and vice versa.  Read more>
Dell EMC Expands Broad Microsoft Support, Delivering New Innovations across Cloud, Converged Infrastructure  “For more than 30 years, Dell EMC and Microsoft have focused on delivering best in class, innovative solutions that span the entire Microsoft product portfolio to organizations all over the world,” said Jim Ganthier Read more>
Nine-in-ten IT professionals say cloud is making them learn new skills  The Microsoft Ignite 2016 Cyber Security Survey, questioned 140 IT workers who attended the Microsoft Ignite event. It also found that one in three respondents believed the cloud could be the end of traditional IT security teams, while 20% felt the cloud made it harder to track IT assets.  Read more>
Microsoft's Consortium Blockchain Hits Azure Marketplace  Project Bletchley, Microsoft's consortium blockchain is now available in the company's cloud app marketplace, while cool blobs now are in more Azure regions. Read more>
Microsoft’s Surface Studio beats Apple’s MacBook Pro 8 to 1 in Viral Share charts  The Surface Studio is widely believed to be more innovative than the MacBook Pro.  From @mspoweruser Read more>

As SQL Server database environments continue to increase in size and complexity, so does the challenge of maintaining the performance and availability of business-critical systems. To identify the key technologies and strategies being deployed by SQL Server sites to respond to these challenges, Unisphere Research recently completed a survey of 357 participants in partnership with the Professional Association for SQL Server (PASS).  The survey focused on four major topics of importance to the SQL Server community: availability, automation, virtualization, and cloud services.


The findings from the survey provide a valuable opportunity to learn how other SQL Server sites are managing uptime and availability, leveraging virtualization, and utilizing cloud services, both today and in the near future.  Survey results also include key benefits and challenges that SQL Server professionals are weighing as they make decisions.


A sneak preview of the important findings from the SQL Server survey was discussed in a recorded web seminar held on November 3, 2016 by Unisphere Research. The web conference included a panel of database and virtualization experts from Unisphere Research, VMware, and Dell EMC who have been involved in the research project.  The panelists were:

 

 

  • Joseph McKendrick, Lead Research Analyst, Unisphere Research
  • Don Sullivan, vExpert, VMware
  • Sam Lucido, Director, Technical Applications Marketing, Dell EMC
  • Phil Hummel, Technical Marketing Engineer, Dell EMC

 


During the webinar we presented many of the significant findings from the survey.  A more detailed paper with all the results is being prepared. Some highlights from the survey presented in the web conference include:


  • Use of flash in all categories of storage is growing.
  • Two-thirds of respondents have seen advantages from employing virtualization within their SQL Server environments.
  • The leading challenges for automation are its perceived complexity, cost, and the need to acquire new skills.
  • Private cloud is delivering the most tangible benefits.
  • Uptime and availability are the leading concerns with both private and public clouds.

 

In the face of increasing pressure to reduce costs and the ever-growing need for better agility, many IT departments are seeking ways to automate routine tasks and free up resources.  This survey can help managers and practitioners assess where they stand relative to the survey respondents as an aid to better planning and resource allocation.  The recording of the web conference is available here:


Virtualizing Your SQL Server Databases and Moving to The Cloud

 

We will have more information to share from the survey when the final report is available.  Please follow this community to receive email notification when new content is posted.  We also hope you will use the comments section below for any questions or comments.

 

Thanks for reading,

Phil Hummel, EMCDSA

On Twitter @GotDisk

 


 

SQL Server is a well-integrated data platform with many available services and a couple of editions that cover just about any data management scenario you can imagine.  How you mix and match those options with your code and creativity opens up an incredible world of possibilities.  You might think that finding the right supporting infrastructure for all these capabilities would be challenging, but it doesn't have to be.  Dell EMC is leading the industry in providing technologies for IT resource consolidation.  Yes, consolidation is a big topic that means different things to different people.  However, I think we can agree that maintaining many isolated silos of equipment for each application in the enterprise has to be challenged vigorously.

 

Given the rich menu of RDBMS, BI, data pipeline, and other services available in SQL Server, it is especially important to guard against building everything SQL Server out in silos.  The rest of this article summarizes a recently completed paper by Dell EMC describing how to achieve Microsoft SQL Server database consolidation with our Unity storage platform. The solution engineers focused on documenting the consolidation environment setup, performance testing, data copy management, disaster recovery, and data backup.  The results are available in a solution guide that you can download here.

 


Solution Overview

The solution uses the Unity 300F All-Flash storage system.  Unity provides a single integrated architecture for block, file, and VMware Virtual Volumes, which increases its applicability in consolidation scenarios.  The engineering team chose Diskspd for the storage performance baseline testing. Diskspd is a versatile storage testing tool that combines granular IO workload definition with flexible runtime and output options; DBAs and Windows administrators who haven't tried it should download it. It is a big improvement over its predecessor, SQLIO.


Performance

Storage design is one of the most important elements of a successful SQL Server deployment.  The solution guide describes storage design and general best practices for deploying SQL Server on Unity storage systems. Because virtualization of a SQL Server environment requires its own set of considerations, the guide also includes a section on best practices for SQL Server on VMware, along with references to more detailed white papers.


The solution results show that Unity is capable of delivering steady rates of low-latency I/O under a wide range of Diskspd settings.  Most of the test scenarios had latency of 1 ms or less, which is quite remarkable for an entry-level all-flash array.

 

The team ran a wide range of scenarios that included various host counts, block sizes, and I/O patterns, as shown in this table:

 

Host Count | Block Size | Pattern
16 | 8K | Random, read/write ratios of 90/10, 80/20, 70/30
16 | 8K, 16K, 32K, 64K | Random, read/write ratio of 70/30
8, 16, 24 | 8K | Random, read/write ratio of 70/30
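For reference, the matrix above expands to ten distinct runs. The sketch below generates one Diskspd command line per scenario; the target file, duration, thread count, and queue depth are placeholder values, not settings from the solution guide (Diskspd's `-b`, `-w`, and `-r` flags set block size, write percentage, and random access).

```python
# Expand the test matrix into one Diskspd command line per scenario.
# Each entry is a (hosts, block size, write percentage) triple from the table.
scenarios = (
    [(16, "8K", w) for w in (10, 20, 30)]                 # 90/10, 80/20, 70/30
    + [(16, bs, 30) for bs in ("8K", "16K", "32K", "64K")]
    + [(h, "8K", 30) for h in (8, 16, 24)]
)

commands = [
    f"diskspd.exe -b{bs} -w{w} -r -t8 -o32 -d300 E:\\test.dat  # {h} hosts"
    for h, bs, w in scenarios
]

for cmd in commands:
    print(cmd)
```

Each generated line would be run from every participating host for the stated duration, with results aggregated afterward.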


Fast Integrated Backup

In a consolidated database environment using Unity storage, DD Boost for Enterprise Applications (DDBEA) dramatically improves network utilization efficiency by reducing the amount of data transferred over the network.  With DD Boost, the server sends only unique data segments to the Data Domain system.  Traditional full backups take longer to complete, can require as much space as the original data, and consume bandwidth equal to the amount of data being transferred. DDBEA reduces resource use in all these areas by applying intelligent backup data reduction at the client.
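The "only unique segments cross the wire" idea can be illustrated with a toy content-fingerprinting sketch. This is not the DD Boost protocol; the fixed 4 KB segment size and SHA-256 fingerprints are assumptions made purely for illustration.

```python
import hashlib

SEGMENT = 4096  # assumed fixed segment size for the sketch

def backup(data: bytes, server_index: set) -> int:
    """Send a backup; return the number of bytes actually transferred.
    Only segments whose fingerprint the server has never seen are sent."""
    sent = 0
    for off in range(0, len(data), SEGMENT):
        seg = data[off:off + SEGMENT]
        fp = hashlib.sha256(seg).hexdigest()
        if fp not in server_index:    # unique segment: transfer it
            server_index.add(fp)
            sent += len(seg)
    return sent

index = set()  # fingerprints already held by the backup target
full = b"".join(i.to_bytes(2, "big") * 2048 for i in range(100))  # 100 distinct 4 KB segments

print(backup(full, index))            # first full backup sends all 409600 bytes

changed = bytearray(full)
changed[:SEGMENT] = b"\xff" * SEGMENT  # modify a single segment
print(backup(bytes(changed), index))   # next backup sends only 4096 bytes
```

The second "full" backup transfers only the one changed segment, which is the bandwidth effect the paragraph above describes.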


Copy Data Management

The solution also shows how AppSync simplifies and automates the process of generating and consuming database copies by abstracting and automating the underlying storage and replication technologies.  AppSync is used to orchestrate all the activities required from copy creation and validation through mounting at the target host and recovering a database.  The copy management use cases demonstrate how to automatically discover user databases and use the database structure to map objects required for the copy through the virtualization layer to the underlying storage LUN.


Data Protection and Recovery

RecoverPoint for Virtual Machines was chosen for the solution to address data protection and recovery across physical sites.  RP4VMs provides a simple to use automated solution to easily manage any consolidated database environment with multiple sites.  Unity integrates with RecoverPoint for Virtual Machines to protect SQL Server instances at the production site by enabling replication and recovery at a secondary site.


Next Steps

Download the full solution guide using the link below.  There are many more details on all of the topics that I briefly covered in this article.  Then contact your Dell EMC account team to arrange a deeper discussion of the Unity storage platform, along with our other solutions for SQL Server copy management, data protection, and disaster recovery.

 


 

http://www.emc.com/collateral/white-papers/h15142-unity-ms-sql-wp.pdf

 

Thanks for reading,

Phil Hummel, EMCDSA

On Twitter @GotDisk


TempDB on PCIe flash or not?

Posted by Kucera Oct 7, 2016

I recently completed a POC for a data warehouse on SQL Server using XtremIO.  One of the requirements of the POC was to use PCIe flash for TempDB.  PCIe flash is fast.  It is located in the server and is just inches away from the CPU.  But there are limitations.  Here are some stats I found for PCIe flash:

 

  • Read bandwidth: 2.7 GBps
  • Write bandwidth: 2.1 GBps
  • Random read operations at 4-KB block size: 285,000 IOPS
  • Random write operations at 4-KB block size: 385,000 IOPS
  • Read latency: 92 microseconds
  • Write latency: 15 microseconds

 

When looking at IOPS, those are some really big numbers.  Even the bandwidth numbers are fast.  But something left off the chart was the IO size used to produce those bandwidth numbers.  After some searching, I found that a 1MB IO size was used.  Most workloads don't generate 1MB IOs, so what happens when the IO size is smaller?
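The relationship at work here is simply bandwidth = IOPS x IO size. A quick back-of-the-envelope calculation using the vendor figures above shows why small IOs cannot reach the rated bandwidth:

```python
KB, GB = 1024, 1024**3

def bandwidth_gbps(iops, io_size_bytes):
    """Bandwidth implied by an IOPS figure at a given IO size."""
    return iops * io_size_bytes / GB

# At the rated 285,000 random-read IOPS with 4 KB IOs, the card moves
# only ~1.1 GB/s -- far below the 2.7 GB/s achieved with 1 MB IOs.
small_io = bandwidth_gbps(285_000, 4 * KB)

# Conversely, sustaining the rated 2.7 GB/s with 64 KB IOs (the average
# IO size observed in this POC) would require roughly 44,000 IOPS.
def iops_needed(gbps, io_size_bytes):
    return gbps * GB / io_size_bytes

mid_io = iops_needed(2.7, 64 * KB)
```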

 

There were two different databases used for testing during the POC.  One database was about 1.1TB in size and contained three tables.  The second database was built from the first but artificially expanded by 10x; it was therefore 11TB, and each of its three tables held billions of records.

 

A batch of queries was run against the two databases at various user loads.  These were not typical queries; they were created to stress test the entire system.  Many of them performed multi-table joins, and some of the result sets on the larger database contained hundreds of millions of rows.  In addition to the large result sets, many of the queries used GROUP BY, which caused a lot of TempDB usage, sometimes moving terabytes of data in and out of TempDB.

 

As I was monitoring the performance of the server, I was witnessing response times greater than 200ms on the volume used for TempDB.  That was not something I had expected.  Bandwidth on the volume was ~1.9GB/s read and ~1.6GB/s write.  The average IO size was 64k.  Using a smaller IO size may have had an impact on the max bandwidth the PCIe flash could generate.  I have witnessed XtremIO achieving higher bandwidth at those IO sizes.  Could moving TempDB to XtremIO improve performance?

 

The XtremIO array in my environment is made up of two 10TB X-Bricks.  Each X-Brick has four 8Gb FC ports.  There are also four 10Gb iSCSI ports on each X-Brick, but they were not used for this POC.  A single 8Gb FC port can provide 800MB/s of bandwidth.  Each X-Brick is capable of 3.2GB/s of bandwidth.  The XtremIO array in my environment can do 6.4GB/s.  6.4GB/s is more bandwidth than what the PCIe flash is rated for.  But since the XtremIO array is external, can the server get the bandwidth out to the array?

 

The server being used for this POC had 5 PCIe Gen 3 slots.  The PCIe flash utilized a Gen 2 x8 interface, with a theoretical max bandwidth of 4GB/s.  The server also had 2 dual-ported 16Gb FC HBAs, each utilizing a Gen 3 x8 interface with a theoretical bandwidth of 7.877GB/s; using two provides 15.754GB/s.  Plenty of bandwidth available on the PCIe interface.  A 16Gb FC port is capable of 1600MB/s, and the server had four.  Four ports provide 6.4GB/s of bandwidth, which matches up well to the 6.4GB/s that the 2 X-Bricks can deliver.
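The theoretical numbers quoted above fall out of the per-lane PCIe rates (after encoding overhead) and the per-port FC rates. A sketch of the arithmetic:

```python
# PCIe per-lane payload rates after encoding overhead:
#   Gen2: 5 GT/s with 8b/10b encoding   -> 500 MB/s per lane
#   Gen3: 8 GT/s with 128b/130b encoding -> ~984.6 MB/s per lane
GEN2_LANE_MBPS = 5000 * 8 / 10 / 8          # 500 MB/s
GEN3_LANE_MBPS = 8000 * 128 / 130 / 8       # ~984.6 MB/s

pcie_gen2_x8 = GEN2_LANE_MBPS * 8 / 1000    # 4.0 GB/s (the flash card's slot)
pcie_gen3_x8 = GEN3_LANE_MBPS * 8 / 1000    # ~7.877 GB/s (each FC HBA's slot)

# FC port payload rates: 8Gb FC ~ 800 MB/s, 16Gb FC ~ 1600 MB/s per port.
xtremio_fc = 2 * 4 * 800 / 1000             # 2 X-Bricks x 4 ports -> 6.4 GB/s
server_fc = 4 * 1600 / 1000                 # 4 x 16Gb ports       -> 6.4 GB/s
```

The array-side and server-side FC bandwidth match at 6.4 GB/s, which is why the four 16Gb ports were enough to feed the two X-Bricks.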

 

All of the tests were rerun after moving TempDB to XtremIO.  The TempDB volume still experienced response times greater than 200ms, but there was an improvement in bandwidth.  Bandwidth numbers increased to ~4.5GB/s, which is still lower than what the two X-Bricks could deliver.  There is more to performance than just IO; CPU performance had a large impact as well.  Most of the time the server CPU was at 100% across 44 physical cores.  Even when the server was reconfigured with 96 cores (4 sockets, 24 cores each), CPU was at 100%.  The queries that were initially CPU bound saw no improvement or degradation when TempDB was moved, but the queries that were bandwidth bound on TempDB did see an improvement on XtremIO.

 

In the end, it's all about understanding where bottlenecks can occur.  Yes, multiple PCIe flash cards could have been used, but the server was limited on PCIe slots.  And why add more PCIe flash when XtremIO has plenty of bandwidth to handle the workload?  In addition to handling more workload, there are also all of the benefits of using shared storage.  With XtremIO, if we came to a situation where more performance or space was needed, additional X-Bricks could be added while keeping everything online.  The same can't be said for PCIe flash.

There Has to Be a Better Way

When personal computers moved from being the toys of hobbyists and tinkerers into the hands of business users and the general consumer population, a common refrain was heard:  "I want something that just works out of the box".  Apple and Dell are two examples of vendors that focused on building a business model and brand by messaging to customers that their products "just work".  To execute on that vision, Apple closely controlled the development and integration of the entire system of hardware and software, while Dell pioneered convergence for the "open systems" model in partnership with Intel and Microsoft.

 

In the world of corporate data center information technology, even today the majority of enterprise organizations still expend a large share of their resources assembling data center equipment from multiple vendors and putting it all together.  From procuring and configuring equipment, to integrating software and hardware, to installing and configuring monitoring, and then testing everything, a lot of effort goes into "just getting things to work".  And this all has to happen before applications like Microsoft Exchange, SQL Server and/or SharePoint Server can even begin to be deployed.  Even after all this effort, the end result is that most organizations do not have sufficient standardization across their data centers to support significant levels of automation or consistent monitoring.  However, data center equipment procurement and management best practices are starting to change dramatically.


There is greater awareness of the need to improve time to service, driven by requests for more empowerment of business users and software developers in addition to doing more with less IT budget.  The public cloud providers have built technology and business models catering to the requirements of end-user agility and control.  Many organizations are under pressure to provide these same service levels within the secure and compliant confines of the on-premises data center.  The realization that there are now options in the marketplace for the enterprise data center that "just work out of the box" is leading to a revolution in what we call "best practices".


In the remainder of this article I'm going to explain how that is changing, due in large part to ongoing efforts at Dell EMC, and what it means for IT and business decision makers.  There is some really interesting news for organizations that rely heavily on Microsoft applications including SQL Server, Exchange, and SharePoint in the third section below.


Brief History of Converged Platforms

The Dell EMC VCE™ brand of products was the pioneer of converged infrastructure with the introduction of Vblock® Systems in 2009. The portfolio quickly expanded to introduce VxBlock® Systems, integrating a wide array of technologies and offering increased choice and flexibility for converged infrastructure systems.


Dell EMC's Converged Platforms Division is continuing to innovate with a portfolio of products that bring together world-class hardware, software, support, and purchasing all from one vendor.  Today, converged and hyper-converged products in the Dell EMC catalog include Vblock®, VxRack® and VxRail®.  With a flash-optimized portfolio, Dell EMC provides even more options for customers to transform and modernize their data centers. Since introducing the industry's first true converged infrastructure system, Dell EMC has helped more than 1,000 customers transform their IT environments to become more agile, reliable, and cost-effective.

 

With the VCE portfolio of Blocks, Racks, and Appliances, IT leaders have the flexibility to:

  • Reduce costs
  • Deliver new services
  • Shift resources from maintaining infrastructure to delivering innovative business value
  • Modernize and transform data center environments
  • Meet the evolving and dynamic demands of today's tech-savvy mobile workforce

 

Meet the Next Level of Innovation Up the Stack


While many organizations have successfully introduced virtualization on top of converged infrastructure as a core technology within their data center, the benefits have largely been restricted to the IT infrastructure owners. End users and business units within customer organizations have not experienced many of the benefits of virtualization, such as increased agility, mobility, and control.

 

Enterprise Hybrid Cloud integrates the best of Dell EMC and VMware products and services to deliver a fully integrated, enterprise-ready solution across all three data center pillars—compute, storage, and network. Enterprise Hybrid Cloud empowers IT organizations to accelerate the implementation and adoption of a hybrid cloud while still enabling customer choice for the compute and networking infrastructures within the data center.

 

There is an extensive library of application blueprints developed for Microsoft application deployment and management available through the Microsoft modular add-on to Enterprise Hybrid Cloud.   The add-on is built using VMware vRealize Application Services and VMware vRealize Orchestrator to enable automated deployment, management, and protection of Exchange Server, SQL Server, and SharePoint Server applications, and to enable application monitoring with VMware vRealize Hyperic during the application lifecycle.

 

Enterprise Hybrid Cloud for Microsoft Applications provides a reference architecture that integrates all the key components and functionality necessary for deploying, managing, and protecting Microsoft applications in a hybrid cloud. It enables customers to leverage Enterprise Hybrid Cloud 4.0 for:

 

  • On-demand, self-service provisioning of Microsoft enterprise applications such as Exchange Server, SQL Server, and SharePoint Server
  • Complete management of the application service life-cycle
  • Provisioning, monitoring, protection, and management of the infrastructure services by line-of-business end users
  • Provisioning of application blueprints with associated infrastructure resources by line-of-business application owners
  • Provisioning of backup, continuous availability, and disaster recovery services as part of the cloud service provisioning process
  • Database as a service (DBaaS), with rapid, on-demand, self-service provisioning of SQL Server instances and databases on SQL Server virtual machines, post deployment

 

We have recently released a solution guide that describes how to use Enterprise Hybrid Cloud 4.0 for Microsoft Applications to provision and manage new and existing Microsoft Exchange Server, Microsoft SQL Server, and Microsoft SharePoint Server applications for on-premises or hosted cloud services.  Use the link below to download the solution guide, and please post any comments or questions using the comment features below.  We always value your feedback!



Download the solution guide for Enterprise Hybrid Cloud 4.0, Microsoft Applications



Deciding where to deploy Microsoft applications such as Microsoft Exchange Server, Microsoft SQL Server, and Microsoft SharePoint Server can involve trade-offs. Traditional on-premises infrastructure gives IT teams more control, but provisioning can take weeks. Public clouds speed up provisioning, but they do not necessarily meet business requirements for data protection, disaster recovery, security, and guaranteed service levels. EMC Enterprise Hybrid Cloud 4.0 provides on-premises or hosted cloud services to meet these business requirements.  If you are looking for solutions for the enterprise data center that "just work", you're in luck - check out the Dell EMC Converged Platforms and Solutions website.


Thanks for reading,

Phil Hummel, EMCDSA

You can find me on Twitter as @GotDisk


Please share this article with your networks!


In this article I am going to attempt to thread together a few developments in computer hardware and software history and relate those to a set of technology that is available today for doing analysis of large data sets with a very simple architecture.  This should be of interest to anyone working with big data, analytics and the systems that are required to support those capabilities.  We are going to cover some history of data science and two new technologies from Microsoft and EMC.

 

Data Science is a relatively new profession that combines the best of applied statistics and computer science.  Data scientists are engaged in an effort to better understand the information buried in the very large data collection experiments made possible in the digital world. For more background on data science, see this Forbes online article titled A Very Short History Of Data Science.  What is more important for this article is the realization that many IT organizations are struggling to provide robust environments for data scientists to work in.  This is not a new problem.  Data analytics has been driving hardware and software developers to provide more capability with less complexity since the introduction of vacuum tube computing machines. A Univac I computer with 5,200 vacuum tubes was used to successfully predict election results in 1952, long before Nate Silver of the FiveThirtyEight blog was born.

 

The pioneers of data analytics had to deal with all aspects of the hardware as well as writing the software they used.  The need to make data analytics more accessible to a broader audience of statisticians and scientists, many of whom were not proficient in software programming, provided a large commercial software market opportunity for the rapidly growing "mainframe" computer sector in the 1960s.  Many of the most popular tools used by data scientists today, including SAS and SPSS, were first developed for mainframes in the mid-to-late 1960s.

 

Immediately following the introduction of personal computers in the early 1980s, these same software developers, together with a host of new startups, rushed to introduce commercial software for data analysis and statistics on PCs.  The early versions of the DOS operating systems developed by Microsoft presented major challenges for mainframe software developers.  Two important innovations widely available for mainframes were missing on the PC platform: 1) availability of systems with multiple processing cores coupled with multi-threaded operating systems, and 2) virtual memory management.  These two features are still very relevant to the way we do analytics for big data today, which will lead us to why Microsoft R Server together with EMC DSSD D5 is an interesting solution.  First I want to cover a little bit of R background and then bring the pieces together to explain why I'm excited about this integration.


If you don't work in the field of data analytics you may never have heard of the R programming language, despite the fact that it is the most frequently used analytics/data science software according to the 2016 KDNuggets Software Poll.  If you have, or are going to have, a significant data science initiative, chances are there is going to be a need for infrastructure to run R analytics.


There are a couple of factors that make R so popular.  First, R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows, and MacOS (see The R Project for Statistical Computing for more details). R functionality is organized into packages, with approximately eight packages supplied with the base R distribution.  Since R is also an open source project, anyone can write and contribute packages to the R project.  The Comprehensive R Archive Network (CRAN) currently lists 8,820 available packages in the repository.  This is the second reason that R is so popular: it's not about the syntax, it's the ready availability of packages for almost anything you can imagine needing.


The downside to R relates back to all the background I provided above, the vast majority of R packages are single-threaded and lack virtual memory management.  For more details on this refer to the R for Windows FAQ.  There are many active projects in the R community under way that are "related to pushing R a little further: using compiled code, parallel computing (in both explicit and implicit modes), working with large objects as well as profiling"  that are documented on the R Project website (see High-Performance and Parallel Computing with R). There are downsides here as well.  These newer packages, for the most part, do not bring together parallel algorithm implementations and virtual memory management.  That integration is clearly evident in the Microsoft R Server product.


Microsoft R Server is built using technology that Microsoft acquired in 2015 when they purchased Revolution Analytics.  Here is a link to a one-hour Channel 9 recorded presentation on the value of the acquired intellectual property.  For the purpose of finishing up this discussion I want to talk about just the RevoScaleR package, which couples parallel algorithms with virtual memory management. The virtual memory management relies on a special binary file format (XDF) for storing data.  Large data sets get imported into XDF files prior to performing analysis.  The file format provides very fast access to a selected set of rows and/or a specified set of columns, and new rows and columns can be appended to the file without re-writing the entire file.  The capabilities and efficiency of the XDF file implementation support the "chunking" functions used by algorithms to move data into and out of memory, and support multi-threaded access. This is a big advantage over the constraint of open source R that all data for an analysis must fit in physical memory at run-time.  However, we know from experience with RDBMSs that the benefits of data chunking between disk and memory evaporate if the disk subsystem is too slow to meet the demand of highly parallel code running on servers with lots of processor cores.  This is where EMC DSSD D5 arrays fit in the solution.
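The chunking idea is easy to illustrate. The sketch below is plain Python, not RevoScaleR or the XDF format; it computes a mean over a data set one fixed-size chunk at a time, so only one chunk ever needs to be in memory. Parallel out-of-core algorithms apply the same pattern, with worker threads processing chunks concurrently:

```python
def chunked_mean(reader, chunk_rows=100_000):
    """Compute a mean one chunk at a time, keeping only running totals
    in memory -- the essence of out-of-core ("external memory") analytics."""
    total, count = 0.0, 0
    for chunk in reader(chunk_rows):
        total += sum(chunk)
        count += len(chunk)
    return total / count

def fake_xdf_reader(chunk_rows):
    """Stand-in for a chunked file reader: yields the values 1..1,000,000
    in chunks instead of loading them all at once."""
    data = range(1, 1_000_001)
    for start in range(0, len(data), chunk_rows):
        yield list(data[start:start + chunk_rows])

result = chunked_mean(fake_xdf_reader)   # never holds all 1M values at once
```

The catch, as noted above, is that the reader has to keep up: if the storage behind it is slow, every worker stalls waiting for its next chunk.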


DSSD D5 is a rack-scale flash appliance that combines the performance profile of direct-attach flash with the availability and reliability of shared storage.  For existing servers and applications like Microsoft R Server, DSSD provides a Block Driver that enables you to use DSSD D5 as a block device; the driver manages the interaction between client applications and the DSSD appliance.  This is how R Server would access DSSD D5.  Microsoft R Server for Linux and DSSD D5 both support 64-bit Red Hat Enterprise Linux 6.x.  You can connect up to 48 hosts via multi-path attach to a DSSD D5 storage appliance, giving them access to up to 100 TB of usable persistent flash storage at more than 10 million IOPS and 100 GB/sec of sustained throughput, all at ~100 μsec latency on average.  This level of scale and performance in a multi-host shared configuration is perfect for supporting a team of data scientists that need the flexibility to change configurations in response to changing data analysis needs.  The tight coupling of direct-attached flash with a single server is not flexible enough for most enterprise big data environments.


I've covered a lot of material in this post from several different perspectives, so I will end with a quick summary and a few links to deeper resources on the main points covered.

  • Highly scalable data science systems require software that is multi-threaded and uses virtual memory management coupled with hardware that can support the resulting workload demand.
  • The Microsoft R Server RevoScaleR package includes parallel processing implementations of many popular data science algorithms with integrated virtual memory management support.
  • EMC DSSD D5 rack-scale flash provides ultra-dense, high-performance, highly available, and very low latency shared flash storage accessed through PCIe Gen3 that leverages NVMe™ technology.
  • Microsoft R Server for Linux combined with EMC DSSD D5 provides a powerful, flexible and easy to implement and maintain platform for supporting teams of data scientists working on enterprise big data analytics solutions.

 

More information:

 

  1. Microsoft R Server
  2. EMC DSSD D5
  3. Open Source R Resources

 

Thanks for reading

Phil Hummel, EMCDSA

Twitter: @GotDisk


ProtectPoint-Software-Box-2015_copy_548x720_72_RGB.jpg

The recent release of EMC ProtectPoint 3.1 gives SQL Server administrators using XtremIO a lot to be excited about.  ProtectPoint integrates features of primary storage with EMC's Data Domain protection storage to greatly reduce both backup/restore execution times and operational complexity.  ProtectPoint features are accessible from both SQL Server Management Studio and T-SQL, making it easier for both the infrastructure and DBA teams to leverage all available assets for data protection.

 

Many customers using XtremIO storage for SQL Server databases have discovered the power and ease of use of XtremIO Virtual Copies (XVC) for reducing the overhead of backing up large databases.  A recent white paper titled Introduction To XtremIO Virtual Copies explains many of the features and use cases for readers not familiar with XVC.  There is a section devoted to Offloading Backup Operations that is particularly useful for this discussion.

Since XtremIO storage-assisted backups use the array's virtual copies as the backup target, you end up with both the source and the backup on the same storage device.  If you lose the array, you lose them both.  There are several approaches commonly used to increase the level of copy availability.  You can mount the storage copies to a non-production host and run a backup tool on that server.  You can replicate the storage copies/backups from the array holding the source to another storage array.  Both of these approaches involve some level of scheduling and possibly scripting work that can increase operational complexity.

 

A ProtectPoint SQL Server backup takes a point-in-time virtual copy on the XtremIO system and moves the data blocks to the Data Domain system using the RecoverPoint appliance for transport.  This is accomplished without backup or replication assistance from the database host. The RecoverPoint system tracks the data that has changed since the last backup sent to the Data Domain protection device and sends only the changed data to the Data Domain system.

 

ProtectPoint advantages:

  • Performs backups and restores of Microsoft SQL Server database data that resides on an XtremIO primary storage system to protection storage on an EMC Data Domain system.
  • Performs backups and restores of Microsoft SQL Server using SQL Server Management Studio (SSMS), the CLI, or T-SQL scripts.
  • Backs up and restores either an entire SQL Server instance or only the required databases.
  • Restores automatically from a replicated backup on a secondary Data Domain system when the primary Data Domain system is unavailable.
  • Supports listing and lifecycle management of backups using the native database backup functionality, and deletion of backups that are no longer required.
  • Performs backups and restores over an Ethernet (IP) or Fibre Channel (FC) connection.

 

I recently connected with my EMC colleague Ryan Kucera, based out of the Microsoft Technology Center in Chicago.  Ryan provides pre-sales support to EMC customers interested in the intersection of Microsoft and EMC products.  Ryan fired up a WebEx session and walked me through a demo of backing up and restoring a 2+TB SQL Server 2014 database with ProtectPoint.

 

The setup in Ryan's lab for this demo is SQL Server 2014 running on a Hyper-V VM with Windows Server 2012 R2.  Storage is supplied from a two-brick XtremIO cluster connected to the VM using Hyper-V virtual HBAs.  There are also two RecoverPoint appliances in a cluster that replicate XtremIO virtual copies to a Data Domain 4200 that provides protection storage off-array from the XtremIO cluster.  The image below shows the relationship between the components of the solution:

 

PP_config.PNG.png

The yellow circle is our "Production" site with the SQL Server VM, XtremIO and RecoverPoint.  The Grey circle is our "Protection" assets with RecoverPoint, Data Domain and an optional host for accessing the backup copies independent of the production server.

 

The screen shot below shows the UI for selecting a database that you want to protect.  The Microsoft app agent for ProtectPoint is software that comes with ProtectPoint and is installed on the Windows host where SQL Server is running.  The agent supports FULL Database and Transaction Log backups.  Since Data Domain is a de-duplicating protection storage appliance, we only need to run FULL database backups from the client side.

 

PP_database_selection.png

Once the ProtectPoint configuration for a database is complete, the RecoverPoint appliances will begin to replicate the data from the XtremIO primary storage to the Data Domain even though no backup operation has been requested.  Since the XtremIO is an all flash storage array with very fast data reads, the time required for the initial synchronization depends on the Data Domain model and available resources.

 

When a backup is initiated, a virtual copy is taken on the XtremIO array and replicated to the Data Domain via the RecoverPoint appliances.  This completely eliminates any backup overhead from impacting the production SQL Server host.  Since the initial sync of the data made a full copy of the database on the Data Domain, we are only sending the incremental changes that have occurred since the initial sync.  Every subsequent FULL database backup also only has to send incremental changes similar to what a DIFFERENTIAL SQL Server backup would do.  A substantial advantage of using Data Domain and ProtectPoint is we don't have to manage two schedules for different types of backups - we always use full and always get the benefit of incremental.
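The "always full, always incremental" behavior described above comes from changed-block tracking. A minimal sketch of the idea (conceptual only, not the RecoverPoint implementation): track which blocks changed since the last backup, ship only those, and let the protection target maintain a synthesized full copy at all times:

```python
class ChangedBlockTracker:
    """Toy changed-block tracker: the protection target keeps a full image,
    and each 'full' backup only transfers blocks dirtied since the last one."""
    def __init__(self, num_blocks):
        self.dirty = set(range(num_blocks))  # initial sync sends everything

    def write(self, block_no):
        self.dirty.add(block_no)             # production write dirties a block

    def backup(self, volume, target):
        for block_no in sorted(self.dirty):
            target[block_no] = volume[block_no]  # target holds a full copy
        transferred = len(self.dirty)
        self.dirty.clear()
        return transferred                   # blocks actually sent

volume = {b: f"data-{b}" for b in range(8)}
target = {}
cbt = ChangedBlockTracker(len(volume))
initial_sync = cbt.backup(volume, target)    # initial sync: all 8 blocks
volume[3] = "data-3-v2"
cbt.write(3)
next_full = cbt.backup(volume, target)       # next "full": 1 changed block
```

Every backup on the target is a complete, restorable image, yet after the initial sync only changed blocks ever cross the wire.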

 

PP_DuringPPBackup.png

Once the backup is complete, the replication between the XtremIO and Data Domain will be idle until the next backup is requested.  We show both the client side message box and a screen clip from the RecoverPoint management GUI.

PP_backup_image_idle.png

The time required for completing a backup depends on the amount of changes to the database since the last backup and the model and resources available on Data Domain.  The purpose of this article was to educate the reader about the capabilities and architecture of the solution.  A future post will discuss the restore capabilities of ProtectPoint.

 

For more information in the meantime, you can go to the ProtectPoint product page on emc.com.

 

Thanks for reading,

Ryan Kucera

and

Phil Hummel @GotDisk


The EMC Global Solutions Engineering team has published a white paper that provides design guidelines and best practices for Microsoft Exchange, SharePoint and Skype for Business Servers on EMC ScaleIO®. This paper demonstrates that EMC ScaleIO provides a high-performance, scalable, and cost-effective hyper-converged solution for Microsoft messaging and collaboration applications.

 

Lab testing confirms that the ScaleIO design and sizing best practices meet the specified performance requirements. The picture below shows a block diagram of the prototype production environment used in the lab to service our simulated user load.

MS apps on scaleio.png

 

The engineers chose a hyper-converged configuration for the test infrastructure deployment in order to demonstrate the flexibility of migrating the virtual machines among all the hosts. Following the best practices and sizing guidelines from the paper, the test used 15 servers to meet the required workload demand.  The lab tests validated the performance of SharePoint and Exchange running together on the ScaleIO deployment.  To simulate client workloads the engineers used the following tools:

  1. Microsoft SharePoint 2013 Visual Studio Team System custom workload
  2. Microsoft Exchange Jetstress

 

Test Results

The SharePoint test results show that host read latency, write latency, and the overall latency of the virtual disks hosting database primary replicas are all below 20 ms, and the search index disk latency values are all below 10 ms.  The Exchange test results showed an average latency of 12.72 ms on the Mailbox server.  The performance of the Exchange implementation exceeded the design target for the test.

 

Conclusions

The paper explains in detail how architects can evaluate their organization's needs and answer the following design questions based on their individual circumstances:

 

  1. Which ScaleIO topology to choose, converged (two layer) or hyper-converged (single layer).
  2. How many compute resources will you need for a ScaleIO platform.
  3. How to design the protection domains, storage pools, and fault sets.
  4. Whether to use cache and, if so, how much cache capacity to allocate.
  5. What ScaleIO network topology to use and how to set network-related configurations.

 

  • EMC ScaleIO provides a software-only solution that converges storage and computing resources to form a single-layer hyper-converged platform for hosting Microsoft applications.  ScaleIO is also capable of being configured in two-layer or mixed role configurations depending on the needs of the organization.
  • ScaleIO storage is elastic and delivers almost linearly scalable performance. Its scale-out architecture can grow from just a few servers to thousands.
  • This white paper demonstrated ScaleIO design and sizing best practices by using an example of building a production environment with mixed Microsoft messaging and collaboration applications.
  • The validation test shows that ScaleIO supports Exchange and SharePoint, with the ability to scale out both capacity and performance.

 


Thanks for reading,

Phil Hummel @GotDisk


What does Spanning do?

At Spanning Engineering we live and breathe SaaS.  We’re experts in working with SaaS vendor APIs and we use that expertise to develop Spanning Backup - data protection for SaaS applications.  We know how to build systems at scale in a SaaS environment and know what we need from SaaS application APIs to build secure, robust, and reliable solutions.  When we decided that Office 365 was our next app to protect, we formed a relationship with Microsoft to make sure we were in the best position to protect Office 365 data and that we had the API support to do so.


What’s partnering with Microsoft been like?

Surprisingly easy. Our ISV partner champions at Microsoft connected us with the product managers at Microsoft responsible for the Office 365 REST APIs.  Microsoft is serious about developing a robust, manageable set of REST APIs for Office 365, and the product managers were eager to understand our use cases.  The backup and restore use case is not the typical use case for the REST APIs; we need things like app-level access to the entirety of a tenant, change feeds with reliable bookmarks, and unique identifiers for objects that don’t change.  The product managers listened and understood our requirements.  This was a win-win for both of us.  Microsoft was looking for early real world feedback on their new APIs, and we wanted to work with REST APIs that would be maintained going forward.
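The "change feeds with reliable bookmarks" requirement maps onto Microsoft's documented delta-query convention: each page of results carries `@odata.nextLink` while more pages remain, then a final `@odata.deltaLink` that can be saved and replayed later to fetch only subsequent changes. The helper below is a minimal sketch of that convention, not Spanning's actual code.

```python
def next_request_url(page):
    """Given one page of a delta-query response (as a parsed dict),
    return the URL to request next.

    '@odata.nextLink' means more pages remain in this enumeration;
    '@odata.deltaLink' is the durable bookmark: persist it and request
    it on the next backup pass to receive only the changes since now.
    """
    return page.get("@odata.nextLink") or page.get("@odata.deltaLink")
```

A backup pass would loop, fetching each URL returned until the page carrying `@odata.deltaLink` arrives, then store that link as the resume point for the next pass.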


Our relationship with Microsoft informed our approach to OneDrive for Business

In the summer of 2015 we knew that the next app we wanted to back up for O365 was OneDrive for Business.  We reviewed the available APIs - the Files API, the SharePoint API, and the new OneDrive for Business API in preview.


The Files API was a non-starter - it didn’t have a changes feed and never would.  The SharePoint API was extensive and covered much more ground than we needed to support backup for OneDrive.  While we were certainly capable of going in that direction, we were looking for a better way to get to the OneDrive for Business data, one that would be specifically targeted at OneDrive and would evolve along with it.


Microsoft was already in the process of developing the OneDrive REST API to expose OneDrive objects in an easily consumable fashion.  It would align with their roadmap for OneDrive functionality.  We decided the right thing to do would be to hitch our release to the new API. Microsoft shared their roadmap and we shared our use cases, and then we worked together to get the functionality we needed to deliver our product on top of the new OneDrive APIs. 


This strategy worked out to the benefit of both of our teams.  As soon as the OneDrive API went GA, we started exercising the features we needed and provided feedback to the OneDrive Product Management team regarding feature gaps and defects.  The communication and support have been solid.  They’ve listened carefully to our requests, communicated release schedules for bug fixes, been open about what functionality isn’t available - and what functionality will never exist - and provided workarounds when bug fixes wouldn’t be available in time for our first release.


What’s next for Spanning

The OneDrive for Business product is growing and evolving, as is the API support for it.  As new functionality is added we continue to work with the Microsoft team to make sure the API support is there so we can extend our backup and restore service to provide complete protection for OneDrive for Business data.  We also have plans to extend Spanning Backup for Office 365 to protect other Office 365 apps.


In Conclusion

A strong technical partnership is key to success when developing applications for a rapidly evolving SaaS offering such as Office 365.  Microsoft understands that supporting application developers is critical and is adept at building out developer ecosystems that benefit us, Microsoft, and our mutual customers.



About the Author



Andrea Adams is the Director of Engineering at Spanning responsible for the Spanning Backup for Office 365 service.


Introduction

The way people purchase and use enterprise IT equipment and services is changing at rates never experienced before.  Many organizations are looking for new investment options that will help them reduce the staff time and budget required to manage their data center assets.  The motivation for these changes is a desire to shift investment resources from IT operations and maintenance towards funding new and improved business opportunities that increase income and profit.

 

One success story that has emerged during this transformation is server virtualization.  Server virtualization software allows multiple operating systems and applications to run on the same physical server by sharing the available resources.  It has transformed the IT landscape and fundamentally changed the way people design and utilize server resources.  The widespread adoption of this technology has resulted in dramatic simplification of IT operations and significant cost savings.

 

The emergence of Converged Infrastructure (CI) systems in the marketplace has created a new wave of change that is being evaluated by data center managers.  CI systems seamlessly integrate best-in-class compute, network, and storage technologies in one appliance.  These systems are primarily used to implement a complete Software Defined Data Center (SDDC) capability that is strongly tied to server virtualization.  CI systems provide dynamic pools of resources that can be intelligently provisioned and managed through software to address changing demands and rapidly shifting business priorities.


Benefits of CI systems can be achieved through:

  • lower capital expenses resulting from higher utilization, less cabling, and fewer network connections
  • lower operating costs resulting from
    • reduced labor via automated data center management and
    • consolidation of storage and network management infrastructure teams

 

Internet-scale data centers like those operated by Amazon, Google, and Microsoft have to achieve high levels of standardization, automation, and monitoring.  CI systems take the best lessons learned from that approach and bring them to the enterprise data center.  As CI systems mature, they will likely become one of the main tools used to achieve cost savings and improved operational efficiency as organizations continue to invest in data center modernization.

 

Audience

This article was written for database administrators and developers who do not have much prior exposure to hardware trends and software-defined data center concepts.  The hope is that, by becoming more familiar with the concepts in this paper, infrastructure and application teams can run more meaningful planning and implementation projects when considering the acquisition of CI systems.

 

SQL Server and Virtualization

Prior to the widespread availability and acceptance of server virtualization products in the enterprise market, research showed that the vast majority of hardware servers hosting SQL Server had two processor sockets and less than 8 GB of RAM (1).  As enterprises improved their ability to detect SQL Server instances on the network and use remote Windows performance monitoring, it became clear that there were too many underutilized resources devoted to SQL Server.  The strategy of isolating one application on its own server had resulted in significant wasted hardware investment and licensing costs.

SQL Server professionals together with infrastructure experts embarked on a program of “server consolidation”.  Investments were made in tools and documented procedures to determine the best candidate databases and instances for consolidation.  Protocols for testing were developed for risk mitigation resulting in a large scale improvement in the cost per transaction served by many enterprise users of SQL Server.

 

As the industry was making significant progress toward server consolidation, Microsoft and VMware were introducing enterprise-class server virtualization, and it was clear that the same tools and techniques would also make conversion from physical to virtual servers possible with acceptable risk.  It is difficult to say how far the conversion from physical to virtual servers has progressed, but the SQL Virtualization session at the PASS Summit 2015 attracted in excess of 500 attendees out of the 5,000-6,000 total attendance.  Two surprising results emerged from an informal show-of-hands poll taken at that session: 1) the vast majority of attendees virtualizing SQL Server are using VMware, and 2) the majority of attendees still manage physical SQL Server instances.  This data is consistent with smaller polls taken during SQL DBA Day events held by EMC throughout 2015.

 

There was wide speculation that database servers for Oracle and SQL Server would be slow to move to virtual server technology.  It is hard to make the case that this has been true.  While some workloads, such as Active Directory services and file and print servers, moved rapidly and almost completely to virtualized infrastructures, the number of virtualized SQL Servers is also impressive.  In 2008 Microsoft committed to using Windows Hyper-V virtualization to reduce the environmental impact of the more than 5,000 MS IT SQL Server instances, most of which were running on dedicated servers.(2)  With an end-of-life server turnover rate of 20% per year, the expectation was that the energy savings could be realized fairly quickly.  VMware states in their most recent white paper on virtualizing SQL Server that “organizations are now virtualizing their most critical applications and embracing a “virtualization first” policy...and Microsoft SQL Server is the most virtualized critical application.”(3)  The experience and success with consolidation of physical servers gave the industry the confidence to move aggressively from physical to virtual infrastructure, even for critical database servers.

 

The reports of large numbers of organizations still maintaining physical SQL Server infrastructure results from at least three factors:

  1. the need for very high performance for very large databases (VLDBs) that benefit significantly from dedicated physical servers
  2. complications and uncertainty associated with Windows Failover Clustering and SQL Server Failover Clustered Instances (FCI)
  3. the ability to use multiple instances and server consolidation to achieve high utilization rates using physical servers.

 

With this background, the next several sections will deal with information on the use of new CI systems for hosting virtualized SQL Server instances.

 

Converged Infrastructure (CI) Systems

Every data center that I’ve ever been in has been an amazing collection of hardware components and connections; it is difficult to imagine how it all functions, let alone functions well.  Data centers grow over time, and each new acquisition tends to be influenced by past vendor choices, budget, and the familiarity of the current data center staff and company leadership.  In other words, few organizations have been able to standardize with anywhere near the success that has been the model for public cloud and contract hosting providers.  For businesses with significant investments in IT, the acquisition and management strategies of the past are not going to be adequate.

Technology research firm Gartner has started tracking a Magic Quadrant for Integrated Systems that analyzes the rapidly evolving converged and hyper-converged infrastructure (HCI) industry space.  CI systems are built by assembling servers, storage devices, networking equipment, and software for virtualization and management into a single product SKU.  The CI vendor typically matches the various components in balanced combinations so that they all reach full utilization at approximately the same scale factor.  This paper focuses on EMC’s Vblock CI systems.

 

The main difference between CI and HCI systems is the relationship of the application server to the storage subsystem.  In CI systems there is physical separation between the server CPUs that are dedicated to running VMs and the processors (CPUs) that the storage device uses to read and write data.  The hypervisor hosts “talk” to the storage via an IP or FC network.  This is the typical architecture seen in many data centers today not using CI systems but rather have a collection of servers and storage devices (block and file) connected via networking.

 

In HCI systems, a single collection of servers both runs application workloads and manages the storage resources of the system.  Each server has access to one or more local disks, and those disks are shared by the entire cluster of servers.  Each server reads and writes its local disks on behalf of its own workloads and those of other servers, and any server can request data from, or answer requests from, any other server.  The CPUs of the system both run applications and fetch and store data.
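The reason any server can answer any request is that data placement is deterministic: every node can compute where a given piece of data lives without consulting a central directory. The sketch below illustrates the idea with simple hashing; real HCI products use far more sophisticated placement logic (rebalancing, fault domains, spares), so treat this purely as a conceptual model.

```python
import hashlib

def chunk_owners(chunk_id, servers, copies=2):
    """Deterministically pick which servers hold a data chunk and its
    replica(s).

    Because the mapping is a pure function of the chunk ID and the
    server list, every node computes the same answer, so any server
    can serve a request or forward it to the right peer.
    """
    digest = int(hashlib.sha256(chunk_id.encode()).hexdigest(), 16)
    first = digest % len(servers)
    return [servers[(first + i) % len(servers)] for i in range(copies)]
```

With four nodes and two copies, every chunk lands on exactly two distinct servers, and repeating the lookup always returns the same pair.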

 

Whether you are interested in CI or HCI systems, this approach to equipment purchasing greatly reduces the labor cost and time normally associated with adding capacity to a data center.  The infrastructure vendor does all the component integration and testing prior to shipping.  There is also a single point of support for all hardware and software that is included.

 

Vblock Systems Overview

In use by over 1,200 businesses and enterprise organizations around the globe, the EMC Vblock Systems portfolio offers choice, flexibility, and reliability for transforming IT.

 

Vblock systems provide:

  • An engineered, manufactured, managed, and supported converged infrastructure that is ready to be deployed in your data center.
  • A complete integrated solution for virtualization, storage, computing, and networking.
  • Enterprise-class capabilities that include management, performance, security, multitenancy, high availability, and backup.
  • Easily scaled out or scaled up services to meet all your business growth needs and protect your IT investment.
  • Hardened systems according to best practices for each component and enterprise-grade business objectives to ensure the highest level of security.
  • One support number for everything.

 

Vblock systems come in a variety of configurations, including:

Vblock 240: Mid-sized organizations get a highly efficient virtualized infrastructure to run their entire business, with plenty of room for expansion, powered by:

  • Cisco C220 M3 UCS Rack Mount Servers
  • Cisco Nexus
  • EMC VNX5200

Vblock 340: Enables the substantial scale needed for large virtualization and cloud implementations. This model is built to support mission-critical enterprise applications, powered by:

  • Cisco 5108 Unified Computing System
  • Cisco Nexus, Cisco MDS
  • EMC VNX5400, 5600, 5800, 7600, 8000

Vblock 540: The first all-flash converged infrastructure. Ideal for applications that demand the highest throughput at the lowest latency, such as online transaction processing (OLTP) and online analytical processing (OLAP), powered by:

  • Cisco 5108 Unified Computing System
  • Cisco Nexus, Cisco MDS, VMware Distributed Switch
  • EMC XtremIO

Vblock 740: The flagship converged infrastructure for enterprise-scale mission-critical applications and mixed workloads. Reliably runs thousands of virtual machines and desktops supporting mission-critical applications on SAP, Oracle, Microsoft Exchange, Microsoft SharePoint, VDI, and more, powered by:

  • Cisco 5108 Unified Computing System
  • Cisco Nexus, Cisco MDS, VMware Distributed Switch
  • EMC VMAX3 and VMAX All Flash storage

 

Vblock and SQL Server

You can think of Vblock as a high-performance vehicle for hosting SQL Server virtual machines.  Download the solution white paper using the hyperlink below to read about how a full implementation of SharePoint, SQL Server, Exchange, and Lync on a single Vblock 340 can support over 8,000 simulated user workloads.

Converged Infrastructure Solution for Microsoft SharePoint, Lync, AND Exchange on VCE Vblock System 340


In this solution the engineers configured two SQL Server virtual machines with one instance each.  AlwaysOn was used to protect the SharePoint databases.  The two SQL Server VMs use VMware anti-affinity rules to ensure that the virtual machines never run on the same ESXi host.  In order to test SharePoint performance, over 100 million 250 KB documents were loaded into SharePoint Server, with a peak load rate of 233 documents per second and an overall average of 137 documents per second.  The test documents had a high degree of uniqueness because they were generated with the Microsoft Developer Network tool Bulk Loader - Create Unique Documents based on Wikipedia Dump File.
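The anti-affinity constraint is simple to state: no two members of the protected group may ever share a host, so losing one ESXi host can never take down both SQL Server replicas. The check below captures just that rule's logic in plain Python; it is not the vSphere DRS API (in the solution, DRS enforces the equivalent constraint automatically), and the VM and host names are hypothetical.

```python
def violates_anti_affinity(placement, group):
    """Return True if any two VMs in an anti-affinity group share a host.

    'placement' maps VM name -> host name.  The rule is satisfied only
    when every placed member of the group is on a distinct host.
    """
    hosts = [placement[vm] for vm in group if vm in placement]
    return len(hosts) != len(set(hosts))
```

A placement such as `{"sql-vm1": "esxi-01", "sql-vm2": "esxi-01"}` violates the rule, while spreading the two VMs across `esxi-01` and `esxi-02` satisfies it.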


Some of the key findings from the solution testing are as follows:

  • This solution represents a well-performing, enterprise-class infrastructure that is cost-effective, scalable, and highly available.
  • The test results show that the designed architecture satisfies all recommended performance guidelines provided by Microsoft for SharePoint Server 2013, Lync Server 2013, and Exchange Server 2013.
  • SharePoint Server 2013 works well on Vblock System 340. The designed architecture can easily support 8,000 users with an AlwaysOn AG protection configuration.
  • The Vblock System 340 successfully accommodates Lync Server 2013. The designed architecture can support 8,000 users with two SQL mirroring back end servers and a mirroring witness.
  • Exchange Server 2013 is successfully deployed on Vblock System 340. The designed architecture can support 8,000 typical users in a failover situation of the two-copy Exchange 2013 DAG configuration, where four Exchange Mailbox servers, a single database pool, and a single log pool handled the entire workload.

Another set of resources that you will find interesting is the Federation Enterprise Hybrid Cloud for Microsoft Applications documentation.  The Federation Enterprise Hybrid Cloud (FEHC) solution is a complete virtualized data center offering from EMC and its federation partners, including Pivotal, RSA, VCE, Virtustream, and VMware.  This is a complete Software Defined Data Center in a box, straight from the factory.

 

FEHC consists of a standard Vblock with additional pre-built and configured automation and management functionality added on.  The Microsoft Application Services solution for FEHC uses VMware® vRealize™ Application Services and VMware vRealize Hyperic™ to enable automated deployment, management, and protection of all the major Microsoft server applications.  Customers can have full database-as-a-service (DBaaS) and backup-as-a-service (BaaS) capabilities fully developed, tested and ready to use as soon as the equipment is powered up and connected to the data center network.  There is a blog post with links to all the FEHC documentation on the Everything Microsoft Community Network site here Introduction to the Federation Enterprise Hybrid Cloud Solution.

 

Conclusions

CI and HCI systems are becoming a preferred choice for the data center based on the economics and efficiency of operations.  CI systems are easy to purchase and manage, and they support running large numbers of virtual machines with a high degree of reliability and performance.  VCE offers a wide range of models capable of running at nearly whatever scale your organization needs.

 

EMC has developed both solutions and add-on tools for Vblock that show what is possible using CI for SQL Server.  The two solutions referenced above are full of suggestions and ideas on how you can implement more robust and complete SQL Server solutions in less time using virtualization with Vblock CI products.  Please feel free to post any questions or comments on Vblock for SQL Server on the Everything Microsoft Community Network or the Connect Site on the Converged Platforms Community.

 

References

  1. Microsoft SQL Server and VMware Virtual Infrastructure
  2. Green IT in Practice: SQL Server Consolidation in Microsoft IT
  3. Architecting Microsoft SQL Server on VMware vSphere, Best Practices Guide (March 2016)

 

Thanks for reading,

Phil Hummel @GotDisk




Businesses are rapidly attempting to drive complexity out of IT equipment acquisition and management, and private cloud computing architectures are an increasingly popular approach to achieving these goals.  By running business-critical applications and data platforms such as Microsoft SQL Server, Microsoft SharePoint, and Microsoft Exchange in a validated Microsoft private cloud environment, organizations ensure reliability and maximum utilization of their virtual infrastructure while assuring application service levels at lower cost.

 

The benefits of private clouds have been well documented.  Virtualization and high availability using failover clustering maximize uptime, workload placement, and recovery from local or regional disasters.  However, most private cloud implementations do not address protection of applications such as SQL Server and SharePoint.  The EMC Backup for Microsoft Cloud reference architecture provides a design and architecture that uses the Microsoft System Center management tools for automation to deliver backup-as-a-service (BaaS) for key Microsoft enterprise software on a Windows Hyper-V private cloud.

 

cloud integration framework.png

Technologies used in reference architecture include:

  • Hyper-V for Windows Server 2012 R2
  • Windows Server Failover Clustering (WSFC)
  • Microsoft System Center 2012 R2
  • Windows Azure Pack
  • EMC NetWorker 9
  • EMC NetWorker Module for Microsoft
  • EMC Data Domain® storage

 

The backup services implemented in this RA enable private cloud tenants to select a backup operation for their application servers during the virtual machine deployment process in the self-service portal.  The application and backup agent are installed in the virtual machine template.  When the virtual machine is deployed, the cloud administrator can enable the backup from the self-service portal.  The figure below provides an overview of this service.

 

ms cloud baas.png

All application backups are associated with a backup policy, which is a set of backup parameters that are required to back up the application.  Policies enable the administrator to configure backup options for a specific group of application servers.
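A policy in this sense is just a named bundle of backup parameters applied to a group of application servers. The sketch below models that grouping in Python; the field names and values are hypothetical illustrations, not the actual NetWorker policy schema.

```python
from dataclasses import dataclass, field

@dataclass
class BackupPolicy:
    """A named set of backup parameters shared by a group of servers.

    Field names are illustrative, not the real NetWorker schema.
    """
    name: str
    schedule: str          # e.g. "daily 02:00"
    retention_days: int
    members: list = field(default_factory=list)  # covered app servers

def policy_for(server, policies):
    """Return the first policy whose member list covers the server."""
    for policy in policies:
        if server in policy.members:
            return policy
    return None
```

Grouping servers under a policy this way means the administrator changes the schedule or retention once and every member server picks up the new settings.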

 

This RA also describes an application restore service. The restore service does not use the self-service portal to perform a restore operation. Instead, it is up to the application owner to choose how restore scenarios are executed based on business requirements and application knowledge.

 

This RA enables customers to provide enterprise-class application BaaS for Microsoft cloud infrastructures by integrating with EMC Data Protection Suite products that will provide these key benefits:

  1. Self-service data protection:
    • Tenants can use the self-service portal to provision the Microsoft application servers with backup software installed by deploying selected virtual machine templates.
    • Administrators can use the self-service portal to enable the backup policy for newly deployed application servers.
  2. Automated backup operations:
    • Integration with PowerShell runbooks and the NetWorker command line provides automation for backup operations.

 

You can get a copy of this RA using the link below:

Reference Architecture: EMC Backup For Microsoft Cloud

 

Thanks for reading,

Phil Hummel @GotDisk


Introduction to the Federation Enterprise Hybrid Cloud Solution

The Federation Enterprise Hybrid Cloud (FEHC) solution is a complete virtualized data center offering from EMC and its federation partners, including Pivotal, RSA, VCE, Virtustream, and VMware.  This is a real Software Defined Data Center offering without hype.  Hats off to the product and services teams at EMC and VMware that developed the solution.  EMC scalable storage arrays, integrated EMC and VMware monitoring, and data protection suites are some of the key technologies that power FEHC.

 

Customers implementing the Federation Enterprise Hybrid Cloud will get:

  • a validated and tested virtualized data center that can be designed and implemented in as little as 28 days
  • a repeatable implementation process
  • EMC support for FEHC solutions implemented through EMC
  • upgrade guidance based on the testing and validation completed by the Federation engineering teams, and
  • the benefit of extensive testing and validation that has been conducted by the solutions engineering teams


 

Every implementation of FEHC starts with a foundation of hardware and software configured for Infrastructure as a Service (IaaS).  Some customers need only this configuration, but most will layer on higher-level application services with add-on modules such as those for Encryption Services, Data Protection Services, or Microsoft Application Services.  I'm going to talk mostly about the details of the Microsoft Application Services and SQL Server Database-as-a-Service (DBaaS) implementations in the remainder of this article.  If you want to read more about the foundation concepts and architectural options available within the Federation Enterprise Hybrid Cloud solution, then you should download the Concepts and Architecture Solution Guide.

 

What is driving the need for DBaaS?

Businesses that are thriving excel at software development for both internal and external facing applications.  At the heart of that success is an innovation engine that needs the resources to quickly build, test, and evolve software.  Microsoft SQL Server has become the most successful operational database management system on the market, according to the 2015 Gartner Magic Quadrant results.  It is easy to see why so many application developers rely on SQL Server for their success.


 

Many IT departments have not been able to provide self-service offerings for software development innovation that rival what is available from contract infrastructure and platform-as-a-service providers like Amazon and Microsoft.  Therefore, application developers are going outside their corporate IT offerings to get access to self-service DBaaS.  The costs of these outsourced services are difficult to predict.  For example, significant networking infrastructure investments are needed to link corporate and contract data centers.  Billing rates for data movement can be complex to interpret and may result in higher-than-expected monthly charges; many organizations have been surprised by bills that exceed estimates based solely on the known per-minute server fees.  IT organizations need a credible on-premises alternative to reduce sole dependence on external contract services.  FEHC is a complete hybrid cloud solution that can be evaluated as an alternative to outsourced services today.

 

SQL Server Database-as-a-Service the Easy Way

Now, let's look at what you can do with the FEHC solution if you manage lots of SQL Server instances and databases.  Immediately upon implementation of FEHC you have access to a complete Database-as-a-Service (DBaaS) offering in your data center.  All you need to do is enable the services you want to be available from the self-service portal and configure user access control to specify which services each user can access through the portal.  All the workflows that you need to create virtual machines, SQL Server instances, and databases, and to configure AlwaysOn protection, have already been developed.


In addition to being capable of supplying self-service DBaaS for software development scenarios, FEHC can also be implemented as a database consolidation environment for existing and planned production applications. The next section will describe the possibilities in more detail.


Microsoft Application Consolidation Capabilities

Standardization and automation are key to providing cost-effective IT services to the business.  The designers of FEHC have implemented best practices for virtualization, networking, and storage configuration that scale and simplify management of all the major Microsoft enterprise applications in one easy-to-purchase and easy-to-implement solution.  The FEHC for Microsoft Applications includes implementation of provisioning and management for Microsoft Exchange and SharePoint in addition to SQL Server.


The Microsoft Application Services solution uses VMware® vRealize™ Application Services and VMware vRealize Hyperic™ to enable automated deployment, management, and protection of all the major Microsoft server applications. Users interact with a fully implemented self-service portal that permits rapid, on-demand provisioning of any or all of the application services listed above.  This means that you can enable all the application services for Microsoft servers or just the ones you need.  Download and read the full description of the Microsoft Applications Solutions Guide on FEHC for more information.


 

There are also workflows for implementing self-service Backup-as-a-Service. To give you a sense of the maturity and completeness of the FEHC for Microsoft Applications you can read in detail how the Backup as a Service features for SQL Server as well as SharePoint and Exchange have been implemented in the Federation Enterprise Hybrid Cloud 3.1: Microsoft Applications Protection and Availability Solution Guide.


The Future Is Hybrid Cloud

Few organizations have been able to allocate the development time to build a complete solution that compares to the FEHC feature set.  EMC and the federation partners have made the investment because it can be leveraged by thousands of customers.  The resulting solution can be ready to use in as little as 28 days, completely virtualized and automated by software.  This can be a game changer.  The self-service portal controls access to all services you choose to layer on the foundation of IaaS, such as encryption or data protection.  You can implement multiple services out of the gate or build up capability over time.


The Federation Enterprise Hybrid Cloud solution can revolutionize the way you purchase and implement hardware and software.  What could you do with the time and labor you will save by purchasing virtual data center solutions that are fully automated and ready to add business value in as little as 28 days?

 

Thanks for reading, leave us any questions or comments you have below.

Phil Hummel @GotDisk
