
What's blue, has six wheels, and delivers highly available virtual Microsoft workloads? That would be the Avnet Mobile Data Center Solution for EMC VSPEX, voted one of the Top 10 Hot Products from EMC World by CRN!

What makes the new mobile data center solution a top 10 product? It is pre-configured as a fully functional compute, network, and storage solution, allowing customers to have a virtual private cloud up and running in no time. It includes high availability and disaster recovery, so applications remain available at all times. It can be used as a long-term private cloud solution, as a temporary data center in the event of a disaster, or to assist in a data center move.


To read more, continue to the Virtual Winfrastructure blog site.

Be sure to check out the Data Recovery Blog by Kroll Ontrack for a full recap of EMC World 2013 and other topics related to data recovery and protecting key electronic assets!


http://www.thedatarecoveryblog.com/?source=EMCECN

I will start by reassuring my readers: I will do my best to avoid getting the Disney song of a similar title stuck in your head! On the other hand, I know that for some of you that particular attraction brings back great childhood memories. So, if you want, go ahead and hum it to yourself right now, wherever you are. Go ahead… Now, did that make you feel better? Is your neighbor in the next cube looking at you over the cube wall like you're insane yet?!? For those of you who didn't hum, is the song stuck in your head anyway? Thought so… Sorry, I couldn't resist!

All of these Disney World/Orlando references ahead of the week of June 11th through the 15th can only mean one thing in an enterprise technology blog post: Microsoft TechEd 2012!


This year, EMC's Backup and Recovery portfolio has greatly enhanced support for all things Microsoft. In particular, the proven industry leader in backup and recovery for virtualized environments is now available for Hyper-V. This gives Hyper-V administrators the ability to perform incredibly fast, application-consistent image backups of critical Microsoft applications, such as Microsoft SQL Server 2012, which are quickly migrating to virtualized Hyper-V platforms as the journey to the cloud accelerates.


For the all-important task of data recovery, Hyper-V backup admins can now leverage industry-proven flexibility and reliability. From federated Cluster Shared Volume (CSV) Hyper-V image backup to flexible out-of-place VM image recovery, EMC engineers have worked closely with Microsoft to ensure incredibly tight integration for the best possible efficiency and ease of use.


EMC Avamar has also added multi-streaming, which, combined with its integration with EMC Data Domain systems, makes Microsoft SQL Server database backups up to 4x faster in this recent release. As the mountain of data we service in these databases continues to grow, these performance improvements are a welcome relief from the "tea cup ride" that managing backup windows can be.


I will be in Orlando next week. No, not riding the teacups, but at the convention center with all of you, getting into some incredible conversations about all things backup and recovery for Microsoft! I will be in the EMC booth (#205), so be sure to stop by. I hope to meet many of you and would be more than happy to talk with you and show you demos of the newest features greatly enhancing backup and recovery for Microsoft platforms and applications. If you stop by, look for me, and identify yourself as a reader of The Backup Window blog to any of the EMC booth staff.


Finally, if you are attending Microsoft TechEd 2012, and EMC Avamar has truly changed the way you do backup in your enterprise, please sign in with your MyTechEd login and vote for Avamar in the Backup & Recovery category for the "Best of TechEd" Award.

See you all in Sunny Florida!


http://thebackupwindow.emc.com/alex_almeida/its-a-microsoft-teched-world-after-all/

Last week, EMC and Microsoft held a three-day Technical Summit in which approximately 120 of EMC's Microsoft Specialists from around the globe gathered in Redmond, WA to hear from Microsoft about some of its latest technologies. Topics covered during the Summit included:


  • System Center 2012 Fabric & Storage Management
  • System Center 2012 Application & IT Process Management
  • SQL Server 2012 BI, Reporting and Analysis Services
  • SQL Server 2012 Always On Availability Groups
  • Windows 8 Server and Hyper-V 3.0 Overview
  • SharePoint and FAST Scalability Testing
  • Hyper-V Disaster Recovery Best Practices
  • SQL Server and Private Cloud Fast Track Programs


What was great about the event was that most of the material was delivered by Microsoft speakers, who educated EMC specialists not only on the latest and greatest applications and tools but also on future technologies such as Hyper-V 3.0, which looks to be awesome (dare I say face-melting, VirtualGeek?!). When we did present EMC topics, they centered on the Microsoft technologies that had just been discussed, including EMC's management integration, best practices for applications and disaster recovery, and EMC's Fast Track involvement.


For more about the event, including some pictures, continue reading on the Virtual Winfrastructure blog. Additionally, all content is now posted on the Community site at the Microsoft Specialist Content Page.

Reprinted with permission from Paul Galjan, http://flippingbits.typepad.com/blog/.


Database size is a critical aspect to consider in Exchange 2010 designs where a DAG (Database Availability Group) is in play, regardless of the disk backend you choose. JBOD, DAS RAID, and SAN designs all need planning where small passive databases are in play. Since Microsoft offers no way to configure BDM (background database maintenance) on passive databases, there is nothing you can do but be aware of the workload and plan for it (aside from going with a standalone configuration).

Many Exchange administrators are used to having small databases (100-200GB) so that ESE maintenance tasks and restore SLAs are easier to address. I find that this can trip up otherwise solid storage architectures. BDM can lead to problems, primarily because admins aren't necessarily aware that the BDM schedules they set via the Exchange Management Console or PowerShell apply only to the active copy of the database.

Microsoft variously calls background database maintenance "online database scanning," "database checksumming," or simply "background database maintenance." It can be associated with page zeroing. It's a googlicious challenge – and a tagging nightmare for bloggers.

Let's see what TechNet says about BDM (aka "checksumming"):

Background database maintenance I/O is sequential database file I/O associated with checksumming both active and passive database copies. Background database maintenance has the following characteristics:

  • On active databases, it can be configured to run either 24 × 7 or during the online maintenance window. Background database maintenance (Checksum) runs against passive database copies 24 × 7. For more information, see "Online Database Scanning" in the New Exchange Core Store Functionality topic.
  • Reads approximately 5 MB per second for each actively scanning database (both active and passive copies). The I/O is 100 percent sequential, so the storage subsystem can process the I/Os efficiently.
  • Stops scanning the database if the checksum pass completes in less than 24 hours.
  • Issues a warning event if the scan doesn't complete within three days (not configurable).

Let's reiterate the interesting bit:

Background database maintenance (Checksum) runs against passive database copies 24 × 7.

Now let's look at what another TechNet page says about this:

Exchange scans the database no more than once per day. This read I/O is 100 percent sequential (which makes it easy on the disk) and equates to a scanning rate of about 5 megabytes (MB)/sec on most systems.

The important thing to remember is that when you set a maintenance schedule for a database as described on this TechNet page, the setting applies only to the active copy of the database.

Now, how do you plan for this workload? Frequently, you don't have to worry about it at all. Let's say you have 5,000 users with 2GB mailboxes on a server. That's 10 TB of mailbox data. If you use 2 TB mailbox databases, you have about 5 databases, or about 25 MB/s of BDM workload. Not a problem.

However, if you limit your databases to 100GB, that can present a problem. Those 5,000 users translate to 100 databases, each of which can launch a 5MB/s read workload (or more) at any given time. Aggregated across a single mailbox server, 500MB/s is nothing to sneeze at, especially if it's mixed with user workload on the same disks.
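
To make that math concrete, here is a quick back-of-the-envelope sketch (a hypothetical Python helper, not an official sizing tool; the roughly 5 MB/s per-database rate comes from the TechNet guidance quoted above):

# Worst-case aggregate BDM read bandwidth on a single mailbox server,
# assuming every database copy on it is scanning at once at ~5 MB/s
# (the TechNet figure quoted above; real rates vary by system).
def bdm_bandwidth(users, mailbox_gb, db_size_gb, rate_mb_s=5):
    total_gb = users * mailbox_gb
    databases = -(-total_gb // db_size_gb)  # ceiling division
    return databases, databases * rate_mb_s

# The two scenarios from the text: 5,000 users with 2GB mailboxes.
for db_gb in (2048, 100):  # 2TB databases vs. 100GB databases
    dbs, mb_s = bdm_bandwidth(5000, 2, db_gb)
    print(f"{db_gb}GB databases: {dbs} DBs -> up to {mb_s} MB/s of BDM reads")

Running this reproduces the numbers above: about 5 databases and 25 MB/s with 2TB databases, versus 100 databases and 500 MB/s with 100GB databases.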

What you can do to limit the impact of BDM:

  1. Use larger databases (1-2TB in size).  Although it will increase the amount of time required to maintain the databases, remember that with a DAG configuration, it’s often more expedient to reseed than it is to do ESE maintenance. 
  2. If you are concerned about restore times, use an array that can act as a VSS provider and store your first-tier backups in snapshots, clones, or CDP bookmarks.  In these scenarios, restore times are largely uniform whether you have a 1TB database or a 100GB database (log replay notwithstanding).
  3. If, after considering hardware-assisted VSS, you STILL can't leverage larger databases, confer with your storage vendor to architect for what will turn out to be large-block reads at unpredictable rates and unpredictable intervals. To the best of your ability, isolate passive databases from active ones so that BDM doesn't impact user mailboxes.
  4. Ask Microsoft why passive database maintenance is not configurable. As it is today, a completely passive Exchange server can generate over 500MB/s in read bandwidth. Let's be crystal clear about this: 500+ MB/s is data warehouse territory, and it's absurd that this can occur on a server with no active users and no backups running against it. It makes little sense, especially in configurations where the silent corruption BDM is looking for is detected by other techniques.

Now, I have a couple of questions I haven't been able to get answers to:

  1. What happens to the schedule when a checksum pass completes in less than 24 hours? Does it start again in 24 hours? Is that measured from the start of the prior run, or from its end? An authoritative answer would be helpful for those who have to plan for this workload.
  2. What factors can make the scanning run faster or slower than on "most" systems? Does the system issue a 256KB read every XX ms? Does that depend on the processor clock rate? Can we thus expect the read rate to increase with clock rate? The difference between 5 and 8 MB/s doesn't really matter on a single database, but it matters a great deal when you're talking about hundreds or thousands of databases on a single disk array (the sketch below shows why).
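
To see why these questions matter for planning, here is another hypothetical sketch in the same vein, converting an assumed scan rate into the duration of a single checksum pass and comparing it against the 24-hour and 3-day thresholds quoted from TechNet above:

# How long does one BDM checksum pass take at a given scan rate?
# The 24-hour and 3-day thresholds come from the TechNet excerpt above;
# the scan rates are assumptions, which is exactly what question 2 is about.
def pass_hours(db_size_gb, rate_mb_s):
    return db_size_gb * 1024 / rate_mb_s / 3600

for db_gb, rate in ((100, 5), (2048, 5), (2048, 8)):
    hours = pass_hours(db_gb, rate)
    note = ("completes within a day" if hours < 24
            else "exceeds the 3-day warning threshold" if hours > 72
            else "runs continuously")
    print(f"{db_gb}GB at {rate} MB/s: ~{hours:.0f} hours per pass ({note})")

At the quoted 5 MB/s, a 100GB database finishes a pass in under 6 hours, while a 2TB database would take nearly 5 days, so the actual rate on your hardware makes a real difference to the planning exercise.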

Between last week's BUILD conference and this week's SNIA Storage Developer Conference, Microsoft made some significant announcements about the future versions of Microsoft Windows and Hyper-V (Windows Server 8 and Hyper-V 3.0, also referred to as Windows Next internally at Microsoft).


Looking at Windows, Microsoft has modified the UI again (I can already hear people saying "just when I figured out where everything is…"), this time making it more like the Windows Phone 7 OS, which means it will be touch-enabled, use the tile layout currently available on the phone OS, and operate more like a tablet. Microsoft has also added significantly more PowerShell commands, more integration opportunities, a Microsoft App Store, and an increased focus on cloud technologies through networking and scalability. As one person at the BUILD conference told me, "there was non-stop cheering from the developer community".


The most significant change in Hyper-V 3.0, in my opinion (and there will be A LOT of changes), is the ability to run Hyper-V over a file-based network protocol: CIFS (technically SMB 2.2). This isn't new to the virtualization world, as VMware has supported the file-based NFS protocol for years, as has XenServer, which supports both CIFS and NFS. Why the wait for Microsoft? It likely had to do with performance and scalability concerns, which Microsoft has now addressed. This means customers will be able to deploy Hyper-V VMs using NAS in addition to SAN protocols like Fibre Channel, iSCSI, and Fibre Channel over Ethernet (FCoE). This should reduce the complexity of storage management with Hyper-V for some customers and will certainly reduce the cost associated with deploying Hyper-V VMs.


Other impressive changes planned for Hyper-V 3.0? Windows Server 8 will support up to 160 logical processors (processor cores or threads) as well as up to 2TB of memory. With that increase in host capacity, Hyper-V will also scale up, supporting up to 32 virtual CPUs and 512GB of memory per VM (a significant increase over the current limits of 4 virtual CPUs and 8GB of memory). Virtual hard drives will be supported up to 16TB, and Hyper-V clusters will be able to support up to 63 nodes and 4,000 VMs per cluster! Wow.


Other major changes include a new virtual switch for networking and new disaster recovery capabilities that expand how an administrator can protect Hyper-V virtual machines. Both features include APIs that allow partners to plug in and add functionality, both to monitor and manage network traffic between VMs and to simplify disaster recovery by easily replicating VMs between servers and storage (more on these later).


Add all of this to the upcoming changes in System Center 2012, and Microsoft has greatly improved its private cloud, virtualization, and management strategy.


So what's that sound behind you, VMware? Don't look now, but it's Microsoft quickly catching up to you!
