
These two events happened in relatively quick succession.  I started getting questions about positioning because at first glance, it looks like they might be contradictory.  In fact, they solve two very different customer challenges.

 

TwinStrata is all about leveraging the cloud for storage, while keeping your compute local.

ExpressRoute is all about leveraging the cloud for compute, while keeping control of your storage.

 

So, if your customer is looking at cloud to provide a fourth (or fifth) storage tier, TwinStrata fits the bill.

 

If your customer is looking at cloud to augment (or replace) their compute, but doesn't want to (or can't) leverage cloud for their storage, then ExpressRoute is part of the answer (at least for Azure).  I believe this has a wider variety of use cases, as there are more numerous obstacles to cloud storage than there are to cloud compute.  These range from data availability SLAs and data governance issues to regulatory pressure and so forth.

 

We should also remember that there are uses for low-latency, high-bandwidth links into public cloud provider data centers.  And it's worth noting that Amazon and VCHS have similar offerings - both uninspiringly named "Direct Connect".  I guess Microsoft had a little extra left in the tank to name their offering.

EMC World 2014 was, as usual, a whirlwind.  I saw a good number of sessions, but not nearly all that I wanted to see.   That's pretty much how it's been the last few years I attended.  But this year is different.  Because the EMCW sessions were recorded, I can go back and catch the ones I missed, and watch those that had way too much content to ingest the first time around.  Here's what I plan on taking a look at again (or in some cases, the first time):

 

 

Now, that's about 18 hours of content, but I know I missed something.  What did you see - live or virtual - that I missed?  What matters to you that I didn't include?

Paul Galjan

The Preferred Architecture

Posted by Paul Galjan Apr 22, 2014


(cross-posted from flippingbits)

 

Ross Smith from Microsoft recently published a blog entry outlining Microsoft’s preferred architecture for Exchange (I’ll call it the MS PA).  It’s certainly worth a read, as it covers the design criteria for Exchange that they’ve been working toward since well before the Exchange 2007 release.

I wouldn’t be the first to congratulate Ross and the rest of the Exchange team on their achievements, but I’ll do it anyway – their journey has been impressive, and they’ve kept true to the vision they set out nearly a decade ago.  They have been absolutely relentless in driving out infrastructure cost from Exchange, and alongside the SharePoint and Office teams, have delivered a competitive and feature rich (if relatively closed) SaaS offering.  Consider my cap doffed.

As you might expect, though, I respectfully disagree with some of Ross’ conclusions in the blog post.  I don’t think Ross is being disingenuous in his assertions.  Rather, I believe our disagreements are the result of our different perspectives.

A messaging architect’s mission is to facilitate the architecture supporting email services with a reasonable SLA and a low cost.

An enterprise architect’s mission is to facilitate the architecture supporting all business applications in the enterprise with an SLA that meets the applications’ respective business criticality and a low cost.

For the messaging architect, the MS PA may make some sense (although a lot of folks would take issue with claims around the value of virtualization and the need for backups).  But at least at a philosophical level, eliminating anything not in the core application increases simplicity, reduces cost and increases availability.

For the enterprise architect, providing a common set of technologies that can be leveraged by as many applications as possible reduces duplication of effort, streamlines operations, reduces complexity, and increases availability. These technologies and techniques include management/automation, backup and recovery, and business continuity, just to name a few.

For the enterprise architect, the following points generally hold true:

  1. Standardization and normalization of operations drives higher availability and agility, while driving out operational expense.  An enterprise architect will adhere to her infrastructure strategy wherever possible because tactical deviation will increase unplanned work, while squandering existing and planned investments.  And while Exchange is an interesting workload, it certainly plays nicely with nearly all common infrastructure strategies.
  2. Virtualization is the norm.  According to IDC, four times as many virtual Windows Server instances as physical will be deployed in 2014.  As you can imagine, this gap will continue to grow.  Applications requiring physical infrastructure are an aberration.  As a result, customers have modified their operations to default to management and automation of virtual infrastructure, and therefore physical infrastructure is the more complex, expensive option.
  3. Ross and I agree that more management control points create complexity and add to cost.  But even if an enterprise architect can eliminate the infrastructure management layer for Exchange, he’s still going to require those components for other applications.  In a data center where each application has its own infrastructure, M&A, business continuity, and backup strategy, overall management and problem identification/resolution are going to be outrageously complex, expensive, and time consuming.
  4. If an enterprise architect is deploying and removing mailbox servers on a daily basis in the support of millions of mailboxes, it makes sense to automate around the application.  But an organization of 100,000 mailboxes with a site resilient setup might have just 16 mailbox servers.  That’s a fair number, and it’s surely worth automating.  But it would make little sense for an enterprise to build a customized management and automation strategy from the ground up just for Exchange.
  5. The MS PA requires impressive scale in order to be cost efficient.  A “typical” (~$25k) server nowadays will have 24-32 cores and between 768GB and 1TB of RAM.  Let’s say that two of those servers can support 15,000 mailboxes in a fault tolerant configuration.  However, in a physical environment, you’ll have to use more, smaller servers.  While this will cost only slightly more in capital expense, operational expense explodes – rack space, cabling, power/cooling, inventory management, and simple hardware maintenance to name a few.  Virtualization indubitably fixes this.  Of course, you can also expect this gap to increase in the future – in a few years, that pair of $25k servers will support 100,000 mailboxes, and it’s going to be even more difficult to make efficient use of compute infrastructure without a hypervisor.
  6. Bypassing the HA options inherent in virtualized storage and server infrastructure wastes the competencies and efficiencies in which most enterprises have invested.  These competencies cannot be eliminated because of the advantages they provide to other applications in the data center.  This shifts (and increases) operational expense from the operations team, which is optimized for infrastructure deployment and maintenance, to the application maintenance team, which is not.

These are just a handful of perspectives and challenges facing today’s enterprise architect that differ from those of a messaging architect.  Does it really make sense to duplicate Office 365’s architecture in a general-purpose data center environment?

So, what is EMC’s preferred architecture?

It’s easier to ask questions than it is to provide answers, and in that I applaud Ross and team’s clarity in their recommendation.  My preferred architecture is the one in which the customer has the broadest competency.  If you virtualize your infrastructure with VMware and VNX, then Exchange on VMware and VNX makes the most sense for you.  If you’re finding success with vBlock, VSPEX, Hyper-V, and/or VMAX, then those are your preferred architectures.

In short, deploy Exchange in a manner that will be least disruptive to your operations and management strategy.  This will put you in the best position to cost-effectively support the application.

Last night I was perusing the Windows store for productivity apps and stumbled on the Syncplicity app.  It is gorgeous, not to mention useful.

[Screenshots: Syncplicity Windows 8 UI app (left) and Windows desktop client (right)]

 

For those of you who aren't aware, Syncplicity is EMC's enterprise cloud file share and collaboration tool.  There are clients for MacOS, iPhone & iPad, Android, Windows Phone, Windows (pictured right), and Windows 8 UI (pictured left).  There's also, of course, a highly functional web interface.  Like DropBox, it is highly addictive to the end user.

 

It wasn't until I started using the Windows 8 UI client that I realized why we've invested so much in two different applications for the same operating system.  It's because I actually use both on the same device:  On my full-time laptop with a terabyte drive, I sync all the folders actively, and barely ever use the Syncplicity interface, just relying on the "engine" that replicates the data to and from the Syncplicity cloud.  On my travel tablet, where I have substantial but nonetheless limited storage, I've installed both clients.  I sync a handful of folders actively, and then rely on the Windows 8 UI app for the rest of them.

 

This made me realize that the marketing is real.  I truly do have a tablet when I want one, and a laptop when I need one.  Syncplicity has exploited both.  It also makes me realize that Apple has some important decisions to make in the near future.

 

Steve Jobs was very clear that iOS and MacOS solve two different problems, and in the context of the iPhone, he was right.  The legacy of that thinking means that the OSes that originated for phones (iOS and Android) are designed primarily for content consumption.  iOS is particularly unsuited for content creation as it lacks an end-user-accessible filesystem - very handy for security, but it makes it very clunky to do things that you barely even think about in a desktop OS environment.

 

Those lines get blurred in the tablet world.  I've had Android tablets before, and my whole family (aside from myself) has iPads.  I've never traveled with either as they all lack the functionality I need for multi-day travel (robust email, fully functional Excel, Word, and PowerPoint, VPN clients, etc.).  I'd sooner invest in a thinner, lighter laptop than a tablet that would add weight to my bag but give me little added functionality.

 

Recently, I got an Atom-based Windows tablet.  It gives me iPad-like battery life, is binary-compatible with Windows apps, and charges off a phone charger.  It cost less than even a barebones iPad mini.  It's not perfect, but it's a lot more versatile than the Android tab I have, or the iPads that my family use.  Contrast that with iOS.  As robust as its app ecosystem is, it simply doesn't have the functionality of a desktop OS, and I believe that there are architectural limitations that prevent it from ever doing so.  So iOS and Android tablets are nearly always "second computers", whereas a Windows tablet can be a primary computing device (content creation) as well as a secondary device (content consumption).

 

I fully anticipate my next tablet will be my only computing device - in fact, if I were willing to part with about triple what I paid, I could have an all-in-one Windows tablet as long as I were willing to compromise on some of the power in my current laptop.  Apple does not offer an all-in-one device at any price point.  Their users, no matter how loyal, have to choose between a tablet and a general-purpose computer.

 

Apple's immediate problem is that every consumer will need to refresh their general purpose computer at some point, and many will arrive at a similar conclusion:  "If I still need a general purpose computer, why not get one that can be my tablet as well?"  IT shops that haven't gone down the BYOD route yet will start thinking about the versatility of the devices they supply, along with whether they can use their existing toolsets for management.  In either case, it's not a good thing for Apple.  This is completely aside from the odd fact that Apple still doesn't have a touch-enabled MacBook.  I was in a store a few weeks ago and watched a woman interacting with one of Apple's beautiful displays, and she instinctively touched an icon to open it.  That's a huge red flag for usability, in my opinion.

 

So Apple will have to re-think Jobs' stance that iOS and MacOS are two different products for two different problems.  Possible solutions include:

 

  1. Make the architectural changes to iOS necessary to make it into a general purpose OS, possibly forking the code so that there are pro and consumer editions.  I find this an unlikely solution, as the app ecosystem for content creation is heavily influenced by Microsoft.
  2. Touch-enable MacOS, and start developing convertible MacBook Airs.  This is Apple's second-mover advantage - they'll surely learn from Microsoft's branding mistakes with the Surface RT, which poisoned Atom- and Core-based Windows tablet sales.  But no matter how they bring it to market, it's going to cause some confusion and unhappiness among customers - particularly the loyal ones who want a MacBook tablet that will run the apps they bought for their iPhone and iPad.
  3. Do the same as above, but provide a virtualized iOS environment on the convertible MacBook Airs.  I'm not entirely sure of the feasibility of this approach, but it's the one I would go with.  In any case, I certainly wouldn't call it an iPad.

 

Whatever they choose, Apple's going to have to make a decision soon.  The popularity of tablets has delayed the refresh cycle of consumer laptops - it certainly hasn't made those laptops unnecessary.  Eventually people will want to refresh their laptops, and Apple shouldn't want consumers finding out they can replace both their tabs and their computers with one device with Microsoft, but not with Apple.

If you follow any EMC blogs at all, you'll have seen that we launched the XtremIO product last Thursday.  Our entrée into the all-flash-array (AFA) storage segment, XtremIO rounds out EMC's robust flash strategy consisting of hybrid arrays (VNX & VMAX), server flash in the form of both straight storage (XtremSF) and cache (XtremSW), distributed server based storage (ScaleIO), and all flash arrays (XtremIO).

 

If you're interested in how it works, you should head on over to Chad's blog for a look-see.

 

What you might have missed is how this applies to Microsoft workloads.  As you might imagine, they fall into a few different categories (I wrote a blog post about this last year, but it needs an addendum for AFA appropriate applications):

 

Major use cases for flash technology in general

 

  • I need gobs of IO:  These are workloads that generate a bunch of IO.  Think of VDI boot storms, or the various wackiness that can come about from any dense virtualized environment.  Or perhaps high-performance SQL Server environments (virtualized or not).

  • I need low latency:  Think of a SQL-backed application that isn't performing the way you'd like.  You don't have to think hard to arrive at one.

  • I need a smaller footprint: Any application that you're short-stroking traditional disks for.  Most people have turned to automatic tiering on a hybrid array to solve this.  But still - if you have legacy HW and the right type and footprint of application, you might add AFA to the evaluation list.

 

And let's add a couple to the list for:

Major use cases for all-flash arrays specifically

 

  • I have a heavily duplicated environment: The obvious one here is VDI (look at the blog title for a hint as to what OS gets virtualized most frequently).  Let's also think about SQL Server Always On Availability Groups (AAG).  As you probably know, AAG essentially duplicates SQL databases, allowing them to be attached to separate servers for both high availability and read-only scale-out.  Read-only scale-out can mean anything from offloading backups to a separate server, to offloading reporting workloads and cube generation.  The inline deduplication feature of XtremIO allows you to achieve a primary design goal of SQL AAG technology without any increase in storage footprint (see the sketch after this list).
  • I need wicked low latency, uniformly, at scale, unencumbered by cache or other workloads on the array.  A corollary of the second bullet point above.  But it bears some inspection, particularly in those instances where transactions per second can be translated into a dollar figure.
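To make the dedup point a bit more concrete, here's a toy sketch of a content-addressed block store.  The fingerprinting scheme (SHA-1) and the 8 KB block size are illustrative assumptions of mine, not a description of XtremIO internals - the point is simply that two identical copies of a database land on the same physical blocks:

```python
# Toy content-addressed block store to illustrate inline deduplication.
import hashlib
import os

class DedupStore:
    def __init__(self):
        self.blocks = {}        # fingerprint -> block (each unique block stored once)
        self.logical_bytes = 0  # what the hosts think they have written

    def write(self, data, block_size=8192):
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            self.blocks[hashlib.sha1(block).hexdigest()] = block
            self.logical_bytes += len(block)

    @property
    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())

store = DedupStore()
database = os.urandom(16 * 1024 * 1024)   # stand-in for a 16 MB database file
store.write(database)                     # primary AAG copy
store.write(database)                     # secondary AAG copy - identical blocks
print(store.logical_bytes // 2**20, "MB logical,",
      store.physical_bytes // 2**20, "MB physical")   # 32 MB logical, ~16 MB physical
```

In a real AAG deployment the copies drift apart a little as logs replay, so the savings won't be a perfect 2:1, but the principle holds.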

 

There are some things you'll want to consider around the method of deduplication, scale-out, and so forth to determine which AFA might be right for you.  But a comprehensive strategy is called for in any of these cases.  For example, the VDI use case you might test out in the lab would consist of system partitions.  But user data often dwarfs the capacity footprint of system partitions in VDI environments, and it has neither the duplication characteristics nor the performance requirements of the system partitions.  For those, NAS might be the best approach.

 

You'll also want to consider other features.  In the case of XtremIO, it comes with VPLEX support out of the box, which allows you to do geographically dispersed clustering (check out the white paper on Microsoft's site for details), not to mention RecoverPoint support and all the goodness it brings.

 

Finding the ideal use cases for flash in your data center

 

Chances are that you can think of a couple applications you're responsible for that could use a performance boost.  You've given an application all the memory and CPU you can, along with all the storage you can afford.  You (or your management) are either unwilling or unable to optimize the application itself.  You've got to feed the beast.  For those apps, run perfcollect on all the layers of that app.  Get the results to your EMC SE or EMC partner SE.  Let us know what you're doing with the machine - are you doing reporting, streaming backups, SSAS cubes?  We can tell you what the latencies are, and what you can expect with an EMC flash solution.  If the capacity and IO requirements fit, the answer may be XtremIO.  Or it might be server-based flash, or it might be automated storage tiering.  Or it might be as simple as shrinking your backup windows with hardware offloaded snapshots or clones.

 

You never know if you don't ask.

An internal email distribution list at EMC recently lit up as the MSpecialist crew works with customers to accommodate Exchange 2013.  The advice from Microsoft is here.  Working off a typical Exchange config - 2500 active and 5000 total mailboxes on a mailbox server, with a 150 message/day/user workload - the guidance works out to about 192GB of RAM.

 

My initial response was: "ARRGH!  The RAM!  It will consume all of us!  We cannot possibly virtualize this crazy thing!"

 

But let's step back for a minute and face a few facts.

  1. Yes, 192 GB of RAM is quite a lot for a single mailbox server to consume, but it's not THAT big.
  2. With the Exchange team's new cumulative update policy, we can no longer "set it and forget it" like we did with earlier versions.  Infrastructure requirements can and probably will change over the course of the usable life of the hardware you purchase, and 192 GB RAM may be too much in the future, or it may not be enough.
  3. Virtualization can help you save even more on infrastructure costs with these larger RAM requirements
  4. Storage pools can enable efficiency in the same manner as virtualization

 

First, according to NIST, elasticity is one of the five essential characteristics of cloud computing.  It is an advantage because software gets released and updated much more frequently than in the past.  If you over-provision because the application requires something at RTM that it doesn't require in subsequent releases, you should be able to reclaim that capacity and use it for something else.  If you need more infrastructure, you should be able to add it easily.

 

Second, you can deploy reliably on far less hardware, while leaving yourself with far more flexibility in the future, if you virtualize.

 

To illustrate my point, let's consider an Exchange 2013 environment with 30,000 users, 150 messages/day/user, and a two-copy active/active DAG.  This is a pretty ordinary single-site configuration, which simply gets mirrored at a DR site.  With this, we'd typically put 2500 active and 2500 passive mailboxes on a single server, resulting in a total of twelve servers.

 

The Physical Configuration

If I were to go entirely physical, I would need something like this:

 

With this configuration, my "steady state requirement" (hosting 2500 active and 2500 passive mailboxes) would be about 10 cores and 96 GB of RAM.  But I need to provision double that, because every server needs to be able to host 5000 active users.

 

Aggregated across the entire environment, I would be looking at:

The Virtual Configuration

If I were to consider virtualizing this environment, I would look at 4 physical hosts, each with 3 mailbox VMs.  The mailbox VMs would have the same requirements as the physical servers, but I could take my failure domains into account, saving on infrastructure:

 

With this configuration, the worst case scenario is the loss of a single physical host.  But in that case, each physical server is only taking on an additional 2500 users.  This allows about a 33% efficiency gain over a physical infrastructure, depending on the component you're looking at:

 

 

It's worthwhile to note the IO/s.  It's true that there's no way to save capacity (actually there is, but that's a topic for another post).  And it's also true that Exchange is already great on nlSAS, but saving that extra 33% in IO might mean the difference between a RAID1/0 configuration on nlSAS and a RAID-6 configuration.
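For anyone who wants to check my math on that ~33% figure, here's a minimal sketch.  It treats 2500 active mailboxes (plus their passive copies) as one "building block" of roughly 10 cores and 96 GB of RAM - the steady-state numbers above - and sizes every server or host for its worst-case failover load rather than steady state:

```python
# Rough sanity check of the ~33% efficiency gain. The 10-core / 96 GB
# "unit" per 2500 active mailboxes is the assumption stated in the post.
CORES_PER_UNIT, RAM_GB_PER_UNIT = 10, 96

def aggregate(server_count, worst_case_units):
    return (server_count * worst_case_units * CORES_PER_UNIT,
            server_count * worst_case_units * RAM_GB_PER_UNIT)

physical = aggregate(server_count=12, worst_case_units=2)  # each must handle 5000 active
virtual  = aggregate(server_count=4,  worst_case_units=4)  # each host: 7500 + 2500 on failover

print("physical :", physical[0], "cores,", physical[1], "GB RAM")   # 240 cores, 2304 GB
print("virtual  :", virtual[0], "cores,", virtual[1], "GB RAM")     # 160 cores, 1536 GB
print("savings  : {:.0%}".format(1 - virtual[0] / physical[0]))     # ~33%
```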

 

So let's put away this myth that Exchange is unfriendly to virtualized infrastructure.  To my thinking, going physical with Exchange makes very little sense - it's not only more expensive, power hungry, and all that, but it also leaves the administrator with a very rigid infrastructure that cannot adapt to the Exchange team's rapid release cycle.

WARNING:  GEEK BLOG POST

 

When you talk to a storage vendor about asynchronous block replication, your first two questions should be:

  1. Do you preserve write order fidelity within a single LUN?
  2. Can you preserve write order fidelity between multiple LUNs?

 

Consistency Group (CG) technology is cool.  When you put all your databases and associated logs in a CG, you can replicate asynchronously and still have your database come up at the DR site every time.  When you don’t have it, you need to enforce consistency by entering a state in which the database can be backed up while the database is mounted.  With SQL, this would mean using a VDI or VSS requestor to enter that state, taking a snapshot with a hardware provider, and finally replicating that snapshot. 

It’s not that snap and replicate is a bad thing – people have been doing it for years.  But it does limit your achievable recovery point objective to roughly double the interval at which you can comfortably quiesce and replicate your database - if you can only quiesce and replicate every 30 minutes, your worst-case data loss approaches an hour.  It also limits your achievable recovery time objective, because extra steps are often needed to recover your database.

 

This is all tribal knowledge amongst storage and database folks.  But people often don’t know why, either because storage and database administrators are mortal enemies or they speak different languages. 

 

So here’s why:

 

Let’s start with a concept known as “Write Order Fidelity” (WOF).  When applied to asynchronous remote replication technology, this means that the writes at the disaster recovery (DR) site are applied in the same order as they were applied at production site. 

[Animation: Async replication without WOF]

 

In the instance above, when you try to attach that database, it will appear wholly inconsistent and may not attach.  Worse, it could attach a corrupt database successfully.

 

WOF preservation looks like this:

[Animation: Async replication with WOF]

In this case, you’re replicating asynchronously, but the writes are applied to the DR site in the same order they were applied at the production site.  So at any given time, the data at the DR site looks as if the server had simply stopped working at the production site.  There’s data loss, and transactions may need to be rolled back, but that’s an automatic, normal operation with a database like SQL, the JET database backing Exchange, or Oracle.  In fact, that’s what SQL does every time there’s an unplanned cluster failover. 
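If you prefer code to animations, here's a toy model of the same idea.  The block addresses and values are made up; the key notion is that a replica is "crash consistent" if its contents match some prefix of the production write history - that is, it looks like production simply stopped at some point in time:

```python
# Each write is (block address, new contents); the values are made up.
production_writes = [(1, "A1"), (2, "B1"), (1, "A2"), (3, "C1"), (2, "B2")]

def replay(writes):
    lun = {}
    for block, value in writes:
        lun[block] = value
    return lun

def crash_consistent(replica, history):
    # Consistent = the replica equals the state after *some* prefix of the
    # production history, i.e. production could simply have stopped there.
    return any(replica == replay(history[:n]) for n in range(len(history) + 1))

# Without WOF: the engine applies whichever writes it has, in whatever order.
out_of_order = [production_writes[3], production_writes[0], production_writes[4]]
print("no WOF:", crash_consistent(replay(out_of_order), production_writes))   # False

# With WOF: the replica may be behind, but it is never an "impossible" state.
in_order = production_writes[:3]
print("WOF   :", crash_consistent(replay(in_order), production_writes))       # True
```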

But why don’t we need WOF with synchronous replication?  That’s an interesting question.  First, WOF is implied with true synchronous replication.  Second, true synchronous replication actually writes to the DR site before writing to the production site:

[Animation: Sync replication – WOF is always enforced]

 

In this case, the DR site is always in complete synchronicity with the production site: writes must be acknowledged at the DR site prior to being considered “applied” at the production site.  Of course, this presents the optimal situation – replicated data with no potential for data loss.  However, it comes at a cost:  any network latency you have will be added to the storage latency.  So in effect the distance you can replicate is limited by the storage latency your application can tolerate.  For those of you keeping notes, you generally want to keep your write latencies to your transaction logs under 10 ms, which makes for a pretty limited distance.
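To put rough numbers on "pretty limited distance": light in fibre travels at roughly 200,000 km/s, so each kilometre of separation adds about 0.01 ms of round-trip time.  The 2 ms of local array latency below is an illustrative assumption, and real replication protocols add overhead on top of the raw speed-of-light figure, so treat this as a best case:

```python
# Best-case effective log write latency vs. distance for synchronous replication.
LOCAL_WRITE_MS = 2.0                  # assumed latency of the production array alone
RTT_MS_PER_KM = 1000 * 2 / 200_000    # ~0.01 ms of round trip per km of separation

for km in (10, 100, 300, 500, 1000):
    effective = LOCAL_WRITE_MS + km * RTT_MS_PER_KM
    print(f"{km:5d} km separation -> ~{effective:4.1f} ms effective log write latency")
```

Keep the 10 ms transaction log budget in mind, and the practical radius for synchronous replication lands in the hundreds of kilometres - which is exactly why so many people end up with asynchronous requirements.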

 

So that’s the reason behind the first question you’re asking the storage vendor.  What’s up with write order fidelity among multiple LUNs?

It turns out that most people will follow their database vendor’s advice and put their database and transaction logs on separate LUNs.  It’s sorta outside the scope of this post, but in general it’s to ensure recoverability in the event of a lost LUN.  It’s also for performance purposes – your transaction log is sensitive only to write latency and is always sequential in nature, whereas your database is more sensitive to read latency and can be random, sequential, or anything in between.

 

The function of preserving write order fidelity across multiple LUNs is generally performed by a “Consistency Group” (CG) in EMC parlance.  Usually other vendors will use that term – I don’t believe it’s trademarked.  CG technology is integrated into RecoverPoint, SRDF/A and even MirrorView/A.  Remember, it’s not needed with any true synchronous technology.  But most people have asynchronous replication requirements.

And Consistency Groups are really, really important for databases

 

This has to do with the ACID properties of databases that are in wide use today (if you want a brief but cool read on the history of the modern database, wander on over here).  Specifically, it has to do with the atomicity part of the ACID properties.  If part of a database transaction fails – no matter the reason - the entire transaction gets rolled back.

 

That’s one of the big reasons the transaction log even exists.  Lots of storage people think the log is there only for rolling forward in the event of a failure.  Not true.  It can be used to roll back in the event that a transaction fails.  In fact, storage failure is not the only reason a transaction fails.  Go look at the ACID properties to see other reasons why a transaction might fail. 

 

So anyway, with atomicity in mind, consider the following scenario:  You’re replicating asynchronously, and you’ve verified that your storage vendor honors write order fidelity within a single LUN.  However, write order fidelity is not honored among multiple LUNs, and you’ve followed best practices in separating your databases and logs.  A failure scenario might look like this:

[Animation: Multiple LUNs without consistency group technology]

 

In this case, the database is slightly “ahead” of the transaction log.  The RDBMS (like SQL or Oracle) would say, “well I’ve got only part of a transaction here.  No problem.  I’ll roll it back.  I’ll refer to my transaction log to see how I might achieve exactly that”.

Keep in mind I don’t write software for a living.  I’m paraphrasing.

 

However, when it refers to the transaction log, it doesn’t see stuff in there relevant to how it might go about rolling back the transaction.  In my snazzy animation, it needs data from blocks six and nine to roll back the transaction.  The RDBMS promptly gives up, goes for a latte, leaving you to restore from a backup.

 

Enter a consistency group.  As I’ve mentioned, this technology enforces write order fidelity across multiple disks.  So you can have your cake and eat it too.

[Animation: Multiple LUNs with consistency group technology]

 

In this case we see a failure happen in mid-transaction.  Of course this can happen any time even without any sort of remote replication.  However, if the database and transaction log are in the same consistency group, the transaction log will always have the data necessary to automatically roll back the transaction and begin processing.
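Here's the same scenario as a toy model, again with made-up write contents.  Without a CG, each LUN replicates in order but independently, so the database LUN can land "ahead" of the log LUN.  With a CG, the DR image is always a prefix of the combined write stream, so the undo records always arrive before the database blocks they protect:

```python
# "old values" stands in for the undo information the RDBMS needs to roll back.
log_writes = ["log: begin tx42", "log: old values for blocks 6 and 9", "log: commit tx42"]
db_writes  = ["db: overwrite block 6", "db: overwrite block 9"]

# Production interleaving: the undo record hits the log LUN *before* the
# database blocks are overwritten - that is what makes rollback possible.
production = [log_writes[0], log_writes[1], db_writes[0], db_writes[1], log_writes[2]]

# Without a CG: each LUN is replicated in order, but independently, so the
# database LUN can be ahead of the log LUN when the link drops.
dr_no_cg = {"log": log_writes[:1], "db": db_writes[:2]}

# With a CG: the DR image is always a prefix of the combined production stream.
cut = 3
dr_cg = {"log": [w for w in production[:cut] if w.startswith("log")],
         "db":  [w for w in production[:cut] if w.startswith("db")]}

print("no CG:", dr_no_cg)   # blocks overwritten, undo records missing -> restore from backup
print("CG   :", dr_cg)      # undo records present for every overwritten block -> roll back
```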

 

That’s about all there is to it.  When I call this “crash consistency” the emphasis is on “consistency”.  As long as all the data associated with the database (logs and DB file) are consistent, the RDBMS will be able to recover.  It’s a normal, regular, every day operation that happens whenever a fault is sensed within a SQL Server resource group.  Emphasizing “crash” as in “car wreck” is misleading.

Lastly, it’s only a matter of time before someone at Pixar notices my awesome animations and calls me with some sort of really cool job offer.  So I’m not sure how long I’ll be around here.

The end is nigh for the era of TechNet subscriptions.  There are lots and lots of opinions out there, and my Mom used to say....  Maybe I shouldn't mention what my Mom said about opinions.  I'll just leave it at the fact that I have one too.

 

Most folks (like the Register) are focusing on the idea that it hurts the small IT shop the most.  While this notion dovetails with Microsoft's attempts to push smaller shops to services like Office 365 and Azure, I completely disagree.  An IT shop servicing even 50 seats will spend enough yearly that $6200/year is going to be in the noise, financially speaking.

 

But make no mistake - this is still a huge miscalculation for Microsoft.  Inexpensive TechNet subscriptions (compared to relatively expensive UNIX training) are the primary reason Microsoft was able to break into the data center over the last couple of decades.

 

I broke into IT with Microsoft expertise (which I gained from TechNet) in the late 90s, and learned UNIX once I got my first job (and hence access to expensive UNIX kit).  As a result, although I was very comfortable in UNIX environments, I knew exactly what could be done more easily with Microsoft platforms.  So when it came time to make decisions about whether to move from Sendmail to Exchange, or NIS+/Solaris LDAP to Active Directory, I knew exactly where I wanted to go.

 

With meaningful MSDN subscriptions that include SQL and Exchange in the $6,000/year range, access to Microsoft software is out of reach for technologists who will be making decisions in 5-10 years.  When my nephew builds his first email network to see how it works, he'll use Zimbra instead of Exchange. When he starts playing with databases, he'll probably use MySQL rather than SQL Server.  Sure, he could pay for the privilege of running this in Azure, but why should he when he can do all of it with KVM or Virtualbox on the very same second-hand laptop he needs to access Azure services?

 

Microsoft has been known to back down when they've made a mistake.  I hope they will recognize this as a bad decision, and do exactly that.

 

(cross-posted to flippingbits)

One of the biggest requests I get from users of perfcollect is the ability to run perfcollect remotely from a single machine.

 

With perfcollect 5.0, you can now run perfcollect from the command line, specifying the duration and sample interval.  You can also specify a CIFS share as a target.   So if you have a tool that can push software out to hosts and run it, then you can greatly simplify the collection of data from dozens or hundreds of hosts.

 

Ed Howard from Avnet showed me an example today using a free tool called PDQ Deploy from Admin Arsenal.  It's quite simple.  When installing the tool, you give it credentials.  Then you create a package, specifying the duration, interval, and output directory you'd like to collect the data in.  Quick note - spaces are not currently supported in the output path.

 


 

Click "Deploy Once", then select the hosts you want to deploy it to, and you run it.


 

As you can see, even if there's a failure, it doesn't hang any of the other hosts.


 

If you want to see the whole process in action, check out the video below...

One of the worst 20 minutes of my life occurred when I first installed a beta edition of Windows 2012.  Lots of stuff had moved around, and I had difficulty finding things in the interface.

 

In the interest of having you avoid this trauma, I've recorded a quick 5 minute tour of the new management GUIs.

 

I managed to cram a two-node DAG onto my laptop.  Working in the management interface, a lot of the changes - and the seeming emphasis on PowerShell over the management console in Exchange 2010 - make a lot more sense now (the management console is now gone in favor of a rather elegant, if limited, web GUI).

 

So here's how I did this.  I have a Lenovo T430s with 8GB RAM and an SSD.

 

VMware Workstation Configuration

  • VMNET 3 Private/unrouted production network 192.168.178.0/24
  • VMNET 4 Private/unrouted heartbeat network 192.168.179.0/24
  • 5.9 GB dedicated to VMware Workstation

 

VM Configuration - all cloned off a template

  • DC01 - 1 GB RAM, 1 vCPU, 1 network (VMNET 3) - domain controller, with iSCSI services if I want to play around with that
  • EX01 - 4 GB RAM, 2 vCPU, 2 networks (VMNET 3 and 4) - Exchange Mailbox/CAS server
  • EX02 -  same as EX01

 

Downloads required (all eval software):

 

None of this required internet connectivity.  I installed UCMA and the Office filter packs, rebooted, and then started Exchange setup, which completed without a hitch or reboot in about a half hour.  Once installed, I was able to get a DAG set up with EX01 and EX02 using the management GUI (http://localhost/ecp) and played around a bit with PS.  It's definitely worth swinging through the interface to get an early peek at what they've done - it really shows what they're going for: a seamless administration interface for on-premises Exchange and Office 365.

 

We have some things cooking for Q2 - we hope to have a robust Exchange 2013 sandbox checked into vLab for everyone to play with at will.

 

If you give this a shot, let us know your experience.  I'm sure this configuration could be tightened up a bit.
