
I was asked a question the other day about an error that was reported at one of our partner's locations to the local SQL team.  The Windows team was seeing errors on their domain controllers like this:

There are multiple accounts with name MSSQLSvc/ of type DS_SERVICE_PRINCIPAL_NAME.


The SQL team was not experiencing any issues, so they wondered whether this was something that needed attention.  It sounds like a simple question; however, the answer is not so simple.


This error surfaced on the AD domain controller because it originates from how SQL Server handles logins from clients using Windows integrated authentication.  The DS_SERVICE_PRINCIPAL_NAME in the error tells me that this is a Kerberos authentication issue.  In Windows 2000, Microsoft added Kerberos as another network authentication protocol option in addition to NTLM.  Kerberos became the default authentication protocol for SQL Server starting with SQL Server 2005.  For background, the Kerberos computer network authentication protocol was originally developed at MIT in the 1980s.  There are a lot of articles on the web and a good section in Wikipedia for those looking for more historical context.


Many Windows administrators and SQL DBAs don’t know about Kerberos authentication.  It is used automagically between most Windows client and server operating systems for user authentication.  Even though it is the default authentication protocol for SQL Server, it oftentimes requires additional configuration steps after SQL Server is installed in order to make it work.  So that begs the question: if people don’t know about it and typically don’t configure it properly, but it is the default, how are clients using Windows integrated authentication successfully logging in?  If Kerberos authentication cannot succeed, SQL Server will use the older Windows authentication protocol, NTLM.  This is called “failing back to NTLM”.  How bad is that?  It depends (go figure).


There are a handful of scenarios that need, or greatly benefit from, something called Kerberos Constrained Delegation (KCD).  KCD greatly enhances the security of server-to-server authentication and does so much more efficiently, both good things.  These include some SharePoint configurations, some Analysis Services role-based security scenarios, and some SQL Server linked server configurations.  If you look at this list, say “woo hoo, I don’t use any of that stuff”, and plan to bail on this article, ask yourself whether you might want to use those in the future.  KCD must have Kerberos authentication working as a prerequisite.  I have seen more than one enterprise attempt to add SSAS with role-based security only to discover that they needed to do a significant clean-up of old, misconfigured, and incorrect Kerberos configuration data in Active Directory.  On the other hand, you may never need Kerberos and may never have any issues relying on NTLM failback authentication.


So what is the answer to “does any of this really matter”?  I suppose I could say "just do it right" and leave it at that.  However, since doing it right involves coordinating Windows Active Directory configuration with SQL Server setup best practices, that is not really the best answer for everyone.  The other option is to say "Who cares?  If it ain't broke, don't fix it."  I'm not really comfortable with that either, because sooner or later you can have a big configuration mess on your hands that will take a long time to fix properly.  So I'm going to try to quickly explain what is causing the errors I mentioned above and how you can avoid them, and point you to some additional detailed resources.  At that point you can make an informed decision about what you want to do.


I’m not going to talk at all about the weaknesses of NTLM and its use of password hashes.  If that is a concern to you or your security auditors, then configure Kerberos properly, end of story.  I am also not going to say much about the performance differences between NTLM and Kerberos authentication beyond this: Kerberos is way more efficient.  If you are operating at such a large scale that the number of authentication requests you receive per second is important to you, then you already know about and use Kerberos.  So I’m just going to cover the basics of how Kerberos works and which configuration steps are important.


Kerberos uses symmetric key encryption, which is very efficient.  The biggest challenge with symmetric key encryption is securely storing and exchanging keys between the two parties that want to encrypt their communications.  Kerberos requires a Key Distribution Center (KDC) (not to be confused with KCD from above) for every security domain that uses the protocol.  In the Windows implementation, the KDC for a domain is located on the Active Directory domain controller of every AD domain.


Every computer and user account in Active Directory is assigned a unique encryption key at the time the user or computer object is created.  The AD domain controller keeps a copy of this key and copies it into the user's session (stored in memory) when the user logs on, regardless of which client computer the session originates from.  Computer objects get a copy of their key each time they power up and communicate with a domain controller.  These are the keys that Windows uses in the Microsoft implementation of the Kerberos protocol.  Here’s a quick synopsis of how it does that.


When a Windows user attempts to connect to SQL Server using integrated authentication, SQL Server replies with a message that amounts to “I prefer to use Kerberos”.  If the client understands that message, the request is redirected to the domain controller to ask for assistance in creating a Kerberos authentication “ticket”.  The DC needs to find the object (computer or user account) that is the startup account for the SQL Server service.  You can find out what that is by using the SQL Server Configuration Manager tool.  Local System is an example of SQL Server running with a computer account, and something of the form domain\user is an example of a user account (oftentimes referred to as a service account).  The DC cannot interrogate the server or service to find out what the account is, the way you can (if you are a sysadmin).  Instead, the DC looks for an object created and stored in the AD schema that tells it which account is running the service that the user wants to connect to.  That object is called a Service Principal Name, or SPN.


Remember the error that was reported by the AD team:

There are multiple accounts with name MSSQLSvc/ of type DS_SERVICE_PRINCIPAL_NAME.


There can be one and only one SPN for each service that wants to use Kerberos authentication.  This error says there are multiple accounts associated with one SQL Server service instance.  Why does that matter?  Because the SPN is used to identify what the service account is for that service.  Remember, AD can’t figure that out on its own.


In order to make this error go away, an application or administrator would have to delete the invalid SPNs in AD and create a single valid SPN with the correct mapping, of the form:

MSSQLSvc/<SQL Server FQDN>:<port> <Domain\Account>


The MSSQLSvc prefix is required for SQL Server.  The combination of fully qualified domain name and port number has to be unique to that SPN and no other.  The default TCP port for SQL Server is 1433; if your SQL Server uses a different port, you would use that instead.  When you create an SPN for a clustered SQL Server, you must specify the virtual name of the SQL Server cluster as the SQL Server computer name.  The other piece of information required for a complete SPN registration is the service account.
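SPN clean-up of this kind is usually done with the Windows setspn utility, run from a domain-joined machine with domain administrator rights.  Here is a sketch of the typical sequence; the host, port, and account names are placeholders, not values from the error above:

```shell
:: List the SPNs currently registered to a service account
setspn -L CONTOSO\sqlsvc

:: Search the forest for duplicate SPNs (the condition behind the error above)
setspn -X

:: Delete an incorrect registration, then add the single valid one
setspn -D MSSQLSvc/sql01.contoso.com:1433 CONTOSO\oldaccount
setspn -S MSSQLSvc/sql01.contoso.com:1433 CONTOSO\sqlsvc
```

Note that setspn -S checks for duplicates before adding, which helps avoid recreating the very error this article started with.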

Now let’s get back to the client authentication request and see how this all comes together.  AD knows the identity of the user trying to authenticate to SQL Server because that user has been redirected to AD by SQL Server (remember, “I prefer Kerberos”).  AD packages all the information that Kerberos needs, including all the AD groups that the user belongs to, into a package called the authenticator.  AD uses the SPN to determine the service account that the SQL Server is using.  AD gets a copy of the encryption key that belongs to the SQL Server service account and puts it, along with the authenticator, into a package encrypted with the user's secret key.  That way only that user can decrypt the package to recover the SQL service key and the authenticator.  The user process then uses the SQL service key to encrypt its authenticator and a time-stamped message, and sends the result to SQL Server.  Since this package was encrypted with the SQL Server service account's key, and the service keeps a copy of that key in memory, it can decrypt the package.  SQL Server can then check the identity of the user against the permissions of the object being requested and decide whether the request is authorized.  If not authorized, the user will get an access denied message.
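The exchange above can be sketched as a toy simulation.  To be clear about assumptions: this is not real Kerberos.  Real implementations use proper ciphers such as AES; here "encryption" is a stdlib HMAC-derived XOR stream, and every account name, SPN, and key is made up for illustration.

```python
# Toy sketch of the ticket exchange described above; NOT real Kerberos.
import hashlib
import hmac
import json

def _keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher: the same operation works both ways

# The KDC (on the domain controller) knows every account's secret key,
# and the SPN maps the service to its service account.
kdc_keys = {"CONTOSO\\alice": b"alice-secret-key",
            "CONTOSO\\sqlsvc": b"sqlsvc-secret-key"}
spns = {"MSSQLSvc/sql01.contoso.com:1433": "CONTOSO\\sqlsvc"}

# 1. KDC builds the authenticator and looks up the service key via the SPN.
authenticator = {"user": "CONTOSO\\alice", "groups": ["CONTOSO\\DBAs"]}
service_account = spns["MSSQLSvc/sql01.contoso.com:1433"]
service_key = kdc_keys[service_account]

# 2. KDC sends the service key plus authenticator, sealed with the USER's key.
package = encrypt(kdc_keys["CONTOSO\\alice"],
                  json.dumps({"service_key": service_key.decode(),
                              "auth": authenticator}).encode())

# 3. The client decrypts with its own key, then seals the authenticator
#    with the SERVICE key and presents that to SQL Server.
client_view = json.loads(decrypt(b"alice-secret-key", package))
ticket = encrypt(client_view["service_key"].encode(),
                 json.dumps(client_view["auth"]).encode())

# 4. SQL Server decrypts with the service key it already holds in memory
#    and learns who the user is and what groups they belong to.
seen_by_sql = json.loads(decrypt(b"sqlsvc-secret-key", ticket))
print(seen_by_sql["user"])  # CONTOSO\alice
```

The point of the sketch is step 4: the DC never talks to SQL Server directly, yet SQL Server can trust the ticket because only the DC and the service account share that key.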

The efficiency of Kerberos is realized on subsequent requests: since the user holds the authenticator and the service ticket containing the key for the SQL Server, it never has to talk to the DC again until the user session ends or the service ticket expires.
The tools needed to add and delete Service Principal Names in Windows AD are best described as primitive.  You also need to be a domain administrator to create and delete them.  There is a relatively new tool that is very useful called the Microsoft Kerberos Configuration Manager for SQL Server.  It is a diagnostic tool that helps troubleshoot Kerberos-related connectivity issues with SQL Server.


Here are a few useful links for more information:


Kerberos Wikipedia


Kerberos Windows 2000


Microsoft Kerberos Configuration Manager for SQL Server


Thanks for reading,

Phil Hummel

Mountain View, CA

I can't tell you the number of Windows Clusters I've implemented over the past fourteen years because of Patch Tuesday, but let's just say it was more than I could keep track of.  If these words ring true for you, you'll be very excited to hear that EMC is releasing a massive Solution white paper that discusses how to bring your systems not just up to local availability, but to Corporate Availability.  We are bringing you the ability not just to patch whenever you want, but to do whatever you want, whenever you want — at all levels of the Storage/Compute/Network stack.  We are bringing you products and techniques that protect your data and your systems better and more accessibly than ever before.


But what the heck is Corporate Availability?  It's the level of application availability that business units have learned to expect and demand.  It means that the systems they use to generate revenues, to manufacture products, and to maintain contact with their customers will be available -- constantly available.  Corporate Availability means that local High Availability techniques are no longer "good enough".  It means that patching needs to occur without a testing cycle -- it means that systems that have been patched need to be able to go backwards in time to "undo" a patch that may have negatively impacted the system.  Corporate Availability means that, no matter what is happening in Server Administration, Network Management, or Storage Management, every application owner can focus on running their applications.  There's no more tolerance for scheduling outages.  There's no more "agreeing to wait for them".  Application owners need to have access to their application infrastructure constantly and consistently.  That's what Corporate Availability demands.


There once was a time when applications had "maintenance windows".  I remember when I was a Technical Account Manager (TAM) at Microsoft back in the last decade, the customers I supported were large, international financial services firms.  Their IT shops, just like today's, needed to support hundreds of SQL Server instances, dozens of Exchange 2003 servers, and hundreds of Windows File Servers.  Every month, they would wait for Patch Tuesday.  At about 10:00am PST, the email blast would come from Bellevue or Redmond announcing to the organization which groups would need to begin their systems tests based on the latest hotfixes for the "following products are affected" list… and the fire drill would begin.  Who would be affected, how long would the systems be down, who would work late, what impact would these updates have on related systems?  How would we roll back?


Many of the issues related to "Patch Tuesday" have been addressed by clustering techniques and by snapshotting technologies — both of VMs and of datasets.  Hardware VSS has made going backwards in time a commonplace concept.  But what about the rest of the stack?  Can application availability be totally satisfied by snapshots?  We all know the answer is "NO".


Networking and the "connectedness" of our user base is so complex, and so entirely required for everything we do, that we need to provide layered techniques to maintain access to our valued datasets.  These datasets are, of course, the reason that IT organizations and IT infrastructure exist.  Each data element, whether it's a customer order for a pair of black-and-white checkered Vans skateboarding sneakers or an email from your HR department outlining the new holiday schedule for 2015, is the key to an entire system's existence.  Allowing the consumer, the end user, to access those data elements is the stuff this new EMC Solution is all about.


In the Solution, we gathered your favorite applications -- ok, so "favorite" might not be the right adjective… -- we gathered your most popular applications -- SharePoint, Exchange, and SQL Server.  We implemented them in the way that you would most likely choose.  We even put the SQL Servers into Failover Clusters, not AlwaysOn Availability Groups.  And we demonstrate how to maintain application availability across THREE sites.  EMC's architecture does not REQUIRE three sites, but it allows you the opportunity to explore what EXTREME availability can look like.  And it's pretty amazing how simple the whole thing really is.  It's not simple because it's 2014; it's simple because it's EMC.  The folks you've trusted since 1990, when we brought you Symmetrix, the world's first storage supercomputer.


Please take some time to glance through the new white paper -- it'll be launched very soon -- it'll show you the exact steps to take to create an availability infrastructure unlike anything you've seen before.  It takes advantage of products that you have come to know and trust -- components such as EMC VNX, EMC RecoverPoint, and EMC VPLEX.  It uses new code versions that allow them all to know about each other and do things that you've wished they could do.  Things such as replicating from a stretched Windows Failover Cluster (located in two synchronously connected data centers) to a third data center that's asynchronously connected (three states away) AND automatically failing over to it if the network access links get a visit from the Grim Backhoe.


This blog will be updated when the paper is published, but for now, please post your comments below.  Please let me know if you are as excited about this paper as I am!

As SQL Server continues to introduce new and improved capabilities, companies are increasingly leveraging SQL Server for business-critical applications. Microsoft introduced AlwaysOn Availability Groups (AAGs) in SQL Server 2012, which provide an improved level of availability and protection for your databases. AlwaysOn can provide a number of benefits for your environment, including offloading backup I/O to provide more efficient backups. SQL Server 2014's implementation of AAGs even includes support for hybrid cloud scenarios. Despite the numerous improvements, software corruption may still replicate to all copies of your data, leaving every copy compromised. A robust data protection solution for SQL Server is required for critical applications.


EMC leverages AAGs and the new hybrid cloud capabilities of SQL Server 2014 to provide enhanced data protection. Both EMC Avamar and Networker support protection of SQL Server 2014 in multiple hybrid cloud scenarios:


  • On-Premise with Cloud Disaster Recovery (DR) Secondary
  • On-Premise, Remote On-Premise & Cloud DR Secondary
  • Cloud with On-Premise Secondary




Enabling DBAs

Avamar & Networker leverage SQL Server 2014 hybrid cloud capabilities and enable the SQL DBA to set the backup policy for the hybrid cloud. Via the backup preferences tab in SQL Server Management Studio (SSMS), the SQL DBA can set a preference for on-premise SQL DB secondary copies to enable fast and efficient backup of cloud-enabled applications.


EMC expands backup and recovery for Microsoft SQL Server beyond the data center and into the public cloud. For hybrid cloud deployment scenarios, SQL DBAs can set AlwaysOn backup policies to back up either the on-premise active database or an on-premise secondary database.


In addition, Data Domain Boost for Enterprise Applications (DD Boost) integrates seamlessly with SQL Server, allowing DBAs to back up data directly to Data Domain. This gives application owners and database administrators complete control of their own backups, using their native utilities, reducing training time and complexity.


Data Growth is Exponential

Companies are rethinking their next-generation data protection methods as a result of exponential data growth. Traditional solutions are inefficient because they repeatedly back up duplicate files and sub-file data segments that exist across many databases and servers. When combined with traditional daily incremental and weekly full backups, the amount of duplicate data is staggering and often leads to decreased performance and increased cost.


De-duplication is the key.


EMC’s Avamar de-duplicates data right at the SQL Server to reduce the data moved over the network, reduce the backup storage used, and reduce backup times by 90%.  Avamar starts by dividing data into variable-length sub-file segments.  It compresses them and applies a unique hash identifier to each one.  It can then determine whether that segment has ever been backed up before or whether it’s unique and needs to be backed up.  If the data has been backed up previously, whether from this SQL Server or any other, physical or virtual, then Avamar will not back it up again.  Using this process, Avamar routinely reduces data by up to 99 percent before backing it up.  This can have a huge impact.
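The segment-hash-compare cycle described above can be sketched in a few lines.  This illustrates the general hash-based de-duplication idea only, not EMC Avamar's actual algorithm; for simplicity it uses tiny fixed-size segments where Avamar uses variable-length segmentation.

```python
# Minimal hash-based de-duplication sketch (illustration, not Avamar's code).
import hashlib
import zlib

SEGMENT_SIZE = 4  # tiny fixed-size segments so the example is easy to follow

seen = {}  # hash -> compressed segment; stands in for the backup store

def backup(data: bytes) -> int:
    """Back up only segments never seen before; return bytes actually stored."""
    stored = 0
    for i in range(0, len(data), SEGMENT_SIZE):
        segment = data[i:i + SEGMENT_SIZE]
        digest = hashlib.sha256(segment).hexdigest()  # unique identifier
        if digest not in seen:                        # only unique data moves
            seen[digest] = zlib.compress(segment)
            stored += len(segment)
    return stored

first = backup(b"ABCDABCDABCD")  # 12 bytes of data, but only one unique segment
second = backup(b"ABCDEFGH")     # "ABCD" is already stored; only "EFGH" is new
print(first, second)             # 4 4
```

The second call "backs up" 8 bytes of data while moving only 4, and the savings compound across every database and server that feeds the same store.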


In addition to the capabilities of Avamar, Data Domain de-duplication storage systems de-duplicate data inline, so that backup data lands on disk already de-duplicated. Data Domain integrates easily with Avamar through Data Domain Boost software, which combines the simplicity of Avamar with the performance and scale of Data Domain.  DD Boost also empowers application owners to back up de-duplicated data with the control they desire and eliminates storage silos for application protection within an enterprise environment.




Policy Driven Data Protection

While backup is essential to any data protection strategy, the potential for data loss today is so great that additional methods are needed. AppSync allows SQL DBAs to directly manage snapshots and replicas to meet their Service Level Agreements (SLAs) without requiring engagement of the storage team. Leveraging AppSync, application administrators and DBAs can spend less time learning new tools and quickly enable policy-driven data protection. Through the use of tiered Recovery Point Objectives (RPOs), AppSync helps to simplify the conversations between IT and business teams. When an application or SQL Server database is assigned to a service plan, the app owner or DBA can simply check the AppSync dashboard to ensure the SLAs agreed to are being met.


With its simple management console, AppSync allows SQL Server DBAs to take charge of their databases directly, instead of relying on storage administrators or other IT staff.



SQL Server data protection has many complexities and challenges. EMC provides a full suite of data protection technology, from backup software and target storage to snapshot and replication capabilities, as well as professional services, to provide a simple solution for your business and technical challenges. EMC enables DBAs and application owners to control their own backups and restores while still leveraging an enterprise backup solution. This allows for optimized de-duplication and yields great results in any environment.


Engage with your EMC account team today to discuss how we can provide the right protection strategy for you and your SQL Server environment.

