
Everything SAP at EMC

August 2011


Implementing FAST VP for SAP on VMAX, the art of creating FAST VP policies


In my first blog on “Implementing FAST VP for SAP on VMAX”, I reviewed some recommended best practices for defining Storage Pools and Storage Groups, which form the foundation of any FAST VP implementation: setting up the Storage Pools is the first step in deploying a VMAX, and defining the Storage Groups is the first step in implementing FAST VP.


In this blog, I am not going to expound on the wonderful benefits of deploying FAST VP for SAP, since you can read about them in this white paper.  While the concept of “putting the right data in the right tier at the right time” is very appealing, it runs counter to what a typical, highly disciplined SAP Basis person and their storage admin have been conditioned to think for years: in traditional thick LUN environments, the data layout of SAP Production systems is very carefully designed and implemented, yet FAST VP changes that “static” layout, demoting chunks of unused data to a lower tier and promoting hot data to a higher-performing tier.


This notion of “automated” data movement by FAST VP can be unsettling to the SAP Basis team, so it is important that they completely understand not only how FAST VP works, but also how it will be implemented, what the safeguards are, what the undo options are, and similar technical details.


The first point to make when starting a FAST VP conversation is that the vast majority of SAP customers agree that as much as 75% to 80% of the data in a SAP Production database is NOT touched during normal operations.  That data could and should be archived out, but people are reluctant to undertake the work – which is why SAP databases have grown so large these days, and it is no longer uncommon to find SAP ECC databases of around 10TB and SAP BW databases approaching 18TB.
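
To put those numbers in perspective, here is a back-of-the-envelope sketch (in Python) of how much of a 10TB ECC database could, in principle, end up on a lower tier if 75% to 80% of it is rarely touched; the figures are purely illustrative, not a sizing recommendation.

    # Back-of-the-envelope skew arithmetic; purely illustrative, not a sizing guide.
    db_size_tb = 10.0  # typical SAP ECC database size cited above
    for cold_fraction in (0.75, 0.80):
        cold_tb = db_size_tb * cold_fraction
        hot_tb = db_size_tb - cold_tb
        print(f"{cold_fraction:.0%} cold: ~{cold_tb:.1f} TB is a candidate for a lower tier, "
              f"~{hot_tb:.1f} TB stays on the higher-performing tiers")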


What is needed is a FAST VP policy (in practice, a whole set of FAST VP policies) that can retier data according to the actual SAP workload, in order to save money and, where possible, boost performance.


Our work with our showcase customer, and with other customers, has clearly shown that creating FAST VP policies is more art than science!  Here are some interesting thoughts and observations.


First, we have learned that you will need different FAST VP policies for SAP ECC than for SAP BW, since the I/O patterns of an OLTP application are quite different from those of an OLAP application.  Second, you will need different FAST VP policies for Production vs. non-Production environments since, once again, the I/O patterns of those environments differ.
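
As a minimal sketch (in Python), the distinctions above can be captured as separate policy definitions keyed by application type and environment; the tier percentages shown are hypothetical placeholders for illustration only, not EMC recommendations.

    # Hypothetical FAST VP policy definitions keyed by (application, environment).
    # Each percentage is the maximum share of a Storage Group allowed on that tier;
    # the numbers are placeholders only, not recommendations.
    fast_vp_policies = {
        ("ECC", "Production"):     {"EFD": 10, "FC": 100, "SATA": 50},
        ("BW",  "Production"):     {"EFD": 20, "FC": 100, "SATA": 40},
        ("ECC", "Non-Production"): {"EFD": 0,  "FC": 50,  "SATA": 100},
        ("BW",  "Non-Production"): {"EFD": 0,  "FC": 50,  "SATA": 100},
    }

    def policy_for(application: str, environment: str) -> dict:
        """Look up the tier-percentage policy for a given SAP system."""
        return fast_vp_policies[(application, environment)]

    print(policy_for("ECC", "Production"))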


Third, there are times when FAST VP policies should NOT be applied to certain SAP applications (such as APO, SCM, or LiveCache) or to certain data stores, such as redo logs; I will discuss the reasons why in future blogs.  Fourth, we have to carefully review the I/O patterns of the existing SAP Production system on thick LUNs, and even run some of the collected data through I/O modeling tools such as TierAdvisor, in order to gain a good understanding of how the FAST VP policies could be designed.


Finally, and this is perhaps the most important point, the effectiveness of FAST VP policies should be VALIDATED: either by extensive load testing in a non-Production environment or, if load testing is not possible, by a gradual roll-out of a series of FAST VP policies, each more “aggressive” than the previous one, to see how things work out under actual Production workloads while taking advantage of data movement under FAST VP control.
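
To illustrate the gradual roll-out idea, here is a small sketch of a sequence of increasingly “aggressive” policies; the percentages are invented for illustration, and the point is simply that each step is only applied after the previous one has been validated under the actual Production workload.

    # Hypothetical roll-out sequence: each step lets FAST VP move more data off FC.
    # The percentages are illustrative only.
    rollout_steps = [
        {"name": "conservative", "EFD": 5,  "FC": 100, "SATA": 20},
        {"name": "moderate",     "EFD": 10, "FC": 100, "SATA": 40},
        {"name": "aggressive",   "EFD": 15, "FC": 100, "SATA": 60},
    ]

    def next_step(current: int, performance_ok: bool) -> int:
        """Advance to a more aggressive policy only after the current one is validated."""
        if performance_ok and current + 1 < len(rollout_steps):
            return current + 1
        return current  # hold the current policy (or roll back) if performance suffers

    step = next_step(0, performance_ok=True)
    print("Next policy to apply:", rollout_steps[step]["name"])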


Monitoring the performance of the SAP systems at the application level (using ST03N, for example), the database level (using AWR), the OS level (using NMON), and the storage level (using SPA) is a crucial part of any successful FAST VP deployment.  The good news is that the tools needed for such monitoring are plentiful and readily available, and in future blogs I will discuss these tools and the measurement methodologies in detail.


As we monitor the performance, we should not hesitate to adjust the FAST VP policies as part of the validation process; once again, this is more art than science, since every SAP customer's workload is different.


We MUST adopt an Observe & Adjust strategy when implementing FAST VP policies, because the goal, indeed the first ground rule of designing good FAST VP policies, is that the resulting performance (when FAST VP is initially turned on) be at least equivalent to, if not better than, the performance of the customer’s existing storage environment.
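
As a sketch of what that ground rule might look like in practice, the snippet below compares hypothetical BEFORE and AFTER measurements (for example ST03N dialog response times, AWR DB time, and SPA latency) and flags any metric that regressed; the metric names and values are invented for illustration.

    # Hypothetical baseline (pre-FAST VP) vs. current measurements; lower is better
    # for every metric shown, and all names and values are illustrative only.
    baseline = {"st03n_dialog_resp_ms": 450, "awr_db_time_s": 1200, "spa_read_latency_ms": 8.0}
    current  = {"st03n_dialog_resp_ms": 430, "awr_db_time_s": 1250, "spa_read_latency_ms": 6.5}

    def regressions(before: dict, after: dict, tolerance: float = 0.05) -> list:
        """Return the metrics where 'after' is worse than 'before' beyond the tolerance."""
        return [m for m in before if after[m] > before[m] * (1 + tolerance)]

    bad = regressions(baseline, current)
    if bad:
        print("Adjust the FAST VP policy; regressions in:", bad)
    else:
        print("Performance is at least equivalent to the baseline; proceed.")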


In the next blog, I will discuss some recommended FAST VP policies that were given to customers and comment on how they should be implemented.  In future blogs, I will cover topics ranging from measuring the effectiveness of FAST VP policies, to validating that effectiveness with load testing, to designing policies for non-Production environments, to documenting the cost savings and/or performance gains, all of which adds up to a TCO analysis validating that implementing FAST VP for SAP on a VMAX is the right thing to do.


Stay tuned!

 

Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Solutions Group


SAP System Cloning & Copy, what is it all about?

 

Every SAP customer needs to make a copy of their SAP Production system, for a wide variety of reasons: refreshing the Production database for new development work, creating a sandbox for testing new patches or versions, providing a functional or load-testing environment with the latest Production data, or setting up a training environment.


SAP customers are typically familiar with how to perform a system copy, which is in fact SAP’s recommended way of making a duplicate for the purposes mentioned above.

 

But the problem is that SAP’s traditional system copy is such a time-consuming and resource-intensive process that most customers don’t do it very often, so great is the pain and trauma this sort of operation can cause.  It is quite common to hear customers state that “we only do system copies once or twice a year” and that “it takes anywhere between 3 and 7 days for us to fully complete a system copy”.


So several vendors, including EMC, have unveiled various SAP system cloning and copy solutions to address this challenge and offer the SAP customer something more manageable.


In this first of a series of blog posts, I will discuss why SAP system copies are such painful exercises, review what would make for a good SAP system copy solution, and point out which EMC technology could be used to address this requirement.  I will eventually conclude with a complete explanation of EMC’s own SAP Intelligent Cloning solutions.


In an ideal world, making a copy or clone of the SAP Production system would be a one-button operation, akin to the mythical Easy Button (of Staples fame).  What makes this so difficult to achieve is that each SAP customer’s needs are different, and varying degrees of complicated pre-processing and post-processing activities must be performed, depending on the purpose of the copy, before it is useful to the SAP customer.


OK, so why is a SAP system copy such a resource-intensive operation?  Because the CPU of the SAP Production server has to process the copying of the data from the source database to the destination database.  Depending on how large the database is and how busy the Production system is, this copy operation can take hours, and while it is underway, the performance of the Production database server and the storage subsystem will be adversely impacted.


What’s needed is a SAP system copy solution that does not put any overhead on the Production database server and that can make a copy of the SAP database very quickly (we’re talking about copying terabytes of data in minutes, not hours).  Many vendors offer solutions that can copy the data very fast, for example using snapshot technologies.  But only EMC offers a solution that copies the data very fast and puts no overhead on the SAP Production database server: on a VMAX with Enginuity 5875, TimeFinder Snap allows the creation of up to 128 independent snaps of a database very quickly, and since the copy is created at the storage array level using the SYMCLI, the SAP Production database server is not even involved, so there is no CPU overhead whatsoever.
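
To make the array-level workflow a little more concrete, here is a minimal sketch that drives SYMCLI from a management host using Python.  The device group name is a placeholder, and the exact symsnap options are an assumption on my part; please consult the Solutions Enabler and TimeFinder documentation for the precise syntax on your Enginuity level.

    import subprocess

    # Hypothetical device group containing the SAP Production standard devices and
    # the virtual devices (VDEVs) that will receive the snaps.
    DEVICE_GROUP = "sap_prd_dg"

    def symcli(args: list) -> None:
        """Run a SYMCLI command on the management host, not on the SAP server."""
        print("running:", " ".join(args))
        subprocess.run(args, check=True)

    # Illustrative TimeFinder/Snap sequence; the command options are assumptions.
    symcli(["symsnap", "-g", DEVICE_GROUP, "create", "-noprompt"])    # pair STD devices with VDEVs
    symcli(["symsnap", "-g", DEVICE_GROUP, "activate", "-noprompt"])  # make the point-in-time image usable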


Once the copying has been done, there is still a lot more work before the copy becomes usable.  Unless the copy is purely for backup purposes (to tape or disk, or as a point-in-time copy or snapshot), SAP customers need to perform quite a few post-cloning activities (such as changing the instance name from PRD to QAS, changing the path names for data files, turning off printers and cancelling current print jobs, cancelling pre-scheduled batch jobs, redirecting RFC destinations, and so on) before the copy of the SAP database can be turned over to the group that requested it.
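
Even if most of these steps remain manual, they can at least be tracked consistently.  Here is a small sketch of such a checklist for a hypothetical PRD-to-QAS copy; the task list simply mirrors the examples above and will differ from customer to customer.

    # Hypothetical post-cloning checklist for a PRD -> QAS copy; the tasks mirror
    # the examples in the text and are mostly executed manually in the SAP GUI.
    post_cloning_tasks = [
        {"task": "Change instance name / profiles from PRD to QAS", "done": False},
        {"task": "Adjust path names for data files",                "done": False},
        {"task": "Turn off printers and cancel current print jobs", "done": False},
        {"task": "Cancel pre-scheduled batch jobs",                 "done": False},
        {"task": "Redirect RFC destinations",                       "done": False},
    ]

    def open_tasks(tasks: list) -> list:
        """Return the tasks that still block handover of the copied system."""
        return [t["task"] for t in tasks if not t["done"]]

    print("Remaining before handover:", open_tasks(post_cloning_tasks))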

 

This post-cloning work is often done manually by SAP Basis personnel, and it is very difficult, almost impossible, to automate with scripts, since many of these tasks involve logging into SAP with the SAP GUI and executing transactions.  SAP customers often lament that this post-cloning work can take days, and it is the reason why making a SAP system copy is such a painful exercise.


In the next blog post, I will discuss what’s involved in these SAP post-cloning tasks, including when (and when not) to execute the infamous BDLS job (which can take days no matter what), what SAP customers commonly do, and how various vendors, including EMC, handle the pre- and post-cloning tasks.


In subsequent blog posts, I will review if and how snapshots of the SAP Production database can be used as point-in-time (PIT) copies for easy recovery, and I will also discuss how PIT copies can perhaps be handled much better by RecoverPoint CDP.


Next blog post: what are some of the post-cloning activities and how are they being handled?

 

Stay tuned!

 

Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Solutions Group


Implementing FAST VP for SAP on VMAX, how to get started?


EMC’s FAST VP on VMAX has certainly been a game changer, as it allows the SAP customer, for the first time, to put data in the right tier for both cost savings and performance.  But a lot of people are wondering how to get started.  How will SAP data, which has often been meticulously laid out on thick LUNs, behave in the Virtual Provisioning environment, and what will happen when chunks of SAP data are moved by FAST VP among the various storage tiers?


As the author of the FAST VP for SAP use case and the Project Manager for the Proven Solutions and White Paper designed to support the launch of Enginuity 5875 (code named Danube, after that lovely river in Austria), I have a unique perspective on what it takes to get going.


I also had the privilege of working with a great team of EMC experts to assist our reference and showcase customer in deploying FAST VP in their Production environment, and I would like to introduce the members of this superb team:

  1. John Burkhalter, EMC SSpeed, Symmetrix Corporate Systems Engineer and FAST VP guru
  2. Allan Stone, EMC SSpeed, SAP Solutions Architect and SAP storage performance guru
  3. Andrew Chen, SAP Solutions Engineer at the Hopkinton Solutions Engineering Center and SAP application performance guru
  5. Brian Redmond, EMC SSpeed, Symm Champion West
  5. Jeff Kucharski, ATC for the showcase customer
  6. Tim Nguyen, your humble blogger

The results of our work will be published in a FAST VP for SAP Best Practices Guide in the near future.


I would like to share some of this information with you in this blog, which is the first in a multi-part series on FAST VP for SAP.  Please feel free to react, comment, and even challenge our theories and recommendations, as I wish to have a vibrant dialogue on this important EMC technology.


For this first blog post, I will discuss how we defined Storage Pools and Storage Groups on a virtually provisioned VMAX running Enginuity 5875, with the Q2 Service Release update.


Subsequent blog posts will discuss what we did to gather the BEFORE and AFTER metrics of this customer’s SAP environment, both from the SAP performance perspective (using SAP tools to measure response times) and from the IOPS perspective (using EMC tools) in order to evaluate the effectiveness of FAST VP so that a TCO analysis can be performed.


In future blog posts, we will cover the tools and techniques used in detail, as well as how SAP data was migrated from thick LUNs on a DMX4 to the virtual provisioning environment on the VMAX.

 


BEST PRACTICES for Storage Pools and Storage Groups under control of FAST VP in a SAP environment


Whether SAP data sits on thick LUNs or virtually provisioned LUNs, there are certain ground rules that must be followed.  The first is that database redo logs must always be kept separate from the rest of the SAP database.  This rule is particularly important in Oracle environments, but it certainly applies to Microsoft SQL Server and IBM DB2 environments as well.


Accordingly, we have the following Thin Pool and FAST VP tier recommendations:


Storage Pools: these can be defined using SMC (Symmetrix Management Console) during the initial preparation of the VMAX, to allow it to receive data (a summary sketch of the pool layout follows the list below).

  1. Four separate virtually provisioned thin pools
    • FC RAID 1 for database redo logs
    • FC RAID 5, either 3+1 (recommended) or 7+1 for SAP data files
    • EFD RAID 5 3+1 or 7+1
    • SATA RAID 6 14+2
  2. All thin devices should initially be bound to one of the FC pools, before FAST VP policies are applied and FAST is turned on.  As mentioned earlier, database redo logs should be bound to the FC RAID 1 thin pool, while all other data types should be bound to the FC RAID 5 thin pool
  3. Meta volumes should be striped
  4. All virtually provisioned thin pools should be associated with a FAST VP tier for reporting and dashboard purposes, regardless of whether or not that thin pool will be under FAST VP control (for example, the FC RAID 1 thin pool for database redo logs should never be under FAST VP control – more on this topic in future blog posts)
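
Here is the summary sketch referred to above: the four thin pools expressed as data, including which pools are associated with a FAST VP tier yet kept out of FAST VP control.  The pool names are invented for illustration.

    # Thin pools per the recommendations above; pool names are hypothetical.
    # Every pool is associated with a FAST VP tier for reporting, but only some
    # are ever placed under FAST VP control.
    thin_pools = [
        {"name": "FC_R1_LOG",  "technology": "FC",   "raid": "RAID 1",      "use": "database redo logs", "fast_vp_control": False},
        {"name": "FC_R5_DATA", "technology": "FC",   "raid": "RAID 5 3+1",  "use": "SAP data files",     "fast_vp_control": True},
        {"name": "EFD_R5",     "technology": "EFD",  "raid": "RAID 5 3+1",  "use": "hot data",           "fast_vp_control": True},
        {"name": "SATA_R6",    "technology": "SATA", "raid": "RAID 6 14+2", "use": "cold data",          "fast_vp_control": True},
    ]

    controlled = [p["name"] for p in thin_pools if p["fast_vp_control"]]
    print("Pools under FAST VP control:", controlled)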

 

Storage Groups: these can also be defined using SMC, to hold the data to be retiered by FAST VP policies on a per-server basis (a counting example follows the list below).

  1. Each database server will have two Storage Groups, one for data files which will be under FAST VP control, and the other for redo logs which will not be under FAST VP control.  For example, there will be 8 Storage Groups if storage is being provisioned for 4 database servers
  2. SAP application servers will have a single Storage Group, which could even be on SATA, and should not be put under FAST VP control
  3. Certain SAP applications, such as SCM, APO, and LiveCache, should not be put under FAST VP control, and will therefore have a single Storage Group
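
And here is the counting example referred to above, using a hypothetical landscape of four database servers, a handful of application servers, and a couple of excluded applications:

    # Storage Group counting rule from the list above; the landscape is hypothetical.
    db_servers = 4     # each gets a data-file SG (under FAST VP) plus a redo log SG (not under FAST VP)
    app_servers = 6    # each gets a single SG, not under FAST VP control
    excluded_apps = 2  # e.g. SCM, APO, LiveCache: a single SG each, not under FAST VP control

    total_storage_groups = db_servers * 2 + app_servers + excluded_apps
    under_fast_vp = db_servers  # only the data-file SG of each database server

    print(f"Total Storage Groups: {total_storage_groups}, under FAST VP control: {under_fast_vp}")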

 

In upcoming blogs, I will discuss how FAST VP policies can be defined, how they should be applied, which SAP applications work better with FAST, which SAP applications will not benefit from FAST at all, and why that is so.

 

Stay tuned!

 

Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Solutions Group
