
Everything SAP at EMC


Come Visit Us @ EMCWorld 2016!

Posted by Kygau Apr 8, 2016


 

"Modernize IT for SAP" @ EMC World 2016

 

The most exciting time in tech is coming soon: EMC World 2016 returns to Las Vegas May 1-5 at The Venetian!

 

We hope you will join us and learn how EMC is helping SAP customers Modernize IT, moving from a data-center-centric strategy to a business-services strategy while continuing to be the market’s proven Transformation Leader.

 

For those planning to come, we would like to give you a head start in planning your time with us. Below, I have outlined the EMC World venues for SAP that I hope you will find of interest.

 

Meet with EMC SAP Subject Matter Experts on the Show Floor

 

The SAP team at EMC is looking forward to sharing our experiences, best practices, and solutions for SAP, including:

  • Deploy Managed Cloud Services for SAP
  • Deploy Cloud IaaS for SAP
  • Set the Data Center Foundation

We are located in the VCE Solutions Pavilion, Booth #739!

 

Birds of a Feather Sessions for SAP

Best Practices For Deploying Mission-Critical Applications In Enterprise-Class Clouds

Tuesday, May 3, 1:30 PM - 2:30 PM

Attend this session to discuss the best practices for moving mission-critical applications to enterprise-class clouds with the technical experts from Virtustream. Interact with the people who built the application management platform for sophisticated applications such as SAP so enterprises could realize the benefits of cloud.

 

Choose The Right Cloud Solution For Your SAP Landscape

Tuesday, May 3, 1:30 PM - 2:30 PM

This interactive session discusses the various cloud options (e.g. on-prem, off-prem, hybrid) for efficiently running your mission-critical SAP applications. We will answer questions and provide guidance.

 

Lecture Sessions for SAP

SAP HANA Made Simple With Converged Infrastructure

Monday, May 2, 3:00 PM - 4:00 PM

Thursday, May 5, 1:00 PM - 2:00 PM

SAP HANA is a truly transformational in-memory DBMS, providing the ability to run OLAP and OLTP systems on the same platform. And now, with S/4HANA, companies are looking to mainstream HANA in the data center. IT needs highly resilient infrastructure for this mission-critical platform that can scale for HANA and non-HANA workloads. Learn about VCE Vblock advantages and best practices for running SAP HANA workloads.

 

Archiving and Transitioning SAP with InfoArchive

Tuesday, May 3, 12:00 PM - 1:00 PM

This session will provide you with a practical and technical understanding of archiving SAP enterprise applications with InfoArchive. We will discuss how InfoArchive can help reduce expenses in transitioning to SAP HANA, how to cost-effectively retire legacy or redundant SAP instances, and how SAP records can be managed and preserved at a business-object level in an open XML format.

Deploying SAP With vSphere 6 & Latest Solutions In The VMware SDDC

Monday, May 2, 1:30 PM - 2:30 PM

Tuesday, May 3, 12:00 PM - 1:00 PM

This session covers different use cases of SAP NetWeaver and HANA with these latest features and solutions. Topics will include:

  • How NSX can manage security of the multi-tier SAP landscape
  • New opportunities for high availability: workload validation of SAP Central Services in a multi-vCPU VM protected with FT, and third-party cluster solutions for virtual SAP
  • Monitoring SAP and EMC with vRealize Operations 6.x
  • SAP on VMware sizing to help estimate VMware vCPU resources based on business requirements
  • Storage design on EMC and the impact of virtual volumes for SAP databases

 

Game Changing SAP Best Practices For HANA & Traditional SAP, Consolidation, Converged Infrastructure, & iCDM On XtremIO

Monday, May 2, 1:30 PM - 2:30 PM

Thursday, May 5, 1:00 PM - 2:00 PM

99% of the Fortune 100 run their business on SAP, and almost 90% of those SAP architectures are built on an EMC SAP data platform. Consolidation, reduced complexity, and performance are primary focal points for these businesses, as is reducing cycles spent on infrastructure management to free up more focus on business innovation via SAP. In this session you will hear from customers who are perfect examples of this new game-changing, all-flash-driven SAP mantra. They will show how they accelerated SAP performance in Production with EMC XtremIO All-Flash Storage by as much as 110%. They will show one of the world’s most advanced virtualized vHANA architectures on VCE Converged Infrastructure. The customer teams will share best practices and lessons learned on reducing SAP costs via XtremIO Virtual Copies (XVC snapshots) and deduplication, on accelerating performance, and on how EMC empowers their NextGen SAP strategies.

 

Data Domain: Best Practices For Database Backup (Microsoft, SAP, DB2)

Tuesday, May 3, 3:00 PM - 4:00 PM

Thursday, May 5, 11:30 AM - 12:30 PM

Get an in-depth look at how Data Domain provides advanced integration with leading enterprise applications and databases for unparalleled performance and control. Learn how application owners can gain 50% faster backups and complete control of backup and recovery. This session focuses on best practices for backing up Microsoft SQL Server, SAP, IBM DB2, and more.

 

ProtectPoint: What's New In 2016 - 20x Faster Backup For VMAX & XtremIO

Monday, May 2, 8:30 AM - 9:30 AM

Wednesday, May 4, 8:30 AM - 9:30 AM

Discover how ProtectPoint significantly reduces cost and complexity by sending data directly from primary storage to Data Domain - eliminating the need for a traditional backup application.

Virtustream: Deploy SAP With Virtual HANA In Minutes With Virtustream xStream App Director

Tuesday, May 3, 3:00 PM - 4:00 PM

Wednesday, May 4, 8:30 AM - 9:30 AM

Virtustream’s xStream App Director provides centralized management, visibility, and control of SAP landscapes by simplifying the automation and control of SAP in secure virtual private and hybrid clouds. Find out how it can help manage SAP systems through the automation and application control of time-consuming and repetitive processes.

Virtustream xStream: Cloud Management For Mission-Critical Enterprise Applications

Monday, May 2, 4:30 PM - 5:30 PM

Wednesday, May 4, 1:30 PM - 2:30 PM

The Virtustream xStream cloud management platform delivers the industry’s first fully automated management platform for mission-critical enterprise applications such as SAP. Attend this session to learn how xStream offers a cloud management platform focused on mission-critical application management and optimization for private, public, and hybrid cloud environments.

Explore Virtustream xStream with App Director Service Module (Hands-on Lab)

TBD

Explore Virtustream xStream with App Director Service Module. Learn how to configure a cloud tenant, roles and users in xStream and how to create service offerings, manage VM instances and provision SAP systems. In addition, lab users can report consumption and manage resources, migrate a system from cloud A to cloud B and automate SAP operations with xStream App Director Service Module.

 

I hope this summary is helpful! If you have any questions, just let us know and we will be happy to help out.

 

See you!

Kai-Yin Gau

EMC Solutions Marketing


The XtremIO storage platform is a different beast.  It was engineered to fully leverage the unique capabilities and characteristics of flash storage media.  The result is a platform that eliminates many potential problems that we had to design around with previous generations of storage systems.  For example, the storage industry has for years strongly recommended that application architects refrain from mixing production workloads for transaction processing and reporting, or production and non-production systems, on one set of infrastructure.  It was a necessary complication that most large organizations lived with.

 

Enter XtremIO.  The nearly flat I/O response characteristics over XtremIO's entire rated performance profile eliminate the need to isolate decision support, transaction processing, or development/test landscapes to ensure consistent performance. The big challenge that we face now is getting customers comfortable with changing years of industry-recognized best practices.

 


ComputerWeekly.com recently published an article about a Thailand-based IT distributor, SiS Distribution, that realized incredible improvements in their SAP operations using XtremIO all-flash storage.  SiS Distribution uses SAP ECC and SAP Business Warehouse to manage its sales, inventory, finance and accounting system.

 

As their business grew, SiS Distribution was challenged with long data transfer times between their ECC and BW systems.  The growth also led to unpredictable query response times as the number of ECC transactions climbed and processing slowed. As more users accessed the system during the day, the business had to run reports after midnight so they would not impact daily work, hoping the results would be available by 8 AM the next morning.


Server infrastructure and high-speed networking improvements were made, but they did not adequately address the performance issues.  SiS was faced with two possible solutions:  1) reduce the richness of the user data transfer requirements (not a long-term option for a growing business) or 2) explore All-Flash Array (AFA) technology as the platform for their SAP systems.  They decided to implement EMC’s XtremIO All-Flash Array X-Brick solution.


With XtremIO’s always-on inline data deduplication, inline compression, and thin provisioning features, SiS was able to manage 1.4TB of consolidated SAP data with 500GB of physical XtremIO storage.   This left plenty of headroom on their single 5TB X-Brick for further consolidation.  When the capacity of that investment is fully utilized, SiS can scale out non-disruptively with additional X-Bricks, with no application downtime.
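
To put those figures in perspective, here is the back-of-the-envelope math implied by the numbers above. This is just arithmetic on the published figures, and it assumes the same reduction ratio holds for future data:

```python
# Data reduction and headroom math for the SiS figures above:
# 1.4 TB of logical SAP data stored in 500 GB of physical flash on a 5 TB X-Brick.
logical_tb = 1.4
physical_tb = 0.5
brick_capacity_tb = 5.0

reduction_ratio = logical_tb / physical_tb              # 2.8:1
headroom_tb = brick_capacity_tb - physical_tb           # 4.5 TB of physical space left
effective_headroom_tb = headroom_tb * reduction_ratio   # ~12.6 TB more logical data

print(f"data reduction: {reduction_ratio:.1f}:1")
print(f"physical headroom: {headroom_tb:.1f} TB "
      f"(~{effective_headroom_tb:.1f} TB effective at the same ratio)")
```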


After moving to XtremIO, application response time dropped from 20 milliseconds to only 3 milliseconds.  SiS previously had to limit the number of concurrent business users requesting reports during business hours, and the reports still took up to an hour to process. After moving to XtremIO, sales teams now have instant access to all reports with no delays. SiS Distribution was also able to reduce backup time from eight hours to only three hours, and their SAP system now handles its 150 full-time employees plus an additional 1,000 sellers at any one time with no performance issues.


SiS has taken an additional step to reduce complexity in their SAP landscape management that was virtually unthinkable even a few years ago: they use the same storage environment for both production and testing.  Using XtremIO’s Virtual Copy technology, their IT department can now create copies of full production SAP databases and generate additional workloads for real-life testing scenarios.


SAP offers a full suite of software for operations and analytics.  To take full advantage of that capability, you need a data platform that can reduce complexity and offer reliable performance and protection.  SiS found that the best platform for managing their business with SAP is EMC XtremIO.

 

Please check out the full ComputerWeekly.com article here: Thai firm chooses flash storage to speed up supply chain.

 

Thanks for reading,

Phil Hummel @GotDisk

Dave Simmons @ComputerDigest

 



Running virtualized SAP HANA in Production on a Vblock in a Hybrid Cloud – Why Not?

 

So, 2014 has been quite a year!  We have had so much excitement around virtualized SAP HANA being certified for Production use, the wonderful market acceptance of XtremIO for SAP, the release of the EMC Hybrid Cloud for SAP, and of course, the amazing Project RUBICON, where a team of experts from Deloitte, EMC, VMware, and Cisco worked together to prove that it is possible to have business continuance and disaster recovery for virtualized SAP HANA over a long distance of more than 500 km!

 

As I look back at the year, it is clear that certain topics are hotter than others, and here are some interesting statistics from our EMC Community Network tracking team about how many of you have read what I wrote:

  1. My post “Tim Nguyen's blog on long distance recovery for virtualized SAP HANA made simple” was read an astounding 3,465 times!  Thank you for referring this post to so many people, and I am in awe of the power of social media
  2. My post “Tim Nguyen's blog post on redefining SAP infrastructure with XtremIO - Part 1” was read 1,120 times
  3. My post “Tim Nguyen's blog on the momentum for virtualized SAP HANA in Production: we have crossed the Rubicon!” was read 840 times
  4. My post “Tim Nguyen's blog on virtualized SAP HANA in Production & why the future is bright for very large HANA model” was read 792 times
  5. My post “Tim Nguyen's blog how EMC innovations have changed the state-of-the-art of SAP infrastructure” was read 729 times

 

I am incredibly humbled and proud of your interest and support in the many blog posts that I created, and I hope to continue to keep you all entertained, informed, and amused in 2015!

 

I would also like to take this opportunity to thank Chad Sakac, who “pushed” me almost 3 years ago, when I was working on his team, to blog and share with you all what we do at EMC in creating solutions for SAP customers. Chad’s own widely read blog Virtual Geek is a constant source of inspiration to me!

 

So, there is no question that quite a few of you are very interested in virtualized SAP HANA and Project RUBICON, since it offered real answers to nagging questions like “how can I offer long distance BC/DR for an in-memory database like SAP HANA?”  I’m happy to share that the world-class team of experts from Deloitte, EMC, VMware, Cisco, and now VCE, have just completed Project RUBICON Phase 2.

 

Just as we did in Phase 1, when we wanted definitive answers to the question of long distance BC/DR for SAP HANA, in Phase 2 we aimed to conclusively answer the question “can customers confidently run virtualized SAP HANA on a Vblock in Production”, and even in a Hybrid Cloud setting.   After all, WHY NOT?

 

So, if SAP and VMware have already certified SAP HANA to run on VMware in Production, what’s the concern? Well, as always, things are not so simple!  In addition, there are more SAP customers running in Production on the Vblock than ever before, so naturally customers are asking, “why do I need to buy a HANA appliance when I can and should be running my HANA on the Vblock as well?”

 

SAP customers are typically very cautious and conservative since, after all, SAP is a truly mission-critical application which requires “guarantees” in the areas of performance, stability, and scalability. So of course there is a level of skepticism about running something as resource-intensive as SAP HANA on a converged infrastructure such as the Vblock, where the compute, network, and storage infrastructure is shared with many other SAP and non-SAP applications.

 

Many people reason that with a bare-metal, dedicated SAP HANA appliance, the hardware vendor working with SAP has more or less guaranteed the performance and stability of that certified appliance – true enough.  So a physical appliance checks off 2 of the 3 boxes, because unfortunately a physical appliance really does not scale very well.  You will need to take downtime to add more compute and storage capacity to an appliance if you need more capability.

 

We all know that, thanks to VMware, a Vblock can be reconfigured on the fly to add more vCPU, vRAM, and so on, increasing the capacity and capability of the virtualized SAP HANA appliance WITHOUT taking any downtime!  Now, can we “guarantee” the performance and stability of the virtualized SAP HANA appliance as if it were a bare-metal, dedicated SAP HANA appliance?

 

The answer is ABSOLUTELY YES, and the Project RUBICON team set out to prove it!  Here is what we have done:

  1. We configured a virtualized SAP HANA appliance on the Vblock, and then proceeded to “certify” it by running the HWCCT test, the same one used by SAP to certify all SAP HANA physical appliances.  VCE, as the newest stakeholder of Project RUBICON, was responsible for the design & certification of the virtualized SAP HANA appliance on the Vblock, as well as for the creation of the test plan and the definition of the performance thresholds to be monitored

  2. We then used 3 market-leading tools to monitor, alert on, and report on the performance and stability of the virtualized SAP HANA appliance on the Vblock as a massive Production-level workload was applied to it:
    1. SAP IT Process Automation by Cisco (also referred to as Cisco ITPA, and sold by SAP) is used to monitor, alert on, and report on the performance and stability of the computing and network tiers. Cisco ITPA is fully HANA-aware, thanks to its SAP HANA extensions, so it works in conjunction with the SAP Solution Manager used by practically all SAP customers around the world
    2. Blue Medora vCenter Operations Management Pack for SAP (also referred to as Blue Medora vCOPS) is used to monitor, alert on, and report on the performance and stability of the computing, network, and even storage tiers SPECIFICALLY for SAP HANA.  Those SAP customers who are VMware-centric and therefore more comfortable working with vCenter and vCOPS will likely prefer this tool and approach to monitoring not only their SAP HANA instances but also their SAP NetWeaver instances
    3. EMC ViPR SRM is used to monitor, alert on, and report on the performance and stability of the storage tier, although it can also monitor the network and VMware components.  But EMC ViPR SRM’s greatest contribution is in the monitoring, alerting, and reporting on the performance and stability of the EMC RecoverPoint appliances.  Those of you familiar with Project RUBICON Phase 1 will recall that RecoverPoint plays a crucial role in enabling the long distance BC/DR for virtualized SAP HANA, so “guaranteeing” the performance and stability of this key component of the architecture is a must!

  3. Finally, we set out to create a massive Production-level workload to run through the virtualized SAP HANA appliance on the Vblock, so that we could be satisfied not only that the virtualized SAP HANA appliance is up to the task, but that any performance or stability issue will be caught and documented, and that remedial actions can quickly be applied.  Working with Worksoft and its amazing Worksoft Performance tool, designed for performance and load testing across SAP business processes, the Deloitte team recreated a “month-end closing” workload for Project RUBICON Phase 2!  Why not?
    1. Why not have thousands of SAP users (at least 1,000 dialog users anyhow) going at the virtualized SAP HANA appliance on the Vblock to place orders, view stock availability, ship products, and so on?
    2. Why not also have massive and resource-intensive MRP jobs run in the background to summarize and consolidate all the activities?
    3. Why not create tens of gigabytes of SAP HANA logs to see if there will be any issue with the infrastructure keeping up with logs replication to the remote site?

  4. To top it all off, why not relocate some or all of this SAP HANA workload from Suwanee (Deloitte) to Durham (EMC, acting as the cloud service provider) in a Hybrid Cloud fashion over the VPN and a distance of 550 km?  And why not use vCAC to provision Deloitte's remote CloudService Fabric endpoints in Durham from Suwanee and monitor them?

 

Why not?

 

The Project RUBICON Phase 2 team of Deloitte, EMC, VMware (with Blue Medora), Cisco, and VCE, along with Worksoft, did ALL that and more, and the amazing results will be shared with you in my upcoming blog posts, along with a white paper and demo video. It looks like 2015 will be even more exciting than 2014!  Happy New Year!

 

See you again in 2015!


Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP




EMC Hybrid Cloud for SAP: redefining simplicity with intelligent KPI monitoring using ViPR SRM

 

Back in October 2014, I attended the SAP TechEd && d-code conference in Las Vegas, where SAP boldly talked about the “next steps to deliver innovation in the cloud with the SAP HANA platform” so that customers can truly innovate and simplify their business and development processes. SAP also reaffirmed that running SAP HANA in TDI mode continues to gain momentum everywhere, as lots of customers were eager to hear.

 

During SAP TechEd && d-code Las Vegas, EMC Global Solutions Marketing launched the EMC Hybrid Cloud for SAP Technical Demo Video to supplement the white paper released at VMworld in San Francisco in August (check out my blog post from that event).  This 8-minute video explains in simple terms how the EMC Hybrid Cloud for SAP (EHC for SAP for short) can be the bridge to the future, enabling IT transformation while helping customers redefine simplicity, choice, & agility in deploying SAP landscapes in an on-premises cloud, an off-premises cloud, or both.

 

When people discuss and debate the merits of implementing a virtualized SAP environment, in the cloud so to speak, the conversation often centers on the ease and simplicity of provisioning: for example, a new SAP sandbox or test environment can be stood up in minutes instead of weeks.  But running SAP in the cloud, regardless of whether it is on-premises, off-premises, or both in a hybrid cloud fashion, provides benefits that go far beyond provisioning!  In fact, the powerful capabilities offered in the areas of monitoring, workload relocation, and multi-tenancy chargeback will soon be the more interesting points to consider and understand!

 

Many customers and experts agree that performance monitoring, alerting, and compliance reporting are often afterthoughts, put in place only after some sort of crisis has caused an outage or a disruption to the business.  Since SAP is such a mission-critical system, you must have end-to-end monitoring of all KPIs (key performance indicators) across the compute, network, and storage tiers on the same pane of glass in order to quickly react to any issues.  EHC for SAP incorporates key monitoring tools offering unparalleled monitoring capabilities for your SAP cloud environment:  EMC ViPR SRM and VMware vCenter Operations with the Blue Medora plug-in.

 

Let me spend the rest of this blog post discussing one of the key tools integral to EHC for SAP: EMC ViPR SRM, which provides comprehensive monitoring and alerting on not only the storage tier, but also the compute and network tiers.  For EHC for SAP, however, ViPR SRM focuses primarily on the critical storage tier of the cloud infrastructure, including components critical for long distance BC/DR, such as EMC RecoverPoint.

 

People reason that since “it’s in the cloud”, there should no longer be any worry regarding storage since it’s now someone else’s problem, right?  Well, that may be true if you are talking about a public cloud, but if it’s a private cloud running on your premises, then you DO in fact have to worry about the performance, stability, and availability of your storage platform.

 

And since it’s a cloud environment, your storage is a shared resource servicing hundreds or thousands of SAP virtual machines, which makes it even harder to pinpoint which SAP environment and virtual machines are being impacted by a particular problem.

 

EMC ViPR SRM offers unparalleled insight into the popular EMC storage platforms for SAP such as VMAX, VNX, and XtremIO (and some popular non-EMC SAP storage platforms from Hitachi, IBM, and others), and it can provide visibility into the network and compute tiers as well.  As previously mentioned, for EHC for SAP, ViPR SRM concentrates on monitoring the critical storage tier: you can drill down to the storage processor level and LUN level if needed, and view the complex interaction of the data stores and the replication solutions for data protection and disaster recovery.  You can easily perform root cause analysis to troubleshoot any problem, and you have the necessary reporting to show that key SLAs (Service Level Agreements) have been met or even exceeded.

 

One could ask why anyone should care about a tool for visualizing, analyzing, and optimizing storage resources when running SAP in a cloud environment. Well, for a lot of people, SAP in the cloud typically means virtualized SAP running on VMware (the market share leader in SAP installations), and every VMware virtual machine needs a supporting VMDK data store, which is itself a group of files!  So yes, such a tool is not only relevant, it is an absolute necessity to assure availability, performance, and resiliency of the cloud infrastructure, both in a private cloud setting and in a hybrid cloud setting, where long distance disaster recovery and workload relocation are key drivers for adoption.


I know that the details in the screen shot below are hard to read, but I wanted to provide a "blurry" glimpse of the richness of the EMC ViPR SRM console as integrated into EHC for SAP. You can download the EHC for SAP white paper to get a better view, or go to the EMC ViPR SRM page on EMC.com for more details.


[Screen shot: EMC ViPR SRM console integrated into EHC for SAP]

 


In the screen shot, EMC ViPR SRM easily allows Cloud Administrators to drill down to the following components of the storage tier:

  1. Storage Capacity: this one is obvious, as you need to know if the cloud environments being hosted on a particular array will run out of space
  2. Storage Path Details: this data point is crucial for dealing with performance issues due to bottlenecks in the data storage path
  3. Storage Performance: another obvious one, useful for redistributing storage workload as needed to provide the scalable performance required in a cloud environment
  4. CPU performance of the array front-end processor or engine: this metric provides more detail on how the array is behaving, useful in capacity planning and performance optimization
  5. Memory performance of the array front-end processor or engine: another needed metric to better understand how the array is behaving, useful in capacity planning and performance optimization
  6. Events: this feature is essential to Cloud Admins so that they can be alerted to any issue which may impact the performance of EHC for SAP (a toy alert check is sketched below)
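
To make the alerting idea concrete, here is a purely illustrative threshold check over KPI categories like those above. This is not ViPR SRM's API; the metric names and thresholds are invented for the example:

```python
# Toy KPI alert evaluation: compare sampled metrics against static thresholds.
# Metric names and threshold values are hypothetical, for illustration only.

THRESHOLDS = {
    "array_capacity_used_pct": 80.0,   # 1. Storage Capacity
    "storage_path_latency_ms": 10.0,   # 2. Storage Path Details
    "frontend_cpu_pct": 85.0,          # 4. CPU performance of the front end
    "frontend_memory_pct": 90.0,       # 5. Memory performance of the front end
}

def evaluate(sample: dict[str, float]) -> list[str]:
    """Return an alert line (6. Events) for every KPI crossing its threshold."""
    return [
        f"ALERT: {kpi}={value} exceeds threshold {THRESHOLDS[kpi]}"
        for kpi, value in sample.items()
        if kpi in THRESHOLDS and value > THRESHOLDS[kpi]
    ]

for alert in evaluate({"array_capacity_used_pct": 91.5,
                       "storage_path_latency_ms": 4.2}):
    print(alert)  # -> ALERT: array_capacity_used_pct=91.5 exceeds threshold 80.0
```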

 

 

There is no question that EMC ViPR SRM brings unmatched alerting and reporting capabilities, with several hundred counters and metrics for not only EMC storage arrays, but also non-EMC arrays as well as Cisco servers and switches, Brocade equipment, VMware products, and more.

 

We will be showcasing the alerting & reporting capabilities of ViPR SRM in my upcoming blog posts detailing the results of the Production-level load testing performed in Project RUBICON Phase 2, and I will particularly highlight ViPR SRM's crucial role in monitoring EMC RecoverPoint.  Those of you who are familiar with my previous blog posts on Project RUBICON will recall that RecoverPoint plays a crucial role in enabling long distance BC/DR of a cloud environment, as well as in extending a Private Cloud into a Hybrid Cloud by allowing the seamless relocation of virtualized SAP workloads from one data center to another and back.

 

To be continued…

 

Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP


Time to redefine SAP Infrastructure with XtremIO! It’s not just about better performance

 

In my previous blog post “Time to redefine SAP Infrastructure with XtremIO – Part 1”, I discussed how XtremIO for SAP is a game changer for EMC’s customers, delivering SAP application performance more than twice as fast as a typical enterprise-class legacy storage array with Enterprise Flash Drives in its disk configuration.


This improvement in performance was observed in the interaction between SAP ECC and SAP BW through the BW process chains used for the InfoCube extraction performed by practically every SAP customer, and also in long-running batch jobs. The performance improvement is expected to be even more dramatic when migrating from aging Fibre Channel-only legacy storage.


EMC just released the white paper entitled “Redefining SAP Infrastructure with EMC XtremIO”, in which we documented numerous examples of performance improvements. But I really believe that SAP customers implementing their entire SAP environment on EMC XtremIO will benefit far beyond better performance!


Customers who have decided to migrate their entire SAP environment (that’s Production, DEV, QA, UAT, Consolidation, in short everything) to XtremIO will achieve significant consolidation of their ever-sprawling SAP landscape, which results in a not-so-insignificant reduction of actual storage space being consumed. In the end, XtremIO will bring dramatically lower TCO (Total Cost of Ownership), the financial analysis used to determine the total economic value of an investment, including the total cost of acquisition and operating costs.


Indeed, the 4 key areas for redefining SAP infrastructure with XtremIO are Improving Performance, Better Consolidation, Lower TCO, and Leveraging Simplicity.

 

Why XtremIO’s Fundamentally Different Architecture Matters

XtremIO has been designed from the ground up to be different from traditional storage array architectures, and it is a clean break from the past.  XtremIO provides Data Services components which allow its operation and underlying technologies to be completely different from a legacy array configured with all flash drives.


Indeed, with its “always running” Data Services such as thin provisioning, deduplication, inline compression, data protection, data-at-rest encryption, and writable snapshots, XtremIO allows for the consolidation of SAP environments, resulting in significant savings due to reduced storage space consumption, optimum agility, and improved availability for every SAP application and for each user, during both peak and normal times.


What makes XtremIO different is that incoming data streams are “fingerprinted” by the XtremIO Service Processor, which checks that data for duplication in memory and writes the data only once to the flash drives, in a compressed format. XtremIO Data Services thus enable inline deduplication and inline compression to work together at the time data is initially ingested, which is significantly different from legacy storage environments where dedup and compression are done only after the data has already been put on the disks, increasing storage array processing overhead.
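
To illustrate just the concept, here is a toy sketch of inline dedup-plus-compress on ingest. XtremIO's actual fingerprinting, metadata layout, and compression are proprietary and far more sophisticated than this:

```python
import hashlib
import zlib

BLOCK_SIZE = 8192  # hypothetical fixed block size for the sketch

store = {}  # fingerprint -> compressed block (stands in for the flash media)

def ingest(data: bytes) -> list[str]:
    """Split a stream into blocks, fingerprint each, and write a block
    only if that fingerprint has never been seen before."""
    fingerprints = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()  # the "fingerprint"
        if fp not in store:                     # deduplicate inline...
            store[fp] = zlib.compress(block)    # ...and compress on first write
        fingerprints.append(fp)                 # metadata maps logical -> physical
    return fingerprints

# Two identical "SAP system copies" consume physical space only once.
prd = b"SAP table data " * 10_000
ingest(prd)   # Production copy
ingest(prd)   # a UAT copy cloned from PRD deduplicates completely
logical = 2 * len(prd)
physical = sum(len(b) for b in store.values())
print(f"logical {logical} bytes, physical {physical} bytes, "
      f"reduction {logical / physical:.1f}:1")
```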

 

Here are links to White Papers on XtremIO that further explain those Data Services:

 

http://info.xtremio.com/rs/xtremio/images/H12453-1-so-pdf-xtremio-unstoppable_Data_Reduction.pdf

 

http://info.xtremio.com/rs/xtremio/images/Introduction-to-XtremIO-Snapshots_H13035_Rev-01_Draft_2014-06-29_1.pdf

 

http://info.xtremio.com/rs/xtremio/images/White-Paper_XtremIO_Data-at-Rest-Encryption_H13038-1_Rev-02.pdf

 

http://info.xtremio.com/rs/xtremio/images/White-Paper_Introduction-to-XtremIO-Storage-Array_Ver-3-0_H11752-5_Rev-06_Draft_2014-07-02_1.pdf

 

 

Always-on Inline Deduplication & Compression can provide up to 6-to-1 data reduction on average

It’s a well-known fact that SAP environments and databases share significant amounts of the same information, because they are often duplicate copies of a source SAP instance. Because XtremIO’s always-on inline dedup is smart enough to store only the difference between a Production (PRD) copy and a User Acceptance Test (UAT) copy, it allows for tremendous savings: it avoids storing duplicate copies of those data files, resulting in less storage capacity being consumed.  XtremIO also offers data compression as a standard, real-time, always-on, and automatic Data Services capability, without any performance penalty.


With XtremIO’s inline dedup and inline data compression working together, EMC customers running SAP on XtremIO have consistently achieved a 6-to-1 data reduction ratio on average.  This means that on an X-Brick cluster with 30TB of usable space, 6:1 inline dedup and data compression will yield a theoretical 180TB of effective usable space! As always, the benefits of consolidation with dedup and compression will vary based on the kind of data in your SAP databases.


 

Other benefits which have made XtremIO so compelling for the SAP customer

In my earlier blog post “Time to redefine SAP Infrastructure with XtremIO – Part 1”, I briefly mentioned Chad Sakac’s fascinating blog post on XtremIO’s snapshot technology that really works.   Indeed, customers have confirmed that they find XtremIO’s snapshot solutions easy to use with no performance penalty, which allows for everything from the quick creation of new SAP environments for training or testing, to actual process improvements in how logical database corruptions can be dealt with more easily and quickly.  Once again, this snapshot technology is different in that it is enabled by XtremIO’s innovative Data Services mentioned earlier.


Customers have also reported that since XtremIO clusters have a much smaller footprint than typical legacy storage, they use less power and cooling while at the same time using less floor space in their data centers, resulting in significantly lower OPEX.  One customer experienced annual OPEX savings of over 35% in power consumption & cooling costs even though its legacy storage was fairly modern, while another customer with much older and less energy-efficient legacy storage reported a whopping 82% annual savings in OPEX due to reduced power and cooling costs!


Then we have Simplicity! SAP has been exhorting its customers to “Simplify”, so perhaps we can explore how XtremIO can help simplify the increasing complexity of operating an SAP environment.


With XtremIO’s simpler & easier-to-use GUI, several SAP customers estimated they realized a 90% improvement in installation time thanks to XtremIO’s simplicity in standing up their cluster. The setup and optimization phases of the XtremIO implementation were all completed in 30 minutes, compared to the more than a day typically needed to configure and set up traditional storage array systems to support the demanding SAP workload.

 

In addition, there is simpler zoning to allocate storage resources, since XtremIO scans the bus and the GUI is very intuitive to use – customers have been pleasantly surprised that zoning work was done in one hour vs. four hours on legacy storage.   The table below illustrates how XtremIO can truly simplify the often complex and tedious tasks of implementing new storage arrays.

 

Simplicity for:          | Legacy Storage | XtremIO Array | Improvements with XtremIO
Implementations          | >3 Days        | <1 Day        | +66%
Setup, Tuning & Layout   | >8 Hours       | 30 Minutes    | +94%
Zoning Work              | 4 Hours        | 1 Hour        | +75%
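
As a quick sanity check (my own arithmetic, not figures from the white paper), each improvement in the last column is simply the fraction of time saved once the units match:

```python
# Each "Improvements with XtremIO" figure is (legacy - xtremio) / legacy.
rows = {
    "Implementations":        (3 * 24, 1 * 24),  # >3 days vs <1 day, in hours
    "Setup, Tuning & Layout": (8.0, 0.5),        # hours
    "Zoning Work":            (4.0, 1.0),        # hours
}
for task, (legacy, xtremio) in rows.items():
    print(f"{task}: {100 * (legacy - xtremio) / legacy:.0f}% less time")
# -> 67% (the table rounds down to +66%), 94%, and 75% respectively
```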

 

It is important to remember that each SAP customer will benefit in their own ways from leveraging Simplicity through XtremIO, but every SAP customer can quickly take advantage of the fantastic benefits of running SAP on XtremIO without any modification to SAP code or any need to tune the database!


Curious? Intrigued?  Download the white paper to get more details and look for the upcoming video showcasing the capabilities of XtremIO on SAP, and of course, contact your EMC representative or business partner for a demo and conversation.


Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP


Time to redefine SAP Infrastructure with XtremIO!

 

I just recently had the privilege of working with one of EMC’s many innovative and forward-thinking customers to help lead the pre-Production planning and acceptance testing of their new SAP on XtremIO infrastructure, and I am delighted to share some of the fantastic results in this blog post.

 

EMC’s XtremIO has achieved market leader status very quickly since its launch earlier this year, and its scale-out clustered design, which grows capacity and performance linearly to meet any requirement, is ideal for the demanding workloads of the typical SAP customer.

 

According to IDC, EMC leads the overall storage market, and with our growing installed base of 18,000 SAP customers worldwide, we understand the real-world needs of SAP infrastructure better than anyone. I have frequently blogged about how EMC’s innovations have continuously advanced the state-of-the-art for SAP infrastructure, and this is yet another proof point.  Customers running SAP on XtremIO will benefit from all the familiar & traditional EMC technological necessities such as PowerPath, but also from innovative & exclusive XtremIO functionalities such as a new-age, metadata-driven architecture which enables the significant dedup so needed in a typical SAP landscape, and a snapshot technology that really works – please take a quick look at Chad Sakac’s fascinating blog post on this topic.

 

All right, enough of the marketing set-up!  Let us get into the details of our customer’s pre-Production readiness testing which was done purely from a SAP application and Oracle database perspective!

 

We all know that any all-flash array (AFA) will run fast, and it is more or less a foregone conclusion that XtremIO, like any AFA, should be able to speed up the performance of individual SAP applications like SAP ECC, SAP BW, and so on. But our customer was more interested in learning about the interaction between SAP applications, which is so common in today’s complex SAP environments.  For instance, what is the impact of XtremIO on a cube extraction from ECC to BW and the subsequent loading of the InfoPackage into a BW Data Target?

 

 

MORE THAN TWICE FASTER SAP ECC TO SAP BW PROCESS CHAIN INTERACTION

Working closely with our customer’s SAP BW reporting team, we set out to explore whether XtremIO would have any impact on the regularly performed task of extracting data from SAP ECC (the InfoSource) using a BW process chain driven by an InfoPackage, culminating in the loading of the extracted data into a SAP BW Data Target.  SAP customers run this cube extraction operation quite often, in most cases once or even twice a day, even though in some cases the performance overhead of this operation can be crippling on the Production SAP ECC source system.  This performance impact on the SAP ECC Production system is the reason people have to break up their cube extractions into smaller chunks, which can make the duration of the entire data transfer from ECC to BW longer than desired.

 

I am happy to share the results on one process chain test, which involved Production Planning Control, a key component of any ERP system:

  1. The process chain involves the update of the SAP BW Data Target ZPP_ORDR and ZPP_ORDRC in SAP BW from the SAP ECC Infosource ZPP PROD ORDERS, using the InfoPackage FT D PP PROD ORDERS
  2. For those of you not familiar with the arcane workings of a cube extraction, the process chain is launched from SAP BW, and it launches the program ZPP_PROD_ORDERS which reads data from a set of predefined data tables in SAP ECC
  3. The process then hands off to the BW InfoPackage FT D PP PROD ORDERS which puts the extracted data into the PSA (persistent staging area) on the BW system
  4. Next, data is loaded from the PSA to the SAP BW Data Targets ZPP_ORDR and ZPP_ORDRC to complete the cube extraction process in order to allow reporting using fresh data to begin.  Needless to say, all these activities require extensive I/O activities on the SAP storage sub-system, and this is where XtremIO delivered its magic!
  5. For this test, we found that the process chain selected 91,000 records and the load program was executed in 5 threads (during the time slice that we monitored)
  6. THE RESULTS: the process chain ran more than twice as fast on XtremIO, an improvement of over 130% in DB Time compared to a robust EMC enterprise-class storage array with EFDs in its configuration
  7. And this significant improvement was achieved by simply moving SAP ECC and SAP BW unchanged over to XtremIO!  No code changes, database tuning, or changes in operational process of any kind were needed!

 

For those of you who have read my blog posts on FAST VP, DB Time is the SAP ST03N metric used to measure any true performance increase in the storage subsystem.  DB Time measures the time it takes for data to be read from storage into the buffers of the database server (in this case Oracle), and it is a far more accurate measurement than the Average Response Time metric (sometimes referred to as run time), which involves the multiple tiers of a modern SAP system: the network tier, the app server tier, the database server tier, and of course the storage tier.
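
For clarity on how “more than twice as fast” maps to “over 130%”, here is the arithmetic. The before/after values below are illustrative only; the actual measurements are not published in this post:

```python
# Converting a DB Time reduction into the percentage improvement quoted above.

def db_time_improvement(before_s: float, after_s: float) -> float:
    """Percent improvement: how much faster the same work completes."""
    return (before_s - after_s) / after_s * 100

# e.g. a process chain whose DB Time drops from 230 s to 100 s:
print(f"{db_time_improvement(230, 100):.0f}% improvement")  # -> 130%, i.e. 2.3x faster
```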

 

These kinds of results have caused our customer to rethink their cube extraction strategy, and they are exploring the possibility of combining SAP BW process chains to pull more data faster from SAP ECC into SAP BW, so that their business users can run reports from fresh data on a timelier basis.  That’s the kind of business win that can easily be understood by a C-level person, as it does not involve such arcane notions as IOPS, latency, or even DB Time.

 

 

SIGNIFICANT IMPROVEMENT IN LONG-RUNNING BATCH JOBS ALSO

Every SAP customer has long-running batch jobs, and some are more painful than others.  Many of these long-running batch jobs are reorganizational in nature, so they have to perform full table scans in order to do updates, deletes, and inserts into the database. They have nothing to do with running reports, yet they do impact the overall performance of the SAP ECC system.  But these maintenance batch jobs have to be run, and as you may have already guessed, their impact on Production is so great that they typically have to be run on weekends.

 

Working with the SAP Basis team at our customer, we ran tests on the SAP ECC batch job CAL BC ARCH MM_EBAN, which is used to archive out old Purchase Requisitions so that the information is still available but no longer found in the active tables – this Materials Management job is run so that the performance of the Production SAP ECC system can be improved for business users working with Purchase Requisitions on a day-to-day basis.  This job takes more than a day to run, so it is run monthly, but it competes for resources with other large monthly jobs.  The Basis and functional teams also told us that not running this job at least once a month is not an option, since the performance of the Production SAP ECC system deteriorates too much during day-to-day operations.

 

Once again, I am happy to report that our test using an unmodified copy of the Production SAP ECC system on XtremIO resulted in a 122% improvement in DB Time, more than twice as fast as on the current EMC enterprise storage system with EFDs!

 

We examined the details of this long running job under Oracle and found these amazing facts:

  1. The job read almost 75 million rows, of which almost 3,000 rows were direct reads and over 74 million rows were sequential reads
  2. The job performed updates on over 1,000 rows, inserts on over 38,000 rows, and deletes on 200 rows
  3. Once again, XtremIO was able to perform its magic whenever a significant I/O workload was put on a SAP database, and in this case, the DB Time was less than half of what it was before, resulting in a faster overall run time for this long-running and painful job

 

Once again, our customer commented that this type of application-specific performance improvement is easily understood by their business users and by its IT leadership. We at EMC are absolutely delighted that we were given the opportunity to work with this customer to redefine its SAP infrastructure using EMC XtremIO in a simple, painless, and non-disruptive way. All that was needed was a simple restore of their SAP Production environments from Data Domain onto XtremIO, and they were all set to go!

 

BTW, please take a look at Axel Streichardt's blog post on how XtremIO redefines the operational parameters of running SAP on flash, with not only performance improvements but also significant OPEX and CAPEX reductions as well as overall process improvements.

 

As you can imagine, I have a lot more to share on how XtremIO is leading the transformation of EMC’s customers’ SAP infrastructure – we are working on a white paper, a video, and much more, so please do stay tuned.

 

 

Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP


The WHY, the WHAT, & the HOW of the EMC Hybrid Cloud for SAP, launched at VMworld 2014 in San Francisco

 

I recently attended VMworld 2014 in San Francisco where the theme was “No Limits” and where 22,000 visitors from 80+ countries congregated to hear VMware CEO Pat Gelsinger discuss everything from the Software-Defined Data Center (SDDC) to the Hybrid Cloud, with the announcement that vCHS (vCloud Hybrid Service) has been given a more elegant & marketable name, vCloud Air.


There were of course announcements upon announcements, but what stood out for me as a SAP guy at a Virtualization conference were the announcements of VMware EVO:RAIL, VMware NSX 6.1, vSphere 6 Enhancements now in public Beta, and the tight coupling of the Blue Medora vCenter Operations Management Pack for both SAP CCMS and SAP HANA and vCenter Operations Manager (vCOPS).

 

Underscoring the great progress that converged infrastructure like VCE’s Vblock has made in deploying mission-critical applications like SAP, VMware launched EVO:RAIL, a software product and reference architecture (they called it a ‘Build Recipe’) which allows VMware partners to easily build hyper-converged infrastructure appliances 100% powered by VMware software - check out VMware Chief Technologist Duncan Epping's blog on this cool new offering.  The list of partners planning to offer EVO:RAIL hardware is long (Dell, Fujitsu, SuperMicro), and EMC of course is very prominent on that list.

 

In fact, Chad Sakac (EMC Senior Vice President of Global Systems Engineering) devoted significant time during his standing-room-only session to explaining EMC’s approach to our EVO:RAIL appliance, which will take full advantage of EMC’s storage portfolio, the richest in the market today, and which will push the boundaries of the SDDC with our innovative technologies such as EMC ViPR and EMC ScaleIO.  I will discuss what EVO:RAIL may mean to the SAP customer at a later time, but you can read Chad’s passionate blog on EVO:RAIL here.

 

I was super busy the entire week due to:

  1. The launch of Project RUBICON video demo – many of you will remember my blog posts on this fantastic collaboration between Deloitte, EMC, VMware, and Cisco to showcase how long distance BC/DR for virtualized SAP HANA can be done. Project RUBICON just entered Phase 2 and I will have a lot more exciting things to share in future blog posts
  2. The launch of the EMC Hybrid Cloud for SAP (EHC for SAP) white paper, which leverages all key VMware and EMC technologies including the vCloud Suite, EMC ViPR SRM (Storage Resource Manager), EMC storage platforms, and the aforementioned VMware NSX and the Blue Medora plug-in for SAP CCMS and SAP HANA

 

For the rest of this blog post, I will discuss the WHY, WHAT, and HOW of EHC for SAP. You can also get more details by reading the white paper and by visiting the EHC for SAP Resources Page for the PDF version of the presentation and the Solutions Guide.

 

The WHY of the EMC Hybrid Cloud for SAP

Let me cut to the chase: as an IT leader, how would you like to regain complete control of your business users’ cloud computing environment and have full control of the future direction of your company’s cloud strategy?  I actually “borrowed” that sentence verbatim from the Executive Summary of the EMC Hybrid Cloud for SAP white paper, since it so succinctly explains the WHY of EHC for SAP.

 

You know all about it, because your business users want to be more agile in order to address their most urgent business requirements, and they cannot wait the typical 6 to 8 weeks for IT to procure the hardware, rack & stack it, install the OS, the applications, and so on, before they can begin doing anything with their new SAP environment.  Your business users want a new Sandbox or DEV environment to be provisioned in minutes instead of days, and they want to do it with the pleasant user experience of buying Infrastructure as a Service (IaaS) from a public cloud supplier like Amazon and its vaunted AWS for SAP (Amazon Web Services for SAP).

 

If you are or have been in the unfortunate position of having your business users already bypassing you by using Amazon and similar public cloud providers for their SAP environments, then you need to find a way to quickly regain that control.  I would suggest to you that the EMC Hybrid Cloud for SAP could serve as the foundation for your evolving cloud strategy, by starting with a Private Cloud but being able to very quickly incorporate the Public cloud into your SAP environment for a Hybrid Cloud implementation.

 

Now, if you will be deploying a Private Cloud to satisfy your business users’ requirement for more agility, why would you need a Hybrid Cloud?  Answer: for the same reasons as before, e.g. more agility and flexibility to add capacity on demand, to move workloads from on-premises to off-premises and back, and so on, because your Private Cloud may not have sufficient capacity to support a sudden increase in workload demand.

 

 

The WHAT of the EMC Hybrid Cloud for SAP

I would like to invite you to download and read the white paper for more details on the features, capabilities, and benefits of the EMC Hybrid Cloud for SAP, but here are some key points:

  1. Self-service provisioning, with the same ease and pleasant user experience as provisioning a SAP environment on the public cloud.  However, instead of pulling out your credit card, the vCAC (vCloud Automation Center) based portal will check your cost center and entitlement guidelines in order to provision a SAP logon-ready environment in your Private Cloud in minutes
  2. Self-service provisioning of SAP endpoints in the public cloud as an extension to the SAP Private Cloud.  Again, using that familiar vCAC-based portal, you should be able to quickly provision a SAP logon-ready environment at a public cloud service provider of your choice and have that environment linked to the on-premises SAP landscape
  3. Self-service provisioning of critical SAP services beyond just a virtual machine with the OS, the database, and the SAP application loaded and up and running. We’re talking about such important matters as backup as a service, DR as a service, and so on…
  4. Ease of onboarding and migration of existing SAP virtual machines already running in your data center.  If you have SAP already running on VMware, let’s make sure that these VMs (many of them may already be in Production) can take advantage of the full capabilities of EHC for SAP, such as end-to-end monitoring, multi-tenancy, and chargeback, not to mention the aforementioned backup as a service and so on…
  5. Capability to monitor the SAP applications on an end-to-end basis, not only at the VM & network level with VMware vCenter Operations Manager (vCOPS) and at the storage level with the EMC ViPR SRM (Storage Resource Management) tools, but also down to the commonly used SAP Basis transactions like ST03N, ST02, DB02, and so on, thanks to the Blue Medora plug-in to vCOPS.  And of course, you will want to monitor both your Private Cloud and Public Cloud environments from a single pane of glass
  6. Capability to easily scale resources by moving virtualized SAP workloads from the Private Cloud to the Public Cloud over a VPN, as has been clearly demonstrated with Project RUBICON.  I invite those of you not familiar with Project RUBICON to read my many blog posts, watch the demo video, and read the white paper to see how EMC and VMware technologies can make this task simple and efficient
  7. Chargeback and multi-tenancy: while chapters could be written on this topic, let’s simply say that EHC for SAP allows for easy chargeback to various cost centers, and it offers full multi-tenancy support with full security and privacy safeguards

 

 

The HOW of the EMC Hybrid Cloud for SAP

So we have created the EHC for SAP reference architecture, tested it in our labs, and validated it with EMC IT in our SAP Production environment and with numerous other SAP customers. But a Cloud project is not just about the technology and installing software from a bunch of DVDs!  A Cloud project starts with a full understanding of the business requirements, with a full catalog of your current and future workloads, and with making the right decisions on everything from the architecture (e.g. simple things like Linux vs. Windows or Oracle vs. Sybase) to selecting the right Public Cloud Services Provider.

 

To embark on your Cloud Journey, you will need the assistance as well as the skills and expertise of the EMC Federation of partners, and the full and ever-expanding list of these partners can be found in the white paper.  Perhaps a simple next step would be to contact EMC Global Services or one of EMC’s Cloud partners to schedule a Technical Cloud Workshop or a Cloud Readiness Assessment.  EHC for SAP is not rocket science, and it CAN be deployed very quickly, especially when you have the right people assisting you.

 

 

Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP


Additional technical details on Project RUBICON: can they be used in a SAP Hybrid Cloud design?

 

In my blog post “Long distance recovery for virtualized SAP HANA made simple: what we learned from Project RUBICON”, I promised to share more technical details from this unique POC, so here we are again.  But before I get started, I would like to thank the thousands of you who have viewed my last blog post on RUBICON. It is very gratifying, but also very humbling, to see so much interest and support for what the stakeholders of Project RUBICON (Deloitte, EMC, VMware, and Cisco) are trying to do.

 

As people become more familiar with the POC’s design and architecture, I often get asked if this sort of looks like a Hybrid Cloud design.  To be honest, our initial design goal was to prove out that a long distance disaster recovery of virtual SAP HANA instances was possible, and of course we did that.  But a closer review showed the following interesting details:

 

  1. Long distance connectivity was established between two DIFFERENT companies, Deloitte and EMC, when the VPN tunnel was built between the 2 data centers 550 km (341 miles) apart: this would be very similar to a Private Cloud environment being connected to a Public Cloud Service Provider
  2. SAP virtual machines, both HANA and NetWeaver, were created in one data center (which could easily be a Private Cloud environment) and replicated (or “moved”) to another data center (which again could be at a Public Cloud Service Provider)
  3. After running for some duration at the second data center, those same SAP virtual machines can easily be moved back to the original data center, similar to the concept of that SAP workload being moved from the Public Cloud back to the Private Cloud
  4. Administrators with the right security credentials & toolsets can manage both cloud environments simultaneously, including the all-important automation component to make the workload relocation simple and seamless

 

Now, let me be clear: I am NOT saying that what was built to support Project RUBICON Phase 1 is a Hybrid Cloud, because it lacks some basic Hybrid Cloud features such as a portal for self-service provisioning, support for multi-tenancy, support for chargeback and costing, and so on.  But I would say that many of the technical components of Project RUBICON could absolutely be used in a SAP Hybrid Cloud design and architecture.

 

 

Some lessons learned from the creation of the VPN tunnel

Dedicated long distance WAN links can be expensive, and since Project RUBICON was a POC and not a Production-level implementation, we had to find a way to keep WAN costs under control.  After much research and discussion between the Deloitte IT team and the EMC IT team, it was agreed that since the Deloitte site in Suwanee, GA, has an Internet connection of 2 Gb/s burstable to 20 Gb/s and the EMC site in Durham, NC, has a 1 Gb/s Internet connection burstable to 5 Gb/s, a VPN would work for this POC.

 

While building a VPN tunnel between 2 companies is not exactly rocket science, this challenging and exacting work is best left to network and security professionals, so I am not going to walk you step by step through creating the tunnel with the Cisco ASA 5545-X firewall appliance installed on the Durham side. Rather, I will share with you some lessons learned from the implementation of the firewall (the list below is by no means comprehensive):

 

  1. Involve your IT Global Security team early in the process to discuss what needs to be accomplished so that they can come up with a design which can be socialized with the external party.  For example, for Project RUBICON, we initially thought that allowing RecoverPoint and VMware SRM traffic to flow through was sufficient, but the final design was far more complex as it involved SSH, RDP, DNS, and much more
  2. Engage the IT & Security teams from both organizations for an open dialogue between the parties.  This may seem obvious, but getting the people with the right expertise and the right level of authority to make decisions is not always so simple
  3. Beware of overlapping and duplicate IP address blocks (see the sketch after this list).  When you connect data centers from 2 separate companies, there is a strong likelihood that non-routable IP addresses such as 10.103.xxx.xxx or 192.168.xxx.xxx, used for everything from VMs to storage arrays to switches in one data center, are already in use in the other data center, and having to re-address one of the 2 data centers will delay and complicate things
  4. Work with the IT Security team to build a scalable and highly available infrastructure to support the VPN tunnel: if the VPN goes down, the replication link is lost and the long distance disaster recovery no longer works.  And by highly available infrastructure for the VPN, I am talking about having redundant VPN endpoints and not necessarily redundant WAN links
  5. Implement WAN traffic monitoring and QoS solutions to make sure that the data replication over the VPN is not overwhelming other important traffic, but also that the data replication traffic is sent at a reasonable rate so as not to negatively impact RPO
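
To make lesson 3 concrete, here is a minimal sketch (not from the POC – the address blocks are made up for illustration) showing how the two IT teams could check their candidate address plans for overlap before the tunnel design is finalized, using Python’s standard ipaddress module:

```python
from ipaddress import ip_network
from itertools import product

# Hypothetical address plans for illustration only -- substitute the real,
# non-routable blocks used at each data center.
site_a_blocks = ["10.103.0.0/16", "192.168.10.0/24"]   # e.g. Suwanee
site_b_blocks = ["10.103.64.0/18", "192.168.20.0/24"]  # e.g. Durham

conflicts = [
    (a, b)
    for a, b in product(site_a_blocks, site_b_blocks)
    if ip_network(a).overlaps(ip_network(b))
]

if conflicts:
    # Any hit here means one side must re-address (or NAT) before the
    # VPN tunnel design can be finalized.
    for a, b in conflicts:
        print(f"OVERLAP: {a} (Site A) collides with {b} (Site B)")
else:
    print("No overlapping blocks -- safe to proceed with the tunnel design")
```

Running such a check early, against the complete address inventory of both data centers, turns a painful late-stage re-addressing exercise into a five-minute conversation.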

 

 

Some lessons learned on configuring EMC RecoverPoint to work with VMware SRM

As I mentioned in my earlier blog post, the tight integration between VMware SRM and EMC RecoverPoint is part of the secret sauce for the success of Project RUBICON Phase 1 – this integration is mature and well understood, and it is used by thousands of EMC customers worldwide.  Once again, while configuring these products is not rocket science, it is best to involve professional services experts from EMC and VMware, working alongside the storage and SAN experts who handle the provisioning of the LUNs. So here are a few lessons learned and interesting observations that we collected during our quick and painless implementation of EMC RecoverPoint, VMware SRM, and the integration between the 2 products:

 

  1. Plan out RecoverPoint consistency groups carefully in order to take advantage of VMware SRM’s ability to perform the failover in a very granular fashion. In other words, if every virtual machine were put on one LUN, then only one consistency group could be built and associated with one VMware SRM Protection Group – this approach limits the granularity of the failover considerably, since every SAP application would need to be part of the failover
  2. The preferred way is to group the SAP applications which work together into a consistency group and associate it with a protection group.  For example, the ‘SAP complex environment’ made up of SAP ECC, SAP SLT, and the SAP HANA sidecar running CO-PA would be grouped into one consistency group, separate from the ‘SAP data mart environment’ with a consistency group made up of a SAP HANA data mart and a SAP BOBJ reporting engine & portal.  With this approach, the SAP complex environment can be moved from Suwanee to Durham while the SAP HANA data mart environment continues to run in Suwanee.  Put another way, this use case comes closer to a hybrid cloud workload relocation than a disaster recovery, because VMware SRM can handle the granularity of the failover or workload movement
  3. At a minimum, RecoverPoint journal volumes should be allocated 20% of the space being protected (the data volumes) – see the sizing sketch after this list.  We could also have a design where the journal volume at Site B (e.g. Durham) is larger than the one at Site A (e.g. Suwanee) to allow more flexibility to recover further back in time
  4. The initial replication of all the data volumes & log volumes in Suwanee (approximately 6TB) to the VMAX in Durham took approximately 16 hours over the VPN tunnel, although your mileage may vary depending on traffic on the Internet
  5. With VMware SRM’s Planned Migration functionality, VMware SRM knows not to restart the VMs at the remote site until all the data volumes and log volumes have been replicated.  This method increases the Recovery Time Objective (RTO) but keeps data loss to a minimum, and therefore it can be used to relocate workload from one data center to another.  One could argue that it is conceptually similar to moving workload in a SAP Hybrid Cloud environment, even if the Private Cloud environment is several hundred or even thousands of kilometers away from the Public Cloud environment
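
To tie lessons 1 through 4 together, here is a small back-of-the-envelope sketch in Python.  The application groupings, the 20% journal ratio, and the ~6TB/~16 hour replication figures come from the list above; the helper names and the assumed effective link rate are mine, for illustration only:

```python
# Consistency groups mirror the application groupings described above; each
# maps to one VMware SRM Protection Group, so each can fail over (or be
# relocated) independently of the others.
consistency_groups = {
    "sap-complex":   ["SAP ECC", "SAP SLT", "SAP HANA sidecar (CO-PA)"],
    "sap-data-mart": ["SAP HANA data mart", "SAP BOBJ reporting & portal"],
}

def journal_size_tb(protected_tb, ratio=0.20):
    """Minimum RecoverPoint journal size: ~20% of the protected data volumes.
    A larger ratio at the target site buys a longer rollback window."""
    return protected_tb * ratio

def initial_sync_hours(data_tb, effective_mbps):
    """Rough initial replication time over the VPN, ignoring compression,
    protocol overhead, and Internet congestion (your mileage WILL vary)."""
    bits = data_tb * 1e12 * 8
    return bits / (effective_mbps * 1e6) / 3600

protected_tb = 6.0  # approximate size replicated in the POC
print(f"Journal at Site A: >= {journal_size_tb(protected_tb):.1f} TB")
print(f"Journal at Site B: >= {journal_size_tb(protected_tb, 0.30):.1f} TB (deeper rollback)")

# The POC saw ~16 hours for ~6TB; working backwards, that implies an
# effective rate of roughly 800 Mb/s over the tunnel.
print(f"Initial sync at 800 Mb/s: {initial_sync_hours(protected_tb, 800):.1f} hours")
```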

 

So you see that there are so many interesting technical details from what would be considered the “secret sauce” used in Project RUBICON, namely the combination of virtualized SAP HANA and other SAP applications running in TDI mode, EMC RecoverPoint, and VMware SRM.


As VMworld 2014 kicks off on Monday August 25th, 2014 in San Francisco, I will share with you some exciting announcements involving components of that secret sauce, and they promise to be tasty!

 

Please stay tuned!

 

Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP


Long distance recovery for virtualized SAP HANA made simple: what we learned from Project RUBICON


I just returned from SAPPHIRE 2014 in Orlando where SAP CEO Bill McDermott told the assembled 25,000 people that “SAP wants to run simple!” and then proclaimed that “We can, we will, beat complexity” to help SAP customers “run simple” too.

 

Run simple!  Can it be applied to a solution for providing long distance (as in over 100 miles) business continuance & disaster recovery (BC/DR) for SAP HANA?

 

In my global travels these past 12 months talking to customers, one thing has become very clear: the lack of a solid, simple, and easy to understand long distance disaster recovery solution is making it difficult for customers to take SAP Suite on HANA into Production at large scale.


In my pre-SAPPHIRE blog post “Virtualized SAP HANA in Production: the momentum continues and we have crossed the Rubicon!”, I had invited those of you going to SAPPHIRE to join us on Wednesday June 4th, 2014 at 4:15PM ET in Theater 2 in the ASUGHub to hear the results of our Proof of Concept (POC) called Project RUBICON.

 

Deloitte, EMC, VMware, & Cisco worked closely together to prove that you can recover virtualized SAP HANA instances at longer distances – in this case, 550 km (341 miles), which happens to be the distance between the Deloitte datacenter in Suwanee, GA (outside of Atlanta) and the EMC datacenter in Durham, NC.

 

The POC planning team felt that the results and outcome of Project RUBICON need to be discussed more in business and application terms and less in technical terms; Deloitte Consulting is therefore the key partner in this project, ensuring that the RTO and RPO metrics are relatable to a business user.

 

Deloitte has an existing VCE Vblock System 300 in its datacenter in Suwanee which is used to show its clients how they could benefit from Deloitte’s Cloud ServiceFabric, a new private cloud solution based on VMware vCloud Suite, launched in May 2014.  To start Project RUBICON, EMC, VMware, & Cisco built a DR site at a distance far enough to make this project ‘real world’ (we did NOT want to do any distance simulation), and we chose the EMC datacenter in Durham, located 550 km away.

 

With Deloitte bringing its SAP HANA application expertise to the team, the remaining stakeholders, EMC, VMware, and Cisco invested almost $2M in equipment and expertise to build the brand new Project RUBICON lab in Durham.  This POC aims to demonstrate how BC/DR, a key component of any SAP application implementation strategy, is an integral part of the Cloud ServiceFabric offering.

 

The Deloitte team created a SAP HANA data mart running on a 512GB virtual machine (VM) supported by a smaller SAP BOBJ (Business Objects) VM for this POC.  In addition, a SAP Business Suite on HANA virtual machine was also created to be part of the test and demo for SAPPHIRE.  The ‘disaster’ was to abruptly disrupt the VMs in Suwanee while a sales report was running in BOBJ against 250 million records already in the HANA data mart, and while a data load of an additional 200 million records into the data mart was in flight.


Let’s first review the high level architecture created for Project RUBICON:


[Figure: Project RUBICON high-level architecture]

The Vblock in Suwanee is powered by Cisco UCS servers with EMC VNX5300 storage, and the POC was conducted primarily on a pair of Cisco UCS B440M2 set up in TDI mode – in other words, the VMDK files for the SAP HANA & other supporting virtual machines (VMs) reside on specific LUNs protected by a pair of Gen5 RecoverPoint appliances (RPAs).

 

In Durham, a V+C+E environment was built, with a pair of Cisco UCS B440M2 set up in TDI mode on a VMAX 20K, and the LUNs set up to receive the data stores from Suwanee as replicated by RecoverPoint over a VPN tunnel between the sites.

 

[Figure: Unisphere view of RecoverPoint replication from Suwanee to Durham]


VMware vSphere 5.5 was used at both sites since that is the required version supporting virtualized SAP HANA, but it was VMware SRM (Site Recovery Manager) and its integration with RecoverPoint that was the key piece of the puzzle and provided the astounding results of this POC.

 

Essentially, the POC team created an “Easy Button” in VMware SRM which can be pushed (in this case, mouse-clicked) once a disaster has been declared by someone with the authority to make that decision.  Once the Easy Button has been pushed, VMware SRM takes over and completely automates the entire recovery & orderly restart of the VMs in Durham – this automation eliminates the need to consult and implement complicated and error-prone DR run books.  It was amazing to watch the progress of the recovery on the vCenter console, without any human intervention, until the console notified us that SAP HANA was up and running again without any error!


So what were the results?  Let’s take a look at the table below, which was shown at our ASUG session at SAPPHIRE – perhaps the first time that concrete metrics for long distance BC/DR for SAP HANA have been discussed publicly.


[Table: Results of Project RUBICON tests]


The results of the POC, especially the RTO and RPO metrics were simply astounding!

  1. It took under 15 minutes for an end-to-end recovery & restart of the SAP HANA data mart (512GB VM) and the supporting SAP BOBJ portal in Durham, which then became the Production site.  We (as the Business Users) were able to log in and immediately attempt to rerun an existing sales report which had been interrupted by the ‘disaster’
  2. The initial rerun of the report in Durham was slower than its baseline in Suwanee (29 seconds vs. 10 seconds), which is to be expected since the 250 million records needed to be read into SAP HANA from the Persistence Layer
  3. But subsequent runs reduced the report run time to 18 seconds, and eventually matched the 10-second baseline from Suwanee
  4. Inspection of the SAP HANA data mart showed that the disrupted data load lost less than 5% of its in-flight data (the data in SAP HANA not yet committed to logs in the Persistence Layer) – this is remarkable given the asynchronous nature and the long distance (550 km) of the data replication, and a powerful testament to RecoverPoint’s efficient data compression
  5. It also took under 15 minutes for the HANA developer to log in to HANA Studio and resume the data load, proving that development teams can quickly resume their work in Durham
  6. The SAP Basis and infrastructure team inspected every component of the V+C+E infrastructure in Durham after the automated recovery by VMware SRM, and could not find a single error or fault
  7. Once all the tests and inspections were completed, the team decided to have VMware SRM orchestrate a fallback to Suwanee from Durham in order to make Suwanee the Production site once again.  This ‘reprotection action’ (using VMware terminology) took roughly 15 minutes to perform after VMware SRM instructed EMC RecoverPoint to reverse the direction of the data replication
  8. We tested the failover from Suwanee to Durham, and then the failback from Durham to Suwanee 3 times, and each time, the results were consistent!  It was quick, simple, and fully automated


In my almost 18-year career in SAP Basis, I have chosen to specialize in 2 areas: application performance and disaster recovery, and I have always known that planning BC/DR is hard and costly for any SAP application – but doing it with SAP HANA, an in-memory database, posed new challenges!


One key design goal was to show that this BC/DR solution can even be implemented in a hybrid cloud scenario, and therefore we set up a VPN tunnel between Deloitte and EMC to take advantage of both companies’ existing Internet connectivity.  You can imagine the security concerns from both parties as firewall rules were modified by the security teams for this POC, but in the end, it all worked!


It is worth noting that the initial replication of the approximately 6TB of VMs from Suwanee to Durham took about 16 hours – after that initial replication, delta replication of the logs took significantly less time.  Obviously, your mileage will vary, especially if a VPN is involved, since you will be at the mercy of the throughput of the Internet, but the cost of creating a VPN is far less than that of a dedicated leased line.


At the end of our presentation at SAPPHIRE, one customer came up and told me that he has had Suite on HANA in Production since July 2013 on an appliance, but without meaningful BC/DR, and so he has been constantly worried!  He stated his delight at finding a real world solution with actual metrics, and best of all, one that he can implement immediately!


I am proud to be part of a team of passionate, dedicated, and talented people from 4 great companies, Deloitte, EMC, VMware, and Cisco, who have made Phase 1 of Project RUBICON such a success, and you can get more details of our work in the PDF of our presentation below as well as in a white paper.


But I still have a lot more to share with you all about Project RUBICON, so please do stay tuned!

 

Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP



Virtualized SAP HANA in Production: the momentum continues and we have crossed the Rubicon!

By now, you all know that VMware CEO Pat Gelsinger announced at EMC World on May 6th, 2014 that the “first and only hypervisor for Production use of HANA is VMware”, and he mentioned EMC IT’s implementation of SAP BPC on virtualized HANA (in Production since November 2013) along with AMG-Mercedes, in Production with virtualized SAP HANA in a 1TB configuration, as examples of things to come.

 

Indeed, merely 10 minutes after it became official that SAP HANA virtualized on VMware vSphere 5.5 is fully supported in Production, Approyo announced that it has taken into Production the first virtualized SAP HANA customer on its VMware Cloud infrastructure – if you have not yet heard about Approyo, you should find out more about this company, a SAP Startup Focus member which focuses solely on SAP HANA work and is led by Chris Carter and Marcus Retrac, both former SAP employees.

 

That same day, the enterprise-class cloud software and services provider Virtustream announced that it too has a customer running virtualized SAP HANA in Production on Virtustream’s xStream Enterprise Cloud platform.  SAP is an investor in Virtustream, which itself is a member of the SAP Cloud Technology Advisory Board and the SAP Cloud Benchmarking Group.

 

Then the next day, on May 7th, Paul Roche, CIO of Network Services and ASUG Board Member, explained in this blog post on ASUGnews “Why the VMware vSphere and SAP HANA Announcement Is Big News”, and I would like to excerpt a few of Mr. Roche’s points to whet your appetite before you read his blog:

 

For many SAP customers—and for Network Services, specifically—one of the greatest challenges to adopting HANA is the cost of the HANA appliances, whether you choose to deploy on-premise or in a hosted model. For us, the hardware costs to procure and implement HANA were far greater than the SAP HANA license costs

 

Now with the vSphere 5.5 news, the ability to run productive SAP systems in a virtualized VMware environment creates a number of benefits for SAP customers. For instance, if you decide to run HANA on-premise, you can reduce the number of HANA appliances that you need to purchase by leveraging a larger, single appliance to run multiple, virtual systems… Overall, this new ability to virtualize production systems should significantly drive down costs for on-premise hardware systems or in cloud-provider monthly fees. And that’s great news for SAP HANA customers

 

 

Nowhere in his blog post did Paul Roche cite any technical reason for why virtualized SAP HANA is a big deal!  Instead, he focused completely on the economic benefits of cost reduction and the incredible flexibility compared to running HANA on an appliance, and if you read my EMC World 2014 blog post “The future of SAP HANA implementation in the Enterprise is here, and it’s looking brighter than ever for Petabyte-HANA!”, you will recognize that they are the same points that I made!

 

 

[Image: EMC at SAPPHIRE 2014]

 

 

If EMC World was when Production support for SAP HANA virtualized on VMware was announced, then SAPPHIRE 2014 in Orlando on June 3rd to June 5th, 2014 will be its coming out party!

 

Wednesday June 4th is a key date for people wanting to know more about why virtualized SAP HANA in Production is a reality:

 

  1. At 3:00PM ET, in my capacity as an ASUG Market Leader for the Enterprise Architecture SIG, I will introduce Bill Reid and Mike Harding of EMC IT for the ASUG session “Lessons Learned on Enterprise Architecture decisions for SAP HANA Deployment at EMC IT” – Bill and Mike will provide lessons learned on how EMC IT made key decisions on choosing the right Enterprise Architecture deployment option for its SAP HANA sidecar implementation on a HANA appliance and subsequently for its SAP BPC on virtualized SAP HANA running on a Vblock.  For those of you going to SAPPHIRE, it’s session ID 1008, which will be held in room S310H, South Concourse, Level 3
  2. Then at 4:15PM ET, you can go to Theater 2 in the ASUGHub on the show floor to attend the session “Deloitte, EMC, VMware, & Cisco – Virtual SAP HANA Disaster Recovery (DR) on VMware vSphere” (session ID 4211). After all, you can’t run SAP HANA in Production without a solid DR plan in place, right?

 

 

But to date, with SAP HANA in Production on appliances, what passes for Business Continuity for SAP HANA is referred to as Disaster Tolerance (DT) and not “true” DR, since it only works over a synchronous replication distance – which for us non-techies means roughly 100 km or so.
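
Why does synchronous replication top out at roughly 100 km?  A quick back-of-the-envelope sketch makes it clear, assuming the common rule of thumb of about 5 microseconds of fiber latency per kilometer, one way (the numbers are illustrative, not measurements from any product):

```python
US_PER_KM_ONE_WAY = 5.0  # rule-of-thumb latency of light in optical fiber

def sync_write_penalty_ms(distance_km):
    """Round-trip fiber latency added to EVERY write that must be
    acknowledged by the remote site before the application continues
    (switching and protocol overhead excluded)."""
    return 2 * distance_km * US_PER_KM_ONE_WAY / 1000

for km in (50, 100, 550, 2500):
    print(f"{km:>5} km: +{sync_write_penalty_ms(km):.1f} ms per synchronous write")

# 50 km: +0.5 ms, 100 km: +1.0 ms, 550 km: +5.5 ms, 2500 km: +25.0 ms.
# Past ~100 km, the per-write delay starts to hurt database commit times,
# which is why longer distances demand ASYNCHRONOUS replication.
```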

 

So Deloitte, EMC, VMware, and Cisco decided to combine forces and launched Project RUBICON to prove that you can recover virtualized SAP HANA instances at longer distances – in this case 550 km, which happens to be the distance between the Deloitte datacenter in Suwanee, GA (outside of Atlanta) and the EMC datacenter in Durham, NC. This presentation may in fact be the first time that DR for SAP HANA over an async distance (greater than 100 km) is discussed publicly, complete with supporting metrics and how-to, and it is virtualized SAP HANA on VMware which was the key enabler for going over long distances.

 

The stakeholders in Project RUBICON invested in a brand new lab in Durham to be the DR site for the Vblock in the Deloitte data center in Georgia, and the virtualized SAP HANA TDI setup in Durham uses readily available products and solutions such as the EMC VMAX, EMC RecoverPoint, Cisco B440 servers, and the VMware suite of products – in other words, everything in the POC that is Project RUBICON can be implemented right now, not in the future!

 

I have had the great privilege of being a member of the Project RUBICON team, and this group of fantastically talented and dedicated people will discuss not only how DR for SAP HANA works 550 km away from the source data center, but also the RTO and RPO metrics, along with lessons learned from Phase 1 of this amazing POC.

 

So, I hope to see you all at these 2 SAPPHIRE sessions in Orlando, as well as other virtualized SAP HANA sessions in the VMware and Cisco booths - what we have to share with you is groundbreaking!

 

I hope that you will also agree with me that we have indeed crossed the Rubicon!  For reference, the idiom “Crossing the Rubicon” means to pass a point of NO RETURN, as when Julius Caesar defiantly led his army into Italy from Gaul by crossing the Rubicon, a small river near Ravenna in Italy – it was a point of no return for Caesar because, had he failed to take Rome, he would have been executed!

 

When someone like Reinhard Breyer, CIO of AMG-Mercedes, says that "we believe that virtualized SAP HANA with VMware vSphere could be the key to our future, as we move to cut operational costs and simplify our data center operations", we know that the momentum for deploying virtualized SAP HANA in Production cannot be stopped – we have crossed the point of no return!

 

I will be sure to share details of Project RUBICON in upcoming blog posts! To be continued…



Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP




The future of SAP HANA implementation in the Enterprise is here, and it’s looking brighter than ever for Petabyte-HANA!

 

This week, I am coming to you once again from EMC World in Las Vegas, and as usual EMC delivered one tantalizing announcement after another for SAP HANA customers:

  • On Monday, it was the acquisition of the secretive chip maker DSSD, headed by the legendary Andy Bechtolsheim (the father of the SPARC chip) and by Bill Moore (formerly Sun’s chief storage engineer and previously employee number one at 3PAR) – I will put this acquisition in the context of SAP HANA in the Enterprise at the end of this blog post
  • Then on Tuesday, SAP and VMware announced full support for SAP HANA for Production use on VMware vSphere 5.5, confirming what I had speculated in my blog post “I have seen the future of SAP HANA implementation in the Enterprise – Part 2” back in early January 2014

 

First, let me discuss the long awaited announcement that you can now run SAP HANA virtualized on VMware in Production with SAP and VMware’s full support – the press release can be found here, and relevant technical details (e.g. 1TB and 64 vCPU and so on) can be found in OSS note 1995460. You can also read Axel Streichardt's blog on this momentous announcement here.

 

In my view, this announcement literally changes the game in that it offers unprecedented flexibility to the customer especially when deployed in conjunction with SAP HANA on TDI:

 

  1. Customers can now deploy virtualized SAP HANA instances in Production on the same converged infrastructure (or servers) running their other SAP Production instances, thus reducing network latency and potentially mitigating network performance impact during peak processing periods
  2. A HANA instance running on vSphere 5.5 can be moved from one ESX host to another using vMotion while still running (see the sketch after this list), causing no downtime or operational disruption – useful for anything from hardware maintenance to obtaining additional compute resources on the fly
  3. VMware Distributed Resource Scheduler (DRS) allows for a SAP HANA infrastructure which aligns with business goals while dynamically allocating compute resources and guaranteeing performance levels for peak workload processing – for example, non-HANA VMs can be migrated off the ESX host to give maximum compute horsepower to Production instances
  4. Virtualized HANA allows for very rapid provisioning of new HANA instances through the use of VM templates and VMware host profiles
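
To give point 2 a little flavor, here is a minimal sketch using the open-source pyVmomi SDK to trigger a vMotion between two ESX hosts.  It is illustrative only: the vCenter address, credentials, and VM/host names are placeholders, and a real HANA migration would of course go through your normal change process:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Walk the vCenter inventory and return the first object matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

# Placeholder connection details -- substitute your own vCenter and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "hana-prod-01")        # hypothetical VM
    target = find_by_name(content, vim.HostSystem, "esx-02.example.com")  # hypothetical host

    # RelocateVM_Task with only a target host is a compute vMotion: the VM
    # keeps running on the shared block storage throughout the move.
    spec = vim.vm.RelocateSpec(host=target)
    task = vm.RelocateVM_Task(spec=spec,
                              priority=vim.VirtualMachine.MovePriority.highPriority)
    print("vMotion started:", task.info.key)
finally:
    Disconnect(si)
```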

 

The flexibility offered by virtualized HANA makes running HANA on an appliance look restrictive to the point of being impractical in some scenarios.  Let’s see how much longer the SAP HANA appliance will continue to be chosen for Production deployment.

 

Just prior to the announcement for full support for SAP HANA for Production use on VMware vSphere 5.5 during the keynote by Pat Gelsinger and Paul Maritz, Bob Goldsand of VMware and Mike Harding of EMC IT presented the technical details of that Production support as well as the virtualized SAP BPC on HANA running in Production on a Vblock at EMC IT.  I will now use information from their presentation to further illustrate the 4 points above:

 

  1. By implementing virtualized SAP BPC on HANA in Production on the same Vblock hosting SAP ECC, SAP BW and other key SAP applications, EMC IT was able to reduce the TCO and time to deploy SAP HANA by 80%!  It was a simple and straightforward virtualized SAP HANA TDI implementation on a VMAX, using established and well understood data center best practices that significantly reduced validation testing
  2. EMC IT was able to use vMotion to live migrate a 512GB SAP HANA instance in under 10 minutes – and that was a HANA instance under heavy workload while it was being moved from one ESX host to another, so there was NO DOWNTIME of any kind (see the sanity check after this list).  Try to beat that with any SAP HANA recovery method!
  3. EMC IT conducted an ROI analysis of virtualized HANA vs. our existing HANA appliance running a sidecar, and concluded that ROI increased by another 40%, since VMware HA was used with vMotion and DRS to allow maximum availability without resorting to the standby HANA node required in most HANA DT or DR scenarios. Additionally, DRS rules allow for maximum utilization of the resources on the Vblock to support EMC’s business goals
  4. New instances of virtualized HANA can quickly and easily be provisioned by cloning in whatever environment has available resources, and those instances can later be repositioned onto the right RUN environment using vMotion
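
A quick sanity check on point 2 (the 512GB and 10 minute figures are from the presentation; the math and the re-copy assumption are mine): moving a 512GB memory image in under 10 minutes implies a sustained transfer rate of roughly 7 Gb/s, and since a busy HANA instance keeps dirtying memory pages during the copy, the real traffic is higher still – consistent with a properly sized 10GbE vMotion network:

```python
def implied_vmotion_rate_gbps(memory_gb, minutes, overhead=1.0):
    """Minimum sustained network rate to move a VM's memory in the given time.
    'overhead' > 1.0 models pages re-copied because the workload keeps
    dirtying memory during the migration (an assumption, not a measurement)."""
    return memory_gb * overhead * 8 / (minutes * 60)

print(f"Clean copy: {implied_vmotion_rate_gbps(512, 10):.1f} Gb/s")              # ~6.8 Gb/s
print(f"With 1.3x re-copy: {implied_vmotion_rate_gbps(512, 10, 1.3):.1f} Gb/s")  # ~8.9 Gb/s
```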

 

You can find out more details about the successful implementation of SAP BPC on HANA virtualized on vSphere 5.5 in Production at EMC IT by reading Mike Harding’s blog.  It is worth noting that EMC’s senior Finance leadership team has reported that not only were forecasting runs improved from 12 hours to 3 hours, enabling numerous “what if” analysis scenarios, but new data loads into the BPC instance went from 40 minutes to 1 minute, which allowed for more iterations and therefore more accurate forecasts!

 

Now, let me tell you that Bob and Mike also provided us with a few more interesting revelations during their presentation:

 

  1. First, if you want to take advantage of all those wonderful capabilities offered by vMotion, then you must use shared storage in BLOCK MODE.  This means that SAP HANA running on GPFS, even if virtualized, will not work with vMotion. EMC will be glad to help retrofit those multitudes of IBM X-series servers with internal SSD running HANA in appliance mode with our market-leading storage platforms under TDI
  2. Second, EMC IT has been using Sybase IQ for ILM using extended storage in Production with Smart Data Access (SDA) for extended tables – basically, this approach allows EMC IT to offload cold data from HANA to Sybase IQ in order to better control the HANA memory footprint (currently 200GB on a 512GB VM running on a Cisco B440), but this approach can allow EMC IT to “exceed” the 512GB size since the HANA data tables can be seamlessly extended to their “extensions” in Sybase IQ. This is SAP-supplied technology and therefore fully supported
  3. Since Go-Live with SAP BPC on HANA on VMware, EMC IT has ‘archived’ around 1 billion records to Sybase IQ, proving that this SAP-provided ILM strategy is real and very practical
  4. So, you can see why I said earlier that the future of SAP HANA implementation in the Enterprise is here, and it does NOT involve the use of a SAP HANA appliance

 

Now, let me briefly discuss why the outlook for SAP HANA in the Enterprise is brighter than ever, especially as SAP and EMC collaborate more and more to surprise the world with what can be done with Big Data – and I mean TRULY Big Data, as in a Petabyte on HANA!  Now, why would anyone need Petabyte-HANA?  The answer is very simply the phenomenon called the “Internet of Things”, which will drive the need to ingest, store, and analyze the unprecedentedly massive amount of data stemming from megatrends like social media, mobile, and so on.  SAP has already been demoing how the combination of SAP HANA, Sybase ESP, and Sybase IQ can provide potential solutions to this new need.

 

Remember that big announcement by Joe Tucci on Monday of EMC’s acquisition of DSSD?  Well, magically, SAP CEO Bill McDermott appeared alongside Joe via a video hookup from Philadelphia to not only say nice things about EMC’s acquisition, but also to acknowledge that SAP has been working for some time now with technologists from DSSD, which will now become a business unit within EMC.

 

So exactly what is going on? What will DSSD be releasing? Well, there was NO product announcement made on Monday, but a quick Google search provided numerous tantalizing explanations, and I am quoting Gigaom’s Stacey Higginbotham, who described what DSSD was working on:

… the startup is building a new type of chip — they said it’s really a module, not a chip — that combines a small amount of processing power with a lot of densely-packed memory. The module runs a pared-down version of Linux designed for storing information on flash memory, and is aimed at big data and other workloads where reading and writing information to disk bogs down the application

 

So, you saw how I explained earlier that you can combine SAP HANA and Sybase IQ using Extended Tables to ‘exceed’ the physical memory of the SAP HANA server, right?  When do you think we will see a server with one petabyte of main memory?  I don’t think anytime soon, do you agree?

 

But what CAN be done is to put Sybase IQ or even Hadoop on super-fast, near-memory-speed next generation Flash storage (like what will likely come out of the DSSD acquisition), and I do believe that we should then see Petabyte-HANA in the very near future.


Let’s all stay tuned in, shall we?


Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP


SAP Data Protection: Let’s Revisit Some Old Concepts in a New Light – Part 2

 

In my first blog post SAP Data Protection: Let’s Revisit Some Old Concepts in a New Light – Part 1, I discussed how you can rethink your SAP data protection strategy with EMC’s innovative RecoverPoint.  I explained how RecoverPoint Local Protection works with SAP: most notably, how LUN states can be used to ‘erase' a mistake or a functional data corruption in a DVR-like fashion, how application-aware bookmarks can be used to create key recovery points which actually mean something to a SAP Basis admin (such as a Pre-Patch or a Post-Patch LUN state), and finally how RecoverPoint’s Consistency Group is practically alone in the SAP market in providing the same sync point across all the databases of a modern and federated SAP landscape in order to facilitate and shorten any disaster recovery effort.

 

In this blog post, I return to the main use case of RecoverPoint for SAP, which is the long distance – and in some cases, very long distance – business continuance & disaster recovery (BC/DR) enabled by RecoverPoint Remote Replication (formerly known as RecoverPoint CRR).

 

As previously mentioned, RecoverPoint Remote Replication brings world class data compression, allowing the same capabilities of RecoverPoint Local Protection to be used at a second data center located thousands of miles away.

 

To illustrate how EMC customers have benefited from RecoverPoint Remote Replication, let me discuss the BC/DR use case at a large customer with a data center on the West Coast of the USA and another data center on the East Coast, separated by a distance of almost 2,500 miles.  This customer has approximately 10TB of SAP Production data in the West Coast data center for 3 business units and 8TB of SAP Production data in the East Coast one for 2 business units, with a 20% data change rate on average – a busy and active SAP Production environment.

 

[Figure: RecoverPoint CRR long-distance replication between the West Coast and East Coast data centers]

 

Like all prudent SAP customers concerned with being able to resume operating their business should a disaster happen, this customer conducts an annual BC/DR drill, not only to practice the full recovery of their SAP environment but also to measure how long that operation takes.

 

This customer measured the time it would take for ALL their SAP systems to be restarted, along with the network infrastructure, so that users could log in and start working again – not just the time it would take to recover one or more SAP databases.  This measurement of RTO is much more real-world, and sets the right expectations with management regarding SLAs for recovering their SAP environments.

 

 

Cutting full-scale DR exercises from days to hours

 

In 2008, when the data centers were without RecoverPoint, the annual DR exercise took 80 hours, with a team of approximately 20 people participating.  The recovery of the SAP environment was done using a combination of log shipping, tape restore, and PIT (point in time) database recovery.

 

In 2009, with 60% of the databases on both sides protected by RecoverPoint, the annual DR exercise was shortened to 32 hours!  Then in 2010, when all the databases on both sides were protected by RecoverPoint, the entire DR exercise took a mere 8 hours!  And that’s 8 hours from when the DR exercise started until people were actually able to log in to all SAP systems and resume their work.

 

[Figure: DR exercise time improvement with RecoverPoint CRR]

 

The customer explained to us that this astounding reduction in time was possible because RecoverPoint’s Consistency Groups allowed for significant time savings during the recovery of the databases, and in addition, allowed those database recoveries to be parallelized.  The customer further indicated that with RecoverPoint, more regular and incremental DR tests can be conducted at either datacenter without disrupting Production at the other datacenter, thereby increasing their businesses' overall confidence in their BC/DR capabilities.

 

This customer is very satisfied with how RecoverPoint Remote Replication allows for bidirectional replication of their large SAP environment (10TB West Coast and 8TB East Coast) with an active data change rate (20%) using a WAN pipe composed of two OC-3 links rated at 150Mb/s – did I mention several times already that RecoverPoint offers world class data compression?
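
To appreciate just how hard that compression is working, consider a back-of-the-envelope calculation.  The data sizes, change rate, and link ratings come from the customer story above; the simplifying assumption that a full day’s worth of changes must cross the WAN uncompressed within 24 hours is mine:

```python
total_tb = 10 + 8     # bidirectional: 10TB West Coast + 8TB East Coast
change   = 0.20       # average daily data change rate
wan_mbps = 2 * 150    # two OC-3 links rated at 150 Mb/s each

# Average rate needed to ship one day's changes in 24 hours, uncompressed.
changed_bits  = total_tb * 1e12 * change * 8
required_mbps = changed_bits / 86400 / 1e6

print(f"Required (uncompressed): {required_mbps:.0f} Mb/s vs. available {wan_mbps} Mb/s")
# ~333 Mb/s needed against ~300 Mb/s available -- and that is the AVERAGE,
# before daily bursts. Without effective data compression, the pipe simply
# could not keep up, let alone hold a tight RPO.
```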

 

You can find out all the details about this real world customer use case by downloading this white paper.

 

Finally, it is worth noting that most EMC storage platforms aimed at the SAP market (e.g. the VMAX and VNX) include built-in RecoverPoint splitters to simplify the implementation of this great solution, which also works with storage from any vendor (IBM, HP, HDS, etc.)

 

Curious? Intrigued?  Come visit the Business Continuity & RecoverPoint for SAP resources page on Everything SAP at EMC and then make an appointment to talk things over with our SAP Specialists and our RecoverPoint experts, and you may never think about SAP data protection and quick application recovery in the same old way again.

 

More to come in my next blog posts when I will take a deep dive into how DataDomain changes data protection for SAP customers.

 

Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP


SAP Data Protection: Let’s Revisit Some Old Concepts in a New Light – Part 1

 

When it comes to talking about EMC innovations for SAP customers, I never get tired of discussing how something as basic as protecting SAP data can take on a whole new look, and even a whole new dimension, when a few of EMC’s innovations are used.

 

I have previously blogged extensively on VPLEX, so I will not talk about VPLEX in this post even though I still have a lot more to say on how SAP customers can benefit from this EMC game changer - it will be for another time.  In this post, I will explore how you can rethink your SAP data protection strategy with RecoverPoint, to be followed with additional blog posts where I will take a deeper look at DataDomain.

 

RECOVERPOINT FOR SAP – NOT JUST FOR LONG DISTANCE DR/BC

In this previous blog post, I discussed how RecoverPoint for SAP is unique in the marketplace in offering a complete federation of the myriad SAP applications in a typical SAP customer environment by using Consistency Groups, and how it provides world class data compression as well as bookmark and journaling capabilities that allow sites located thousands of miles apart to be used in a DR/BC scenario – this capability to offer long distance BC/DR is called CRR (Continuous Remote Replication).  But what exactly is being continuously replicated remotely?

 

To fully understand the magic of CRR, let me first explain the key concept of CDP (Continuous Data Protection), which is the foundation of this great solution.  SAP NetWeaver is more or less a database application, running on top of Oracle (both RAC and single instance), Microsoft SQL Server, IBM DB2, Sybase ASE & IQ, and so on; the data must therefore be protected so that it can be used to restart the database should a problem occur.

 

Any good DBA knows how to recover the database to a Point in Time, the so-called PIT Recovery.  The idea is simple: play back the database redo logs to a particular point in time where the database can be safely restarted.  For example, if a problem (such as a table being corrupted accidentally) occurred at 12:05PM today, perhaps we can recover the database to 12:02PM and work to resolve the issue?

 

Yes, it’s simple, but if you need to go to your backup (on tape or disk) to perform a restore before you can apply the logs, then that operation can take time!  The image below shows the recovery time tradeoffs between tape, disk/snaps, and continuous replication.

[Figure: Recovery time tradeoffs between tape, disk/snapshots, and continuous replication]

 

 

THE CAPABILITY TO ‘ERASE’ A MISTAKE OR FUNCTIONAL DATA CORRUPTION

So how can RecoverPoint and continuous replication offer not only a very short time to restore the database but also a very short time to apply the redo logs?

 

The answer: you can direct RecoverPoint to present to your database server the ‘LUN state’ of the storage at a particular point in time in the recent past – in this case, an automated ‘bookmark’ previously taken by RecoverPoint in its journal! This ‘LUN state’ can immediately be mounted to a server, and the database recovery operation can start practically immediately.

 

Many people compare this capability to what can be done with a DVR (digital video recorder) such as the TiVo where you can in effect ‘rewind’ the ‘live’ TV program in order to watch a particular event again and again.  This capability to take you back in time, and in this case, taking the ‘LUN state’ of the storage attached to your database server back to an earlier time, is what makes continuous replication so magically effective!

 

In effect, you can more or less ‘erase’ the mistake of a table being dropped inadvertently, or of tables corrupted by a crashing program.  Obviously, I have offered a greatly simplified explanation of how RecoverPoint continuous data protection works, so for all of you techies who want to fully understand how it all works under the hood, you can go as deep as you like by perusing these RecoverPoint resources pages.
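
To make that simplified explanation a little more concrete, here is a toy Python model of the journal-and-bookmark idea.  This is emphatically NOT how RecoverPoint is implemented internally – it is just an illustration of why keeping a journal of write history lets you present any earlier LUN state almost instantly:

```python
class ToyJournal:
    """A toy continuous-data-protection journal: every write is recorded
    with a timestamp, and named bookmarks tag significant moments."""

    def __init__(self):
        self.writes = []     # (timestamp, block, data) tuples, in time order
        self.bookmarks = {}  # name -> timestamp, e.g. "pre-patch"

    def write(self, ts, block, data):
        self.writes.append((ts, block, data))

    def bookmark(self, ts, name):
        self.bookmarks[name] = ts

    def lun_state(self, ts):
        """Rebuild the LUN image as of time ts by replaying the journal --
        the 'DVR rewind'. Returns a {block: data} mapping."""
        image = {}
        for wts, block, data in self.writes:
            if wts <= ts:
                image[block] = data
        return image

    def lun_state_at_bookmark(self, name):
        return self.lun_state(self.bookmarks[name])

# Usage: a patch corrupts a table; 'rewind' to the pre-patch bookmark.
j = ToyJournal()
j.write(100, "tableA", "good data")
j.bookmark(101, "pre-patch")
j.write(102, "tableA", "CORRUPTED by failed patch")
print(j.lun_state_at_bookmark("pre-patch"))  # {'tableA': 'good data'}
```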

 

Now, enough of all this LUN state business!  What does any of this have to do with SAP?  Let me put it all in a SAP context by showing you how RecoverPoint allows for recovery to any point in time with application-aware bookmarks.

[Figure: RecoverPoint application-aware CDP bookmarks]

 

In the image above, you can see that RecoverPoint sets up application-aware bookmarks that are tightly coupled with significant events (or LUN states) that SAP Basis admins can easily identify with, such as a Pre-Patch, a Post-Patch, or a key Checkpoint of a long running job!  So if a database crash were to occur while a patch is running, the LUN state at the Pre-Patch stage can easily & rapidly be mounted so that the database recovery can begin quickly – and if the bookmark was taken not long before the crash occurred, the application of the redo logs will be minimal.

 

All right, so I have explained how RecoverPoint can help quickly recover a single SAP database, but the modern SAP customer environment no longer has just one SAP application, like when I started back in 1996 and all we worked with was SAP R/3.  Today, it’s not at all uncommon for customers to have SAP ECC (or ERP), SAP BW, SAP SCM, SAP CRM, SAP SRM, SAP EP (Portal), SAP XI, and Solution Manager running in their landscape, and most important of all, these apps talk to one another, as shown in the diagram below:

[Figure: Cross-system dependencies in a federated SAP landscape]

It’s this federation of SAP apps that makes the recovery of the modern SAP environment so much more difficult and time consuming, because each of these SAP applications must be at the same sync point for the recovery of the landscape to be successful. Of course, given sufficient time, you can recover all these various SAP databases so everything can be restarted together, but RecoverPoint’s Consistency Group effectively eliminates the guesswork of getting all the databases to the same sync point! With a Consistency Group, the LUN states for each of the SAP databases are at the same sync point at ALL times, saving you precious time during the recovery of your federated SAP application landscape.

 

So there you have it: an old and rather basic concept of data protection revisited with a modern and game changing EMC innovation, RecoverPoint for SAP.

 

Curious? Intrigued?  Come visit the Business Continuity & RecoverPoint for SAP resources page on Everything SAP at EMC and then make an appointment to talk things over with our SAP Specialists and our RecoverPoint experts, and you may never think about SAP data protection and quick application recovery in the same old way again.

 

More to come in my next blog posts when I will take a deep dive into how DataDomain changes data protection for SAP customers.

 

Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP


How EMC innovations helped advance the state of the art for SAP infrastructure

 

I recently celebrated my fifth year at EMC, and during my tenure here as SAP Technical Evangelist, I have had the pleasure and the privilege of working with some of our seminal technologies and solutions which I believe have played a significant role in advancing the state of the art for SAP infrastructure.

 

Over the holidays, I reflected back on how our customers have benefited from key EMC innovations in the SAP Production environment, so I thought that I would share a few insights with you about the amazing things that EMC customers have done.

 

VPLEX Metro for SAP and Oracle RAC

Many of you will remember that I have blogged extensively about this game changing solution, the only one of its kind in the SAP market today which offers an Active-Active, always-on SAP environment!  We’re not talking about a disaster recovery solution here, because there is NO recovery to be done, ever – even if one of the 2 data centers were to literally go up in smoke!

 

Our customer Dow Corning, the world leader in silicones and silicon-based technology based in Midland, Michigan, implemented VPLEX Metro for SAP and Oracle RAC in Production in 2012 as part of its IT transformation effort, and the results have been stunning:

  1. Achieved continuous SAP system availability with 2 data centers located sufficiently apart to assure an always-on SAP environment thanks to the SAP Active-Active implementation on Oracle RAC, thus reducing the complexity of any disaster recovery or failover scenario
  2. Achieved maximum computing resource utilization, because VMware DRS (Distributed Resource Scheduler) and vMotion can automatically move SAP workload from one site to another to take advantage of available resources and handle increased computing needs
  3. Achieved maximum flexibility with VMware provisioning, because new VMs or even entire SAP environments can be created at either site depending on what’s available, and then easily “positioned” for active workload at the chosen site later on
  4. Achieved storage federation between the 2 sites with VMAX for Tier 1 and VNX for Tier 2 applications, with simplified provisioning of new storage requirements
  5. Achieved active-standby for applications not running on Oracle RAC, which means that RTO for those applications during a fail-over is very short

 

I invite you to watch this video in which Mike O’Keefe, Director of Global IT Infrastructure at Dow Corning, discussed how VPLEX has made a difference in his company.

 

VPLEX Metro for SAP is not limited to environments using Oracle RAC, although if you want Active-Active, then Oracle RAC must be used (we are working to qualify IBM DB2 pureScale, so please do stay tuned).  Indeed, our customer Adelaide Brighton Limited in Australia implemented VPLEX Metro for SAP with Microsoft SQL Server, and its SAP environment can be recovered in minutes in case of a failover from one site to another.

 

 

RecoverPoint for SAP

Since SAP is such a mission critical application, customers are always preoccupied with having a viable business continuance / disaster recovery (BC/DR for short) plan.  Implementing such a plan is not a trivial task, and implementing it between 2 sites located thousands of miles apart becomes an even more daunting project.  Add in the fact that today’s modern SAP landscape has several applications – ECC, BW, CRM, SRM, EP, and so on – which must be recovered at the same sync point before people can start working again, and the challenge becomes even greater!

 

EMC RecoverPoint for SAP is alone in the market in offering a complete federation of the myriad SAP applications in a typical SAP customer environment by using Consistency Groups, and it offers world class data compression as well as journaling capabilities that allow sites to be located thousands of miles apart, like our customer Columbia Sportswear is doing between Oregon and Colorado and James Hardie is doing between California and Illinois – Columbia Sportswear is a world leader in trendy and comfortable outdoor apparel, while James Hardie is the world leader in fiber cement siding and backerboard, and I invite you to watch Mike Leeper of Columbia Sportswear and Steve Killian of James Hardie explain on video how EMC innovations have helped their companies.

 

As most SAP customers have multiple storage vendors and storage types in their environment, EMC solutions and technologies work well in this multi-vendor world: RecoverPoint and VPLEX work with EMC and non-EMC storage, which allows the customer the flexibility to use the right storage for the right application at the right location.

 

 

FAST VP for VMAX and VNX2

Finally, one EMC innovation which is very near and dear to my heart is FAST (Fully Automated Storage Tiering), which was first put in Production by our customers on the VMAX, and now on the VNX2.  Once again, EMC is alone in the market in being able to increase SAP application performance while decreasing CAPEX and OPEX at the SAME time, and FAST works with Virtual Provisioning (VP) for better performance and resource utilization – hence we call this game changing innovation FAST VP for SAP.

 

I have blogged extensively on FAST VP and the phenomenal TCO results that our customer Callaway Golf has achieved, and similar TCO and ROI numbers were also observed at our customers Eli Lilly and TXU Energy, among others; today, FAST VP on VMAX is in Production at the biggest SAP customers in the world.  Once again, I invite you to watch this video in which Chinh Van, Director of Global SAP Infrastructure at Callaway Golf, explains how EMC technologies such as FAST VP helped drive innovation at his company.  I also invite you to watch a 10-minute video demo of FAST VP on the VMAX in action.

 

 

I hope that I have presented sufficient information on how EMC innovations have changed the state of the art for SAP infrastructure, and we have many more innovations to come.  Perhaps the best way for you to find out more is to join us for a day at our SAP Week at EMC, which is usually held at an Executive Briefing Center (EBC) in either a SAP or EMC location.  For the first quarter of 2014, we have 3 such sessions:

  1. Atlanta, Georgia, February 11-12, 2014 at SAP America Executive Briefing Center
  2. Paris, France, March 6, 2014 (most presentations and sessions will be in French) at the EMC office in Bezons
  3. Santa Clara, California, March 4-6, 2014 at the EMC Executive Briefing Center

 

 

I hope to see you at one of these sessions!

 

 

Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP


I have seen the future of SAP HANA implementation in the Enterprise – Part 2

 

In my earlier blog post “I have seen the future of SAP HANA implementation in the Enterprise – Part 1”, I presented some of the facts as to why the future of SAP HANA implementation in the Enterprise may not involve a SAP HANA appliance; instead, SAP HANA deployment in Production will be done under the guidelines of Tailored Datacenter Integration (TDI for short), and perhaps even using virtualized SAP HANA with VMware.

 

In this blog post, I will first discuss in detail what running SAP HANA on TDI means to you the customer, as well as the very strict requirements for such an implementation (such as no support for commodity hardware, as an example) and other key points to keep in mind.  Then I will review and comment on the stunning results of testing virtualized SAP HANA on VMware, as disclosed by Bob Goldsand in his standing-room-only session ITM246 “The Power of Virtualizing SAP HANA with VMware” at SAP TechEd’13 in Las Vegas.

 

 

Running SAP HANA under the Tailored Datacenter Integration (TDI) program

 

In response to concerns from its largest customers regarding the “rigidity” of the implementation model for SAP HANA (e.g. you can only use a certified appliance in bare metal mode), SAP quietly announced at SAPPHIRE 2013 in Orlando that its MaxAttention customers will be able to choose to deploy SAP HANA on their enterprise storage and infrastructure and not have to implement a SAP HANA appliance in order to be supported while running in Production.  Since that announcement in May 2013, SAP has had over 50 pilot customers and the program is now widely available.

 

As expected, the TDI program comes with numerous requirements and stipulations, and there is a lot of fine print which needs to be read:

 

  1. First, the components to be used in the SAP HANA environment under TDI must be certified by SAP, so this means that both the servers and the storage will need to be certified
  2. Second, it is the customer who is now responsible for the integration and the support of the SAP HANA environment under TDI.  This requirement is not at all surprising if the customer is already in the MaxAttention program, where the customer IT team partners closely with SAP Active Global Support (AGS) by having a Center of Excellence in house, and where SAP AGS personnel can be “embedded” within the customer’s IT organization.  So, it would appear that TDI may not be a possibility for just any SAP customer (there have been unconfirmed reports that a customer wishing to implement SAP HANA under TDI could ask SAP AGS to waive the requirement to be a MaxAttention customer, but this may or may not be applicable to your individual situation, so do check carefully before you assume that TDI is for you)
  3. So in effect, more flexibility requires more responsibilities as SAP is shifting the task of coordinating, integrating, and supporting hardware from various vendors to the customer wishing to run SAP HANA in TDI mode.  This is why the close collaboration between the customer’s IT staff and SAP AGS under MaxAttention is crucial to making TDI work
  4. So far, SAP has certified servers from IBM, HP, Cisco, and their usual HANA server partners, as well as storage from EMC, IBM, NetApp, and their usual HANA storage partners, but the task of making sure that there are enough resources to successfully implement SAP HANA in Production under TDI is left to the customer to figure out with the various hardware vendors.  As an example, an EMC customer can certainly use an existing VMAX to implement SAP HANA in TDI mode, but EMC Professional Services may need to be engaged to assess whether that VMAX has enough performance resources available to support the SAP HANA environment
  5. Additionally, the customer must also negotiate with its hardware partners which file system will be implemented, and that’s not as trivial as you might think.  For example, IBM chooses to run SAP HANA on GPFS on its appliance, whereas Cisco-EMC chooses block mode on its appliance.  So what will we be running if the customer wants to use IBM servers on EMC VMAX storage?  Will it be GPFS or will it be block mode?  These will be interesting discussions, at least until TDI reaches a certain level of maturity
  6. SAP has just started the process of certifying the SAN (aka the Enterprise Network) for use with SAP HANA running under TDI, and this process is expected to be completed in early 2014.  So, you can see that there is a lot more groundwork to be done with all the various stakeholders before TDI can be used by the average SAP customer, and this explains why TDI is really only for MaxAttention customers

 

In summary, TDI is a very exciting and interesting possibility: a SAP customer with well-defined standards, processes, and procedures for its enterprise infrastructure can deploy SAP HANA in Production without using an appliance.  But as previously stated, more flexibility also comes with more responsibilities and integration/coordination work!  That said, I firmly believe that more and more SAP customers will be running SAP HANA under TDI in 2014 and beyond.

 

 

Running SAP HANA virtualized under VMware in Production

 

Running SAP HANA under TDI certainly offers the SAP customer additional flexibility in leveraging existing investments in SAP infrastructure, processes, and procedures, and so far the discussion has been about doing this integration on physical, bare-metal hardware.

 

But things become a lot more interesting, and the possibilities more tantalizing, when we steer the discussion toward running SAP HANA virtualized under VMware.  To be sure, SAP NetWeaver has been running fully virtualized on VMware in Production for a very long time now, and the benefits of running SAP on VMware are well understood.  Up until now, however, SAP has allowed HANA to run virtualized on VMware in non-Production mode only, with significant restrictions, such as prohibiting key VMware features like vMotion and DRS (Distributed Resource Scheduler), and such restrictions effectively made SAP HANA on VMware more or less useless!

 

However, things have changed quite a bit in the last few months, and because this blog post is already getting rather long, let me spare you the suspense and present the following facts, which make running SAP HANA virtualized on VMware THE thing to do in 2014, especially in conjunction with TDI.

  1. First, VMware has been working hard behind the scenes with SAP, HP, IBM, and VCE to get SAP HANA on VMware supported in Production, with full support for vMotion and DRS.  In fact, VMware has staffed a support team at SAP HQ in Walldorf to meet SAP’s stringent Production support requirements
  2. Second, at SAP TechEd ’13 in Las Vegas, VMware and SAP jointly presented the fruits of their collaboration in a standing-room-only session where Prakash Darji, Global VP for Data Warehouse Solutions and SAP HANA Platform at SAP, introduced Bob Goldsand of VMware
  3. Bob Goldsand has been the lead Technical Architect for the virtualized SAP HANA project at VMware, and here is what he shared with the audience:
    • Successful tests have been conducted showing that SAP HANA can run virtualized on VMware with up to 1TB of memory in Scale Up mode, with vSphere 5.5 as the recommended version (see the pre-flight sketch after this list)
    • Performance and load tests using the Composite Benchmark Transaction Processing & Operational Reporting (CBTR), the SAP BW Enhanced Mixed Load Benchmark, and the SAP-H Data Warehousing Workload showed the delta in performance between native (bare-metal) and virtualized HANA to be under the acceptable 10% threshold, so from a performance perspective there is essentially no added risk in running SAP HANA virtualized
    • Scale Out support is being worked on and is planned for Q1 2014 – more details to come
    • The SAP HANA virtual machines will need to run on certified hardware, the same hardware that would be used to run SAP HANA in bare-metal mode – no surprise there, but it means that commodity hardware cannot and should not be used in Production for such a key application as SAP HANA
    • Full support for Live Migration of SAP HANA virtual machines using vMotion, opening the door to very interesting possibilities for disaster recovery and business continuance, far more practical than the lazy restart
    • Full support for Distributed Resource Scheduler (DRS) and DRS Business Rules, which can preserve the allocation of all virtual resources during a live migration, thus maintaining a consistent performance level.  Once again, BC/DR for SAP HANA will be far more interesting and practical when virtualized under VMware
    • Full support for High Availability using VMware HA, but only for the Scale Up model – this simplifies the implementation of HA for SAP HANA quite a bit, and at a lower cost
    • VMware and SAP plan to formalize Production support for SAP HANA virtualized on VMware in Q1 2014 – please check OSS note 1788665 and other relevant OSS notes for updates on this topic
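
To illustrate the configuration limits in play, here is a minimal Python sketch that pre-checks a proposed Scale Up HANA VM against the vSphere 5.5 per-VM maximums (64 vCPUs, 1 TB of vRAM) and the 1 TB virtualized HANA memory figure cited above. The helper itself is hypothetical; confirm the limits against VMware’s configuration-maximums documentation and the relevant SAP OSS notes before relying on them.

# Pre-flight check for a proposed Scale Up SAP HANA VM on vSphere 5.5.
# The limits reflect the vSphere 5.5 per-VM maximums and the 1 TB
# virtualized HANA figure cited in the session; verify both against
# current VMware and SAP documentation.

VSPHERE_55_MAX_VCPUS = 64
VSPHERE_55_MAX_VRAM_GB = 1024   # 1 TB of vRAM per VM in vSphere 5.5
HANA_VIRT_MAX_MEM_GB = 1024     # 1 TB Scale Up limit cited above

def check_hana_vm(vcpus, vram_gb):
    """Return a list of problems with the proposed VM spec (empty list = OK)."""
    problems = []
    if vcpus > VSPHERE_55_MAX_VCPUS:
        problems.append(f"{vcpus} vCPUs exceeds the vSphere 5.5 per-VM maximum")
    if vram_gb > VSPHERE_55_MAX_VRAM_GB:
        problems.append(f"{vram_gb} GB vRAM exceeds the vSphere 5.5 per-VM maximum")
    if vram_gb > HANA_VIRT_MAX_MEM_GB:
        problems.append(f"{vram_gb} GB exceeds the 1 TB virtualized HANA limit")
    return problems

if __name__ == "__main__":
    for vcpus, vram_gb in [(40, 512), (64, 1024), (80, 2048)]:
        issues = check_hana_vm(vcpus, vram_gb)
        print((vcpus, vram_gb), "OK" if not issues else issues)

A VM at exactly 64 vCPUs and 1,024 GB passes, while anything larger must wait for bigger limits in later vSphere releases – which is precisely why Scale Out support (see the bullet above) matters for landscapes beyond 1 TB.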

 

To close, I recommend that you review the ASUG webcast “New Enterprise Architecture Requirements to Run SAP Business Suite on HANA,” jointly presented by Moses Nicholson of Deloitte Consulting LLP and Bob Goldsand of VMware – this webcast outlines a very compelling storyline for this new way of implementing SAP HANA in Production.

 

So, you can see that we are entering an entirely new era for deploying SAP HANA in Production.  While the appliance model remains an option, customers now have a lot more flexibility with TDI and with virtualized SAP HANA on VMware.

 

Stay tuned!

 

Tim K. Nguyen

SAP Global Technical Evangelist & EMC Cloud Architect

EMC Global Solutions Marketing - SAP
