
VMAX – Flash Forward!

Posted by AudreyOSull Feb 29, 2016

Flash adoption continues to accelerate as organizations look to modernize their data centers. At the heart of this new modern data center is VMAX All Flash, specifically engineered to drive IT’s transformation into this new era. By delivering higher performance, predictable latency, increased density, reduced power and cooling, and improved customer experience, it’s pretty clear why users are excited for the newest revolution of VMAX. 

 

The introduction of high capacity SSDs now delivers these technology benefits with an improved overall TCO that’s hard to ignore. The result is attractive economics that will quickly have users deploying VMAX All Flash systems going forward.

 

[Image: Adv Tech.png]

 

 

What’s New?

 

The new VMAX All Flash provides a proven platform optimized for flash drive technology. The system’s architecture delivers gigantic scale, allowing users to consolidate tens of thousands of workloads with massive levels of efficiency. Multicore CPUs drive high IOPS, and massive front-end and back-end bandwidth moves large data sets, allowing performance-hungry apps to take full advantage of flash. The VMAX All Flash HYPERMAX OS has been “flash-tuned.” The result is an average response time of under 500 microseconds (#PunchIt) for a transactional workload such as a database with a typical mix of reads/writes, block sizes, and cache hits. This translates into more transactions and faster queries to satisfy demanding business workloads.

 

There’s also some really cool “secret sauce” built into the code to support high capacity flash drives. The large global cache services the majority of IO from cache rather than from the back-end flash drives. All writes are serviced from cache to reduce cell wear and extend the useful life of the flash drives.

 

[Image: Secret Sauce.png]

 

But there’s also a unique “write folding” operation that helps avoid unnecessary disk IOs when hosts re-write an address range. All writes are cached, so as apps write and rewrite the same data (such as database updates and logs), the number of writes sent to the back-end flash drives is minimized, further reducing cell wear. In a typical VMAX app environment, 30-50% of writes can be rewrites.
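To make the idea concrete, here is a minimal sketch of write folding in Python. It is an illustration of the concept only, not HYPERMAX OS code: a rewrite to the same address simply replaces the cached copy, so only the latest version is ever destaged to flash.

```python
# Minimal write-folding sketch (illustrative only, not HYPERMAX OS code).
# Rewrites to the same logical address are absorbed in cache, so only the
# most recent version of each address is destaged to flash.

class FoldingCache:
    def __init__(self):
        self.dirty = {}          # logical address -> latest data
        self.host_writes = 0     # writes received from hosts
        self.flash_writes = 0    # writes actually sent to flash

    def write(self, address, data):
        self.host_writes += 1
        self.dirty[address] = data   # a rewrite just replaces the cached copy

    def destage(self):
        # One flash write per unique dirty address, however many rewrites hit it.
        for address, data in self.dirty.items():
            self.flash_writes += 1   # in a real array: issue the backend IO here
        self.dirty.clear()

cache = FoldingCache()
for i in range(10):
    cache.write(0x100, f"log record v{i}")   # same address rewritten 10 times
cache.destage()
print(cache.host_writes, "host writes ->", cache.flash_writes, "flash write")
```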

 

In addition, intelligent “write coalescing” helps out in a big way by merging host writes issued at different times to the same addresses into one large write. Not only does this make writes to flash more efficient, it also minimizes the number of IOs required to read and write RAID parity protection. The end result is data written to flash in chunks 2 to 5 times the size of the average host IO, providing a huge reduction in overall back-end IO and drive wear.
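Again as a concept sketch rather than the actual implementation, coalescing can be modeled as merging pending writes that touch adjacent addresses into fewer, larger backend IOs:

```python
# Illustrative write-coalescing sketch: pending writes to adjacent addresses
# are merged into one large backend IO, so flash (and RAID parity) is updated
# in fewer, bigger chunks than the original host writes.

def coalesce(writes, max_chunk=128 * 1024):
    """writes: list of (offset, length) host IOs; returns merged backend IOs."""
    merged = []
    for offset, length in sorted(writes):
        if merged:
            start, cur_len = merged[-1]
            end = start + cur_len
            # merge if this write touches the previous run and stays small enough
            if offset <= end and (offset + length) - start <= max_chunk:
                merged[-1] = (start, max(cur_len, offset + length - start))
                continue
        merged.append((offset, length))
    return merged

host_ios = [(0, 8192), (8192, 8192), (16384, 8192), (65536, 4096)]
backend_ios = coalesce(host_ios)
print(len(host_ios), "host writes ->", len(backend_ios), "backend writes")
# 4 host writes -> 2 backend writes (three adjacent 8 KB IOs become one 24 KB IO)
```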

 

VMAX All Flash also includes advanced drive wear analytics, optimized for high capacity SSDs, to make sure writes are distributed across the entire flash pool, balancing the load and avoiding burning out individual drives. Not only does this help manage the SSDs in the storage pools, it also makes it easy to add and rebalance additional storage in the system.
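Conceptually, the placement side of this amounts to steering each new write toward the least-worn drive in the pool. The sketch below is an invented illustration of that idea; the real analytics are far more sophisticated:

```python
# Illustrative wear-balancing sketch: new writes go to the least-worn SSD in
# the pool, and a newly added drive starts cold so it naturally absorbs work
# until the pool rebalances. Invented for illustration only.

import heapq

class FlashPool:
    def __init__(self, drive_ids):
        # min-heap of (wear_level, drive_id); wear ~ cumulative bytes written
        self.drives = [(0, d) for d in drive_ids]
        heapq.heapify(self.drives)

    def place_write(self, size):
        wear, drive = heapq.heappop(self.drives)   # pick the least-worn drive
        heapq.heappush(self.drives, (wear + size, drive))
        return drive

    def add_drive(self, drive_id):
        heapq.heappush(self.drives, (0, drive_id))  # cold drive absorbs new writes

pool = FlashPool(["ssd0", "ssd1", "ssd2"])
for _ in range(6):
    pool.place_write(1)
print(sorted(pool.drives))   # wear spread evenly across the pool
```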

 

The new Unisphere gets “All Flash-ified” as well. Service levels are simplified: everything gets <1 ms response time, with a choice of “Diamond” or “Optimized.” Diamond applies a performance target and uses workload planner to measure the front end, back end, cache, and remote links to advise whether new apps will fit. Optimized balances apps that do not require a performance target, such as test/dev and inactive apps, across the remaining resources.
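The “will it fit” advice boils down to checking headroom on each shared resource before admitting a new app. Here is a toy version of that check; the resource names follow the text above, but the 80% threshold and the logic are assumptions made purely for illustration, not the actual Unisphere workload planner:

```python
# Toy "will it fit" check in the spirit of workload planner. The resources
# match the text (front end, back end, cache, remote links); the threshold
# and logic are illustrative assumptions.

HEADROOM_LIMIT = 0.80   # assumed: keep each resource below 80% busy

def fits(current_util, new_app_load):
    """Both args: dicts of resource -> fraction of capacity consumed."""
    for resource in ("front_end", "back_end", "cache", "remote_links"):
        projected = current_util[resource] + new_app_load.get(resource, 0.0)
        if projected > HEADROOM_LIMIT:
            return False, f"{resource} would reach {projected:.0%}"
    return True, "app fits within the Diamond performance target"

util = {"front_end": 0.55, "back_end": 0.40, "cache": 0.60, "remote_links": 0.20}
ok, reason = fits(util, {"front_end": 0.10, "back_end": 0.15, "cache": 0.05})
print(ok, "-", reason)
```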

Good Bye Growing Pains…

[Image: NewUnisphere.png]

 

Another key capability of the VMAX All Flash is the new packaging and licensing model. A simplified modular building-block configuration and appliance-based software packaging remove the friction and complexity, making it easier for users to consume the capacity, performance, and high-end data services.

 

In a nutshell, users can start small with a single V-Brick that includes the software. V-Brick and drive upgrades are prepackaged to simplify ordering and budgeting. For flexibility, capacity upgrades on new V-Bricks are not required to be identical to the existing V-Bricks, meaning capacity adds can be more granular to better fit a user’s requirements. In addition, having a single flash tier to manage makes sizing for different workloads with a range of IO skews (what % of data generates what % of IO) simple, since the entire workload runs on all flash.
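A quick worked example of why the skew question disappears. On a tiered array you size the flash tier to the estimated hot data set, so you have to get the skew right; on all flash, flash capacity simply equals total capacity and the estimate drops out. A toy calculation:

```python
# Toy sizing illustration: tiered arrays need a skew estimate; all flash does not.

def hybrid_flash_tb(total_tb, hot_data_pct):
    # Classic tiering rule of thumb: if hot_data_pct of the data generates
    # most of the IO, the flash tier is sized to hold that hot set --
    # but only if the skew estimate is right, and stays right.
    return total_tb * hot_data_pct / 100

print(hybrid_flash_tb(500, 20), "TB flash tier needed if 20% of data is hot")
print("All flash: flash capacity = total capacity; no skew estimate required")
```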

 

The VMAX All Flash can get big (really BIG) by easily adding capacity for growth without having to deal with capacity-based software licensing charges. This makes it easy for users to consume the replication, management, and app integration data services, maximizing the value of their storage. From there, scale out with additional V-Bricks to add performance, connectivity, and throughput. This allows capacity and performance to be scaled independently.

 

Where’s the Beef…What About Data Services?

 

One of the key data services of VMAX All Flash is SRDF (Symmetrix Remote Data Facility). With 20+ years of engineering innovation and proven deployments over global distances of more than 12,000 kilometers, it supports the world’s most mission-critical applications at scale. Fun fact: SRDF has the industry’s highest attach rate for replication, with a 70% penetration rate. So most users don’t buy VMAX; they buy SRDF and run it on VMAX. The big change here is the super simplification. For example, configuring SRDF Metro takes less than 2 minutes.

 

[Image: Cloud.jpg]

 

Another key VMAX All Flash data service is local snapshot replication. SnapVX is a new snapshot technology designed specifically for performance and scale. Up to 64,000 devices can have 256 snaps each, allowing the system to support 16 million snaps. We have seen examples where snapshots under extreme IO loads are created and mounted with nearly undetectable performance impact. The space savings for snaps is comparable to arrays that dedupe, because pointer-based copies eliminate the need for full copies to support test, dev, training, etc. And app awareness via AppSync makes copy data management the killer replication tool for DBAs who want to create and manage their own copies.
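To illustrate how pointer-based snaps save space (a concept sketch, not actual SnapVX internals): taking a snap records pointers to the source’s current data rather than copying it, and only subsequent changes consume new space.

```python
# Illustrative pointer-based snapshot sketch in the spirit of SnapVX.
# A snap records pointers to the source's current blocks instead of a full
# copy; only changed tracks consume new space. Not actual SnapVX code.

class Volume:
    def __init__(self, tracks):
        self.tracks = dict(tracks)     # track number -> data block
        self.snaps = {}                # snap name -> {track: pointer}

    def snapshot(self, name):
        # Space-efficient: the snap just points at the current blocks
        # (in a real array this is pointer metadata, not a data copy).
        self.snaps[name] = dict(self.tracks)

    def write(self, track, data):
        # Snaps keep their old pointers; only the new write consumes space.
        self.tracks[track] = data

vol = Volume({0: "A", 1: "B"})
vol.snapshot("snap1")          # no data copied, only pointers recorded
vol.write(1, "B'")             # source changes; snap1 still sees "B"
print(vol.snaps["snap1"][1], "vs", vol.tracks[1])
# Scale in the text: 64,000 devices x 256 snaps each = over 16 million snaps.
```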

 

What’s Next?

 

[Image: Whats Next.jpg]

 

The roadmap for later this year includes inline compression, which provides the best space-savings “bang for the buck” for typical VMAX workloads such as databases and transactional systems. It supports all data services, including encryption and replication. For highly compressible data sets, it provides higher utilization of flash capacity and reduces the cost per usable TB, all without compromising functionality.
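Conceptually, inline compression means compressing each chunk on the write path and keeping the compressed form only when it actually shrinks, so incompressible data costs nothing extra. A minimal sketch of the idea (not the VMAX implementation):

```python
# Illustrative inline-compression sketch: compress on write, keep the
# compressed form only when it saves space. Not the actual VMAX code path.

import zlib

def store_chunk(chunk: bytes):
    compressed = zlib.compress(chunk)
    if len(compressed) < len(chunk):
        return ("compressed", compressed)
    return ("raw", chunk)             # incompressible data stored as-is

db_page = b"customer_row:" * 300      # highly repetitive, like many DB pages
kind, payload = store_chunk(db_page)
print(kind, f"{len(db_page)} -> {len(payload)} bytes",
      f"({len(db_page) / len(payload):.1f}:1)")
```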

 

And migrating to VMAX All Flash gets super easy and transparent. Support for non-disruptive migrations, all managed via Unisphere, will help accelerate tech refreshes of older VMAX systems by eliminating the “friction” of migrating apps to a new system: no downtime, and no need to involve the app and server admins. The storage admin simply picks the LUNs to migrate, copies the data and device presentation, enables the new channels, and removes the old device. DONE!
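Those four steps can be sketched as pseudocode. Everything below, class and method names included, is a placeholder invented for illustration, not the actual Unisphere migration API:

```python
# Invented sketch of the non-disruptive migration flow described above:
# pick the LUN, copy data and device presentation, bring up the new
# channels, then retire the old device.

class Array:
    def __init__(self, name):
        self.name = name
        self.luns = {}

    def provision_like(self, lun_id, size):
        self.luns[lun_id] = {"size": size, "online": False}

    def enable_channels(self, lun_id):
        self.luns[lun_id]["online"] = True   # host paths to the new copy come up

    def remove_device(self, lun_id):
        self.luns.pop(lun_id)

def migrate(lun_id, size, old: Array, new: Array):
    new.provision_like(lun_id, size)   # copy the device presentation
    # ... data copy runs here while the host stays online ...
    new.enable_channels(lun_id)        # cut over to the new channels
    old.remove_device(lun_id)          # retire the old device -- done

old, new = Array("older VMAX"), Array("VMAX All Flash")
old.luns["0042"] = {"size": "2TB", "online": True}
migrate("0042", "2TB", old, new)
print(new.luns, old.luns)
```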

 

VMAX All Flash Rocks The Modern Data Center

 

[Image: Cloud2.jpg]

 

This year will be a big year for data center modernization initiatives, and at the center of many is VMAX. Engineered to support the densest flash possible, it allows users to leverage the efficiencies and benefits of all-flash systems without compromising the capabilities and value that high-end systems deliver to support mission-critical apps at scale.

Buckle up, folks, and get ready to #FlashForward.

 

Blog by Scott Delandy

One of the biggest values of virtualization to a business is the ability to get more out of the physical resources in the data center. When it comes to making the case to the company’s controller of the purse strings, it comes down to the increase in return on investment that virtualization can deliver. The ability of IT services to help the business achieve more revenue is a compelling and relevant draw.

 

While the output is great for business, virtualization also makes the IT administrators’ core assignments that much more complex. Pre-virtualization, an administrator could conceivably map the relationships between compute, storage, and networking mentally or manually without too much difficulty. If issues arose, he or she could look at this map and understand how an issue might impact other parts of the data center. With virtualization, that is no longer possible.

 

The resources of a single physical server are logically distributed to multiple virtual machines. Taking storage alone, a virtualized physical server can be attached to many more storage devices than it would be as a standalone server. Add shared networking ports and shared compute resources, and the infrastructure map becomes exponentially more complex to monitor, troubleshoot, and actively manage.

 

[Image: pic1.png]

One can draw similarities to technology that has changed the way people develop and manage social relationships. Before technology like the telephone became part of a consumer’s life, their social relationships were few but strong and personal. As technology enabled them to develop more relationships, their individual networks expanded and transcended geography and demographics. Just like virtualization, an expanded network can bring immense value, but it also introduces complexity. How can you ensure everyone knows who you are, what you believe, and what you find important? It’s much easier when you see every person you know every day than when you rely on a contact book to manage all the people you now know to some degree.

 

Virtualization definitely delivers value to the business, but it also creates a headache for the IT team that needs to manage it.  That’s where software intelligence can address their pain points.  Software provides the ability to consume information much faster, associate components, analyze situations, and recommend actions for an administrator to mitigate potential issues.  Software intelligence allows administrators to work smarter and more efficiently, and a software-defined data center enables virtualization to be sustainable as the business grows.

 

Virtualization does not change the core requirements of having compute, networking, and storage. Thus storage is a vital cog in maximizing the benefits of virtualization while making the lives of administrators easier.

 

Specific to how storage brings benefit to a software-defined data center, I will focus on the VMAX3 in these two areas:

  1. Management and orchestration
  2. Monitoring and remediation

 

How can VMAX3 make it easier for administrators or the management software to leverage the value of the storage array?

 

How can VMAX3 enable administrators or the management software to learn the health of the storage devices and maintain the health of the environment?

 

First, storage arrays have been around much longer than virtualization. Companies value the operational efficiencies, data integrity, and data protection that storage arrays have traditionally provided. Adding virtualization to the data center is meant to compound those positive effects, but providing all these capabilities to the IT administration team does not mean the outcome is easily achievable. Technology is great, but unless you have the talent and time to work through all its intricacies, it’s not really usable for most companies. Technology companies like Apple and Google have been able to hide the complexities of technology and simplify the application at the surface, allowing the general population to realize value that positively impacts their lives.

[Image: pic2.png]

VMAX3 and VMware strive to hide the complexities of storage topologies and the assortment of configurations to make virtualization a great investment. VMAX3 has great technologies like replication, backup and restore, and encryption. As more administrators look to the virtualization layer as the single point of management for their infrastructure, VMAX3 ensures that value can be realized at that layer. VMware’s core management console for virtualization is vSphere vCenter, and there are integration points for VMAX3 active management, including the storage features, that simplify management functions into one console. Another example is VMAX3 SRDF/Metro, which enables an active/active storage service between VMAX3 arrays in different locations. It can now be leveraged by VMware across two data centers as a vSphere stretched cluster.

[Image: pic3.png]

Though provisioning IT resources is important, most IT administrators spend their time monitoring and remediating issues. VMware vRealize Operations offers a rich array of tools to monitor the virtualized environment end to end, with a simplicity that lets IT administrators easily gauge the health of the environment and address issues both reactively, for issues that have already taken place, and proactively, to avoid potential ones. VMAX3’s integration points with vRealize Operations provide critical information about the storage layout and corresponding metrics like performance and utilization, ensuring vRealize Operations can incorporate VMAX3 storage into its monitoring and remediation capabilities.

 

We live in the age of the internet, where it’s no longer how much you know that separates you from the rest, but how you apply that knowledge. In this age of IT, most vendors deliver fairly similar capabilities; the ones that separate from the rest make the application simpler and the value easier to realize. VMAX3 delivers on that with its industry-leading enterprise storage capabilities and, just as importantly, its integration with VMware, which lets organizations realize that value.

As any of us who work with enterprise businesses know well, data migrations have always been challenging events for enterprise customers. The complexity and size of these very large storage environments makes planning for, scheduling, and executing migrations extremely difficult. One of the most useful things that a storage vendor can supply is a simple, self-service migration tool that allows its users to migrate data at a time that is easiest and most convenient for them.

 

EMC provides this capability with Open Replicator, a comprehensive utility that allows users to copy device data to or from various types of storage arrays within a storage area network (SAN) infrastructure. It can be used to migrate data to VMAX Family arrays from EMC arrays and from a large variety of qualified third-party arrays. EMC provides migration software such as Open Replicator at no charge and without any time limit on the right to use the software for migrations.

 

Migrations using Open Replicator are generally performed using a hot pull operation. In an Open Replicator migration, one of the pre-migration tasks involves cutting the migrating host over to the target control array prior to beginning the migration. The migration is considered “hot” because after this short outage, which is normally only a few minutes, the entire migration occurs with the host and applications online.

[Image: one.png]

Figure 1. Hot Pull Migration with Hosts Online

 

This short disruption is the only one required when performing an Open Replicator migration. The host can resume production as soon as the migration starts and has full access to the data on the control volume (the migration target) while the data is copying, remaining online during and through the conclusion of the migration.

 

The migration is a “pull” operation because the target of the migration is the control device, the device that exists on the VMAX array running Open Replicator, and the data is being pulled toward the control devices.

 

Configuring and performing a migration using Open Replicator is simple and intuitive and can be completed in only a handful of steps. Open Replicator migrations are created, managed, and monitored under Data Protection in the Unisphere GUI.

 

New migration sessions are configured under SAN View, which shows all of the remote volumes accessible on the SAN and which local and remote storage controller ports they are available on. Open Replicator does not require any special type of director and uses the same front-end ports it uses to access its volumes, meaning the existing cabling can be used with no special configuration required other than zoning the VMAX FA ports to the external storage controller ports.

 

Migrations, which are configured in only four simple steps, are started with the selection of the volumes to be migrated.

[Image: two.png]

 

After clicking “Create Copy Session”, the self-guided wizard opens and shows the four steps that will be taken to configure the migration sessions. The total number of sessions allowed is 512.

 

In this case, the operation we are choosing is a hot pull.

[Image: three.png]

 

Clicking “Next” allows the volumes to be paired. The source and target devices are chosen and the “Add Pair” button is clicked for each of the choices.

[Image: four.png]

 

The session can be given a name that helps the user identify it quickly. For a hot pull migration operation, the copy option, which is the default, can be chosen so that the data begins copying as soon as the migration is activated. Pre-copy can instead be chosen so that the data copy begins immediately, without waiting for the session to be activated.

 

With donor update, host writes to the migration target devices, which are called control devices in Open Replicator, are copied back to the source devices, which are known as remote devices. This allows the user to halt the migration and go back to the source volumes for any reason while preserving any data written to the target volumes after the host cutover to the target array.

 

Front-end zero detection can be chosen so that all-zero thin device extents are not copied from the source to the target, an important space-saving feature that adds value in the all-thin-provisioned VMAX3.

[Image: five.png]
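The copy behavior on this screen can be sketched in code. Below is an invented illustration of how front-end zero detection avoids copying all-zero thin extents during the pull; the extent size and function names are assumptions, not product internals:

```python
# Invented sketch of front-end zero detection during a hot pull: all-zero
# thin device extents are skipped so they never consume space on the
# thin-provisioned target.

EXTENT = 64 * 1024   # assumed extent size for this sketch

def pull_extents(source_extents, zero_detect=True):
    copied = []
    for ext in source_extents:
        if zero_detect and ext == b"\x00" * len(ext):
            continue                 # all-zero extent: nothing to copy
        copied.append(ext)           # in a real pull: write to the control device
    return copied

extents = [b"\x00" * EXTENT, b"data" * 16384, b"\x00" * EXTENT]
print(f"{len(extents)} source extents -> {len(pull_extents(extents))} copied")
```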

 

After confirming that the session is configured properly in the fourth screen, clicking Finish completes the setup and allows the copy session to be created.

 

The copy session has been created and can be activated and monitored from the “Open Replicator Sessions” screen. It is that simple to set up a migration using Open Replicator and Unisphere.

 

The final step, from the “Open Replicator Sessions” screen, is to select all of the sessions that were created and activate them.

[Image: six.png]

 

The session status, which is now CopyInProgress, can be monitored from this screen.

 

Once the copy is complete, the status changes to Copied. The host is still in production on the target storage, and the migration can still be failed back to the source array if necessary. To complete the migration, select the device pairs and click Terminate. With donor update selected, a force action is needed to acknowledge that replication back to the source volumes will be stopped when the session is terminated.
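Pulling the whole flow together, here is a toy model of the session lifecycle just walked through. The state names (Created, CopyInProgress, Copied) follow the Unisphere screens described above; the class itself is an invented sketch, not Solutions Enabler code:

```python
# Toy model of an Open Replicator hot pull session lifecycle as described
# in this post. State names follow the Unisphere screens; the class is an
# illustrative sketch only.

class HotPullSession:
    def __init__(self, control_dev, remote_dev, donor_update=False):
        self.control = control_dev    # target device on the VMAX (pull side)
        self.remote = remote_dev      # source device on the external array
        self.donor_update = donor_update
        self.state = "Created"

    def activate(self):
        self.state = "CopyInProgress"   # host resumes production immediately

    def copy_complete(self):
        self.state = "Copied"           # failback to the source still possible

    def terminate(self, force=False):
        if self.donor_update and not force:
            raise RuntimeError("donor update active: terminating stops "
                               "replication back to the source; use force")
        self.state = "Terminated"       # migration done

session = HotPullSession("vmax:0042", "thirdparty:lun7", donor_update=True)
session.activate()
session.copy_complete()
session.terminate(force=True)
print(session.state)
```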

 

The migration is now complete and the data was migrated with the host online. When all volumes from the source array have been migrated, the source array can be removed from the SAN. It is that simple to migrate data onto a VMAX3.

 

Along with Open Replicator, EMC also offers a solution that allows completely online data migration in qualified host environments running PowerPath. EMC PowerPath Migration Enabler (PPME) is a hybrid migration solution that provides the ability to perform completely non-disruptive migrations while leveraging another underlying technology, such as Open Replicator, TimeFinder/Clone, or Host Copy. PPME has virtually no impact on host resources when utilizing array-based replication, and it eliminates application disruption caused by the migration, reducing migration risk and simplifying migration operations.

Written by Daniel Chiu - EMC Business & Solutions Development

 

IT-as-a-Service may be a buzzword for most folks, but for most companies it will become the model by which they consume and utilize technology services. There are many reasons, but I will focus on the ability of companies to respond to market demands.

 

The advancement of technology has enabled countless people in every company to deliver business value. Where in-depth knowledge was once primarily possessed by specialists and experts, technology like content management and search engines proliferated and unchained the treasured data for everyone’s consumption. Examples we see every day are Office 365 and Google Apps. With the proliferation of data comes a need to discern all this content into substantive information. Data is only important if it turns into something of value. Software intelligence like data analytics, complete with a simple interface, enables many people to do quick and streamlined analysis that can lead to innovation that benefits their organizations. Data acquisition is no longer a bottleneck. Neither is information acquisition. The speed at which companies can generate data and turn it into pertinent information is so much faster than it was even ten years ago. It is now up to the company’s IT infrastructure to help turn that into business value.

[Image: img1.png]

One of the critical factors in turning opportunity into business success has become latency: the ability to respond to the market quickly, and more importantly, faster than your competition. As the hurdles to acquiring technology resources lessen, the field of competition will grow, and time-to-market becomes vital to seizing opportunities.

 

IT organizations are tasked with delivering on these business demands and market dynamics. With flat budgets and out-of-control data center growth, the existing model makes delivering the needed quality of service nearly impossible. Business units resort to what we call shadow IT, acquiring technology services through clouds like Amazon Web Services outside the purview of IT to mitigate latency and speed the process, at the expense of control, governance, and compliance.

 

Speed-to-business is a very significant driver for why IT today needs to transform itself. It’s not only a challenge but a cultural shift for the entire company to change the image of IT from a mere deliverer of hardware and software to a consultant to the business on how to respond to the market faster. While this shift will not be achieved easily or quickly, it is indeed a journey that delivers many benefits, including cost efficiencies, business effectiveness, and security.

 

At EMC, we have been on that journey to transform our own IT organization to have a more consultative role with the business. EMC IT has become integral to the business, and its goals are aligned with our business objectives. In many cases, IT teams have consolidated and defined more business-focused roles for their staff. In that approach, IT has developed new levels of expertise in technology areas such as virtualization, cloud, data science, security risk analysis, and core application environments such as SAP, Microsoft, Salesforce.com, and Software-as-a-Service applications.

[Image: img2.png]

Let’s take a look at a few key areas of EMC IT that have transformed to deliver IT-as-a-Service:

 

  • Cloud Infrastructure Transformation – the goal was a software-defined data center that can automate and orchestrate 50% of the 30PB of data center workloads. Shifting to this model reduces deployment time and the complexities produced by traditional manual IT processes and customizations made over time. The primary steps were to:
    • Standardize processes for repeatability
    • Shift to a self-service model for scalability, customer-centric experience

The enabling technologies were converged infrastructure, in the form of VCE Vblocks, which helps simplify deployment and implementation, and virtualization, to make the best use of the physical resources. The results were compelling: what took dozens of manual steps over a period of 45 days to deploy environments became a streamlined automated process that takes just 30 minutes. While automation is important, an agile development process will further improve operations.

 

  • Business Analytics-as-a-Service – analytics has been a major tool to help the business save or make more money. The need for analysts to run these reports and queries is there, but IT is often unable to deliver the quality of service they require, leading to shadow IT that can endanger corporate data. EMC IT has implemented Greenplum Data Computing Appliances, hired data scientists, and utilized the private cloud to deliver sandboxes where analysts can load data for analysis, as well as opportunities to consult with experts on advanced statistical techniques.

 

  • Standardizing Business Applications for Agility – historically, EMC had heavily customized its legacy ERP application to satisfy specialized and evolving business requirements. The constant customization resulted in difficulties maintaining and upgrading the system, as well as delays in responding to change. The standardization on SAP as the new ERP platform with minimal customization required executive sponsorship, a strong control board, and focused execution to deliver on time and on point. The solution comprised Vblocks utilizing VMware software and VMAX storage, with Site Recovery Manager and Symmetrix Remote Data Facility for synchronous replication and disaster recovery. As a result, we now have a system that can adapt to business changes more quickly and easily and reap efficiency gains through the virtualized infrastructure. Sales orders that took 25 minutes before now take 5 minutes. Initial IT cost savings stand at $11 million, and that number will grow in the years ahead.

[Image: img3.png]

While every IT organization is different, the need to respond to evolving market dynamics is common throughout. IT needs to focus not just on operational latency but on its impact on business latency. The changes ahead will require a cultural shift in how IT becomes part of the business process, as well as a change in how IT as an organization is viewed.