
Everything Oracle at Dell EMC


 


In 2016, Dell EMC announced a new addition to the VMAX All Flash family, the VMAX 250F. The VMAX 250F, shown in Figure 1, is ideally suited for customers with modest capacity requirements who still want the enterprise capabilities of VMAX All Flash: high performance, availability, reliability, and built-in data protection.

 

Figure 1: VMAX 250F


It offers the following benefits:

  • Reliable performance of over 1 million IOPS
  • Starts at 11 TB and scales to 1 PB of effective flash capacity with data reduction
  • 99.9999% availability
  • Orderable with advanced replication, data encryption, storage management, data protection, and access to cloud storage tiering
  • Scales out to two V-Bricks
  • Advanced data services and a multi-controller architecture
  • Response times under 1 ms
  • Virtual provisioning to create new storage devices easily, in seconds

 

In this blog I am going to discuss best practices for running Oracle on VMAX 250F storage arrays. First, let me talk about server and storage connectivity best practices with special reference to the Oracle database. From an Oracle database performance and high-availability perspective, each host's HBAs should connect to at least two VMAX ports, preferably on different directors, as shown in Figure 2. This increases overall system availability.

 

For HBAs and VMAX ports/directors, the following best practices apply:

 

  1. When zoning host initiators to storage target ports, place each pair on different switches for best availability.
  2. Use two or more HBAs per database node for better availability and scalability (Figure 2).
  3. Dell EMC PowerPath software can provide automatic path failover.
  4. Spread connections across engines/directors first, then across ports on the same director; availability increases when connections are spread "wide" first.
  5. Connect at least two HBAs across redundant fabrics for high availability.
  6. Zone/mask each HBA port to two VMAX ports (Figure 2).

 

Extending point 1 above, it is better to avoid Inter-Switch Links (ISLs). ISLs are shared resources with unpredictable utilization, and a few ISL paths cannot sustain many server-to-storage paths, as shown in Figure 2. ISLs can become performance bottlenecks because total throughput is limited by the number of ISL connections.

 

Figure 2: ISLs between two Fibre Channel switches

 

Lastly, the number and speed of HBA ports should support the planned bandwidth and IOPS. FC negotiates down to the lowest component speed among the host, storage, and switch ports, as seen in Figure 2.

 

In summary, an Oracle database is complex to manage while maintaining the required level of performance. To achieve a reasonable level of performance, optimal tuning of the server and storage parameters is extremely important. In the next blog, I will discuss the database architectural considerations we should take into account to get enhanced performance from the Oracle database.

 

 

 

 


As mentioned in my last blog, DEW'17 was all about product announcements and the future directions in which Dell EMC plans to race ahead. In the build-versus-buy continuum debate, Dell EMC has announced many products as Ready Nodes, Ready Bundles, and Ready Systems. Before we delve deeper into the concepts, let us understand each term separately. As per Chad Sakac's blog,

 

Ready Nodes = software + server;

Ready Bundles = software + servers/network/storage;

Ready Systems = software + CI/HCI.

 

Taking the first category, the offerings are summarized in Figure 1.

Figure 1: Different offerings of Ready Nodes

 

Digital transformation is the corporate strategy for many organizations today. Database consolidation, security, performance enhancement, and application enablement are the main areas where Ready Bundles come in handy. A Ready Bundle simplifies implementation and turns complexity into simplicity. By following best practices, it delivers expert-designed sizing and validated solutions in just the right configuration for a typical organization, and it also offers exceptional performance, cost savings, easy ordering, and faster time to value. The Dell EMC Ready Bundle for Red Hat OpenStack Platform is shown in Figure 2.

 

Figure 2: Dell EMC Ready Bundle for Red Hat OpenStack Platform

 

In summary, the key benefits of Dell EMC's Ready Bundles are as follows:


Superior performance: Faster transaction and query processing through an expert-designed solution built on Dell EMC hardware, creating dynamic business results with agile, open, and flexible Ready Bundles.


Significant cost savings: Significant cost reduction through hardware resource consolidation and Oracle database licensing savings, which increases profit margins through out-of-factory OEM licensing, storage attach, and configuration enrichment.


Faster time to value: Pre-architected and validated to shorten the design cycle and reduce implementation risk. System Builder, the tool for Dell EMC Ready Bundles, generates solutions of various sizes based on customer requirements, with bundle ordering and L1/L2 support in later stages. Customers can unlock efficiency with JetPack cloud automation.


Future-ready scalability: Exceptional scalability with Oracle Database, SQL Server 2016, SAP HANA, and more; designed for growth with easy, non-disruptive, modular scalability.


Better manageability and protection: Easy to deploy and manage, with simplified management. Ready Bundles also provide protection such as disaster recovery and backup solutions with RecoverPoint.

 

The third option is Ready Systems. Solutions built on Vblock, VxRack, VxBlock, and VxRail (converged or hyper-converged architectures) can be termed Ready Systems. In this category we currently have the Ready System for SAP HANA on Vblock (converged architecture). The architecture is depicted in Figure 3.

Figure 3: Demonstration of a Ready System

 

Converged Infrastructure provides the speed and ability to power SAP HANA while remaining cost effective. Vblock Systems seamlessly integrate leading compute, network, and storage technologies to provide an optimized converged infrastructure solution that ensures secure and predictable performance through pre-engineered, modular infrastructure. Vblock Systems provide the highest levels of virtualization and application performance. Organizations can choose between procuring SAP certified appliance hardware for SAP HANA or leveraging SAP HANA tailored data center integration (TDI) to capitalize on existing infrastructure investments. SAP HANA TDI offers the ability to deploy SAP HANA software on standard Vblock Systems to leverage the benefits of VCE’s converged infrastructure solutions. Vblock 700 family systems provide a pre-engineered, pre-validated, and pre-tested platform on which to run SAP HANA. This platform comprises the storage, networking, compute, and optionally, the virtualization layer. It is supported as a single product, meaning there is no need to worry if the support issue was caused by a networking, storage or compute component. The platform provides a stable and reliable base for an SAP HANA installation. With SAP HANA TDI, organizations can consolidate SAP HANA workloads alongside both SAP and non-SAP workloads on the Vblock Systems shared infrastructure.

 

I hope you can now appreciate the differences between Ready Nodes, Ready Bundles, and Ready Systems, along with their applications and implementations.

 

 


Dell EMC World recently concluded with many launches and product announcements, such as the Dell 14G PowerEdge servers. More than 13,000 people converged in Las Vegas this week for Dell EMC World, the annual tech conference of Round Rock-based Dell Technologies, as can be seen in Figure 1.

Figure 1: Dell EMC World 2017

 

At this event, many great products were showcased and many big announcements were made. The biggest of all was the release of the Dell EMC 14G next-generation servers. This product portfolio brings salient features such as lower-latency NVMe storage and a 50% increase in GPU-compute density. There is also tighter coupling with software-defined storage (SDS) environments and Dell's new OpenManage Enterprise console, a GUI consolidation of products that were previously spread across different areas of a typical data center. OpenManage can control and provision resources across Dell EMC environments and acts as a single pane of glass to manage all Dell EMC compute and storage units.

Let us now take a look at some of the top products showcased during the conference. The first, which has a bright future and huge potential based on the spectacular run rate of its earlier versions, is XtremIO X2, as depicted in Figure 2.

Figure 2: XtremIO X2 box

 

XtremIO X2 offers a 25 percent increase in data reduction capabilities compared with the previous version, and supports scale-up in addition to scale-out, letting customers add storage density to previously deployed nodes while cutting total cost of ownership by half on a per-gigabyte basis. It delivers low latency; unmatched storage efficiency with inline, all-the-time data services; rich application-integrated copy services; and unprecedented management simplicity. The USPs of this product can be summarized as follows:

  • 4 to 20 times data reduction using inline deduplication and compression, XtremIO Virtual Copies, and thin provisioning, with two times more XtremIO Virtual Copies.
  • Enhanced Integrated Copy Data Management (iCDM).
  • Multi-dimensional scaling.
  • Elegant software driven performance improvements.
  • Dramatically lower TCO with efficiency guarantees.
  • Enables unique copy data management capabilities to unlock business agility through improved workflows.
  • Reduces costs by one third and provides smaller, incremental scaling options.
  • Provides two times longer storage product lifecycles than traditional arrays.

 

The next product that comes to mind is the next-generation Unity, as depicted in Figure 3.


Figure 3: Dell EMC Unity box

 

Dell EMC Unity is the midrange storage portfolio, optimized for all-flash performance and simplicity. It utilizes CPU and memory optimally with 16 times the storage density, with up to 80 15.3-TB SSDs, and a 30 percent performance improvement over the current generation. The new models can handle a maximum file size of 256 TB compared with the current 64 TB, and installation takes less than 10 minutes. All new Dell EMC Unity models include a 4x larger file system with inline file compression, iCDM with snapshot mobility, simpler mapped RAID protection, and support for external encryption key management via KMIP (Key Management Interoperability Protocol). Additionally, Dell EMC Unity features an 8x increase in density and 8x more effective file system capacity than its predecessor, and is 33% faster than previous generations.

In the data protection arena, the product released at Dell EMC World 2017 that is worth mentioning is the Integrated Data Protection Appliance, depicted in Figure 4.

Figure 4: Data Protection Suite

 

The Integrated Data Protection Appliance is a new turnkey offering that combines the company's Data Domain backup appliance with a new version of its Data Protection Suite software. The appliance allows customers to go from installation to backup in less than three hours. Dell EMC protects more than 150 PB of data in the cloud, which is 2x more than the closest competitor. Data Protection Suite introduces disaster recovery and protection storage in the cloud. The appliance is up to 10x faster to deploy than traditional, build-your-own data protection alternatives and offers 20% faster performance than the closest competitor. It also brings industry-leading deduplication (an average 55:1 deduplication rate) for data residing both on premises and in the cloud.

 

Many more products were unveiled at Dell EMC World and are worth mentioning. I will discuss those products in my next blog.

 





With Dell EMC VxRack powered by VMware Cloud Foundation, Dell EMC and VMware have built a turnkey hyperconverged solution designed to support both traditional and cloud-native database workloads. Dell EMC VxRack was designed to let enterprises and service providers more easily and cost-effectively deploy the foundation for a complete software-defined private cloud, with the flexibility and scalability required to support Oracle databases that are becoming more complex and resource-hungry, which in turn further impacts database performance. In this regard, Dell EMC is developing hyperconverged platforms like VxRack that meet the demanding and ever-changing requirements of businesses and of the databases that run on top of VxRack because of its many benefits. As part of this effort, Dell EMC has also published a paper that demonstrates the efficacy of running Oracle databases in a VxRack environment.


What

An 8-page product overview, "Advantages of Dell EMC VxRack Systems for Oracle Databases," highlights benefits including:

  • Easy provisioning — Use automation for efficient installation, running, and performance tuning of Oracle databases.
  • Ease of use — A preconfigured, loaded, tested, and fully optimized IT stack, delivered from the factory as a fully assembled solution to help run Oracle databases.
  • Building blocks for growth — Use step-sized building blocks for future data-center environments.
  • Compelling economics — Use a single support vendor and automated updates to lower the total cost of ownership (TCO), OPEX, and CAPEX.
  • Oracle licensing on VxRack Systems — Understand Oracle database licensing in a VxRack environment.

When

Available immediately to general public on emc.com: Download here

Why

Conversation starter for sales and pre-sales to use with prospects. Reference the hyperlink in email, or print and leave behind.

Who

  • Marketing communications: Sam Lucido and team
  • Editing and production: Indranil Chakrabarti, Colleen Jones, Sam Lucido
  • Sponsorship: Keith Miracle

Next

A blog to explain the paper. Click here. To get a presentation for this paper, click here.

Questions

Indranil Chakrabarti, Sam Lucido

 



When we look at a typical organization's pain points, we see that maintaining stable performance is a top priority and challenge as data volumes increase exponentially, along with the following challenges:

 

  • Complexity in handling structured and unstructured data with varied business requirements, performance SLAs, and so on.
  • Direct and indirect constraints on database scalability due to isolated restrictions on data center components such as servers, networking, and storage.
  • Manageability and access issues as the amount of data generated and collected explodes.
  • Protecting the data at any cost, since a breach damages a company's reputation and goodwill, not to mention the attendant costs.
  • Ways to ingest and consume data by eliminating inefficiencies, automating processes, and improving productivity.

 

Looking at these challenges, I tried to think of a single-vendor solution that would take care of all of them in a cost-effective manner, one that not only reduces CAPEX and OPEX substantially but also frees Oracle DBAs from daily firefighting. The solution that comes to mind is hyperconverged infrastructure (HCI), and within the HCI domain, Dell EMC VxRack is considered the best choice.

Let me now introduce VxRack, depicted in Figure 1. VxRack is a turnkey hyper-converged solution powered by VMware Cloud Foundation. These systems consist of pre-loaded software and compute, storage, and network components in a hyper-converged stack co-engineered by Dell EMC and VMware. VxRack is capable of supporting both traditional and cloud-native workloads. VxRack starts with as few as 8 Dell PowerEdge server nodes and scales up by adding nodes one at a time, or scales out by adding racks. A single VxRack system can span 190+ nodes across more than 8 cabinets, with PowerEdge R630 or R730xd systems that bring greater capacity and 40 percent more CPU performance without costing more. For anyone who isn't a hyper-scale provider and doesn't have a vast "hardware as a service" team, VxRack is a strong fit.

 

Figure 1: VxRack Systems

 

 

 

There are two flavors of VxRack, both of which work equally well with Oracle databases:

1. Dell EMC VxRack System FLEX (VxRack FLEX)

2. Dell EMC VxRack System SDDC (VxRack SDDC)

 

The benefits of using VxRack FLEX are as follows:

  • Asymmetric scale: Configure the hyperconverged infrastructure to add compute and storage capabilities that address the specific workload requirements of Oracle databases.
  • Bare metal: Run the Oracle database on a physical (non-virtualized) infrastructure that reads data directly from disk while remaining part of the hyperconverged infrastructure.
  • Easy provisioning: Use automation to simplify management and storage lifecycle capabilities for efficient installation, running, and performance of Oracle databases.
  • Flexible personality support: Provision multiple hypervisors and dedicated storage.
  • Elastic: Grow the system with more flexibility and options, as and when required, without application interruption or downtime.
  • Building blocks for growth: Use step-sized building blocks for future data-center environments.
  • Ready to use: Use a preconfigured, loaded, tested, and fully optimized IT stack, delivered from the factory as a fully assembled and supported solution to help run Oracle databases with minimal tuning of database configuration and performance parameters.
  • Compelling economics: Use a single support vendor, built-in management reporting, and automated updates to lower the total cost of ownership (TCO) and significantly improve your OPEX and CAPEX.

 

The benefits of using VxRack SDDC are as follows:

  • Dedicated resource pool: The workload domain feature enables the creation of separate pools of resources dedicated to Oracle databases.
  • Lower TCO: Lower cost because the databases are part of a larger standardized infrastructure.
  • Better granularity of management: The ability to specify capacity, performance, and availability characteristics in an isolated workload domain.

 

VxRack Systems are ideal for Oracle databases because the infrastructure eliminates hardware complexities and provides flexibility, performance, and protection at enterprise levels. For example, with the current VxRack System setup, the database environment can be protected using Avamar software with a Data Domain system, Data Domain with Dell EMC DD Boost for Oracle, and RecoverPoint for Virtual Machines. Enterprise-class support for the hyperconverged infrastructure, with lifecycle system assurance, ensures the best system performance. Other benefits of using VxRack Systems with Oracle databases include:

 

  • All-flash storage configurations maximize performance for mission-critical workloads, delivering sub-millisecond physical reads and writes for Oracle databases.
  • Extreme scalability, enabling growth and performance by adding nodes; adding nodes is seamless and non-disruptive to an existing VxRack System, making expansion easy.
  • A hyperconverged solution accelerates standardization, speeds up deployment, and simplifies IT operations.
  • Support for the entire integrated stack through a single support organization for all layers, which helps refocus investment from OPEX to CAPEX.
  • The database infrastructure ensures continued compliance with Oracle database licensing.

 

In my next blog, I will talk more about the details of Oracle Database licensing on VxRack and its components.

 



In my previous blog, I talked about five important features of Oracle Database 12.2. In this blog I continue the discussion with five more important features of that release. Let's start with the sixth item.

 

The sixth feature is real-time materialized view refresh. In addition to ON COMMIT and ON DEMAND refresh, materialized join views can now return fresh results when DML has taken place against the base tables, without the need to commit such a transaction; this is done at the statement level, so it does not update the MV itself. If we query the MV directly, we can use the FRESH_MV hint to get the same up-to-date data. While retrieving the recent data, Oracle merges the MV log data with the MV itself to return the correct result.
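
As a rough sketch of how this looks in practice (the table and MV names here are illustrative, not from any specific environment), a real-time materialized view is created with ENABLE ON QUERY COMPUTATION and can then be queried with the FRESH_MV hint:

CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID, SEQUENCE (prod_id, amount) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW sales_mv
  REFRESH FAST ON DEMAND
  ENABLE ON QUERY COMPUTATION          -- marks the MV as "real-time"
  AS SELECT prod_id, COUNT(*) cnt, SUM(amount) total_amt
     FROM   sales
     GROUP  BY prod_id;

-- Querying the MV directly still returns up-to-date results
SELECT /*+ FRESH_MV */ prod_id, total_amt FROM sales_mv;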

 

The seventh is the Big Data Management System infrastructure, which takes care of the four V's of big data: volume, variety, velocity, and veracity. This feature allows more Hadoop users to combine map-reduce processing with the essential database qualities that many applications require. It can be accessed from languages such as REST, Java, SQL, Python, R, and Scala, and it supports many types of analysis, such as SQL, Spark, graph, spatial, and machine learning. It can work with NoSQL stores, traditional databases, and Hadoop data, which can be accessed through external tables. External tables are used by both SQL*Loader and Oracle Data Pump, and thus also by the ORACLE_LOADER and ORACLE_DATAPUMP access drivers. The Hadoop Distributed File System (HDFS) and Apache Hive are the other two most important data sources here.
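
For reference, this is what a basic external table looks like with the ORACLE_LOADER driver (the directory path, file name, and columns below are purely illustrative; Big Data SQL layers additional drivers for HDFS and Hive on top of this same mechanism):

CREATE DIRECTORY ext_dir AS '/u01/app/oracle/ext_data';

CREATE TABLE sales_ext (
  prod_id   NUMBER,
  prod_name VARCHAR2(50),
  amount    NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('sales.csv')
)
REJECT LIMIT UNLIMITED;

-- Query it like any other table
SELECT COUNT(*) FROM sales_ext;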

 

The eighth is hot cloning of a PDB without application outages or downtime. Hot cloning of a pluggable database (PDB) removes the need to set the source system to read-only mode before creating a full or snapshot clone of a PDB. With this feature, you can clone your production PDBs and create point-in-time copies for development or testing without any application outage.
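
A minimal sketch of such a hot clone (the PDB names and database link are hypothetical, and the source CDB is assumed to be in local undo mode and ARCHIVELOG mode):

-- From the target CDB root, clone a remote production PDB while it stays open read-write
CREATE PLUGGABLE DATABASE dev_pdb FROM prod_pdb@prod_cdb_link
  FILE_NAME_CONVERT = ('/oradata/prod_pdb/', '/oradata/dev_pdb/');

ALTER PLUGGABLE DATABASE dev_pdb OPEN;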

 

The ninth is In-Memory FastStart. The In-Memory Column Store allows objects (for example, tables, partitions, and subpartitions) to be populated in memory in a compressed columnar format. Until now, the columnar format has only been available in memory, which meant that after a database restart the In-Memory Column Store had to be populated from scratch using a multi-step process that converts traditional row-formatted data into the compressed columnar format. In-Memory FastStart enables data to be repopulated into the In-Memory Column Store at a much faster rate than previously possible by saving a copy of the currently populated data on disk in its compressed columnar format. This significantly reduces the time it takes to repopulate data after a system restart, so businesses can start taking advantage of the analytic query performance of the columnar format sooner.
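
A brief sketch of enabling the feature (the tablespace name is an assumption, and the DBMS_INMEMORY_ADMIN call reflects my understanding of how FastStart is switched on, so verify against the documentation):

-- Designate a tablespace to hold the FastStart copy of the columnar data
CREATE TABLESPACE fs_tbs DATAFILE '+DATA' SIZE 10G;
EXEC DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE('FS_TBS');

-- Objects are then populated into the In-Memory Column Store as usual
ALTER TABLE sales INMEMORY PRIORITY HIGH;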

 

The tenth is an improved security posture for the database. Database Vault is an important security feature of Oracle Database. When we implement Database Vault on an existing application, there is always a chance the application will break due to DBA or human error. In 12.2, Database Vault can be enabled in simulation mode to address this: while in simulation mode nothing breaks, and problems can subsequently be fixed using the log, which records everything that would have been blocked. We can therefore work proactively and fix all the problems before the actual enforcement. Encryption, decryption, and rekeying of existing tablespaces with Transparent Data Encryption (TDE) is also possible now. An existing tablespace can be encrypted, or migrated to an encrypted tablespace, without any stoppage, and the data encryption keys used by TDE tablespace encryption can be rotated in the background, again without downtime. Encryption is important, but until now it had to be done offline for existing data. Oracle 12.2 allows online encryption and re-key operations: the command copies a data file to a new encrypted one, then switches the database to the new encrypted file, all online. It is now also possible to encrypt internal tablespaces (SYSTEM, SYSAUX, UNDO, and so on).
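
By way of illustration only (the tablespace and file names are made up), online tablespace encryption in 12.2 looks roughly like this:

-- Encrypt an existing tablespace online; data files are copied and switched over without downtime
ALTER TABLESPACE users ENCRYPTION ONLINE
  USING 'AES256' ENCRYPT
  FILE_NAME_CONVERT = ('users01.dbf', 'users01_enc.dbf');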

 

Summary

In the last two blogs I covered the top 10 features of the new Oracle 12.2 release. As I said in my last blog, the new features look very promising but are presently available only in the cloud version. We hope they will work equally well in the on-premises version.

 

 

 

 

 



Oracle has released the latest version of its database, 12.2, initially as a cloud version. In this blog and the next, I discuss ten of its most important features.


First, let me talk about sharding, as it is the most important feature in this release.

Sharding is an application-managed scaling technique that uses many independent databases. It is essentially horizontal partitioning of data across independent databases: the databases that hold the partitions are called shards, and the data is distributed across them as data volumes. Each shard holds a subset (either a range or a hash) of the data, has its own CPU, memory, flash, and disk, holds a portion of the rows from the partitioned tables, and is replicated for high availability and scalability. It is like "scale-up" vertical partitioning followed by "scale-out" horizontal partitioning. Sharding is the dominant approach for scaling massive websites and is used in custom applications that require extreme scalability and are willing to make a number of tradeoffs to achieve it. With sharding, application code dispatches each request to a specific database based on a key value, and queries are constructed on the shard key. Data is de-normalized to avoid cross-shard operations (no joins), and sharding also provides fault isolation. Sharding is illustrated in Figure 1.

Figure 1: Pictorial representation of DB sharding
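
To give a flavor of the DDL (the table, columns, and tablespace set below are illustrative, and a configured shard catalog with shard directors is assumed), a sharded table is created with a consistent-hash partitioning clause:

CREATE SHARDED TABLE customers (
  cust_id  NUMBER NOT NULL,
  name     VARCHAR2(100),
  region   VARCHAR2(20),
  CONSTRAINT customers_pk PRIMARY KEY (cust_id)
)
PARTITION BY CONSISTENT HASH (cust_id)
PARTITIONS AUTO
TABLESPACE SET ts_set_1;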

 

The second important feature is PDB-level I/O control, provided in the new release by the MAX_IOPS parameter. We can specify the limit either as I/O requests per second or as megabytes of I/O per second. This limit can only be applied to a PDB, not to the multitenant container database (CDB) or to a non-CDB.
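
A quick sketch of how such a limit might be set from inside the PDB (the PDB name and values are illustrative):

ALTER SESSION SET CONTAINER = sales_pdb;
-- Cap the PDB at 10,000 I/O requests per second ...
ALTER SYSTEM SET max_iops = 10000 SCOPE = BOTH;
-- ... or cap it by bandwidth instead (megabytes of I/O per second)
ALTER SYSTEM SET max_mbps = 400 SCOPE = BOTH;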

 

The third is auto-list partitioning, an extension of list partitioning that enables the automatic creation of partitions for new values inserted into the partitioned table. Auto-list partitioning helps DBAs maintain partitioned tables with a large number of distinct key values that require individual partitions, and it automatically copes with unplanned partition key values without the need for a DEFAULT partition.
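
For example (the table and partition names are illustrative), the AUTOMATIC keyword is all that changes compared with ordinary list partitioning:

CREATE TABLE orders (
  order_id NUMBER,
  region   VARCHAR2(20),
  amount   NUMBER
)
PARTITION BY LIST (region) AUTOMATIC
( PARTITION p_east VALUES ('EAST') );

-- Inserting an unseen key value silently creates a new partition
INSERT INTO orders VALUES (1, 'WEST', 100);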

 

The fourth is adaptive query optimization, which brought the biggest change to the optimizer in Oracle Database 12c. It is a set of capabilities that enable the optimizer to make run-time adjustments to execution plans and to discover additional information that can lead to better statistics, especially when existing statistics are not sufficient to generate an optimal plan. There are two distinct aspects of adaptive query optimization: adaptive plans, which focus on improving the execution of a query, and adaptive statistics, which use additional information to improve query execution plans. Figure 2 illustrates the components.

Figure 2: The components that make up the new Adaptive Query Optimization functionality
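
As I understand it, 12.2 also splits the older OPTIMIZER_ADAPTIVE_FEATURES switch into two parameters, so each aspect can be controlled separately (the values below simply restate what I believe the 12.2 defaults to be):

-- Adaptive plans (e.g., switching join methods at run time) default to TRUE
ALTER SYSTEM SET optimizer_adaptive_plans = TRUE SCOPE = BOTH;
-- Adaptive statistics (SQL plan directives, statistics feedback) default to FALSE
ALTER SYSTEM SET optimizer_adaptive_statistics = FALSE SCOPE = BOTH;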

 

The fifth is enhanced index usage tracking. Indexes now gather extended usage statistics by default as queries run, such as the number of times an index was used, its last usage time, and much more. All the information is recorded in the DBA_INDEX_USAGE data dictionary view. Index monitoring before 12.2 had many problems; in 12.2 it is enabled by default and tracks usage at run time (as opposed to at parse level).
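
A simple way to look at this information (the column names are from my recollection of the DBA_INDEX_USAGE view, so verify them against your data dictionary):

SELECT owner, name, total_access_count, last_used
FROM   dba_index_usage
ORDER  BY total_access_count DESC;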

 

Conclusion

The new 12.2 features definitely address some of the shortcomings of earlier Oracle Database versions. I find these features helpful, but we will need to test and observe how they pan out in the on-premises release, which will be available very shortly. In part II of this blog, I discuss some more 12.2 features.

 

 

 



VCE Vision is an advanced architecture for converged infrastructure management. It discovers system health and compliance and federates the data so that the health and compliance of multiple systems can be managed from a single dashboard and shared with other IT management tools. The architecture is depicted in Figure 1.

 

 

 

Figure 1: VCE Vision software architecture

 

The architecture comprises two main modules: Core and Multi-Systems Management (MSM). Both reside on each VCE System's AMP (Advanced Management Platform), but in separate virtual machines. Core communicates with the element managers of the VCE System components and with individual devices directly, over a variety of southbound protocols and methods. It collects the data and transforms it into functional streams (VCE System inventory, capacity, health, logs, and events), then processes and stores the data in a PostgreSQL relational database management system. Finally, it provides the Core services and applications with a common platform (a RabbitMQ message broker) to send and receive messages asynchronously, ensures the persistence of messages until delivery to a consumer, and passes data northbound to the MSM (for Vision dashboard visualization) and to third-party tools such as the VMware vCenter Web Client via a plug-in.

 

Having looked at the product architecture, the obvious next question is why we need it and why it matters. A modern IT organization spends the bulk of its operational resources (around 70 percent) just "keeping the lights on": performing routine activities such as integrating technologies, upgrading them with new firmware, and monitoring and securing them. To put this into perspective, the siloed nature of traditional infrastructure and operations is largely responsible for this high cost and inefficiency. In this context, a large US manufacturer recently told me that they do not even have time to deploy new applications to drive new business, and that they need too many maintenance windows just to upgrade firmware. Even worse, untested releases cause outages. They need a proper remedy for all these nightmares.

I feel that VCE Vision software solves these problems, so we can spend less time "keeping the lights on" in the data center and devote more resources to new projects that grow the business, as depicted in Figure 2.

Figure 2: Transition to the Vision dashboard

Each VCE System's health and compliance status appears on the dashboard directly below the top-level information, as shown in Figure 3.

Figure 3: Single dashboard for Vision 3.3 software

 

 

 

From Figure 3, users can see at a glance the converged system health and compliance risks across all VCE systems. The dashboard also helps us address risks proactively before they impact the business, or escalate them quickly if they already have. In Figure 3 we see a heat map indicating how system health issues are evolving, based on the availability of the system architecture's redundant components. We also see the out-of-compliance components and their types, as well as a "failed components" view showing components that failed Vision's automated RCM (firmware/software release) and security-hardening policy compliance audits. The benefits are discussed in the side labels of Figure 3. In short, VCE Vision software provides the following benefits:

 

  • Ensures system stability and optimization while lowering OPEX
  • Validates successful upgrades and pinpoints drift from compliance
  • Ensures a strong security posture while lowering OPEX
  • Pinpoints drift from compliance for continuous policy enforcement
  • Saves hours of labor and avoids human error in the upgrade process
  • "Instead of looking in 97 different places, I can immediately see where the pain points are and act accordingly." — Global communications, hosting and cloud service provider
  • "It used to take 5 days with 12 hours of downtime for system updates … through Vision … 1 day with zero downtime." — Large North American university

 

 



 

This year I attended Oracle OpenWorld 2016 (OOW16) and found that the emphasis was on three major pillars: cloud computing, big data, and the in-memory database. In addition, it was announced that Oracle will be working on non-volatile memory and will keep improving and innovating the features of its in-memory database. As for the release of Database 12c Release 2, there are many innovations in the multitenancy category, such as online clones, refresh, and relocation. Also, a column store will be added to Active Data Guard for a performance boost. At OOW16 we saw the shift from disk-based to in-memory databases, from data warehouses to big data, and from on-premises databases to a database-optimized cloud, as shown in Figure 1.

Figure 1: Oracle's technology direction in the coming days

 

In line with this direction, Oracle made many announcements for 12.2, which is slated to be available in November 2016. I would like to discuss some new features in multiple areas of 12.2. First, the In-Memory database will run on an Active Data Guard standby; Oracle claims that real-time analytics has no impact on the primary database, as it makes productive use of standby database resources. In this release, the in-memory format can also be used in the Smart Columnar Flash cache, enabling in-memory optimizations on data in the Exadata Flash cache, and we can expect improved in-memory performance as server DRAM is extended seamlessly to larger flash in storage. A new feature that is very interesting to me is the increase in the number of PDBs, upped to 4,096 per container. We can "clone, refresh, or relocate" PDBs even while they are running, while "isolation between tenants" in the same container has been "strengthened," as communicated by Andy Mendelsohn (executive vice president for database server technologies at Oracle).

The most important announcement at OOW16 is the development of non-volatile memory, slated to be available in 2018. Oracle claims it will be a big disruption in the storage and database markets. As for announcements in big data, there may be a tectonic shift from data warehousing to big data, as depicted in Figure 2.

Figure 2: Transforming to Big Data

 

In the big data space, the greater innovation is toward faster SQL access to relational, Hadoop, and NoSQL databases while using Oracle Big Data SQL on JSON data. This means we can join JSON with any other data source and apply any SQL analytics to JSON. Big Data SQL in Oracle Cloud also recognizes JSON.

With Oracle Database 12c Release 2, Oracle is introducing Oracle Database Exadata Express Cloud Service (Exadata Express), an entry-level version of Oracle's high-performance engineered system. Starting at just $175 per month for a 20 GB database, it is an entry-level database cloud service for dev/test or production databases for departments or small businesses. Exadata Express runs Oracle Database Enterprise Edition with most options and runs on Oracle's database-optimized Exadata infrastructure. Customers can start with a small deployment on Exadata Express and scale to large database deployments on Oracle Database Cloud Service and Oracle Exadata Cloud Service. The best features are demonstrated in Figure 3.

 

Figure 3: 12c innovations in Exadata Express Service


As far as the Data Guard broker is concerned, we have the developments depicted in Figure 4.

Figure 4: Data Guard broker enhancements

 

In summary, at this OOW and over the last couple of years I have observed database technology moving rapidly in a new, different, and innovative direction, especially in three areas: in-memory technology, big data analytics, and the cloud (as also pronounced by Andy Mendelsohn, executive vice president for database server technologies at Oracle). Oracle is positioning and showcasing its portfolio around cloud-compatible products, and it will be very interesting to see how the whole database landscape shapes up in terms of product quality and market competition vis-à-vis Oracle's proclaimed competitors such as Amazon.

 

 

 



 

CSR stands for corporate social responsibility, and we owe a lot to the society that has helped us become what we are. This year, the VCE (VMware-Cisco-EMC) team adopted a school in Bangalore, where I played a key role in the adoption based on an understanding of the school's facilities, infrastructure, and other challenges, and ultimately helped the students gain knowledge in basic math, English, science, computers, Hindi, and more. It was a collective effort by many volunteers in the VCE team to teach the kids, and EMC technicians helped repair all the desktops in their computer lab. It was a fascinating experience to teach the kids and celebrate different occasions with them, as you can see in Figure 1.

Figure 1: Teaching and spending time with school kids

 

When I interact with these kids, I feel there are many areas where we can make a difference to them. So I started teaching these under-privileged kids with a lot of passion and enthusiasm. Before the teaching exercise I did a thorough check on the requirements of the students, teachers, and headmaster, and of the school in general. I found that they need a fully operational computer lab along with basic infrastructure and comprehensive teaching that will enrich their knowledge. This will in turn help them succeed in their future exams and assignments. As a special request, I taught them leadership and communication skills as well. Overall, it was an enthralling experience; I learned how to make others successful and happy, and enjoyed the joy of giving back to society.

 

 



 

Hyper-converged infrastructure (HCI) brings together best-in-class hardware and software components (compute, networking, storage, virtualization, and management) to greatly simplify data center operations. Combined, they form a hyper-converged infrastructure that integrates compute, software-defined storage, networking, and virtualization into a single building block for the data center, as shown in Figure 1.

Figure 1: Evolution of hyper-converged architecture

 

It enables compute, storage, and networking functions to be decoupled from the underlying infrastructure and run on a common set of physical resources based on industry-standard x86 components. These systems are built with a modular design, which makes them easy and flexible to scale within the core data center. They offer rich data services and highly resilient infrastructure components, which many of today's mission-critical enterprise applications require. Traditional converged systems have also proven to deliver excellent TCO, while HCI and appliances deliver different benefits, as depicted in Figure 2.

Figure 2: Comparative study of traditional and hyper-converged architecture

 

If an organization has remote or small data centers and edge locations and does not have highly skilled IT administrators available, a hyper-converged appliance can be a great option. Simple to set up and highly automated, appliances deliver simplicity and speed at a relatively low cost of entry (compared with most reference architectures or a traditional SAN). Appliances are highly configurable and are designed to start small and grow by simply adding appliances/nodes to the cluster. They provide a buy experience that includes ongoing lifecycle management and support.

 

A rack-scale system brings the same benefits as a standalone appliance, but on a larger scale. Core data centers are a perfect fit because, as the name suggests, rack-scale systems can scale bigger and more easily than standalone appliances, since they include the networking components such as Cisco switches. Again, it is a buy experience: hyper-converged and software-defined, but able to scale to extremes without difficulty.

There are several key drivers for the adoption of HCI:


Lower TCO: A major factor. CAPEX and initial investments are lower, and operational expenses are also lower than with traditional SAN architectures (including power and cooling, ongoing system administration, and the elimination of forklift upgrades and data migrations). HCI makes it possible to start small and gradually scale up, with software-defined storage providing the lowest TCO. In fact, hyper-converged infrastructure can deliver more than 30 percent lower TCO compared to traditional SANs.


Speed and agility: In addition to faster time to value and deployment, IT can easily add storage, compute, and networking resources to meet business demands as they grow. As mentioned earlier, there is no waiting for boxes to arrive and no testing; the system is up and ready in less than a day. Simply plug it in and you are ready to go.


Easy scaling: HCI allows users to start with a small deployment (a few nodes) and then flexibly and efficiently scale out to support dynamic workloads and evolving business needs (hundreds or 1,000+ nodes, depending on the system type).


Operational simplicity: HCI is managed from a central console that controls both compute and storage and automates many functions, which also drives customer interest because it cuts down on the number of tools that have to be learned and used.

The software-defined architecture combines multiple infrastructure components in a single solution, eliminating the need to refresh multiple siloed components consecutively or in parallel. It enables a pay-as-you-grow approach: start with what you need today and expand incrementally, rather than purchasing large amounts of compute and storage up front. In addition, the software-defined architecture eliminates the need for complex migrations and forklift hardware upgrades, and it addresses the over-provisioning and over-purchasing that typically occur when technology is intended to last for multi-year cycles.

VCE addresses all workload needs with a broad portfolio of optimized systems: blocks, racks, and appliances that can be interconnected into a single pool of resources with consistent management, operations, backup, business continuity, and disaster recovery capabilities. Figure 3 shows some figures and data points that highlight VCE's leadership in the converged infrastructure market.

Figure 3: VCE HCI products leadership charts

 

In summary, modern data centers built this way provide the following benefits:

 

  1. Low latency: The Cisco 3100 switch-on-a-chip provides ultra-low latency for all types of applications, such as Oracle databases.
  2. High availability: The Cisco Nexus 3100 platform is designed with redundant, hot-swappable power supplies and individual fan modules accessible from the front panel, where status lights offer an at-a-glance view of switch operation. The switch can function properly with one failed fan and one failed power supply, and for high reliability any fan or power supply can be hot-swapped during operation.
  3. Scalability: HCI allows users to start with a small deployment (a few nodes) and then flexibly and efficiently scale out to support dynamic workloads.
  4. Simplicity: Simplifies operations on many fronts; expanding once, twice, or three times is straightforward because all networking is included.
  5. Cost savings: Ultimately, HCI reduces an organization's capital expenditure.
  6. Performance: Non-blocking line rate; all ports together provide 2.5 TB per second of bidirectional bandwidth, for superior application response times.

 

 



 

In my last blog I discussed many best practices for using Vblock with XtremIO and Oracle Database, covering a wide range of methodologies for Oracle databases on Vblock and XtremIO. Best practices matter because, as Oracle DBAs, we encounter various challenges in Oracle environments, as summarized in Figure 1. Oracle environments typically require high transaction rates at very low latency. At the same time, different types of activity against the database (online transaction processing [OLTP], online analytical processing [OLAP], and data warehousing [DW], for example) cause very different I/O sizes, randomness, and read/write ratios at the storage level. On top of this, even inside the database itself, logging activity and data file activity have very different I/O patterns. In this blog, I continue the discussion of best practices.

Figure 1: Oracle DB workloads

 

Let's start with ASM settings such as the ASM disk group layout and the number of ASM disks per disk group. Oracle recommends separating disk groups into three parts: Data, FRA/Redo, and System; due to the nature of redo, a dedicated disk group can be used for it. While the XtremIO array performs well with a single LUN in a single disk group, it is better to use multi-threading and parallelism to maximize database performance. It is best to use four LUNs for the Data disk group, allowing the host to use simultaneous threads at different queuing points. That means the RAC system will have four LUNs dedicated to control files and data files, one for redo, one for archive logs, flashback logs, and RMAN backups, and one for system files. Using four LUNs for the Data disk group allows the hosts and applications to use simultaneous threads at various queuing points to extract the maximum performance from the XtremIO array. The number of disk groups should be 10 or fewer for optimum performance.
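
A hedged sketch of that layout from the ASM instance (the multipath device names are placeholders for the XtremIO LUNs presented to the host):

CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/mapper/xio_data1', '/dev/mapper/xio_data2',
       '/dev/mapper/xio_data3', '/dev/mapper/xio_data4';

CREATE DISKGROUP redo EXTERNAL REDUNDANCY
  DISK '/dev/mapper/xio_redo1';

CREATE DISKGROUP fra EXTERNAL REDUNDANCY
  DISK '/dev/mapper/xio_fra1';

CREATE DISKGROUP sysdg EXTERNAL REDUNDANCY
  DISK '/dev/mapper/xio_sys1';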

 

Also modify /etc/sysconfig/oracleasm:

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true
# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle
# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=dba
# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning.
ORACLEASM_SCANORDER="dm"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan.
ORACLEASM_SCANEXCLUDE="sd"

 

Allocation Unit Size

 

The default AU size of 1 MB, with coarse-grained striping for most file types and 128 KB fine-grained striping where applicable, works well on the XtremIO array for the various database file types. There is no need to modify the striping recommendations provided by the default templates for the various Oracle DBMS file types.

File Type     Striping
CONTROLFILE   FINE
DATAFILE      COARSE
ONLINELOG     FINE
ARCHIVELOG    COARSE
TEMPFILE      COARSE
PARAMETER     COARSE
FLASHBACK     FINE

 

In order for ASM disk groups with different values of the sector size attribute (512, 4096) to be mounted by an ASM instance, the parameter _disk_sector_size_override=TRUE has to be set in the parameter file of the database instance. Consider setting ORACLEASM_USE_LOGICAL_BLOCK_SIZE=true in /etc/sysconfig/oracleasm; setting this to true sets the logical block size to what is reported by the disk (512 bytes). The minimum I/O request size for database files residing in an ASM disk group is dictated by the sector size (an ASM disk attribute). For ease of deployment, the recommendation is to keep the logical sector size at 512 bytes to ensure that the minimum I/O block size can be met for all types of database files. Consider skipping (not installing) ASMLIB entirely: by skipping ASMLIB, you can create an ASM disk group with 512 bytes per sector and direct the default redo log files (512-byte block size) to this disk group, at least in the interim, so that DBCA can complete the database creation.
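
For completeness, a minimal sketch of the underscore parameter mentioned above (being an underscore parameter, it should only be set under Oracle's guidance):

-- Allow disk groups with a 4096-byte sector size to be mounted by the instance
ALTER SYSTEM SET "_disk_sector_size_override" = TRUE SCOPE = SPFILE;
-- Restart the instance for the change to take effect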

 

Regarding virtualization we have the following recommendations:-

 

Infrastructure traffic should be separated from VM traffic; this improves security, isolates the traffic types effectively, and improves performance. In VMware environments, the para-virtualized network adapters are the highest-performing and most efficient and should be used for all VMs. Network redundancy ensures continued trouble-free operation even if a failure occurs: physical NICs should be used in pairs for each server or vSwitch, with the NICs of each pair assigned to separate physical switches.

 

 

 

SAN connectivity, whether FC or iSCSI, is important in an Oracle environment. At least two HBAs should be used per host, with at least two 8 Gb/s FC ports or 10 Gb/s iSCSI ports per physical CPU. Zoning/IP connectivity should be as broad as possible, keeping in mind the limit of 16 paths to a single device; this restriction is significant in 6- and 8-X-Brick clusters, where the total number of available front-end ports per protocol (FC and iSCSI) exceeds 16. Use single-initiator zoning, and preferably single-initiator/single-target zoning. Enough bandwidth for normal operation should be ensured even in failure scenarios. Balance hosts across storage controllers to provide a distributed load; keep hosts and storage controllers on the same switch if possible, and do not allow more than two ISLs in the data path.

I hope this gives readers some ideas about the optimal implementation of Oracle Database on Vblock and XtremIO storage.

 

 


 

In my last blog I talked about Vblock and its relevance to the Oracle database.


 

Customers now realize that architecting, planning, testing, verifying, deploying, and maintaining interoperability between infrastructure components saps budgets and resources while adding no value to the business. This is why customers are moving toward converged architectures. Figure 1 illustrates this logic.

Figure 1: Building vs. buying a car / a converged architecture

 

In this regard, Vblock 540, XtremIO, and Oracle Database stand out as the winning combination, as Gartner's assessment in Figure 2 demonstrates.

Figure 2: Magic Quadrant for Integrated Systems

 

As per Gartner, VCE is clearly the market leader, with the following distinguishing features:

  • VCE has an established position as one of the clear leaders in the integrated system market with Vblocks.
  • EMC offers factory-integrated and factory-validated reference architectures for enterprise mission-critical applications.

 

 

 

 

 

So, to run this factory-integrated and validated VCE reference architecture (such as vBlock), we need to follow some best practices to extract the maximum benefit from the three components involved: vBlock, XtremIO, and the Oracle database. I will document some of these best practices from the Oracle database standpoint in this blog and the next.

 

Firstly, the default block size for Oracle redo logs is 512 bytes. This default causes redo log entries to be misaligned and forces read-modify-write operations (explained in the blog post by flashdba); the redo log block size should therefore be set to 4 KB. To create redo logs with a 4 KB block size and ASM disk groups with a 4096-byte sector size, add the option _disk_sector_size_override=TRUE to the parameter file of the database instance. Oracle's default database block size of 8 KB works well with XtremIO: it provides a good balance between IOPS and bandwidth, and can be improved upon in the right conditions. The new versions of the Vblock 540 and the XtremIO all-flash array are 1.5x faster than the previous release, having been optimized for an 8 KB block size. If the rows do not fit nicely into 4 KB or larger blocks, it is better to stick with the default setting. For data files, I/O requests will be a multiple of the database block size (4 KB, 8 KB, 16 KB, and so on); if the starting addressable sector is aligned to a 4 KB boundary, the optimal condition is met.
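
The following sketch shows one way to put this into practice; the disk paths, disk group name, log file size, and compatibility attributes are hypothetical, and the underscore parameter should be confirmed with Oracle Support before production use:

    # On the ASM instance: a 4 KB sector-size disk group for redo (paths are placeholders)
    sqlplus -s / as sysasm <<'EOF'
    CREATE DISKGROUP REDO EXTERNAL REDUNDANCY
      DISK '/dev/mapper/redo_lun1', '/dev/mapper/redo_lun2'
      ATTRIBUTE 'sector_size'='4096', 'compatible.asm'='12.1', 'compatible.rdbms'='12.1';
    EOF

    # On the database instance: allow mixed 512/4096 sector sizes via the spfile
    sqlplus -s / as sysdba <<'EOF'
    ALTER SYSTEM SET "_disk_sector_size_override"=TRUE SCOPE=SPFILE;
    EOF

    # ...restart the instance so the parameter takes effect, then add 4 KB redo members
    sqlplus -s / as sysdba <<'EOF'
    ALTER DATABASE ADD LOGFILE GROUP 4 ('+REDO') SIZE 4G BLOCKSIZE 4096;
    EOF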

 

Secondly, Oracle controls the maximum number of blocks read in one I/O operation during a full scan using the DB_FILE_MULTIBLOCK_READ_COUNT parameter. The parameter is specified in blocks and defaults to a 1 MB read size; it should be set to the maximum effective I/O size divided by the database block size. If parallel query is heavily used (PARALLEL_DEGREE_POLICY set across multiple tables), reduce the effective read size to 64 KB or 128 KB; with the default 8 KB block size, that corresponds to DB_FILE_MULTIBLOCK_READ_COUNT values of 8 or 16 respectively.
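
For example, assuming the default 8 KB database block size, the parameter could be set as follows (the values are illustrative, not prescriptive):

    sqlplus -s / as sysdba <<'EOF'
    -- 1 MB read size / 8 KB block size = 128 blocks per multiblock read
    ALTER SYSTEM SET db_file_multiblock_read_count=128 SCOPE=BOTH;
    -- for heavy parallel query, 64 KB or 128 KB reads correspond to 8 or 16 blocks:
    -- ALTER SYSTEM SET db_file_multiblock_read_count=16 SCOPE=BOTH;
    EOF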

 

Thirdly, because XDP protects the data on disk, there is no need for additional redundancy: Oracle should be configured to use ASM external redundancy and the default ASM stripe sizes. Best practice is to use four LUNs for the DATA disk group so that the host can use simultaneous threads, and two LUNs each for the REDO and FRA disk groups. The traditional complexities of storage configuration largely disappear; provisioning is simpler and faster, while all database data receives the sub-millisecond performance of the all-flash array and the protection of XDP in XtremIO and the Vblock 540. When configuring Oracle disk groups, the following guidelines may be followed:-

 

  • Use ASM external redundancy, as XDP protects the data inside the array.
  • Use the default ASM file type template stripe sizes.
  • Create 4 ASM disks per DATA disk group.
  • Create 2 ASM disks per REDO disk group.
  • Create 2 ASM disks per FRA disk group.
  • Allocate multiple XtremIO LUNs to each host.
  • Create XtremIO LUNs with a 512-byte sector size for all database files.
  • Align data on 8 KB boundaries.
  • Use Eager Zeroed Thick formatting on ESXi.
  • Use consistent LUN numbers on all hosts in a hypervisor cluster.

 

Multiple LUNs should be configured because each LUN has its own queue, and maximizing queue depth is important in Oracle environments. This is a host-side requirement; the XtremIO array itself does not need it, since one LUN can easily service all the required IOPS. It is also not necessary to separate LUNs by I/O type, because XtremIO randomizes accesses internally. All LUNs should be created with the 512-byte sector type, and LUNs accessed by a hypervisor cluster should use a consistent addressing scheme; if LUN numbers do not match, VM power-up or migration operations may be affected. I will discuss further best practices for Oracle Database in the converged architecture segment in my next blog.
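
A minimal sketch of the disk group layout described above (external redundancy, four DATA LUNs, and two LUNs each for REDO and FRA) might look like the following, with hypothetical multipath device names standing in for XtremIO LUNs presented with a 512-byte sector size:

    sqlplus -s / as sysasm <<'EOF'
    -- external redundancy: XDP already protects the data inside the array
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/mapper/data1', '/dev/mapper/data2',
           '/dev/mapper/data3', '/dev/mapper/data4';
    CREATE DISKGROUP REDO EXTERNAL REDUNDANCY
      DISK '/dev/mapper/redo1', '/dev/mapper/redo2';
    CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
      DISK '/dev/mapper/fra1', '/dev/mapper/fra2';
    -- the default ASM file type templates (and their stripe sizes) are left unchanged
    EOF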

 

Follow us on Twitter:
EMCOracle.png

Tweet this document:

Want to know the best practices for using Oracle Database on the vBlock platform along with XtremIO? Please read here --> http://bit.ly/1ZPxpuZ


 


 


 

In my last blog (Part 1 of this series), I discussed the relevance and architecture of VxBlock with respect to the Oracle database.


We were analyzing the main components of VxBlock, which are as follows:-


 

 

  • Compute
  • Networking
  • Network virtualization
  • Storage
  • Virtualization

 

Let us now discuss each of the above components with regard to Oracle databases:-


Compute

The Cisco Unified Computing System (UCS) within VxBlock systems offers a scalable, high-performance computing platform for running Oracle databases. Oracle databases on the VxBlock architecture benefit from built-in reliability, availability, and serviceability (RAS), which helps ensure nonstop access to important applications and data at a lower total cost of ownership.

The UCS servers within VxBlock are automatically configured through unified, profile-based management. UCS servers within a VxBlock system help in the following ways:

  • Simplify the VxBlock infrastructure and make it easier to use.
  • Help in virtualizing Oracle databases with vSphere.
  • Improve business agility.
  • Increase efficiency and performance.

 

Networking

Cisco networking within VxBlock accelerates Oracle applications and databases and strengthens their security. Cisco UCS inside VxBlock extends these benefits with a high-performance x86 platform optimized for Oracle solutions. This networking architecture provides the following benefits:

  • Simplifies deployment of Oracle applications and databases
  • Improves Oracle application and database performance
  • Accelerates new implementations and upgrades
  • Enhances operational flexibility, security, and efficiency

 

Network Virtualization

Network virtualization is implemented by VMware NSX, the market-leading network virtualization platform from VMware. NSX is a multi-hypervisor solution that leverages the vSwitches already present in server hypervisors across the data center. NSX coordinates these vSwitches and the network services pushed to them for connected VMs, effectively delivering a platform (a "network hypervisor") for the creation of virtual networks.

It offers the following benefits:-

  • Compatibility with most applications, hypervisors, network infrastructure, and cloud management programs, which keeps the Oracle database free from potential operational bottlenecks.
  • Simplifies networking, which translates into better database performance.
  • Helps in delivering DBaaS in the cloud.
  • Automates the Oracle database infrastructure.

 

Storage

The XtremIO storage built into the VxBlock 540 provides the following benefits to Oracle Database:-

  • Performance
    • Sub-millisecond latency
    • Very high IOPS
    • Scale-out architecture
  • Efficiency
    • Inline deduplication and compression (helps in on-boarding DEV, QA, and other SIT databases)
    • Thin provisioning (helps in consolidating databases with optimum storage requirements)
    • XtremIO Data Protection (provides the highest protection of Oracle data)
  • Simplicity
    • Easy-to-use interface
    • Minimal design and implementation effort
    • Simple volume provisioning

 

Virtualization

Virtualization in VxBlock systems is accomplished by VMware vSphere. As far as Oracle Database is concerned, the following benefits are derived from vSphere virtualization:-

  • High-performance I/O in VMware ESX.
  • Resource management capabilities of VxBlock.
  • Maintaining Oracle DBaaS SLAs with minimal effort and resources in the VxBlock system.
  • Server-level high availability, disaster recovery, performance guarantees, provisioning time constraints, and security requirements can be enforced with lower TCO.
  • Oracle licensing costs can be reduced.

 

Conclusion

VxBlock Systems are all-flash-based converged infrastructure systems for mixed, high-performance workloads and emerging third-platform applications. Leveraging EMC XtremIO all-flash storage, the Cisco Unified Computing System (UCS), and an Application-Centric Infrastructure (ACI)-ready network, these systems deliver scale-out performance at ultra-low latency. Additionally, combined with VCE Technology Extensions for EMC, the VxBlock 540 is an ideal platform for big data analytics and end-user computing. In a nutshell, we derive the following benefits from VxBlock with regard to Oracle databases:-

  1. Accelerate to cloud speed in a flash.
  2. Scale-out performance at low latency.
  3. Superior flexibility in consolidating multiple workloads without compromising performance or availability.
  4. Centralized management and support for Oracle Database.
  5. Highest levels of database availability, resiliency, and durability for critical applications and resources.
  6. Inbuilt and enhanced data protection solutions, from backup to recovery, for Oracle databases, thereby ensuring business continuity.

 


 

 

Follow us on Twitter:
EMCOracle.png

Tweet this document:

Want to know how the converged architecture from VCE is beneficial for Oracle databases? Click here for Part II --> http://bit.ly/1QKFbmn


 


 


 

The VxBlock System 540 was released on 12 March 2016 and is the industry's first all-flash-based converged infrastructure. It is a factory-integrated and validated system that delivers scale-out performance at ultra-low latency, and it helps consolidate and scale data center-class applications (such as Oracle applications), performance-centric databases, and emerging third-platform applications such as cloud, mobile, and social media. It follows that the VxBlock System 540 (including EMC XtremIO storage) is ideal for Oracle databases and applications that demand the highest throughput at the lowest latency, such as online transaction processing (OLTP) and online analytical processing (OLAP). In this blog and the next, we are going to discuss how VxBlock systems have become a game changer for the optimal management of Oracle databases.

To understand better how a VxBlock system delivers so many benefits to different Oracle database environments and their underlying data centers, we need a peek at the VxBlock architecture. It can be broadly classified into the following layers, as shown in Figure 1:-

zz1.png

zz2.png

Figure 1: VxBlock Major Components

 

From Figure 1, we can see that a VxBlock system combines Cisco Application Centric Infrastructure (ACI) or VMware NSX (depending on the customer's selection criteria) for software-defined networking (SDN) functionality with Cisco networking and computing, EMC storage and data protection, and VMware virtualization and its management. Having identified the main components of the VxBlock system, let us analyze the components shown in Figure 1 with regard to the Oracle database.

To start with, VxBlock is a modular platform with defined scale points that meets the high performance and availability requirements of an enterprise's business-critical applications, as demonstrated in Figure 2.

zz3.png

Figure 2: VxBlock 540 Systems

 

In Part II of this blog, we are going to discuss the main components of VxBlock (based on Figure 1) which are

 

• Compute

• Networking

• Network virtualization

• Storage

• Virtualization


 

 

Follow us on Twitter:
EMCOracle.png

Tweet this document:

Want to know how the converged architecture from VCE is beneficial for Oracle databases? Click here for Part I --> http://bit.ly/1Mcp7FA


 

