
Everything Oracle at Dell EMC



The VCE VxRail Series delivers virtualization, compute, and storage in a scalable, easy-to-manage, hyper-converged infrastructure appliance.


The VxRail Appliance is built on Intel Xeon processor-based x86 hardware with the VxRail Manager software bundle, and supports other value-added software from VCE, EMC, and VMware.


The VxRail Manager software bundle includes the following:


  • VxRail Manager for deployment, configuration, and management
  • VMware vSphere, including ESXi
  • VMware vCenter Server
  • VMware Virtual SAN for storage
  • VMware vRealize Log Insight


The VxRail Appliance


The VCE VxRail Appliance includes the appliance hardware, VxRail Manager, EMC Secure Remote Services (ESRS), and access to qualified EMC software products.


The following VxRail Appliance models are available:


VxRail Appliance with Hybrid Nodes


  • VxRail 60
  • VxRail 120
  • VxRail 160
  • VxRail 200


VxRail Appliance with All-flash Nodes


  • VxRail 120F
  • VxRail 160F
  • VxRail 200F
  • VxRail 240F
  • VxRail 280F





The VxRail Series offers advanced features including automatic deployment, automatic scale-out, fault tolerance, and diagnostic logging.


Automatic deployment


The VxRail Manager fully automates the installation and configuration of all nodes in an appliance after you input the basic IP address information.


Automatic Scale-out


The VxRail Series provides automated scale-out functionality by detecting a new VxRail Appliance on the network. When a new VxRail Appliance is powered on, you can add it to your existing cluster or create a new cluster; the configuration is replicated and the cluster's datastore is expanded.


Node failure tolerance


The VxRail Series supports from 0 to 3 node failures, as defined by the Virtual SAN failures-to-tolerate (FTT) policy.


The VxRail Series implements the standard Virtual SAN policy of one failure by default:


  • An entire node can fail and the system will continue to function.
  • A disk failure cannot affect more than one node: one SSD, or as many as three HDDs on the same node, can fail.
  • One network port on any node can fail without affecting the node.
  • Network failover is through the virtual switch configuration in ESXi. This is automatically configured by VxRail Manager during initial setup.


Logging and log bundles


The VxRail Series provides logging and log bundles through VxRail Manager.


VxRail Appliance cluster expansion


Your VxRail Appliance cluster can be scaled in single-node increments from a minimum of four nodes up to a maximum of 32 nodes. The VxRail Manager automated installation and scale-out features make it easy to expand your cluster as your business demands grow.


Each VxRail Appliance holds up to four nodes. If the number of nodes in a cluster is not a multiple of four, you will have a partially populated appliance chassis in the cluster. You can use the empty slots in the chassis for future expansion.


You can mix different VxRail Appliance models in the same cluster. You must adhere to the following guidelines when deploying a mixed cluster:


  • All appliances in the cluster must be running VxRail Manager version 3.5 or higher.
  • First-generation appliances (sold under the VSPEX BLUE name) can be in the same cluster with VxRail Appliances, as long as they are running VxRail Manager version 3.5 or higher.
  • Appliances using 1GbE networking (VxRail 60 Appliances) cannot be used in clusters with 10GbE networking.
  • 6G hybrid nodes cannot be used in clusters with 12G all-flash nodes.


VxRail Appliance cluster scalability is supported to a maximum of eight appliances or 32 nodes. Scalability to 16 appliances or 64 nodes per cluster may be allowed, but you must submit a request for product qualification (RPQ) to EMC for clusters over 32 nodes.




VxRail Data Sheet

VxRail Specification Sheet



To grow and succeed you need to differentiate yourself from your competitors. To support the business as an enabler, your IT organization must break down silos of complexity and streamline operations to keep pace with today’s on-demand business culture and users.


Dell EMC VxRail enables you to buy business outcomes. It accelerates and simplifies IT through standardization and automation (such as Oracle DBaaS…), allowing you to focus on:


  • Getting infrastructure operational as quickly as possible while leveraging existing investments and skill sets
  • Enabling (not impeding) business applications
  • Making infrastructure growth transparent to the needs of the application and users


VxRail is the only fully integrated, preconfigured, and tested HCI (hyper-converged infrastructure) appliance powered by VMware Virtual SAN and is the easiest and fastest way to extend a VMware environment. VxRail provides a simple, cost-effective hyper-converged solution that solves a wide range of your challenges and supports most applications and workloads. Dell EMC VxRail features purpose-built platforms that deliver data services, resiliency, and QoS, enabling faster, better, and simpler delivery of virtual desktops, business-critical applications, and remote office infrastructure.


The VxRail Appliance architecture is a distributed system consisting of common modular building blocks that scale linearly from 3 to 64 nodes in a cluster. Delivering the power of a full storage area network (SAN), it provides a simple, cost-effective hyper-converged solution with multiple compute, memory, storage, network, and graphics options to match any use case and cover a wide variety of applications and workloads.


Based on industry-leading VMware Virtual SAN and vSphere software and built with new 5th-generation Intel Xeon processors, the Dell EMC VxRail Appliance allows customers to start small and grow, scaling capacity and performance easily and non-disruptively. Single-node scaling and storage capacity expansion provide a predictable, "pay-as-you-grow" approach for future growth as needed.


The Dell EMC VxRail Appliance comes stacked with mission-critical data services at no additional charge. Data protection technology, including EMC RecoverPoint for VMs and VMware vSphere Data Protection, is incorporated into the appliance, with the option of adding Data Protection Suite for VMware and Data Domain Virtual Edition (DD VE) for larger environments that require more comprehensive data protection. EMC CloudArray is also built in, seamlessly extending the Dell EMC VxRail Appliance to public and private clouds to securely expand storage capacity without limits, with on-demand cloud tiering included.


When Oracle database applications run on a software-defined data center with the VxRail hyper-converged architecture, all resources are virtualized, so they can be deployed automatically with little or no human involvement. Database applications can be operational in minutes, shortening time to value and dramatically reducing IT staff time spent on application provisioning and deployment.


The Dell EMC VxRail Appliance is also backed by world-class support with a single point of contact for both hardware and software, and includes Dell EMC ESRS for call home and proactive two-way remote connection for remote monitoring, diagnosis, and repair to ensure maximum availability.


In the next few blogs, I will talk more about VxRail technologies, benefits for Oracle databases, and Oracle licensing strategies to save software costs.





Ransomware is rapidly gaining popularity with cybercriminals because of the opportunity to extort small sums of money from a vast number of individuals and large sums of money from mid-size and enterprise-size organizations.


Three widely accepted strategies for protecting a business from ransomware attacks are prevention, containment, and recovery. Dell EMC, a leader in backup and recovery, shows in this paper how protecting backups and executing fast restoration can safeguard the business from extortion.





Seven-page backup solution overview: “Isolated Backups on Dell EMC Data Domain”

Contents include:

  • Ransomware – Cost and payment statistics
  • Types of infection vectors and ransomware
  • Data security best practices
  • Best practices for preventing ransomware attacks
  • Protecting your backup systems
  • Data Domain isolated recovery solution
  • Building a last line of defense – what you need


Available immediately to the general public. Download here


A conversation starter for sales and pre-sales to use with prospects: reference the hyperlink in email, or print it and leave it behind.


  • Marketing communications:  Sam Lucido and team
  • Editing and production: Dave Simmons, Judith Cuppage
  • Sponsorship: Keith Miracle


  1. A supporting blog introducing this solution paper
  2. A supporting customer-facing presentation for sales and pre-sales to use with prospects


Dave Simmons, Sam Lucido



Sample Tweets


Dell EMC Isolated Backup Solution on Data Domain: a last line of defense against ransomware attacks


An industry-leading backup and recovery solution on Dell EMC Data Domain safeguards the business from extortion: a last line of defense against ransomware attacks


Dell EMC Isolated Backup best practices on Data Domain


  Dell EMC Hadoop Big Data Solution

  Hadoop Architecture and Cluster Deployment


Apache Hadoop is an open-source software framework used for distributed storage and processing of very large data sets. It consists of computer clusters built from commodity hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are a common occurrence and should be automatically handled by the framework.


The core of Apache Hadoop consists of a storage part, known as Hadoop Distributed File System (HDFS), and a processing part called MapReduce. Hadoop splits files into large blocks and distributes them across nodes in a cluster. It then transfers packaged code into nodes to process the data in parallel. This approach takes advantage of data locality – nodes manipulating the data they have access to – to allow the dataset to be processed faster and more efficiently than it would be in a more conventional supercomputer architecture that relies on a parallel file system where computation and data are distributed via high-speed networking.


The base Apache Hadoop framework is composed of the following modules:


  • Hadoop Common – contains libraries and utilities needed by other Hadoop modules;
  • Hadoop Distributed File System (HDFS) – a distributed file-system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster;
  • Hadoop YARN – a resource-management platform responsible for managing computing resources in clusters and using them for scheduling of users' applications; and
  • Hadoop MapReduce – an implementation of the MapReduce programming model for large scale data processing.




Installing a Hadoop cluster typically involves unpacking the software on all the machines in the cluster or installing it via a packaging system as appropriate for your operating system. It is important to divide up the hardware into functions.


Typically one machine in the cluster is designated as the NameNode and another machine as the ResourceManager, exclusively. These are the masters. Other services (such as the Web App Proxy Server and the MapReduce Job History server) are usually run either on dedicated hardware or on shared infrastructure, depending upon the load. The rest of the machines in the cluster act as both DataNode and NodeManager. These are the slaves.


To configure the Hadoop cluster you will need to configure the environment in which the Hadoop daemons execute as well as the configuration parameters for the Hadoop daemons.


HDFS daemons are NameNode, SecondaryNameNode, and DataNode. YARN daemons are ResourceManager, NodeManager, and WebAppProxy. If MapReduce is to be used, then the MapReduce Job History Server will also be running. For large installations, these generally run on separate hosts.


Configuring Environment of Hadoop Daemons


Administrators should use the environment scripts under etc/hadoop/ to do site-specific customization of the Hadoop daemons’ process environment.


At the very least, you must specify the JAVA_HOME so that it is correctly defined on each remote node.
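For illustration, such site-specific customization usually amounts to a few export lines in the daemon environment script; the JDK path and values below are assumptions for the sketch, not recommendations:

```shell
# Illustrative entries for a Hadoop daemon environment script.
# All paths and sizes here are example assumptions; adjust per site.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk   # required: JDK location on every node
export HADOOP_LOG_DIR=/var/log/hadoop          # optional: site-specific log directory
export HADOOP_HEAPSIZE=4096                    # optional: daemon heap size in MB
```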


Administrators can configure individual daemons using more configuration options.


Configuring the Hadoop Daemons


The following configuration should be performed by Administrators:

  • Configuring the etc/hadoop/core-site.xml and etc/hadoop/hdfs-site.xml files with some important parameters
  • Configuring NameNode and DataNode
  • Configuring ResourceManager and NodeManager
  • Configuring History Server
  • Configuring MapReduce Applications
  • Configuring MapReduce JobHistory Server
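As a purely illustrative sketch of the first item, a minimal pair of these files might look like the following; the host name, port, and storage paths are placeholder assumptions:

```xml
<!-- etc/hadoop/core-site.xml: the default filesystem URI (host/port assumed) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>

<!-- etc/hadoop/hdfs-site.xml: NameNode/DataNode storage locations (paths assumed) -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/hdfs/data</value>
  </property>
</configuration>
```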


Monitoring Health of NodeManagers


Hadoop provides a mechanism by which administrators can configure the NodeManager to run an administrator-supplied script periodically to determine whether a node is healthy.


Administrators can determine if the node is in a healthy state by performing any checks of their choice in the script. If the script detects the node to be in an unhealthy state, it must print a line to standard output beginning with the string ERROR. The NodeManager spawns the script periodically and checks its output. If the script’s output contains the string ERROR, as described above, the node’s status is reported as unhealthy and the node is black-listed by the ResourceManager. No further tasks will be assigned to this node. However, the NodeManager continues to run the script, so that if the node becomes healthy again, it will be removed from the blacklisted nodes on the ResourceManager automatically. The node’s health along with the output of the script, if it is unhealthy, is available to the administrator in the ResourceManager web interface. The time since the node was healthy is also displayed on the web interface.
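As a sketch of such a script: the disk-usage check and the 90% threshold below are illustrative assumptions; the only contract Hadoop imposes is that an unhealthy node prints a line beginning with ERROR.

```shell
#!/bin/sh
# Hypothetical NodeManager health-check script (checks are site-specific).

THRESHOLD=90  # maximum acceptable root-filesystem usage, percent (assumed)

# Current usage of / as an integer percentage, e.g. "42"
usage=$(df -P / | awk 'NR==2 { sub("%", "", $5); print $5 }')

if [ "$usage" -ge "$THRESHOLD" ]; then
    # The ERROR prefix is what marks the node unhealthy to the NodeManager.
    echo "ERROR root filesystem is ${usage}% full"
else
    echo "Node is healthy: root filesystem at ${usage}%"
fi
```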


Hadoop Rack Awareness


Many Hadoop components are rack-aware and take advantage of the network topology for performance and safety. Hadoop daemons obtain the rack information of the slaves in the cluster by invoking an administrator-configured module. See the Rack Awareness documentation for more specific information.


It is highly recommended to configure rack awareness prior to starting HDFS.




Hadoop uses Apache log4j via the Apache Commons Logging framework for logging. Edit the log4j configuration file under etc/hadoop/ to customize the Hadoop daemons’ logging configuration (log formats and so on).


Operating the Hadoop Cluster


Once all the necessary configuration is complete, distribute the files to the HADOOP_CONF_DIR directory on all the machines. This should be the same directory on all machines.
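The distribution step can be sketched as a small loop over the cluster hosts; the function name, the host list, and the use of rsync over SSH are assumptions for illustration:

```shell
# Sketch: push one finished configuration directory to every node.
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf}   # same path on all machines

distribute_conf() {
    src=$1; shift                      # local conf dir, then the node list
    for host in "$@"; do
        # Mirror the directory so stale files are removed on the remote side.
        rsync -a --delete "$src/" "$host:$HADOOP_CONF_DIR/"
    done
}

# Example with hypothetical host names:
# distribute_conf "$HADOOP_CONF_DIR" nn01 rm01 dn01 dn02
```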


In general, it is recommended that HDFS and YARN run as separate users. In the majority of installations, HDFS processes execute as ‘hdfs’, and YARN processes typically use the ‘yarn’ account.


Hadoop Startup

Start the HDFS NameNode with the following command on the designated node as hdfs:

Start an HDFS DataNode with the following command on each designated node as hdfs:

Start YARN with the following command, run on the designated ResourceManager as yarn:

Hadoop Shutdown


Stop the NameNode with the following command, run on the designated NameNode as hdfs:

Stop the ResourceManager with the following command, run on the designated ResourceManager as yarn:

Run a script to stop a DataNode as hdfs:
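As a sketch of the typical commands for the Hadoop 2.x generation described here, following the Apache documentation’s conventions (assuming HADOOP_PREFIX, HADOOP_YARN_HOME, and HADOOP_CONF_DIR are set appropriately for your installation):

```shell
# Startup (run as the indicated service user on the designated host):
$HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR \
    --script hdfs start namenode                   # as hdfs, on the NameNode
$HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR \
    --script hdfs start datanode                   # as hdfs, on each DataNode
$HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR \
    start resourcemanager                          # as yarn, on the ResourceManager

# Shutdown:
$HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR \
    --script hdfs stop namenode                    # as hdfs
$HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR \
    stop resourcemanager                           # as yarn
$HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR \
    --script hdfs stop datanode                    # as hdfs
```

Exact script names and paths vary by Hadoop version and distribution; check your release’s cluster setup documentation.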



For more detailed help, please contact a Dell EMC expert.




















In my previous blog, I talked about five important features of Oracle Database 12.2. In this blog I am going to continue the discussion with five more important features of the 12.2 release. Let’s start with the sixth item.


The sixth feature is real-time refresh. In addition to ON COMMIT and ON DEMAND refresh, materialized join views can be refreshed when a DML operation takes place, without the need to commit the transaction. This is done at the statement level, so it does not update the MV itself. If we query the MV directly, we can also use the FRESH_MV hint to get the same up-to-date data. During the process of retrieving the recent data, Oracle merges the MV log data with the MV itself to return the correct result.


The seventh is the Big Data Management System Infrastructure, which takes care of the four V’s of big data: volume, variety, velocity, and veracity. This feature allows more users of Hadoop to combine map-reduce processing with the essential database qualities that many applications require. It can be accessed from many interfaces and languages (REST, Java, SQL, Python, R, Scala, etc.) and supports analysis of many data types with engines such as SQL, Spark, graph, spatial, and machine learning. It can use NoSQL databases, traditional databases, and Hadoop data, which can be accessed through external tables. External tables are used by both SQL*Loader and Oracle Data Pump, and thus also by the ORACLE_LOADER and ORACLE_DATAPUMP access drivers. The Hadoop Distributed File System (HDFS) and Apache Hive are the other two most important data sources here.


The eighth is hot cloning of a pluggable database (PDB), with no application outage or downtime. Hot cloning resolves the issue of having to set the source system to read-only mode before creating a full or snapshot clone of a PDB. With this feature, you can now clone your production PDBs and create point-in-time copies for development or testing without any application outage.


The ninth is In-Memory FastStart. The In-Memory Column Store allows objects (for example, tables, partitions, and subpartitions) to be populated in memory in a compressed columnar format. Until now, the columnar format has only been available in memory: after a database restart, the In-Memory Column Store had to be populated from scratch using a multi-step process that converts traditional row-formatted data into the compressed columnar format and places it in memory. In-Memory FastStart enables data to be repopulated into the In-Memory Column Store at a much faster rate than previously possible by saving a copy of the currently populated data on disk in its compressed columnar format. This significantly reduces the time it takes to repopulate data into the In-Memory Column Store after a system restart, allowing businesses to take advantage of the analytic query performance of the columnar format sooner.


The tenth is an improved security posture for the database. Database Vault is an important security feature in Oracle Database. When we implement DB Vault on an existing application, there is always a chance the application will break due to DBA or other human errors. In 12.2, DB Vault can be enabled in simulation mode to take care of this issue: while in simulation mode, nothing breaks, and problems can subsequently be fixed with details from the log, where all blocked operations are recorded. Hence we can work proactively, make sure everything works, and fix all the problems before the actual enforcement.

Encryption, decryption, and rekeying of existing tablespaces with Transparent Data Encryption (TDE) tablespace encryption is now possible. A tablespace can be encrypted for migration to an encrypted tablespace without any stoppages, and automated rotation of the data encryption keys used by TDE tablespace encryption runs in the background, again without any downtime. Encryption is important, but until now it had to be done offline for existing data. Oracle 12.2 allows online encryption and re-key operations: the command copies a datafile to a new encrypted one, then switches the database to the new encrypted file, with everything done online. It is now also possible to encrypt internal tablespaces (SYSTEM, SYSAUX, UNDO, etc.).



In the last two blogs I discussed the top 10 features of the new Oracle 12.2 release. As I noted in my last blog, the new features look very promising but are presently available only in the cloud version. We hope they will work equally well in the on-premises versions.


















Oracle has released the latest version of its database in the cloud. In this blog I discuss ten important features of the release.

First, let me talk about sharding, as it is the most important feature in this release.

Sharding is an application-managed scaling technique that uses many independent databases: the horizontal partitioning of data across them. The partitions are called shards; each database holds a subset (either a range or a hash) of the data. Each shard has its own CPU, memory, flash, and disk, holds a portion of rows from the partitioned tables, and is replicated for high availability and scalability. It is like "scale-up" vertical partitioning followed by "scale-out" horizontal partitioning. Sharding is the dominant approach for scaling massive websites and is used in custom applications that require extreme scalability and are willing to make a number of trade-offs to achieve it. With sharding, application code dispatches each request to a specific database based on a key value, and queries are constructed on the shard key. Data is denormalized to avoid cross-shard operations (no joins). Sharding also provides fault isolation. Sharding is illustrated in Figure 1:


Figure 1: Pictorial representation of DB sharding


The second important feature is PDB-level control to limit I/O, provided in the new release by the MAX_IOPS parameter. We can specify the limit either as I/O requests per second or as megabytes of I/O per second. This limit can only be applied to a PDB, not to the multitenant container database (CDB) or a non-CDB.


The third is auto-list partitioning, an extension of list partitioning. It enables the automatic creation of partitions for new values inserted into the partitioned table. Auto-list partitioning helps DBAs maintain partitioned tables with a large number of distinct key values that require individual partitions, and it automatically copes with unplanned partition key values without the need for a DEFAULT partition.


The fourth is adaptive query optimization, which has brought the biggest change to the optimizer in Oracle Database 12c. It is a set of capabilities that enable the optimizer to make run-time adjustments to execution plans and to discover additional information that can lead to better statistics, especially when existing statistics are not sufficient to generate an optimal plan. There are two distinct aspects of adaptive query optimization: adaptive plans, which focus on improving the execution of a query, and adaptive statistics, which use additional information to improve query execution plans. Below is an illustrative diagram:


Figure 2:  The components that make up the new Adaptive Query Optimization functionality


The fifth is improved index monitoring, with additional usage information gathered for indexes. Indexes now gather extended statistics by default as queries run, such as the number of times the index was used, the last usage time, and much more. All the information is recorded in the DBA_INDEX_USAGE data dictionary view. Index monitoring before 12.2 had many problems; in 12.2 it is enabled by default and tracks usage at run time (as opposed to at parse level).



The new features for 12.2 will definitely address some of the shortcomings that existed in earlier versions of Oracle Database. I find these features helpful, but we need to see how they pan out in the on-premises environment, which will be released very shortly. In part II of my blog, I am going to discuss some more features of the 12.2 database.








  Dell EMC Hadoop Big Data Solution

    Simplified Hadoop DAS Environment


The digital transformation is causing churn, uncertainty, and disruption for many business leaders, who must act as pressure increases from all directions. Big data and analytics will be at the core of this transformation, with Hadoop as a foundational component of a big data and analytics solution stack.


Hortonworks is a business computer software company based in Santa Clara, California. The company focuses on the development and support of Apache Hadoop, a framework that allows for the distributed processing of large data sets across clusters of computers.


Hortonworks' product named Hortonworks Data Platform (HDP) includes Apache Hadoop and is used for storing, processing, and analyzing large volumes of data. The platform is designed to deal with data from many sources and formats. The platform includes various Apache Hadoop projects including the Hadoop Distributed File System, MapReduce, Pig, Hive, HBase and Zookeeper and additional components.


The Hortonworks Ecosystem



Compared to commercial products, Hortonworks provides a lower initial cost because it charges only for support, not for software licenses.


Dell EMC Hortonworks Hadoop Solution


Dell EMC now provides the option to buy your Hadoop ecosystem rather than build it out yourself. We recognize that the upfront work to build it yourself can be time-consuming and labor-intensive. We help address those challenges by providing certified Hadoop solutions and the expertise needed to accelerate your time to value.


This goal of faster time to value is at the heart of the Dell EMC Hortonworks Hadoop Solution. It simplifies the architecture, design, configuration, and deployment of Hadoop environments. Dell EMC engineers have validated and certified Hortonworks Data Platform (HDP) 2.5 on Dell EMC PowerEdge R730xd servers with Dell EMC Networking S4048-ON and S3048-ON switches, allowing your organization to build a Hadoop cluster without the guesswork. You can leverage the Dell EMC solution to streamline the front-end work, from server configuration to network setup to running HDP 2.5 on a certified solution.



Solution Use Cases for Hadoop


Data Discovery


Uses data discovery and visualization tools to gain deeper insights from new data types stored in Hadoop and existing data center investments.


Single View


Helps create a single view of the data as it uncovers value that might have been within reach, but scattered across multiple interactions, channels, groups and platforms.


Predictive Analytics


Captures, stores, and processes volumes of data streaming from connected devices and sensors that measure the business. Data science and iterative machine-learning techniques can make confident real-time recommendations that reduce costs, improve safety, and inform investments.



Why The Dell EMC Hortonworks Hadoop Solution?


The Dell EMC Hortonworks Hadoop Solution provides a certified architecture that gives customers confidence when deploying the Hortonworks Data Platform (HDP) on Dell EMC PowerEdge servers with Dell EMC Networking.


Decrease Time To Value


Hardware and expertise matter when building a Hadoop environment. Dell EMC does all the hard upfront work, allowing your organization to focus on delivering deeper insights and enhanced data-driven decision making.


Reduce the risk


Dell EMC helps your organization reduce this risk. We have been building Hadoop architectures since 2011. You can leverage our expertise to help fill the skills gap and build an architecture that will meet the needs of the business.


Leverage proven expertise


You can gain hands-on experience with the Dell EMC Hortonworks Hadoop Solution in a Dell EMC Customer Solution Center. These state-of-the-art technical labs enable Dell EMC customers to explore, architect, validate, and build solutions, from the data center to the edge of the network, to drive toward targeted business outcomes.


Enterprise Ready


HDP is built for enterprises. Open enterprise Hadoop provides consistent operations, with centralized management and monitoring of clusters through a single pane of glass. With HDP, security and governance is built into the platform. This ensures that security is consistently administered across data access engines.





Dell EMC Hortonworks Hadoop Solution








  Why Dell EMC Enterprise Hybrid Cloud for DBaaS - Part III

How EHC 4.0 Helps Enterprises Build Oracle Database as a Service


In Part I and Part II of this series, I briefly covered why most IT organizations choose hybrid cloud as their IT-as-a-Service infrastructure, compared public and private clouds and the benefits of hybrid cloud, and discussed how EHC 4.0 provides cloud services. In this Part III, I will talk about how to quickly and effectively provision, manage, and migrate Oracle Database 12c as a Service with ASM on Enterprise Hybrid Cloud 4.0. This includes single-instance and RAC configurations.



The integration of Oracle DBaaS into Enterprise Hybrid Cloud fully supports Oracle users with a self-service environment. For IT, this results in an automated, repeatable process that reduces errors and improves time-to-value.


Enabling end users to request a multi-node clustered database on demand through the self-service portal is highly beneficial: it saves the significant time and effort otherwise required to deploy a cluster manually. The user simply makes the request and then receives an automated email with deployment details when the request is complete.


This Oracle DBaaS solution has been built and validated on a number of iterations and configurations of Enterprise Hybrid Cloud. The underlying physical environment is abstracted from the application layer by virtual and software-defined components.



This solution enables the deployment of Oracle DBaaS on Enterprise Hybrid Cloud:


-  The Workload Pod for Oracle DBaaS improves management of Oracle licensing compliance.


-  Application developers and service users can self-provision Oracle DBaaS on Enterprise Hybrid Cloud as:

•    A single-instance Oracle Database with Oracle Automatic Storage Management (ASM)

•    An Oracle Database with Oracle Real Application Clusters (RAC) and ASM

      This creates an Oracle 12c Container Database (CDB) with an optional Pluggable Database (PDB) in a single-tenant configuration.


-  The solution allows users to perform Oracle DBaaS Day 2 Operations:

•    Add disks to Oracle ASM Disk Groups – single-instance and RAC

•    Migrate a PDB to an Oracle 12c CDB deployed on Enterprise Hybrid Cloud

•    Add and remove Oracle DBaaS monitoring


-  Enable monitoring of the complete Oracle DBaaS solution by using Oracle Enterprise Manager Cloud Control 12c (OEM) and VMware vRealize Operations Manager.





The new face of IT:  Database automation



Requesting an Oracle Database - Fast simple form that can be completed in minutes




EHC configuration for Oracle DBaaS - Key components




Provisioning an Oracle Database preparation - Customizable by the IT organization




Oracle DBaaS deployment process - behind the scenes




PDB migration - moving a PDB from one CDB to another





Oracle DBaaS Day 2 Operations - Database customization




EHC Day 2 operations automate routine tasks - Adding a new ASM disk group




Easily add or remove the OEM 12c agent - EHC automates routine tasks




EHC Day 2 operations automate routine tasks - Adding vCPU and memory




Monitoring Oracle DBaaS





Use existing Oracle tools OEM Express 12c - No need for new tools as DBAs can use what they have today




Use existing Oracle tools OEM Cloud Control 12c - No need for new tools as DBAs can use what they have today




vRealize Operations Manager - EHC management




Enterprise Hybrid Cloud enables customers to build an enterprise-class, multitenant, scalable platform for complete infrastructure service lifecycle management.


This solution incorporates the following concepts:

  • Self-service and automation
  • Multi-tenancy and secure separation
  • Workload-optimized storage
  • Security and compliance
  • Monitoring and service assurance


Special thanks to Sam Lucido for the slides from his presentation 'Enterprise Hybrid Cloud for Oracle'.





Enterprise Hybrid Cloud 4.0 - Oracle Database as a Service

Introducing EMC Hybrid Cloud




Follow us on Twitter:

Tweet this document:

Click here to learn more:





  Dell EMC World 2016 (October 18-20 | Austin, Texas)

     Bring PowerEdge to VxRail, VxRack


Hyper-converged infrastructure integrates IT components in a scalable rack or appliance, allowing you to modernize your data center with simplified management, improved performance, and elastic scalability.



Dell EMC VxRail is the only fully integrated, preconfigured, and tested HCI appliance powered by VMware Virtual SAN™, and it is the easiest and fastest way to extend a VMware environment. VxRail provides a simple, cost-effective hyper-converged solution that solves a wide range of challenges and supports most applications and workloads.



Dell EMC VxRack System consists of hyper-converged rack-scale engineered systems, with integrated networking, to achieve the scalability and management requirements of traditional and cloud native workloads. The VxRack family is purposely designed to enable customers to quickly deploy Infrastructure-as-a-Service and/or Private Cloud architectures.



Now, Dell EMC is integrating Dell PowerEdge servers into the VxRail and VxRack hyper-converged systems. The VxRail and VxRack systems previously used servers from original design manufacturer Quanta, and Dell EMC will continue offering appliances with Quanta systems for customers that still want them. With PowerEdge systems, however, Dell EMC can offer a broader range of VxRail configurations, address more workloads, and bring hyper-converged solutions to smaller customers.



These new configurations are powered by Intel's Xeon "Broadwell" chips as well as VMware's vSphere and VSAN technologies, delivering 40 percent more CPU performance for the same price and all-flash nodes that offer twice the storage of previous versions.


For storage-intensive workloads like data analytics and Microsoft Exchange, customers can use VxRail appliances based on the PowerEdge R730xd servers. Workloads that need graphics support can use configurations with the PowerEdge R730 systems that include GPU accelerators from Nvidia and Advanced Micro Devices.


In addition, Dell EMC is now offering an entry-level three-node model in a 3U (5.25-inch) form factor that can support up to 200 virtual machines and comes in at less than $45,000. It's aimed at smaller companies and remote or branch offices.



The larger VxRack System 1000 is based on PowerEdge R630 or R730xd systems that bring greater capacity and 40 percent more CPU performance without costing more money. Dell EMC is offering 20 new configurations, including all-flash offerings.






Dell EMC Brings PowerEdge Servers to VxRail, VxRack







  Dell EMC World 2016 (October 18-20 | Austin, Texas)

     Let The Transformation Begin


Get hands-on and learn how to transform your organization from the enterprise technology leader who knows it best. Jeremy Burton, Chief Marketing Officer, Dell, shares what our customers are expecting at Dell EMC World Austin:


What do your customers say are their biggest challenges?


They don’t say it in these exact words, but I’d say it’s figuring out what they need to do to play a part in an increasingly digital world. Now, it’s not as if customers are walking into our Executive Briefing Center and saying they have no idea what to do – it’s not like that at all – many are driving a digital agenda already. But what many of our customers ask us for is how to architect these new applications, what the options are for infrastructure and best practices for transforming their IT teams. There’s also a knock-on effect on the workforce and the security posture of the organization that we can talk to, and address, as well.


What are the new technologies customers are most excited about?


From an enterprise IT standpoint, I’d say Converged and Hyper-Converged Infrastructure. Many of our customers want to move away from the work of building their own stacks of servers, networking, and storage and focus on more strategic activities that support their business. From a data center standpoint that means buying technology that makes it much simpler to stand up infrastructure and deliver apps - that’s dead center for converged infrastructure. Of course Hyper-Converged is a really hot topic with customers right now – it offers the ability to start small and grow as you need, and while it’s mostly used for traditional workloads today, its scale-out architecture is perfectly suited for cloud-native apps as well.


What are you most excited to share with customers at this year’s Dell EMC World?


I don’t want to spoil the surprise... but, let’s just say we’ll announce new systems that take advantage of the full power and breadth of the Dell EMC portfolio in a big way. This should really raise some eyebrows – because it’s just 30 days after closing our deal - but more importantly, by combining the best of the Dell and EMC portfolios, we’re helping solve some hairy challenges our customers are facing.


Why should customers attend Dell EMC World this year?


I think I mentioned this in a blog a bit ago – but the best reason to attend is to learn! From classes to workshops to labs, for IT professionals, there’s nothing better than learning the best ways to make use of tech and then being able to get hands on with it. There will be no shortage of opportunity to do that this year. Of course, I’d be remiss not to mention Alabama Shakes, which might be reason enough to attend! Personally, I’m also excited about the F1 race on that weekend too. Only question to answer there is Rosberg or Hamilton?


What are your priorities for the coming year?


Easy one – tell our story. It will take a long time for our message to reach our customers and partners and data suggests that people need to hear something seven times before they remember it.





Dell EMC World 2016 Agenda







  Dell EMC World 2016 (October 18-20 | Austin, Texas)

     True Austin Style


Dell World, happening in Austin, Texas (October 18-20, 2016), is now Dell EMC World. Besides the name, Dell EMC World will be bigger and better than ever, full of technical and strategy sessions, as well as a CxO event with tracks for both commercial and enterprise-sized businesses. We’ll deliver new insights across cloud, mobility, big data, IoT, security and storage—insights that you can use to transform your organization into a digital enterprise.


Please attend and maximize your experience at Dell EMC World. With over 100 sessions and 30-plus hands-on labs, you'll walk away from Dell EMC World 2016 with the tools you need to transform your data center, security, workforce, and your business.



There are dark forces at work on the edges of IT. A wave of fear grips the data center. See an all-star cast come together to show off the real heroes of Dell EMC World: the incredible new products and solutions destined to defeat any IT issues they come up against. You'll see how the super-powered advancements in Converged Infrastructure, All-Flash Storage, Hybrid Cloud, Mobile, Big Data, and Security are transforming businesses in dramatic demos that will have you on the edge of your seat.


What Dell EMC World offers this year:


  • In-depth training and hands-on experience. You will be able to choose from technical and content-rich sessions and labs covering the latest innovations for cloud, mobility, big data, the Internet of Things, storage and security.
  • Product research and analysis. In the Solutions Showcase, you will interact with the latest enterprise solutions in real-world environments and see how you can capitalize on new technology.
  • Networking with industry experts and peers. You will learn strategies for achieving your top IT priorities and be able to compare notes with other IT professionals. You can leverage these contacts for advice and best practices for months to come.
  • Insight into important technology trends. Dell EMC World gathers thought leaders, subject matter experts and IT professionals in an immersive environment of keynotes, break-out sessions and hands-on labs.


Some Sessions for Database Workloads:


DSSD D5 Rack-Scale Flash Overview: Unprecedented storage performance for mission-critical workloads


Wednesday, October 19: 02:15 PM - 03:15 PM | Room 12B


Keep your business processing operating at peak efficiency with Dell Engineered Solutions for databases


Thursday, October 20: 09:30 AM - 10:30 AM | Room 18C


The impact of high performance Oracle workloads on the evolution of the enterprise data center


Thursday, October 20: 09:30 AM - 10:30 AM | Room 14


Some Sessions for VCE Products:


Self-paced Lab: VCE Vision 3.4 Software for VCE Systems


Wednesday, October 19: 11:00 AM - 05:00 PM | Mezzanine 7

Thursday, October 20: 08:00 AM - 10:30 AM | Mezzanine 7


Self-paced Lab: VCE VxRail 3.5 a Fully Loaded Hyper-Converged Experience


Wednesday, October 19: 11:00 AM - 05:00 PM | Mezzanine 7

Thursday, October 20: 08:00 AM - 10:30 AM | Mezzanine 7


Self-paced Lab:  Unity - VMware vSphere Integration and Awareness


Wednesday, October 19: 11:00 AM - 05:00 PM | Mezzanine 7

Thursday, October 20: 08:00 AM - 10:30 AM | Mezzanine 7


OpenManage Integration for VMware vCenter


Thursday, October 20: 08:00 AM - 09:00 AM | Mezzanine 9


VxRail A Fully Loaded Hyper-Converged Experience


Thursday, October 20: 08:00 AM - 09:00 AM | Mezzanine 2


Kick off Dell EMC World in Austin style.



Grammy award winning band, Alabama Shakes




Dell EMC World 2016 Agenda






  Flash & Cloud Tiering Enabled Data Domain Accelerates Database Protection

     Industry Leading Cloud Data Protection Systems

Legacy EMC Data Domain deduplication storage systems offer a cost-effective alternative to tape, allowing users to enjoy the retention and recovery benefits of inline deduplication, as well as network-efficient replication over the wide area network (WAN) for disaster recovery. Data Domain systems reduce the amount of disk storage needed to retain and protect data by 10 to 30 times. Data on disk is available online and onsite for longer retention periods, and restores become fast and reliable.


Dell EMC has just released a new generation of flash-enabled Data Domain protection storage systems that deliver industry-leading speed and scalability. Equipped with the powerful Data Domain OS 6.0 and the new Data Domain Cloud Tier software, the systems can scale up 200% to 150 PB of logical capacity managed by a single system with DD Cloud Tier. This enables Data Domain, as protection storage, to natively tier deduplicated data to public, private, or hybrid clouds for long-term retention, including Dell EMC Elastic Cloud Storage and Virtustream Storage Cloud.


With throughput up to 68 TB/hour, Data Domain systems make it possible to complete more backups in less time and provide faster, more reliable restores.  The added power of flash is also enhancing the experience for users in virtual environments. In conjunction with Dell EMC Avamar software, the new family of Data Domain appliances can access protection instances of virtual machines (VMs) 20x faster. Customers are able to boot up VMs using a protection copy of the virtual machine files (VMDKs) directly from Data Domain instead of needing to restore the VM to a separate primary storage system.
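To put the throughput and deduplication figures above in perspective, here is a back-of-the-envelope calculation; the 500 TB dataset size is an illustrative assumption, not a figure from the announcement:

```python
# Back-of-the-envelope math using the figures above: 68 TB/hour ingest and
# a 10x-30x deduplication ratio. The 500 TB dataset is an assumed example.

dataset_tb = 500
throughput_tb_per_hour = 68

backup_hours = dataset_tb / throughput_tb_per_hour
stored_tb_best = dataset_tb / 30   # best-case 30x deduplication
stored_tb_worst = dataset_tb / 10  # conservative 10x deduplication

print(f"Backup window: {backup_hours:.1f} hours")
print(f"Physical storage needed: {stored_tb_best:.1f}-{stored_tb_worst:.1f} TB")
```

Under these assumptions, a full 500 TB backup fits in roughly a 7.4-hour window, and repeated retention copies consume a small fraction of their logical size on disk.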


The four models of the new Data Domain family:

  • DD6300
  • DD6800
  • DD9300
  • DD9800


Flash-Enabled Appliances Accelerate Database Protection


Oracle Databases – Integrate with EMC Data Domain systems via Network File System (NFS), Common Internet File System (CIFS), or EMC Data Domain Boost to keep database administrators in control of backup and recovery through Oracle Recovery Manager (RMAN). Data Domain systems integrate with Oracle databases including Oracle Exadata and Oracle Real Application Clusters (RAC) environments.
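As an illustration of the NFS integration path above, the sketch below generates a standard multi-channel RMAN backup script targeting a Data Domain NFS mount. The mount point, channel count, and backup tag are hypothetical values for the example, not product defaults:

```python
# Generate an RMAN script that backs a database up to a Data Domain NFS
# mount using multiple channels. The RMAN syntax is standard; the mount
# point, channel count, and tag are hypothetical.

def build_rman_backup(mount_point, channels=4, tag="DD_NFS_FULL"):
    """Return an RMAN RUN block with one disk channel per NFS stream."""
    alloc = "\n".join(
        f"  ALLOCATE CHANNEL ch{i} DEVICE TYPE DISK "
        f"FORMAT '{mount_point}/%d_%U.bkp';"
        for i in range(1, channels + 1)
    )
    return (
        "RUN {\n"
        f"{alloc}\n"
        f"  BACKUP AS BACKUPSET DATABASE TAG '{tag}' PLUS ARCHIVELOG;\n"
        "}\n"
    )

print(build_rman_backup("/mnt/datadomain/oracle", channels=2))
```

Multiple channels keep several backup streams flowing to the NFS mount in parallel, which is how backups take advantage of the appliance's high ingest throughput.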


SAP – Integrates with EMC Data Domain systems via Network File System (NFS) or EMC Data Domain Boost to keep application owners in direct control of backup and recovery through SAP BR*Tools.


SAP HANA – Integrate with Data Domain systems via NFS or Data Domain Boost to keep application owners in direct control of big data backup and recovery through SAP HANA Studio.


Microsoft System Center Data Protection Manager – Integrate with EMC Data Domain systems via virtual tape library (VTL).


Microsoft SQL Server – Integrate with Data Domain systems via Common Internet File System (CIFS) or EMC Data Domain Boost to keep database administrators in direct control of backup and recovery through SQL Server Management Studio.





Dell EMC Brings the Power of Flash and Cloud Tiering to Industry's #1 Protection Storage with New Family of Data Domain Appliances

Dell EMC Data Domain












VCE Vision is an advanced architecture for converged infrastructure management that discovers system health and compliance and federates the data, so that multiple systems' health and compliance can be managed from a single dashboard and shared with other IT management tools. The architecture is depicted in Figure 1.





Figure 1: VCE Vision Software Architecture


The architecture comprises two main modules: Core and Multi-Systems Management (MSM). Both reside on each VCE System's Advanced Management Platform (AMP), but in separate virtual machines. The Core module communicates directly with the element managers and individual devices of the VCE System components over a variety of southbound protocols and methods. It collects the data and transforms it into functional streams (VCE System inventory, capacity, health, logs, and events), then processes and stores the data in a PostgreSQL relational database management system. Finally, it provides the Core services and applications with a common messaging platform (a RabbitMQ message broker) to send and receive messages asynchronously, ensures the persistence of messages until delivery to a consumer, and passes data northbound to MSM (for Vision dashboard visualization) and to third-party tools such as the VMware vCenter Web Client via a plug-in.
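The asynchronous publish/consume pattern that the RabbitMQ broker provides can be illustrated with a small stand-in built on Python's standard library. The stream names mirror the functional streams mentioned above, but this is a sketch of the pattern only, not of Vision's actual implementation:

```python
# In-memory stand-in for the broker: one queue per functional stream.
# Producers post messages; messages stay queued until a consumer takes
# delivery, so producer and consumer never have to run in lockstep.
import queue
import threading

streams = {name: queue.Queue() for name in ("inventory", "capacity", "health")}

def publish(stream, message):
    """Producer side: post a message to a named stream."""
    streams[stream].put(message)

def consume(stream, handler):
    """Consumer side: take delivery of messages until a None sentinel."""
    while True:
        msg = streams[stream].get()
        if msg is None:
            break
        handler(msg)

received = []
consumer = threading.Thread(target=consume, args=("health", received.append))
consumer.start()

publish("health", {"component": "fabric-A", "status": "degraded"})
publish("health", None)  # sentinel: shut the consumer down
consumer.join()
print(received)
```

Because the broker holds each message until it is consumed, a slow dashboard or third-party tool does not block the southbound collectors, which is the point of decoupling the streams this way.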


Having seen the product's architecture, the natural question is why this product is needed and why it is important. Modern IT organizations spend a large share of their resources (around 70%) just "keeping the lights on": performing routine activities such as integrating technologies, upgrading them with new firmware, and monitoring and securing them. To put this in perspective, the siloed nature of traditional infrastructure and operations is largely responsible for this high cost and inefficiency. In this context, a large US manufacturer recently told me that they don't even have the time to deploy new applications to drive new business. They also stated that they need too many maintenance windows just to upgrade firmware. Even worse, untested releases cause outages. They need a remedy for all of these problems.

I feel that VCE Vision software solves these problems, so we can spend less time "keeping the lights on" in our data centers and devote more resources to new projects that grow the business, as depicted in Figure 2.


Figure 2 : Transition to Vision Dashboard

Each VCE System's health and compliance status appears on the dashboard directly below the top-level information, as shown in Figure 3.


Figure 3: Single Dashboard for Vision 3.3 software.




From Figure 3, users can see at a glance the converged system health and compliance risks across all VCE systems. This helps us proactively address risks before they impact the business, or escalate them quickly if they already have. The heat map in Figure 3 indicates how system health issues are evolving, based on the availability of the system architecture's redundant components. Figure 3 also shows the out-of-compliance components, by type, and the "Failed components" view, which lists components that failed Vision's automated RCM (firmware/software release) and security-hardening policy compliance audits. The benefits are discussed in the side labels of Figure 3. In short, VCE Vision software provides the following benefits:


  • Ensures system stability and optimization while lowering OPEX
  • Validates successful upgrades and pinpoints drift from compliance
  • Ensures strong security posture while lowering OPEX
  • Pinpoints drift from compliance for continuous policy enforcement
  • Saves hours of labor and avoids human error in the upgrade process
  • "Instead of looking in 97 different places, I can immediately see where the pain points are and act accordingly." (Global Communications, Hosting and Cloud Service Provider)
  • "It used to take 5 days with 12 hours of downtime for system updates ... through Vision ... 1 day with zero downtime." (Large North American University)












I attended Oracle OpenWorld 2016 (OOW16) this year and found that the emphasis is on three major pillars: cloud computing, big data, and in-memory databases. In addition, it was announced that Oracle will be working on non-volatile memory and will continue improving and innovating the features of its in-memory database. As for the release of Database 12c Release 2, there are many innovations in the multitenancy area, such as online clones, refresh, and relocation. Also, a column store will be added to Active Data Guard for a performance boost. At OOW16 we saw the shift from disk-based to in-memory databases, from data warehouses to big data, and from on-premise databases to the database-optimized cloud, as shown in Figure 1.


Figure 1: Oracle's technology direction in the coming days


In line with the above direction, Oracle made many announcements for 12.2, which is slated to be available in November 2016. I would like to discuss some new features in multiple areas of 12.2. First, Database In-Memory will run on an Active Data Guard standby. Oracle claims that this real-time analytics capability has no impact on the primary database, as it makes productive use of standby database resources. In this release, the in-memory format can now be used in the Smart Columnar Flash cache, enabling in-memory optimizations on data in Exadata Flash cache. We can also expect improved in-memory performance, as it extends server DRAM seamlessly to the larger flash in storage. A new feature that is very interesting to me is the increase in the number of PDBs, which is upped to 4,096 per container. We can "clone, refresh, or relocate" PDBs even while they're running, while "isolation between tenants" in the same container has been "strengthened". This feature was communicated by Andy Mendelsohn (executive vice president for database server technologies at Oracle).

The most important announcement at this OOW16 is the development of non-volatile memory, slated to be available in 2018. Oracle claims it will be a big disruption in the storage and database markets. As for the big data announcements, there may be a tectonic shift from data warehousing to big data, as depicted in Figure 2.


Figure 2: Transforming to Big Data


In the big data space, the key innovation is faster SQL access to relational, Hadoop, and NoSQL databases using Oracle Big Data SQL on JSON data. This means we can join JSON with any other data source and apply any SQL analytics to JSON. Big Data SQL in Oracle Cloud also recognizes JSON.
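The idea of joining JSON documents with another data source and applying SQL analytics to them can be sketched with SQLite's JSON functions standing in for Oracle Big Data SQL (this assumes a Python build whose bundled SQLite includes the JSON1 functions, which modern builds do); the tables and data are invented for the example:

```python
# Join a stream of JSON documents with a relational table and aggregate
# with plain SQL. SQLite's json_extract() stands in here for the JSON
# access that Oracle Big Data SQL provides; the schema and rows are made up.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (doc TEXT)")                 # JSON side
con.execute("CREATE TABLE regions (id INTEGER, name TEXT)")   # relational side
con.executemany("INSERT INTO events VALUES (?)", [
    ('{"region_id": 1, "amount": 40}',),
    ('{"region_id": 1, "amount": 60}',),
    ('{"region_id": 2, "amount": 25}',),
])
con.executemany("INSERT INTO regions VALUES (?, ?)", [(1, "east"), (2, "west")])

rows = con.execute("""
    SELECT r.name, SUM(json_extract(e.doc, '$.amount'))
    FROM events e
    JOIN regions r ON r.id = json_extract(e.doc, '$.region_id')
    GROUP BY r.name
    ORDER BY r.name
""").fetchall()
print(rows)
```

The point of the pattern is that the JSON documents never need to be reshaped into relational rows first: the SQL engine extracts fields on the fly, so any join or aggregate that works on tables also works on the JSON source.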

With Oracle Database 12c Release 2, Oracle is introducing Oracle Database Exadata Express Cloud Service (Exadata Express), an entry-level version of Oracle's high-performance engineered system. Starting at just $175 per month for a 20 GB database, it is an entry-level database cloud service for dev/test or production databases for departments or small businesses. Exadata Express runs Oracle Database Enterprise Edition with most options, on Oracle's database-optimized Exadata infrastructure. Customers can start with a small deployment on Exadata Express and scale up to large database deployments on Oracle Database Cloud Service and Oracle Exadata Cloud Service. The best features are shown in Figure 3.



Figure 3 : 12c Innovations in Exadata Express Service

As for the Data Guard broker, the enhancements are depicted in Figure 4.


Figure 4: Data Guard Broker Enhancements


In summary, at this OOW and over the last couple of years, I have observed database technology moving rapidly in a new and innovative direction, especially in three areas: in-memory technology, big data analytics, and the cloud (as Andy Mendelsohn, executive vice president for database server technologies at Oracle, has also said). Oracle is positioning and showcasing its portfolio around cloud-compatible products, and it will be very interesting to see how the whole database landscape shapes up in terms of product quality and market competition vis-à-vis Oracle's proclaimed competitors such as Amazon.













CSR stands for corporate social responsibility, and we owe a lot to the society that has helped us become what we are. This year, the VCE (VMware-Cisco-EMC) team adopted a school in Bangalore, where I played a key role in selecting the school based on an understanding of its facilities, infrastructure, and other challenges, and ultimately helped the students gain knowledge in basic math, English, science, computers, Hindi, and more. It was a collective effort by many volunteers on the VCE team to teach the kids and, together with EMC technicians, repair all the desktops in the school's computer lab. It was a fascinating experience to teach the kids and celebrate different occasions with them, as you can see in Figure 1.


Figure 1 : Teaching and Spending time with School Kids


When I interacted with these kids, I felt there were many areas where we could make a difference for them. So I started teaching these underprivileged kids with a lot of passion and enthusiasm. Before the teaching exercise, I did a thorough check on the requirements of the students, teachers, and headmaster, and of the school in general. I found that they needed a fully operational computer lab, along with basic infrastructure and comprehensive teaching that would enrich their knowledge. This, in turn, will help them succeed in their future exams and assignments. At their special request, I taught them leadership and communication skills as well. Overall, it was a very rewarding experience: I learned how to make others successful and happy, and enjoyed the joy of giving back to society.





