VCE Vision is an advanced architecture for converged infrastructure management. It discovers system health and compliance and federates the data, so that the health and compliance of multiple systems can be managed from a single dashboard and shared with other IT management tools. The architecture is depicted in Figure 1.
Figure 1: VCE Vision Software Architecture
The architecture comprises two main modules: Core and Multi-Systems Management (MSM). Both reside on each VCE System's AMP (Advanced Management Platform), but in separate virtual machines. Core communicates directly with the VCE System components' element managers and individual devices over a variety of southbound protocols and methods. It collects the data and transforms it into functional streams – VCE System inventory, capacity, health, logs, and events – then processes and stores the data in a PostgreSQL relational database. Core also gives its services and applications a common platform (a RabbitMQ message broker) for sending and receiving messages asynchronously, ensuring that messages persist until they are delivered to a consumer. Finally, it passes data northbound to the MSM (for Vision dashboard visualization) and to third-party tools such as the VMware vCenter Web Client via a plug-in.
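The functional streams and asynchronous messaging described above can be sketched as a simple message envelope. This is illustrative only: the field names and values below are my own assumptions, not the actual VCE Vision message schema.

```python
import json

# Illustrative sketch: a health-event envelope of the kind Core might publish
# to the RabbitMQ broker. Field names are assumptions, not the Vision schema.
def make_health_event(system_id: str, component: str, status: str) -> str:
    event = {
        "stream": "health",   # one of: inventory, capacity, health, logs, events
        "system": system_id,
        "component": component,
        "status": status,
    }
    return json.dumps(event, sort_keys=True)

msg = make_health_event("vblock-540-01", "fabric-interconnect-a", "degraded")
print(msg)
```

A consumer (the MSM, or a third-party tool) would simply parse the JSON back into a structure and update its view of that system's health.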
Having seen the architecture, the natural question is why we need this product and why it is important. Modern IT organizations spend the bulk of their resources – around 70 percent – just "keeping the lights on": performing routine activities such as integrating technologies, upgrading firmware, and monitoring and securing systems. To put this into perspective, the siloed nature of traditional infrastructure and operations is largely responsible for this high cost and inefficiency. A large US manufacturer recently told me that they don't even have time to deploy new applications to drive new business, and that they need too many maintenance windows just for upgrading firmware. Even worse, untested releases cause outages! They need a proper cure for all these nightmares.
I feel that VCE Vision software solves these problems, so we can spend less time "keeping the lights on" in the data center and devote more resources to new projects that grow the business, as depicted in Figure 2.
Figure 2: Transition to Vision Dashboard
Each VCE System's health and compliance status appears on the dashboard directly below the top-level information, as shown in Figure 3.
Figure 3: Single Dashboard for Vision 3.3 software.
From Figure 3, users can see at a glance the converged system health and compliance risks across all VCE Systems. This helps us address risks proactively before they impact the business, or escalate them quickly if they already have. The heat map in Figure 3 indicates how system health issues are evolving, based on the availability of the architecture's redundant components. The figure also shows the out-of-compliance components and their types: "Failed components" lists components that failed Vision's automated RCM (firmware/software release) and security-hardening policy compliance audits. The benefits are discussed in the side labels of Figure 3. In short, we get the following benefits from the VCE Vision software:
This year I attended Oracle OpenWorld 2016 (OOW16) and found the emphasis on three major pillars: cloud computing, big data, and in-memory databases. In addition, Oracle announced that it will be working on non-volatile memory and will continue improving and innovating the features of its in-memory databases. As for Database 12c Release 2, there are many innovations in the multitenancy area, such as online clones, refresh, and relocation. A column store will also be added to Active Data Guard for a performance boost. At OOW16 the shift was clear: from disk-based to in-memory databases, from data warehouses to big data, and from on-premises databases to the database-optimized cloud, as shown in Figure 1.
Figure 1: Oracle's technology direction in the coming days
In line with this direction, Oracle made many announcements for 12.2, which is slated to be available in November 2016. I would like to discuss some of its new features. First, the In-Memory database will run on an active Data Guard standby. Oracle claims that real-time analytics has no impact on the primary database, since it makes productive use of standby database resources. In this release, the in-memory format can also be used in Smart Columnar Flash Cache, enabling in-memory optimizations on data in Exadata flash cache. We can also expect improved in-memory performance, since it extends server DRAM seamlessly to the larger flash in storage. A feature I find particularly interesting is the increase in the number of PDBs, now up to 4,096 per container. We can "clone, refresh, or relocate" PDBs even while they're running, and "isolation between tenants" in the same container has been "strengthened," as communicated by Andy Mendelsohn, executive vice president for database server technologies at Oracle.
The most important announcement at OOW16 is the development of non-volatile memory, slated to be available in 2018. Oracle claims it will be a big disruption in the storage and database markets. As far as big data announcements are concerned, there may be a tectonic shift from data warehousing to big data, as depicted in Figure 2.
Figure 2: Transforming to Big Data
In the big data space, the biggest innovation is faster SQL access to relational, Hadoop, and NoSQL databases using Oracle Big Data SQL on JSON data. This means we can join JSON with any other data source and apply any SQL analytics to JSON. Big Data SQL in the Oracle Cloud also recognizes JSON.
With Oracle Database 12c Release 2, Oracle is introducing Oracle Database Exadata Express Cloud Service (Exadata Express), an entry-level version of Oracle's high-performance engineered system. Starting at just $175 per month for a 20 GB database, it's an entry-level database cloud service for dev/test or production databases for departments or small businesses. Exadata Express runs Oracle Database Enterprise Edition with most options, on Oracle's database-optimized Exadata infrastructure. Customers can start with a small deployment on Exadata Express and scale to large deployments on Oracle Database Cloud Service and Oracle Exadata Cloud Service. The best features are demonstrated in Figure 3.
Figure 3: 12c Innovations in Exadata Express Service
As far as the Data Guard broker is concerned, we have the developments depicted in Figure 4.
Figure 4: Data Guard Broker Enhancements
In summary, at this OOW and over the last couple of years, I have observed database technology moving rapidly in a new and innovative direction, especially in three areas: in-memory technology, big data analytics, and the cloud (the same direction pronounced by Andy Mendelsohn, executive vice president for database server technologies at Oracle). Oracle is positioning and showcasing its portfolio around cloud-compatible products, and it will be very interesting to watch how the whole database landscape shapes up in terms of product quality and market competition against Oracle's proclaimed competitors such as Amazon.
CSR stands for corporate social responsibility, and we owe a lot to the society that has helped us become what we are. This year, the VCE (VMware-Cisco-EMC) team adopted a school in Bangalore. I played a key role in the adoption, assessing the school's facilities, infrastructure, and other challenges, and ultimately helping the students gain knowledge in basic math, English, science, computers, Hindi, and more. It was a collective effort: many volunteers from the VCE team taught the kids, and EMC technicians helped repair all the desktops in the computer lab. It was a fascinating experience to teach the kids and celebrate different occasions with them, as you can see in Figure 1.
Figure 1: Teaching and Spending Time with School Kids
Interacting with these kids, I felt there were many areas where we could make a difference. So I started teaching these underprivileged kids with a lot of passion and enthusiasm. Before the teaching exercise, I did a thorough assessment of the needs of the students, teachers, and headmaster, and of the school in general. I found that they needed a fully operational computer lab, basic infrastructure, and comprehensive teaching to enrich their knowledge, which would in turn help them succeed in future exams and assignments. On special request, I taught them leadership and communication skills as well. Overall, it was an enthralling experience; I learned how to make others successful and happy, and enjoyed the joy of giving back to society.
Hyper-converged infrastructure (HCI) brings together best-in-class hardware and software components (compute, networking, storage, virtualization, and management) to greatly simplify data center operations. HCI integrates compute, software-defined storage, networking, and virtualization into a single building block for the data center, as shown in Figure 1.
Figure 1: Evolution of Hyper-Converged Architecture
It enables compute, storage, and networking functions to be decoupled from the underlying infrastructure and run on a common set of physical resources based on industry-standard x86 components. These systems are built with a modular design, making them easy and flexible to scale within the core data center. They offer rich data services and highly resilient infrastructure components, which many of today's mission-critical enterprise applications require. Traditional converged systems have also proven to deliver excellent TCO, while HCI racks and appliances deliver different benefits, as depicted in Figure 2.
Figure 2: Comparative Study between Traditional and Hyper-Converged Architecture
If an organization has remote or small data center and edge locations without highly skilled IT administrators on site, a hyper-converged appliance can be a great option. Simple to set up and highly automated, appliances deliver simplicity and speed at a relatively low cost of entry compared to most reference architectures or traditional SANs. Appliances are highly configurable and designed to start small and grow by simply adding appliances/nodes to the cluster. They provide a "buy" experience that is inclusive of ongoing lifecycle management and support.
A rack-scale system brings the same benefits as a stand-alone appliance, but on a larger scale. Rack-scale systems are a perfect choice for core data centers because, as the name suggests, they scale bigger and more easily than stand-alone appliances, since they include networking components such as Cisco switches. They offer the same "buy" experience – hyper-converged and software-defined – but scale to extremes without difficulty.
There are several key drivers for the adoption of HCI:
Lower TCO – a major factor. CapEx (the initial investment) is lower, and operational expenses are also lower than with traditional SAN architectures (including power and cooling, ongoing system administration, and the elimination of forklift upgrades and data migrations). HCI makes it possible to start small and scale gradually through a software-defined approach, delivering the lowest TCO. In fact, hyper-converged infrastructure can deliver more than 30 percent lower TCO compared to traditional SANs.
Speed and Agility – in addition to faster time to value and deployment, IT can easily add storage, compute, and networking resources to meet business demands as they grow and expand. As mentioned earlier, there's no waiting for boxes to arrive and no lengthy testing: the system is up and ready in less than a day. Simply plug it in and you are ready to go.
Scale Easily – HCI allows users to start with a small deployment (a few nodes) and then flexibly and efficiently scale out to support dynamic workloads and evolving business needs (hundreds or thousands of nodes, depending on the system type).
Operational Simplicity – HCI is managed from a central console that controls both compute and storage and automates many functions. This is also driving customer interest, because it cuts down on the number of tools that have to be learned and used.
The software-defined architecture combines multiple infrastructure components in a single solution, eliminating the need to refresh multiple siloed components, whether consecutively or in parallel. It enables a pay-as-you-grow approach: start with what you need today and expand incrementally, rather than purchasing large amounts of compute and storage up front. In addition, the software-defined architecture eliminates the need for complex migrations and forklift hardware upgrades. It also addresses the typical over-provisioning and over-purchasing that occur when technology is expected to last for multi-year cycles.
VCE addresses all of these workload issues with a broad portfolio of optimized systems – blocks, racks, and appliances – that can be interconnected into a single pool of resources with consistent management, operations, backup, business continuity, and disaster recovery capabilities. Figure 3 shows some figures and data points that highlight VCE's leadership in the converged infrastructure market.
Figure 3: VCE HCI Products Leadership Charts
In summary, modern-day data centers provide the following benefits:
In my last blog I discussed a wide range of best-practice methodologies for Oracle databases on vBlock with XtremIO. Best practices are important because, as Oracle DBAs, we encounter various challenges in Oracle environments, as demonstrated in Figure 1. Oracle environments typically require high transaction rates at very low latency. At the same time, the different types of activity performed against the database – online transaction processing (OLTP), online analytical processing (OLAP), and data warehousing (DW), for example – cause very different I/O sizes, randomness, and read/write ratios at the storage level. On top of this, even within the database itself, logging activity and database activity have very different I/O patterns. In this blog, I will continue the discussion of best practices.
Figure 1: Oracle DB Workloads
Let's start with the ASM settings: disk group layout and disks per disk group. Oracle recommends separating disk groups into three categories: Data, FRA/Redo, and System. Because of the nature of redo, a dedicated disk group can be used for it. Although an XtremIO array performs well with a single LUN in a single disk group, it is better to use multi-threading and parallelism to maximize database performance. The best practice is to use four LUNs for the Data disk group, which allows the host to drive simultaneous threads through different queuing points and extract maximum performance from the XtremIO array. A RAC system would therefore have four LUNs dedicated to control files and data files; one for redo; one for archive logs, flashback logs, and RMAN backups; and one for system files. For optimum performance, keep the number of disk groups to 10 or fewer.
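A minimal sketch of the LUN layout just described. The disk group names are illustrative, not Oracle-mandated; adjust them to your own naming standards.

```python
# LUN counts per ASM disk group for the RAC layout described above.
# Group names are illustrative examples only.
disk_groups = {
    "DATA":   4,  # control files + data files: four LUNs for parallel queuing
    "REDO":   1,  # dedicated redo disk group
    "FRA":    1,  # archive logs, flashback logs, RMAN backups
    "SYSTEM": 1,  # system files
}

print(sum(disk_groups.values()))  # 7 LUNs in total for the cluster
print(len(disk_groups) <= 10)     # True: within the 10-disk-group guideline
```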
Also modify /etc/sysconfig/oracleasm:
# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning.
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan.
Allocation Unit Size
The default AU size (1 MB for coarse-grained striping, 128 KB for fine-grained striping) works well on an XtremIO array for the various database file types. There is no need to modify the striping recommendations provided by the default templates for the various Oracle DBMS file types.
For an ASM instance to mount disk groups with different sector-size attribute values (512 or 4096), the parameter _disk_sector_size_override=TRUE must be set in the parameter file of the database instance. Consider setting ORACLEASM_USE_LOGICAL_BLOCK_SIZE=true in /etc/sysconfig/oracleasm; this sets the logical block size to what the disk reports (512 bytes). The minimum I/O request size for database files residing in an ASM disk group is dictated by the sector size (an ASM disk attribute). For ease of deployment, the recommendation is to keep the logical sector size at 512 bytes, ensuring the minimum I/O block size can be met for all types of database files. Consider skipping (not installing) ASMLIB entirely: without ASMLIB you can create an ASM disk group with a 512-byte sector size and direct the default redo log files (512-byte block size) to that disk group, at least in the interim, so that DBCA can complete the database creation.
Regarding virtualization, we have the following recommendations:
Infrastructure traffic should be separated from VM traffic. This improves security, isolates the traffic types effectively, and improves performance. In VMware environments, the para-virtualized network adapters are the highest-performing and most efficient, and should be used for all VMs. Network redundancy ensures continued trouble-free operation even if a failure occurs: physical NICs should be used in pairs for each server or vSwitch, with the NICs of each pair assigned to separate physical switches.
SAN connectivity, whether FC or iSCSI, is important in an Oracle environment. At least two HBAs should be used per host, with at least two 8 Gb/s FC ports or 10 Gb/s iSCSI ports per physical CPU. Zoning/IP connectivity should be as broad as possible, keeping in mind the limit on path count to a single device. This restriction is significant in 6- and 8-X-Brick clusters, where the total number of available front-end ports per protocol (FC or iSCSI) exceeds 16. Use single-initiator zoning, preferably single-initiator/single-target zoning. Ensure enough bandwidth for normal operation even under failure scenarios. Balance hosts across storage controllers (SCs) to distribute load; keep hosts and SCs on the same switch if possible, and do not allow more than two ISLs in the data path.
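To see why broad zoning collides with the per-device path limit, here is a toy path-count calculation. The 16-path ceiling used here is an assumption for illustration, not a documented XtremIO limit.

```python
MAX_PATHS = 16  # assumed per-device path ceiling, for illustration only

def paths_per_device(initiator_ports: int, zoned_target_ports: int) -> int:
    # Each initiator-target pair contributes one path to the device.
    return initiator_ports * zoned_target_ports

# Two host HBA ports zoned to all 16 front-end ports of a large cluster:
print(paths_per_device(2, 16))               # 32
print(paths_per_device(2, 16) <= MAX_PATHS)  # False: exceeds the assumed ceiling
# Restricting the zoning keeps the count within the ceiling:
print(paths_per_device(2, 8))                # 16
```

This is why single-initiator/single-target zoning matters on the larger clusters: it bounds the multiplication.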
I hope this gives readers some ideas about the optimal implementation of Oracle databases on vBlock and XtremIO storage.
In my last blog I talked about vBlock and its relevance to Oracle databases.
Customers now realize that architecting, planning, testing, verifying, deploying, and maintaining interoperability between infrastructure components saps budgets and resources while adding no value to the business. This is why customers are moving toward converged architecture, as Figure 1 illustrates.
Figure 1: Building vs. Buying a Car/Converged Architecture
In this regard, vBlock 540, XtremIO, and Oracle Database stand out as winners, as Gartner's Magic Quadrant in Figure 2 demonstrates.
Figure 2 : Magic Quadrant for Integrated Systems
As per Gartner, VCE is clearly the market leader, with the following distinguishing features:
So, to run this factory-integrated and validated VCE architecture (such as vBlock), we need to follow some best practices to extract the maximum benefit from the three products: vBlock, XtremIO, and Oracle Database. I will document some of these best practices from the Oracle database standpoint in this blog and the next.
First, the default block size for Oracle redo logs is 512 bytes. This default causes redo log entries to be misaligned, triggering read-modify-write operations (explained in the blog post by flashdba); the redo log block size should therefore be set to 4 KB. To create redo logs with a 4 KB block size on ASM disk groups with a 4096-byte sector size, add _disk_sector_size_override=TRUE to the parameter file of the database instance. Oracle's default data block size of 8 KB works well with XtremIO: it provides a good balance between IOPS and bandwidth, though it can be improved upon in the right conditions. The new Vblock 540 with the XtremIO all-flash array performs 1.5x faster than the previous release, as it has been optimized for an 8 KB block size. If your rows don't fit nicely into 4 KB or larger blocks, it is better to stick with the default setting. For data files, I/O requests are issued in multiples of the database block size (4 KB, 8 KB, 16 KB, and so on); if the starting addressable sector is aligned to a 4 KB boundary, the optimal condition is met.
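The alignment argument can be checked with a few lines of arithmetic. This is a sketch: the 4 KB sector size is the flash-page assumption from the paragraph above.

```python
SECTOR = 4096  # assumed flash sector/page size in bytes

def aligned(offset: int, size: int, sector: int = SECTOR) -> bool:
    # A write avoids a read-modify-write cycle only if it both starts on a
    # sector boundary and covers whole sectors.
    return offset % sector == 0 and size % sector == 0

print(aligned(0, 512))      # False: a 512-byte redo block covers only part of a sector
print(aligned(0, 4096))     # True: a 4 KB redo block maps cleanly onto one sector
print(aligned(8192, 8192))  # True: 8 KB data-file I/O on a 4 KB boundary stays aligned
```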
Second, Oracle controls the maximum number of blocks read in one I/O operation during a full scan via the DB_FILE_MULTIBLOCK_READ_COUNT parameter. This parameter is specified in blocks and defaults to a 1 MB read size. It should be set to the maximum effective I/O size divided by the database block size. If multiple tables have PARALLEL_DEGREE_POLICY set, reduce the effective I/O size to 64 KB or 128 KB; with the default 8 KB block size, DB_FILE_MULTIBLOCK_READ_COUNT would then be 8 or 16, respectively.
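The arithmetic behind these settings is simple; this hypothetical helper just divides the target I/O size by the block size.

```python
def multiblock_read_count(target_io_bytes: int, db_block_bytes: int) -> int:
    # DB_FILE_MULTIBLOCK_READ_COUNT is expressed in database blocks.
    return target_io_bytes // db_block_bytes

KB = 1024
print(multiblock_read_count(1024 * KB, 8 * KB))  # 128: the 1 MB default with 8 KB blocks
print(multiblock_read_count(64 * KB, 8 * KB))    # 8:  a 64 KB effective I/O size
print(multiblock_read_count(128 * KB, 8 * KB))   # 16: a 128 KB effective I/O size
```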
Third, because XDP protects the data on disk, there is no need for additional redundancy: Oracle should be configured to use ASM external redundancy, along with the default ASM stripe sizes. Best practice is to use four LUNs for the DATA disk group to allow the host to use simultaneous threads; the REDO and FRA disk groups should use two LUNs each. The traditional complexities of storage configuration give way to simpler, faster provisioning, since all database data receives the sub-millisecond performance of the all-flash array along with XDP protection on XtremIO and Vblock 540. For configuring Oracle disk groups, the following guidelines may be followed:
Multiple LUNs should be configured: each LUN has its own queue, and maximizing queue depths is important in Oracle environments. This is a host-side requirement, not an XtremIO one – on the array, one LUN can easily service all the required IOPS. Nor is it necessary to separate LUNs by I/O type, since XtremIO randomizes accesses internally. All LUNs should be created with the 512-byte sector type. LUNs accessed by a hypervisor cluster should use a consistent addressing scheme; if LUN numbers do not match, VM power-up or migration operations may be affected. I will discuss further best practices for Oracle Database on converged architecture in my next blog.
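As a rough illustration of why multiple LUNs raise host-side concurrency, here is a toy queue-depth calculation. The per-LUN depth of 32 is an assumed typical HBA default, not an XtremIO requirement.

```python
PER_LUN_QUEUE_DEPTH = 32  # assumed HBA default, for illustration only

def aggregate_queue_depth(lun_count: int) -> int:
    # Each LUN has its own host-side queue, so the number of outstanding
    # I/Os the host can keep in flight scales with the LUN count.
    return lun_count * PER_LUN_QUEUE_DEPTH

print(aggregate_queue_depth(1))  # 32: one LUN caps the host's outstanding I/O
print(aggregate_queue_depth(4))  # 128: four DATA LUNs quadruple the concurrency
```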
In my last blog (part 1 of this series) I talked about the relevance and architecture of VxBlock with respect to Oracle databases.
We were analyzing the main components of VxBlock, which are as follows:
Let us now discuss each of these components with regard to Oracle databases:
The Cisco Unified Computing System (UCS) within VxBlock Systems offers a scalable, high-performance computing platform for running Oracle databases. Oracle databases on the VxBlock architecture get built-in reliability, availability, and serviceability (RAS), helping to ensure nonstop access to important applications and data at a lower total cost of ownership.
The UCS servers within VxBlock are automatically configured through unified, profile-based management, which helps us in the following ways:
Cisco networking within VxBlock accelerates Oracle applications and databases and strengthens their security. Cisco UCS inside VxBlock extends these benefits with a high-performance x86 platform optimized for Oracle solutions. This networking architecture provides the following benefits:
Network virtualization is implemented by VMware NSX, the market-leading network virtualization platform from VMware. NSX is a multi-hypervisor solution that leverages the vSwitches already present in server hypervisors across the data center. NSX coordinates these vSwitches and the network services pushed to them for connected VMs, effectively delivering a platform – a "network hypervisor" – for creating virtual networks.
It offers the following benefits:
The XtremIO storage built into VxBlock 540 provides the following benefits to Oracle databases:
VxBlock Systems are all-flash converged infrastructure systems for mixed, high-performance workloads and emerging third-platform applications. Leveraging EMC XtremIO all-flash storage, Cisco Unified Computing System (UCS), and an Application Centric Infrastructure (ACI)-ready network, these systems deliver scale-out performance at ultra-low latency. Additionally, combined with VCE Technology Extensions for EMC, VxBlock 540 is an ideal platform for big data analytics and end-user computing. In a nutshell, we derive the following benefits from VxBlock with regard to Oracle databases: