The EMC XtremIO storage array is an all-flash system that uses proprietary intelligent software to deliver unparalleled levels of performance. XtremIO's inline data reduction stops duplicate write I/Os from ever being written to disk, which improves application response time. XtremIO is highly scalable: performance, memory, and capacity increase linearly as the system grows. XtremIO has its own data protection algorithm, dedicated to fast rebuilds and all-around protection, that performs better than the traditional RAID types, and the application I/O load is balanced across the entire XtremIO system.

XtremIO provides native thin provisioning: all volumes are thin provisioned as they are created. Since XtremIO dynamically calculates the location of each 4 KB data block, it never pre-allocates or thick provisions storage space before writing the actual data. Thin provisioning is not a configurable property; it is always enabled, with no performance loss or capacity overhead. Furthermore, no volume defragmentation is necessary, since all blocks are distributed over the entire array by design.
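As a simplified illustration of how inline deduplication avoids duplicate writes, consider fingerprinting each 4 KB block before it reaches flash. This is only a sketch: XtremIO's actual algorithm is proprietary, and the class, hash function, and in-memory "store" here are my own assumptions.

```python
import hashlib

BLOCK_SIZE = 4096  # XtremIO addresses data in 4 KB blocks


class DedupStore:
    """Toy content-addressed store: a duplicate block is never written twice."""

    def __init__(self):
        self.blocks = {}          # fingerprint -> block data (stands in for flash)
        self.physical_writes = 0  # blocks actually written to media

    def write(self, data):
        """Ingest a byte stream; return the fingerprints that map it back."""
        fingerprints = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:          # inline check: duplicate write I/O is skipped
                self.blocks[fp] = block
                self.physical_writes += 1
            fingerprints.append(fp)
        return fingerprints

    def read(self, fingerprints):
        """Reassemble the original data from stored blocks."""
        return b"".join(self.blocks[fp] for fp in fingerprints)


store = DedupStore()
# Three identical 4 KB blocks plus one unique block:
payload = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE
fps = store.write(payload)
print(store.physical_writes)  # 2 physical writes for 4 logical blocks
```

The point of the sketch is the ratio: four logical block writes cost only two physical writes, which is why highly duplicated workloads see both capacity savings and better response times.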
Many environments, applications, and solutions would benefit from the addition of an XtremIO storage array, including Virtual Desktop Infrastructure (VDI), server virtualization, and database analytics and testing. The idea is to implement XtremIO in an environment with a high number of small random I/O requests, a low-latency requirement, and data with a high rate of deduplication. These features are very beneficial from the perspective of Oracle DB performance; hence, Oracle DBAs will love working on this all-flash storage array.
The benefits of XtremIO extend across multiple audiences in the IT organization.
Application owners benefit from accelerated performance, resulting in faster transactions, support for more end-users, and improved efficiency.
Infrastructure owners can now drive consolidation of database infrastructure even across mixed database workload environments, whether physical or virtual, and service all environments with all flash.
DBAs can now eliminate the need for constant database tuning and chasing hot spots. They can provision new databases in less time and reduce downtime for capacity planning and growth management.
CIOs can improve overall database infrastructure economics through consolidating databases and storage and controlling costs even as multiple databases are deployed and copied over time.
XtremIO supports both 8 Gb/s Fibre Channel (FC) and 10 Gb/s iSCSI with SFP+ optical connectivity to the hosts. Each X-Brick provides four FC and four iSCSI front-end ports. Access to the XtremIO Management Server (XMS) or to the Storage Controllers in each X-Brick is provided via Ethernet. XtremIO can also use LDAP to provide user authentication.
Fibre Channel (FC) is a serial data transfer protocol and standard for high-speed, enterprise-grade storage networking that delivers storage data over fast optical networks (XtremIO uses 8 Gb/s FC ports). Basically, FC is the language through which storage devices such as HBAs, switches, and controllers communicate. The FC protocol helps clear I/O bottlenecks, which translates directly into faster database response times.
XtremIO offers storage connectivity via Fibre Channel and iSCSI; therefore, the proper cables must be supplied and correctly configured in order to successfully present storage. XtremIO also requires Ethernet connectivity for management, and an additional RJ45 port is required if a physical XMS is being used.
As in every SAN storage environment, a highly available configuration requires at least two HBA connections per host, with each HBA connected to a separate Fibre Channel switch, as shown here. Connecting the XtremIO cluster to the FC switches is also straightforward: each Storage Controller has two FC ports, so connect each Storage Controller to each Fibre Channel switch.
Each X-Brick of an XtremIO system can lose one Storage Controller and still remain fully functional. In general, every host should be connected to as many Storage Controllers as possible on an XtremIO cluster, as long as the host and its multipathing software support that number of connections. Best practice indicates up to four paths per host for single X-Brick clusters, four to eight paths for two X-Brick clusters, and up to 16 paths for a four X-Brick cluster; never include more than one host initiator in a zone. To avoid multipathing performance degradation, do not use more than 16 paths per device. All volumes are accessible via any and all of the front-end ports. To get optimal performance from Oracle DB on an XtremIO storage array, these best practices need to be followed.
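The path-count guidance above can be captured in a small validation helper. This is a sketch of my own; only the numeric limits come from the best practices just described, and the function name and structure are invented for illustration.

```python
# Recommended host path counts per XtremIO cluster size, taken from the
# best practices above. 16 is the hard ceiling: more paths per device can
# degrade multipathing performance.
RECOMMENDED_PATHS = {
    1: (1, 4),    # single X-Brick: up to four paths
    2: (4, 8),    # two X-Bricks:   four to eight paths
    4: (1, 16),   # four X-Bricks:  up to 16 paths
}
MAX_PATHS = 16


def check_paths(x_bricks, paths_per_host):
    """Return True if the host's path count follows the stated best practices."""
    if paths_per_host > MAX_PATHS:
        return False
    low, high = RECOMMENDED_PATHS.get(x_bricks, (1, MAX_PATHS))
    return low <= paths_per_host <= high


print(check_paths(2, 6))   # True: within 4-8 for a two X-Brick cluster
print(check_paths(1, 8))   # False: exceeds four paths on a single X-Brick
```

A check like this is handy during host provisioning, before zoning is committed, so that an over-pathed host is caught early rather than diagnosed later as a multipathing performance problem.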
When cabling for iSCSI connectivity, the ideal configuration has redundant paths and redundant switches as well. General best practice for highly available iSCSI environments is for every host to have two physical adapters, with each adapter connected to a separate VLAN, as shown here. As with FC connectivity, connecting the XtremIO iSCSI ports is easy: since each Storage Controller has two iSCSI ports, simply connect each of its ports to a separate iSCSI subnet or VLAN.
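That redundant port-to-subnet layout can be sketched as a simple cabling plan. The controller names, port labels, and VLAN names below are invented for illustration; only the two-ports-per-controller, one-port-per-subnet pattern comes from the text above.

```python
def iscsi_cabling_plan(storage_controllers,
                       subnets=("iSCSI-VLAN-A", "iSCSI-VLAN-B")):
    """Map each Storage Controller's two iSCSI ports onto separate subnets.

    Returns a list of (port_name, subnet) pairs, one per physical cable.
    """
    plan = []
    for sc in storage_controllers:
        # Port 1 goes to the first subnet, port 2 to the second, so the
        # loss of one switch/VLAN never isolates a controller entirely.
        for port, subnet in zip((1, 2), subnets):
            plan.append((f"{sc}-iscsi{port}", subnet))
    return plan


for port, subnet in iscsi_cabling_plan(["X1-SC1", "X1-SC2"]):
    print(port, "->", subnet)
```

For a single X-Brick (two Storage Controllers) this yields four cables, two per VLAN, matching the redundant-switch topology described above.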
In this blog, I tried to explain the architecture of XtremIO with special reference to the Oracle database. If we consider XtremIO and Oracle DB in unison, then below are the top five features for running Oracle DB on top of an XtremIO storage array.
N.B.: The above discussion is with respect to XtremIO version 2.4. For the latest features of XtremIO 3.0, please click here.