In my last blog I discussed some of the best features and benefits of deploying and running Oracle on the XtremIO array. In this blog, I would like to cover some best practices that let you exploit those features and benefits discussed earlier.

 

Tuning I/O Block Size and DB_FILE_MULTIBLOCK_READ_COUNT

 

Oracle’s default block size of 8k works just fine with XtremIO. This setting provides a great balance between IOPS and bandwidth, but it can be improved upon under the right conditions. If the data rows fit into a 4k block, one can see an IOPS improvement of over 20% by using a 4KB request size. If the rows don’t fit nicely in 4k blocks, it is better to stick with the default setting. For data files, I/O requests will be a multiple of the database block size – 4k, 8k, 16k, etc. If the starting addressable sector is aligned to a 4k boundary, the optimal condition is met.

 

The default block size for Oracle redo logs is 512 bytes, and I/O requests to the redo logs are a multiple of that block size. Redo entries encapsulated in large-block I/O requests are therefore very likely not to start and end on a 4k-aligned boundary, which results in extra computational work and I/O sub-routines on the XtremIO back end. To avoid that overhead, set the redo log block size to 4k. In order to create redo logs with a non-default block size, you’ll need to add the option ”_disk_sector_size_override=TRUE” to the parameter file of the database instance. It is also recommended to create a separate, stand-alone disk group for data files.
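Here is a minimal sketch of that change, assuming an spfile-managed database instance; the group numbers, sizes, and the +REDO disk group name are illustrative only:

-- Allow a non-default redo log block size (takes effect after an instance restart).
ALTER SYSTEM SET "_disk_sector_size_override" = TRUE SCOPE=SPFILE;

-- After the restart, add redo log groups with a 4k block size
-- (and drop the original 512-byte groups once they are inactive).
ALTER DATABASE ADD LOGFILE GROUP 4 ('+REDO') SIZE 2G BLOCKSIZE 4096;
ALTER DATABASE ADD LOGFILE GROUP 5 ('+REDO') SIZE 2G BLOCKSIZE 4096;
ALTER DATABASE ADD LOGFILE GROUP 6 ('+REDO') SIZE 2G BLOCKSIZE 4096;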

 

Oracle controls the maximum number of blocks read in one I/O operation during a full scan with the DB_FILE_MULTIBLOCK_READ_COUNT parameter. The parameter is specified in blocks, and its default generally corresponds to a maximum I/O size of 1MB. Generally we set this value to the maximum effective I/O block size divided by the database block size. If there are a lot of tables with a parallel degree set, we may want to drop the effective I/O size to 64k or 128k; with the default block size of 8k, that translates to a DB_FILE_MULTIBLOCK_READ_COUNT of 8 or 16.
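As an illustration only (the right value depends on your workload), the arithmetic and the parameter change look like this:

-- 1MB maximum I/O size / 8k block size = 128 blocks per multiblock read.
ALTER SYSTEM SET db_file_multiblock_read_count = 128 SCOPE=BOTH;

-- For heavily parallelized workloads, cap the effective I/O size at 128k instead:
-- 128k / 8k = 16 blocks per multiblock read.
-- ALTER SYSTEM SET db_file_multiblock_read_count = 16 SCOPE=BOTH;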

 

 

During performance benchmarks, the XtremIO array has proven capable of over 200K purely random read IOPS against a single X-Brick in SQL-driven OLTP tests, with the application reporting sub-millisecond latency. During bandwidth testing, again driven through SQL, a single X-Brick sustained 2.5 GB/s. As more hosts were added to the mix and tested against an expanded XtremIO array with two X-Bricks, performance doubled – recording over 400K IOPS during the OLTP testing and over 5 GB/s for the bandwidth test.

 

Arguably, the 8k DB block size is ideal for most workloads; it strikes a very good balance between IOPS and bandwidth. However, a very strong case can be made for 4k in the extreme circumstance where rows fit nicely in a 4k block, the buffer cache is not effective due to the randomness of the application access, and the speed of the storage becomes the determining factor for a successful deployment. When using a 4KB request size, the XtremIO array can service approximately 20-30% more I/Os per second than with 8KB requests.
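If you do go with 4k for selected segments, a hedged sketch of the setup follows; the cache size and tablespace name are purely illustrative:

-- A 4k buffer cache must exist before any 4k tablespace can be created.
ALTER SYSTEM SET db_4k_cache_size = 512M SCOPE=BOTH;

-- Hypothetical tablespace for small, randomly accessed rows.
CREATE TABLESPACE orders_4k
  DATAFILE '+DATA' SIZE 32G
  BLOCKSIZE 4K;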

 

ASM Disk Group Layout and ASM Disks per Disk Group

 

Oracle recommends separating disk groups into three parts: Data, FRA/Redo, and System. Due to the nature of redo, a disk group can be dedicated to it. While the XtremIO array will perform great using a single LUN in a single disk group, it is better to use multi-threading and parallelism to maximize performance for the database. The best practice is to use 4 LUNs for the Data disk group; this allows the hosts/applications to use simultaneous threads at the various queuing points to extract the maximum performance from the XtremIO array. That means a RAC system will have 4 LUNs dedicated to control files and data files; 1 for redo; 1 for archive logs, flashback logs, and RMAN backups; and one for your system files. The number of disk groups should be 10 or less for optimum performance.
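A minimal sketch of the Data disk group built from those 4 LUNs, run against the ASM instance as SYSASM; the ASMLib disk labels are illustrative:

-- External redundancy, since the XtremIO array protects the data itself.
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK 'ORCL:XTREMIO_DATA1',
       'ORCL:XTREMIO_DATA2',
       'ORCL:XTREMIO_DATA3',
       'ORCL:XTREMIO_DATA4'
  ATTRIBUTE 'compatible.asm' = '11.2', 'compatible.rdbms' = '11.2';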

 

Modify /etc/sysconfig/oracleasm

 

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.

ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.

ORACLEASM_UID=oracle

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.

ORACLEASM_GID=dba

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.

ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning

ORACLEASM_SCANORDER="dm"

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan

ORACLEASM_SCANEXCLUDE="sd"

 

Allocation Unit Size

 

The default AU size of 1 MB, together with coarse-grained striping and 128KB fine-grained striping, works well on the XtremIO array for the various database files. There is no need to modify the striping recommendations provided by the default templates for the various Oracle DBMS file types; the defaults are shown below and can be verified with the query after the table.

 

File Type     Striping
CONTROLFILE   FINE
DATAFILE      COARSE
ONLINELOG     FINE
ARCHIVELOG    COARSE
TEMPFILE      COARSE
PARAMETER     COARSE
FLASHBACK     FINE
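A read-only check to confirm these defaults on your own ASM instance (template names can vary slightly by version; PARAMETERFILE is the template behind the PARAMETER entry above):

-- List the striping and redundancy attributes of the default ASM templates.
SELECT name, stripe, redundancy
FROM   v$asm_template
WHERE  name IN ('CONTROLFILE', 'DATAFILE', 'ONLINELOG',
                'ARCHIVELOG', 'TEMPFILE', 'PARAMETERFILE', 'FLASHBACK')
ORDER  BY name;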

 

In order for ASM disk groups with different values for the sector size attribute (512, 4096) to be mounted by an ASM instance, the parameter “_disk_sector_size_override=TRUE” has to be set in the parameter file of the ASM instance. Also consider setting ORACLEASM_USE_LOGICAL_BLOCK_SIZE=true in /etc/sysconfig/oracleasm; setting this to true makes ASMLib use the logical block size reported by the disk, which is 512 bytes.
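On the ASM side this is a single parameter change, shown here as a sketch for an spfile-managed ASM instance (apply it on every node of a cluster and restart ASM):

-- Run against the ASM instance while connected AS SYSASM.
ALTER SYSTEM SET "_disk_sector_size_override" = TRUE SCOPE=SPFILE SID='*';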

 

The minimum I/O request size for database files residing in an ASM disk group is dictated by the sector size attribute of the ASM disks. For ease of deployment, the recommendation is to keep the logical sector size at 512 bytes to ensure that the minimum I/O block size can be met for all types of database files.

 

Consider skipping (not installing) ASMLIB entirely. Without ASMLib, you can create an ASM disk group with a 512-byte sector size and direct the default redo log files (512-byte block size) to that disk group, at least in the interim, just so DBCA can complete the database creation.
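A hedged sketch of such an interim disk group; the multipath device path is illustrative and assumes ASM_DISKSTRING already covers /dev/mapper:

-- 512-byte sector size disk group to hold the default redo logs
-- until they are recreated with a 4k block size.
CREATE DISKGROUP REDO_TMP EXTERNAL REDUNDANCY
  DISK '/dev/mapper/xtremio_redo1'
  ATTRIBUTE 'sector_size' = '512',
            'compatible.asm' = '11.2',
            'compatible.rdbms' = '11.2';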

 

HBA Settings and Multipath Software Settings

 

A single XtremIO X-Brick has two storage controllers, and each storage controller has two Fibre Channel ports. The best practice is to have two HBAs in the host and to zone each initiator to all targets in the fabric. The recommended maximum is 16 paths to the storage ports per host. For the storage ports, the recommendation is to utilize all ports evenly amongst all hosts and clusters to ensure balanced utilization of XtremIO’s resources.

 

The recommended LUN-queue-depth setting is the maximum supported per HBA when a single host connects to the XtremIO array: 256 for QLogic HBAs and 128 for Emulex HBAs. With 2 hosts, it is recommended to reduce that to half of the maximum – 128 for QLogic and 64 for Emulex. As the number of hosts increases, decrease the LUN-queue-depth setting proportionately. The minimum recommended setting as the number of hosts in the SAN grows is 32 for both QLogic and Emulex HBAs.

 

As noted earlier, great performance has been recorded on XtremIO with just one LUN in a single disk group. However, maximizing performance from a single host calls for multi-threading and parallelism, which is why the 4-LUN Data disk group matters: it allows the hosts/applications to use simultaneous threads at the various queuing points to extract the maximum performance from the XtremIO array.

 

It is a good practice to use a dynamic multipathing tool, such as PowerPath, to help distribute the I/O across the multiple HBAs, as well as to ride through HBA failures. As more systems are added to the shared infrastructure, the performance of static multipathing tools may be affected, causing additional management overhead and possible application-availability implications as data needs to be redistributed across other paths. Dynamic multipathing will continue to adjust to changes in I/O response times from the shared infrastructure as the needs of the application and the usage of the shared infrastructure change over time.

 

