This is a translation of a blog originally written by my colleague Mr. Makoto Miura (Makoto-san). The translation was also reviewed by Makoto-san, and I convey my utmost gratitude to him.

 

In my previous blog, we looked at the features and benefits of storage-based backup. Shall we now learn about some important points to keep in mind when using ASM? This time I would like to write an ASM-related blog in continuation of my previous post, Foundation of the storage based backup related to the Database.

 

ASM can be used with both block and NFS storage, but I would like to talk about block storage first. Of the customers I have worked with in Japan, the majority use block storage; I know of only one customer who uses NFS. Which storage option is most frequently used in your environment?

 

  • When mounting the copied ASM DG on the production server, rename the ASM DG first.

 

In earlier versions of Oracle, multiple ASM instances could be started individually on a single database server, so when mounting a copied ASM DG on that server it was possible to keep the same DG name. In newer versions, namely Oracle 11gR2, only ONE ASM instance can run on a single database server. For this reason, the copied ASM DG must be renamed before it is mounted. If EMC Replication Manager is used, this renaming is automated.
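
For illustration, here is a minimal sketch of how such a rename could be done manually in 11gR2 with Oracle's renamedg utility before mounting the copy (the disk group names DATA and DATA_COPY and the disk path pattern are my own assumptions, not values from the original setup):

    # Run as the Grid Infrastructure / ASM owner while the copied DG is dismounted.
    # dgname         = name recorded in the copied disk headers
    # newdgname      = new name to write into the headers
    # asm_diskstring = discovery path that matches only the copied LUNs
    renamedg phase=both dgname=DATA newdgname=DATA_COPY \
             asm_diskstring='/dev/mapper/copy_*' verbose=true

    -- Then mount the renamed disk group from the ASM instance:
    SQL> ALTER DISKGROUP DATA_COPY MOUNT;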

 

  • Maintaining the Consistency of the ASM DG

 

When a DG is built from multiple LUNs, it is very important to maintain consistency within the DG. The ASM DG's metadata is stored inside the DG itself, so during rebalancing, and when hot backup mode is used, write-order consistency must be preserved across all the LUNs that make up the ASM DG. On EMC storage, multiple LUNs can be grouped logically (into a consistency group), and this consistency is thereby maintained. This is possible not only on high-end storage but also on mid-range storage.
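
As a quick sanity check, the member LUNs of a disk group can be listed from the ASM instance so that all of them are placed into the same consistency group (the disk group name DATA below is only an example):

    -- List every disk path belonging to disk group DATA;
    -- all of these LUNs must be copied with write-order consistency.
    SQL> SELECT d.path, d.name
           FROM v$asm_disk d, v$asm_diskgroup g
          WHERE d.group_number = g.group_number
            AND g.name = 'DATA';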

 

  • Important points when using ASM’s Failure Groups

 

Some users want to mirror data between different storage boxes by using ASM Failure Groups; in that case, the data is spread across the storage boxes. The figure below shows how the data of ASM-DG#1 is distributed across two failure groups.

 

[Figure bb1.png: ASM-DG#1 distributed across two failure groups]

 

The figure above illustrates the setup of ASM's failure groups, which are defined by the user when the disk group is created.
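
For reference, a disk group like ASM-DG#1 with one failure group per storage box would be created roughly as follows; the disk group name, failure group names, and disk paths here are illustrative assumptions:

    -- NORMAL redundancy mirrors each extent between the two failure groups,
    -- i.e. between the two storage boxes.
    SQL> CREATE DISKGROUP DG1 NORMAL REDUNDANCY
           FAILGROUP fg_storage1 DISK '/dev/mapper/box1_lun1', '/dev/mapper/box1_lun2'
           FAILGROUP fg_storage2 DISK '/dev/mapper/box2_lun1', '/dev/mapper/box2_lun2';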

 

Now, which procedure is appropriate when we want to copy ASM-DG#1 by storage copy?

 

To be frank, Oracle supports the copy operation in this setup only when the integrity of the whole setup is maintained. In short, copying just one failure group is not sufficient here.

 

Let’s look at the various failure scenarios:

 

  1. One failure group within the disk group crashes. In this situation, the crashed failure group is rebuilt from the surviving failure group(s); a storage-based copy cannot be used here.
  2. Two failure groups within the disk group crash.

 

This type of situation does not occur often, but even if we restore one failure group via the storage copy mechanism, there is no certainty that the Oracle database can be restarted, because Oracle does not formally guarantee this operation. In the picture above, Oracle supports the storage copy only when Replica#1 and Replica#2 are created at exactly the same time and consistency is maintained between them. To recap, when we use a storage-based copy with ASM failure groups, we must copy all failure groups consistently.
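
When defining such a storage copy, it can help to list the disks of each failure group so that every LUN of both failure groups is included in one consistent replica set (DG1 is again an assumed name):

    -- Show which LUN belongs to which failure group of DG1;
    -- the storage copy must cover all of these LUNs at the same point in time.
    SQL> SELECT d.failgroup, d.path
           FROM v$asm_disk d, v$asm_diskgroup g
          WHERE d.group_number = g.group_number
            AND g.name = 'DG1'
          ORDER BY d.failgroup;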

 

  • Why do customers want to do ASM mirroring using failure groups?

 

I feel the true purpose is to improve the reliability of the DB system through redundant storage infrastructure. For this requirement we can propose several alternatives, such as storage-based mirroring between storage boxes, appliance-based mirroring (RecoverPoint), and storage virtualization (VPLEX).