PowerPath update broke Oracle RAC

We have a Dell-branded CX3-40, and Dell came out to help us with a health check and to upgrade PowerPath on our RHEL 4 hosts from 5.1 to 5.3. We are running Oracle RAC in a clustered environment, and the upgrade renamed all our pseudo names, so Oracle no longer recognizes them. Since then I've seen plenty of documentation on how we did it wrong and how it should have been done. Can anyone tell me how I can fix it now that it's broken? I was going to use emcpadm renamepseudo, but it doesn't like the names I give it. Have any of you folks run into this before and know how to fix it?

Can you post the output from "powermt display dev=all" and the command you use when you try to rename the devices?

 



Oh damn. ASM and Oracle RAC? I guess you're in big trouble.

 

By the way, are you unable to change the names, or did you forget the old ones? You should follow EMC knowledgebase solutions emc166911 and emc113184.


Well, I got lucky. After hours on the phone with tech support, I realized we still had the EMCGrab outputs we had taken prior to upgrading to PowerPath 5.3, which contain the output of "powermt display dev=all". What I ended up having to do was:

 

1. Find out which pseudo names are in use:

emcpadm getusedpseudos

(I also re-ran powermt display dev=all and saved the output for reference purposes, as sketched below.)
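
A minimal way to keep that reference copy on disk (the filename here is just an example):

powermt display dev=all > /tmp/powermt_before.txt   # snapshot of the pseudo-name-to-LUN mapping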

 

2. Find out what pseudo names were free. I needed about 45, so I ran:

emcpadm getfreepseudos -n 45

 

I then had to manually rename each pseudo name to one of the free names. Once that was done, I repeated the process, renaming the pseudo names back to their original names based on the output from our original powermt display dev=all; the rename syntax is sketched below.
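
For each device this is a two-step rename with emcpadm renamepseudo (the same command shown later in this thread); the device names below are hypothetical:

emcpadm renamepseudo -s emcpowera -t emcpowerq   # park the device on a free pseudo name
emcpadm renamepseudo -s emcpowerq -t emcpowerc   # then rename it to its original name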

 

3. Once I had it corrected on the first node, I was able to export the pseudo names using:

emcpadm export_mappings -f <filename> (it seems to accept any filename you'd like to use)

 

4. Next, I copied the exported file to the second node and ran:

emcpadm import_mappings -f <filename>

 

Remember to run powermt save to save the configuration to your powermt.custom file. This resolved my problem and I'm back up and running.

 

Next time I will know to run emcpadm export_mappings before updating PowerPath so that we can import the mappings after the update (see the sketch below). Oracle hates it when you change the names.
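
A minimal pre-/post-upgrade sequence based on the steps above (the filename is only an example):

# before the upgrade:
powermt save                                  # persist the config to /etc/powermt.custom
emcpadm export_mappings -f /root/pp_mappings  # save the pseudo-name-to-LUN mappings
# after the upgrade, if the names changed:
emcpadm import_mappings -f /root/pp_mappings  # restore the saved mappings
powermt save                                  # persist the restored config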


Is the 5.3 with any service pack? I am expecting an SP1.
Good news. Be careful next time.
SKT, not that I'm aware of. We installed the EMCpower.LINUX-5.3.1.00.00-111.rhel.x86_64.rpm package on RHEL 4 update 8.
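
For reference, installing that package is a standard rpm install (a sketch, assuming the rpm is in the current directory):

rpm -ivh EMCpower.LINUX-5.3.1.00.00-111.rhel.x86_64.rpm   # install PowerPath 5.3 on RHEL 4 x86_64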

I'm trying to find the best practices/documentation for Oracle RAC setup on CLARiiON-attached systems, and any specific PowerPath configuration that is required for that setup. Could you provide any links? I cannot seem to find any good documents on Powerlink.


Hello. There is no specific document on PowerPath and Oracle RAC. Is the concern installation, load-balancing policy settings, or something else?

 

Thanks, Brion


Hello STADMIN,

First of all, welcome to the EMC forums.

 

Here you go; I guess this will be helpful:

http://www.2shared.com/document/Hp93jzEs/10GASMwEMC.html

 

Good luck


Yes, there was a specific issue with PowerPath and Oracle ASM in our environment last night; the Oracle DBAs and systems admins worked on it. Here is the email from the person who fixed the issue:

 

 

EMC PowerPath relies on configuration files in /etc to determine the proper device mappings of the LUNs from the SAN. The mapping configurations are stored (in PowerPath version 4.x and higher) in /etc/emcp_devicesDB.idx and /etc/emcp_devicesDB.dat. When the cluster servers were initially built last year, the devices were mapped on uprrdb005 and then these configuration files were copied over to the second node (uprrdb006); thus, the devices map the same. An EMC recommendation is to back up these files in case of such an occurrence.
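
A minimal backup of those two files might look like this (the destination directory is just an example):

mkdir -p /root/pp_backup                                                  # example backup location
cp -p /etc/emcp_devicesDB.idx /etc/emcp_devicesDB.dat /root/pp_backup/   # preserve the device-mapping database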

 

We saw no issues until the SAN was powered down earlier this year, and for whatever reason, when the LUNs were re-scanned by the OS, the devices changed on uprrdb006. Instead of using the assigned emcpowerX devices, the assignments started over at emcpowera. Why that happened is beyond the scope of this email; see me later.

 

The fix is to assign each of the devices a pseudo name to map to the correct LUNs and create a /etc/powermt.custom file to map the correct device names.  Thus, I issued the following commands:

 

  emcpadm renamepseudo -s emcpowerd -t emcpowers

 

I had to do this for all but one LUN, then I issued a powermt save to build the /etc/powermt.custom file.  This configuration file was then created on uprrdb006 and everything worked fine.

 

Last night the same thing happened to uprrdb005; the /etc/powermt.custom file did not exist, so I went through the same renaming steps to get the names in line, then issued powermt save as before, and everything worked out.
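
To consolidate the two-node fix, here is a sketch using the export/import commands shown earlier in this thread (the filename is an example):

# on the node whose pseudo names are already correct:
emcpadm export_mappings -f /tmp/rac_mappings   # dump the pseudo-name mappings
powermt save                                   # persist to /etc/powermt.custom
# copy /tmp/rac_mappings to the other node, then on that node:
emcpadm import_mappings -f /tmp/rac_mappings   # apply the same mappings
powermt save                                   # persist there as well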

 

So the question is: is there anything on the SAN side (in addition to the host-side procedures described above) that needs to be set up to prevent this from happening? To take the question from that email: why did the emcpowerX devices change?


What version of PowerPath are you running?


Pseudo device names should not change across reboots in normal cases. We need to understand what happened on the fabric and array side to figure out what went wrong. Was a case opened with CS to obtain more details about the setup and the events that led to this situation?


Thanks, Brion



I haven't had a chance to open a case yet.

The PowerPath version is 4.3, I believe, to answer the other question that was posted.


Hello,

  Do you know if there was any change at an OS level in how devices were assigned?

Regards,

    Nollaig


Unfortunately, I don't have good details. I just started this new job at the company four days ago and was hit by the issue. From the answers I got, there were no major changes in this environment.


It's rare for that to happen, and we'd certainly recommend that before any upgrade (whether that is an OS upgrade or a PowerPath upgrade) you do a "powermt save" followed by an emcpadm export_mappings -f <filename>. My best guess, in the absence of any other information, is that the array information might have changed between the last save and the reboot, so that PowerPath would have noticed a different device order and reassigned the pseudo names (if there was no valid save information around). That's only a guess based on what we have here, though.


This document was generated from the following discussion: PowerPath update broke Oracle RAC