
From time to time you must configure or reconfigure CFG scripts, and this can be a pain.  With newer releases this is more centralized, but not everyone uses that.  It is therefore always handy to have a script that automates what would otherwise be boring copy and paste.  Essentially, all you need to know is where you send logs, what the Oracle SID is and what ORACLE_HOME is.  Strictly speaking, the SID is not mandatory if there is only one SID on the box, but that may not be your case.  And even if it is, it does no harm to set it.


A couple of hints/notes:

  • This solution does not take into account that multiple SIDs can be part of a single script
  • I assume that the RMAN log should be written to /oracle/base/admin/$SID/rman/log - this is most likely different in your case, so make sure to modify it
  • Instead of 1 cfg file, I use 2; this is because I set different log files for DB and ARCH backups, which means that each scheduled job will use its own cfg file
  • I tested this on AIX; on Linux (or elsewhere) you will most likely need to adjust the shell variables (paths)
  • While you can feed in many variables, I use only those necessary.  If it turns out you need extra variables, feel free to add them.  For other options, check the template.
  • One could simplify this script further (since only one line differs between the arch and db cfg files), but this is not a beauty contest
  • This has been tested with v8 of the modules and NW


Finally, the script:





oratab="/etc/oratab"

if [ -e "$oratab" ]; then
        echo "/etc/oratab found - we can proceed"
else
        echo "/etc/oratab not found - exiting"
        exit 1
fi




cat /etc/oratab | egrep -v '^#|SET_|^$|^\*' | while read LINE; do

        SID=`echo $LINE | cut -d: -f1`
        ORACLE_HOME=`echo $LINE | cut -d: -f2`

        echo "Checking if /oracle/base/admin/$SID/rman/log exists..."
        if [ -d "/oracle/base/admin/$SID/rman/log" ]; then
                echo "Log destination found in place"
        else
                echo "Log destination does not exist - CORRECT THIS!!!!"
        fi

        echo "Creating /nsr/apps/config/nmda_${SID}_data.cfg configuration file"
        echo "ORACLE_HOME = $ORACLE_HOME" > /nsr/apps/config/nmda_${SID}_data.cfg
        echo "ORACLE_SID = $SID" >> /nsr/apps/config/nmda_${SID}_data.cfg
        echo "ORACLE_USER = oracle" >> /nsr/apps/config/nmda_${SID}_data.cfg
        echo "NSR_RMAN_ARGUMENTS = \"msglog '/oracle/base/admin/${SID}/rman/log/msglog_${SID}_data.log' append\"" >> /nsr/apps/config/nmda_${SID}_data.cfg
        echo "NSR_ENV_LIST=\"NLS_DATE_FORMAT='DDMMYYYY_HH24:MI:SS'\"" >> /nsr/apps/config/nmda_${SID}_data.cfg
        echo "Done"

        echo "Creating /nsr/apps/config/nmda_${SID}_arch.cfg configuration file"
        echo "ORACLE_HOME = $ORACLE_HOME" > /nsr/apps/config/nmda_${SID}_arch.cfg
        echo "ORACLE_SID = $SID" >> /nsr/apps/config/nmda_${SID}_arch.cfg
        echo "ORACLE_USER = oracle" >> /nsr/apps/config/nmda_${SID}_arch.cfg
        echo "NSR_RMAN_ARGUMENTS = \"msglog '/oracle/base/admin/${SID}/rman/log/msglog_${SID}_arch.log' append\"" >> /nsr/apps/config/nmda_${SID}_arch.cfg
        echo "NSR_ENV_LIST=\"NLS_DATE_FORMAT='DDMMYYYY_HH24:MI:SS'\"" >> /nsr/apps/config/nmda_${SID}_arch.cfg
        echo "Done"

done
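If you want to see what one generated file ends up containing before touching /nsr/apps/config, the loop body can be exercised on its own.  In the sketch below the SID (TEST1), the ORACLE_HOME value and the /tmp output path are all made up for illustration; it also shows a grouped-echo idiom that avoids repeating the redirection on every line:

```shell
# Standalone sketch of the cfg generation for a single, made-up SID.
# SID, ORACLE_HOME and the /tmp output location are assumptions for illustration.
SID=TEST1
ORACLE_HOME=/oracle/base/product/11.2.0
CFG=/tmp/nmda_${SID}_data.cfg

# Grouping the echo commands lets one redirection cover all lines
{
        echo "ORACLE_HOME = $ORACLE_HOME"
        echo "ORACLE_SID = $SID"
        echo "ORACLE_USER = oracle"
        echo "NSR_RMAN_ARGUMENTS = \"msglog '/oracle/base/admin/${SID}/rman/log/msglog_${SID}_data.log' append\""
        echo "NSR_ENV_LIST=\"NLS_DATE_FORMAT='DDMMYYYY_HH24:MI:SS'\""
} > $CFG

cat $CFG
```

The grouped form behaves the same as the repeated `>>` in the script above; which you prefer is a matter of taste.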






This went out 7 days ago, but for some reason EMC hasn't released the fix list yet.  As I write this, they have also removed all binaries except the Windows ones - most likely they had some issue copying the data, or whatever (note that the original release was build 955 while the one posted now is build 958, so I suspect some bug was found).


Those familiar with NetWorker know that each NetWorker code tree gets a patch released roughly one month apart.  Currently we have the following code trees:

  • 8.0SP4 with latest patch being (build 475)
  • 8.1SP3 with latest patch being (build 553)
  • 8.2SP1 with latest patch being (build 821)
  • 8.2SP2 with latest patch now being (build 958)
  • 9.0 with latest patch being (build 448) - this is the GA release


Now, a good old custom of NetWorker engineering is that they publish only the list of escalations fixed - not the bugs or RFEs which sometimes also end up in patch releases.  If you are a partner, you should have access to the full list.  For the rest of the world, only the escalation fix list is available.


Here is the official fix list for NetWorker:

242238/24177/Synthetic full fails due to verification failure
241053/24456/NetWorker does not move on to the next file when it encounters an I/O error reading a file
203413//Display of the RAP resource of a policy hangs for more than 30 mins and nsradmin shoots to 100% CPU utilization
185809//NW160541: stbuf overflow msgID:38698 generated for DD device
244864/24753/Recover wizard stalls at 5% progress when recovering from a clone volume
232803/23370/The NMC "Software administration wizard" hangs at 33% and nsrcpd does not show any error
224380/22668/NW162186: Command line NMSQL VDI restores are slow because a lot of time is spent parsing the entire client index
204845/23889/NW161828: SQL SSMS Plugin generates an invalid date for restore if the locale is not set for nsrgetdate format


The last two, obviously, are related to NMM.  I happen to know that some fixes also went into NMDA, but for some reason EMC does not release that information (yet).  Hopefully they will follow the same approach for NMDA and NMSAP as for NMM.


Fix locations: (NW) (NMM)


Note: At the time of writing, the NMM location is still "dead" and the one for NW contains only the Windows binaries from 7 days ago (I suspect build 955?).  So, most likely, you will see these two locations populated during the day or tomorrow.


For those into fairy tales, the message below is for you.



My attention was caught the other day by a post on ECN named Getting wrong path doing recover using the Browse tab, Search tab works.  I read what Gustavo B. Schenkel wrote and immediately remembered that I had noticed something similar from the CLI in the past.  I think bingo was spot on in his answer, but it felt like it covered the GUI, and the details about the CLI escaped me.  I had a partial memory that it depended on the location from which I was running the restore, or on how it was selected, but I wasn't sure.  So I decided to run some tests.


In my tests I used one small backup server which I also used as a client.  It runs Linux, with the NetWorker version being (build 821).  For the sake of the test, CLI-wise, I will focus on the following:


So, I will deal with the /usr/man/man8 folder and some 4 files inside it.  Let's see what happens if I use recover from the CLI and try to dump this into some location (e.g. /nsr/recover/test1) using the -a option.  Here is the screenshot of the test:
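For reference, the command used in this test looks roughly like the sketch below (the server and device specifics of my test box are omitted); -d relocates the restore and -a runs it without the interactive browse prompt:

```
# Sketch of a plain directory restore from the CLI, relocated with -d;
# -a recovers the named path without entering the interactive prompt
recover -a -d /nsr/recover/test1 /usr/man/man8
```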


There is nothing strange or unexpected here.  I gave the path I wanted to restore (/usr/man/man8) and said I wanted it dumped into /nsr/recover/test1.  I would expect to see man8 and everything below it in that destination, and that is what happened.  Nothing worth excitement here.


In the above example we saw the ssid, so while we are here, let's see what happens with an ssid restore:
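The save-set variant of the command looks roughly as follows (the ssid value is a placeholder here):

```
# Sketch of an ssid (save-set) restore: the whole save set is read,
# and the named path acts only as a filter on what gets written to -d
recover -S <ssid> -d /nsr/recover/test2 /usr/man/man8
```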


Here we have a different picture, but it makes sense.  To grasp this, let me explain what I did, and then we will come to why it came out the way it did.  I instructed recover to do a so-called ssid restore (where you specify the ssid) and said dump it into /nsr/recover/test2.  On top of that, I said I wanted only /usr/man/man8.  What I got in test2 was not just man8 but also the parent folder structure.  Why is that?  Well, an ssid recover reads the whole ssid, so whenever you do such a restore, you are restoring the whole structure as it was.  Your selection is the ssid, which means your selection also includes the parent folder structure.  What about the selection I made (/usr/man/man8)?  In the case of an ssid restore, this acts as a filter on what recover will pass on to be written; the rest is discarded (but still read by nsrmmd).  I like to draw a parallel to grep here (or findstr on Windows).  To illustrate this, observe the following:


In the above example, I did an ssid restore of / and said I wished to get /etc/hosts, but I got much more - just as in the grep example, I got everything matching the pattern.
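The grep parallel is easy to demonstrate in isolation.  In the sketch below the file list is made up; asking for "hosts" returns every match, not just the single entry you had in mind:

```shell
# Made-up file list: grep returns everything matching the pattern,
# just as an ssid restore returns everything the path filter matches
printf '/etc/hosts\n/etc/hosts.allow\n/etc/hosts.deny\n' | grep 'hosts'
# -> all three lines are printed, not just /etc/hosts
```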


With this in mind, let's now do a restore from the recover prompt itself.  Here is one approach:


Again, nothing to get excited about here.  What you select is what you get.  So, what if I make the selection from /usr in the recover window?  Will that make anything more exciting?


For a moment I thought things would be a bit different here, as in the selection phase you see both /usr/man and /usr/man/man8 mentioned, but you can see from the restore log that only man8 (as selected) is used to build the folder tree.  So, no big surprises there.


Let's see how this works with files.  Repeating the two tests with files gives you pretty much the same picture:


This is expected and logical.  Doing it from recover prompt is no different than before:


Let's go to the GUI now.  As per Gustavo, he used the Recover wizard thingy - something I had never done so far, so this was a learning curve for me as well.  Importantly, Gustavo used the recover process and not NetWorker User.  What he noticed was that there was a difference in the structure recovered depending on whether he was browsing or searching for the file.  So, I decided to do the same.


I initiated a new recover via the recover tab, and this is what I did:





Just to be safe, I fired up the following too:


The result here is pretty much as expected:


The above suggests that the selected file was restored to test9, and indeed - that is true:


If I check my little process capture logging, I see:


So, in short... a task called Test9 is created and executed, and that runs recover from the CLI with the input option, where the input is the file selection which NW feeds to the recover process as standard input.  As this is done via a task, the log (same as in the GUI) is available via /nsr/logs/recover too:


Let's do this again, but this time I will use search instead.  This showed me the following:


This little experiment showed me that I had two copies of the file (one in /tmp and one in /usr/man/man8 - which is pretty much right, as I usually scp the file to /tmp with my own userid and from there cp it to the final destination with altered privileges).  For fun, I selected both.  The recovery list here contains absolute paths, but so it did in the previous attempt.




Well, this looks like an ssid restore, where the parent folder structure is created.  Indeed, the structure is there:


However, if you check the process capture list, nothing suggests this was an ssid restore:


Well, one thing is different: I restored two identical files from different locations.  If they went into the same folder, they would overwrite each other (at least one of them would).  So, back to basics: let's use the GUI search again, but this time restore only the same file as before, without the one from /tmp.  We get:



OK, so with one file we get only the file, without the parent structure created.  Again, since I used identical files in Test 10, it may still be too early to draw any conclusions, so I will now go back to the CLI and run a few more tests to answer the open questions.


The simplest test to try is the one where we select two files from the same folder.  That should be business as usual, and it is:


Now, let's repeat this test by selecting files from two different folders on the same partition (/usr).  Here is what happened:


So, we tried to restore /usr/man/man8/nsrcli.8 and /usr/share/man/man8/nsrpush.8, and we see them restored relative to their common parent (/usr), starting from the first folder in which the paths differ.  This is why we do not see /usr here.


Finally, let's try to restore 2 different files from two different partitions.  We get:


While these are different files, they are also on different partitions, so the full paths are used:


So, I could not really see what Gustavo reported (either I could not reproduce it, or perhaps I didn't fully understand what he said), but the structure in which files are restored depends on:

  1. which restore method you use
  2. where the data selected for restore belongs (where the differentiating factor may be different partitions or different folder paths on the same partition)

This question pops up from time to time, and it looks like people are for some reason lurking in the dark.  Not sure why, but I decided to shine some light on this at the most basic level.  People wish to have email notifications based on some rules.  I base my approach on AIX, HP-UX, Solaris and Linux; in short, the only platform this is not tested on is Windows.  The example I will discuss is on Linux, but in essence it is the same on other UNIX flavors (the changes mostly come from different paths used).


OK, here is an example.  I use a script which I called nsrcli, into which I put everything I can think of that I would (or could) use one day.  Specific functions are called via specific switches.  For savegrp notifications I have:




One of them, the failure one I believe, is the default one.  It sends email to the root account on localhost.  I kept that one.  Then we have the savegrp completion notification, which I modified and instructed to write to /nsr/logs/savegrp.log.  You can use whatever you want, of course.  I never use this log, and from time to time it will grow huge, so trim it.


The savegrp failure notification, however, I duplicated (because I could not modify the original one, at least in NW8), and there I instructed NW to call /usr/sbin/nsrcli -G -N -sf (which in my world is my Swiss army knife script; -G means operations with groups, -N is for notification management and -sf is savegroup failures).  There are two kinds of failures as far as I'm concerned:

  • failed groups (one or more clients)
  • group already running
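Duplicating the notification can be done from NMC or from nsradmin.  A rough outline of the nsradmin variant follows; the resource name is my own invention, and the exact attribute values can differ per release, so treat this as a sketch rather than a recipe:

```
nsradmin> create type: NSR notification; name: savegroup failure custom;
          event: Savegroup; priority: Alert;
          action: "/usr/sbin/nsrcli -G -N -sf"
```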


I always wanted to get some sort of email where I could see from the subject what failed, as that would be a kind of modern beeper to me.  So, I made this simple approach.  If we look at the script and the part of the code where this is called, we see:




It makes sense to do it in a similar fashion if this is part of a multipurpose script - if not, you don't need it this way, nor do you need to call separate functions.  In my case, this is handled by a function called sgroup_fail_ntfc.  The function itself is quite simple:




cat $* > $OUT_TEMP

GROUP1=`cat $OUT_TEMP | grep "(alert)" | cut -f2 -d\) | $AWK '{print $1}'`
GROUP2=`cat $OUT_TEMP | grep "(alert)" | cut -f2 -d\) | $AWK '{print $2}'`
CLIENT=`cat $OUT_TEMP | grep Failed: | $AWK 'BEGIN {FS="Failed: "};{print $2}' | sed 's/.<domain1>//g;s/.<domain2>//g;s/.<domain3>//g;s/.<domain4>//g;s/.<domain5>//g'`
ARC=`cat $OUT_TEMP | grep "aborted, savegrp is already running" | wc -l`

if [ "$ARC" -ne "0" ]; then
        cat $OUT_TEMP | mail_sn_ar
else
        cat $OUT_TEMP | mail_sn_fail
fi

My clients can be part of 5 domains.  In the report, as the failed client is part of the email's subject line, I do not wish to see the domain, as the line might just get too big.  So, I simply cut those off so that the line listing the host(s) contains only the short names.  Next, depending on whether this is a real failure (fail) or a group already running (ar), I have two actions/functions: mail_sn_ar and mail_sn_fail.  Here they are:
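The cut/awk mechanics above are easy to test in isolation.  The sample line below is made up (the real savegrp alert text may be worded differently), but it shows how the fields fall out after splitting on the closing parenthesis:

```shell
# Made-up alert line - the real savegrp message may be worded differently
line="NetWorker savegroup: (alert) backup_daily aborted, savegrp is already running"

# Field 2 after splitting on ')' is everything following "(alert)";
# awk then picks out the individual words
echo "$line" | cut -f2 -d\) | awk '{print $1}'   # -> backup_daily
echo "$line" | cut -f2 -d\) | awk '{print $2}'   # -> aborted,
```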



$MAIL -r -s "$(/usr/bin/uname -n): Group $GROUP2 is already running - check email and logs"




$MAIL -r -s "$(/usr/bin/uname -n): Group $GROUP1 has failed for client(s) $CLIENT - check email and logs"



Obviously, $MAIL here will depend on which mail program you use (in my case I use a case statement to determine the OS and, based on that, set a number of variables, of which mail is just one).  Also, your organization might not have a requirement for the return address to be noreply@whatever, but most companies use this to control emails sent from intranet machines and to make sure no reply to the address is possible.
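A minimal sketch of such a case statement follows; the binary paths are assumptions, so check where mail/mailx and awk/nawk actually live on your hosts:

```shell
# Pick per-OS binaries; the paths below are assumptions - verify on your systems
case "$(uname -s)" in
        AIX)    MAIL=/usr/bin/mail  ; AWK=/usr/bin/awk  ;;
        SunOS)  MAIL=/usr/bin/mailx ; AWK=/usr/bin/nawk ;;
        HP-UX)  MAIL=/usr/bin/mailx ; AWK=/usr/bin/awk  ;;
        *)      MAIL=/bin/mail      ; AWK=/usr/bin/awk  ;;   # Linux and anything else
esac

echo "Using MAIL=$MAIL AWK=$AWK"
```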


In the end, you get something like:



You can extend this further with some imagination, but try not to complicate things too much.


The inevitable question is whether this will also work with NW9, and I doubt it will, as savegrp is no longer what it used to be.  I haven't checked NW9 notifications yet, but I will certainly have to adjust this once I go there.
