Here is part IV of my live exploration of NW9.  I write as I do things, so you may find me going back and forth within the same post or across different posts.  I also don't proofread what I wrote, so bear with the typos.  If you missed the previous parts (how could you!?), you can check them in the following order:

Part I: What is NetWorker 9?

Part II: What is NetWorker 9?

Part III: What is NetWorker 9?

Part IV: What is NetWorker 9?

Part V: What is NetWorker 9?

 

One of the things which will be new to NetWorker users with release 9 is that the backup server is now supported only on the x86 platform. This means your backup server must be either Linux or Windows.  Support for AIX, HP-UX and Solaris has been dropped.  You can still use these platforms for storage nodes and clients, of course, but not for the server.  This also means that if you used one of these platforms for your backup server, you will have to migrate to a new platform.

 

You might take a couple of paths here.  I personally would simply build a new server as a VM and start migrating clients one by one.  Even though the majority of my servers already run on Linux, the changes made in NW9 are big and I would rather have a clean slate (especially in combination with the new database and application modules). To me, this is not a problem, as backup retentions are usually short, so keeping the old server around until the older backups expire is no big deal.  So, the key question is: what if you have long-term backups?

 

Traditionally, you have three choices here:

- scan data from old server to new one

- migrate old server to new server

- keep old server as legacy and use it only when needed for restore

 

Scanning tapes can be a pain in the back if you have many of them.  Normally, scanning a single tape takes about 2 hours, so if you have 100 tapes that is already some 200 hours of scanning.  The hours will surely go up, as you need to handle tapes in a certain order due to spanning ssids, do manual loads and unloads, and test it (the per-tape scan itself is sketched below).  This might not be your cup of tea, so migrating the server suddenly looks more attractive.
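
For completeness, the per-tape scan is done with the scanner command against the tape drive; a minimal sketch (the device path is just an example - use whatever your drive is called, and load the tape first):

# rebuild media database and client file index entries from the loaded tape
scanner -i /dev/rmt0.1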

 

Migrating the server is - in general - designed in a way that you rename your server to some temporary name.  Then, once the server is up with the temporary name, you rename the client (and the mdb records along with it) to the name of the new server.  Then you copy over the databases.  This works fine when the same OS is involved on both sides, but with different OSes it becomes problematic, as they might have different endianness.  Traditionally, x86 platforms are little endian and the big UNIX boxes are big endian, meaning that copying the mdb database as-is will not work due to the architectural difference.  And surely some resources won't work either, due to different paths (eg. notifications).  So, what do you do?
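
(Quick aside: if you want to confirm which camp a given box is in, on Linux something like this will tell you; the exact lscpu wording may vary per distro.)

lscpu | grep "Byte Order"    # prints "Byte Order: Little Endian" on x86 boxes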

 

With this in mind, I decided to run a test.  I have one AIX cluster-based backup server which is about to be decommissioned.  The only clients left on it are the server itself and 2 storage nodes.  Here is what I have:

[bck_phy1].root:/ # mminfo -avot -r ssid | wc -l               

    1486

[bck_phy1].root:/ # mminfo -avot -r client | sort -u

bck_phy1

bck_phy2

sn1.domain

sn2.domain

bck_cls

 

The idea I wish to test is something like the following:

- install same NW8 server on new Linux box (we will call it bck_new)

- dump pool, group and client information into file via nsradmin (and any other NSR information I need)

- rename server to some temp name (here I will simply start server as single node)

- rename bck_cls (cluster name client) to bck_new

- dump mdb

- import pool, group and client information (and any other NSR information I need)

- import mdb

- see what is left or what I missed

 

So, first I installed the same NW version on bck_new - in my case 8.1.3.5.  I started the server and took note of the client ID on the new server:

[root@bck_new]# printf "show client id\n print type: NSR client\n" | nsradmin -i -

                   client id: \

72cb9689-00000004-563f25b6-563f25b5-00011700-66c877a4;

 

Now, the resources I wish to migrate from the old server are:

- clients

- pool

- schedules

- devices

- templates

 

First, I will create a list of clients with:

[root@bck_cls]# mminfo -avot -r client | sort -u > /tmp/clnt.lst

 

Then I will extract the data I need.  The data I need might not be the data you need, and this is something you will have to decide before you do this. Your environment is as specific to you as mine is to me, so I migrate only the stuff I need - and so should you.  Obviously each resource has many attributes, so make sure you use only those you need.  If you don't know which ones you need, then do the following:

[bck_phy1].root:/ # mkdir -p /nsr/migration

[bck_phy1].root:/ # printf "show \n print type: NSR client\n" | nsradmin -i - > /nsr/migration/client.out

This will dump all the data you need for the clients.  I suggest removing everything you don't need or that is just a default value.  Some client entries may have certain fields while others may not (eg. directive, server network interface, remote access, etc).  One thing to keep there for sure is the client ID.

 

You should do the same for pools, schedules, groups, policies and so on - everything you made specific to your setup.  Don't migrate licenses at this point (you will handle that later on with a host transfer). 
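
If you have a handful of resource types to pull, a small loop along these lines saves typing (a sketch; adjust the type list and output names to what you actually use):

# on the old server: dump selected resource types, one file per type, into /nsr/migration
for t in "NSR pool" "NSR label" "NSR schedule" "NSR group" "NSR policy"; do
    out=/nsr/migration/$(echo "$t" | awk '{print tolower($2)}').out
    printf "show\nprint type: %s\n" "$t" | nsradmin -i - > "$out"
done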

 

My client.out at the end looks something like the following:

[bck_phy1].root:/ # cat /nsr/migration/client.out

                        type: NSR client;

                        name: bck_cls;

                   client id: \

93a1118e-00000004-5343b9e6-5343b9e5-0001b659-6cf5d502;

            scheduled backup: Enabled;

                     comment: LAN FS of backup cluster package;

                    schedule: Standard_Friday;

               browse policy: Standard;

            retention policy: Standard;

                       group: FS_PRD_NYC_VLAN666_1, index_1;

                    save set: All;

               remote access: *@bck_phy1, *@bck_phy2;

              virtual client: Yes;

               physical host: lpar2;

                     aliases: bck_cls, bck_cls.domain;

               storage nodes: nsrserverhost;

 

 

 

                        type: NSR client;

                        name: bck_phy2;

                   client id: \

29647c88-00000004-5343c070-5343c085-000212ca-46052f02;

            scheduled backup: Enabled;

                     comment: LAN FS of backup server node in BST;

                    schedule: Standard_Monday;

               browse policy: Standard;

            retention policy: Standard;

                       group: FS_PRD_BST_VLAN666_1, index_1;

                    save set: All;

                     aliases: bck_phy2, bck_phy2.domain,

                              bck_phy2-st,

                              bck_phy2-st.domain,

                              bck_phy2-st666,

                              bck_phy2-st666.domain;

               storage nodes: nsrserverhost;

 

 

 

                        type: NSR client;

                        name: sn2-st.domain;

                   client id: \

2fcd236b-00000004-5348752b-534e978f-012cb659-6cf5d502;

                     comment: LAN FS of media server in BST;

                    schedule: Standard_Tuesday;

               browse policy: Standard;

            retention policy: Standard;

                       group: FS_PRD_BST_VLAN666_1, index_1;

                    save set: All;

              virtual client: Yes;

               physical host: lpar2;

    server network interface: bck_cls-st.domain;

                     aliases: sn2, sn2.domain,

                              sn2-st,

                              sn2-st.domain,

                              sn2-st666,

                              sn2-st666.domain;

               storage nodes: sn2-st.domain,

                              sn1-st.domain;

       recover storage nodes: sn2-st.domain,

                              sn1-st.domain;

 

 

 

                        type: NSR client;

                        name: sn1-st.domain;

                      server: bck_cls;

                   client id: \

a54473b0-00000004-5347e132-5347e131-0003b659-6cf5d502;

                     comment: LAN FS of media server in NYC;

                    schedule: Standard_Tuesday;

               browse policy: Standard;

            retention policy: Standard;

                       group: FS_PRD_NYC_VLAN666_1, index_1;

                    save set: All;

              virtual client: Yes;

               physical host: lpar2;

    server network interface: bck_cls-st.domain;

                     aliases: sn1, sn1.domain,

                              sn1-st,

                              sn1-st.domain,

                              sn1-st666,

                              sn1-st666.domain;

               storage nodes: sn1-st.domain,

                              sn2-st.domain;

       recover storage nodes: sn1-st.domain,

                              sn2-st.domain;

 

 

 

                        type: NSR client;

                        name: bck_phy1;

                   client id: \

40ed0831-00000004-5343c06f-5343c06e-000112ca-46052f02;

                     comment: LAN FS of backup server node in NYC;

                    schedule: Standard_Monday;

               browse policy: Standard;

            retention policy: Standard;

                       group: FS_PRD_NYC_VLAN666_1, index_1;

                    save set: All;

                     aliases: bck_phy1, bck_phy1.domain,

                              bck_phy1-st,

                              bck_phy1-st.domain,

                              bck_phy1-st666,

                              bck_phy1-st666.domain;

               storage nodes: nsrserverhost;

I can't import this into the backup server just yet.  For example, I can remove (if I want to) the cluster name of the old backup server (as that will be replaced by the new server).  But most importantly, I should add entries for policies and schedules first, as the clients reference them.  Assuming those have been dumped in the same fashion as the client list, we can copy the dump files over to the new server and import them. 
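
The copy itself is nothing special; assuming ssh/scp between the boxes, something like this will do (paths as used in this post):

scp /nsr/migration/*.out bck_new:/tmp/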

 

Here is an example for policies.  When I did the dump, I got the following file (after kicking out the defaults):

[root@bck_new ~]# cat /tmp/policy.out

                        type: NSR policy;

                        name: Standard;

                     comment: ;

                      period: Days;

           number of periods: 14;

 

 

 

                        type: NSR policy;

                        name: 40 Days;

                     comment: ;

                      period: Days;

           number of periods: 40;

Now, to import it, all we need to do is add "create" in front of type (and make sure there is an extra empty line at the bottom of the file) and feed it to nsradmin. So, the input file looks as follows:

[root@bck_new ~]# cat /tmp/policy.out

                 create type: NSR policy;

                        name: Standard;

                     comment: ;

                      period: Days;

           number of periods: 14;

 

 

 

                 create type: NSR policy;

                        name: 40 Days;

                     comment: ;

                      period: Days;

           number of periods: 40;

 

 

 


[root@bck_new ~]#
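
If you have more than a couple of these files to convert, a quick sed pass can prepend the create keyword for you (a sketch - double-check the result, and remember the extra blank line at the end of the file mentioned above):

sed 's/^\( *\)type:/\1create type:/' /tmp/policy.out > /tmp/policy.create.out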

 

Now we can add it:

[root@bck_new~]# nsradmin -i /tmp/policy.out

created resource id 143.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 144.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

[root@bck_new ~]#

 

And we do the same for the rest of the NSR resources that we wish to migrate.  Keep in mind that when you import the groups, you should disable them from running.  Also keep dependencies in mind.  For example, I can't create a group which references some clone pool unless that pool already exists:

[root@bck_new tmp]# nsradmin -i /tmp/group.out

create failed: 'NYC_cVLAN666' invalid choice for 'clone pool', valid values are 'Default Clone'.

create failed: 'NYC_cVLAN666' invalid choice for 'clone pool', valid values are 'Default Clone'.

create failed: 'NYC_cVLAN666' invalid choice for 'clone pool', valid values are 'Default Clone'.

And the pools I can't add yet, as they have devices listed in them, and I can't add the devices while the old setup is still using them, nor before I have added the clients.  So, depending on what you migrate, you will need to think about the order.  You can also do this while NW is offline, by using nsradmin with the -d option and supplying the path to the nsrdb database.
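
For example, a quick offline query to see what already landed in the resource database could look like this (a sketch; NetWorker must be stopped, and the path matches the default location used in this post):

printf "print type: NSR pool\n" | nsradmin -d /nsr/res/nsrdb -i -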

 

Here is what I did after stopping NW on bck_new:

[root@bck_new~]# nsradmin -d /nsr/res/nsrdb -i /tmp/schedule.out

created resource id 145.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 146.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 147.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 148.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 149.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 150.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 151.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 152.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 153.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 154.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 155.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 156.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 157.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 158.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 159.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 160.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 161.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 162.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 163.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 164.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 165.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 166.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 167.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 168.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 169.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

created resource id 170.0.72.69.0.0.0.0.181.37.63.86.10.248.22.65(1)

[root@bck_new tmp]# nsradmin -d /nsr/res/nsrdb -i /tmp/client.out

created resource id 1.0.131.68.0.0.0.0.64.88.63.86.10.248.22.65(1)

created resource id 2.0.131.68.0.0.0.0.64.88.63.86.10.248.22.65(1)

created resource id 3.0.131.68.0.0.0.0.64.88.63.86.10.248.22.65(1)

created resource id 4.0.131.68.0.0.0.0.64.88.63.86.10.248.22.65(1)

created resource id 5.0.131.68.0.0.0.0.64.88.63.86.10.248.22.65(1)

[root@bck_new tmp]# nsradmin -d /nsr/res/nsrdb -i /tmp/label.out

created resource id 1.0.59.238.0.0.0.0.243.93.63.86.10.248.22.65(1)

created resource id 2.0.59.238.0.0.0.0.243.93.63.86.10.248.22.65(1)

created resource id 3.0.59.238.0.0.0.0.243.93.63.86.10.248.22.65(1)

created resource id 4.0.59.238.0.0.0.0.243.93.63.86.10.248.22.65(1)

created resource id 5.0.59.238.0.0.0.0.243.93.63.86.10.248.22.65(1)

created resource id 6.0.59.238.0.0.0.0.243.93.63.86.10.248.22.65(1)

[root@bck_new tmp]# nsradmin -d /nsr/res/nsrdb -i /tmp/pool.out

created resource id 1.0.128.24.0.0.0.0.67.95.63.86.10.248.22.65(1)

created resource id 2.0.128.24.0.0.0.0.67.95.63.86.10.248.22.65(1)

created resource id 3.0.128.24.0.0.0.0.67.95.63.86.10.248.22.65(1)

created resource id 4.0.128.24.0.0.0.0.67.95.63.86.10.248.22.65(1)

created resource id 5.0.128.24.0.0.0.0.67.95.63.86.10.248.22.65(1)

created resource id 6.0.128.24.0.0.0.0.67.95.63.86.10.248.22.65(1)

[root@bck_new tmp]# nsradmin -d /nsr/res/nsrdb -i /tmp/client.out

created resource id 1.0.65.205.0.0.0.0.210.109.63.86.10.248.22.65(1)

created resource id 2.0.65.205.0.0.0.0.210.109.63.86.10.248.22.65(1)

created resource id 3.0.65.205.0.0.0.0.210.109.63.86.10.248.22.65(1)

created resource id 4.0.65.205.0.0.0.0.210.109.63.86.10.248.22.65(1)

created resource id 5.0.65.205.0.0.0.0.210.109.63.86.10.248.22.65(1)

 

Note that, to make my life easier, I decided to create the pools without device restrictions in my case.  At this point I still haven't added the devices; I will do that later.  Now, I need to rename the existing server (bck_cls).  As this is an AIX cluster, one thing I can do is simply stop NW and start it in non-cluster mode.

 

So, I have:

[bck_phy1].root:/nsr/migration # /usr/es/sbin/cluster/utilities/clRGinfo

-----------------------------------------------------------------------------

Group Name     State                        Node          

-----------------------------------------------------------------------------

legato_rg      ONLINE                       bck_phy1

               OFFLINE                      bck_phy2   

 

 

[bck_phy1].root:/nsr/migration # ps -ef | grep nsrd

    root 10485890 17694778   0 16:47:17  pts/0  0:00 grep nsrd

    root 11337792        1   0   Oct 12      - 10:02 /usr/bin/nsrd -k <bck_cls IP>

[bck_phy1].root:/nsr/migration #

 

To be safe, I disabled all DD devices and NSR storage node definitions.  Next, we stop NW and start it as cluster-unaware:

[bck_phy1].root:/nsr/migration # nsr_shutdown

Stopping service: nsrd (11337792)

Waiting for service: nsrd (11337792)

Waiting for service: nsrd (11337792)

Waiting for service: nsrd (11337792)

Waiting for service: nsrd (11337792)

Waiting for service: nsrd (11337792)

Waiting for service: nsrd (11337792)

Waiting for service: nsrd (11337792)

Waiting for service: nsrd (11337792)

Waiting for service: nsrd (11337792)

Service nsrd (11337792) shutdown.

Stopping service: nsrexecd (9764936)

Waiting for service: nsrexecd (9764936)

Service nsrexecd (9764936) shutdown.

[bck_phy1].root:/nsr/migration # rm /bin/NetWorker.clustersvr

[bck_phy1].root:/nsr/migration # NETWORKERRC=/opt/nsr/admin/networkerrc

[bck_phy1].root:/nsr/migration # export NETWORKERRC

[bck_phy1].root:/nsr/migration # /bin/nsrexecd

[bck_phy1].root:/nsr/migration # /bin/nsrd

[bck_phy1].root:/nsr/migration # ps -ef | grep nsr

    root  5570600        1   0 17:11:42      -  0:00 /bin/nsrexecd

    root  5963850  7799026   0 17:12:12      -  0:00 /usr/bin/nsrmmdbd

    root  7799026        1   0 17:12:11      -  0:01 /bin/nsrd

    root 11141236  7799026   0 17:12:14      -  0:00 /usr/bin/nsrindexd

    root 14614648 17694778   0 17:36:39  pts/0  0:00 grep nsr

    root 15138888  5570600   0 17:12:19      -  0:00 /usr/bin/nsrlogd

    root 18284674  7799026   0 17:12:16      -  0:00 /usr/bin/nsrjobd

[bck_phy1].root:/nsr/migration #

An alternative is to run this on the passive node so that you can easily fail back to bck_phy1 - this is up to you (you can also stop the cluster, but then make sure to copy mm, index and res to a local folder first). One thing to change while NW is offline is the audit log hostname resource (NSR auditlog), as that will otherwise stand in your way. The server is now running under the name bck_phy1, so I can connect to it in NMC and rename the client bck_cls to bck_new. How?  Very simple.  Here is what I did:

1. Connect via NMC to the server now running as bck_phy1.

2. Take note of the client ID for bck_cls (either via NMC or via mminfo -avot -c bck_cls -r clientid | sort -u).  In my case I used mminfo:

[bck_phy1].root:/nsr/migration # mminfo -avot -c bck_cls -r clientid | sort -u

93a1118e-00000004-5343b9e6-5343b9e5-0001b659-6cf5d502

3. Delete bck_cls as a client in NMC.

4. Create a new client using the bck_new name and the client ID of bck_cls.  You will be asked to confirm this, as NW will detect the name change.
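
When this is done, it is worth checking on bck_phy1 that the ssids previously owned by bck_cls are now reported under bck_new; a quick check along these lines (same mminfo flags as used earlier in this post) will show it:

mminfo -avot -c bck_new -r client,ssid | head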

 

Once that looks good, you can dump the mdb.  This is done with the nsrmmdbasm command (refer to https://support.emc.com/kb/193762 for details).  Here is what I did:

[bck_phy1].root:/bin # nsrmmdbasm -s /nsr/mm/mmvolume6 > /tmp/mm.xdr

66135:nsrmmdbasm: NSR directive file (/nsr/mm/.nsr) parsed

[bck_phy1].root:/bin # ll /tmp/mm.xdr

-rw-r--r--    1 root     system       672568 Nov 08 20:27 /tmp/mm.xdr

[bck_phy1].root:/bin #

Note that NSR must be running for the above command to work.  Now you can stop NSR on bck_phy1 and rename the client index directory to reflect the change:

[bck_phy1].root:/nsr/index # ls -la

total 16

drwxr-xr-x    9 root     system         4096 Nov 08 20:18 .

drwxr-xr-x   13 root     system         4096 Nov 08 18:07 ..

drwxr-xr-x    2 root     system          256 Apr 04 2014  lost+found

drwx------    3 root     system          256 Apr 08 2014  bck_phy1

drwx------    3 root     system          256 Apr 08 2014  bck_phy2

drwx------    3 root     system          256 Apr 11 2014  sn1-st.domain

drwx------    3 root     system          256 Apr 16 2014  sn2-st.domain

drwx------    3 root     system          256 Nov 08 20:18 bck_new

drwx------    3 root     system          256 Apr 08 2014  bck_cls

[bck_phy1].root:/nsr/index # rm -rf bck_new

[bck_phy1].root:/nsr/index # mv bck_cls bck_new

[bck_phy1].root:/nsr/index #

 

Now, make a tarball of the index folder, copy it over to bck_new and untar it there.
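
A minimal sketch of that copy (hostnames and paths as used in this post; pick whatever transfer method you like):

# on bck_phy1, with NetWorker stopped
cd /nsr && tar -cf /tmp/index.tar index
scp /tmp/index.tar /tmp/mm.xdr bck_new:/tmp/   # the mdb dump from earlier has to travel over as well
# on bck_new, with NetWorker stopped
cd /nsr && tar -xf /tmp/index.tar

With the index in place, start NetWorker and import the mdb: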

[root@bck_new~]# /etc/init.d/networker start

[root@bck_new ~]# nsrmmdbasm -r -2 < /tmp/mm.xdr

Let's do some tests; for example, let's list the bootstrap records:

[root@bck_new~]# mminfo -B

      date     time       level ssid        file  record   volume

  10/14/2015 04:01:12 AM   full 2904405864     0       0   dataC.001

  10/14/2015 04:01:12 AM   full 2904405864     0       0   dataL.002

  10/14/2015 04:00:39 PM   full 2803785735     0       0   dataC.001

  10/14/2015 04:00:39 PM   full 2803785735     0       0   dataL.001

  10/14/2015 06:32:10 PM   full 2384364428     0       0   dataC.002

  10/14/2015 06:32:10 PM   full 2384364428     0       0   dataL.002

  10/15/2015 04:00:43 AM   full 1964968139     0       0   dataC.002

  10/15/2015 04:00:43 AM   full 1964968139     0       0   dataL.002

  10/15/2015 04:01:05 PM   full 1864348066     0       0   dataC.002

  10/15/2015 04:01:05 PM   full 1864348066     0       0   dataL.002

  10/15/2015 06:32:47 PM   full 1444926768     0       0   dataC.001

  10/15/2015 06:32:47 PM   full 1444926768     0       0   dataL.001

  10/16/2015 04:00:49 AM   full 1025530449     0       0   dataC.001

  10/16/2015 04:00:49 AM   full 1025530449     0       0   dataL.001

  10/16/2015 04:00:40 PM   full 924910345      0       0   dataC.002

  10/16/2015 04:00:40 PM   full 924910345      0       0   dataL.002

  10/16/2015 07:23:19 PM   full 186725000      0       0   dataC.001

  10/16/2015 07:23:19 PM   full 186725000      0       0   dataL.001

  10/17/2015 04:00:46 AM   full 86092750       0       0   dataC.001

  10/17/2015 04:00:46 AM   full 86092750       0       0   dataL.001

  10/17/2015 04:00:39 PM   full 4280439943     0       0   dataC.001

  10/17/2015 04:00:39 PM   full 4280439943     0       0   dataL.001

  10/18/2015 04:02:31 AM   full 4179819960     0       0   dataC.001

  10/18/2015 04:02:31 AM   full 4179819960     0       0   dataL.001

  10/18/2015 04:00:35 PM   full 4079199747     0       0   dataC.001

  10/18/2015 04:00:35 PM   full 4079199747     0       0   dataL.001

  10/19/2015 04:00:38 AM   full 3978579654     0       0   dataC.001

  10/19/2015 04:00:38 AM   full 3978579654     0       0   dataL.001

  10/19/2015 04:00:37 PM   full 3877959557     0       0   dataC.001

  10/19/2015 04:00:37 PM   full 3877959557     0       0   dataL.001

  10/19/2015 06:39:58 PM   full 3458538719     0       0   dataC.002

  10/19/2015 06:39:58 PM   full 3458538719     0       0   dataL.001

  10/20/2015 04:01:19 AM   full 3039142000     0       0   dataC.001

  10/20/2015 04:01:19 AM   full 3039142000     0       0   dataL.001

  10/20/2015 04:00:39 PM   full 2938521863     0       0   dataC.001

  10/20/2015 04:00:39 PM   full 2938521863     0       0   dataL.001

  10/20/2015 06:32:56 PM   full 2519100601     0       0   dataC.002

  10/20/2015 06:32:56 PM   full 2519100601     0       0   dataL.002

  10/21/2015 04:00:46 AM   full 2099704270     0       0   dataC.002

  10/21/2015 04:00:46 AM   full 2099704270     0       0   dataL.002

  10/21/2015 04:00:42 PM   full 1999084170     0       0   dataC.002

  10/21/2015 04:00:42 PM   full 1999084170     0       0   dataL.002

  10/21/2015 06:32:34 PM   full 1579662883     0       0   dataC.001

  10/21/2015 06:32:34 PM   full 1579662883     0       0   dataL.001

  10/22/2015 04:00:46 AM   full 1160266574     0       0   dataC.001

  10/22/2015 04:00:46 AM   full 1160266574     0       0   dataL.001

  10/22/2015 04:00:38 PM   full 1059646470     0       0   dataC.001

  10/22/2015 04:00:38 PM   full 1059646470     0       0   dataL.001

  10/22/2015 06:32:47 PM   full 640225200      0       0   dataC.002

  10/22/2015 06:32:47 PM   full 640225200      0       0   dataL.002

  10/23/2015 04:01:05 AM   full 220828897      0       0   dataC.002

  10/23/2015 04:01:05 AM   full 220828897      0       0   dataL.002

  10/23/2015 04:01:02 PM   full 120208798      0       0   dataC.002

  10/23/2015 04:01:02 PM   full 120208798      0       0   dataL.001

  10/23/2015 07:25:26 PM   full 3676990857     0       0   dataC.002

  10/23/2015 07:25:26 PM   full 3676990857     0       0   dataL.002

  10/24/2015 04:00:52 AM   full 3576358484     0       0   dataC.002

  10/24/2015 04:00:52 AM   full 3576358484     0       0   dataL.002

  10/24/2015 04:00:37 PM   full 3475738373     0       0   dataC.002

  10/24/2015 04:00:37 PM   full 3475738373     0       0   dataL.002

  10/25/2015 03:01:06 AM   full 3375118306     0       0   dataC.002

  10/25/2015 03:01:06 AM   full 3375118306     0       0   dataL.002

  10/25/2015 04:01:28 AM   full 3274458633     0       0   dataC.002

  10/25/2015 04:01:28 AM   full 3274458633     0       0   dataL.002

  10/25/2015 04:00:38 PM   full 3173838486     0       0   dataC.002

  10/25/2015 04:00:38 PM   full 3173838486     0       0   dataL.002

  10/26/2015 04:00:40 AM   full 3073218392     0       0   dataC.002

  10/26/2015 04:00:40 AM   full 3073218392     0       0   dataL.002

  10/26/2015 04:00:52 PM   full 2972598309     0       0   dataC.002

  10/26/2015 04:00:52 PM   full 2972598309     0       0   dataL.002

  10/26/2015 06:45:29 PM   full 2553177788     0       0   dataC.002

  10/26/2015 06:45:29 PM   full 2553177788     0       0   dataL.002

  10/27/2015 04:00:43 AM   full 2133780699     0       0   dataC.002

  10/27/2015 04:00:43 AM   full 2133780699     0       0   dataL.002

  10/27/2015 04:00:39 PM   full 2033160599     0       0   dataC.002

  10/27/2015 04:00:39 PM   full 2033160599     0       0   dataL.002

  10/27/2015 06:31:42 PM   full 1747956992     0       0   dataC.002

  10/27/2015 06:31:42 PM   full 1747956992     0       0   dataL.002

  10/28/2015 08:25:56 AM   full 1278244996     0       0   dataC.002

  10/28/2015 08:25:56 AM   full 1278244996     0       0   dataL.001

  10/28/2015 08:26:02 AM   full 1261467786     0       0   dataC.002

  10/28/2015 08:26:02 AM   full 1261467786     0       0   dataL.001

 

Great.  One thing which kept me worried was the client ID on bck_new.  Will the new server use the client ID from the old server after the rename of client bck_cls to bck_new, or will it keep the one originally generated on bck_new?  Check:

[root@bck_new~]# printf "show client id\n print type: NSR client; name: bck_new.domain\n" | nsradmin -i -

                   client id: \

93a1118e-00000004-5343b9e6-5343b9e5-0001b659-6cf5d502;

[root@bck_new ~]#

Good, it is the one I needed (from the original server).  Expected, but I had my doubts.  Now, finally, I need to add the devices.

[root@bck_new tmp]# nsradmin -i /tmp/device.out

created resource id 52.0.41.121.0.0.0.0.161.174.63.86.10.248.22.65(1)

created resource id 53.0.41.121.0.0.0.0.161.174.63.86.10.248.22.65(1)

created resource id 54.0.41.121.0.0.0.0.161.174.63.86.10.248.22.65(1)

created resource id 55.0.41.121.0.0.0.0.161.174.63.86.10.248.22.65(1)

created resource id 56.0.41.121.0.0.0.0.161.174.63.86.10.248.22.65(1)

created resource id 57.0.41.121.0.0.0.0.161.174.63.86.10.248.22.65(1)

created resource id 58.0.41.121.0.0.0.0.161.174.63.86.10.248.22.65(1)

created resource id 59.0.41.121.0.0.0.0.161.174.63.86.10.248.22.65(1)

created resource id 60.0.41.121.0.0.0.0.161.174.63.86.10.248.22.65(1)

created resource id 62.0.41.121.0.0.0.0.161.174.63.86.10.248.22.65(1)

created resource id 63.0.41.121.0.0.0.0.161.174.63.86.10.248.22.65(1)

created resource id 65.0.41.121.0.0.0.0.161.174.63.86.10.248.22.65(1)

 

Check in the GUI that everything looks the way it should.  I also checked the volumes and they are all there.


Here comes the restore test:

[root@bck_new~]# recover -c bck_phy2

No index was found for /root/. The current working directory is /.

recover> add /etc/hosts

/etc

1 file(s) marked for recovery

recover> relocate /tmp

recover> recover

Recovering 1 file from /etc/ into /tmp

Volumes needed (all on-line):

        dataL.002 at <dd1>.domain_dataL002

Total estimated disk space needed for recover is 4 KB.

Requesting 1 file(s), this may take a while...

Recover start time: Sun 08 Nov 2015 10:37:11 PM CET

Requesting 1 recover session(s) from server.

libDDBoost version: major: 2, minor: 6, patch: 5, engineering: 0, build: 449492

Successfully established client DDCL session for recovering save-set ID '2536402345'.

./hosts

Received 1 file(s) from NSR server `bck_new'

Recover completion time: Sun 08 Nov 2015 10:37:13 PM CET

recover>

[root@bck_new ~]#

 

Nice.  Before I got the above working, I had one issue to fix: none of the devices would mount, and they all failed with an error stating that my ddboost user didn't have the necessary permissions.  So I re-entered the password for the ddboost user in the NSR device resource.  This kind of makes sense, as in the output produced by nsradmin the password is shown as ***** and that surely is not my password.  The next thing I would do, if this were real, is create a new set of devices under an mtree for bck_new and migrate the data over from the bck_cls mtree based devices.  Now, in the case of long-term backups, you are most likely doing this with tapes, so the DD related stuff does not apply to you at all.  Oh yes, I almost forgot: you will have to do some key cleanup on the server for the storage nodes as well.
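
If you prefer fixing the ddboost password from the CLI instead of NMC, an interactive nsradmin session along these lines should do it (the device name here is a placeholder - check the attribute names against a print of your own NSR device resource first):

nsradmin> . type: NSR device; name: dd1_dataC_device01
nsradmin> update password: "your-ddboost-password"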

 

Now, one important thing. What I did here is something customers are not supposed to do.  This was my first time doing it, and that doesn't mean I didn't miss something obvious (or less obvious).  Normally, this is done by EMC professional services, and if you do it yourself you risk hitting some issue.  And even if that issue is a bug, you won't get support, since you did something you were not supposed to do in the first place.  Keep that in mind before embarking on this adventure.

 

This might put off some folks, as engaging EMC professional services might mean spending a certain amount of money as well.  I'm not sure what the current view on this is nowadays, so you should check with support.  Most likely, they will do the same or something similar, and the reason behind it is to check what you have in the mdb and whether those records are error free for the migration path (I doubt that your NetWare 4.x backups will be OK with NW9, for example).  Anyway, once you migrate, you can upgrade to NW9 in the next stage.

 

Of course, the third option is always there - keep the existing server available and use it for restore purposes only.  This may also be an expensive option - depending on how long you wish to keep it and what maintenance costs you pay for the UNIX boxes.  The current EOSL matrix for NW looks like the following:

NW code    EOSL
8.0.x      March 31st 2016
8.1.x      October 31st 2016
8.2.x      September 30th 2017
9.0        September 30th 2018

 

I don't plan to consider NW9 more seriously until the summer of next year.  NW9 is currently in the DA phase.  It is an exciting version, but for large scale environments rather disruptive, and it requires some planning and testing.  Small scale environments should be just fine.  Anyway, I do not expect the first GA before the end of the year, and SP1 most likely in mid Q2.  Then, on top of that, you want to have at least 2-3 post-SP patches, so the summer or after-summer period next year sounds like a busy period for upgrades.

 

And summer reminds me of something else - EMC World (I know, that is in May and that is spring, but temperatures in Las Vegas at that time are already summer for me).  Dell is about to buy EMC, so we have EMC World, VMworld and Dell World.  It doesn't take rocket science to realize that Dell World and EMC World will probably merge.  Next year I still expect an EMC World, but it might be the last one.  However, there is a way more important reason to visit.  Normally, if you take NetWorker training, it will set you back about the same amount of money as going to EMC World.  So, EMC would be wise to offer learning classes for NetWorker at EMC World (eg. 2 days of NW, and half a day each for the different modules like SAP, MSEXCH, SQL and Oracle).  That leaves users 1 day to go for the free certification test - something of big value you get at EMC World.  Learning (vLAB), exam, access to engineering, roadmaps - sounds like a good business case (perhaps Sherry Davenport can check this with the folks in the loop).  And perhaps the last EMC World ever - only the future will tell.

 

My last advice - test it.  And then test it again.  And no matter how much you test it, something will slip through. But at least less than if you went into cowboy mode and broke it.  I didn't read what I wrote and there might be typos - so, no copy and paste please, and no complaints.  This was just a proof of concept.