How to move a file system between different storage pools while keeping the exports and shares


Hardware: All Celerra Products

Software: NAS code 6.0
Software: NAS code 5.6







Use Celerra Replicator: copy the file system to the new storage pool, then swap the file system names and remount on the original mountpoint.


Follow these steps:

1) List the loopback interconnect ID for the Data Mover that the Production File System (PFS) is mounted on:



$ nas_cel -interconnect -list
id     name         source_server   destination_system   destination_server
20001  loopback     server_2        CS_NS40_1_MSS        server_2

2) Create a replication session for this PFS:



$ nas_replicate -create gustavo_rep -source -fs gustavo -destination -pool clarata_archive -interconnect id=20001 -max_time_out_of_sync 10


With this syntax, Replicator automatically creates the destination file system as <pfs_name>_replica1 (here, gustavo_replica1) in the specified storage pool. The -max_time_out_of_sync value of 10 minutes controls how often changes are transferred to the destination.

$ nas_replicate -l
Name             Type        Local Mover   Interconnect    Celerra       Status
gustavo_rep      filesystem  server_2      <->loopback     CS_NS40_1_M+  OK

$ nas_replicate -i gustavo_rep
ID                             = 1027_APM00073700689_0000_1103_APM00073700689_0000
Name                           = gustavo_rep
Source Status                  = OK
Network Status                 = OK
Destination Status             = OK
Last Sync Time                 = Fri Dec 03 22:26:00 BRST 2010
Type                           = filesystem
Celerra Network Server         = CS_NS40_1_MSS
Dart Interconnect              = loopback
Peer Dart Interconnect         = loopback
Replication Role               = loopback
Source Filesystem              = gustavo
Source Data Mover              = server_2
Source Interface               =
Source Control Port            = 0
Source Current Data Port       = 0
Destination Filesystem         = gustavo_replica1
Destination Data Mover         = server_2
Destination Interface          =
Destination Control Port       = 5085
Destination Data Port          = 8888
Max Out of Sync Time (minutes) = 10
Next Transfer Size (KB)        = 0
Current Transfer Size (KB)     = 0
Current Transfer Remain (KB)   = 0
Estimated Completion Time      =
Current Transfer is Full Copy  = No
Current Transfer Rate (KB/s)   = 0
Current Read Rate (KB/s)       = 0
Current Write Rate (KB/s)      = 0
Previous Transfer Rate (KB/s)  = 51405
Previous Read Rate (KB/s)      = 1446
Previous Write Rate (KB/s)     = 952
Average Transfer Rate (KB/s)   = 51405
Average Read Rate (KB/s)       = 1446
Average Write Rate (KB/s)      = 952
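
Before moving on to Step 3, confirm that the initial full copy has completed. One simple check (illustrative; the exact field layout may vary between NAS code versions) is to look at the full-copy flag in the session details:

$ nas_replicate -info gustavo_rep | grep "Full Copy"
Current Transfer is Full Copy  = No

Once the flag reads "No" and the source, network, and destination statuses are all OK, the destination holds a complete copy and the cutover can proceed.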

3) When the initial copy finishes, rename the original file system:

$ nas_fs -rename gustavo gustavo_orig
id        = 619
name      = gustavo_orig
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v1027
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
ro_servers= server_2
rw_vdms   =
ro_vdms   =
auto_ext  = no,virtual_provision=no
deduplication   = On

ckpts     = root_rep_ckpt_619_147321_2,root_rep_ckpt_619_147321_1
rep_sess  = 1027_APM00073700689_0000_1103_APM00073700689_0000(ckpts: root_rep_ckpt_619_147321_1, root_rep_ckpt_619_147321_2)
stor_devs = APM00073700689-000E,APM00073700689-0013,APM00073700689-0008,APM00073700689-000B
disks     = d19,d14,d16,d11
disk=d19   stor_dev=APM00073700689-000E addr=c16t1l10       server=server_2
disk=d19   stor_dev=APM00073700689-000E addr=c0t1l10        server=server_2
disk=d14   stor_dev=APM00073700689-0013 addr=c0t1l13        server=server_2
disk=d14   stor_dev=APM00073700689-0013 addr=c16t1l13       server=server_2
disk=d16   stor_dev=APM00073700689-0008 addr=c16t1l4        server=server_2
disk=d16   stor_dev=APM00073700689-0008 addr=c0t1l4         server=server_2
disk=d11   stor_dev=APM00073700689-000B addr=c0t1l7         server=server_2
disk=d11   stor_dev=APM00073700689-000B addr=c16t1l7        server=server_2

4) Then rename the destination file system to the original file system's name:

$ nas_fs -rename gustavo_replica1 gustavo
id        = 660
name      = gustavo
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v1103
pool      = clarata_archive
member_of = root_avm_fs_group_4
rw_servers= server_2
rw_vdms   =
ro_vdms   =
auto_ext  = no,virtual_provision=no
deduplication   = On
ckpts     = root_rep_ckpt_660_147328_2,root_rep_ckpt_660_147328_1
rep_sess  = 1027_APM00073700689_0000_1103_APM00073700689_0000(ckpts: root_rep_ckpt_660_147328_1, root_rep_ckpt_660_147328_2)
stor_devs = APM00073700689-0018,APM00073700689-001A,APM00073700689-0009,APM00073700689-002C
disks     = d33,d34,d36,d31
disk=d33   stor_dev=APM00073700689-0018 addr=c16t1l10       server=server_2
disk=d33   stor_dev=APM00073700689-0018 addr=c0t1l10        server=server_2
disk=d34   stor_dev=APM00073700689-001A addr=c0t1l13        server=server_2
disk=d34   stor_dev=APM00073700689-001A addr=c16t1l13       server=server_2
disk=d36   stor_dev=APM00073700689-0009 addr=c16t1l4        server=server_2
disk=d36   stor_dev=APM00073700689-0009 addr=c0t1l4         server=server_2
disk=d31   stor_dev=APM00073700689-002C addr=c0t1l7         server=server_2
disk=d31   stor_dev=APM00073700689-002C addr=c16t1l7        server=server_2


5) Delete the replication session (this also removes the internal checkpoints used by the replication):

$ nas_replicate -delete gustavo_rep -mode both
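
To confirm the cleanup worked, you can re-check the session list and the file system details (illustrative checks; the session should no longer be listed, and the ckpts/rep_sess fields of the file system should now be empty):

$ nas_replicate -list
$ nas_fs -info gustavo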


6) Unmount both file systems permanently (-p):


$ server_umount server_2 -p gustavo_orig
server_2 : done
$ server_umount server_2 -p gustavo
server_2 : done


7) Then mount the new file system on the original mountpoint:


$ server_mount server_2 gustavo /gustavo
server_2 : done


This ensures all exports and shares remain correct. Exports and shares point to the mountpoint, and since the new file system reuses the original mountpoint, no changes to them are needed.
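
You can verify this by listing the exports and shares on the Data Mover before and after the cutover and confirming the entries for the mountpoint are unchanged, for example:

$ server_export server_2 -list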


8) Remove the original file system (this can be done a few days later, once the new file system has been verified):

$ nas_fs -d gustavo_orig


Notes:


  • If you have checkpoints on the PFS, you must remove them before Step 6, as well as any checkpoint schedules for the PFS. The checkpoint schedules will need to be recreated for the new file system.
  • If the PFS is being replicated, the replication session must be stopped and removed before this procedure.
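
For example, user checkpoints and checkpoint schedules on the PFS can be listed and removed as follows (the checkpoint and schedule names below are hypothetical placeholders):

$ nas_fs -list -type ckpt
$ nas_fs -delete gustavo_ckpt1
$ nas_ckpt_schedule -list
$ nas_ckpt_schedule -delete gustavo_daily_schedule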



ATTENTION: This procedure causes a brief outage during Steps 6-7, typically around 1-2 minutes.