Replication

This section gives examples of the following replication functionality: local replication with SnapVX, snapshot policies, snapshot policy compliance, remote replication with SRDF, and remote replication with SRDF/Metro Smart DR.

Local Replication with SnapVX

In this example a new storage group is created with a single 1 GB volume. A snapshot name is generated from the current time so it can be easily identified, and a snapshot of the storage group is created. The operation is verified by querying the list of snapshots for the storage group and confirming that the newly created snapshot is present in that list.

 1"""docs/source/programmers_guide_src/code/replication-snapshot_create.py"""
 2
 3import PyU4V
 4import time
 5
 6# Initialise PyU4V connection to Unisphere
 7conn = PyU4V.U4VConn()
 8
 9# Create storage Group with one volume using settings specified for
10# service level and capacity
11storage_group = conn.provisioning.create_non_empty_storage_group(
12    srp_id='SRP_1', storage_group_id='PyU4V_SG', service_level='Diamond',
13    workload=None, num_vols=1, vol_size=1, cap_unit='GB')
14
15# Define a Name for the Snapshot, in this case the name auto appends
16# the host
17# time for when it was taken for ease of identification
18snap_name = 'PyU4V_Snap_' + time.strftime('%d%m%Y%H%M%S')
19
20# Create the snapshot of the storage group containing the volume and
21# storage group created in the previous step
22snapshot = conn.replication.create_storage_group_snapshot(
23    storage_group_id=storage_group['storageGroupId'], snap_name=snap_name)
24
25# Confirm the snapshot was created successfully, get a list of storage
26# group snapshots
27snap_list = conn.replication.get_storage_group_snapshot_list(
28    storage_group_id=storage_group['storageGroupId'])
29
30# Assert the snapshot name is in the list of storage group snapshots
31assert snapshot['name'] in snap_list
32
33# Close the session
34conn.close_session()
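
Once a snapshot exists it can also act as a recovery point for the source storage group. The following is a minimal sketch of restoring from a snapshot like the one created above; it assumes that modify_storage_group_snapshot_by_snap_id accepts a restore keyword alongside the link keyword shown in the next example, and the storage group and snapshot names are hypothetical. Verify the exact signature against the PyU4V API reference for your release.

"""Hedged sketch - restore a storage group from an existing snapshot."""

import PyU4V

conn = PyU4V.U4VConn()

# Hypothetical names matching the example above
sg_id = 'PyU4V_SG'
snap_name = 'PyU4V_Snap_01012024120000'

# Look up the snap id(s) for the named snapshot
snap_ids = conn.replication.get_storage_group_snapshot_snap_id_list(
    sg_id, snap_name)

# Restore the storage group from the snapshot - the restore keyword is an
# assumption, confirm it in the PyU4V replication API reference; the target
# storage group is not used for a restore
conn.replication.modify_storage_group_snapshot_by_snap_id(
    src_storage_grp_id=sg_id, tgt_storage_grp_id=None,
    snap_name=snap_name, snap_id=snap_ids[0], restore=True)

# Close the session
conn.close_session()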

This example creates a storage group with a volume, takes a snapshot of that storage group and links the snapshot to a new storage group. This is a typical workflow for provisioning a development environment by making a copy of the data available.

 1"""docs/source/programmers_guide_src/code/replication-snapshot_link.py"""
 2
 3import PyU4V
 4import time
 5
 6# Set up connection to Unisphere for PowerMax Server, details collected
 7# from configuration file in working directory where script is stored.
 8conn = PyU4V.U4VConn()
 9
10# Create storage Group with one volume
11storage_group = conn.provisioning.create_non_empty_storage_group(
12    srp_id='SRP_1', storage_group_id='PyU4V_SG', service_level='Diamond',
13    workload=None, num_vols=1, vol_size=1, cap_unit='GB')
14
15# Define a Name for the Snapshot, in this case the name auto appends the
16# host time for when it was taken for ease of identification
17snap_name = 'PyU4V_Snap_' + time.strftime('%d%m%Y%H%M%S')
18
19# Create the snapshot of the storage group containing the volume and
20# storage group created in the previous step
21snapshot = conn.replication.create_storage_group_snapshot(
22    storage_group_id=storage_group['storageGroupId'], snap_name=snap_name)
23snapshot_details = conn.replication.get_storage_group_snapshot_snap_id_list(
24    storage_group['storageGroupId'], snap_name)
25snap_id = snapshot_details[0]
26
27# Link The Snapshot to a new storage group, the API will automatically
28# create the link storage group with the right number of volumes if one
29# with that name doesn't already exist
30conn.replication.modify_storage_group_snapshot_by_snap_id(
31    src_storage_grp_id=storage_group['storageGroupId'],
32    tgt_storage_grp_id='PyU4V_LNK_SG',
33    snap_name=snap_name, snap_id=snap_id, link=True,)
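
When the linked copy is no longer needed, the target storage group can be unlinked from the snapshot. The sketch below assumes the same call accepts an unlink keyword mirroring link=True; the storage group and snapshot names are the hypothetical ones used above, and the exact signature should be confirmed against the PyU4V API reference.

"""Hedged sketch - unlink the target storage group from the snapshot."""

import PyU4V

conn = PyU4V.U4VConn()

# Hypothetical names matching the example above
src_sg = 'PyU4V_SG'
lnk_sg = 'PyU4V_LNK_SG'
snap_name = 'PyU4V_Snap_01012024120000'

# Look up the snap id for the snapshot
snap_id = conn.replication.get_storage_group_snapshot_snap_id_list(
    src_sg, snap_name)[0]

# Unlink the target storage group - the unlink keyword is assumed to mirror
# the link keyword used in the example above
conn.replication.modify_storage_group_snapshot_by_snap_id(
    src_storage_grp_id=src_sg, tgt_storage_grp_id=lnk_sg,
    snap_name=snap_name, snap_id=snap_id, unlink=True)

# Close the session
conn.close_session()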

Snapshot Policies

The Snapshot policy feature provides snapshot orchestration at scale (up to 1,024 snapshots per storage group) and simplifies snapshot management for standard and cloud snapshots.

Snapshots can be used to recover from data corruption, accidental deletion or other damage, offering continuous data protection. However, a large number of snapshots can be difficult to manage. The Snapshot policy feature provides an end-to-end solution to create, schedule and manage standard (local) and cloud snapshots.

For full details on snapshot policies in Unisphere for PowerMax, please consult the official Unisphere for PowerMax online help guide.

In the example below a new snapshot policy is created, modified, then deleted.

 1"""docs/source/programmers_guide_src/code/replication-snapshot_policy.py"""
 2
 3import PyU4V
 4
 5# Set up connection to Unisphere for PowerMax Server, details collected
 6# from configuration file in working directory where script is stored.
 7conn = PyU4V.U4VConn()
 8
 9# Create storage Group with one volume
10storage_group = conn.provisioning.create_non_empty_storage_group(
11    srp_id='SRP_1', storage_group_id='PyU4V_SG', service_level='Diamond',
12    workload=None, num_vols=1, vol_size=1, cap_unit='GB')
13
14# Create a snapshot policy for the new storage group
15policy_name = 'PyU4V-Test_Policy'
16snap_policy = conn.snapshot_policy.create_snapshot_policy(
17    snapshot_policy_name=policy_name, interval='1 Day',
18    cloud_retention_days=7, cloud_provider_name='Generic_Provider',
19    local_snapshot_policy_snapshot_count=5)
20
21# Confirm the snapshot policy was created successfully
22assert policy_name in conn.snapshot_policy.get_snapshot_policy_list()
23
24# Get snapshot policy detailed info
25snap_policy_details = conn.snapshot_policy.get_snapshot_policy(
26    snapshot_policy_name=policy_name)
27
28# Modify the snapshot policy
29new_policy_name = 'PyU4V-Test_Policy-5mins'
30new_snap_policy = conn.snapshot_policy.modify_snapshot_policy(
31    snapshot_policy_name=policy_name, action='Modify',
32    new_snapshot_policy_name=new_policy_name, interval_mins=5)
33
34# Confirm the snapshot policy was renamed successfully
35assert new_policy_name in conn.snapshot_policy.get_snapshot_policy_list()
36assert policy_name not in conn.snapshot_policy.get_snapshot_policy_list()
37
38# Delete the snapshot policy
39conn.snapshot_policy.delete_snapshot_policy(
40    snapshot_policy_name=new_policy_name)
41
42# Close the session
43conn.close_session()
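
An existing snapshot policy can also be applied to a storage group after creation. The sketch below assumes that modify_snapshot_policy accepts an 'AssociateToStorageGroups' action mirroring the 'DisassociateFromStorageGroups' action used in the compliance example that follows; the policy and storage group names are hypothetical.

"""Hedged sketch - associate an existing policy with a storage group."""

import PyU4V

conn = PyU4V.U4VConn()

# Hypothetical names for an existing policy and storage group
policy_name = 'PyU4V-Test_Policy'
storage_group_name = 'PyU4V_SG'

# Associate the policy with the storage group - the action string is an
# assumption, confirm it in the PyU4V snapshot_policy API reference
conn.snapshot_policy.modify_snapshot_policy(
    snapshot_policy_name=policy_name, action='AssociateToStorageGroups',
    storage_group_names=[storage_group_name])

# Close the session
conn.close_session()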

Snapshot Policy Compliance

Snapshot policy compliance can be queried over a period of time. The supported ranges are last week, last four weeks, epoch to/from timestamps and human-readable to/from timestamps.

For full details on snapshot policies in Unisphere for PowerMax, please consult the official Unisphere for PowerMax online help guide.

In the example below, snapshot policy compliance over the last week is queried.

 1"""docs/source/programmers_guide_src/code/
 2replication-snapshot-policy-compliance.py"""
 3
 4import PyU4V
 5
 6# Set up connection to Unisphere for PowerMax Server, details collected
 7# from configuration file in working directory where script is stored.
 8conn = PyU4V.U4VConn()
 9
10# Create a snapshot policy
11snapshot_policy_name = 'PyU4V_Compliance_Policy'
12conn.snapshot_policy.create_snapshot_policy(
13    snapshot_policy_name=snapshot_policy_name, interval='1 Day',
14    local_snapshot_policy_snapshot_count=5)
15
16# Get the snapshot policy
17snapshot_policy_details = (
18    conn.snapshot_policy.get_snapshot_policy(snapshot_policy_name))
19
20# Check that snapshot policy exists
21assert snapshot_policy_details and snapshot_policy_details.get(
22    'snapshot_policy_name')
23
24# Create storage Group with one volume and associate with snapshot
25# policy.
26storage_group_name = 'PyU4V_compliance_SG'
27storage_group = conn.provisioning.create_non_empty_storage_group(
28    srp_id='SRP_1', storage_group_id=storage_group_name,
29    service_level='Diamond', workload=None,
30    num_vols=1, vol_size=1, cap_unit='GB',
31    snapshot_policy_ids=[snapshot_policy_name])
32
33# Get the storage group
34storage_group_details = conn.provisioning.get_storage_group(
35    storage_group_name)
36
37# Check that storage group exists
38assert storage_group_details and storage_group_details.get('storageGroupId')
39
40# Get the compliance details
41compliance_details = (
42    conn.snapshot_policy.get_snapshot_policy_compliance_last_week(
43        storage_group_name))
44
45# Check details have been return
46assert compliance_details
47
48# Disassociate from snapshot policy
49conn.snapshot_policy.modify_snapshot_policy(
50    snapshot_policy_name, 'DisassociateFromStorageGroups',
51    storage_group_names=[storage_group_name])
52
53# Delete the snapshot policy
54conn.snapshot_policy.delete_snapshot_policy(snapshot_policy_name)
55
56# Get volumes from the storage group
57volume_list = (conn.provisioning.get_volumes_from_storage_group(
58    storage_group_name))
59
60# Delete the storage group
61conn.provisioning.delete_storage_group(storage_group_id=storage_group_name)
62
63# Delete each volume from storage group
64for volume in volume_list:
65    conn.provisioning.delete_volume(volume)
66
67# Close the session
68conn.close_session()
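
The other supported time ranges can be queried in the same way. The sketch below assumes a get_snapshot_policy_compliance_last_four_weeks helper mirroring the last-week call above, and a general get_snapshot_policy_compliance call that accepts human-readable from/to timestamps; the keyword names and timestamp format are assumptions to be confirmed against the PyU4V API reference.

"""Hedged sketch - query compliance over other time ranges."""

import PyU4V

conn = PyU4V.U4VConn()

# Hypothetical existing storage group with an associated snapshot policy
storage_group_name = 'PyU4V_compliance_SG'

# Compliance over the last four weeks - assumed to mirror the last-week
# helper used in the example above
compliance_four_weeks = (
    conn.snapshot_policy.get_snapshot_policy_compliance_last_four_weeks(
        storage_group_name))

# Compliance between two human readable timestamps - keyword names and
# timestamp format are assumptions
compliance_range = conn.snapshot_policy.get_snapshot_policy_compliance(
    storage_group_name, from_time_string='2023-01-01 00:00',
    to_time_string='2023-01-07 23:59')

# Close the session
conn.close_session()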

Remote Replication with SRDF

This example creates a storage group with some volumes on the PowerMax array. Once the storage group has been created, the volumes in it are protected to a remote array using SRDF/Metro, providing active/active business continuity via the Symmetrix Remote Data Facility (SRDF).

 1"""docs/source/programmers_guide_src/code/replication-srdf_protection.py"""
 2
 3import PyU4V
 4
 5# Initialise PyU4V connection to Unisphere
 6conn = PyU4V.U4VConn()
 7
 8# Create storage Group with one volume using settings specified for
 9# service level and capacity
10storage_group = conn.provisioning.create_non_empty_storage_group(
11    srp_id='SRP_1', storage_group_id='PyU4V_SG', service_level='Diamond',
12    workload=None, num_vols=1, vol_size=1, cap_unit='GB')
13
14# Make a call to setup the remote replication, this will automatically
15# create a storage group with the same name on the remote array with the
16# correct volume count and size, the example here is executed
17# asynchronously and a wait is added to poll the async job id until
18# complete
19srdf_job_id = conn.replication.create_storage_group_srdf_pairings(
20    storage_group_id=storage_group['storageGroupId'],
21    remote_sid=conn.remote_array, srdf_mode="Active", _async=True)
22
23# Wait until the previous create SRDF pairing job has completed before
24# proceeding
25conn.common.wait_for_job_complete(job=srdf_job_id)
26
27# The now protected storage group will have an RDFG associated with it,
28# using the function conn.replication.get_storage_group_rdfg() function we
29# can retrieve a list of RDFGs associated with the storage group, in this
30# case there will only be one
31rdfg_list = conn.replication.get_storage_group_srdf_group_list(
32    storage_group_id=storage_group['storageGroupId'])
33
34# Extract the (only) RDFG number from the retrieved list
35rdfg_number = rdfg_list[0]
36
37# Finally the details of the protected storage group can be output to the
38# user.
39storage_group_srdf_info = conn.replication.get_storage_group_srdf_details(
40    storage_group_id=storage_group['storageGroupId'],
41    rdfg_num=rdfg_number)
42
43# Close the session
44conn.close_session()
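
Once a storage group is SRDF protected, day-to-day replication control is typically driven through the storage group and its SRDF group number. The sketch below assumes a modify_storage_group_srdf call with 'Suspend' and 'Establish' action strings; the function name and actions should be confirmed against the PyU4V replication API reference, and the storage group name is hypothetical.

"""Hedged sketch - suspend and re-establish SRDF replication."""

import PyU4V

conn = PyU4V.U4VConn()

# Hypothetical SRDF protected storage group
sg_id = 'PyU4V_SG'

# Retrieve the SRDF group number associated with the storage group
rdfg_number = conn.replication.get_storage_group_srdf_group_list(
    storage_group_id=sg_id)[0]

# Suspend replication - function name and action string are assumptions
conn.replication.modify_storage_group_srdf(
    storage_group_id=sg_id, action='Suspend',
    srdf_group_number=rdfg_number)

# Re-establish replication
conn.replication.modify_storage_group_srdf(
    storage_group_id=sg_id, action='Establish',
    srdf_group_number=rdfg_number)

# Close the session
conn.close_session()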

Remote Replication with SRDF Metro Smart DR

SRDF/Metro Smart DR is a two-region high-availability (HA) disaster recovery (DR) solution. It integrates SRDF/Metro and SRDF/A, enabling HA DR for an SRDF/Metro session. By closely coupling the SRDF/A sessions on both sides of an SRDF/Metro pair, SRDF/Metro Smart DR can replicate to a single DR device.

SRDF/Metro Smart DR environments are identified by a unique name and contain three arrays (MetroR1, MetroR2, and DR). For SRDF/Metro Smart DR, arrays must be running PowerMaxOS 5978.669.669 or higher.

This example creates a storage group with some volumes on the PowerMax array. Once the storage group has been created, the volumes in it are protected to a remote array using SRDF/Metro Smart DR.

Note: once Metro DR is set up, the environment is controlled exclusively by the environment name. SRDF/Metro and SRDF/A replication cannot be controlled by the standard replication calls without first deleting the Metro DR environment.

For more information on SRDF/Metro Smart DR, please refer to the Dell EMC Solutions Enabler SRDF Family CLI User Guide available at https://support.dell.com.

 1"""docs/source/programmers_guide_src/code/replication-metro_dr.py"""
 2
 3import PyU4V
 4
 5# Initialise PyU4V connection to Unisphere
 6conn = PyU4V.U4VConn()
 7
 8metro_r1_array_id = '000297600111'
 9metro_r2_array_id = '000297600112'
10dr_array_id = '000297600113'
11
12sg_name = 'PyU4V_Test_MetroDR'
13environment_name = 'PyU4VMetro'
14
15# Create a storage Group with some volumes
16sg_details = conn.provisioning.create_non_empty_storage_group(
17    storage_group_id=sg_name, service_level='Diamond',
18    num_vols=5, vol_size=6, cap_unit='GB', srp_id='SRP_1',
19    workload=None)
20
21"""
22The next call section creates the metro dr environment, this includes all
23Necessary SRDF setup creating the SRDF metro pairings and remote devices
24and storage group at the Metro R2 array.
25
26The API also creates all the necessary SRDF groups between R11, R21 and R2
27for the DR leg, this includes a recovery SRDF group.  The example is
28using async execution of the REST calls,  this task can take many
29minutes to complete and depending on the number of devices.
30"""
31job = conn.metro_dr.create_metrodr_environment(
32    storage_group_name=sg_name, environment_name=environment_name,
33    metro_r1_array_id=metro_r1_array_id, metro_r2_array_id=metro_r2_array_id,
34    dr_array_id=dr_array_id, dr_replication_mode='adaptivecopydisk')
35
36conn.common.wait_for_job_complete(job=job)
37
38metro_dr_env_details = conn.metro_dr.get_metrodr_environment_details(
39    environment_name, array_id=metro_r1_array_id)
40
41# Close the session
42conn.close_session()
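
As noted above, the Metro DR environment must be removed before the underlying SRDF/Metro and SRDF/A sessions can again be controlled with the standard replication calls. The sketch below assumes a delete_metrodr_environment call in the metro_dr module; confirm the exact name and parameters against the PyU4V API reference. The environment and array IDs are the hypothetical values from the example above.

"""Hedged sketch - remove a Metro DR environment."""

import PyU4V

conn = PyU4V.U4VConn()

# Hypothetical values matching the example above
environment_name = 'PyU4VMetro'
metro_r1_array_id = '000297600111'

# Delete the Metro DR environment - the function name is an assumption,
# confirm it in the PyU4V metro_dr API reference
conn.metro_dr.delete_metrodr_environment(
    environment_name, array_id=metro_r1_array_id)

# Close the session
conn.close_session()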