Veritas InfoScale™ 8.0.2 Disaster Recovery Implementation Guide - AIX
- Section I. Introducing Storage Foundation and High Availability Solutions for disaster recovery
- About supported disaster recovery scenarios
- About disaster recovery scenarios
- About campus cluster configuration
- About replicated data clusters
- About global clusters
- How VCS global clusters work
- User privileges for cross-cluster operations
- VCS global clusters: The building blocks
- Visualization of remote cluster objects
- About global service groups
- About global cluster management
- About serialization - The Authority attribute
- About resiliency and "Right of way"
- VCS agents to manage wide-area failover
- About the Steward process: Split-brain in two-cluster global clusters
- Secure communication in global clusters
- Disaster recovery feature support for components in the Veritas InfoScale product suite
- Virtualization support for InfoScale 8.0.2 products in replicated environments
- Planning for disaster recovery
- About supported disaster recovery scenarios
- Section II. Implementing campus clusters
- Setting up campus clusters for VCS and SFHA
- About setting up a campus cluster configuration
- Preparing to set up a campus cluster configuration
- Configuring I/O fencing to prevent data corruption
- Configuring VxVM disk groups for campus cluster configuration
- Configuring VCS service group for campus clusters
- Setting up campus clusters for VxVM and VCS using Veritas InfoScale Operations Manager
- Fire drill in campus clusters
- About the DiskGroupSnap agent
- About running a fire drill in a campus cluster
- About setting up a campus cluster configuration
- Setting up campus clusters for SFCFSHA, SFRAC
- About setting up a campus cluster for disaster recovery for SFCFSHA or SF Oracle RAC
- Preparing to set up a campus cluster in a parallel cluster database environment
- Configuring I/O fencing to prevent data corruption
- Configuring VxVM disk groups for a campus cluster in a parallel cluster database environment
- Configuring VCS service groups for a campus cluster for SFCFSHA and SF Oracle RAC
- Tuning guidelines for parallel campus clusters
- Best practices for a parallel campus cluster
- Setting up campus clusters for VCS and SFHA
- Section III. Implementing replicated data clusters
- Configuring a replicated data cluster using VVR
- Configuring a replicated data cluster using third-party replication
- About setting up a replicated data cluster configuration using third-party replication
- About typical replicated data cluster configuration using third-party replication
- About setting up third-party replication
- Configuring the service groups for third-party replication
- Fire drill in replicated data clusters using third-party replication
- Section IV. Implementing global clusters
- Configuring global clusters for VCS and SFHA
- Installing and Configuring Cluster Server
- Setting up VVR replication
- About configuring VVR replication
- Best practices for setting up replication
- Creating a Replicated Data Set
- Creating a Primary RVG of an RDS
- Adding a Secondary to an RDS
- Changing the replication settings for a Secondary
- Synchronizing the Secondary and starting replication
- Starting replication when the data volumes are zero initialized
- Setting up third-party replication
- Configuring clusters for global cluster setup
- Configuring service groups for global cluster setup
- Fire drill in global clusters
- Configuring a global cluster with Storage Foundation Cluster File System High Availability or Storage Foundation for Oracle RAC
- About global clusters
- About replication for parallel global clusters using Storage Foundation and High Availability (SFHA) Solutions
- About setting up a global cluster environment for parallel clusters
- Configuring the primary site
- Configuring the secondary site
- Setting up replication between parallel global cluster sites
- Testing a parallel global cluster configuration
- Configuring a global cluster with Volume Replicator and Storage Foundation Cluster File System High Availability or Storage Foundation for Oracle RAC
- About configuring a parallel global cluster using Volume Replicator (VVR) for replication
- Setting up replication on the primary site using VVR
- Setting up replication on the secondary site using VVR
- Starting replication of the primary site database volume to the secondary site using VVR
- Configuring Cluster Server to replicate the database volume using VVR
- Replication use cases for global parallel clusters
- Configuring global clusters for VCS and SFHA
- Section V. Implementing disaster recovery configurations in virtualized environments
- Section VI. Reference
Configuring IBM PowerVM LPAR guest for disaster recovery
An IBM PowerVM LPAR is configured for disaster recovery by replicating its boot disk using a replication method such as Hitachi TrueCopy, EMC SRDF, or IBM rootvg cloning technology. The network configuration that the LPAR uses on the primary site may not be effective on the secondary site if the two sites are on different IP subnets. To apply a different network configuration at each site, you must make additional configuration changes to the LPAR resource.
To configure an LPAR for disaster recovery, configure VCS in the management LPARs at both sites with the GCO option. See the Cluster Server Administrator's Guide for more information about global clusters.
Perform the following steps to set up the LPAR guest (managed LPAR) for disaster recovery:
- On the primary and the secondary site, create the PowerVM LPAR guest using the Hardware Management Console (HMC), configuring the Ethernet and client Fibre Channel (FC) virtual adapters.
Note:
The operating system installed in the LPAR guest is replicated using the IBM rootvg cloning technology or an N_Port ID Virtualization (NPIV)-based DR strategy.
- On the LPAR guest, copy and install the VRTSvcsnr fileset from the VCS installation media. This fileset installs the vcs-reconfig service in the LPAR guest, which ensures that the site-specific network parameters are applied when the LPAR boots. To install the VRTSvcsnr fileset, run the following commands:
  # mkdir /<temp_dir>
  # cp <media>/pkgs/VRTSvcsnr.bff /<temp_dir>
  # cd /<temp_dir>
  # installp -a -d VRTSvcsnr.bff VRTSvcsnr
- Create a VCS service group and add a VCS LPAR resource for the LPAR guest. Configure the DROpts attribute of the LPAR resource with the site-specific values for each of the following: IPAddress, Netmask, Gateway, DNSServers (nameserver), DNSSearchPath, Device, Domain, and HostName.
Set the ConfigureNetwork key in the DROpts attribute to 1 to make the changes effective. If the value of the ConfigureNetwork key is 0, the LPAR agent does not apply the DROpts attributes to the guest LPAR. For more information about the DROpts attribute, see the Cluster Server Bundled Agents Reference Guide.
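As an illustrative sketch only, a main.cf definition for such an LPAR resource might resemble the following. All names and network values here are hypothetical, and attributes other than DROpts may differ in your environment; refer to the Cluster Server Bundled Agents Reference Guide for the authoritative attribute list.

```
LPAR lpar_dr_res (
    LPARName = managed_lpar1
    DROpts = { ConfigureNetwork = 1,
               IPAddress = "10.20.30.40",
               Netmask = "255.255.255.0",
               Gateway = "10.20.30.1",
               DNSServers = "10.20.1.1",
               DNSSearchPath = "example.com",
               Device = "en0",
               Domain = "example.com",
               HostName = "lpar1-dr" }
    )
```

Because DROpts is evaluated per site, each cluster in the global configuration carries its own site-specific values for these keys.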
- (Optional) To replicate rootvg using NPIV, map the boot disk LUN directly to the guest LPAR through NPIV, and replicate the source production rootvg LUN to the DR site using a hardware replication technology such as Hitachi TrueCopy or EMC SRDF. Then add the appropriate VCS replication resource to the LPAR DR service group. Examples of hardware replication agents are SRDF for EMC SRDF, HTC for Hitachi TrueCopy, and MirrorView for EMC MirrorView. The VCS LPAR resource depends on the replication resource.
For more information about the appropriate VCS replication agent to use when you configure the replication resource, see the following URL: https://sort.veritas.com/agents
The replication resource ensures that when the resource is online at a site, the underlying replicated devices are in primary mode and the remote devices are in secondary mode. Thus, when the LPAR resource is online, the underlying storage is always in read-write mode.
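The dependency described above can be sketched in main.cf terms as follows. This is a minimal, hypothetical fragment using the EMC SRDF agent as an example: the group, system, and resource names are illustrative, and the SRDF resource attributes are abbreviated; consult the agent's documentation for the required attributes.

```
group lpar_dr_grp (
    SystemList = { mgmt_lpar1 = 0, mgmt_lpar2 = 1 }
    ClusterList = { clus_primary = 0, clus_secondary = 1 }
    )

    SRDF srdf_res (
        GrpName = lpar_rootvg_devgrp
        )

    LPAR lpar_res (
        LPARName = managed_lpar1
        DROpts = { ConfigureNetwork = 1 }
        )

    lpar_res requires srdf_res
```

The `lpar_res requires srdf_res` line expresses the dependency stated above: VCS brings the replication resource online (promoting the local devices to primary mode) before it starts the LPAR resource.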
- Repeat step 1 through step 4 on the secondary site.
Figure: Sample resource dependency diagram for NPIV-based rootvg replication using hardware replication technology
When the LPAR comes online, the LPAR agent creates a private VLAN (with VLAN ID 123) between the management LPAR and the managed LPAR. This VLAN is used to pass the network parameters specified in the DROpts attribute to the managed LPAR. When the managed LPAR boots, it starts the vcs-reconfig service, which requests the network configuration from the management LPAR. The network configuration is then sent as part of the response over the same VLAN, and the vcs-reconfig service applies it by running the appropriate commands.