Veritas InfoScale™ 7.4 Disaster Recovery Implementation Guide - Linux
- Section I. Introducing Storage Foundation and High Availability Solutions for disaster recovery
- About supported disaster recovery scenarios
- About disaster recovery scenarios
- About campus cluster configuration
- About replicated data clusters
- About global clusters
- How VCS global clusters work
- User privileges for cross-cluster operations
- VCS global clusters: The building blocks
- Visualization of remote cluster objects
- About global service groups
- About global cluster management
- About serialization - The Authority attribute
- About resiliency and "Right of way"
- VCS agents to manage wide-area failover
- About the Steward process: Split-brain in two-cluster global clusters
- Secure communication in global clusters
- Disaster recovery feature support for components in the Veritas InfoScale product suite
- Virtualization support for Storage Foundation and High Availability Solutions 7.4 products in replicated environments
- Planning for disaster recovery
- Section II. Implementing campus clusters
- Setting up campus clusters for VCS and SFHA
- About setting up a campus cluster configuration
- Preparing to set up a campus cluster configuration
- Configuring I/O fencing to prevent data corruption
- Configuring VxVM disk groups for campus cluster configuration
- Configuring VCS service group for campus clusters
- Setting up campus clusters for VxVM and VCS using Veritas InfoScale Operations Manager
- Fire drill in campus clusters
- About the DiskGroupSnap agent
- About running a fire drill in a campus cluster
- Setting up campus clusters for SFCFSHA, SFRAC
- About setting up a campus cluster for disaster recovery for SFCFSHA or SF Oracle RAC
- Preparing to set up a campus cluster in a parallel cluster database environment
- Configuring I/O fencing to prevent data corruption
- Configuring VxVM disk groups for a campus cluster in a parallel cluster database environment
- Configuring VCS service groups for a campus cluster for SFCFSHA and SF Oracle RAC
- Tuning guidelines for parallel campus clusters
- Best practices for a parallel campus cluster
- Section III. Implementing replicated data clusters
- Configuring a replicated data cluster using VVR
- Configuring a replicated data cluster using third-party replication
- About setting up a replicated data cluster configuration using third-party replication
- About typical replicated data cluster configuration using third-party replication
- About setting up third-party replication
- Configuring the service groups for third-party replication
- Fire drill in replicated data clusters using third-party replication
- Section IV. Implementing global clusters
- Configuring global clusters for VCS and SFHA
- Installing and configuring Cluster Server
- Setting up VVR replication
- About configuring VVR replication
- Best practices for setting up replication
- Creating a Replicated Data Set
- Creating a Primary RVG of an RDS
- Adding a Secondary to an RDS
- Changing the replication settings for a Secondary
- Synchronizing the Secondary and starting replication
- Starting replication when the data volumes are zero initialized
- Setting up third-party replication
- Configuring clusters for global cluster setup
- Configuring service groups for global cluster setup
- Fire drill in global clusters
- Configuring a global cluster with Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- About global clusters
- About replication for parallel global clusters using Storage Foundation and High Availability (SFHA) Solutions
- About setting up a global cluster environment for parallel clusters
- Configuring the primary site
- Configuring the secondary site
- Setting up replication between parallel global cluster sites
- Testing a parallel global cluster configuration
- Configuring global clusters with VVR and Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- About configuring a parallel global cluster using Volume Replicator (VVR) for replication
- Setting up replication on the primary site using VVR
- Setting up replication on the secondary site using VVR
- Starting replication of the primary site database volume to the secondary site using VVR
- Configuring Cluster Server to replicate the database volume using VVR
- Replication use cases for global parallel clusters
- Section V. Configuring disaster recovery in cloud environments
- Section VI. Reference
- Appendix A. Sample configuration files
- Sample Storage Foundation for Oracle RAC configuration files
- About sample main.cf files for Storage Foundation (SF) for Oracle RAC
- About sample main.cf files for Storage Foundation (SF) for Sybase ASE CE
- Sample main.cf for a basic Sybase ASE CE cluster configuration under VCS control with shared mount point on CFS for Sybase binary installation
- Sample main.cf for a basic Sybase ASE CE cluster configuration with local mount point on VxFS for Sybase binary installation
- Sample main.cf for a primary CVM VVR site
- Sample main.cf for a secondary CVM VVR site
Replication across multiple AWS Availability Zones and regions (campus cluster)
In this scenario, data is replicated across multiple Availability Zones and regions. The configuration uses the Openswan software VPN to connect the VPCs in the two regions.
Figure: Replication across multiple Availability Zones and regions illustrates this configuration, which includes the following components:
- Two VPCs with valid CIDR blocks (for example, 10.30.0.0/16 and 10.60.0.0/16) located in two different regions.
- A primary instance in AZ1 of region 1 and a secondary instance in AZ2 of region 2.
- Veritas InfoScale instances in each AZ.
- VPN instances at both ends, through which the primary instance communicates with the secondary instance.
- A VPN tunnel that secures communication between the instances across the VPCs.
- Elastic IP addresses (EIP) to connect the two VPN instances.
- Private IP addresses used for replication in standalone environments, or overlay IP addresses used for replication in clustered environments.
Perform the steps in the following procedure to set up replication across regions.
To set up replication across regions
- Create two VPCs with valid CIDR blocks, for example, 10.30.0.0/16 and 10.60.0.0/16, one in each region.
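For example, assuming the two regions are us-east-1 and us-west-2 (illustrative choices, as are all resource IDs and addresses in the examples that follow), the VPCs can be created with the AWS CLI:
# aws ec2 create-vpc --region us-east-1 --cidr-block 10.30.0.0/16
# aws ec2 create-vpc --region us-west-2 --cidr-block 10.60.0.0/16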
- Create the primary site EC2 instances in the respective Availability Zones of the first region.
- Create the primary site VPN instances in the respective Availability Zones of the first region. The VPN instances belong to the same VPC as the primary EC2 instances.
- Choose a valid overlay IP address as the replication IP address for the primary site. The overlay IP address is a private IP address outside of the primary site's VPC CIDR block. Plumb the overlay IP address on the master node of the primary site cluster.
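For example, if 172.16.1.1 is chosen as the primary site overlay IP address (an illustrative value outside both VPC CIDR blocks), it can be plumbed on the master node as follows; the interface name eth0 is an assumption, so substitute the instance's actual interface:
# ip addr add 172.16.1.1/32 dev eth0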
- Modify the route table on the primary site to include the overlay IP addresses. Ensure that the route table entries direct traffic destined for the secondary site to be routed through the primary site VPN instance, and traffic destined for the primary site overlay IP to be routed through the primary site InfoScale instance on which that IP is plumbed.
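For example, assuming 172.16.2.1 as the secondary site overlay IP address, and hypothetical IDs rtb-0aaa1111 for the primary route table, i-0vpn1111 for the primary VPN instance, and i-0node111 for the primary master node, the routes can be added with the AWS CLI:
# aws ec2 create-route --route-table-id rtb-0aaa1111 --destination-cidr-block 10.60.0.0/16 --instance-id i-0vpn1111
# aws ec2 create-route --route-table-id rtb-0aaa1111 --destination-cidr-block 172.16.2.1/32 --instance-id i-0vpn1111
# aws ec2 create-route --route-table-id rtb-0aaa1111 --destination-cidr-block 172.16.1.1/32 --instance-id i-0node111
A similar set of routes, with the roles reversed, applies on the secondary site.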
- Create the secondary site EC2 instances in the respective Availability Zones of the second region.
- Create the secondary site VPN instances in the respective Availability Zones of the second region. The VPN instances belong to the same VPC as the secondary EC2 instances.
- Choose a valid overlay IP address as the replication IP address for the secondary site. The overlay IP address is a private IP address outside of the secondary site's VPC CIDR block. Plumb the overlay IP address on the master node of the secondary site cluster.
- Modify the route table on the secondary site. Ensure that the route table entries direct traffic destined for the primary site to be routed through the secondary site VPN instance, and traffic destined for the secondary site overlay IP to be routed through the secondary site InfoScale instance on which that IP is plumbed.
- Set up connectivity across the regions using a software VPN. This sample configuration uses Openswan.
Perform the following steps:
Install the Openswan packages on the primary and secondary VPN instances.
Configure the /etc/ipsec.conf and /etc/ipsec.secrets files.
Note: The /etc/ipsec.conf file contains the private IP address of the VPN instance, the subnet range of the left (local) subnet, the elastic IP address of the destination VPN instance, and the subnet range of the destination (right) subnet. The /etc/ipsec.secrets file contains the secret key; this key must be the same on both VPN sites.
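The following is a minimal sketch of a matching conn section in /etc/ipsec.conf and the corresponding /etc/ipsec.secrets entry; the private IP address 10.30.0.10, the elastic IP address 203.0.113.25, and the key are illustrative values:
conn vpc2vpcConnection
    type=tunnel
    authby=secret
    left=10.30.0.10
    leftsubnet=10.30.0.0/16
    right=203.0.113.25
    rightsubnet=10.60.0.0/16
    auto=start
The /etc/ipsec.secrets entry, identical in content on both VPN instances:
10.30.0.10 203.0.113.25 : PSK "replace-with-shared-secret"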
Restart the IPSec service.
# service ipsec restart
Add the IPSec connection.
# ipsec auto --add vpc2vpcConnection
# ipsec auto --up vpc2vpcConnection
Enable IP forwarding on the VPN instances.
# sysctl -w net.ipv4.ip_forward=1
- Modify the ipsec.conf file to add the overlay IP addresses of both the primary and the secondary site VPN instances.
- Verify that the master nodes on the primary and secondary sites can reach each other using the overlay IP addresses.
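For example, from the primary site master node, using the illustrative overlay IP address of the secondary site:
# ping -c 3 172.16.2.1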
- Set up replication between the primary and secondary sites.
For instructions, see the chapter Setting up replication in the Veritas InfoScale Replication Administrator's Guide.
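Because the verification step below uses vradmin, the following sketch shows the general shape of a VVR setup: create the Primary RVG, add a Secondary, and start replication. The disk group, RVG, volume, SRL, and host names are illustrative; follow the referenced guide for the authoritative procedure.
# vradmin -g dg_name createpri rvg_name datavol_name srl_name
# vradmin -g dg_name addsec rvg_name pri_host sec_host
# vradmin -g dg_name -a startrep rvg_name sec_host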
- Verify the status of replication.
# vradmin -g dg_name repstatus rvg_name
Ensure that the RLINK is in CONNECT state and the replication status shows:
Replication status: replicating (connected)