InfoScale™ 9.0 Disaster Recovery Implementation Guide - Solaris
- Section I. Introducing Storage Foundation and High Availability Solutions for disaster recovery
- About supported disaster recovery scenarios
- About disaster recovery scenarios
- About campus cluster configuration
- About replicated data clusters
- About global clusters
- How VCS global clusters work
- User privileges for cross-cluster operations
- VCS global clusters: The building blocks
- Visualization of remote cluster objects
- About global service groups
- About global cluster management
- About serialization - The Authority attribute
- About resiliency and "Right of way"
- VCS agents to manage wide-area failover
- About the Steward process: Split-brain in two-cluster global clusters
- Secure communication in global clusters
- Disaster recovery feature support for components in the Veritas InfoScale product suite
- Virtualization support for InfoScale 9.0 products in replicated environments
- Planning for disaster recovery
- About supported disaster recovery scenarios
- Section II. Implementing campus clusters
- Setting up campus clusters for VCS and SFHA
- About setting up a campus cluster configuration
- Preparing to set up a campus cluster configuration
- Configuring I/O fencing to prevent data corruption
- Configuring VxVM disk groups for campus cluster configuration
- Configuring VCS service group for campus clusters
- Setting up campus clusters for VxVM and VCS using Veritas InfoScale Operations Manager
- Fire drill in campus clusters
- About the DiskGroupSnap agent
- About running a fire drill in a campus cluster
- Setting up campus clusters for SFCFSHA, SFRAC
- About setting up a campus cluster for disaster recovery for SFCFSHA or SF Oracle RAC
- Preparing to set up a campus cluster in a parallel cluster database environment
- Configuring I/O fencing to prevent data corruption
- Configuring VxVM disk groups for a campus cluster in a parallel cluster database environment
- Configuring VCS service groups for a campus cluster for SFCFSHA and SF Oracle RAC
- Tuning guidelines for parallel campus clusters
- Best practices for a parallel campus cluster
- Section III. Implementing replicated data clusters
- Configuring a replicated data cluster using VVR
- Configuring a replicated data cluster using third-party replication
- About setting up a replicated data cluster configuration using third-party replication
- About typical replicated data cluster configuration using third-party replication
- About setting up third-party replication
- Configuring the service groups for third-party replication
- Fire drill in replicated data clusters using third-party replication
- Section IV. Implementing global clusters
- Configuring global clusters for VCS and SFHA
- Installing and Configuring Cluster Server
- Setting up VVR replication
- About configuring VVR replication
- Best practices for setting up replication
- Creating a Replicated Data Set
- Creating a Primary RVG of an RDS
- Adding a Secondary to an RDS
- Changing the replication settings for a Secondary
- Synchronizing the Secondary and starting replication
- Starting replication when the data volumes are zero initialized
- Setting up third-party replication
- Configuring clusters for global cluster setup
- Configuring service groups for global cluster setup
- Fire drill in global clusters
- Configuring a global cluster with Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- About global clusters
- About replication for parallel global clusters using Storage Foundation and High Availability (SFHA) Solutions
- About setting up a global cluster environment for parallel clusters
- Configuring the primary site
- Configuring the secondary site
- Setting up replication between parallel global cluster sites
- Testing a parallel global cluster configuration
- Configuring global clusters with VVR and Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- About configuring a parallel global cluster using Volume Replicator (VVR) for replication
- Setting up replication on the primary site using VVR
- Setting up replication on the secondary site using VVR
- Starting replication of the primary site database volume to the secondary site using VVR
- Configuring Cluster Server to replicate the database volume using VVR
- Replication use cases for global parallel clusters
- Section V. Implementing disaster recovery configurations in virtualized environments
- Section VI. Reference
- Appendix A. Sample configuration files
- Sample Storage Foundation for Oracle RAC configuration files
- About sample main.cf files for Storage Foundation (SF) for Oracle RAC
- About sample main.cf files for Storage Foundation (SF) for Sybase ASE CE
- Sample main.cf for a basic Sybase ASE CE cluster configuration under VCS control with shared mount point on CFS for Sybase binary installation
- Sample main.cf for a basic Sybase ASE CE cluster configuration with local mount point on VxFS for Sybase binary installation
- Sample main.cf for a primary CVM VVR site
- Sample main.cf for a secondary CVM VVR site
Setting up replication between parallel global cluster sites
You have configured Cluster Server (VCS) service groups for the database on each cluster. Each cluster also requires a virtual IP address, associated with the cluster, for cross-cluster communication. The VCS installation and the creation of the ClusterService group typically involve defining this IP address.
Configure a global cluster by setting up the following:
- Heartbeat
- Wide-area cluster (wac)
- GCO IP (gcoip)
- Remote cluster resources
Table: Tasks for configuring a parallel global cluster

| Task | Description |
|---|---|
| Prepare to configure global parallel clusters | Before you configure a global cluster, review the requirements for a parallel global cluster configuration. |
| Configure a global cluster using the global clustering wizard | See "To modify the ClusterService group for global clusters using the global clustering wizard". |
| Define the remote global cluster and heartbeat objects | See "To define the remote cluster and heartbeat". |
| Configure global service groups for database resources | See "To configure global service groups for database resources". |
| Start replication between the sites | For software-based replication using Volume Replicator (VVR), see "About configuring a parallel global cluster using Volume Replicator (VVR) for replication". For replication using Oracle Data Guard, see the Data Guard documentation from Oracle. For hardware-based replication, see the replication agent guide for your hardware and the Cluster Server Bundled Agents Guide. |
| Test the HA/DR configuration before putting it into production | See "Testing a parallel global cluster configuration". |
The global clustering wizard completes the following tasks:
Validates the ability of the current configuration to support a global cluster environment.
Creates the components that enable the separate clusters, each of which contains a different set of GAB memberships, to connect and operate as a single unit.
Creates the ClusterService group, or updates an existing ClusterService group.
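For reference, the ClusterService group that the wizard creates or updates typically contains a wide-area connector (wac) Application resource, a GCO IP (gcoip) resource, and a NIC resource. The following sketch shows how such a group might appear in main.cf on the primary cluster; the NIC resource name csgnic, the device name net0, and the netmask are illustrative assumptions, and the attribute values in your configuration will differ. The virtual IP address shown is the primary-cluster address used in the examples that follow.
group ClusterService (
    SystemList = { sys1 = 0, sys2 = 1 }
    AutoStartList = { sys1, sys2 }
    OnlineRetryLimit = 3
    )

    Application wac (
        StartProgram = "/opt/VRTSvcs/bin/wacstart"
        StopProgram = "/opt/VRTSvcs/bin/wacstop"
        MonitorProcesses = { "/opt/VRTSvcs/bin/wac" }
        )

    IP gcoip (
        Device = net0
        Address = "10.10.10.101"
        NetMask = "255.255.255.0"
        )

    NIC csgnic (
        Device = net0
        )

    gcoip requires csgnic
    wac requires gcoip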
To modify the ClusterService group for global clusters using the global clustering wizard
- On the primary cluster, start the GCO Configuration wizard:
# /opt/VRTSvcs/bin/gcoconfig
- The wizard discovers the NIC devices on the local system and prompts you to enter the device to be used for the global cluster. Specify the name of the device and press Enter.
- If you do not have NIC resources in your configuration, the wizard asks you whether the specified NIC will be the public NIC used by all the systems. Enter y if it is the public NIC; otherwise enter n. If you entered n, the wizard prompts you to enter the names of NICs on all systems.
- Enter the virtual IP address for the local cluster.
- If you do not have IP resources in your configuration, the wizard prompts you for the netmask associated with the virtual IP. The wizard detects a netmask and suggests a value; you can accept the suggested value or enter another one.
The wizard starts running commands to create or update the ClusterService group. Various messages indicate the status of these commands. After running these commands, the wizard brings the ClusterService failover group online on any one of the nodes in the cluster.
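To confirm that the wizard brought the group online, check the group state from any node in the cluster. This assumes the default group name ClusterService:
# hagrp -state ClusterService
The group should be reported as ONLINE on exactly one node.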
After configuring global clustering, add the remote cluster object to define the IP address of the cluster on the secondary site, and the heartbeat object to define the cluster-to-cluster heartbeat. Heartbeats monitor the health of remote clusters. VCS can communicate with the remote cluster only after you set up the heartbeat resource on both clusters.
To define the remote cluster and heartbeat
- On the primary site, enable write access to the configuration:
# haconf -makerw
- On the primary site, define the remote cluster and its virtual IP address.
In this example, the remote cluster is clus2 and its IP address is 10.11.10.102:
# haclus -add clus2 10.11.10.102
- Complete step 1 and step 2 on the secondary site using the name and IP address of the primary cluster.
In this example, the primary cluster is clus1 and its IP address is 10.10.10.101:
# haclus -add clus1 10.10.10.101
- On the primary site, add the heartbeat object for the cluster. In this example, the heartbeat method is ICMP ping.
# hahb -add Icmp
- Define the following attributes for the heartbeat resource:
ClusterList lists the remote cluster.
Arguments enables you to define the virtual IP address for the remote cluster.
For example:
# hahb -modify Icmp ClusterList clus2
# hahb -modify Icmp Arguments 10.11.10.102 -clus clus2
- Save the configuration and change the access to read-only on the local cluster:
# haconf -dump -makero
- Complete steps 4 through 6 on the secondary site, using the appropriate values to define the primary-site cluster and its IP address as the remote cluster for the secondary cluster (see the command sketch after this procedure).
- It is advisable to set the "OnlineRetryLimit" and "OfflineWaitLimit" attributes of the IP resource type to 1 on both clusters:
# hatype -modify IP OnlineRetryLimit 1
# hatype -modify IP OfflineWaitLimit 1
- Verify cluster status with the hastatus -sum command on both clusters.
# hastatus -sum
- Display the global cluster setup by executing the haclus -list command:
# haclus -list
clus1
clus2
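For reference, the following sketch mirrors steps 4 through 6 on the secondary site, using the primary-cluster name clus1 and its virtual IP address 10.10.10.101 from this example:
# hahb -add Icmp
# hahb -modify Icmp ClusterList clus1
# hahb -modify Icmp Arguments 10.10.10.101 -clus clus1
# haconf -dump -makero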
Example of heartbeat additions to the main.cf file on the primary site:
.
.
remotecluster clus2 (
    ClusterAddress = "10.11.10.102"
    )

heartbeat Icmp (
    ClusterList = { clus2 }
    Arguments @clus2 = { "10.11.10.102" }
    )

system sys1 (
    )
.
.
Example of heartbeat additions to the main.cf file on the secondary site:
.
.
remotecluster clus1 (
    ClusterAddress = "10.10.10.101"
    )

heartbeat Icmp (
    ClusterList = { clus1 }
    Arguments @clus1 = { "10.10.10.101" }
    )

system sys3 (
    )
.
.
See the Cluster Server Administrator's Guide for more details on configuring the required and optional attributes of the heartbeat object.
To configure global service groups for database resources
- Configure and enable global groups for databases and resources.
Configure VCS service groups at both sites.
Configure the replication agent at both sites.
For SF Oracle RAC, make the Oracle RAC service group a global service group so that it can fail over across clusters (see the command sketch after this procedure).
For SF Sybase CE, make the database service group (sybasece) a global service group so that it can fail over across clusters.
For example:
See Modifying the Cluster Server (VCS) configuration on the primary site.
- To test the HA/DR configuration with real data, schedule a planned migration of the application to the secondary site for testing purposes.
For example:
See “To migrate the role of primary site to the remote site”.
See “To migrate the role of new primary site back to the original primary site”.
- Upon successful testing, bring the environment into production.
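The following sketch illustrates, at the command level, how a database service group can be made global and how a planned migration can then be driven from either cluster. The group name oradb_grp, the cluster priorities, and the ClusterFailOverPolicy value are illustrative assumptions; substitute the group names from your own configuration and verify the exact syntax in the Cluster Server Administrator's Guide.
# haconf -makerw
# hagrp -modify oradb_grp ClusterList clus1 0 clus2 1
# hagrp -modify oradb_grp ClusterFailOverPolicy Manual
# haconf -dump -makero
For a planned migration of the group to the secondary site:
# hagrp -offline oradb_grp -any -clus clus1
# hagrp -online oradb_grp -any -clus clus2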
For more information about VCS replication agents:
See the Cluster Server Bundled Agents Guide
For complete details on using VVR in a shared disk environment:
See the Veritas InfoScale™ Replication Administrator's Guide.