InfoScale™ 9.0 Disaster Recovery Implementation Guide - Solaris
- Section I. Introducing Storage Foundation and High Availability Solutions for disaster recovery
- About supported disaster recovery scenarios
- About disaster recovery scenarios
- About campus cluster configuration
- About replicated data clusters
- About global clusters
- How VCS global clusters work
- User privileges for cross-cluster operations
- VCS global clusters: The building blocks
- Visualization of remote cluster objects
- About global service groups
- About global cluster management
- About serialization - The Authority attribute
- About resiliency and "Right of way"
- VCS agents to manage wide-area failover
- About the Steward process: Split-brain in two-cluster global clusters
- Secure communication in global clusters
- Disaster recovery feature support for components in the Veritas InfoScale product suite
- Virtualization support for InfoScale 9.0 products in replicated environments
- Planning for disaster recovery
- Section II. Implementing campus clusters
- Setting up campus clusters for VCS and SFHA
- About setting up a campus cluster configuration
- Preparing to set up a campus cluster configuration
- Configuring I/O fencing to prevent data corruption
- Configuring VxVM disk groups for campus cluster configuration
- Configuring VCS service group for campus clusters
- Setting up campus clusters for VxVM and VCS using Veritas InfoScale Operations Manager
- Fire drill in campus clusters
- About the DiskGroupSnap agent
- About running a fire drill in a campus cluster
- Setting up campus clusters for SFCFSHA, SFRAC
- About setting up a campus cluster for disaster recovery for SFCFSHA or SF Oracle RAC
- Preparing to set up a campus cluster in a parallel cluster database environment
- Configuring I/O fencing to prevent data corruption
- Configuring VxVM disk groups for a campus cluster in a parallel cluster database environment
- Configuring VCS service groups for a campus cluster for SFCFSHA and SF Oracle RAC
- Tuning guidelines for parallel campus clusters
- Best practices for a parallel campus cluster
- Section III. Implementing replicated data clusters
- Configuring a replicated data cluster using VVR
- Configuring a replicated data cluster using third-party replication
- About setting up a replicated data cluster configuration using third-party replication
- About typical replicated data cluster configuration using third-party replication
- About setting up third-party replication
- Configuring the service groups for third-party replication
- Fire drill in replicated data clusters using third-party replication
- Section IV. Implementing global clusters
- Configuring global clusters for VCS and SFHA
- Installing and configuring Cluster Server
- Setting up VVR replication
- About configuring VVR replication
- Best practices for setting up replication
- Creating a Replicated Data Set
- Creating a Primary RVG of an RDS
- Adding a Secondary to an RDS
- Changing the replication settings for a Secondary
- Synchronizing the Secondary and starting replication
- Starting replication when the data volumes are zero initialized
- Setting up third-party replication
- Configuring clusters for global cluster setup
- Configuring service groups for global cluster setup
- Fire drill in global clusters
- Configuring a global cluster with Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- About global clusters
- About replication for parallel global clusters using Storage Foundation and High Availability (SFHA) Solutions
- About setting up a global cluster environment for parallel clusters
- Configuring the primary site
- Configuring the secondary site
- Setting up replication between parallel global cluster sites
- Testing a parallel global cluster configuration
- Configuring global clusters with VVR and Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- About configuring a parallel global cluster using Volume Replicator (VVR) for replication
- Setting up replication on the primary site using VVR
- Setting up replication on the secondary site using VVR
- Starting replication of the primary site database volume to the secondary site using VVR
- Configuring Cluster Server to replicate the database volume using VVR
- Replication use cases for global parallel clusters
- Section V. Implementing disaster recovery configurations in virtualized environments
- Section VI. Reference
- Appendix A. Sample configuration files
- Sample Storage Foundation for Oracle RAC configuration files
- About sample main.cf files for Storage Foundation (SF) for Oracle RAC
- About sample main.cf files for Storage Foundation (SF) for Sybase ASE CE
- Sample main.cf for a basic Sybase ASE CE cluster configuration under VCS control with shared mount point on CFS for Sybase binary installation
- Sample main.cf for a basic Sybase ASE CE cluster configuration with local mount point on VxFS for Sybase binary installation
- Sample main.cf for a primary CVM VVR site
- Sample main.cf for a secondary CVM VVR site
Configuring the Steward process (optional)
In a two-cluster global cluster setup, you can configure a Steward to prevent potential split-brain conditions, provided that the required network infrastructure is in place.
See About the Steward process: Split-brain in two-cluster global clusters.
To configure the Steward process for clusters not running in secure mode
- Identify a system that will host the Steward process.
To configure the Steward in a dual-stack configuration, enable both IPv4 and IPv6 on the system that will host the Steward process, and plumb both the IPv4 and IPv6 addresses on that system.
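For example, on Solaris 11 both addresses can typically be plumbed with the ipadm utility. A minimal sketch, assuming a hypothetical interface net0 and example addresses:
# ipadm create-addr -T static -a 10.212.100.165/24 net0/stewardv4
# ipadm create-addr -T static -a 2001:db8::a5/64 net0/stewardv6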
- Make sure that both clusters can connect to the system through a ping command.
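For example, run the following from a node in each cluster, assuming the example Steward address 10.212.100.165 used later in this procedure:
# ping 10.212.100.165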
- Install the VRTSvcs, VRTSperl, and VRTSvlic packages:
# pkg set-publisher -g <path to p5p package> Veritas
# pkg install --accept VRTSperl VRTSvcs VRTSvlic
# pkg unset-publisher Veritas
- In both clusters, set the Stewards attribute to the IP address of the system running the Steward process.
The Stewards attribute must contain the address of the Steward server in the address family that the cluster node uses: set it to the Steward server's IPv4 address when the cluster node is configured with IPv4, and to its IPv6 address when the node is configured with IPv6.
For example:
cluster cluster1938 (
    UserNames = { admin = gNOgNInKOjOOmWOiNL }
    ClusterAddress = "10.182.147.19"
    Administrators = { admin }
    CredRenewFrequency = 0
    CounterInterval = 5
    Stewards = { "10.212.100.165", "10.212.101.162" }
    )
- On the system designated to host the Steward, start the Steward process:
# steward -start
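As an alternative to editing main.cf directly, the Stewards attribute can typically be set on a running cluster from the command line. A minimal sketch, assuming the example addresses shown above:
# haconf -makerw
# haclus -modify Stewards 10.212.100.165 10.212.101.162
# haconf -dump -makero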
To configure the Steward process for clusters running in secure mode
- Verify that the prerequisites for securing Steward communication are met.
To verify that the wac process runs in secure mode, do the following:
Check the value of the wac resource attributes:
# hares -value wac StartProgram
The value must be "/opt/VRTSvcs/bin/wacstart - secure."
# hares -value wac MonitorProcesses
The value must be "/opt/VRTSvcs/bin/wac - secure."
List the wac process:
# ps -ef | grep wac
The wac process must run as "/opt/VRTSvcs/bin/wac -secure".
- Identify a system that will host the Steward process.
- Make sure that both clusters can connect to the system through a ping command.
- Perform this step only if VCS is not already installed on the Steward system. If VCS is already installed, skip to the next step (generating credentials for the Steward).
If the cluster UUID is not configured, configure it by using the /opt/VRTSvcs/bin/uuidconfig.pl utility.
On the system that is designated to run the Steward process, run the installvcs -securityonenode command.
The installer prompts for a confirmation if VCS is not configured or if VCS is not running on all nodes of the cluster. Enter y when the installer prompts whether you want to continue configuring security.
For more information about the -securityonenode option, see the Cluster Server Configuration and Upgrade Guide.
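For reference, a minimal sketch of configuring the cluster UUID on the Steward system, assuming a hypothetical host name steward-sys:
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -configure steward-sys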
- Generate credentials for the Steward using /opt/VRTSvcs/bin/steward_secure.pl, or perform the following steps:
# unset EAT_DATA_DIR
# unset EAT_HOME_DIR
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat createpd -d VCS_SERVICES -t ab
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat addprpl -t ab -d VCS_SERVICES -p STEWARD -s password
# mkdir -p /var/VRTSvcs/vcsauth/data/STEWARD
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/STEWARD
# /opt/VRTSvcs/bin/vcsat setuptrust -s high -b localhost:14149
- Set up trust on all nodes of the GCO clusters:
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/WAC
# vcsat setuptrust -b <IP_of_Steward>:14149 -s high
- Set up trust on the Steward:
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/STEWARD
# vcsat setuptrust -b <VIP_of_remote_cluster1>:14149 -s high
# vcsat setuptrust -b <VIP_of_remote_cluster2>:14149 -s high
- In both clusters, set the Stewards attribute to the IP address of the system running the Steward process.
For example:
cluster cluster1938 (
    UserNames = { admin = gNOgNInKOjOOmWOiNL }
    ClusterAddress = "10.182.147.19"
    Administrators = { admin }
    CredRenewFrequency = 0
    CounterInterval = 5
    Stewards = { "10.212.100.165", "10.212.101.162" }
    )
- On the system designated to run the Steward, start the Steward process:
# /opt/VRTSvcs/bin/steward -start -secure
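To confirm that the Steward started in secure mode, you can list the process, mirroring the wac check earlier in this procedure:
# ps -ef | grep steward
The output should show the steward process running with the -secure option.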
To stop the Steward process
- To stop the Steward process that is not configured in secure mode, open a new command window and run the following command:
# steward -stop
- To stop the Steward process running in secure mode, open a new command window and run the following command:
# steward -stop -secure