InfoScale™ 9.0 Disaster Recovery Implementation Guide - Solaris
Starting replication of the primary site database volume to the secondary site using VVR
When you have both the primary and secondary sites set up for replication, you can start replication from the primary site to the secondary site.
Start with the default replication settings:
- Mode of replication: synchronous=off
- Latency protection: latencyprot=off
- Storage Replicator Log (SRL) overflow protection: srlprot=autodcm
- Packet size: packet_size=8400
- Network protocol: protocol=TCP
- Method of initial synchronization:
  - Automatic synchronization
  - Full synchronization with Storage Checkpoint
For guidelines on modifying these settings and information on choosing the method of initial synchronization:
See the Veritas InfoScale™ Replication Administrator's Guide.
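If you need to change any of these defaults, you can typically do so from the primary site with the vradmin set command. The lines below are only a sketch: they assume the example disk group (dbdatadg), RVG (dbdata_rvg), and secondary host name (clus2) used later in this section, and the attribute values shown (srlprot=dcm, latencyprot=override) are illustrative rather than recommendations. Confirm the attributes and values for your release in the Replication Administrator's Guide before applying them.
# vradmin -g dbdatadg set dbdata_rvg clus2 srlprot=dcm
# vradmin -g dbdatadg set dbdata_rvg clus2 latencyprot=override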
Use the vradmin command to start replication, that is, the transfer of data from the primary site to the secondary site over the network. Because the cluster on the secondary site is addressed by a single virtual host name and the RDS contains only one Secondary, the sec_host argument is optional; the examples in this section include it for clarity.
To start replication using automatic synchronization
- From the primary site, use the following command to automatically synchronize the Replicated Volume Group (RVG) on the secondary site:
vradmin -g disk_group -a startrep pri_rvg sec_host
where:
disk_group is the disk group on the primary site that VVR will replicate
pri_rvg is the name of the Replicated Volume Group (RVG) on the primary site
sec_host is the virtual host name for the secondary site
For example:
# vradmin -g dbdatadg -a startrep dbdata_rvg clus2
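Automatic synchronization can take a considerable time for large data volumes. While it runs, you can check progress from the primary site with the repstatus command that is also used in the verification step later in this section; this is a convenience check, and the exact fields in the output vary by release. The command assumes the same example names as above:
# vradmin -g dbdatadg repstatus dbdata_rvg
Once the Secondary has caught up, the output should report the data on the Secondary as consistent and up to date.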
Use the vradmin command with the Storage Checkpoint option to start replication using full synchronization with Storage Checkpoint.
To start replication using full synchronization with Storage Checkpoint
- From the primary site, synchronize the RVG on the secondary site with full synchronization (using the -c checkpoint option):
vradmin -g disk_group -full -c ckpt_name syncrvg pri_rvg sec_host
where:
disk_group is the disk group on the primary site that VVR will replicate
ckpt_name is the name of the Storage Checkpoint on the primary site
pri_rvg is the name of the RVG on the primary site
sec_host is the virtual host name for the secondary site
For example:
# vradmin -g dbdatadg -full -c dbdata_ckpt syncrvg dbdata_rvg clus2
- To start replication after full synchronization, enter the following command:
# vradmin -g dbdatadg -c dbdata_ckpt startrep dbdata_rvg clus2
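The Storage Checkpoint named with -c must still be valid when you start replication; it can become unusable if the SRL overflows before startrep runs. As a quick check, again assuming the example disk group and RVG names used above, you can list the Storage Checkpoints recorded for the Primary RVG:
# vxrvg -g dbdatadg cplist dbdata_rvg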
Verify that replication is functioning properly.
To verify replication status
- Check the status of VVR replication:
# vradmin -g disk_group_name repstatus rvg_name
- Review the flags output for the status. The flags should show that the RLINK is connected and consistent. For example:
# vxprint -g dbdatadg -l rlk_clus2_dbdata_rvg
Rlink: rlk_clus2_dbdata_rvg
info: timeout=500 packet_size=8400 rid=0.1078
      latency_high_mark=10000 latency_low_mark=9950
      bandwidth_limit=none
state: state=ACTIVE
      synchronous=off latencyprot=off srlprot=autodcm
.
.
protocol: UDP/IP
checkpoint: dbdata_ckpt
flags: write enabled attached consistent connected asynchronous
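As a supplementary check, you can ask the primary site how much data, if any, still has to reach the Secondary. The command below uses the RLINK name shown in the vxprint example above and the example disk group name; treat it as an optional sketch rather than a required step:
# vxrlink -g dbdatadg status rlk_clus2_dbdata_rvg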