Storage Foundation and High Availability Solutions 7.4 HA and DR Solutions Guide for Enterprise Vault - Windows
- Introducing SFW HA for EV
- About clustering solutions with InfoScale products
- About high availability
- How a high availability solution works
- How VCS monitors storage components
- Shared storage - if you use NetApp filers
- Shared storage - if you use SFW to manage cluster dynamic disk groups
- Shared storage - if you use Windows LDM to manage shared disks
- Non-shared storage - if you use SFW to manage dynamic disk groups
- Non-shared storage - if you use Windows LDM to manage local disks
- Non-shared storage - if you use VMware storage
- About replication
- About disaster recovery
- What you can do with a disaster recovery solution
- Typical disaster recovery configuration
- Configuring high availability for Enterprise Vault with InfoScale Enterprise
- Reviewing the HA configuration
- Reviewing the disaster recovery configuration
- High availability (HA) configuration (New Server)
- Following the HA workflow in the Solutions Configuration Center
- Disaster recovery configuration
- Notes and recommendations for cluster and application configuration
- Configuring the storage hardware and network
- Configuring cluster disk groups and volumes for Enterprise Vault
- About cluster disk groups and volumes
- Prerequisites for configuring cluster disk groups and volumes
- Considerations for a fast failover configuration
- Considerations for disks and volumes for campus clusters
- Considerations for volumes for a Volume Replicator configuration
- Sample disk group and volume configuration
- Viewing the available disk storage
- Creating a cluster disk group
- Creating volumes
- About managing disk groups and volumes
- Importing a disk group and mounting a volume
- Unmounting a volume and deporting a disk group
- Adding drive letters to mount the volumes
- Deporting the cluster disk group
- Configuring the cluster
- Adding a node to an existing VCS cluster
- Verifying your primary site configuration
- Guidelines for installing InfoScale Enterprise and configuring the cluster on the secondary site
- Setting up your replication environment
- Setting up security for Volume Replicator
- Assigning user privileges (secure clusters only)
- Configuring disaster recovery with the DR wizard
- Cloning the storage on the secondary site using the DR wizard (Volume Replicator replication option)
- Installing and configuring Enterprise Vault on the secondary site
- Configuring Volume Replicator replication and global clustering
- Configuring global clustering only
- Setting service group dependencies for disaster recovery
- Verifying the disaster recovery configuration
- Establishing secure communication within the global cluster (optional)
- Adding multiple DR sites (optional)
- Recovery procedures for service group dependencies
- Using the Solutions Configuration Center
- About the Solutions Configuration Center
- Starting the Solutions Configuration Center
- Options in the Solutions Configuration Center
- About launching wizards from the Solutions Configuration Center
- Remote and local access to Solutions wizards
- Solutions wizards and logs
- Workflows in the Solutions Configuration Center
- Installing and configuring Enterprise Vault for failover
- Installing Enterprise Vault
- Configuring the Enterprise Vault service group
- Configuring Enterprise Vault Server in a cluster environment
- Setting service group dependencies for high availability
- Verifying the Enterprise Vault cluster configuration
- Setting up Enterprise Vault
- Considerations when modifying an EV service group
- Appendix A. Using Veritas AppProtect for vSphere
- About Just In Time Availability
- Prerequisites
- Setting up a plan
- Deleting a plan
- Managing a plan
- Viewing the history tab
- Limitations of Just In Time Availability
- Getting started with Just In Time Availability
- Supported operating systems and configurations
- Viewing the properties
- Log files
- Plan states
- Troubleshooting Just In Time Availability
Establishing secure communication within the global cluster (optional)
A global cluster is created in non-secure mode by default. You may continue to allow the global cluster to run in non-secure mode or choose to establish secure communication between clusters.
The following prerequisites are required for establishing secure communication within a global cluster:
The clusters within the global cluster must be running in secure mode.
You must have Administrator privileges for the domain.
The following information is required for adding secure communication to a global cluster:
The active host name or IP address of each cluster in the global configuration.
The user name and password of the administrator for each cluster in the configuration.
If the local clusters do not point to the same root broker, the host name and port address of each root broker.
Adding secure communication involves the following tasks:
Taking the ClusterService-Proc (wac) resource in the ClusterService group offline on the clusters in the global environment.
Adding the -secure option to the StartProgram attribute on each node.
Establishing trust between root brokers if the local clusters do not point to the same root broker.
Bringing the ClusterService-Proc (wac) resource online on the clusters in the global cluster.
To take the ClusterService-Proc (wac) resource offline on all clusters
- From Cluster Monitor, log on to a cluster in the global cluster.
- In the Service Groups tab of the Cluster Explorer configuration tree, expand the ClusterService group and the Process agent.
- Right-click the ClusterService-Proc resource, click Offline, and click the appropriate system from the menu.
- Repeat all the previous steps for the additional clusters in the global cluster.
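If you prefer the command line, the same step can be performed with the VCS hares command. A sketch, assuming a cluster node named SYSTEM1 (substitute your own system name):

hares -offline ClusterService-Proc -sys SYSTEM1

Run the command on each cluster in the global cluster, specifying the system on which the resource is currently online.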
To add the -secure option to the StartProgram resource
- In the Service Groups tab of the Cluster Explorer configuration tree, right-click the ClusterService-Proc resource under the Process type in the ClusterService group.
- Click View > Properties view.
- Click the Edit icon to edit the StartProgram attribute.
- In the Edit Attribute dialog box, append the -secure switch to the path of the executable in the Scalar Value field.
For example:
"C:\Program Files\Veritas\Cluster Server\bin\wac.exe" -secure
- Repeat the previous step for each system in the cluster.
- Click OK to close the Edit Attribute dialog box.
- Click the Save and Close Configuration icon in the toolbar.
- Repeat all the previous steps for each cluster in the global cluster.
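The StartProgram attribute can also be modified from the command line with the VCS haconf and hares commands. A sketch, assuming a node named SYSTEM1 and the default wac.exe installation path (adjust both for your environment); the -sys option applies if the attribute is localized per system in your configuration, as in the procedure above:

haconf -makerw
hares -modify ClusterService-Proc StartProgram "\"C:\Program Files\Veritas\Cluster Server\bin\wac.exe\" -secure" -sys SYSTEM1
haconf -dump -makero

Run the hares -modify command once for each system in the cluster, then save and close the configuration with haconf -dump -makero.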
To establish trust between root brokers if there is more than one root broker
- Establishing trust between root brokers is required only if the local clusters do not point to the same root broker. Log on to the root broker for each cluster and set up trust with the other root brokers in the global cluster.
The complete syntax of the command is:
vssat setuptrust --broker host:port --securitylevel [low|medium|high] [--hashfile fileName | --hash rootHashInHex]
For example, to establish trust with a low security level in a global cluster consisting of Cluster1 pointing to RB1 and Cluster2 pointing to RB2, use the following commands:
From RB1, type:
vssat setuptrust --broker RB2:14141 --securitylevel low
From RB2, type:
vssat setuptrust --broker RB1:14141 --securitylevel low
To bring the ClusterService-Proc (wac) resource online on all clusters
- In the Service Groups tab of the Cluster Explorer configuration tree, expand the ClusterService group and the Process agent.
- Right-click the ClusterService-Proc resource, click Online, and click the appropriate system from the menu.
- Repeat all the previous steps for the additional clusters in the global cluster.
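The equivalent command-line step, again assuming a node named SYSTEM1 (substitute your own system name):

hares -online ClusterService-Proc -sys SYSTEM1

Run the command on each cluster in the global cluster, specifying the system on which the resource should come online.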