Storage Foundation and High Availability Solutions 7.4.1 Solutions Guide - Windows
- Section I. Introduction
- Introducing Storage Foundation and High Availability Solutions
- Using the Solutions Configuration Center
- About the Solutions Configuration Center
- Starting the Solutions Configuration Center
- Options in the Solutions Configuration Center
- About launching wizards from the Solutions Configuration Center
- Remote and local access to Solutions wizards
- Solutions wizards and logs
- Workflows in the Solutions Configuration Center
- SFW best practices for storage
- Section II. Quick Recovery
- Section III. High Availability
- High availability: Overview
- About high availability
- About clusters
- How VCS monitors storage components
- Shared storage - if you use NetApp filers
- Shared storage - if you use SFW to manage cluster dynamic disk groups
- Shared storage - if you use Windows LDM to manage shared disks
- Non-shared storage - if you use SFW to manage dynamic disk groups
- Non-shared storage - if you use Windows LDM to manage local disks
- Non-shared storage - if you use VMware storage
- Deploying InfoScale Enterprise for high availability: New installation
- About the high availability solution
- Tasks for a new high availability (HA) installation - additional applications
- Reviewing the InfoScale installation requirements
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Configuring the storage hardware and network
- About installing the Veritas InfoScale products
- Configuring disk groups and volumes
- Configuring the cluster using the Cluster Configuration Wizard
- About modifying the cluster configuration
- About installing and configuring the application or server role
- Configuring the service group
- About configuring file shares
- About configuring IIS sites
- About configuring applications using the Application Configuration Wizard
- About configuring the Oracle service group using the wizard
- Enabling fast failover for disk groups (optional)
- Configuring the service group in a non-shared storage environment
- Verifying the cluster configuration
- Possible tasks after completing the configuration
- Adding nodes to a cluster
- Modifying the application service groups
- Adding DMP to a clustering configuration
- Section IV. Campus Clustering
- Introduction to campus clustering
- Deploying InfoScale Enterprise for campus cluster
- About the Campus Cluster solution
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Installing and configuring the hardware
- Configuring the storage hardware and network
- About installing the Veritas InfoScale products
- Configuring the cluster using the Cluster Configuration Wizard
- Creating disk groups and volumes
- About cluster disk groups and volumes
- Example disk group and volume configuration in campus cluster
- Considerations when creating disks and volumes for campus clusters
- Viewing the available disk storage
- Creating a dynamic disk group
- Adding disks to campus cluster sites
- Creating volumes for campus clusters
- Installing the application on cluster nodes
- Configuring service groups
- Verifying the cluster configuration
- Section V. Replicated Data Clusters
- Introduction to Replicated Data Clusters
- Deploying Replicated Data Clusters: New application installation
- Tasks for a new replicated data cluster installation - additional applications
- Notes and recommendations for cluster and application configuration
- Sample configuration
- Configuring the storage hardware and network
- About installing the Veritas InfoScale products
- Setting up security for Volume Replicator
- Configuring the cluster using the Cluster Configuration Wizard
- Configuring disk groups and volumes
- Installing and configuring the application or server role
- Configuring the service group
- About configuring file shares
- About configuring IIS sites
- About configuring applications using the Application Configuration Wizard
- Creating the primary system zone for the application service group
- Verifying the cluster configuration
- Creating a parallel environment in the secondary zone
- Adding nodes to a cluster
- Creating the Replicated Data Sets with the wizard
- Configuring a RVG service group for replication
- Creating the RVG service group
- Configuring the resources in the RVG service group for RDC replication
- Configuring the IP and NIC resources
- Configuring the VMDg or VMNSDg resources for the disk groups
- Adding the Volume Replicator RVG resources for the disk groups
- Linking the Volume Replicator RVG resources to establish dependencies
- Deleting the VMDg or VMNSDg resource from the application service group
- Configuring the RVG Primary resources
- Configuring the primary system zone for the RVG service group
- Setting a dependency between the service groups
- Adding the nodes from the secondary zone to the RDC
- Adding the nodes from the secondary zone to the RVG service group
- Configuring secondary zone nodes in the RVG service group
- Configuring the RVG service group NIC resource for fail over (VMNSDg only)
- Configuring the RVG service group IP resource for failover
- Configuring the RVG service group VMNSDg resources for fail over
- Adding the nodes from the secondary zone to the application service group
- Configuring the zones in the application service group
- Configuring the application service group IP resource for fail over (VMNSDg only)
- Configuring the application service group NIC resource for fail over (VMNSDg only)
- Verifying the RDC configuration
- Additional instructions for GCO disaster recovery
- Section VI. Disaster Recovery
- Disaster recovery: Overview
- Deploying disaster recovery: New application installation
- Tasks for a new disaster recovery installation - additional applications
- Tasks for setting up DR in a non-shared storage environment
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Configuring the storage hardware and network
- About managing disk groups and volumes
- Setting up the secondary site: Configuring SFW HA and setting up a cluster
- Verifying that your application or server role is configured for HA at the primary site
- Setting up your replication environment
- Assigning user privileges (secure clusters only)
- About configuring disaster recovery with the DR wizard
- Cloning the storage on the secondary site using the DR wizard (Volume Replicator replication option)
- Creating temporary storage on the secondary site using the DR wizard (array-based replication)
- Installing and configuring the application or server role (secondary site)
- Cloning the service group configuration from the primary site to the secondary site
- Configuring the application service group in a non-shared storage environment
- Configuring replication and global clustering
- Creating the replicated data sets (RDS) for Volume Replicator replication
- Creating the Volume Replicator RVG service group for replication
- Configuring the global cluster option for wide-area failover
- Verifying the disaster recovery configuration
- Establishing secure communication within the global cluster (optional)
- Adding multiple DR sites (optional)
- Possible task after creating the DR environment: Adding a new failover node to a Volume Replicator environment
- Maintaining: Normal operations and recovery procedures (Volume Replicator environment)
- Recovery procedures for service group dependencies
- Testing fault readiness by running a fire drill
- About disaster recovery fire drills
- About the Fire Drill Wizard
- About post-fire drill scripts
- Tasks for configuring and running fire drills
- Prerequisites for a fire drill
- Preparing the fire drill configuration
- System Selection panel details
- Service Group Selection panel details
- Secondary System Selection panel details
- Fire Drill Service Group Settings panel details
- Disk Selection panel details
- Hitachi TrueCopy Path Information panel details
- HTCSnap Resource Configuration panel details
- SRDFSnap Resource Configuration panel details
- Fire Drill Preparation panel details
- Running a fire drill
- Re-creating a fire drill configuration that has changed
- Restoring the fire drill system to a prepared state
- Deleting the fire drill configuration
- Considerations for switching over fire drill service groups
- Section VII. Microsoft Clustering Solutions
- Microsoft clustering solutions overview
- Deploying SFW with Microsoft failover clustering
- Tasks for deploying InfoScale Storage with Microsoft failover clustering
- Reviewing the configuration
- Configuring the storage hardware and network
- Establishing a Microsoft failover cluster
- Tasks for installing InfoScale Foundation or InfoScale Storage for Microsoft failover clustering
- Creating SFW disk groups and volumes
- Creating a group for the application in the failover cluster
- Installing the application on cluster nodes
- Completing the setup of the application group in the failover cluster
- Implementing a dynamic quorum resource
- Verifying the cluster configuration
- Configuring InfoScale Storage in an existing Microsoft Failover Cluster
- Deploying SFW with Microsoft failover clustering in a campus cluster
- Tasks for deploying InfoScale Storage with Microsoft failover clustering in a campus cluster
- Reviewing the configuration
- Configuring the storage hardware and network
- Establishing a Microsoft failover cluster
- Tasks for installing InfoScale Foundation or InfoScale Storage for Microsoft failover clustering
- Creating disk groups and volumes
- Implementing a dynamic quorum resource
- Setting up a group for the application in the failover cluster
- Installing the application on the cluster nodes
- Completing the setup of the application group in the cluster
- Verifying the cluster configuration
- Deploying SFW and VVR with Microsoft failover clustering
- Tasks for deploying InfoScale Storage and Volume Replicator with Microsoft failover clustering
- Part 1: Setting up the cluster on the primary site
- Reviewing the prerequisites and the configuration
- Installing and configuring the hardware
- Installing Windows and configuring network settings
- Establishing a Microsoft failover cluster
- Installing InfoScale Storage (primary site)
- Setting up security for Volume Replicator
- Creating SFW disk groups and volumes
- Completing the primary site configuration
- Part 2: Setting up the cluster on the secondary site
- Part 3: Adding the Volume Replicator components for replication
- Part 4: Maintaining normal operations and recovery procedures
- Section VIII. Server Consolidation
- Server consolidation overview
- Server consolidation configurations
- Typical server consolidation configuration
- Server consolidation configuration 1 - many to one
- Server consolidation configuration 2 - many to two: Adding clustering and DMP
- About this configuration
- Adding the new hardware
- Establishing the Microsoft failover cluster
- Adding SFW support to the cluster
- Setting up Microsoft failover cluster groups for the applications
- Installing applications on the second computer
- Completing the setup of the application group in the Microsoft cluster
- Changing the quorum resource to the dynamic quorum resource
- Verifying the cluster configuration
- Enabling DMP
- SFW features that support server consolidation
- Appendix A. Using Veritas AppProtect for vSphere
- About Just In Time Availability
- Prerequisites
- Setting up a plan
- Deleting a plan
- Managing a plan
- Viewing the history tab
- Limitations of Just In Time Availability
- Getting started with Just In Time Availability
- Supported operating systems and configurations
- Viewing the properties
- Log files
- Plan states
- Troubleshooting Just In Time Availability
Configuring VCS components
Applications configured using GenericService or Process resources may require network components or registry replication resources. You can configure these VCS components only for service groups created using the wizard.
Note:
Configure these components only after configuring all application resources. The wizard creates a service group after these components are configured. To add more application resources, you must rerun the wizard in the Modify mode.
To configure VCS components
- In the Application Options panel, click Configure Other Components.
- Select the VCS component to be configured for your applications.
The available options are as follows:
Registry Replication Component: Select this option to configure registry replication for your application. To configure a Registry Replication resource, proceed to step 3.
Network Component: Select this option to configure network components for your application. If you wish to configure a virtual computer name, also check the Lanman Component check box. To configure a network resource, proceed to step 5.
The wizard does not enable the Lanman Component check box unless the Network Component check box is checked.
- Specify the registry keys to be replicated.
The RegistryReplication dialog box appears only if you chose to configure the Registry Replication Component in the Application Component dialog box.
Specify the directory on the shared disk in which the registry changes are logged.
Click Add.
In the Registry Keys dialog box, select the registry key to be replicated.
Click OK. The selected registry key is added to the Registry KeyList box.
The following option applies to VCS for Windows only.
Check the Configure NetApp SnapMirror Resource(s) check box if you want to set up a disaster recovery configuration. The SnapMirror resource is used to monitor replication between filers at the primary and the secondary site, in a disaster recovery configuration. Note that you must configure the SnapMirror resource only after you have configured the cluster at the secondary site.
Click Next.
If you chose Network Component from the Application Component dialog box, proceed to the next step. Otherwise, proceed to step 6.
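For reference, the registry replication settings you enter in the wizard correspond to a RegRep resource definition in the cluster configuration file (main.cf). The following is a minimal illustrative sketch; the resource names, the mount resource, the replication directory, and the registry key are hypothetical values, not output the wizard generates:

```
RegRep App_RegRep (
    MountResName = App_MountV
    ReplicationDirectory = "\\RegRep\\App"
    Keys = { "HKLM\\SOFTWARE\\MyApp" = "" }
)
```

ReplicationDirectory is the directory on the shared disk in which registry changes are logged, and Keys lists the registry keys to be replicated, matching the entries you add in the Registry Keys dialog box.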
- This step applies to VCS for Windows only.
On the Initiator Selection panel, select the initiator for the virtual disk from the list of available initiators displayed for each cluster node, and then click Next.
If you are configuring multipath I/O (MPIO) over Fibre Channel (FC), you must select at least two FC initiators for each cluster node. Note that the node from which you run this wizard already has an initiator selected by default. This is the initiator that was specified when you connected the LUNs to this cluster node.
- The Virtual Computer Configuration dialog box appears only if you chose to configure the Network Component in the Application Component dialog box.
Specify the network related information as follows:
Select IPv4 to configure an IPv4 address for the virtual server.
In the Virtual IP Address field, type a unique virtual IPv4 address for the virtual server.
In the Subnet Mask field, type the subnet to which the virtual IPv4 address belongs.
Select IPv6 to configure an IPv6 address for the virtual server. The IPv6 option is disabled if the network does not support IPv6.
Select the prefix from the drop-down list. The wizard uses the prefix and automatically generates an IPv6 address that is valid and unique on the network.
In the Virtual Server Name field, enter a unique virtual computer name by which the node will be visible to the other nodes.
The virtual name must not exceed 15 characters. Note that the Virtual Computer Name text box is displayed only if you chose to configure the Lanman Component in the Application Component dialog box.
For each system in the cluster, select the public network adapter name. To view the adapters associated with a system, click the Adapter Display Name field and click the arrow.
Note that the wizard displays all TCP/IP enabled adapters on a system, including the private network adapters, if applicable. Ensure that you select the adapters assigned to the public network, not the private network.
Click Advanced and then specify additional details for the Lanman resource as follows:
Check AD Update required to enable the Lanman resource to update the Active Directory with the virtual name.
This sets the Lanman agent attributes ADUpdateRequired and ADCriticalForOnline to true.
In the Organizational Unit field, type the distinguished name of the Organizational Unit for the virtual server in the format
CN=containername,DC=domainname,DC=com. To browse for an OU, click ... (ellipsis button) and search for the OU using the Windows Find Organization Units dialog box. By default, the Lanman resource adds the virtual server to the default container "Computers."
The user account for VCS Helper service must have adequate privileges on the specified container to create and update computer accounts.
Click OK.
Click Next.
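The network settings collected in this panel map to IP, NIC, and Lanman resource definitions in the cluster configuration file (main.cf). The following is a minimal illustrative sketch; the resource names, MAC address, and IP values are hypothetical, while the ADUpdateRequired and ADCriticalForOnline attributes are the ones set by the Advanced dialog described above:

```
NIC App_NIC (
    MACAddress @SYSTEM1 = "00-11-22-33-44-55"
)

IP App_IP (
    Address = "10.0.0.100"
    SubNetMask = "255.255.255.0"
    MACAddress @SYSTEM1 = "00-11-22-33-44-55"
)

Lanman App_Lanman (
    VirtualName = "APPVSERVER"
    IPResName = App_IP
    ADUpdateRequired = 1
    ADCriticalForOnline = 1
)

App_Lanman requires App_IP
App_IP requires App_NIC
```

The "requires" statements express the dependency chain the wizard creates: the virtual computer name comes online only after the virtual IP address, which in turn depends on the network adapter.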
- If you do not want to add any more resources, proceed to configuring the service group.