Storage Foundation and High Availability Solutions 7.4.1 Solutions Guide - Windows

Adding nodes to a cluster
You use the VCS Cluster Configuration Wizard (VCW) to add one or more nodes to an existing cluster. If you are setting up a Replicated Data Cluster, use the wizard to add the systems in the secondary zone (zone1) to the existing cluster.
Prerequisites for adding a node to an existing cluster are as follows:
- Verify that the logged-on user has VCS cluster administrator privileges. 
- The logged-on user must be a local administrator on the system where you run the wizard. 
- Verify that Command Server is running on all nodes in the cluster. Select Services on the Administrative Tools menu and verify that the Veritas Command Server service is started. 
- Verify that the high availability daemon (HAD) is running on the node on which you run the wizard. Open the Services window and verify that the Veritas High Availability Engine service is running. (You can also check both services from PowerShell, as shown in the sketch after this list.) 
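As a quick check, you can query both services from PowerShell instead of opening the Services window. This is a minimal sketch, assuming the services use their default display names ("Veritas Command Server" and "Veritas High Availability Engine"); the node names are placeholders for your own:

    # Check the two Veritas services on the local node.
    Get-Service -DisplayName "Veritas Command Server", "Veritas High Availability Engine" |
        Select-Object DisplayName, Status

    # Check the Command Server on every cluster node (replace the placeholder names).
    "NODE1", "NODE2" | ForEach-Object {
        Get-Service -ComputerName $_ -DisplayName "Veritas Command Server" |
            Select-Object MachineName, DisplayName, Status
    }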
To add a node to a VCS cluster
- Start the VCS Cluster Configuration Wizard. Click Start > All Programs > Veritas > Veritas Cluster Server > Configuration Tools > Cluster Configuration Wizard. Run the wizard from the node to be added or from a node in the cluster. The node that is being added should be part of the domain to which the cluster belongs. 
- Read the information on the Welcome panel and click Next.
- On the Configuration Options panel, click Cluster Operations and click Next.
- On the Domain Selection panel, select or type the name of the domain in which the cluster resides and select the discovery options.
  To discover information about all the systems and users in the domain, do the following:
  - Clear the Specify systems and users manually check box.
  - Click Next.
  - Proceed to step 8.
  To specify systems and user names manually (recommended for large domains), do the following:
  - Check the Specify systems and users manually check box. Additionally, you may instruct the wizard to retrieve a list of systems and users in the domain by selecting the appropriate check boxes.
  - Click Next.
  - If you chose to retrieve the list of systems, proceed to step 6. Otherwise, proceed to the next step.
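If you are unsure which domain a system belongs to, the following one-line sketch, run on the node being added, reports it; remember that the node and the cluster must be in the same domain:

    # Report the domain membership of the local system.
    (Get-WmiObject -Class Win32_ComputerSystem).Domain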
 
- On the System Selection panel, complete the following and click Next:
  - Type the name of an existing node in the cluster and click Add.
  - Type the name of the system to be added to the cluster and click Add.
  If you specify only one node of an existing cluster, the wizard discovers all nodes for that cluster. To add a node to an existing cluster, you must specify a minimum of two nodes: one that is already part of the cluster and one that is to be added. Proceed to step 8.
- On the System Selection panel, specify the systems to be added and the nodes for the cluster to which you are adding the systems. Enter the system name and click Add to add the system to the Selected Systems list. Alternatively, you can select the systems from the Domain Systems list and click the right-arrow icon. If you specify only one node of an existing cluster, the wizard discovers all nodes for that cluster. To add a node to an existing cluster, you must specify a minimum of two nodes: one that is already part of the cluster and one that is to be added. To confirm the current cluster membership before you continue, see the sketch that follows.
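Before you specify the node names, you can confirm the current cluster membership from any existing cluster node. This is a sketch using the VCS command-line utilities that are installed with the product; run it from an elevated command prompt or PowerShell session:

    hasys -list          # lists the systems that are currently cluster members
    hastatus -summary    # shows the state of each system and service group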
- The System Report panel displays the validation status, whether Accepted or Rejected, of all the systems you specified earlier. A system can be rejected for any of the following reasons:
  - The system does not respond to a ping request.
  - WMI access is disabled on the system.
  - The wizard is unable to retrieve information about the system's architecture or operating system.
  - VCS is either not installed on the system, or the installed version of VCS differs from the version on the system where you are running the wizard.
  Click a system name to see the validation details. If you wish to include a rejected system, rectify the error based on the reason for rejection and then run the wizard again. (The sketch after this step shows how to run the ping and WMI checks manually.) Click Next to proceed.
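You can reproduce the wizard's ping and WMI checks manually to troubleshoot a rejected system. This is a sketch, with "NODE3" as a hypothetical name for the system being validated:

    # Hypothetical name of the system that the wizard rejected.
    $node = "NODE3"

    # Ping check; systems that do not respond to ping are rejected.
    Test-Connection -ComputerName $node -Count 2 -Quiet

    # WMI check; a failure here usually means WMI access is disabled or blocked
    # by a firewall. The query also returns the operating system and
    # architecture details that the wizard needs.
    Get-WmiObject -Class Win32_OperatingSystem -ComputerName $node |
        Select-Object CSName, Caption, OSArchitecture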
- On the Cluster Configuration Options panel, click Edit Existing Cluster and click Next.
- On the Cluster Selection panel, select the cluster to be edited and click Next. If you chose to specify the systems manually in step 4, only the clusters configured with the specified systems are displayed.
- On the Edit Cluster Options panel, click Add Nodes and click Next. In the Cluster User Information dialog box, type the user name and password for a user with administrative privileges to the cluster and click OK. The Cluster User Information dialog box appears only when you add a node to a cluster with VCS user privileges (that is, a cluster that is not a secure cluster).
- On the Cluster Details panel, check the check boxes next to the systems to be added to the cluster and click Next. The right pane lists nodes that are part of the cluster. The left pane lists systems that can be added to the cluster.
- The wizard validates the selected systems for cluster membership. After the nodes have been validated, click Next. If a node fails validation, review the message associated with the failure and restart the wizard after rectifying the problem.
- On the Private Network Configuration panel, configure the VCS private network communication on each system being added, and then click Next. How you configure the VCS private network communication depends on how it is configured in the cluster: if LLT is configured over Ethernet, you must use Ethernet on the nodes being added; similarly, if LLT is configured over UDP in the cluster, you must use UDP on the nodes being added. (A sketch for verifying the cluster's LLT configuration appears after this step.) Do one of the following:
  - To configure the VCS private network over Ethernet, do the following:
    - Select the check boxes next to the two NICs to be assigned to the private network. Veritas recommends reserving two NICs exclusively for the private network. However, you could lower the priority of one NIC and use the low-priority NIC for both public and private communication.
    - If you have only two NICs on a selected system, it is recommended that you lower the priority of at least one NIC that will be used for private as well as public network communication. To lower the priority of a NIC, right-click the NIC and select Low Priority from the pop-up menu.
    - If your configuration contains teamed NICs, the wizard groups them as "NIC Group #N", where "N" is a number assigned to the teamed NIC. A teamed NIC is a logical NIC formed by grouping several physical NICs together. All NICs in a team have an identical MAC address. Veritas recommends that you do not select teamed NICs for the private network.
    The wizard configures the LLT service (over Ethernet) on the selected network adapters.
  - To configure the VCS private network over the User Datagram Protocol (UDP) layer, do the following:
    - Select the check boxes next to the two NICs to be assigned to the private network. You can assign a maximum of eight network links. Veritas recommends reserving at least two NICs exclusively for the VCS private network. You could lower the priority of one NIC and use the low-priority NIC for both public and private communication.
    - If you have only two NICs on a selected system, it is recommended that you lower the priority of at least one NIC that will be used for private as well as public network communication. To lower the priority of a NIC, right-click the NIC and select Low Priority from the pop-up menu.
    - Specify a unique UDP port for each of the links. Click Edit Ports if you wish to edit the UDP ports for the links. You can use ports in the range 49152 to 65535. The default port numbers are 50000 and 50001, respectively. Click OK.
    - For each selected NIC, verify the displayed IP address. If a selected NIC has multiple IP addresses assigned, double-click the field and choose the desired IP address from the drop-down list. In the case of IPv4, each IP address can be in a different subnet. The IP address is used for the VCS private communication over the specified UDP port.
    - For each selected NIC, double-click the respective field in the Link column and choose a link from the drop-down list. Specify a different link (Link1 or Link2) for each NIC. Each link is associated with a UDP port that you specified earlier.
    The wizard configures the LLT service (over UDP) on the selected network adapters. The specified UDP ports are used for the private network communication.
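To confirm how LLT is configured in the existing cluster before you match it on the new nodes, you can use the LLT and GAB utilities installed with VCS. This is a sketch, assuming the utilities are on the PATH; run it on an existing cluster node:

    lltstat -nvv         # verbose LLT view: links, peer nodes, and link status
    gabconfig -a         # GAB port memberships; all cluster nodes should be listed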
 
- On the Public Network Communication panel, select a NIC for public network communication for each system that is being added, and then click Next. This step is applicable only if you have configured the ClusterService service group, and the system being added has multiple adapters. If the system has only one adapter for public network communication, the wizard configures that adapter automatically. 
- Specify the credentials for the user in whose context the VCS Helper service runs.
- Review the summary information and click Add.
- The wizard starts running commands to add the node. After all commands have been successfully run, click Finish.
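Once the wizard completes, you can confirm the result from any cluster node with the same VCS utilities used earlier; the new node should appear in the membership list and eventually report a RUNNING state:

    hasys -list          # the newly added node should now be listed
    hastatus -summary    # the new node should reach the RUNNING state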
If you are setting up a Replicated Data Cluster, return to the task list.