InfoScale™ 9.0 Cluster Server Administrator's Guide - Windows
- Section I. Clustering concepts and terminology
- Introducing Cluster Server
- About Cluster Server
- About cluster control guidelines
- About the physical components of VCS
- Logical components of VCS
- About resources and resource dependencies
- Categories of resources
- About resource types
- About service groups
- Types of service groups
- About the ClusterService group
- About agents in VCS
- About agent functions
- Agent classifications
- VCS agent framework
- About cluster control, communications, and membership
- About security services
- Components for administering VCS
- Putting the pieces together
- About cluster topologies
- VCS configuration concepts
- Section II. Administration - Putting VCS to work
- About the VCS user privilege model
- Getting started with VCS
- Administering the cluster from the command line
- About administering VCS from the command line
- Starting VCS
- Stopping the VCS engine and related processes
- About managing VCS configuration files
- About managing VCS users from the command line
- About querying VCS
- About administering service groups
- Adding and deleting service groups
- Modifying service group attributes
- Bringing service groups online
- Taking service groups offline
- Switching service groups
- Freezing and unfreezing service groups
- Enabling and disabling priority based failover for a service group
- Enabling and disabling service groups
- Clearing faulted resources in a service group
- Linking and unlinking service groups
- Administering agents
- About administering resources
- About administering resource types
- Administering systems
- About administering clusters
- Using the -wait option in scripts that use VCS commands
- Configuring resources and applications in VCS
- About configuring resources and applications
- About Virtual Business Services
- About Intelligent Resource Monitoring (IMF)
- About fast failover
- How VCS monitors storage components
- Shared storage - if you use NetApp filers
- Shared storage - if you use SFW to manage cluster dynamic disk groups
- Shared storage - if you use Windows LDM to manage shared disks
- Non-shared storage - if you use SFW to manage dynamic disk groups
- Non-shared storage - if you use Windows LDM to manage local disks
- Non-shared storage - if you use VMware storage
- About storage configuration
- About configuring network resources
- About configuring file shares
- Before you configure a file share service group
- Configuring file shares using the wizard
- Modifying a file share service group using the wizard
- Deleting a file share service group using the wizard
- Creating non-scoped file shares configured with VCS
- Making non-scoped file shares accessible while using virtual server name or IP address if NetBIOS and WINS are disabled
- About configuring IIS sites
- About configuring services
- About configuring a service using the GenericService agent
- Before you configure a service using the GenericService agent
- Configuring a service using the GenericService agent
- About configuring a service using the ServiceMonitor agent
- Before you configure a service using the ServiceMonitor agent
- Configuring a service using the ServiceMonitor agent
- About configuring processes
- About configuring Microsoft Message Queuing (MSMQ)
- Before you configure the MSMQ service group
- Configuring the MSMQ resource using the command-line utility
- Configuring the MSMQ service group using the wizard
- Modifying an MSMQ service group using the wizard
- Configuring MSMQ agent to check port bindings more than once
- Binding an MSMQ instance to the correct IP address
- Checking whether MSMQ is listening for messages
- About configuring the infrastructure and support agents
- About configuring applications using the Application Configuration Wizard
- Before you configure service groups using the Application Configuration wizard
- Adding resources to a service group
- Configuring service groups using the Application Configuration Wizard
- Modifying an application service group
- Deleting resources from a service group
- Deleting an application service group
- About application monitoring on single-node clusters
- Configuring the service group in a non-shared storage environment
- About the VCS Application Manager utility
- About testing resource failover using virtual fire drills
- Modifying the cluster configuration
- Section III. Administration - Beyond the basics
- Controlling VCS behavior
- VCS behavior on resource faults
- About controlling VCS behavior at the service group level
- About the AutoRestart attribute
- About controlling failover on service group or system faults
- About defining failover policies
- About system zones
- Load-based autostart
- About freezing service groups
- About controlling Clean behavior on resource faults
- Clearing resources in the ADMIN_WAIT state
- About controlling fault propagation
- Customized behavior diagrams
- VCS behavior for resources that support the intentional offline functionality
- About controlling VCS behavior at the resource level
- Changing agent file paths and binaries
- Service group workload management
- Sample configurations depicting workload management
- The role of service group dependencies
- VCS event notification
- VCS event triggers
- About VCS event triggers
- Using event triggers
- List of event triggers
- About the dumptunables trigger
- About the injeopardy event trigger
- About the loadwarning event trigger
- About the nofailover event trigger
- About the postoffline event trigger
- About the postonline event trigger
- About the preonline event trigger
- About the resadminwait event trigger
- About the resfault event trigger
- About the resnotoff event trigger
- About the resrestart event trigger
- About the resstatechange event trigger
- About the sysoffline event trigger
- About the unable_to_restart_agent event trigger
- About the unable_to_restart_had event trigger
- About the violation event trigger
- Section IV. Cluster configurations for disaster recovery
- Connecting clusters–Creating global clusters
- How VCS global clusters work
- VCS global clusters: The building blocks
- Visualization of remote cluster objects
- About global service groups
- About global cluster management
- About serialization - The Authority attribute
- About resiliency and "Right of way"
- VCS agents to manage wide-area failover
- About the Steward process: Split-brain in two-cluster global clusters
- Secure communication in global clusters
- Prerequisites for global clusters
- Setting up a global cluster
- Preparing the application for the global environment
- Configuring the ClusterService group
- Configuring replication resources in VCS
- Linking the application and replication service groups
- Configuring the second cluster
- Linking clusters
- Configuring the Steward process (optional)
- Stopping the Steward process
- Configuring the global service group
- About IPv6 support with global clusters
- About cluster faults
- About setting up a disaster recovery fire drill
- Multi-tiered application support using the RemoteGroup agent in a global environment
- Test scenario for a multi-tiered environment
- Administering global clusters from Cluster Manager (Java console)
- Administering global clusters from the command line
- About administering global clusters from the command line
- About global querying in a global cluster setup
- Administering global service groups in a global cluster setup
- Administering resources in a global cluster setup
- Administering clusters in global cluster setup
- Administering heartbeats in a global cluster setup
- Setting up replicated data clusters
- Section V. Troubleshooting and performance
- VCS performance considerations
- How cluster components affect performance
- How cluster operations affect performance
- VCS performance consideration when booting a cluster system
- VCS performance consideration when a resource comes online
- VCS performance consideration when a resource goes offline
- VCS performance consideration when a service group comes online
- VCS performance consideration when a service group goes offline
- VCS performance consideration when a resource fails
- VCS performance consideration when a system fails
- VCS performance consideration when a network link fails
- VCS performance consideration when a system panics
- VCS performance consideration when a service group switches over
- VCS performance consideration when a service group fails over
- Monitoring CPU usage
- VCS agent statistics
- About VCS performance with non-HA products
- About VCS performance with SFW
- Troubleshooting and recovery for VCS
- VCS message logging
- Handling network failure
- Troubleshooting VCS startup
- Troubleshooting secure clusters
- Troubleshooting service groups
- Troubleshooting resources
- Troubleshooting notification
- Troubleshooting and recovery for global clusters
- Troubleshooting the steward process
- VCS utilities
- Section VI. Appendixes
- Appendix A. VCS user privileges—administration matrices
- Appendix B. Cluster and system states
- Appendix C. VCS attributes
- Appendix D. Configuring LLT over UDP
- Appendix E. Handling concurrency violation in any-to-any configurations
- Appendix F. Accessibility and VCS
- Appendix G. InfoScale event logging
Adding nodes to a cluster
To add a node to a VCS cluster
- Start the VCS Cluster Configuration wizard.
Click Start > All Programs > Veritas > Veritas Cluster Server > Configuration Tools > Cluster Configuration Wizard.
Run the wizard from the node to be added or from a node in the cluster. The node that is being added should be part of the domain to which the cluster belongs.
- Read the information on the Welcome panel and click Next.
- On the Configuration Options panel, click Cluster Operations and click Next.
- In the Domain Selection panel, select or type the name of the domain in which the cluster resides and select the discovery options.
To discover information about all the systems and users in the domain, do the following:
Clear the Specify systems and users manually check box.
Click Next.
Proceed to step 8.
To specify systems and user names manually (recommended for large domains), do the following:
Check the Specify systems and users manually check box.
Additionally, you may instruct the wizard to retrieve a list of systems and users in the domain by selecting appropriate check boxes.
Click Next.
If you chose to retrieve the list of systems, proceed to step 6. Otherwise proceed to the next step.
- On the System Selection panel, complete the following and click Next:
Type the name of an existing node in the cluster and click Add.
Type the name of the system to be added to the cluster and click Add.
If you specify only one node of an existing cluster, the wizard discovers all nodes for that cluster. To add a node to an existing cluster, you must specify a minimum of two nodes; one that is already a part of a cluster and the other that is to be added to the cluster.
Proceed to step 8.
- On the System Selection panel, specify the systems to be added and the nodes for the cluster to which you are adding the systems.
Enter the system name and click Add to add the system to the Selected Systems list. Alternatively, you can select the systems from the Domain Systems list and click the right-arrow icon.
If you specify only one node of an existing cluster, the wizard discovers all nodes for that cluster. To add a node to an existing cluster, you must specify a minimum of two nodes; one that is already a part of a cluster and the other that is to be added to the cluster.
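The minimum-selection rule above can be sketched as a small check. This is an illustrative helper only; the function and parameter names are hypothetical and not part of the wizard:

```python
def check_system_selection(existing_cluster_nodes, selected_systems):
    """Mirror the wizard's rule: to add a node, the selection must name
    at least one system already in the cluster and at least one system
    that is not yet a member (so, a minimum of two systems in total)."""
    in_cluster = [s for s in selected_systems if s in existing_cluster_nodes]
    to_add = [s for s in selected_systems if s not in existing_cluster_nodes]
    if len(selected_systems) < 2 or not in_cluster or not to_add:
        return False, "specify at least one existing node and one node to add"
    return True, ""
```

Specifying one existing node is enough for the wizard to discover the rest of the cluster, which is why a single in-cluster system satisfies the first half of the rule.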
- The System Report panel displays the validation status, whether Accepted or Rejected, of all the systems you specified earlier.
A system can be rejected for any of the following reasons:
The system does not respond to a ping request.
WMI access is disabled on the system.
The wizard is unable to retrieve information about the system's architecture or operating system.
VCS is not installed on the system, or the version of VCS on the system differs from the version installed on the system on which you are running the wizard.
Click on a system name to see the validation details. If you wish to include a rejected system, rectify the error based on the reason for rejection and then run the wizard again.
Click Next to proceed.
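The rejection reasons above amount to a short checklist. The following sketch shows the same logic; the function is a hypothetical illustration, not the wizard's actual code:

```python
def classify_system(ping_ok, wmi_enabled, sysinfo_ok, vcs_version_matches):
    """Return ('Accepted', []) or ('Rejected', [reasons]) using the same
    checks the System Report panel reports: ping response, WMI access,
    discoverable architecture/OS, and a matching VCS version."""
    reasons = []
    if not ping_ok:
        reasons.append("system does not respond to a ping request")
    if not wmi_enabled:
        reasons.append("WMI access is disabled on the system")
    if not sysinfo_ok:
        reasons.append("cannot retrieve architecture or operating system information")
    if not vcs_version_matches:
        reasons.append("VCS is not installed or the version differs")
    return ("Accepted", []) if not reasons else ("Rejected", reasons)
```

A system must pass all four checks to be Accepted; fixing any one reason and rerunning the wizard re-evaluates the full list.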
- On the Cluster Configuration Options panel, click Edit Existing Cluster and click Next.
- On the Cluster Selection panel, select the cluster to be edited and click Next.
If you chose to specify the systems manually in step 4, only the clusters configured with the specified systems are displayed.
- On the Edit Cluster Options panel, click Add Nodes and click Next.
In the Cluster User Information dialog box, type the user name and password for a user with administrative privileges to the cluster and click OK.
The Cluster User Information dialog box appears only when you add a node to a cluster with VCS user privileges (a cluster that is not a secure cluster).
- On the Cluster Details panel, check the check boxes next to the systems to be added to the cluster and click Next.
The right pane lists nodes that are part of the cluster. The left pane lists systems that can be added to the cluster.
- The wizard validates the selected systems for cluster membership. After the nodes have been validated, click Next.
If a node does not get validated, review the message associated with the failure and restart the wizard after rectifying the problem.
- On the Private Network Configuration panel, configure the VCS private network communication on each system being added, and then click Next. How you configure the VCS private network communication depends on how it is configured in the cluster. If LLT is configured over Ethernet in the cluster, you must use Ethernet on the nodes being added. Similarly, if LLT is configured over UDP in the cluster, you must use UDP on the nodes being added.
Do one of the following:
To configure the VCS private network over Ethernet, do the following:
Select the check boxes next to the two NICs to be assigned to the private network.
Arctera recommends reserving two NICs exclusively for the private network. However, you could lower the priority of one NIC and use the low-priority NIC for public and private communication.
If you have only two NICs on a selected system, it is recommended that you lower the priority of at least one NIC that will be used for private as well as public network communication.
To lower the priority of a NIC, right-click the NIC and select Low Priority from the pop-up menu.
If your configuration contains teamed NICs, the wizard groups them as "NIC Group #N" where "N" is a number assigned to the teamed NIC. A teamed NIC is a logical NIC, formed by grouping several physical NICs together. All NICs in a team have an identical MAC address. Arctera recommends that you do not select teamed NICs for the private network.
The wizard configures the LLT service (over Ethernet) on the selected network adapters.
To configure the VCS private network over the User Datagram Protocol (UDP) layer, do the following:
Select the check boxes next to the two NICs to be assigned to the private network. You can assign a maximum of eight network links. Arctera recommends reserving at least two NICs exclusively for the VCS private network. You could lower the priority of one NIC and use the low-priority NIC for both public and private communication.
If you have only two NICs on a selected system, it is recommended that you lower the priority of at least one NIC that will be used for private as well as public network communication. To lower the priority of a NIC, right-click the NIC and select Low Priority from the pop-up menu.
Specify a unique UDP port for each of the links. Click Edit Ports if you wish to edit the UDP ports for the links. You can use ports in the range 49152 to 65535. The default port numbers are 50000 and 50001, respectively. Click OK.
For each selected NIC, verify the displayed IP address. If a selected NIC has multiple IP addresses assigned, double-click the field and choose the desired IP address from the drop-down list. In case of IPv4, each IP address can be in a different subnet.
The IP address is used for the VCS private communication over the specified UDP port.
For each selected NIC, double-click the respective field in the Link column and choose a link from the drop-down list. Specify a different link (Link1 or Link2) for each NIC. Each link is associated with a UDP port that you specified earlier.
The wizard configures the LLT service (over UDP) on the selected network adapters. The specified UDP ports are used for the private network communication.
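The UDP link constraints described above (a unique port per link, ports in the range 49152 to 65535, and at most eight links) can be expressed as a small validation helper. This is a sketch under those stated rules; the function name and data shape are illustrative assumptions:

```python
MIN_PORT, MAX_PORT = 49152, 65535  # valid range stated by the wizard
MAX_LINKS = 8                      # maximum number of network links

def validate_udp_links(links):
    """links: list of (nic_name, ip_address, udp_port) tuples.
    Enforce the wizard's rules: no more than eight links, every port in
    the 49152-65535 range, and a unique UDP port for each link."""
    if not links or len(links) > MAX_LINKS:
        return False, "configure between 1 and %d links" % MAX_LINKS
    ports = [port for _, _, port in links]
    if any(p < MIN_PORT or p > MAX_PORT for p in ports):
        return False, "UDP ports must be in the range 49152 to 65535"
    if len(set(ports)) != len(ports):
        return False, "each link needs a unique UDP port"
    return True, ""
```

For example, the wizard's defaults of 50000 and 50001 for two links satisfy all three rules.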
- On the Public Network Communication panel, select a NIC for public network communication, for each system that is being added, and then click Next.
This step is applicable only if you have configured the ClusterService service group, and the system being added has multiple adapters. If the system has only one adapter for public network communication, the wizard configures that adapter automatically.
- Specify the credentials for the user in whose context the VCS Helper service runs.
- Review the summary information and click Add.
- The wizard starts running commands to add the node. After all commands have been successfully run, click Finish.