Storage Foundation and High Availability Solutions 7.4.1 Solutions Guide - Windows
- Section I. Introduction
- Introducing Storage Foundation and High Availability Solutions
- Using the Solutions Configuration Center
- About the Solutions Configuration Center
- Starting the Solutions Configuration Center
- Options in the Solutions Configuration Center
- About launching wizards from the Solutions Configuration Center
- Remote and local access to Solutions wizards
- Solutions wizards and logs
- Workflows in the Solutions Configuration Center
- SFW best practices for storage
- Section II. Quick Recovery
- Section III. High Availability
- High availability: Overview
- About high availability
- About clusters
- How VCS monitors storage components
- Shared storage - if you use NetApp filers
- Shared storage - if you use SFW to manage cluster dynamic disk groups
- Shared storage - if you use Windows LDM to manage shared disks
- Non-shared storage - if you use SFW to manage dynamic disk groups
- Non-shared storage - if you use Windows LDM to manage local disks
- Non-shared storage - if you use VMware storage
- Deploying InfoScale Enterprise for high availability: New installation
- About the high availability solution
- Tasks for a new high availability (HA) installation - additional applications
- Reviewing the InfoScale installation requirements
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Configuring the storage hardware and network
- About installing the Veritas InfoScale products
- Configuring disk groups and volumes
- Configuring the cluster using the Cluster Configuration Wizard
- About modifying the cluster configuration
- About installing and configuring the application or server role
- Configuring the service group
- About configuring file shares
- About configuring IIS sites
- About configuring applications using the Application Configuration Wizard
- About configuring the Oracle service group using the wizard
- Enabling fast failover for disk groups (optional)
- Configuring the service group in a non-shared storage environment
- Verifying the cluster configuration
- Possible tasks after completing the configuration
- Adding nodes to a cluster
- Modifying the application service groups
- Adding DMP to a clustering configuration
- Section IV. Campus Clustering
- Introduction to campus clustering
- Deploying InfoScale Enterprise for campus cluster
- About the Campus Cluster solution
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Installing and configuring the hardware
- Configuring the storage hardware and network
- About installing the Veritas InfoScale products
- Configuring the cluster using the Cluster Configuration Wizard
- Creating disk groups and volumes
- About cluster disk groups and volumes
- Example disk group and volume configuration in campus cluster
- Considerations when creating disks and volumes for campus clusters
- Viewing the available disk storage
- Creating a dynamic disk group
- Adding disks to campus cluster sites
- Creating volumes for campus clusters
- Installing the application on cluster nodes
- Configuring service groups
- Verifying the cluster configuration
- Section V. Replicated Data Clusters
- Introduction to Replicated Data Clusters
- Deploying Replicated Data Clusters: New application installation
- Tasks for a new replicated data cluster installation - additional applications
- Notes and recommendations for cluster and application configuration
- Sample configuration
- Configuring the storage hardware and network
- About installing the Veritas InfoScale products
- Setting up security for Volume Replicator
- Configuring the cluster using the Cluster Configuration Wizard
- Configuring disk groups and volumes
- Installing and configuring the application or server role
- Configuring the service group
- About configuring file shares
- About configuring IIS sites
- About configuring applications using the Application Configuration Wizard
- Creating the primary system zone for the application service group
- Verifying the cluster configuration
- Creating a parallel environment in the secondary zone
- Adding nodes to a cluster
- Creating the Replicated Data Sets with the wizard
- Configuring a RVG service group for replication
- Creating the RVG service group
- Configuring the resources in the RVG service group for RDC replication
- Configuring the IP and NIC resources
- Configuring the VMDg or VMNSDg resources for the disk groups
- Adding the Volume Replicator RVG resources for the disk groups
- Linking the Volume Replicator RVG resources to establish dependencies
- Deleting the VMDg or VMNSDg resource from the application service group
- Configuring the RVG Primary resources
- Configuring the primary system zone for the RVG service group
- Setting a dependency between the service groups
- Adding the nodes from the secondary zone to the RDC
- Adding the nodes from the secondary zone to the RVG service group
- Configuring secondary zone nodes in the RVG service group
- Configuring the RVG service group NIC resource for failover (VMNSDg only)
- Configuring the RVG service group IP resource for failover
- Configuring the RVG service group VMNSDg resources for failover
- Adding the nodes from the secondary zone to the application service group
- Configuring the zones in the application service group
- Configuring the application service group IP resource for failover (VMNSDg only)
- Configuring the application service group NIC resource for failover (VMNSDg only)
- Verifying the RDC configuration
- Additional instructions for GCO disaster recovery
- Section VI. Disaster Recovery
- Disaster recovery: Overview
- Deploying disaster recovery: New application installation
- Tasks for a new disaster recovery installation - additional applications
- Tasks for setting up DR in a non-shared storage environment
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Configuring the storage hardware and network
- About managing disk groups and volumes
- Setting up the secondary site: Configuring SFW HA and setting up a cluster
- Verifying that your application or server role is configured for HA at the primary site
- Setting up your replication environment
- Assigning user privileges (secure clusters only)
- About configuring disaster recovery with the DR wizard
- Cloning the storage on the secondary site using the DR wizard (Volume Replicator replication option)
- Creating temporary storage on the secondary site using the DR wizard (array-based replication)
- Installing and configuring the application or server role (secondary site)
- Cloning the service group configuration from the primary site to the secondary site
- Configuring the application service group in a non-shared storage environment
- Configuring replication and global clustering
- Creating the replicated data sets (RDS) for Volume Replicator replication
- Creating the Volume Replicator RVG service group for replication
- Configuring the global cluster option for wide-area failover
- Verifying the disaster recovery configuration
- Establishing secure communication within the global cluster (optional)
- Adding multiple DR sites (optional)
- Possible task after creating the DR environment: Adding a new failover node to a Volume Replicator environment
- Maintaining: Normal operations and recovery procedures (Volume Replicator environment)
- Recovery procedures for service group dependencies
- Testing fault readiness by running a fire drill
- About disaster recovery fire drills
- About the Fire Drill Wizard
- About post-fire drill scripts
- Tasks for configuring and running fire drills
- Prerequisites for a fire drill
- Preparing the fire drill configuration
- System Selection panel details
- Service Group Selection panel details
- Secondary System Selection panel details
- Fire Drill Service Group Settings panel details
- Disk Selection panel details
- Hitachi TrueCopy Path Information panel details
- HTCSnap Resource Configuration panel details
- SRDFSnap Resource Configuration panel details
- Fire Drill Preparation panel details
- Running a fire drill
- Re-creating a fire drill configuration that has changed
- Restoring the fire drill system to a prepared state
- Deleting the fire drill configuration
- Considerations for switching over fire drill service groups
- Section VII. Microsoft Clustering Solutions
- Microsoft clustering solutions overview
- Deploying SFW with Microsoft failover clustering
- Tasks for deploying InfoScale Storage with Microsoft failover clustering
- Reviewing the configuration
- Configuring the storage hardware and network
- Establishing a Microsoft failover cluster
- Tasks for installing InfoScale Foundation or InfoScale Storage for Microsoft failover clustering
- Creating SFW disk groups and volumes
- Creating a group for the application in the failover cluster
- Installing the application on cluster nodes
- Completing the setup of the application group in the failover cluster
- Implementing a dynamic quorum resource
- Verifying the cluster configuration
- Configuring InfoScale Storage in an existing Microsoft Failover Cluster
- Deploying SFW with Microsoft failover clustering in a campus cluster
- Tasks for deploying InfoScale Storage with Microsoft failover clustering in a campus cluster
- Reviewing the configuration
- Configuring the storage hardware and network
- Establishing a Microsoft failover cluster
- Tasks for installing InfoScale Foundation or InfoScale Storage for Microsoft failover clustering
- Creating disk groups and volumes
- Implementing a dynamic quorum resource
- Setting up a group for the application in the failover cluster
- Installing the application on the cluster nodes
- Completing the setup of the application group in the cluster
- Verifying the cluster configuration
- Deploying SFW and VVR with Microsoft failover clustering
- Tasks for deploying InfoScale Storage and Volume Replicator with Microsoft failover clustering
- Part 1: Setting up the cluster on the primary site
- Reviewing the prerequisites and the configuration
- Installing and configuring the hardware
- Installing Windows and configuring network settings
- Establishing a Microsoft failover cluster
- Installing InfoScale Storage (primary site)
- Setting up security for Volume Replicator
- Creating SFW disk groups and volumes
- Completing the primary site configuration
- Part 2: Setting up the cluster on the secondary site
- Part 3: Adding the Volume Replicator components for replication
- Part 4: Maintaining normal operations and recovery procedures
- Section VIII. Server Consolidation
- Server consolidation overview
- Server consolidation configurations
- Typical server consolidation configuration
- Server consolidation configuration 1 - many to one
- Server consolidation configuration 2 - many to two: Adding clustering and DMP
- About this configuration
- Adding the new hardware
- Establishing the Microsoft failover cluster
- Adding SFW support to the cluster
- Setting up Microsoft failover cluster groups for the applications
- Installing applications on the second computer
- Completing the setup of the application group in the Microsoft cluster
- Changing the quorum resource to the dynamic quorum resource
- Verifying the cluster configuration
- Enabling DMP
- SFW features that support server consolidation
- Appendix A. Using Veritas AppProtect for vSphere
- About Just In Time Availability
- Prerequisites
- Setting up a plan
- Deleting a plan
- Managing a plan
- Viewing the history tab
- Limitations of Just In Time Availability
- Getting started with Just In Time Availability
- Supported operating systems and configurations
- Viewing the properties
- Log files
- Plan states
- Troubleshooting Just In Time Availability
Creating volumes for campus clusters
This section describes how to create a volume on a dynamic disk group for a campus cluster.
For information about creating volumes for other types of clusters, see the corresponding chapters of this guide.
Use the following procedure to create dynamic volumes for a campus cluster. An equivalent command-line sketch appears at the end of this section.
Note:
When assigning drive letters to volumes, ensure that the drive letters that you assign are available on all nodes.
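Because the drive letter must be free on every node, you may want to check availability before you begin. The following PowerShell sketch is illustrative only and is not part of SFW; it assumes PowerShell remoting is enabled between the nodes, and NODE1 and NODE2 are placeholders for your own cluster node names.

    # Sketch: find drive letters that are free on all cluster nodes.
    $nodes = 'NODE1', 'NODE2'              # placeholders; use your node names
    $used = Invoke-Command -ComputerName $nodes -ScriptBlock {
        (Get-PSDrive -PSProvider FileSystem).Name
    } | Sort-Object -Unique
    # Report the letters D through Z that are not in use on any node.
    foreach ($c in [char]'D'..[char]'Z') {
        $letter = [string][char]$c
        if ($letter -notin $used) { $letter }
    }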
To create dynamic volumes
- Launch the VEA console from Start > All Programs > Veritas > Veritas Storage Foundation > Veritas Enterprise Administrator or, on Windows 2012 operating systems, from the Apps menu.
- Click Connect to a Host or Domain.
- In the Connect dialog box, select the host name and click Connect.
To connect to the local system, select localhost. Provide the user name, password, and domain if prompted.
- To start the New Volume wizard, expand the tree view under the host node to display all the disk groups. Right-click a disk group and select New Volume from the context menu.
You can right-click the disk group you have just created.
- At the New Volume wizard opening screen, click Next.
- Select the disks for the volume as follows:
Group name
Make sure the appropriate disk group is selected.
Site preference
Select the Site Separated option.
Select site from
Select the campus cluster sites. Hold down CTRL to select multiple sites.
Note:
If no sites are listed, the disks have not yet been added to a site.
Auto select disks
Automatic disk selection is recommended for campus clusters.
SFW automatically selects the disks based on the following criteria:
- Port assignment: Disks on two different ports are selected. In the list of available disks, the entry after each disk name starts with the port number. For example, the "P3" in the entry P3C0T2L1 refers to port 3.
- Available space: SFW picks two disks (one from each array) with the most space.
Manually select disks
If you manually select disks, use the Add and Remove buttons to move the appropriate disks to the Selected disks list.
Disable Track Alignment
Optionally, check Disable Track Alignment to disable track alignment for the volume. When track alignment is disabled, the volume does not store blocks of data in alignment with the boundaries of the physical track of the disk.
Click Next.
- Specify the volume attributes as follows:
Volume name
Specify a name for the volume. The name is limited to 18 ASCII characters and cannot contain spaces or forward or backward slashes.
Size
Specify a size for the volume. If you click Max Size, the Size box shows the maximum possible volume size for that layout in the dynamic disk group.
Layout
Ensure that the Mirrored checkbox is selected.
Select either the Concatenated or Striped layout type.
If you are creating a striped volume, the Columns and Stripe unit size boxes must contain values; defaults are provided. In addition, check the Stripe across checkbox and select Ports from the drop-down list.
Mirror Info
Click Mirror across and select Enclosures from the drop-down list.
When creating a site-separated volume, as required for campus clusters, the number of mirrors must correspond to the number of sites. If needed, you can add more mirrors after creating the volume.
Enable logging
Verify that this option is not selected.
Click Next.
- In the Add Drive Letter and Path dialog box, assign a drive letter or mount point to the volume. You must use the same drive letter or mount point on all systems in the cluster. Verify that the drive letter is available on all nodes before assigning it; the sketch before this procedure shows one way to check.
To assign a drive letter, select Assign a Drive Letter, and choose a drive letter.
To mount the volume as a folder, select Mount as an empty NTFS folder, and click Browse to locate an empty folder on the shared disk.
Click Next.
- Create an NTFS file system.
Make sure the Format this volume checkbox is checked and select NTFS.
Select an allocation size or accept the default.
The file system label is optional. By default, SFW uses the volume name as the file system label.
Select Perform a quick format if you want to save time.
Select Enable file and folder compression to save disk space.
Note that compression consumes system resources for compressing and decompressing data, which may result in reduced system performance.
Click Next.
- Click Finish to create the new volume.
- Repeat these steps to create additional volumes as needed.
Note:
Create the cluster disk group and volumes on the first node of the cluster only.
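For reference, volumes can also be created from the SFW command line instead of the New Volume wizard. The following sketch, run from an elevated PowerShell prompt, is a minimal non-authoritative example: the disk group, volume, drive letter, and site names (DG1, Vol1, E, site1, site2) are placeholders, and the exact vxassist keywords, in particular the site-allocation attribute, should be confirmed against the SFW command-line reference for your release.

    # Sketch: create a 10 GB mirrored, site-separated volume in disk group DG1,
    # with one mirror per campus cluster site. All names are placeholders and
    # the Site= attribute is an assumption; verify syntax for your SFW version.
    vxassist -gDG1 make Vol1 10G type=mirror Mirror=2 DriveLetter=E Site=site1,site2

    # Review the volume attributes and mirror layout after creation:
    vxvol -gDG1 volinfo Vol1

As with the wizard procedure, run these commands on the first node of the cluster only, and format the volume with NTFS before use.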