Storage Foundation and High Availability Solutions 7.4.1 Solutions Guide - Windows
- Section I. Introduction- Introducing Storage Foundation and High Availability Solutions
- Using the Solutions Configuration Center- About the Solutions Configuration Center
- Starting the Solutions Configuration Center
- Options in the Solutions Configuration Center
- About launching wizards from the Solutions Configuration Center
- Remote and local access to Solutions wizards
- Solutions wizards and logs
- Workflows in the Solutions Configuration Center
 
- SFW best practices for storage
 
- Section II. Quick Recovery
- Section III. High Availability- High availability: Overview- About high availability
- About clusters
- How VCS monitors storage components- Shared storage - if you use NetApp filers
- Shared storage - if you use SFW to manage cluster dynamic disk groups
- Shared storage - if you use Windows LDM to manage shared disks
- Non-shared storage - if you use SFW to manage dynamic disk groups
- Non-shared storage - if you use Windows LDM to manage local disks
- Non-shared storage - if you use VMware storage
 
 
- Deploying InfoScale Enterprise for high availability: New installation- About the high availability solution
- Tasks for a new high availability (HA) installation - additional applications
- Reviewing the InfoScale installation requirements
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Configuring the storage hardware and network
- About installing the Veritas InfoScale products
- Configuring disk groups and volumes
- Configuring the cluster using the Cluster Configuration Wizard
- About modifying the cluster configuration
- About installing and configuring the application or server role
- Configuring the service group- About configuring file shares
- About configuring IIS sites
- About configuring applications using the Application Configuration Wizard
- About configuring the Oracle service group using the wizard
- Enabling fast failover for disk groups (optional)
 
- Configuring the service group in a non-shared storage environment
- Verifying the cluster configuration
- Possible tasks after completing the configuration
- Adding nodes to a cluster
- Modifying the application service groups
 
- Adding DMP to a clustering configuration
 
- Section IV. Campus Clustering- Introduction to campus clustering
- Deploying InfoScale Enterprise for campus cluster- About the Campus Cluster solution
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Installing and configuring the hardware
- Configuring the storage hardware and network
- About installing the Veritas InfoScale products
- Configuring the cluster using the Cluster Configuration Wizard
- Creating disk groups and volumes- About cluster disk groups and volumes
- Example disk group and volume configuration in campus cluster
- Considerations when creating disks and volumes for campus clusters
- Viewing the available disk storage
- Creating a dynamic disk group
- Adding disks to campus cluster sites
- Creating volumes for campus clusters
 
- Installing the application on cluster nodes
- Configuring service groups
- Verifying the cluster configuration
 
 
- Section V. Replicated Data Clusters- Introduction to Replicated Data Clusters
- Deploying Replicated Data Clusters: New application installation- Tasks for a new replicated data cluster installation - additional applications
- Notes and recommendations for cluster and application configuration
- Sample configuration
- Configuring the storage hardware and network
- About installing the Veritas InfoScale products
- Setting up security for Volume Replicator
- Configuring the cluster using the Cluster Configuration Wizard
- Configuring disk groups and volumes
- Installing and configuring the application or server role
- Configuring the service group- About configuring file shares
- About configuring IIS sites
- About configuring applications using the Application Configuration Wizard
 
- Creating the primary system zone for the application service group
- Verifying the cluster configuration
- Creating a parallel environment in the secondary zone
- Adding nodes to a cluster
- Creating the Replicated Data Sets with the wizard
- Configuring a RVG service group for replication- Creating the RVG service group
- Configuring the resources in the RVG service group for RDC replication- Configuring the IP and NIC resources
- Configuring the VMDg or VMNSDg resources for the disk groups
- Adding the Volume Replicator RVG resources for the disk groups
- Linking the Volume Replicator RVG resources to establish dependencies
- Deleting the VMDg or VMNSDg resource from the application service group
 
- Configuring the RVG Primary resources
- Configuring the primary system zone for the RVG service group
 
- Setting a dependency between the service groups
- Adding the nodes from the secondary zone to the RDC- Adding the nodes from the secondary zone to the RVG service group
- Configuring secondary zone nodes in the RVG service group
- Configuring the RVG service group NIC resource for fail over (VMNSDg only)
- Configuring the RVG service group IP resource for failover
- Configuring the RVG service group VMNSDg resources for fail over
- Adding the nodes from the secondary zone to the application service group
- Configuring the zones in the application service group
- Configuring the application service group IP resource for fail over (VMNSDg only)
- Configuring the application service group NIC resource for fail over (VMNSDg only)
 
- Verifying the RDC configuration
- Additional instructions for GCO disaster recovery
 
 
- Section VI. Disaster Recovery- Disaster recovery: Overview
- Deploying disaster recovery: New application installation- Tasks for a new disaster recovery installation - additional applications
- Tasks for setting up DR in a non-shared storage environment
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Configuring the storage hardware and network
- About managing disk groups and volumes
- Setting up the secondary site: Configuring SFW HA and setting up a cluster
- Verifying that your application or server role is configured for HA at the primary site
- Setting up your replication environment
- Assigning user privileges (secure clusters only)
- About configuring disaster recovery with the DR wizard
- Cloning the storage on the secondary site using the DR wizard (Volume Replicator replication option)
- Creating temporary storage on the secondary site using the DR wizard (array-based replication)
- Installing and configuring the application or server role (secondary site)
- Cloning the service group configuration from the primary site to the secondary site
- Configuring the application service group in a non-shared storage environment
- Configuring replication and global clustering
- Creating the replicated data sets (RDS) for Volume Replicator replication
- Creating the Volume Replicator RVG service group for replication
- Configuring the global cluster option for wide-area failover
- Verifying the disaster recovery configuration
- Establishing secure communication within the global cluster (optional)
- Adding multiple DR sites (optional)
- Possible task after creating the DR environment: Adding a new failover node to a Volume Replicator environment
- Maintaining: Normal operations and recovery procedures (Volume Replicator environment)
- Recovery procedures for service group dependencies
 
- Testing fault readiness by running a fire drill- About disaster recovery fire drills
- About the Fire Drill Wizard
- About post-fire drill scripts
- Tasks for configuring and running fire drills
- Prerequisites for a fire drill
- Preparing the fire drill configuration- System Selection panel details
- Service Group Selection panel details
- Secondary System Selection panel details
- Fire Drill Service Group Settings panel details
- Disk Selection panel details
- Hitachi TrueCopy Path Information panel details
- HTCSnap Resource Configuration panel details
- SRDFSnap Resource Configuration panel details
- Fire Drill Preparation panel details
 
- Running a fire drill
- Re-creating a fire drill configuration that has changed
- Restoring the fire drill system to a prepared state
- Deleting the fire drill configuration
- Considerations for switching over fire drill service groups
 
 
- Section VII. Microsoft Clustering Solutions- Microsoft clustering solutions overview
- Deploying SFW with Microsoft failover clustering- Tasks for deploying InfoScale Storage with Microsoft failover clustering
- Reviewing the configuration
- Configuring the storage hardware and network
- Establishing a Microsoft failover cluster
- Tasks for installing InfoScale Foundation or InfoScale Storage for Microsoft failover clustering
- Creating SFW disk groups and volumes
- Creating a group for the application in the failover cluster
- Installing the application on cluster nodes
- Completing the setup of the application group in the failover cluster
- Implementing a dynamic quorum resource
- Verifying the cluster configuration
- Configuring InfoScale Storage in an existing Microsoft Failover Cluster
 
- Deploying SFW with Microsoft failover clustering in a campus cluster- Tasks for deploying InfoScale Storage with Microsoft failover clustering in a campus cluster
- Reviewing the configuration
- Configuring the storage hardware and network
- Establishing a Microsoft failover cluster
- Tasks for installing InfoScale Foundation or InfoScale Storage for Microsoft failover clustering
- Creating disk groups and volumes
- Implementing a dynamic quorum resource
- Setting up a group for the application in the failover cluster
- Installing the application on the cluster nodes
- Completing the setup of the application group in the cluster
- Verifying the cluster configuration
 
- Deploying SFW and VVR with Microsoft failover clustering- Tasks for deploying InfoScale Storage and Volume Replicator with Microsoft failover clustering
- Part 1: Setting up the cluster on the primary site- Reviewing the prerequisites and the configuration
- Installing and configuring the hardware
- Installing Windows and configuring network settings
- Establishing a Microsoft failover cluster
- Installing InfoScale Storage (primary site)
- Setting up security for Volume Replicator
- Creating SFW disk groups and volumes
- Completing the primary site configuration
 
- Part 2: Setting up the cluster on the secondary site
- Part 3: Adding the Volume Replicator components for replication
- Part 4: Maintaining normal operations and recovery procedures
 
 
- Section VIII. Server Consolidation- Server consolidation overview
- Server consolidation configurations- Typical server consolidation configuration
- Server consolidation configuration 1 - many to one
- Server consolidation configuration 2 - many to two: Adding clustering and DMP- About this configuration
- Adding the new hardware
- Establishing the Microsoft failover cluster
- Adding SFW support to the cluster
- Setting up Microsoft failover cluster groups for the applications
- Installing applications on the second computer
- Completing the setup of the application group in the Microsoft cluster
- Changing the quorum resource to the dynamic quorum resource
- Verifying the cluster configuration
- Enabling DMP
 
- SFW features that support server consolidation
 
 
- Appendix A. Using Veritas AppProtect for vSphere- About Just In Time Availability
- Prerequisites
- Setting up a plan
- Deleting a plan
- Managing a plan
- Viewing the history tab
- Limitations of Just In Time Availability
- Getting started with Just In Time Availability
- Supported operating systems and configurations
- Viewing the properties
- Log files
- Plan states
- Troubleshooting Just In Time Availability
 
Creating the Replicated Data Sets with the wizard
Set up the Replicated Data Sets (RDS) on the hosts in the primary zone and the secondary zone. The Setup Replicated Data Set Wizard enables you to configure an RDS for both zones.
Verify that the IP version preference is set before you configure replication.
If you specify host names when you configure replication, Volume Replicator resolves them to their associated IP addresses. The IP preference setting determines which IP version Volume Replicator uses to resolve the host names.
Use one of the following methods to set the IP preference:
- Veritas Enterprise Administrator (VEA) GUI - select the appropriate options on the Control Panel > VVR Configuration > IP Settings tab. 
- Run the vxtune ip_mode [ipv4 | ipv6] command on both the primary and the secondary sites. 
- Verify that the data volumes are not of the following types, as Volume Replicator does not support these types of volumes: - Storage Foundation (software) RAID 5 volumes 
- Volumes with a Dirty Region Log (DRL) 
- Volumes that are already part of another RVG 
- Volume names containing a comma 
 
- Verify that the disk group is imported and the volumes are mounted in both the primary and secondary zones. 
- Verify that you have set the appropriate IP preference. 
- Configure the VxSAS service if you have not already done so. 
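The volume restrictions listed above lend themselves to a quick pre-flight check. The sketch below is illustrative only (the field names are hypothetical, not part of any Veritas API); it simply encodes the four unsupported volume types as a validation helper:

```python
def unsupported_reasons(volume):
    """Return the Volume Replicator restrictions that a candidate data
    volume violates. `volume` is a plain dict; all keys are illustrative,
    not part of any Veritas API."""
    reasons = []
    if volume.get("layout") == "raid5":
        reasons.append("software RAID-5 volumes are not supported")
    if volume.get("has_drl"):
        reasons.append("volumes with a Dirty Region Log (DRL) are not supported")
    if volume.get("rvg"):
        reasons.append("volume is already part of another RVG")
    if "," in volume.get("name", ""):
        reasons.append("volume names must not contain a comma")
    return reasons
```

For example, a software RAID-5 volume whose name contains a comma would fail two of the checks, while a plain concatenated volume with no DRL that is not already in an RVG would pass all four.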
To create the Replicated Data Set
- Use the Veritas Enterprise Administrator (VEA) console to launch the Setup Replicated Data Set Wizard from the cluster node on the Primary where the cluster disk group is imported. Start VEA from Start > All Programs > Veritas > Veritas Storage Foundation > Veritas Enterprise Administrator. On Windows 2012 operating systems, start it from the Apps menu in the Start screen. From the VEA console, click View > Connection > Replication Network. 
- Right-click Replication Network and select Setup Replicated Data Set.
- Read the information on the Welcome page and then click Next.
- Specify names for the Replicated Data Set (RDS) and Replicated Volume Group (RVG) and then click Next. By default, the local host is selected as the Primary Host. To specify a different host, make sure the required host is connected to the VEA console and select it in the Primary Host list. If the required primary host is not connected to the VEA console, it does not appear in the drop-down list of the Primary Host field; use the VEA console to connect to the host. 
- Select from the table the dynamic disk group and the data volumes that will undergo replication and then click Next. To select multiple volumes, press the Shift or Control key while using the up or down arrow keys. By default, a mirrored DCM log is automatically added for all selected volumes. If disk space is inadequate to create a DCM log with two plexes, a single plex is created. 
- Complete the Select or create a volume for Replicator Log page as follows: To select an existing volume - Select the volume for the Replicator Log in the table (APP_REPL_LOG). If the volume does not appear in the table, click Back and verify that the Replicator Log volume was not selected on the previous page. 
- Click Next. 
 To create a new volume - Click Create Volume and enter the following information in the dialog box that appears: - Name - Enter a name for the volume in the Name field. - Size - Enter a size for the volume in the Size field. - Layout - Select the desired volume layout. - Disk Selection - Enables you to specify the disk selection method. - Enable the Thin Provisioned Disks Only check box to ensure that the Replicator Log volume is created only on Thin Provisioned (TP) disks. - Note: - The check box remains disabled if the disk group does not have any TP disks. - If you enable this check box along with the Select disks automatically option, the Replicator Log volume is created only on TP disks. If you enable it along with the Select disks manually option, you can select only TP disks from the Available disks pane. - For more information on Thin Provisioning, refer to the Storage Foundation Administrator's Guide. 
- Choose the Select disks automatically option if you want Volume Replicator to select the disks. 
- Choose the Select disks manually option to use specific disks from the Available disks pane for creating the volume. Either double-click a disk or select it and click Add to move it into the Selected disks pane. 
 
- Click OK to create the Replicator Log volume. 
- Click Next in the Select or create a volume for Replicator Log dialog box. 
 
- Review the information on the summary page and click Create Primary RVG.
- After the Primary RVG has been created successfully, Volume Replicator displays the following message: RDS with Primary RVG has been created successfully. Do you want to add Secondary host to this RDS for replication now? Click No to exit the Setup Replicated Data Set Wizard without adding the Secondary host. To add the Secondary host later, use the Add Secondary option from the RDS right-click menu. Click Yes to add the Secondary host to the Primary RDS now. The Specify Secondary host for replication page appears. 
- On the Specify Secondary host for replication page, enter the name or IP address of the Secondary host in the Secondary Host field and then click Next. If the Secondary host is not connected to VEA, the wizard tries to connect it when you click Next. This wizard allows you to specify only one Secondary host; additional Secondary hosts can be added later using the Add Secondary option from the RDS right-click menu. Wait until the connection process is complete and then click Next again. 
- If the Secondary host has only a disk group, without any of the data volumes or the Replicator Log that exist on the Primary host, Volume Replicator displays a message. Read the message carefully. The option to automatically create volumes on the Secondary host is available only if the disks that are part of the disk group have: - The same or larger amount of space as that on the Primary 
- Enough space to create volumes with the same layout as on the Primary - Otherwise, the RDS setup wizard enables you to create the required volumes manually. 
- Click Yes to automatically create the Secondary data volumes and the Replicator Log. 
- Click No to create the Secondary data volumes and the Replicator Log manually, using the Volume Information on the connected hosts page. 
 
- The Volume Information on connected hosts page appears. This page displays information on the availability of volumes on the Secondary nodes, if the Primary and Secondary hosts are connected to VEA. This page does not appear if all the required volumes that are available on the Primary host are also available on the Secondary hosts. - If the required data volumes and the Replicator Log have not been created on the Secondary host, the page displays an appropriate message against the volume name on the Secondary. 
- If an error occurs or a volume needs to be created, a volume displays with a red icon and a description of the situation. To address the error, or to create a new Replicator Log volume on the secondary site, click the volume on the secondary site, click the available task button and follow the wizard. - Depending on the discrepancies between the volumes on the primary site and the secondary site, you may have to create a new volume, recreate or resize a volume (change attributes), or remove either a DRL or DCM log. - When all the replicated volumes meet the replication requirements and display a green check mark, click Next. 
- If all the data volumes to be replicated meet the requirements, this page does not appear. 
 
- Complete the Edit replication settings page to specify the basic and advanced replication settings for a Secondary host as follows: - To modify any of the default values listed on this page, select the required value from the drop-down list for each property. If you do not want to modify the basic properties, replication can be started with the default values when you click Next. - Primary side IP - Enter the virtual IP address for the Primary IP resource that will be used for replication. If there is more than one IP address available for replication, you can choose the one that you want to use from the drop-down list. If the required IP address is not displayed in the list, edit the field to add the IP address. - Secondary side IP - Enter the virtual IP address on the Secondary that is to be used for replication. If there is more than one IP address available for replication, you can choose the one that you want to use from the drop-down list. If the required IP address is not displayed in the list, edit the field to add the IP address. - Replication Mode - Select the required mode of replication: - Synchronous Override (default) enables synchronous updates under typical operating conditions. If the Secondary site is disconnected from the Primary site and write operations occur on the Primary site, the mode of replication temporarily switches to Asynchronous. 
- Synchronous ensures that updates from the application on the Primary site are completed only after the Secondary site successfully receives them. 
- Asynchronous completes updates from the application on the Primary site after Volume Replicator records them in the Replicator Log. From there, Volume Replicator writes the data to the data volume and replicates the updates to the Secondary site asynchronously. 
 - If the Secondary is set to the synchronous mode of replication and is disconnected, the Primary data volumes with NTFS file systems may be displayed with a status of Missing. - Replicator Log Protection - AutoDCM is the default mode for Replicator Log overflow protection when all the volumes in the Primary RVG have a DCM log. The DCM is enabled when the Replicator Log overflows. 
- The DCM option enables the Replicator Log protection for the Secondary host when the Replicator Log overflows, and the connection between the Primary and Secondary is lost. This option is available only if all the data volumes under the Primary RVG have a DCM Log associated with them. 
- The Off option disables Replicator Log overflow protection. - For a Bunker node, Replicator Log protection is set to Off by default. Thus, if the Primary RLINK overflows due to the Bunker RLINK, this RLINK is detached. 
- The Override option enables log protection. If the Secondary node is still connected and the Replicator Log is about to overflow, writes are stalled until a predetermined amount of space, that is, 5% or 20 MB (whichever is less), becomes available in the Replicator Log. - If the Secondary becomes inactive due to disconnection or administrative action, Replicator Log protection is disabled and the Replicator Log overflows. 
- The Fail option enables log protection. If the log is about to overflow, writes are stalled until a predetermined amount of space, that is, 5% or 20 MB (whichever is less), becomes available in the Replicator Log. If the connection between the Primary and Secondary RVGs is broken, any new writes to the Primary RVG fail. 
 - Primary RLINK Name - This option enables you to specify a Primary RLINK name of your choice. If you do not specify any name then Volume Replicator assigns a default name. - Secondary RLINK Name - This option enables you to specify a Secondary RLINK name of your choice. If you do not specify any name then Volume Replicator assigns a default name. 
- If you want to specify advanced replication settings, click Advanced. Edit the replication settings for a Secondary host as needed. - Caution: - When determining the high mark and low mark values for latency protection, select a range that is sufficient but not too large, to prevent long durations of throttling for write operations. - Latency protection - Determines the extent of stalling write operations on the primary site to allow the secondary site to "catch up" with the updates before new write operations can occur. - Off is the default option and disables latency protection. - Fail enables latency protection. If the number of outstanding write operations reaches the High Mark Value (described below) and the secondary site is connected, Volume Replicator stalls the subsequent write operations until the number of outstanding write operations is lowered to the Low Mark Value (described below). If the secondary site is disconnected, the subsequent write operations fail. - Override enables latency protection. This option resembles the Off option when the secondary site is disconnected, and the Fail option when the secondary site is connected. - Throttling of write operations affects application performance on the primary site; use this protection only when necessary according to replication throughput and application write patterns. - High Mark Value - Is enabled only when either the Override or Fail latency protection option is selected. This value triggers the stalling of write operations and specifies the maximum number of pending updates on the Replicator Log waiting for replication to the secondary site. The default value is 10000, the maximum number of updates allowed in a Replicator Log. - Low Mark Value - Is enabled only when either the Override or Fail latency protection option is selected. 
After reaching the High Mark Value, write operations on the Replicator Log are stalled until the number of pending updates drops to an acceptable point at which the secondary site can "catch up" to the activity on the primary site; this acceptable point is determined by the Low Mark Value. The default value is 9950. - Protocol - UDP/IP is the default protocol for replication. - Packet Size - Updates to the host on the secondary site are sent in packets; the default size is 1400 bytes. The option to select the packet size is enabled only when the UDP/IP protocol is selected. - Bandwidth - By default, Volume Replicator uses the maximum available bandwidth. To control the bandwidth used, specify the bandwidth limit in Mbps. - Enable Compression - Select this check box to enable compression for the secondary host. - Click OK to close the dialog box and then click Next. 
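The latency protection behavior described above (Off, Fail, and Override, with the high mark and low mark forming a hysteresis band) can be sketched as a small decision function. This is a simplified model of the documented behavior, not Veritas code:

```python
def write_action(mode, connected, pending, throttling,
                 high_mark=10000, low_mark=9950):
    """Decide the fate of a new application write under latency protection.

    Returns (action, throttling), where action is "accept", "stall", or
    "fail". `pending` is the number of outstanding updates in the
    Replicator Log. A simplified model only.
    """
    # Off always accepts; Override behaves like Off while disconnected.
    if mode == "off" or (mode == "override" and not connected):
        return "accept", False
    # Fail rejects new writes while the secondary site is disconnected.
    if mode == "fail" and not connected:
        return "fail", False
    # While connected, Override behaves like Fail: throttle between marks.
    if throttling:
        if pending <= low_mark:
            return "accept", False  # caught up; resume normal writes
        return "stall", True
    if pending >= high_mark:
        return "stall", True        # high mark reached; start throttling
    return "accept", False
```

Once the high mark is reached, writes stall until pending updates fall all the way to the low mark; the gap between the two marks prevents the system from oscillating around a single threshold.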
 
- On the Start Replication page, choose the appropriate option as follows: - To add the Secondary and start replication immediately, select Start Replication with one of the following options: - Synchronize Automatically - If virtual IPs have been created, select the Synchronize Automatically option; this is the default, recommended for initial setup, to synchronize the Secondary and start replication immediately. - If the virtual IPs for replication are not yet created, automatic synchronization remains paused and resumes after the Replication Service Group is created and brought online. - When this option is selected, Volume Replicator by default performs intelligent synchronization to replicate only those blocks on a volume that are being used by the file system. If required, you can disable intelligent synchronization. - Note: - Intelligent synchronization is applicable only to volumes with the NTFS and ReFS file systems, and not to raw volumes or volumes with FAT/FAT32 file systems. - Synchronize from Checkpoint - If you want to use this method, you must first create a checkpoint. - If you have a considerable amount of data on the Primary data volumes, you may first want to synchronize the Secondary for the existing data using the backup-restore method with a checkpoint. After the restore is complete, use the Synchronize from Checkpoint option to start replication from the checkpoint, to synchronize the Secondary with the writes that happened while the backup-restore was in progress. - For information on synchronizing from checkpoints, refer to the Volume Replicator Administrator's Guide. 
- To add the secondary without starting replication, deselect the Start Replication option. You can start replication later by using the Start Replication option from the Secondary RVG right-click menu. 
- Click Next to display the Summary page. 
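Intelligent synchronization, mentioned in the Synchronize Automatically option above, copies only the blocks the file system has allocated rather than the entire volume. A toy model, where the allocation bitmap stands in for the NTFS/ReFS allocation map (illustrative only):

```python
def intelligent_sync_blocks(allocation_bitmap):
    """Return the indices of blocks to copy during automatic
    synchronization: only blocks the file system actually uses.
    `allocation_bitmap[i]` is True if block i is allocated (illustrative,
    not a real Volume Replicator structure)."""
    return [i for i, used in enumerate(allocation_bitmap) if used]
```

On a mostly empty volume this greatly reduces initial synchronization traffic. Raw volumes and FAT/FAT32 volumes expose no usable allocation map to Volume Replicator, which is why intelligent synchronization applies only to NTFS and ReFS.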
 
- Review the information. Click Back to change any information you had specified. Otherwise, click Finish to add the Secondary host to the RDS and exit the wizard. 
If you have set up additional disk groups for the application, repeat this procedure for each additional disk group. Provide unique names for each Replicated Data Set and Replicated Volume Group.