Storage Foundation and High Availability Solutions 7.4.1 Solutions Guide - Windows
- Section I. Introduction- Introducing Storage Foundation and High Availability Solutions
- Using the Solutions Configuration Center- About the Solutions Configuration Center
- Starting the Solutions Configuration Center
- Options in the Solutions Configuration Center
- About launching wizards from the Solutions Configuration Center
- Remote and local access to Solutions wizards
- Solutions wizards and logs
- Workflows in the Solutions Configuration Center
 
- SFW best practices for storage
 
- Section II. Quick Recovery
- Section III. High Availability- High availability: Overview- About high availability
- About clusters
- How VCS monitors storage components- Shared storage - if you use NetApp filers
- Shared storage - if you use SFW to manage cluster dynamic disk groups
- Shared storage - if you use Windows LDM to manage shared disks
- Non-shared storage - if you use SFW to manage dynamic disk groups
- Non-shared storage - if you use Windows LDM to manage local disks
- Non-shared storage - if you use VMware storage
 
 
- Deploying InfoScale Enterprise for high availability: New installation- About the high availability solution
- Tasks for a new high availability (HA) installation - additional applications
- Reviewing the InfoScale installation requirements
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Configuring the storage hardware and network
- About installing the Veritas InfoScale products
- Configuring disk groups and volumes
- Configuring the cluster using the Cluster Configuration Wizard
- About modifying the cluster configuration
- About installing and configuring the application or server role
- Configuring the service group- About configuring file shares
- About configuring IIS sites
- About configuring applications using the Application Configuration Wizard
- About configuring the Oracle service group using the wizard
- Enabling fast failover for disk groups (optional)
 
- Configuring the service group in a non-shared storage environment
- Verifying the cluster configuration
- Possible tasks after completing the configuration
- Adding nodes to a cluster
- Modifying the application service groups
 
- Adding DMP to a clustering configuration
 
- Section IV. Campus Clustering- Introduction to campus clustering
- Deploying InfoScale Enterprise for campus cluster- About the Campus Cluster solution
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Installing and configuring the hardware
- Configuring the storage hardware and network
- About installing the Veritas InfoScale products
- Configuring the cluster using the Cluster Configuration Wizard
- Creating disk groups and volumes- About cluster disk groups and volumes
- Example disk group and volume configuration in campus cluster
- Considerations when creating disks and volumes for campus clusters
- Viewing the available disk storage
- Creating a dynamic disk group
- Adding disks to campus cluster sites
- Creating volumes for campus clusters
 
- Installing the application on cluster nodes
- Configuring service groups
- Verifying the cluster configuration
 
 
- Section V. Replicated Data Clusters- Introduction to Replicated Data Clusters
- Deploying Replicated Data Clusters: New application installation- Tasks for a new replicated data cluster installation - additional applications
- Notes and recommendations for cluster and application configuration
- Sample configuration
- Configuring the storage hardware and network
- About installing the Veritas InfoScale products
- Setting up security for Volume Replicator
- Configuring the cluster using the Cluster Configuration Wizard
- Configuring disk groups and volumes
- Installing and configuring the application or server role
- Configuring the service group- About configuring file shares
- About configuring IIS sites
- About configuring applications using the Application Configuration Wizard
 
- Creating the primary system zone for the application service group
- Verifying the cluster configuration
- Creating a parallel environment in the secondary zone
- Adding nodes to a cluster
- Creating the Replicated Data Sets with the wizard
- Configuring an RVG service group for replication- Creating the RVG service group
- Configuring the resources in the RVG service group for RDC replication- Configuring the IP and NIC resources
- Configuring the VMDg or VMNSDg resources for the disk groups
- Adding the Volume Replicator RVG resources for the disk groups
- Linking the Volume Replicator RVG resources to establish dependencies
- Deleting the VMDg or VMNSDg resource from the application service group
 
- Configuring the RVG Primary resources
- Configuring the primary system zone for the RVG service group
 
- Setting a dependency between the service groups
- Adding the nodes from the secondary zone to the RDC- Adding the nodes from the secondary zone to the RVG service group
- Configuring secondary zone nodes in the RVG service group
- Configuring the RVG service group NIC resource for failover (VMNSDg only)
- Configuring the RVG service group IP resource for failover
- Configuring the RVG service group VMNSDg resources for failover
- Adding the nodes from the secondary zone to the application service group
- Configuring the zones in the application service group
- Configuring the application service group IP resource for failover (VMNSDg only)
- Configuring the application service group NIC resource for failover (VMNSDg only)
 
- Verifying the RDC configuration
- Additional instructions for GCO disaster recovery
 
 
- Section VI. Disaster Recovery- Disaster recovery: Overview
- Deploying disaster recovery: New application installation- Tasks for a new disaster recovery installation - additional applications
- Tasks for setting up DR in a non-shared storage environment
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Configuring the storage hardware and network
- About managing disk groups and volumes
- Setting up the secondary site: Configuring SFW HA and setting up a cluster
- Verifying that your application or server role is configured for HA at the primary site
- Setting up your replication environment
- Assigning user privileges (secure clusters only)
- About configuring disaster recovery with the DR wizard
- Cloning the storage on the secondary site using the DR wizard (Volume Replicator replication option)
- Creating temporary storage on the secondary site using the DR wizard (array-based replication)
- Installing and configuring the application or server role (secondary site)
- Cloning the service group configuration from the primary site to the secondary site
- Configuring the application service group in a non-shared storage environment
- Configuring replication and global clustering
- Creating the replicated data sets (RDS) for Volume Replicator replication
- Creating the Volume Replicator RVG service group for replication
- Configuring the global cluster option for wide-area failover
- Verifying the disaster recovery configuration
- Establishing secure communication within the global cluster (optional)
- Adding multiple DR sites (optional)
- Possible task after creating the DR environment: Adding a new failover node to a Volume Replicator environment
- Maintaining: Normal operations and recovery procedures (Volume Replicator environment)
- Recovery procedures for service group dependencies
 
- Testing fault readiness by running a fire drill- About disaster recovery fire drills
- About the Fire Drill Wizard
- About post-fire drill scripts
- Tasks for configuring and running fire drills
- Prerequisites for a fire drill
- Preparing the fire drill configuration- System Selection panel details
- Service Group Selection panel details
- Secondary System Selection panel details
- Fire Drill Service Group Settings panel details
- Disk Selection panel details
- Hitachi TrueCopy Path Information panel details
- HTCSnap Resource Configuration panel details
- SRDFSnap Resource Configuration panel details
- Fire Drill Preparation panel details
 
- Running a fire drill
- Re-creating a fire drill configuration that has changed
- Restoring the fire drill system to a prepared state
- Deleting the fire drill configuration
- Considerations for switching over fire drill service groups
 
 
- Section VII. Microsoft Clustering Solutions- Microsoft clustering solutions overview
- Deploying SFW with Microsoft failover clustering- Tasks for deploying InfoScale Storage with Microsoft failover clustering
- Reviewing the configuration
- Configuring the storage hardware and network
- Establishing a Microsoft failover cluster
- Tasks for installing InfoScale Foundation or InfoScale Storage for Microsoft failover clustering
- Creating SFW disk groups and volumes
- Creating a group for the application in the failover cluster
- Installing the application on cluster nodes
- Completing the setup of the application group in the failover cluster
- Implementing a dynamic quorum resource
- Verifying the cluster configuration
- Configuring InfoScale Storage in an existing Microsoft Failover Cluster
 
- Deploying SFW with Microsoft failover clustering in a campus cluster- Tasks for deploying InfoScale Storage with Microsoft failover clustering in a campus cluster
- Reviewing the configuration
- Configuring the storage hardware and network
- Establishing a Microsoft failover cluster
- Tasks for installing InfoScale Foundation or InfoScale Storage for Microsoft failover clustering
- Creating disk groups and volumes
- Implementing a dynamic quorum resource
- Setting up a group for the application in the failover cluster
- Installing the application on the cluster nodes
- Completing the setup of the application group in the cluster
- Verifying the cluster configuration
 
- Deploying SFW and VVR with Microsoft failover clustering- Tasks for deploying InfoScale Storage and Volume Replicator with Microsoft failover clustering
- Part 1: Setting up the cluster on the primary site- Reviewing the prerequisites and the configuration
- Installing and configuring the hardware
- Installing Windows and configuring network settings
- Establishing a Microsoft failover cluster
- Installing InfoScale Storage (primary site)
- Setting up security for Volume Replicator
- Creating SFW disk groups and volumes
- Completing the primary site configuration
 
- Part 2: Setting up the cluster on the secondary site
- Part 3: Adding the Volume Replicator components for replication
- Part 4: Maintaining normal operations and recovery procedures
 
 
- Section VIII. Server Consolidation- Server consolidation overview
- Server consolidation configurations- Typical server consolidation configuration
- Server consolidation configuration 1 - many to one
- Server consolidation configuration 2 - many to two: Adding clustering and DMP- About this configuration
- Adding the new hardware
- Establishing the Microsoft failover cluster
- Adding SFW support to the cluster
- Setting up Microsoft failover cluster groups for the applications
- Installing applications on the second computer
- Completing the setup of the application group in the Microsoft cluster
- Changing the quorum resource to the dynamic quorum resource
- Verifying the cluster configuration
- Enabling DMP
 
- SFW features that support server consolidation
 
 
- Appendix A. Using Veritas AppProtect for vSphere- About Just In Time Availability
- Prerequisites
- Setting up a plan
- Deleting a plan
- Managing a plan
- Viewing the history tab
- Limitations of Just In Time Availability
- Getting started with Just In Time Availability
- Supported operating systems and configurations
- Viewing the properties
- Log files
- Plan states
- Troubleshooting Just In Time Availability
 
Striping for I/O-request-intensive applications
A good compromise stripe unit size for I/O-request-intensive applications is one that yields a 3% to 5% probability that a request splits across stripe units, assuming uniformly distributed request starting addresses. For example, a 2 KB (four-block) database page size would have an ideal stripe unit size of 100 blocks: a four-block request splits whenever it starts in the last three blocks of a stripe unit, so the split probability is 3/100, or 3%. This would typically be rounded up to the nearest power of two (128 blocks, or 65,536 bytes) for simplicity, which lowers the probability slightly further.
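The sizing arithmetic above is simple enough to capture in a few lines. The following Python sketch is an illustration under the same assumptions (512-byte blocks, uniformly distributed request starting addresses); the helper names are made up for this example, not part of any product:

```python
import math

# Illustration only: stripe unit sizing for I/O-request-intensive
# workloads, assuming 512-byte blocks and uniformly distributed
# request starting addresses. A request of R blocks splits across
# two stripe units whenever it starts in the last R - 1 blocks of
# a stripe unit.

BLOCK_SIZE = 512  # bytes

def split_probability(request_blocks: int, stripe_unit_blocks: int) -> float:
    """Probability that a random request splits across two stripe units."""
    return (request_blocks - 1) / stripe_unit_blocks

def stripe_unit_for(request_blocks: int, target: float) -> int:
    """Smallest stripe unit (in blocks) with split probability <= target."""
    return math.ceil((request_blocks - 1) / target)

def round_up_pow2(n: int) -> int:
    """Round up to the nearest power of two, as is conventional."""
    return 1 << (n - 1).bit_length()

page_blocks = 2048 // BLOCK_SIZE             # a 2 KB page = 4 blocks
unit = stripe_unit_for(page_blocks, 0.03)    # 100 blocks for a 3% target
print(unit, round_up_pow2(unit))             # 100 128  (128 blocks = 65,536 bytes)
print(split_probability(page_blocks, 128))   # ~0.023 after rounding up
```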
I/O-request-intensive applications are typically characterized by small (for example, 2 to 16 KB) data transfers for each request. These applications are I/O bound because they make so many I/O requests, not because they transfer large amounts of data.
For example, an application that makes 1,000 I/O requests per second with an average request size of 2 KB uses at most 2 MB per second of data transfer bandwidth. Because each I/O request occupies a disk completely for the duration of its execution, the way to maximize I/O throughput for I/O-request-intensive applications is to maximize the number of disks that can execute requests concurrently. The largest number of concurrent I/O requests that a volume can execute is the number of disks that contribute to the volume's storage. Each application I/O request that "splits" across two stripe units occupies two disks for the duration of its execution, reducing the number of requests that can execute concurrently and thus the volume's overall I/O throughput.
Therefore, try to minimize the probability that I/O requests "split" across stripe units in I/O-request-intensive applications.
The following factors influence whether an I/O request with a random starting address will split across two stripe units:
- The request starting address relative to the starting address of the storage allocation unit (the file extent) 
- The size of the request relative to the stripe unit size 
Most database management systems allocate pages aligned with the blocks in a file, so requests for a single page almost never split across stripe units. Requests for two or more consecutive pages, however, may split, and larger stripe unit sizes reduce the probability that they do. Bear in mind, though, that the primary objective of striping data across a volume is to spread I/O requests across the volume's disks; too large a stripe unit size reduces this spreading effect, as the sketch below illustrates.
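To see the trade-off numerically, the short sketch below (an illustration, not from this guide) tabulates the split probability for page-aligned requests of k consecutive pages against a stripe unit of m pages, where the stripe unit is a multiple of the page size; such a request splits whenever it starts in one of the last k − 1 page slots of a stripe unit, so the probability is (k − 1)/m:

```python
# Illustration only: split probability for page-aligned requests of
# k consecutive pages against a stripe unit of m pages (the stripe
# unit is a multiple of the page size). The request splits whenever
# it starts in one of the last k - 1 page slots of a stripe unit.

def aligned_split_probability(request_pages: int, stripe_unit_pages: int) -> float:
    return (request_pages - 1) / stripe_unit_pages

for unit_pages in (8, 16, 32, 64):
    row = {k: round(aligned_split_probability(k, unit_pages), 3)
           for k in (1, 2, 4, 8)}
    print(f"stripe unit = {unit_pages:3d} pages: {row}")

# Single-page requests (k = 1) never split, matching the alignment
# argument above. Larger stripe units cut the split probability for
# multi-page requests, but they also spread a given workload across
# fewer disks -- the tension described in the text.
```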