Storage Foundation and High Availability Solutions 8.0.2 HA and DR Solutions Guide for Microsoft SQL Server - Windows
Notes and recommendations
Note the following prerequisites before configuring application monitoring:
Verify that the boot sequence of the virtual machine places the boot disk (OS hard disk) before the removable disks.
If the sequence places the removable disks before the boot disk, the virtual machine may not reboot after an application failover. The reboot may halt with an "OS not found" error.
This issue occurs because, during the application failover, the removable disks are detached from the current virtual machine and attached to the failover target system.
Verify that VMware Tools is installed on the virtual machine.
Install a version that is the same as or later than the version available with VMware ESX 4.1.
Verify that all the systems on which you want to configure application monitoring belong to the same domain.
Verify that the ESX/ESXi host user account has administrative privileges or is a root user.
If the ESX/ESXi user account does not have administrative privileges and is not a root user, then in the event of a failure, the disk detach and attach operations may fail.
If you do not want to use the administrator user account or the root user, then you must create a role, add the required privileges to the created role, and then add the ESX user to that role.
See Assigning privileges for non-administrator ESX/ESXi user account.
Verify that the SQL Server instances that you want to monitor are installed on a non-shared local disk that can be deported from the system and imported to another system.
If you have configured a firewall, ensure that your firewall settings allow access to the ports used by the Veritas High Availability installer, wizard, and services.
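For example, you can open a required port from an elevated command prompt with the Windows netsh utility. The port numbers shown below (14141 for the VCS high availability engine and 14150 for the VCS command server) are common defaults and the rule names are only illustrative; confirm the ports that your installation actually uses before adding rules such as these:

    netsh advfirewall firewall add rule name="VCS engine (had)" dir=in action=allow protocol=TCP localport=14141
    netsh advfirewall firewall add rule name="VCS command server" dir=in action=allow protocol=TCP localport=14150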
You must run the Veritas High Availability Configuration wizard from the system to which the disk residing on the shared datastore is attached (first system on which you installed SQL Server).
If you create another database or service after configuring SQL Server databases for monitoring, the new components are not monitored as part of the existing configuration.
In that case, you can either use the VCS commands to add the components to the configuration, or unconfigure the existing configuration and then run the wizard again to configure the required components.
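For example, the following command sequence sketches how you might bring a newly created SQL Server Agent service under VCS control by adding a GenericService resource to the existing service group. The group name (SQL_GRP), resource names (SQLAgent_new, SQLServer_res), and service name (SQLAgent$INST2) are placeholders for illustration only; substitute the names from your own configuration and verify the attribute names against the agent documentation for your release:

    haconf -makerw
    hares -add SQLAgent_new GenericService SQL_GRP
    hares -modify SQLAgent_new ServiceName "SQLAgent$INST2"
    hares -link SQLAgent_new SQLServer_res
    hares -modify SQLAgent_new Enabled 1
    haconf -dump -makero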
If the VMwareDisks agent resource is configured manually, take care not to add the operating system disk to the configuration. The VMwareDisks agent does not block this operation, and adding the operating system disk might lead to a system crash during failover.
If VMware vMotion is triggered at the same time as an application fails over, the VMwareDisks resource may either fail to go offline or may report an unknown status. The resource eventually fails over and reports online after the vMotion completes and the application is online on the target system.
VMware snapshot operations may fail if the VMwareDisks agent is configured for a physical RDM (raw device mapping) type of disk. Currently, only virtual RDM disks are supported.
Non-shared disks partitioned using the GUID Partition Table (GPT) are not supported. Currently, only Master Boot Record (MBR) partitioning is supported.
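To check how an existing disk is partitioned, you can run the Windows diskpart utility from a command prompt and list the disks; an asterisk (*) in the Gpt column of the output indicates a GPT-partitioned disk:

    diskpart
    DISKPART> list disk

Disks that show the GPT marker cannot be used with the VMwareDisks agent.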
The VMwareDisks agent does not support disks attached to the virtual machine using IDE controllers. The agent resource reports an unknown state if IDE disks are configured.
If VMware HA is disabled and the ESX host itself faults, VCS moves the application to the target failover system on another ESX host. The VMwareDisks agent registers the faulted system on the new ESX host. When you try to power on the faulted system, you may see the following message in the vSphere Client:
This virtual machine might have been moved or copied. In order to configure certain management and networking features, VMware ESX needs to know if this virtual machine was moved or copied. If you don't know, answer "I copied it".
You must select "I moved it" (instead of the default "I copied it") on this message prompt.
Do not restore a snapshot on a virtual machine where an application is currently online if the snapshot was taken while the application was offline on that virtual machine. Doing so may cause an unwanted failover.
The reverse also applies: do not restore a snapshot that was taken while the application was online on a virtual machine where the application is currently offline. This may lead to a misconfiguration in which the application is online on multiple systems simultaneously.
If you want to suspend a system on which an application is currently online, then you must first switch the application to a failover target system.
If you suspend the system without switching the application, then VCS moves the disks along with the application to another system.
Later, when you try to restore the suspended system, VMware does not allow the operation because the disks that were attached before the system was suspended are no longer attached to the system.
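For example, assuming the application service group is named SQL_GRP (a placeholder for your actual group name), the following command switches the group to another cluster system, SYSTEM2, before you suspend the current system:

    hagrp -switch SQL_GRP -to SYSTEM2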
While creating a VCS cluster in a virtual environment, you must configure one of the cluster communication links over a public adapter, in addition to the link configured over a private adapter. To reduce VCS cluster communication over the public adapter, you can assign that link a low priority. This keeps the VCS cluster communication intact even if the private network adapters fail. If the cluster communication is configured over the private adapters only, the cluster systems may fail to communicate with each other in case of a network failure. In that scenario, each system considers the other system to have faulted and then tries to gain access to the disks, leading to an application fault.
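After configuring the links, you can verify that both the private link and the low-priority public link are active by running the LLT status command from any cluster node (assuming the LLT command-line utilities are installed with the cluster software; the output format varies by release):

    lltstat -nvv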
VMware Fault Tolerance does not support adding or removing non-shared disks between virtual machines. During a failover, disks that contain application data cannot be moved to alternate failover systems. Applications that are being monitored, therefore, cannot be brought online on the failover systems.
For cluster communication, you must not select the teamed network adapter or the independently listed adapters that are a part of the teamed NIC.
A teamed network adapter is a logical NIC formed by grouping several physical NICs together. Because all NICs in a team have an identical MAC address, you may experience the following issues:
The application monitoring configuration wizard may fail to discover the specified network adapters
The application monitoring configuration wizard may fail to discover/validate the specified system name