Storage Foundation and High Availability Solutions 8.0.2 HA and DR Solutions Guide for Microsoft SQL Server - Windows
- Section I. Getting started with Storage Foundation and High Availability Solutions for SQL Server
- Introducing SFW HA and the VCS agents for SQL Server
- About the Veritas InfoScale solutions for monitoring SQL Server
- How application availability is achieved in a physical environment
- How application availability is achieved in a VMware virtual environment
- Managing storage using VMware virtual disks
- Notes and recommendations
- Modifying the ESXDetails attribute
- How VCS monitors storage components
- Shared storage - if you use NetApp filers
- Shared storage - if you use SFW to manage cluster dynamic disk groups
- Shared storage - if you use Windows LDM to manage shared disks
- Non-shared storage - if you use SFW to manage dynamic disk groups
- Non-shared storage - if you use Windows LDM to manage local disks
- Non-shared storage - if you use VMware storage
- What must be protected in an SQL Server environment
- About the VCS agents for SQL Server
- About the VCS agent for SQL Server Database Engine
- About the VCS agent for SQL Server FILESTREAM
- About the VCS GenericService agent for SQL Server Agent service and Analysis service
- About the agent for MSDTC service
- About the monitoring options
- Typical SQL Server configuration in a VCS cluster
- Typical SQL Server disaster recovery configuration
- SQL Server sample dependency graph
- MSDTC sample dependency graph
- Deployment scenarios for SQL Server
- Workflows in the Solutions Configuration Center
- Reviewing the active-passive HA configuration
- Reviewing the prerequisites for a standalone SQL Server
- Reviewing a standalone SQL Server configuration
- Reviewing the MSDTC configuration
- VCS campus cluster configuration
- Reviewing the campus cluster configuration
- VCS Replicated Data Cluster configuration
- Reviewing the Replicated Data Cluster configuration
- About setting up a Replicated Data Cluster configuration
- Disaster recovery configuration
- Reviewing the disaster recovery configuration
- Notes and recommendations for cluster and application configuration
- Configuring the storage hardware and network
- Configuring disk groups and volumes for SQL Server
- About disk groups and volumes
- Prerequisites for configuring disk groups and volumes
- Considerations for a fast failover configuration
- Considerations for converting existing shared storage to cluster disk groups and volumes
- Considerations when creating disks and volumes for campus clusters
- Considerations for volumes for a Volume Replicator configuration
- Considerations for disk groups and volumes for multiple instances
- Sample disk group and volume configuration
- MSDTC sample disk group and volume configuration
- Viewing the available disk storage
- Creating a dynamic disk group
- Adding disks to campus cluster sites
- Creating volumes for high availability clusters
- Creating volumes for campus clusters
- About managing disk groups and volumes
- Configuring the cluster using the Cluster Configuration Wizard
- Installing SQL Server
- About installing and configuring SQL Server
- About installing multiple SQL Server instances
- Verifying that the SQL Server databases and logs are moved to shared storage
- About installing SQL Server for high availability configuration
- About installing SQL Server on the first system
- About installing SQL Server on additional systems
- Creating a SQL Server user-defined database
- Completing configuration steps in SQL Server
- Section II. Configuring SQL Server in a physical environment
- Configuring SQL Server for failover
- Tasks for configuring a new server for high availability
- Tasks for configuring an existing server for high availability
- About configuring the SQL Server service group
- Configuring the service group in a non-shared storage environment
- Verifying the SQL Server cluster configuration
- About the modifications required for tagged VLAN or teamed network
- Tasks for configuring MSDTC for high availability
- Configuring an MSDTC Server service group
- About configuring the MSDTC client for SQL Server
- About the VCS Application Manager utility
- Viewing DTC transaction information
- Modifying a SQL Server service group to add VMDg and MountV resources
- Determining additional steps needed
- Configuring campus clusters for SQL Server
- Configuring Replicated Data Clusters for SQL Server
- Tasks for configuring Replicated Data Clusters
- Creating the primary system zone for the application service group
- Creating a parallel environment in the secondary zone
- Setting up security for Volume Replicator
- Setting up the Replicated Data Sets (RDS)
- Configuring an RVG service group for replication
- Creating the RVG service group
- Configuring the resources in the RVG service group for RDC replication
- Configuring the IP and NIC resources
- Configuring the VMDg or VMNSDg resources for the disk groups
- Modifying the DGGuid attribute for the new disk group resource in the RVG service group
- Configuring the VMDg or VMNSDg resources for the disk group for the user-defined database
- Adding the Volume Replicator RVG resources for the disk groups
- Linking the Volume Replicator RVG resources to establish dependencies
- Deleting the VMDg or VMNSDg resource from the SQL Server service group
- Configuring the RVG Primary resources
- Configuring the primary system zone for the RVG service group
- Setting a dependency between the service groups
- Adding the nodes from the secondary zone to the RDC
- Adding the nodes from the secondary zone to the RVG service group
- Configuring secondary zone nodes in the RVG service group
- Configuring the RVG service group NIC resource for fail over (VMNSDg only)
- Configuring the RVG service group IP resource for failover
- Configuring the RVG service group VMNSDg resources for fail over
- Adding nodes from the secondary zone to the SQL Server service group
- Configuring the zones in the SQL Server service group
- Configuring the application service group IP resource for fail over (VMNSDg only)
- Configuring the application service group NIC resource for fail over (VMNSDg only)
- Verifying the RDC configuration
- Additional instructions for GCO disaster recovery
- Configuring disaster recovery for SQL Server
- Tasks for configuring disaster recovery for SQL Server
- Tasks for setting up DR in a non-shared storage environment
- Guidelines for installing Veritas InfoScale Enterprise and configuring the cluster on the secondary site
- Verifying your primary site configuration
- Setting up your replication environment
- Assigning user privileges (secure clusters only)
- About configuring disaster recovery with the DR wizard
- Cloning the storage on the secondary site using the DR wizard (Volume Replicator replication option)
- Creating temporary storage on the secondary site using the DR wizard (array-based replication)
- Installing and configuring SQL Server on the secondary site
- Cloning the service group configuration from the primary site to the secondary site
- Configuring the SQL Server service group in a non-shared storage environment
- Configuring replication and global clustering
- Creating the replicated data sets (RDS) for Volume Replicator replication
- Creating the Volume Replicator RVG service group for replication
- Configuring the global cluster option for wide-area failover
- Verifying the disaster recovery configuration
- Adding multiple DR sites (optional)
- Recovery procedures for service group dependencies
- Configuring DR manually without the DR wizard
- Testing fault readiness by running a fire drill
- About disaster recovery fire drills
- About the Fire Drill Wizard
- About post-fire drill scripts
- Tasks for configuring and running fire drills
- Prerequisites for a fire drill
- Preparing the fire drill configuration
- System Selection panel details
- Service Group Selection panel details
- Secondary System Selection panel details
- Fire Drill Service Group Settings panel details
- Disk Selection panel details
- Hitachi TrueCopy Path Information panel details
- HTCSnap Resource Configuration panel details
- SRDFSnap Resource Configuration panel details
- Fire Drill Preparation panel details
- Running a fire drill
- Re-creating a fire drill configuration that has changed
- Restoring the fire drill system to a prepared state
- Deleting the fire drill configuration
- Considerations for switching over fire drill service groups
Notes and recommendations for cluster and application configuration
Review the hardware compatibility list (HCL) and software compatibility list (SCL).
Note:
Solutions wizards cannot be used to perform Disaster Recovery, Fire Drill, or Quick Recovery remotely on Windows Server Core systems.
The DR, FD, and QR wizards require the .NET Framework on the system where these operations are performed. Because the .NET Framework is not supported on Windows Server Core systems, the wizards cannot be used to perform DR, FD, or QR on these systems.
Refer to the Microsoft Knowledge Base for more details.
Refer to the Microsoft documentation for SQL Server memory requirements.
Shared disks are required to support applications that migrate between nodes in the cluster. Campus clusters require more than one array for mirroring. Disaster recovery configurations require one array for each site. Replicated Data Clusters with no shared storage are also supported.
If your storage devices are SCSI-3 compliant, and you wish to use SCSI-3 Persistent Group Reservations (PGR), you must enable SCSI-3 support using the Veritas Enterprise Administrator (VEA).
See the Storage Foundation Administrator's Guide for more information.
SCSI, Fibre Channel, or iSCSI host bus adapters (HBAs), or NICs that support the iSCSI Initiator, are required to access shared storage.
A minimum of two NICs is required. One NIC will be used exclusively for private network communication between the nodes of the cluster. The second NIC will be used for both private cluster communications and for public access to the cluster. Veritas recommends three NICs.
NIC teaming is not supported for the VCS private network.
Static IP addresses are required for certain purposes when configuring high availability or disaster recovery solutions. For IPv4 networks, ensure that you have the addresses available to enter. For IPv6 networks, ensure that the network advertises the prefix so that addresses are autogenerated.
Static IP addresses are required for the following purposes:
One static IP address per site for each SQL Virtual Server.
A minimum of one static IP address for each physical node in the cluster.
One static IP address per cluster used when configuring Notification or the Global Cluster Option. The same IP address may be used for all options.
For Volume Replicator replication in a disaster recovery configuration, a minimum of one static IP address per site for each application instance running in the cluster.
For Volume Replicator replication in a Replicated Data Cluster configuration, a minimum of one static IP address per zone for each application instance running in the cluster.
Configure name resolution for each node.
Verify the availability of DNS Services. AD-integrated DNS or BIND 8.2 or higher are supported.
Make sure a reverse lookup zone exists in the DNS. Refer to the application documentation for instructions on creating a reverse lookup zone.
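On a Windows DNS server, a reverse lookup zone can also be created from PowerShell with the DnsServer module. This is a minimal sketch; the network ID and replication scope below are example values, not taken from this guide, and must be adjusted for your subnet and Active Directory environment.

```powershell
# Example values: create an AD-integrated reverse lookup zone for the 192.168.1.0/24 subnet.
Add-DnsServerPrimaryZone -NetworkId "192.168.1.0/24" -ReplicationScope "Domain"

# Verify that the zone was created.
Get-DnsServerZone -Name "1.168.192.in-addr.arpa"
```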
DNS scavenging affects virtual servers configured in SFW HA because the Lanman agent uses Dynamic DNS (DDNS) to map virtual names with IP addresses. If you use scavenging, then you must set the DNSRefreshInterval attribute for the Lanman agent. This enables the Lanman agent to refresh the resource records on the DNS servers.
See the Cluster Server Bundled Agents Reference Guide.
In an IPv6 environment, the Lanman agent relies on the DNS records to validate the virtual server name on the network. If the virtual servers configured in the cluster use IPv6 addresses, you must specify the DNS server IP, either in the network adapter settings or in the Lanman agent's AdditionalDNSServers attribute.
If Network Basic Input/Output System (NetBIOS) is disabled over TCP/IP, you must set the Lanman agent's DNSUpdateRequired attribute to 1 (True).
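The Lanman agent attributes mentioned above can be set from the VCS command line as an alternative to the Cluster Manager GUI. This is a sketch; the resource name SQL-Lanman and the 3600-second interval are placeholders, not values from this guide.

```bat
REM Make the VCS configuration writable before modifying resource attributes.
haconf -makerw

REM Placeholder resource name "SQL-Lanman"; substitute your Lanman resource.
REM Refresh DNS resource records periodically when DNS scavenging is in use.
hares -modify SQL-Lanman DNSRefreshInterval 3600

REM Force DNS updates when NetBIOS over TCP/IP is disabled.
hares -modify SQL-Lanman DNSUpdateRequired 1

REM Save the configuration and make it read-only again.
haconf -dump -makero
```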
You must have write permissions for the Active Directory objects corresponding to all the nodes.
If you plan to create a new user account for the VCS Helper service, you must have Domain Administrator privileges or belong to the Account Operators group. If you plan to use an existing user account context for the VCS Helper service, you must know the password for the user account.
If User Account Control (UAC) is enabled on Windows systems, you cannot log on to the VEA GUI with an account that is not a member of the Administrators group, such as a guest user. This happens because such a user does not have the "Write" permission for the "Veritas" folder in the installation directory (typically, C:\Program Files\Veritas). As a workaround, an OS administrator can grant the user the "Write" permission using the Security tab of the "Veritas" folder's properties.
For a Replicated Data Cluster, install only in a single domain.
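The same permission change can also be applied from an elevated command prompt with icacls instead of the Security tab. This is a sketch; the account name "guestuser" is a placeholder for the affected user account.

```bat
REM Placeholder account "guestuser": grant Write permission, inherited by
REM subfolders (CI) and files (OI), on the Veritas installation folder.
icacls "C:\Program Files\Veritas" /grant "guestuser:(OI)(CI)W"
```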
Route each private NIC through a separate hub or switch to avoid single points of failure.
Verify that your DNS server is configured for secure dynamic updates. For the Forward and Reverse Lookup Zones, set the Dynamic updates option to "Secure only". (DNS > Zone Properties > General tab)
This is applicable for a Replicated Data Cluster configuration.
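The secure dynamic updates setting can also be applied from PowerShell with the DnsServer module instead of the DNS console. This is a sketch; the zone names below are examples, and you would repeat the command for each of your forward and reverse lookup zones.

```powershell
# Example zone names; substitute your own forward and reverse lookup zones.
Set-DnsServerPrimaryZone -Name "example.com" -DynamicUpdate "Secure"
Set-DnsServerPrimaryZone -Name "1.168.192.in-addr.arpa" -DynamicUpdate "Secure"
```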
This is applicable for a Replicated Data Cluster configuration. You can configure single node clusters as the primary and secondary zones. However, if using a shared storage configuration, you must create the disk groups as clustered disk groups. If you cannot create a clustered disk group due to the unavailability of disks on a shared bus, use the vxclus UseSystemBus ON command.
To configure an RDC, you need to create virtual IP addresses for the following:
Application virtual server; this IP address should be the same on all nodes in the primary and secondary zones
Replication IP address for the primary zone
Replication IP address for the secondary zone
Before you start deploying your environment, you should have these IP addresses available.