InfoScale™ 9.0 Storage Foundation Administrator's Guide - Windows
- Overview
- Setup and configuration
- Setup and configuration overview
- Function overview
- About the client console for Storage Foundation
- Recommendations for caching-enabled disks
- Review the Veritas Enterprise Administrator GUI
- Configure basic disks (Optional)
- About creating dynamic disk groups
- About creating dynamic volumes
- Set desired preferences
- Protecting your SFW configuration with vxcbr
- Using the GUI to manage your storage
- Working with disks, partitions, and volumes
- Overview
- Adding storage
- Disk tasks
- Remove a disk from a dynamic disk group
- Remove a disk from the computer
- Offline a disk
- Update disk information by using rescan
- Set disk usage
- Evacuate disk
- Replace disk
- Changing the internal name of a disk
- Renaming an enclosure
- Work with removable media
- Working with disks that support thin provisioning
- View disk properties
- Veritas Disk ID (VDID)
- General Partition/Volume tasks
- Delete a volume
- Delete a partition or logical drive
- Shredding a volume
- Refresh drive letter, file system, and partition or volume information
- Add, change, or remove a drive letter or path
- Renaming a mirror (plex)
- Changing the internal name of a volume
- Mount a volume at an empty folder (Drive path)
- View all drive paths
- Format a partition or volume with the file system command
- Cancel format
- Change file system options on a partition or volume
- Set a volume to read only
- Check partition or volume properties
- Expand a dynamic volume
- Expand a partition
- Safeguarding the expand volume operation in SFW against limitations of NTFS
- Safeguarding the expand volume operation in SFW against limitations of ReFS
- Shrink a dynamic volume
- Dynamic LUN expansion
- Basic disk and volume tasks
- Automatic discovery of SSD devices and manual classification as SSD
- Disk media types
- Supported Solid-State Devices
- Icon for SSD
- Enclosure and VDID for automatically discovered On-Host Fusion-IO disks
- Enclosure and VDID for automatically discovered On-Host Intel disks
- Enclosure and VDID for automatically discovered Violin disks
- Classifying disks as SSD
- Limitations for classifying SSD devices
- Volume Manager space allocation is SSD aware
- Dealing with disk groups
- Disk groups overview
- Delete a dynamic disk group
- Upgrading the dynamic disk group version
- Converting a Microsoft Disk Management Disk Group
- Importing a dynamic disk group to a cluster disk group
- Rename a dynamic disk group
- Detaching and attaching dynamic disks
- Importing and deporting dynamic disk groups
- Importing a cloned disk group
- Partitioned shared storage with private dynamic disk group protection
- Dynamic disk group properties
- Troubleshooting problems with dynamic disk groups
- Fast failover in clustered environments
- iSCSI SAN support
- Settings for monitoring objects
- Overview
- Event monitoring and notification
- Event notification
- Disk monitoring
- Capacity monitoring
- Configuring Automatic volume growth
- SMTP configuration for email notification
- Standard features for adding fault tolerance
- Performance tuning
- FlashSnap
- FlashSnap overview
- FlashSnap components
- FastResync
- Snapshot commands
- Dynamic Disk Group Split and Join
- About Dynamic Disk Group Split and Join
- Dynamic disk group split
- Recovery for the split command
- Dynamic disk group join
- Using Dynamic Disk Group Split and Join with a cluster on shared storage
- Limitations when using dynamic disk group split and join with Volume Replicator
- Dynamic Disk Group Split and Join troubleshooting tips
- CLI FlashSnap commands
- Fast File Resync
- Volume Shadow Copy Service (VSS)
- Using the VSS snapshot wizards with Enterprise Vault
- Using the VSS snapshot wizards with Microsoft SQL
- Copy on Write (COW)
- Using the VSS COW snapshot wizards with Microsoft SQL
- Configuring data caching with SmartIO
- About SmartIO
- Typical deployment scenarios
- How SmartIO works
- SmartIO benefits
- SmartIO limitations
- About cache area
- About SmartIO caching support
- Configuring SmartIO
- Frequently asked questions about SmartIO
- How to configure a volume to use a non-default cache area?
- What is write-through I/O caching?
- Does SmartIO with SFW support write caching?
- Are there any logs that I can refer to, if caching fails for a particular volume?
- I have deleted a cache area, but the disk is still present in the Cachepool. How can I remove it from the Cachepool?
- Is the VxVM cached data persistent?
- Is an application's performance affected if the cache device becomes inaccessible while caching is enabled?
- Are there any tools available to measure SmartIO performance?
- Will there be a performance drop after vMotion?
- Will the cache area recreation fail if the assigned SmartDisk has insufficient space?
- If a cache area recreation is in progress in a VMware environment with vMotion, does it affect sfcache operations?
- Does SmartIO continue to use the previous cache area if the VM is moved back to the previous host?
- How does SmartIO behave if the vxsvc service fails during vMotion?
- Dynamic Multi-Pathing
- Configuring Cluster Volume Manager (CVM)
- Overview
- Configuring a CVM cluster
- Administering CVM
- Configuring CVM links for multi-subnet cluster networks
- Access modes for cluster-shared volumes
- Storage disconnectivity and CVM disk detach policy
- Unconfiguring a CVM cluster
- Command shipping
- About I/O Fencing
- Administering site-aware allocation for campus clusters
- SFW for Hyper-V virtual machines
- Introduction to Storage Foundation solutions for Hyper-V environments
- Live migration support for SFW dynamic disk group
- About implementing Hyper-V virtual machine live migration on SFW storage
- Tasks for deploying live migration support for Hyper-V virtual machines
- Installing Windows Server
- Preparing the host machines
- Installing the SFW option for Microsoft failover cluster option
- Using the SFW Configuration Wizard for Microsoft Failover Cluster for Hyper-V live migration support
- Configuring the SFW storage
- Creating a virtual machine service group
- Setting the dependency of the virtual machine on the VMDg resource
- Administering storage migration for SFW and Hyper-V virtual machine volumes
- About storage migration
- About performance tunables for storage migration
- Setting performance tunables for storage migration
- About performing online storage migration
- Storage migration limitations
- About changing the layout while performing volume migration
- Migrating volumes belonging to SFW dynamic disk groups
- Migrating volumes belonging to Hyper-V virtual machines
- Migrating data from SFW dynamic disks of one enclosure to another
- Converting your existing Hyper-V configuration to live migration supported configuration
- Optional Storage Foundation features for Hyper-V environments
- Microsoft Failover Clustering support
- Configuring a quorum in a Microsoft Failover Cluster
- Implementing disaster recovery with Volume Replicator
- Volume encryption
- Secure file system (SecureFS) for protection against ransomware
- Troubleshooting and recovery
- Overview
- Using disk and volume status information
- SFW error symbols
- Resolving common problem situations
- Bring an offline dynamic disk back to an imported state
- Bring a basic disk back to an online state
- Remove a disk from the computer
- Bring a foreign disk back to an online state
- Bring a basic volume back to a healthy state
- Bring a dynamic volume back to a healthy state
- Repair a volume with degraded data after moving disks between computers
- Deal with a provider error on startup
- Commands or procedures used in troubleshooting and recovery
- Refresh command
- Rescan command
- Replace disk command
- Merge foreign disk command
- Reactivate disk command
- Reactivate volume command
- Repair volume command for dynamic RAID-5 volumes
- Repair volume command for dynamic mirrored volumes
- Starting and stopping the Storage Foundation Service
- Accessing the CLI history
- Additional troubleshooting issues
- Disk issues
- Volume issues
- After a failover, VEA sometimes does not show the drive letter or mounted folder paths of a successfully-mounted volume
- Cannot create a RAID-5 volume
- Cannot create a mirror
- Cannot extend a volume
- When creating a spanned volume over multiple disks within a disk group, you cannot customize the size of subdisks on each disk
- Disk group issues
- Sometimes, the create dynamic disk group operation fails even if the disk is connected to a shared bus
- Unknown group appears after upgrading a basic disk to dynamic and immediately deporting its dynamic disk group
- Cannot use SFW disk groups in disk management after uninstalling InfoScale Storage management software
- After uninstalling and reinstalling InfoScale Storage management software, the private dynamic disk group protection is removed
- Cannot import a cluster dynamic disk group or a secondary disk group with private dynamic disk group protection when SCSI reservations have not been released
- Connection issues
- Issues related to boot or restart
- During restart, a message may appear about a "Corrupt drive" and suggest that you run autocheck
- Error that the boot device is inaccessible, bugcheck 7B
- Error message "vxboot- failed to auto-import disk group repltest_dg. all volumes of the disk group are not available."
- Error message "Bugcheck 7B, Inaccessible Boot Device"
- When Attempting to Boot from a Stale or Damaged Boot Plex
- Cluster issues
- Dynamic Multi-Pathing issues
- vxsnap issues
- Other issues
- Live migration fails if VM VHD is hosted on an SFW volume mounted as a folder mount
- Disk group deletion fails if ReFS volume is marked as read-only
- ReFS volume deletion from VEA GUI fails if Symantec Endpoint Protection (SEP) is installed
- An option is grayed out
- Disk view on a mirrored volume does not display the DCO volume
- CVM issues
- After a storage disconnect, unable to bring volume resources online on the CVM cluster nodes
- Error may occur while unconfiguring a node from CVM cluster
- Shutdown of all the nodes except one causes CVM to hang
- Sometimes, CSDG Deport causes Master node to hang due to IRP getting stuck in QLogic driver
- Unknown disk groups seen on nodes after splitting a cluster-shared disk group into cluster disk groups from Slave node
- In some cases, missing disks are seen on target Secondary dynamic disk groups after splitting a cluster-shared disk group from Slave node
- Cannot stop VxSVC if SFW resources are online on the node
- Cluster-shared volume fails to come online on Slave if a stale CSDG of the same name is present on it
- CVM does not start if all cluster nodes are shut down and then any of the nodes are not restarted
- Incorrect errors shown while creating a CSDG if Volume Manager Shared Volume is not registered
- After splitting or joining disk group having mirrored volume with DRL, VEA GUI shows incorrect volume file system if volumes move to another disk group
- Enclosure-level storage migration fails, but adds disks if a cluster-shared volume is offline
- Volume Manager Shared Volume resource fails to come online or cannot be deleted from Failover Cluster Manager
- Sometimes, source cluster-shared volumes are missing after joining two cluster-shared disk groups
- If private CVM links are removed, then nodes may remain out of cluster after network reconnect
- Format dialog box appears after storage disconnect
- Volume Manager Shared Volume resources fail to come online on failover nodes if VxSVC is stopped before stopping clussvc
- One or more nodes have invalid configuration or are not running or reachable
- After a node crash or network disconnect, volume resources fail over to another node, but the drive letters are left behind mounted on the failing node even after it rejoins the cluster successfully
- Shutdown of the Master node in a CVM cluster causes the Slave nodes to hang in the "Joining" state while joining the new Master
- CVM stops if Microsoft Failover Clustering and CVM cluster networks are not in sync because of multiple, independent network failures or disconnect
- Restarting CVM
- Administering CVM using the CLI
- Tuning the VDS software provider logging
- Appendix A. Command line interface
- Overview of the command line interface
- vxclustadm
- vxvol
- vxdg
- vxclus
- vxdisk
- vxassist
- vxassist make
- vxassist growby
- vxassist querymax
- vxassist shrinkby
- vxassist shrinkabort
- vxassist mirror
- vxassist break
- vxassist remove
- vxassist delete
- vxassist shred
- vxassist addlog
- vxassist online (read/write)
- vxassist offline
- vxassist prepare
- vxassist snapshot
- vxassist snapback
- vxassist snapclear
- vxassist snapabort
- vxassist rescan
- vxassist refresh
- vxassist resetbus
- vxassist version
- vxassist (Windows-specific)
- vxevac
- vxsd
- vxstat
- vxtask
- vxedit
- vxunreloc
- vxdmpadm
- vxdmpadm dsminfo
- vxdmpadm arrayinfo
- vxdmpadm deviceinfo
- vxdmpadm pathinfo
- vxdmpadm arrayperf
- vxdmpadm deviceperf
- vxdmpadm pathperf
- vxdmpadm allperf
- vxdmpadm iostat
- vxdmpadm cleardeviceperf
- vxdmpadm cleararrayperf
- vxdmpadm clearallperf
- vxdmpadm setdsmscsi3
- vxdmpadm setarrayscsi3
- vxdmpadm setattr dsm
- vxdmpadm setattr array
- vxdmpadm setattr device
- vxdmpadm setattr path
- vxdmpadm set isislog
- vxdmpadm rescan
- vxdmpadm disk list
- vxdmpadm getdsmattrib
- vxdmpadm getmpioparam
- vxdmpadm setmpioparam
- vxcbr
- vxsnap
- vxfsync
- vxscrub
- vxverify
- vxprint
- vxschadm
- sfcache
- Tuning SFW
- Appendix B. VDID details for arrays
- Appendix C. InfoScale event logging
Additional considerations for SFW Microsoft Failover Clustering support
This section contains additional information that is important in working with Microsoft Failover Clustering and Storage Foundation for Windows.
Note the following considerations:
When a cluster disk group resource is offline or a cluster disk group that is not a failover cluster resource is in a Deported state, it is not protected from access by other computers. For maximum data protection, keep Volume Manager Disk Group resources online. Note that the SFW disk group resources still retain the "Volume Manager" name.
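The disk group resources can be brought online from the Failover Cluster Manager snap-in or, as an illustrative sketch on recent Windows Server releases, with the FailoverClusters PowerShell module (the resource name "DG1-VMDg" is a placeholder):
Import-Module FailoverClusters
Start-ClusterResource -Name "DG1-VMDg"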
When you use the Windows Server Failover Cluster Manager snap-in to create a disk group resource, the Volume Manager Disk Group Parameters screen might not list all the available Storage Foundation for Windows cluster disk groups in the drop-down list. If this happens, exit the New Resource wizard and, in the Failover Cluster Manager snap-in, select the cluster group to which the resource is to be assigned. Move that cluster group to the cluster node where the Storage Foundation for Windows cluster disk group is currently online, and then create the Storage Foundation for Windows disk group resource.
Under the following circumstances, the VEA Disk View may not reflect the latest state of the disk(s) until a refresh is performed:
When you change the state of a cluster disk resource on one node and try to view the disks under this resource from another node on the same cluster.
When you change the state of a cluster disk resource on one node and try to view the disks under this resource from a remote computer.
SFW support of the Microsoft Failover Clustering environment allows the selection of SCSI-2 reservation mode or SCSI-3 reservation mode. Selecting the type of SCSI support for the Microsoft Failover Clustering environment is done by using the System Settings portion of the SFW Control Panel.
When selecting the type of SCSI support in a Microsoft Failover Clustering environment, it is important to know whether your storage arrays support SCSI-3. SFW SCSI-3 clustering support does not let you mix storage arrays that support SCSI-3 with storage arrays that do not. If your environment contains such a mix of arrays, you must use SFW SCSI-2 clustering support. Refer to the HCL for arrays that support SCSI-3.
Note:
Arctera maintains a hardware compatibility list (HCL) for InfoScale for Windows products on the Arctera support website. Check the HCL for details about your storage arrays before selecting the type of SCSI support in a Microsoft Failover Clustering environment.
After selecting the type of SCSI support, you must issue the following CLI commands to complete the setting on your system:
net stop vxsvc
net start vxsvc
Note:
If a cluster disk group is imported on the system, you must deport or move the cluster disk group to another system before issuing these CLI commands.
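As an illustrative sketch only (the exact syntax is covered in the vxdg reference in Appendix A; the disk group name DG1 is a placeholder), a cluster disk group can be deported from the command line before the service is restarted:
vxdg -gDG1 deport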
If SFW SCSI-2 clustering support is selected and Active/Active load balancing is desired, the SCSI-3 Persistent Group Reservations (SCSI-3 PGR) support mode must be enabled for the DMP DSM.
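This setting can also be made from the command line. The following is an illustrative sketch only; the exact parameter names are documented under vxdmpadm setdsmscsi3 in Appendix A, and <DSMName> is a placeholder for the DSM in use:
vxdmpadm setdsmscsi3 scsi3support=1 dsmName=<DSMName>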
A cluster dynamic disk group that is part of the cluster resources cannot be a source disk group for a join command. However, it can be a target disk group for the command.
Change in Bringing a Two-Disk Cluster Group Online
In earlier versions of Volume Manager for Windows, it was possible to bring a two-disk cluster disk group online when only one disk was available. If a cluster lost all network communication, this allowed the disk group to be brought online on two cluster nodes simultaneously, with each node owning a single disk, possibly resulting in data loss or a partitioned cluster. Although this situation is unlikely for most customers, the consequences if it does occur can be severe. Recent versions of Volume Manager no longer support this behavior: a two-disk cluster disk group can be brought online only if it satisfies the normal majority algorithm.
The normal majority algorithm is (n/2 + 1). For a two-disk group, the majority is 2/2 + 1 = 2 disks, so both disks must be available for the disk group to come online.
You are not allowed to deport a cluster disk group that is also a Volume Manager disk group resource for Microsoft Failover Clustering.
Connecting to a Cluster Node
If you connect to a computer from the VEA GUI using the virtual name or the virtual IP address, the VEA GUI displays the computer name of the cluster node that currently owns the virtual name and IP resources. Therefore, using the virtual name or virtual IP address to connect to and administer a cluster node through SFW HA is not recommended.
Instead, use the host name or the IP address of the cluster node.
Dynamic Multi-Pathing (DMP) does not support using a basic disk as a cluster resource under Microsoft Failover Clustering.
Failover may not function properly when using Dynamic Multi-Pathing with a Microsoft Failover Clustering basic disk cluster resource. Refer to Tech Note 251662 on the Arctera Support site for details.
If you want to use Dynamic Multi-Pathing with SFW and Microsoft Failover Clustering, you must convert any Microsoft Failover Clustering basic disk cluster resources to dynamic disk cluster resources before activating Dynamic Multi-Pathing. The initial setup of Microsoft Failover Clustering requires that you use a basic disk as the quorum disk. Once InfoScale Storage is installed, you should upgrade the basic disk to dynamic by including it in a dynamic cluster disk group and then convert the quorum resource from a basic disk resource to a dynamic disk resource.
Note:
DMP DSMs do not support an Active/Active setting in a Microsoft Failover Clustering environment when a quorum disk is a basic disk.
Cluster dynamic disk groups that contain iSCSI disks are not set up for persistent login on all nodes in the cluster.
SFW ensures that the iSCSI targets of cluster dynamic disk groups that contain iSCSI disks are configured for persistent login; if persistent login is not configured for a target, SFW configures it automatically. However, this automatic configuration takes place only on the node where the cluster dynamic disk group was created. The other nodes in the cluster are not enabled for persistent login, so you must set up persistent login manually on each of the other nodes, as shown in the sketch below.
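On each of the other nodes, the persistent login can be configured through the iSCSI Initiator control panel (by adding the connection to the Favorite Targets list) or, as an illustrative sketch on recent Windows Server releases, with the iSCSI PowerShell module (the target IQN shown is a placeholder):
Connect-IscsiTarget -NodeAddress "iqn.2001-05.com.example:storage.target01" -IsPersistent $true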
Copying the policy file, VxVolPolicies.xml, to Another Node
If the second node is configured the same as the first and if the first node's policy settings for Automatic Volume Growth are to be maintained on the second node, you need to copy the VxVolPolicies.xml file of the first node to the second node. Copy the VxVolPolicies.xml file to the same path location on the second node as its location on the first node. The default path of the VxVolPolicies.xml file is Documents and Settings\All Users\Application Data\Veritas.
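For example, assuming the file is in its default location on the system drive and the second node exposes the administrative C$ share (the node name NODE2 is a placeholder):
Copy-Item "C:\Documents and Settings\All Users\Application Data\Veritas\VxVolPolicies.xml" -Destination "\\NODE2\C$\Documents and Settings\All Users\Application Data\Veritas\"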
More information about the policy file is available.
More information about using SFW and Microsoft Failover Clustering in a shared cluster environment with the FlashSnap off-host backup procedure is available.
If you install the Microsoft Failover Clustering feature on a server on which InfoScale Storage for Windows is already installed, then you must manually restart the Veritas Enterprise Administrator service (VxSvc) by running the following commands:
net stop vxsvc
net start vxsvc