InfoScale™ 9.0 Storage Foundation Administrator's Guide - Windows
- Overview
- Setup and configuration
- Setup and configuration overview
- Function overview
- About the client console for Storage Foundation
- Recommendations for caching-enabled disks
- Review the Veritas Enterprise Administrator GUI
- Configure basic disks (Optional)
- About creating dynamic disk groups
- About creating dynamic volumes
- Set desired preferences
- Protecting your SFW configuration with vxcbr
- Using the GUI to manage your storage
- Working with disks, partitions, and volumes
- Overview
- Adding storage
- Disk tasks
- Remove a disk from a dynamic disk group
- Remove a disk from the computer
- Offline a disk
- Update disk information by using rescan
- Set disk usage
- Evacuate disk
- Replace disk
- Changing the internal name of a disk
- Renaming an enclosure
- Work with removable media
- Working with disks that support thin provisioning
- View disk properties
- Veritas Disk ID (VDID)
- General Partition/Volume tasks
- Delete a volume
- Delete a partition or logical drive
- Shredding a volume
- Refresh drive letter, file system, and partition or volume information
- Add, change, or remove a drive letter or path
- Renaming a mirror (plex)
- Changing the internal name of a volume
- Mount a volume at an empty folder (Drive path)
- View all drive paths
- Format a partition or volume with the file system command
- Cancel format
- Change file system options on a partition or volume
- Set a volume to read only
- Check partition or volume properties
- Expand a dynamic volume
- Expand a partition
- Safeguarding the expand volume operation in SFW against limitations of NTFS
- Safeguarding the expand volume operation in SFW against limitations of ReFS
- Shrink a dynamic volume
- Dynamic LUN expansion
- Basic disk and volume tasks
- Automatic discovery of SSD devices and manual classification as SSD
- Disk media types
- Supported Solid-State Devices
- Icon for SSD
- Enclosure and VDID for automatically discovered On-Host Fusion-IO disks
- Enclosure and VDID for automatically discovered On-Host Intel disks
- Enclosure and VDID for automatically discovered Violin disks
- Classifying disks as SSD
- Limitations for classifying SSD devices
- Volume Manager space allocation is SSD aware
- Dealing with disk groups
- Disk groups overview
- Delete a dynamic disk group
- Upgrading the dynamic disk group version
- Converting a Microsoft Disk Management Disk Group
- Importing a dynamic disk group to a cluster disk group
- Rename a dynamic disk group
- Detaching and attaching dynamic disks
- Importing and deporting dynamic disk groups
- Importing a cloned disk group
- Partitioned shared storage with private dynamic disk group protection
- Dynamic disk group properties
- Troubleshooting problems with dynamic disk groups
- Fast failover in clustered environments
- iSCSI SAN support
- Settings for monitoring objects
- Overview
- Event monitoring and notification
- Event notification
- Disk monitoring
- Capacity monitoring
- Configuring Automatic volume growth
- SMTP configuration for email notification
- Standard features for adding fault tolerance
- Performance tuning
- FlashSnap
- FlashSnap overview
- FlashSnap components
- FastResync
- Snapshot commands
- Dynamic Disk Group Split and Join
- About Dynamic Disk Group Split and Join
- Dynamic disk group split
- Recovery for the split command
- Dynamic disk group join
- Using Dynamic Disk Group Split and Join with a cluster on shared storage
- Limitations when using dynamic disk group split and join with Volume Replicator
- Dynamic Disk Group Split and Join troubleshooting tips
- CLI FlashSnap commands
- Fast File Resync
- Volume Shadow Copy Service (VSS)
- Using the VSS snapshot wizards with Enterprise Vault
- Using the VSS snapshot wizards with Microsoft SQL
- Copy on Write (COW)
- Using the VSS COW snapshot wizards with Microsoft SQL
- Configuring data caching with SmartIO
- About SmartIO
- Typical deployment scenarios
- How SmartIO works
- SmartIO benefits
- SmartIO limitations
- About cache area
- About SmartIO caching support
- Configuring SmartIO
- Frequently asked questions about SmartIO
- How to configure a volume to use a non-default cache area?
- What is write-through I/O caching?
- Does SmartIO with SFW support write caching?
- Are there any logs that I can refer to, if caching fails for a particular volume?
- I have deleted a cache area, but the disk is still present in the Cachepool. How can I remove it from the Cachepool?
- Is the VxVM cached data persistent?
- Is an application's performance affected if the cache device becomes inaccessible while caching is enabled?
- Are there any tools available to measure SmartIO performance?
- Will there be a performance drop after vMotion?
- Will the cache area recreation fail, if the SmartDisk assigned has insufficient space?
- A cache area recreation is in process in a VMware environment with vMotion, does it affect the sfcache operations?
- Does SmartIO continue to use the previous cache area if the VM is moved back to the previous host?
- How does SmartIO behave if the vxsvc service fails during vMotion?
- Dynamic Multi-Pathing
- Configuring Cluster Volume Manager (CVM)
- Overview
- Configuring a CVM cluster
- Administering CVM
- Configuring CVM links for multi-subnet cluster networks
- Access modes for cluster-shared volumes
- Storage disconnectivity and CVM disk detach policy
- Unconfiguring a CVM cluster
- Command shipping
- About I/O Fencing
- Administering site-aware allocation for campus clusters
- SFW for Hyper-V virtual machines
- Introduction to Storage Foundation solutions for Hyper-V environments
- Live migration support for SFW dynamic disk group
- About implementing Hyper-V virtual machine live migration on SFW storage
- Tasks for deploying live migration support for Hyper-V virtual machines
- Installing Windows Server
- Preparing the host machines
- Installing the SFW option for Microsoft failover cluster option
- Using the SFW Configuration Wizard for Microsoft Failover Cluster for Hyper-V live migration support
- Configuring the SFW storage
- Creating a virtual machine service group
- Setting the dependency of the virtual machine on the VMDg resource
- Administering storage migration for SFW and Hyper-V virtual machine volumes
- About storage migration
- About performance tunables for storage migration
- Setting performance tunables for storage migration
- About performing online storage migration
- Storage migration limitations
- About changing the layout while performing volume migration
- Migrating volumes belonging to SFW dynamic disk groups
- Migrating volumes belonging to Hyper-V virtual machines
- Migrating data from SFW dynamic disks of one enclosure to another
- Converting your existing Hyper-V configuration to live migration supported configuration
- Optional Storage Foundation features for Hyper-V environments
- Microsoft Failover Clustering support
- Configuring a quorum in a Microsoft Failover Cluster
- Implementing disaster recovery with Volume Replicator
- Volume encryption
- Secure file system (SecureFS) for protection against ransomware
- Troubleshooting and recovery
- Overview
- Using disk and volume status information
- SFW error symbols
- Resolving common problem situations
- Bring an offline dynamic disk back to an imported state
- Bring a basic disk back to an online state
- Remove a disk from the computer
- Bring a foreign disk back to an online state
- Bring a basic volume back to a healthy state
- Bring a dynamic volume back to a healthy state
- Repair a volume with degraded data after moving disks between computers
- Deal with a provider error on startup
- Commands or procedures used in troubleshooting and recovery
- Refresh command
- Rescan command
- Replace disk command
- Merge foreign disk command
- Reactivate disk command
- Reactivate volume command
- Repair volume command for dynamic RAID-5 volumes
- Repair volume command for dynamic mirrored volumes
- Starting and stopping the Storage Foundation Service
- Accessing the CLI history
- Additional troubleshooting issues
- Disk issues
- Volume issues
- After a failover, VEA sometimes does not show the drive letter or mounted folder paths of a successfully-mounted volume
- Cannot create a RAID-5 volume
- Cannot create a mirror
- Cannot extend a volume
- When creating a spanned volume over multiple disks within a disk group, you cannot customize the size of subdisks on each disk
- Disk group issues
- Sometimes, creating dynamic disk group operation fails even if disk is connected to a shared bus
- Unknown group appears after upgrading a basic disk to dynamic and immediately deporting its dynamic disk group
- Cannot use SFW disk groups in disk management after uninstalling InfoScale Storage management software
- After uninstalling and reinstalling InfoScale Storage management software, the private dynamic disk group protection is removed
- Cannot import a cluster dynamic disk group or a secondary disk group with private dynamic disk group protection when SCSI reservations have not been released
- Connection issues
- Issues related to boot or restart
- During restart, a message may appear about a "Corrupt drive" and suggest that you run autocheck
- Error that the boot device is inaccessible, bugcheck 7B
- Error message "vxboot- failed to auto-import disk group repltest_dg. all volumes of the disk group are not available."
- Error message "Bugcheck 7B, Inaccessible Boot Device"
- When Attempting to Boot from a Stale or Damaged Boot Plex
- Cluster issues
- Dynamic Multi-Pathing issues
- vxsnap issues
- Other issues
- Live migration fails if VM VHD is hosted on an SFW volume mounted as a folder mount
- Disk group deletion fails if ReFS volume is marked as read-only
- ReFS volume deletion from VEA GUI fails if Symantec Endpoint Protection (SEP) is installed
- An option is grayed out
- Disk view on a mirrored volume does not display the DCO volume
- CVM issues
- After a storage disconnect, unable to bring volume resources online on the CVM cluster nodes
- Error may occur while unconfiguring a node from CVM cluster
- Shutdown of all the nodes except one causes CVM to hang
- Sometimes, CSDG Deport causes Master node to hang due to IRP getting stuck in QLogic driver
- Unknown disk groups seen on nodes after splitting a cluster-shared disk group into cluster disk groups from Slave node
- In some cases, missing disks are seen on target Secondary dynamic disk groups after splitting a cluster-shared disk group from Slave node
- Cannot stop VxSVC if SFW resources are online on the node
- Cluster-shared volume fails to come online on Slave if a stale CSDG of the same name is present on it
- CVM does not start if all cluster nodes are shut down and then any of the nodes are not restarted
- Incorrect errors shown while creating a CSDG if Volume Manager Shared Volume is not registered
- After splitting or joining disk group having mirrored volume with DRL, VEA GUI shows incorrect volume file system if volumes move to another disk group
- Enclosure-level storage migration fails, but adds disks if a cluster-shared volume is offline
- Volume Manager Shared Volume resource fails to come online or cannot be deleted from Failover Cluster Manager
- Sometimes, source cluster-shared volumes are missing after joining two cluster-shared disk groups
- If private CVM links are removed, then nodes may remain out of cluster after network reconnect
- Format dialog box appears after storage disconnect
- Volume Manager Shared Volume resources fail to come online on failover nodes if VxSVC is stopped before stopping clussvc
- One or more nodes have invalid configuration or are not running or reachable
- After node crash or network disconnect, volume resources failover to other node but the drive letters are left behind mounted on the failing node even after it joins cluster successfully
- Shutdown of Master node in a CVM cluster makes the Slave nodes to hang in "Joining" state while joining to new Master
- CVM stops if Microsoft Failover Clustering and CVM cluster networks are not in sync because of multiple, independent network failures or disconnect
- Restarting CVM
- Administering CVM using the CLI
- Tuning the VDS software provider logging
- Appendix A. Command line interface
- Overview of the command line interface
- vxclustadm
- vxvol
- vxdg
- vxclus
- vxdisk
- vxassist
- vxassist make
- vxassist growby
- vxassist querymax
- vxassist shrinkby
- vxassist shrinkabort
- vxassist mirror
- vxassist break
- vxassist remove
- vxassist delete
- vxassist shred
- vxassist addlog
- vxassist online (read/write)
- vxassist offline
- vxassist prepare
- vxassist snapshot
- vxassist snapback
- vxassist snapclear
- vxassist snapabort
- vxassist rescan
- vxassist refresh
- vxassist resetbus
- vxassist version
- vxassist (Windows-specific)
- vxevac
- vxsd
- vxstat
- vxtask
- vxedit
- vxunreloc
- vxdmpadm
- vxdmpadm dsminfo
- vxdmpadm arrayinfo
- vxdmpadm deviceinfo
- vxdmpadm pathinfo
- vxdmpadm arrayperf
- vxdmpadm deviceperf
- vxdmpadm pathperf
- vxdmpadm allperf
- vxdmpadm iostat
- vxdmpadm cleardeviceperf
- vxdmpadm cleararrayperf
- vxdmpadm clearallperf
- vxdmpadm setdsmscsi3
- vxdmpadm setarrayscsi3
- vxdmpadm setattr dsm
- vxdmpadm setattr array
- vxdmpadm setattr device
- vxdmpadm setattr path
- vxdmpadm set isislog
- vxdmpadm rescan
- vxdmpadm disk list
- vxdmpadm getdsmattrib
- vxdmpadm getmpioparam
- vxdmpadm setmpioparam
- vxcbr
- vxsnap
- vxfsync
- vxscrub
- vxverify
- vxprint
- vxschadm
- sfcache
- Tuning SFW
- Appendix B. VDID details for arrays
- Appendix C. InfoScale event logging
The left pane
In the System perspective, the left pane shows a tree view of the system and storage objects detected by the Storage Foundation for Windows software. The tree view displays the hierarchical relationships of the objects. The node at the top of the tree represents the Storage Foundation for Windows client that you are connected to; in this example, the client is connected to "localhost." The objects under this node are the managed servers that the client is connected to and managing; in this example, there is a single managed server node, a server named "jktestmachine."
Below each managed server icon are the following object categories:
Default | Cache, CD-ROMs, Disk groups, Disks, Enclosures, Saved Queries, Volumes |
Systems configured for support of Microsoft multipath input/output (Microsoft MPIO) solution | DMP DSMs |
Systems running VSS-aware applications, such as Microsoft SQL Server | Applications |
The tree view can be expanded by clicking on a plus sign (+) in front of an object icon. When the tree view is fully expanded, all the objects have a minus (-) sign in front of them. By clicking on a minus sign at any level, you can collapse an object down to that level. The fully collapsed tree shows only the top-level object.
Right-clicking on an object in the tree view brings up a context menu that is appropriate to that object.
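The expand/collapse behavior of the tree view can be illustrated with a small sketch. This is a toy model only, not any SFW or VEA interface; the node names ("localhost", "jktestmachine") and the object categories come from the examples in this section, and the `+`/`-` markers mirror the expand and collapse indicators described above.

```python
from dataclasses import dataclass, field

@dataclass
class TreeNode:
    """Toy model of a VEA left-pane tree node."""
    name: str
    children: list["TreeNode"] = field(default_factory=list)
    expanded: bool = False

    def add(self, child_name: str) -> "TreeNode":
        child = TreeNode(child_name)
        self.children.append(child)
        return child

    def render(self, depth: int = 0) -> list[str]:
        # "+" marks a collapsed node that can be expanded; "-" marks an
        # expanded node (or a leaf with nothing further to expand).
        marker = "-" if self.expanded or not self.children else "+"
        lines = [f"{'  ' * depth}{marker} {self.name}"]
        if self.expanded:
            for child in self.children:
                lines.extend(child.render(depth + 1))
        return lines

# Client node at the top, one managed server beneath it, and the
# object categories for a default system beneath the server.
client = TreeNode("localhost", expanded=True)
server = client.add("jktestmachine")
server.expanded = True
for category in ("Cache", "CD-ROMs", "Disk groups", "Disks",
                 "Enclosures", "Saved Queries", "Volumes"):
    server.add(category)

print("\n".join(client.render()))
```

Setting `expanded = False` on the server node collapses everything beneath it, leaving only the top-level objects visible, which matches the fully collapsed tree described above.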
The following is additional information about the storage object categories under each managed server node.
Cache | Cache area is the storage space allocated on the SSD(s) for caching. It is used to store cache data corresponding to any caching-enabled volume. |
CD-ROMs | Any CD-ROM drives recognized by Storage Foundation for Windows as existing on the computer you are managing. |
Disk groups | A disk group is a grouping of disks within Storage Foundation for Windows. The two types of disk groups are basic and dynamic. See Disk groups overview. |
Disks | Disks are physical disks or logical disks recognized by the Windows operating system. Depending on the type of disk, a disk may be enabled to support thin provisioning and storage reclamation. Thin provisioning is a technology to efficiently allocate storage for a disk. Thin provisioning allocates physical storage only when actual data is written to the disk. Some disks that are enabled for thin provisioning also provide storage reclamation. Storage reclamation is the operation that decreases the physical storage allocation once data is deleted from the disk. A disk that supports thin provisioning is represented with a disk icon that includes a red colored sector. A disk that supports thin provisioning and storage reclamation is represented with a disk icon that includes a green colored sector with an asterisk (*). |
Enclosures | Enclosures are physical objects that contain one or more physical disks. For example, the disks may be contained in arrays or JBODs. Also the disks may be internal to your server. |
Saved Queries | Queries that were saved with the Search feature of SFW. This node displays the results of any saved queries. See Search. |
Volumes | A volume is a logical entity that is made up of portions of one or more physical disks. A volume can be formatted with a file system and can be accessed by a drive letter or a mount point. Storage Foundation for Windows works with basic and dynamic volumes. A volume may be either read only or read/write. The icons for read-only volumes include a picture of a padlock to differentiate them from read/write volumes. Not all commands available in Storage Foundation for Windows for read/write volumes are enabled for read-only volumes because specific commands require write access to the volume. Check the access mode of a particular volume if a command is not available. |
DMP DSMs | On the servers that are configured for support of the Microsoft multipath input/output (Microsoft MPIO) solution, a node for DMP DSMs appears. Fully expanding the DMP DSMs node displays the DSMs in use, the arrays controlled by each DSM, and the disks contained in each array. These nodes let you manage the settings for the arrays and disks configured for Microsoft MPIO. See DMP overview. |
Applications | On the servers that are running VSS-aware applications, such as Microsoft SQL Server, a node for Applications appears. SFW provides an option of taking snapshots with Volume Shadow Copy Service (VSS). The VSS snapshot method lets you take snapshots of VSS-aware applications, such as Microsoft SQL Server, while the application files are open. When VSS-aware applications do not exist, the snapshot is taken with the SFW FlashSnap method (VM method). |
iSCSI | On the servers that are connected to an iSCSI SAN, additional iSCSI-related nodes may appear. |
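The thin provisioning behavior described for the Disks category above can be sketched in a few lines. This is a toy model under stated assumptions, not any SFW interface: physical blocks are allocated only when data is actually written, and storage reclamation frees them once the data is deleted.

```python
class ThinDisk:
    """Toy model of a thin-provisioned disk: physical blocks are
    allocated lazily on write and reclaimed on delete."""

    def __init__(self, virtual_size_blocks: int):
        self.virtual_size = virtual_size_blocks  # size advertised to the host
        self.allocated = {}                      # block number -> data

    def write(self, block: int, data: bytes) -> None:
        if not 0 <= block < self.virtual_size:
            raise IndexError("block outside the virtual disk")
        self.allocated[block] = data             # physical space consumed only now

    def delete(self, block: int) -> None:
        # Storage reclamation: free the physical block once data is deleted.
        self.allocated.pop(block, None)

    def physical_usage(self) -> int:
        return len(self.allocated)               # blocks actually backed by storage


disk = ThinDisk(virtual_size_blocks=1_000_000)   # large virtual size...
disk.write(10, b"data")
disk.write(20, b"more")
print(disk.physical_usage())   # ...but only 2 blocks physically allocated
disk.delete(10)
print(disk.physical_usage())   # 1 block after reclamation
```

The point of the model is the gap between `virtual_size` and `physical_usage()`: a thin disk can advertise far more capacity than it consumes, and deleting data (on disks that also support reclamation) reduces the physical allocation again.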