InfoScale™ 9.0 Storage Foundation Administrator's Guide - Windows
- Overview
- Setup and configuration
- Setup and configuration overview
- Function overview
- About the client console for Storage Foundation
- Recommendations for caching-enabled disks
- Review the Veritas Enterprise Administrator GUI
- Configure basic disks (Optional)
- About creating dynamic disk groups
- About creating dynamic volumes
- Set desired preferences
- Protecting your SFW configuration with vxcbr
- Using the GUI to manage your storage
- Working with disks, partitions, and volumes
- Overview
- Adding storage
- Disk tasks
- Remove a disk from a dynamic disk group
- Remove a disk from the computer
- Offline a disk
- Update disk information by using rescan
- Set disk usage
- Evacuate disk
- Replace disk
- Changing the internal name of a disk
- Renaming an enclosure
- Work with removable media
- Working with disks that support thin provisioning
- View disk properties
- Veritas Disk ID (VDID)
- General Partition/Volume tasks
- Delete a volume
- Delete a partition or logical drive
- Shredding a volume
- Refresh drive letter, file system, and partition or volume information
- Add, change, or remove a drive letter or path
- Renaming a mirror (plex)
- Changing the internal name of a volume
- Mount a volume at an empty folder (Drive path)
- View all drive paths
- Format a partition or volume with the file system command
- Cancel format
- Change file system options on a partition or volume
- Set a volume to read only
- Check partition or volume properties
- Expand a dynamic volume
- Expand a partition
- Safeguarding the expand volume operation in SFW against limitations of NTFS
- Safeguarding the expand volume operation in SFW against limitations of ReFS
- Shrink a dynamic volume
- Dynamic LUN expansion
- Basic disk and volume tasks
- Automatic discovery of SSD devices and manual classification as SSD
- Disk media types
- Supported Solid-State Devices
- Icon for SSD
- Enclosure and VDID for automatically discovered On-Host Fusion-IO disks
- Enclosure and VDID for automatically discovered On-Host Intel disks
- Enclosure and VDID for automatically discovered Violin disks
- Classifying disks as SSD
- Limitations for classifying SSD devices
- Volume Manager space allocation is SSD aware
- Dealing with disk groups
- Disk groups overview
- Delete a dynamic disk group
- Upgrading the dynamic disk group version
- Converting a Microsoft Disk Management Disk Group
- Importing a dynamic disk group to a cluster disk group
- Rename a dynamic disk group
- Detaching and attaching dynamic disks
- Importing and deporting dynamic disk groups
- Importing a cloned disk group
- Partitioned shared storage with private dynamic disk group protection
- Dynamic disk group properties
- Troubleshooting problems with dynamic disk groups
- Fast failover in clustered environments
- iSCSI SAN support
- Settings for monitoring objects
- Overview
- Event monitoring and notification
- Event notification
- Disk monitoring
- Capacity monitoring
- Configuring Automatic volume growth
- SMTP configuration for email notification
- Standard features for adding fault tolerance
- Performance tuning
- FlashSnap
- FlashSnap overview
- FlashSnap components
- FastResync
- Snapshot commands
- Dynamic Disk Group Split and Join
- About Dynamic Disk Group Split and Join
- Dynamic disk group split
- Recovery for the split command
- Dynamic disk group join
- Using Dynamic Disk Group Split and Join with a cluster on shared storage
- Limitations when using dynamic disk group split and join with Volume Replicator
- Dynamic Disk Group Split and Join troubleshooting tips
- CLI FlashSnap commands
- Fast File Resync
- Volume Shadow Copy Service (VSS)
- Using the VSS snapshot wizards with Enterprise Vault
- Using the VSS snapshot wizards with Microsoft SQL
- Copy on Write (COW)
- Using the VSS COW snapshot wizards with Microsoft SQL
- Configuring data caching with SmartIO
- About SmartIO
- Typical deployment scenarios
- How SmartIO works
- SmartIO benefits
- SmartIO limitations
- About cache area
- About SmartIO caching support
- Configuring SmartIO
- Frequently asked questions about SmartIO
- How to configure a volume to use a non-default cache area?
- What is write-through I/O caching?
- Does SmartIO with SFW support write caching?
- Are there any logs that I can refer to, if caching fails for a particular volume?
- I have deleted a cache area, but the disk is still present in the Cachepool. How can I remove it from the Cachepool?
- Is the VxVM cached data persistent?
- Is an application's performance affected if the cache device becomes inaccessible while caching is enabled?
- Are there any tools available to measure SmartIO performance?
- Will there be a performance drop after vMotion?
- Will the cache area recreation fail, if the SmartDisk assigned has insufficient space?
- A cache area recreation is in process in a VMware environment with vMotion, does it affect the sfcache operations?
- Does SmartIO continue to use the previous cache area if the VM is moved back to the previous host?
- How does SmartIO behave if the vxsvc service fails during vMotion?
- Dynamic Multi-Pathing
- Configuring Cluster Volume Manager (CVM)
- Overview
- Configuring a CVM cluster
- Administering CVM
- Configuring CVM links for multi-subnet cluster networks
- Access modes for cluster-shared volumes
- Storage disconnectivity and CVM disk detach policy
- Unconfiguring a CVM cluster
- Command shipping
- About I/O Fencing
- Administering site-aware allocation for campus clusters
- SFW for Hyper-V virtual machines
- Introduction to Storage Foundation solutions for Hyper-V environments
- Live migration support for SFW dynamic disk group
- About implementing Hyper-V virtual machine live migration on SFW storage
- Tasks for deploying live migration support for Hyper-V virtual machines
- Installing Windows Server
- Preparing the host machines
- Installing the SFW option for Microsoft failover cluster
- Using the SFW Configuration Wizard for Microsoft Failover Cluster for Hyper-V live migration support
- Configuring the SFW storage
- Creating a virtual machine service group
- Setting the dependency of the virtual machine on the VMDg resource
- Administering storage migration for SFW and Hyper-V virtual machine volumes
- About storage migration
- About performance tunables for storage migration
- Setting performance tunables for storage migration
- About performing online storage migration
- Storage migration limitations
- About changing the layout while performing volume migration
- Migrating volumes belonging to SFW dynamic disk groups
- Migrating volumes belonging to Hyper-V virtual machines
- Migrating data from SFW dynamic disks of one enclosure to another
- Converting your existing Hyper-V configuration to live migration supported configuration
- Optional Storage Foundation features for Hyper-V environments
- Microsoft Failover Clustering support
- Configuring a quorum in a Microsoft Failover Cluster
- Implementing disaster recovery with Volume Replicator
- Volume encryption
- Secure file system (SecureFS) for protection against ransomware
- Troubleshooting and recovery
- Overview
- Using disk and volume status information
- SFW error symbols
- Resolving common problem situations
- Bring an offline dynamic disk back to an imported state
- Bring a basic disk back to an online state
- Remove a disk from the computer
- Bring a foreign disk back to an online state
- Bring a basic volume back to a healthy state
- Bring a dynamic volume back to a healthy state
- Repair a volume with degraded data after moving disks between computers
- Deal with a provider error on startup
- Commands or procedures used in troubleshooting and recovery
- Refresh command
- Rescan command
- Replace disk command
- Merge foreign disk command
- Reactivate disk command
- Reactivate volume command
- Repair volume command for dynamic RAID-5 volumes
- Repair volume command for dynamic mirrored volumes
- Starting and stopping the Storage Foundation Service
- Accessing the CLI history
- Additional troubleshooting issues
- Disk issues
- Volume issues
- After a failover, VEA sometimes does not show the drive letter or mounted folder paths of a successfully mounted volume
- Cannot create a RAID-5 volume
- Cannot create a mirror
- Cannot extend a volume
- When creating a spanned volume over multiple disks within a disk group, you cannot customize the size of subdisks on each disk
- Disk group issues
- Sometimes, the create dynamic disk group operation fails even if the disk is connected to a shared bus
- Unknown group appears after upgrading a basic disk to dynamic and immediately deporting its dynamic disk group
- Cannot use SFW disk groups in disk management after uninstalling InfoScale Storage management software
- After uninstalling and reinstalling InfoScale Storage management software, the private dynamic disk group protection is removed
- Cannot import a cluster dynamic disk group or a secondary disk group with private dynamic disk group protection when SCSI reservations have not been released
- Connection issues
- Issues related to boot or restart
- During restart, a message may appear about a "Corrupt drive" and suggest that you run autocheck
- Error that the boot device is inaccessible, bugcheck 7B
- Error message "vxboot- failed to auto-import disk group repltest_dg. all volumes of the disk group are not available."
- Error message "Bugcheck 7B, Inaccessible Boot Device"
- When Attempting to Boot from a Stale or Damaged Boot Plex
- Cluster issues
- Dynamic Multi-Pathing issues
- vxsnap issues
- Other issues
- Live migration fails if VM VHD is hosted on an SFW volume mounted as a folder mount
- Disk group deletion fails if ReFS volume is marked as read-only
- ReFS volume deletion from VEA GUI fails if Symantec Endpoint Protection (SEP) is installed
- An option is grayed out
- Disk view on a mirrored volume does not display the DCO volume
- CVM issues
- After a storage disconnect, unable to bring volume resources online on the CVM cluster nodes
- Error may occur while unconfiguring a node from CVM cluster
- Shutdown of all the nodes except one causes CVM to hang
- Sometimes, CSDG Deport causes Master node to hang due to IRP getting stuck in QLogic driver
- Unknown disk groups seen on nodes after splitting a cluster-shared disk group into cluster disk groups from Slave node
- In some cases, missing disks are seen on target Secondary dynamic disk groups after splitting a cluster-shared disk group from Slave node
- Cannot stop VxSVC if SFW resources are online on the node
- Cluster-shared volume fails to come online on Slave if a stale CSDG of the same name is present on it
- CVM does not start if all cluster nodes are shut down and then any of the nodes are not restarted
- Incorrect errors shown while creating a CSDG if Volume Manager Shared Volume is not registered
- After splitting or joining disk group having mirrored volume with DRL, VEA GUI shows incorrect volume file system if volumes move to another disk group
- Enclosure-level storage migration fails, but adds disks if a cluster-shared volume is offline
- Volume Manager Shared Volume resource fails to come online or cannot be deleted from Failover Cluster Manager
- Sometimes, source cluster-shared volumes are missing after joining two cluster-shared disk groups
- If private CVM links are removed, then nodes may remain out of cluster after network reconnect
- Format dialog box appears after storage disconnect
- Volume Manager Shared Volume resources fail to come online on failover nodes if VxSVC is stopped before stopping clussvc
- One or more nodes have invalid configuration or are not running or reachable
- After a node crash or network disconnect, volume resources fail over to another node, but the drive letters are left behind mounted on the failing node even after it rejoins the cluster successfully
- Shutdown of the Master node in a CVM cluster causes the Slave nodes to hang in the "Joining" state while joining the new Master
- CVM stops if Microsoft Failover Clustering and CVM cluster networks are not in sync because of multiple, independent network failures or disconnect
- Restarting CVM
- Administering CVM using the CLI
- Tuning the VDS software provider logging
- Appendix A. Command line interface
- Overview of the command line interface
- vxclustadm
- vxvol
- vxdg
- vxclus
- vxdisk
- vxassist
- vxassist make
- vxassist growby
- vxassist querymax
- vxassist shrinkby
- vxassist shrinkabort
- vxassist mirror
- vxassist break
- vxassist remove
- vxassist delete
- vxassist shred
- vxassist addlog
- vxassist online (read/write)
- vxassist offline
- vxassist prepare
- vxassist snapshot
- vxassist snapback
- vxassist snapclear
- vxassist snapabort
- vxassist rescan
- vxassist refresh
- vxassist resetbus
- vxassist version
- vxassist (Windows-specific)
- vxevac
- vxsd
- vxstat
- vxtask
- vxedit
- vxunreloc
- vxdmpadm
- vxdmpadm dsminfo
- vxdmpadm arrayinfo
- vxdmpadm deviceinfo
- vxdmpadm pathinfo
- vxdmpadm arrayperf
- vxdmpadm deviceperf
- vxdmpadm pathperf
- vxdmpadm allperf
- vxdmpadm iostat
- vxdmpadm cleardeviceperf
- vxdmpadm cleararrayperf
- vxdmpadm clearallperf
- vxdmpadm setdsmscsi3
- vxdmpadm setarrayscsi3
- vxdmpadm setattr dsm
- vxdmpadm setattr array
- vxdmpadm setattr device
- vxdmpadm setattr path
- vxdmpadm set isislog
- vxdmpadm rescan
- vxdmpadm disk list
- vxdmpadm getdsmattrib
- vxdmpadm getmpioparam
- vxdmpadm setmpioparam
- vxcbr
- vxsnap
- vxfsync
- vxscrub
- vxverify
- vxprint
- vxschadm
- sfcache
- Tuning SFW
- Appendix B. VDID details for arrays
- Appendix C. InfoScale event logging
Online volume encryption at rest
Volume Manager (VxVM) provides the online volume encryption at rest feature, which lets you migrate unencrypted volumes to encrypted ones. Using this feature, a volume can be migrated without application downtime, that is, while the file system is mounted and I/Os are in progress. The migration also avoids complexities such as having to modify application configurations, and it has a controlled impact on application I/O performance.
The online migration process mirrors the existing storage configured under a volume, which requires an equal amount of additional storage; the synchronization runs in the background.
The process comprises the following phases:
Start
When you initiate this phase, it runs as a background process, during which VxVM takes the following actions:
- Creates an encrypted plex with a layered layout under the volume, using the layout type and allocation attributes that you specify. If you do not provide any specific values, the same layout and allocation attributes as those of the unencrypted plex are used.
- Synchronizes the data from the original layout to the encrypted layout.
Commit
You can initiate this phase only after the Start phase is complete. In this phase, VxVM takes the following actions:
- Swaps the newly created encrypted plex with the original unencrypted one.
- Enables encryption on the top-level volume.
- Stores the unencrypted plex under a temporary volume for later review and deletion.
Note:
A migration commit operation is not permitted on volumes that have associated snapshots. If the unencrypted volume is part of a volume replication configuration, it may have associated snapshots. You must first delete any existing volume snapshots and then perform the commit operation. After the volume is successfully migrated to an encrypted volume, you can create a new snapshot policy and resume taking snapshots.
Optionally, you can perform the following tasks after the Start phase is completed and before you initiate the Commit phase:
- Switch Plex: Lets you switch reads between the source (unencrypted) and the target (encrypted) plexes; meanwhile, writes continue on both plexes. Use this option to verify the data that was copied from the source plex in the Start phase.
- Abort: Lets you abort the migration. Any intermediate changes made to the storage configuration are rolled back, without any disruption to the application I/Os.
Additionally, you can monitor and control the online migration operations as follows:
- Monitor the operations by using the vxtask list command.
- Control the operations by using the vxtask -t <TaskID> {pause|resume} command.
The unencrypted volume is migrated to an encrypted one only when both the Start and the Commit phases are completed successfully.
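The phase sequence described above, with its ordering rules and the snapshot restriction, can be modeled as a small state machine. The following Python sketch is purely illustrative: the class and method names are assumptions for this example and are not part of any VxVM API; the actual migration is driven through the VEA GUI or the vxassist encmigrate command.

```python
class MigrationError(Exception):
    """Raised when a phase is attempted out of order or against a restriction."""


class OnlineEncryptionMigration:
    """Illustrative model of the online encryption migration phases
    (not a VxVM API): start -> [switch plex | abort] -> commit."""

    def __init__(self, has_snapshots=False):
        self.state = "unencrypted"
        self.has_snapshots = has_snapshots

    def start(self):
        # Creates the encrypted plex and synchronizes data in the background.
        if self.state != "unencrypted":
            raise MigrationError("start is valid only on an unencrypted volume")
        self.state = "started"

    def switch_plex(self):
        # Optional: switch reads to the encrypted plex to verify the copied
        # data; writes continue on both plexes.
        if self.state != "started":
            raise MigrationError("switch plex requires a completed Start phase")

    def abort(self):
        # Rolls back intermediate changes without disrupting application I/Os.
        if self.state != "started":
            raise MigrationError("abort requires a completed Start phase")
        self.state = "unencrypted"

    def commit(self):
        # Swaps the plexes and enables encryption on the top-level volume.
        # Not permitted while snapshots exist; encryption cannot be disabled
        # after a successful commit.
        if self.state != "started":
            raise MigrationError("commit can run only after the Start phase")
        if self.has_snapshots:
            raise MigrationError("delete existing volume snapshots before commit")
        self.state = "encrypted"
```

For example, calling commit() before start(), or on a volume that still has snapshots, raises an error, while the start-then-commit sequence on a snapshot-free volume ends in the encrypted state.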
Note:
After the online migration is committed successfully, volume encryption cannot be disabled.
Limitations:
Online migration is not supported in the following cases:
- RAID-5 volumes
- Volumes with mixed layouts
- Volumes configured for VVR replication
Online migration cannot be initiated on an unencrypted volume when SecureFS is enabled. Also, SecureFS cannot be enabled on a volume while online migration is in progress; it can be done only after the migration process is committed successfully.
Only one online migration can be performed on an unencrypted volume at a time.
Only one top-level mirror plex can be migrated at a time.
You can use either the VEA GUI or VxVM commands to migrate unencrypted volumes to encrypted ones.
See Encrypting existing volumes.
See vxassist encmigrate.