InfoScale™ 9.0 Release Notes - Windows
Last Published: 2025-04-14
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Windows
- Introduction and product requirements
- Changes introduced in this release
- Windows Server 2025 support
- Upgraded OpenSSL and TLS versions for enhanced security
- Application monitoring on single-node clusters in VMware environments
- UEFI Secure Boot support
- Secure file system (SecureFS) support
- Collection and display of real-time and historical statistics using Arctera Enterprise Administrator is no longer supported
- Veritas High Availability Configuration wizard is no longer available
- Online volume encryption at rest
- Ability to attach regional disks in read-only mode in GCP environments
- Limitations
- Deployment limitations
- Cluster management limitations
- EBS Multi-Attach support in AWS cloud and InfoScale service group configuration using wizards
- Shared disk support in Azure cloud and InfoScale service group configuration using wizards
- Undocumented commands and command options
- Unable to restore user database using SnapManager for SQL
- MountV agent does not work when Volume Shadow Copy service is enabled
- WAN cards are not supported
- System names must not include periods
- Incorrect updates to path and name of types.cf with spaces
- VCW does not support configuring broadcasting for UDP
- Undefined behavior when using VCS wizards for modifying incorrectly configured service groups
- Service group dependency limitation - no failover for some instances of parent group
- Unable to monitor resources if Switch Independent NIC teaming mode is used
- Windows Safe Mode boot options not supported
- MountV agent does not detect file system change or corruption
- MirrorView agent resource faults when agent is killed
- Security issue when using Java GUI and default cluster admin credentials
- VCS Authentication Service does not support node renaming
- An MSMQ resource fails to come online after the virtual server name is changed
- Configuration wizards do not allow modifying IPv4 address to IPv6
- All servers in a cluster must run the same operating system
- Running Java Console on a non-cluster system is recommended
- Cluster Manager console does not update GlobalCounter
- Cluster address for global cluster requires resolved virtual IP
- Storage management limitations
- SFW does not support disks with unrecognized OEM partitions (3146848)
- Only one disk gets removed from an MSFT compatible disk group even if multiple disks are selected to be removed
- Cannot create MSFT compatible disk group if the host name has multibyte characters
- Fault detection is slower in case of Multipath I/O over Fibre Channel
- FlashSnap solution for EV does not support basic disks
- Incorrect mapping of snapshot and source LUNs causes VxSVC to stop working
- SFW does not support operations on disks with sector size greater than 512 bytes; VEA GUI displays incorrect size
- Database or log files must not be on same volume as SQL Server
- Operations in SFW may not be reflected in DISKPART
- Disk signatures of system and its mirror may switch after ASR recovery
- SFW does not support growing a LUN beyond 2 TB
- SCSI reservation conflict occurs when setting up cluster disk groups
- Snapshot operation fails when the Arctera VSS Provider is restarted while the Volume Shadow Copy service is running and the VSS providers are already loaded
- When a node is added to a cluster, existing snapshot schedules are not replicated to the new node
- Restore from Copy On Write (COW) snapshot of MSCS clustered shared volumes fails
- Dynamic Disk Groups are not imported after system reboot in a Hyper-V environment
- Storage Agent cannot reconnect to VDS service when restarting Storage Agent
- SFW does not support transportable snapshots on Windows Server
- vxsnapsql restores all SQL Server databases mounted on the same volume
- Windows Disk Management console does not display basic disk converted from SFW dynamic disk
- DCM or DRL log on thin provisioned disk causes all disks for volume to be treated as thin provisioned disks
- After import/deport operations on SFW dynamic disk group, DISKPART command or Microsoft Disk Management console do not display all volumes
- Restored Enterprise Vault components may appear inconsistent with other Enterprise Vault components
- Enterprise Vault restore operation may fail for some components
- Shrink volume operation may increase provisioned size of volume
- Reclaim operations on a volume residing on a Hitachi array may not give optimal results
- Storage migration of Hyper-V VM on cluster-shared volume resource is not supported from a Slave node
- In a CVM environment, disconnecting and reconnecting hard disks may display an error
- Limitations of SFW support for DMP
- Multi-pathing limitations
- Replication limitations
- Solution configuration limitations
- Virtual fire drill not supported in Windows environments
- Solutions wizard support in a 64-bit VMware environment
- Solutions wizards fail to load unless the user has administrative privileges on the system
- Discovery of SFW disk group and volume information sometimes fails when running Solutions wizards
- DR Wizard does not create or validate service group resources if a service group with the same name already exists on the secondary site
- Quick Recovery wizard displays only one XML file path for all databases, even if different file paths have been configured earlier
- Enterprise Vault Task Controller and Storage services fail to start after running the Enterprise Vault Configuration Wizard if the MSMQ service fails to start
- Wizard fails to discover SQL Server databases that contain certain characters
- Internationalization and localization limitations
- Interoperability limitations
- Known issues
- Deployment issues
- Reinstallation of an InfoScale product may fail due to pending cleanup tasks
- Delayed installation on certain systems
- Installation may fail with the "Windows Installer Service could not be accessed" error
- Installation may fail with "Unspecified error" on a remote system
- The installation may fail with "The system cannot find the file specified" error
- Installation may fail with a fatal error for VCS msi
- In SFW with Microsoft failover cluster, a parl.exe error message appears when system is restarted after SFW installation if Telemetry was selected during installation
- Side-by-side error may appear in the Windows Event Viewer
- FlashSnap License error message appears in the Application Event log after installing license key
- Uninstallation may fail to remove certain folders
- Error while uninstalling the product if licensing files are missing
- The vxlicrep.exe may crash when the machine reboots after InfoScale Enterprise is installed
- Cluster management issues
- Cluster Server (VCS) issues
- Cluster reconfiguration may fail post InfoScale installation (4117528)
- The VCS Cluster Configuration Wizard may not be able to delete a cluster if it fails to stop HAD
- Deleting a node from a cluster using the VCS Cluster Configuration Wizard may not remove it from main.cf
- NetAppSnapDrive resource may fail with access denied error
- Mount resource fails to bring file share service group online
- Mount agent may go into an unknown state on virtual machines
- AutoStart may violate limits and prerequisites Load Policy
- Enterprise Vault Configuration Wizard may fail to connect to SQL Server
- File Share Configuration Wizard may create dangling VMDg resources
- For volumes under a VMNSDg resource, capacity monitoring and automatic volume growth policies are not available on all cluster nodes
- For creating VMNSDg resources, the VMGetDrive command is not supported for retrieving a list of dynamic disk groups
- First failover attempt might fault for a NativeDisks configuration
- Resource fails to come online after failover on secondary
- Upgrading a secure cluster may require HAD restart
- New user does not have administrator rights in Java GUI
- HTC resource probe operation fails and reports an UNKNOWN state
- Resources in a parent service group may fail to come online if the AutoStart attribute for the resources is set to zero
- VCS wizards may fail to probe resources
- Changes to referenced attributes do not propagate
- ArgListValue attribute may not display updated values
- The Cluster Server High Availability Engine (HAD) service fails to stop
- Engine may hang in LEAVING state
- Timing issues with AutoStart policy
- The VCS Cluster Configuration Wizard (VCW) supports NIC teaming but the Arctera High Availability Configuration Wizard does not
- VCS engine HAD may not accept client connection requests even after the cluster is configured successfully
- Hyper-V DR attribute settings should be changed in the MonitorVM resource if a monitored VM is migrated to a new volume
- One or more VMNSDg resources may fail to come online during failover of a large service group
- VDS error reported while bringing the NativeDisks and Mount resources online after a failover
- SQL Server service resource does not fault even if detail monitoring fails
- Delay in refreshing the VCS Java Console
- NetBackup may fail to back up SQL Server database in VCS cluster environment
- Cluster Manager (Java Console) issues
- Cluster connection error while converting local service group to a global service group
- Repaint feature does not work properly when look and feel preference is set to Java
- Exception when selecting preferences
- Java Console errors in a localized environment
- Common system names in a global cluster setup
- Agent logs may not be displayed
- Login attempts to the Cluster Manager may fail after a product upgrade
- Global service group issues
- VCW configures a resource for GCO in a cluster without a valid GCO license
- Group does not go online on AutoStart node
- Cross-cluster switch may cause concurrency violation
- Declare cluster dialog may not display highest priority cluster as failover target
- Global group fails to come online on the DR site with a message that it is in the middle of a group operation
- VMware virtual environment-related issues
- Partial log entries for service group offline operations in case of application monitoring on single-node clusters (4188474)
- No corrective action is taken for a faulted service group even when its name is provided in the ServiceGroupName attribute of the AppMonHB resource (4187908)
- Irrelevant INFO message about corrective action for a faulted service group (4178969)
- VMwareDisks resource cannot go offline if VMware snapshot is taken when VMware disk is configured for monitoring
- Guest virtual machines fail to detect the network connectivity loss
- VMware vMotion fails to move a virtual machine back to an ESX host where the application was online
- VCS commands may fail if the snapshot of a system on which the application is configured is reverted
- VMWareDisks resource cannot probe (Unknown status) when virtual machine moves to a different ESX host
- Cluster Server (VCS) issues
- Storage management issues
- Performance overhead is observed in Windows encryption for sequential write workloads (4112733)
- Data corruption observed on the snap plex when a user reverts a COW snapshot and creates the mirror snapshot (4114185)
- A file of 1 KB size cannot be encrypted (4104305)
- The file system of a replicated encrypted volume fails to change (4102345)
- Mirrored volume cannot be created or added if the disks are created from an array (4113434)
- The GCO network selection page does not show network adapter cards (4100888)
- Storage Foundation issues
- Disk fails to get added to a dynamic disk group after conversion from MBR to GPT (4055047)
- In Microsoft Azure environment, InfoScale Storage cannot be used in cluster configurations that require shared storage
- In a Microsoft Azure environment SFW fails to auto discover SSD media type for a VHD
- Incorrect message appears in the Event Viewer for a hot relocation operation
- A volume state continues to appear as "Healthy, Resynching"
- After a mirror break-off operation, the volume label does not show on the Slave node
- CVM cluster does not support node names with more than 15 characters
- For CSDG with mirrored volumes, disks sometimes incorrectly show the yellow warning icon
- SSD is not removed successfully from the cache pool
- On fast failover enabled configurations, VMDg resource takes a longer time to come online
- On some configurations, the VSS snapshot operation may fail to add volumes to the snapshot set
- Using Failover Cluster Manager you cannot migrate volumes belonging to Hyper-V virtual machines to a location with VMDg or Volume Manager Shared Volume resource type
- Performance counters for Dynamic Disk and Dynamic Volume may fail to appear in the list of available counters
- VSS snapshot of a Hyper-V virtual machine on SFW storage does not work from NetBackup
- In some cases, when 80 or more mirrored volume resources fail over to another node in CVM, some volume resources fault causing all to fault
- In CVM, if subdisk move operation for CSDG fails because of a cluster reconfiguration, it does not start again automatically
- Stale volume entries under MountedDevices and CurrentControlSet registries not removed after the volume is deleted
- Some arrays do not show correct Enclosure ID in VDID
- In some cases, EV snapshot schedules fail
- In CVM, Master node incorrectly displays two missing disk objects for a single disk disconnect
- DRL plex is detached across CVM nodes if a disk with DRL plex is disconnected locally from the node where volume is online
- Error while converting a Microsoft Disk Management Disk Group created using iSCSI disks to an SFW dynamic disk group
- Snapshot schedules intermittently fail on the Slave node after failover
- In VEA GUI, Tasks tab does not display task progress for the resynchronization of a cluster-shared volume if it is offline
- Snapback operation from a Slave node is always reported as successful on that node, even when it is in progress or resynchronization fails on the Master
- For cluster-shared volumes, only one file share per Microsoft failover cluster is supported
- After cluster-shared volume resize operation, free space of the volume is not updated on nodes where volume is offline
- Issues due to Microsoft Failover Clustering not recognizing SFW VMDg resource as storage class
- If fast failover is enabled for a VMDg resource, then SFW volumes are not displayed in the New Share Wizard
- Volume information not displayed for VMDg and RVG resources in Failover Cluster Manager
- Failover of VMDg resource from one node to another does not mount the volume when disk group volume is converted from LDM to dynamic
- vxverify command may not work if SmartMove was enabled while creating the mirrored volume
- After installation of SFW or SFW HA, mirrored and RAID-5 volumes and disk groups cannot be created from LDM
- Some operations related to shrinking or expanding a dynamic volume do not work
- VEA GUI displays error while creating partitions
- For dynamic disk group configured as VMNSDg resource, application component snapshot schedules are not replicated to other nodes in a cluster if created using VSS Snapshot Scheduler Wizard
- In Microsoft failover cluster, if VxSVC is attached to Windows Debugger, it may stop responding when you try to bring offline a service group with VMDg resources
- In some cases, updated VSS components are not displayed in VEA console
- Storage reclamation commands do not work when SFW is run inside Hyper-V virtual machines
- Unknown disk group may be seen after deleting a disk group
- Incorrect disk information is displayed in the Veritas Enterprise Administrator (VEA) console; a single disk is displayed as two disks (a hard disk and a missing disk)
- SFW Configuration Utility for Hyper-V Live Migration support wizard shows the hosts as Configured even if any service fails to be configured properly
- System shutdown or crash of one cluster node, followed by a reboot of the other nodes, may cause the SFW messaging for Live Migration support to fail
- Changing the FastFailover attribute for a VMDg resource from FALSE to TRUE throws an error message
- A remote partition is assumed to be on the local node due to Enterprise Vault DNS alias check
- After performing a restore operation on a COW snapshot, the "Allocated size" shadow storage field value is not updated on the VEA console
- Messaging service does not retain its credentials after upgrading SFW and SFW HA
- Enterprise Vault (EV) snapshots are displayed in the VEA console log as successful even when the snapshots are skipped and no snapshot is created
- On a clustered setup, split-brain might cause the disks to go into a fail state
- Takeover and Failback operation on Sun Controllers cause disk loss
- Microsoft failover cluster disk resource may fail to come online on failover node in case of node crash or storage disconnect if DMP DSMs are installed
- The Veritas Enterprise Administrator (VEA) console cannot revert Logical Disk Management (LDM) missing disks to basic disks
- After breaking a Logical Disk Manager (LDM) mirror volume through the LDM GUI, LDM shows two volumes with the same drive letter
- Unable to fail over between cluster nodes; volume arrival is very slow
- VDS errors noticed in the event viewer log
- An extra GUI refresh is required to ensure that changes made to volumes on a cluster disk group with a Volume Manager Disk Group (VMDg) resource are reflected in the Failover Cluster Manager console
- DR wizard cannot create an RVG that contains more than 32 volumes
- For a cluster setup, configure the Veritas Scheduler Services with a domain user account
- If an inaccessible path is specified in the vxsnap create CLI, the snapshot is created but the CLI fails
- If snapshot set files are stored on a Fileshare path, then they are visible and accessible by all nodes in the VCS cluster
- Sharing property of folders not persistent after system reboot
- Microsoft Disk Management console displays an error when a basic disk is encapsulated
- Results of a disk group split query on disks that contain a shadow storage area may not report the complete set of disks
- Extending a simple volume in Microsoft Disk Management Disk Group fails
- SFW cannot merge recovered disk back to RAID5 volume
- Request for format volume occurs when importing dynamic disk group
- Logging on to SFW as a member of the Windows Administrator group requires additional credentials
- Certain operations on a dynamic volume cause a warning
- Avoid encapsulating a disk that contains a system-critical basic volume
- Sharing property of folders in clustering environment is not persistent
- Entries under Task Tab may not be displayed with the correct name
- Attempting to add a gatekeeper device to a dynamic disk group can cause problems with subsequent operations on that disk group until the storage agent is restarted
- ASR fails to restore a disk group that has a missing disk
- Mirrored volume in Microsoft Disk Management Disk Group does not resynchronize
- Expand volume operation not supported for certain types of volumes created by Microsoft Disk Management
- MirrorView resource cannot be brought online because of invalid security file
- Known behavior with disk configuration in campus clusters
- VEA console issues
- Login to VEA on an IPv6-enabled system with the Logged On User on this computer option may cause incorrect privileges to be assigned
- VEA may fail to start when launched through the SCC, PowerShell, or Windows Start menu or Apps menu
- On Windows operating systems, non-administrator user cannot log on to VEA GUI if UAC is enabled
- VEA GUI sometimes does not show all the EV components
- VEA GUI incorrectly shows yellow caution symbol on the disk icon
- Reclaim storage space operation may not update progress in GUI
- VEA GUI fails to log on to iSCSI target
- VEA does not display properly when Windows color scheme is set to High Contrast Black
- VEA displays objects incorrectly after Online/Offline disk operations
- Disks displayed in Unknown disk group after system reboot
- Diskgroup creation fails on Ultradisk in Azure cloud for logical sector size 4096-byte sector (4101641)
- Snapshot and restore issues
- Vxsnap restore CLI command fails when specifying a full path name for a volume
- Restoring COW snapshots causes earlier COW snapshots to be deleted
- COW restore wizard does not update selected volumes
- Snapshot operation requires additional time
- Incorrect message displayed when wrong target is specified in vxsnap diffarea command
- Restore operation on SQL Server component with missing volume fails
- Snapshot of Microsoft Hyper-V virtual machine results in deported disk group on Hyper-V guest
- Enterprise Vault restore operation fails for remote components
- Persistent shadow copies are not supported for FAT and FAT32 volumes
- Copy On Write (COW) snapshots are automatically deleted after shrink volume operation
- Shadow storage settings for a Copy On Write (COW) snapshot persist after shrinking target volume
- Copy On Write (COW) shadow storage settings for a volume persist on newly created volume after breaking its snapshot mirror
- Conflict occurs when VSS snapshot schedules or VSS snapshots have identical snapshot set names
- VSS Writers cannot be refreshed or contacted
- Time-out errors may occur in Volume Shadow Copy Service (VSS) writers and result in snapshots that are not VSS compliant
- vxsnapsql restore may fail to restore SQL Server database
- VSS Snapshot of a volume fails after restarting the VSS provider service
- CLI command, vxsnap prepare, does not create snapshot mirrors in a stripe layout
- After taking a snapshot of a volume, the resize option of the snapshot is disabled
- If the snapshot plex and original plex are of different sizes, the snapback fails
- Snapshot scheduling issues
- Snapshot schedule fails as result of reattach operation error
- Next run date information of snapshot schedule does not get updated automatically
- VEA GUI may not display correct snapshot schedule information after Veritas Scheduler Service configuration update
- Scheduled snapshots affected by transition to Daylight Savings Time
- In a cluster environment, the scheduled snapshot configuration succeeds on the active node but fails on another cluster node
- After a failover occurs, a snapshot operation scheduled within two minutes of the failover does not occur
- Unable to create or delete schedules on a Microsoft failover cluster node while another cluster node is shutting down
- Quick Recovery Wizard schedules are not executed if service group fails over to secondary zone in a replicated data cluster
- On Windows Server, a scheduled snapshot operation may fail due to mounted volumes being locked by the OS
- Multi-pathing issues
- Replication issues
- VVR replication may fail if Symantec Endpoint Protection (SEP) version 12.1 is installed
- VVR replication fails to start on systems where Symantec Endpoint Protection (SEP) version 12.1 or 12.1 RU2 is installed
- RVGPrimary resource fails to come online if VCS engine debug logging is enabled
- "Invalid Arguments" error while performing the online volume shrink
- vxassist shrinkby or vxassist querymax operation fails with "Invalid Arguments"
- In synchronous mode of replication, file system may incorrectly report volumes as raw and show "scan and fix" dialog box for fast failover configurations
- VxSAS configuration wizard does not work in NAT environments
- File system may incorrectly report volumes as raw due to I/O failure
- NTFS errors are displayed in Event Viewer if fast-failover DR setup is configured with VVR
- Volume shrink fails because RLINK cannot resume due to heavy I/Os
- Online volume shrink operation fails for data volumes with multiple Secondaries if I/Os are active
- RLINKs cannot connect after changing the heartbeat port number
- On a DR setup, if Replicated Data Set (RDS) components are browsed for on the secondary site, then the VEA console does not respond
- Secondary host gets removed and added when scheduled sync snapshots are taken
- Replication may stop if the disks are write cache enabled
- Discrepancy in the replication time lag displayed in VEA and CLI
- The vxrlink updates command displays inaccurate values
- Some VVR operations may fail to complete in a cluster environment
- IBC IOCTL failed error message
- Pause and Resume commands take a long time to complete
- Replication keeps switching between the pause and resume state
- VEA GUI has problems in adding secondary if all NICs on primary are DHCP enabled
- Pause secondary operation fails when SQLIO is used for I/Os
- Performance counter cannot be started for VVR remote hosts in perfmon GUI
- VVR Graphs get distorted when bandwidth value limit is set very high
- BSOD seen on a Hyper-V setup
- Unable to start statistics collection for VVR Memory and VVR remote hosts object in Perfmon
- Bunker primary fails to respond when trying to perform stop replication operation on secondary
- CLI shows the "Volume in use" error when you dismount the ReFS data volumes on the Secondary RVG
- Solution configuration issues
- Oracle Enterprise Manager cannot be used for database control
- Unexplained errors with DR wizard and QR wizard
- VCS FD and DR wizards fail to configure application and hardware replication agent settings
- Disaster recovery (DR) configuration issues
- The Disaster Recovery Configuration Wizard or the Fire Drill Wizard cannot proceed when configuring an application in an EMC SRDF replication environment
- The DR Wizard does not provide a separate "GCO only" option for VVR-based replication
- The Disaster Recovery Wizard fails if the primary and secondary sites are in different domains or if you run the wizard from another domain
- The Disaster Recovery Wizard may fail to bring the RVGPrimary resources online
- The Disaster Recovery Wizard requires that an existing storage layout for an application on a secondary site matches the primary site layout
- The Disaster Recovery Wizard may fail to create the Secondary Replicator Log (SRL) volume
- The Disaster Recovery Wizard may display a failed to discover NIC error on the Secondary system selection page
- Service group cloning fails if you save and close the configuration in the Java Console while cloning is in progress
- If RVGs are created manually with mismatched names, the DR Wizard does not recognize the RVG on the secondary site and attempts to create the secondary RVG
- Cloned service group faults and fails over to another node during DR Wizard execution resulting in errors
- DR wizard may display database constraint exception error after storage validation in EMC SRDF environment
- DR wizard creation of secondary RVGs may fail due to mounted volumes being locked by the OS
- DR wizard with VVR replication requires configuring the preferred network setting in VEA
- DR wizard displays error message on failure to attach DCM logs for VVR replication
- Disaster Recovery (DR) Wizard fails to automatically set the correct storage replication option in case of SRDF
- Disaster Recovery (DR) Wizard reports an error during storage cloning operation in case of SRDF
- Fire drill (FD) configuration issues
- Fire Drill Wizard may fail to recognize that a volume fits on a disk if the same disk is being used for another volume
- Fire drill may fail if run again after a restore without exiting the wizard first
- Fire Drill Wizard may time out before completing fire drill service group configuration
- RegRep resource may fault while bringing the fire drill service group online during "Run Fire Drill" operation
- Fire Drill Wizard in an HTC environment is untested in a configuration that uses the same horcm file for both regular and snapshot replication
- FireDrill attribute is not consistently enabled or disabled
- MountV resource state incorrectly set to UNKNOWN
- Quick recovery (QR) configuration issues
- Internationalization and localization issues
- Only US-ASCII characters are supported
- Use only U.S. ASCII characters in the SFW or SFW HA installation directory name
- VEA GUI cannot show double-byte characters correctly on (English) Windows operating system
- VEA cannot connect to the remote VEA server on non-English platforms
- SSO configuration fails if the system name contains non-English locale characters [2910613]
- VCS cluster may display "stale admin wait" state if the virtual computer name and the VCS cluster name contain non-English locale characters
- Issues faced while configuring application monitoring for a Windows service having non-English locale characters in its name
- Interoperability issues
- Backup Exec 12 installation fails in a VCS environment
- Symantec Endpoint Protection security policy may block the VCS Cluster Configuration Wizard
- VCS services do not start on systems where SEP 12.1 or later is installed
- Several issues while you configure VCS on systems where Symantec Endpoint Protection (SEP) version 12.1 is installed
- Miscellaneous issues
- Cluster node may become unresponsive if you try to modify network properties of adapters assigned to the VCS private network
- MSMQ resource fails to come online if the MSMQ directory path contains double byte characters
- Saving large configuration results in very large file size for main.cf
- AutoStart may violate limits and prerequisites load policy
- Trigger not invoked in REMOTE_BUILD state
- Some alert messages do not display correctly
- If VCS upgrade fails on one or more nodes, HAD fails to start and cluster becomes unusable
- Custom settings in the cluster configuration are lost after an upgrade if attribute values contain double quote characters
- Options on the Domain Selection panel in the VCS Cluster Configuration Wizard are disabled
- Live migration of a VM, which is part of a VCS cluster where LLT is configured over Ethernet, from one Hyper-V host to another may result in inconsistent HAD state
- If a NIC that is configured for LLT protocol is disabled, LLT does not notify clients
- Fibre Channel adapter issues
- Storage agent issues and limitations in VMware virtual environments
- Deployment issues
Expand volume operation not supported for certain types of volumes created by Microsoft Disk Management
The resize operation to expand a volume created by Microsoft Disk Management is not supported for mirrored, striped, or RAID-5 volumes. Also, extending a volume onto more than one disk in a single operation is not supported; each resize operation can extend the volume onto only one additional disk. However, the resize operation can be repeated so that the volume is extended onto more than one disk. (1128016)
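The repeated-resize workaround above can be sketched with DISKPART scripts, one per additional disk. This is a minimal sketch for a simple (non-mirrored, non-striped) volume; the volume number (5), disk numbers (2 and 3), and size (10240 MB per pass) are hypothetical, and the scripts are only generated here, not executed:

```shell
# Each DISKPART run extends the volume onto a single disk, so one
# script is generated per target disk and the operation is repeated.
for disk in 2 3; do
  cat > "extend_disk_${disk}.txt" <<EOF
select volume 5
extend size=10240 disk=${disk}
EOF
done
# On Windows, each pass would then be run as:
#   diskpart /s extend_disk_2.txt
#   diskpart /s extend_disk_3.txt
cat extend_disk_2.txt
```

DISKPART's `extend size=n disk=n` form targets one disk at a time, which is why the loop issues a separate script per disk rather than a single combined extend.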