Veritas InfoScale 7.3.1 Release Notes - Windows
- Release notes for Veritas InfoScale
- About this document
- Requirements
- Limitations
- Deployment limitations
- Cluster management limitations
- Undocumented commands and command options
- Unable to restore user database using SnapManager for SQL
- MountV agent does not work when Volume Shadow Copy service is enabled
- WAN cards are not supported
- System names must not include periods
- Incorrect updates to path and name of types.cf with spaces
- VCW does not support configuring broadcasting for UDP
- Undefined behavior when using VCS wizards for modifying incorrectly configured service groups
- Service group dependency limitation - no failover for some instances of parent group
- Unable to monitor resources if Switch Independent NIC teaming mode is used
- Windows Safe Mode boot options not supported
- MountV agent does not detect file system change or corruption
- MirrorView agent resource faults when agent is killed
- Security issue when using Java GUI and default cluster admin credentials
- VCS Authentication Service does not support node renaming
- An MSMQ resource fails to come online after the virtual server name is changed
- All servers in a cluster must run the same operating system
- Running Java Console on a non-cluster system is recommended
- Cluster Manager console does not update GlobalCounter
- Cluster address for global cluster requires resolved virtual IP
- Cluster operations performed using the Symantec High Availability dashboard may fail
- Storage management limitations
- SFW does not support disks with unrecognized OEM partitions
- Only one disk gets removed from an MSFT compatible disk group even if multiple disks are selected to be removed
- Cannot create MSFT compatible disk group if the host name has multibyte characters
- Fault detection is slower in case of Multipath I/O over Fibre Channel
- FlashSnap solution for EV does not support basic disks
- Incorrect mapping of snapshot and source LUNs causes VxSVC to stop working
- SFW does not support operations on disks with sector size greater than 512 bytes; VEA GUI displays incorrect size
- Database or log files must not be on same volume as Exchange or SQL Server
- Operations in SFW may not be reflected in DISKPART
- Disk signatures of system and its mirror may switch after ASR recovery
- Adding a storage group that contains many disks and volumes causes SFW and Microsoft Exchange System Manager to respond very slowly
- SFW does not support growing a LUN beyond 2 TB
- SCSI reservation conflict occurs when setting up cluster disk groups
- Snapshot operation fails when the Veritas VSS Provider is restarted while the Volume Shadow Copy service is running and the VSS providers are already loaded
- When a node is added to a cluster, existing snapshot schedules are not replicated to the new node
- Restore from Copy On Write (COW) snapshot of MSCS clustered shared volumes fails
- Dynamic Disk Groups are not imported after system reboot in a Hyper-V environment
- Storage Agent cannot reconnect to VDS service when restarting Storage Agent
- SFW does not support transportable snapshots on Windows Server
- Windows Disk Management console does not display basic disk converted from SFW dynamic disk
- SharePoint components must have unique names
- DCM or DRL log on thin provisioned disk causes all disks for volume to be treated as thin provisioned disks
- After import/deport operations on SFW dynamic disk group, DISKPART command or Microsoft Disk Management console do not display all volumes
- Restored Enterprise Vault components may appear inconsistent with other Enterprise Vault components
- Enterprise Vault restore operation may fail for some components
- Shrink volume operation may increase provisioned size of volume
- Reclaim operations on a volume residing on a Hitachi array may not give optimal results
- Storage migration of Hyper-V VM on cluster-shared volume resource is not supported from a Slave node
- In a CVM environment, disconnecting and reconnecting hard disks may display an error
- Limitations of SFW support for DMP
- Multi-pathing limitations
- Replication limitations
- Solution configuration limitations
- Virtual fire drill not supported in Windows environments
- Solutions wizard support in a 64-bit VMware environment
- Solutions wizards fail to load unless the user has administrative privileges on the system
- Discovery of SFW disk group and volume information sometimes fails when running Solutions wizards
- DR Wizard does not create or validate service group resources if a service group with the same name already exists on the secondary site
- Quick Recovery wizard displays only one XML file path for all databases, even if different file paths have been configured earlier
- Enterprise Vault Task Controller and Storage services fail to start after running the Enterprise Vault Configuration Wizard if the MSMQ service fails to start
- Limitation on SnapManager for Exchange
- VCS locks shared volumes during Exchange recovery
- Schedule backups on online nodes
- SQL Server Agent Configuration Wizard fails to discover SQL Server databases that contain certain characters
- Internationalization and localization limitations
- Interoperability limitations
- Known issues
- Deployment issues
- Entry for an installed cumulative patch (CP) may exist in Windows Add/Remove Programs
- "Run Configuration Checker" link available on the CD browser only downloads the Configuration Checker
- Reinstallation of an InfoScale product may fail due to pending cleanup tasks
- WinLogo certification issues
- Delayed installation on certain systems
- Installation may fail with the "Windows Installer Service could not be accessed" error
- Installation may fail with "Unspecified error" on a remote system
- The installation may fail with "The system cannot find the file specified" error
- Installation may fail with a fatal error for VCS msi
- In SFW with Microsoft failover cluster, a parl.exe error message appears when system is restarted after SFW installation if Telemetry was selected during installation
- Side-by-side error may appear in the Windows Event Viewer
- FlashSnap License error message appears in the Application Event log after installing license key
- VCS Simulator installation may require a reboot
- VCS uninstallation halts and displays an error message; uninstall continues after clearing the message
- Uninstallation may fail to remove certain folders
- Error while uninstalling the product if licensing files are missing
- The vxlicrep.exe may crash when the machine reboots after InfoScale Enterprise is installed
- Cluster management issues
- Cluster Server (VCS) issues
- The VCS Cluster Configuration Wizard may not be able to delete a cluster if it fails to stop HAD
- Deleting a node from a cluster using the VCS Cluster Configuration Wizard may not remove it from main.cf
- NetAppSnapDrive resource may fail with access denied error
- Mount resource fails to bring file share service group online
- Mount agent may go in unknown state on virtual machines
- AutoStart may violate limits and prerequisites Load Policy
- Exchange Setup Wizard messages
- When running Enterprise Vault Configuration Wizard, Enterprise Vault may fail to connect to SQL Server
- File Share Configuration Wizard may create dangling VMDg resources
- For volumes under VMNSDg resource, capacity monitoring and automatic volume growth policies are not available on all cluster nodes
- For creating VMNSDg resources, the VMGetDrive command is not supported for retrieving a list of dynamic disk groups
- First failover attempt might fault for a NativeDisks configuration
- Resource fails to come online after failover on secondary
- Upgrading a secure cluster may require HAD restart
- New user does not have administrator rights in Java GUI
- HTC resource probe operation fails and reports an UNKNOWN state
- Resources in a parent service group may fail to come online if the AutoStart attribute for the resources is set to zero
- VCS wizards may fail to probe resources
- Changes to referenced attributes do not propagate
- ArgListValue attribute may not display updated values
- The Veritas High Availability Engine (HAD) service fails to stop
- Engine may hang in LEAVING state
- Timing issues with AutoStart policy
- The VCS Cluster Configuration Wizard (VCW) supports NIC teaming but the Symantec High Availability Configuration Wizard does not
- Configuration wizards do not allow modifying IPv4 address to IPv6
- VCS engine HAD may not accept client connection requests even after the cluster is configured successfully
- Hyper-V DR attribute settings should be changed in the MonitorVM resource if a monitored VM is migrated to a new volume
- One or more VMNSDg resources may fail to come online during failover of a large service group
- VDS error reported while bringing the NativeDisks and Mount resources online after a failover
- SQL Server service resource does not fault even if detail monitoring fails
- Delay in refreshing the VCS Java Console
- NetBackup may fail to back up SQL Server database in VCS cluster environment
- Cluster Manager (Java Console) issues
- Cluster connection error while converting local service group to a global service group
- Repaint feature does not work properly when look and feel preference is set to Java
- Exception when selecting preferences
- Java Console errors in a localized environment
- Common system names in a global cluster setup
- Agent logs may not be displayed
- Login attempts to the Cluster Manager may fail after a product upgrade
- Global service group issues
- VCW configures a resource for GCO in a cluster without a valid GCO license
- Group does not go online on AutoStart node
- Cross-cluster switch may cause concurrency violation
- Declare cluster dialog may not display highest priority cluster as failover target
- Global group fails to come online on the DR site with a message that it is in the middle of a group operation
- VMware virtual environment-related issues
- VMwareDisks resource cannot go offline if VMware snapshot is taken when VMware disk is configured for monitoring
- Guest virtual machines fail to detect the network connectivity loss
- VMware vMotion fails to move a virtual machine back to an ESX host where the application was online
- VCS commands may fail if the snapshot of a system on which the application is configured is reverted
- VMWareDisks resource cannot probe (Unknown status) when virtual machine moves to a different ESX host
- Error while unconfiguring the VCS cluster from the Symantec High Availability tab
- The Symantec High Availability Configuration Wizard gives an error for invalid user account details if the system password contains double quotes (")
- The Symantec High Availability view does not display any sign for the concurrency violation
- The Symantec High Availability installer may fail to block the installation of unrelated license keys
- The Symantec High Availability configuration wizard fails to configure the VCS cluster if UAC is enabled
- Symantec High Availability Configuration wizard may fail to configure monitoring for the selected mount points
- Storage management issues
- Storage Foundation
- In Microsoft Azure environment, InfoScale Storage cannot be used in cluster configurations that require shared storage
- In a Microsoft Azure environment SFW fails to auto discover SSD media type for a VHD
- Disks may appear in "Failing" state with an I/O error on multipathed-LUNs
- Incorrect message appears in the Event Viewer for a hot relocation operation
- A volume state continues to appear as "Healthy, Resynching"
- After a mirror break-off operation, the volume label does not show on the Slave node
- CVM cluster does not support node names with more than 15 characters
- For CSDG with mirrored volumes, disks sometimes incorrectly show the yellow warning icon
- SSD is not removed successfully from the cache pool
- On fast failover enabled configurations, VMDg resource takes a longer time to come online
- On some configurations, the VSS snapshot operation may fail to add volumes to the snapshot set
- Using Failover Cluster Manager you cannot migrate volumes belonging to Hyper-V virtual machines to a location with VMDg or Volume Manager Shared Volume resource type
- VMDg resources fault when one of the storage paths is disconnected
- Performance counters for Dynamic Disk and Dynamic Volume may fail to appear in the list of available counters
- VSS snapshot of a Hyper-V virtual machine on SFW storage does not work from NetBackup
- Virtual machine created using Failover Cluster Manager cannot be monitored and managed using SCVMM 2012 and 2012 R2
- In some cases, when 80 or more mirrored volume resources fail over to another node in CVM, some volume resources fault causing all to fault
- In CVM, if subdisk move operation for CSDG fails because of a cluster reconfiguration, it does not start again automatically
- Stale volume entries under the MountedDevices and CurrentControlSet registry keys are not removed after the volume is deleted
- Some arrays do not show correct Enclosure ID in VDID
- In some cases, EV snapshot schedules fail
- In CVM, Master node incorrectly displays two missing disk objects for a single disk disconnect
- DRL plex is detached across CVM nodes if a disk with DRL plex is disconnected locally from the node where volume is online
- Error while converting a Microsoft Disk Management Disk Group created using iSCSI disks to an SFW dynamic disk group
- Snapshot schedules intermittently fail on the Slave node after failover
- In VEA GUI, Tasks tab does not display task progress for the resynchronization of a cluster-shared volume if it is offline
- Snapback operation from a Slave node always reports being successful on that node, even when it is in progress or resynchronization fails on the Master
- For cluster-shared volumes, only one file share per Microsoft failover cluster is supported
- After cluster-shared volume resize operation, free space of the volume is not updated on nodes where volume is offline
- Issues due to Microsoft Failover Clustering not recognizing SFW VMDg resource as storage class
- If fast failover is enabled for a VMDg resource, then SFW volumes are not displayed in the New Share Wizard
- Volume information not displayed for VMDg and RVG resources in Failover Cluster Manager
- Failover of VMDg resource from one node to another does not mount the volume when disk group volume is converted from LDM to dynamic
- vxverify command may not work if SmartMove was enabled while creating the mirrored volume
- After installation of SFW or SFW HA, mirrored and RAID-5 volumes and disk groups cannot be created from LDM
- Some operations related to shrinking or expanding a dynamic volume do not work
- VEA GUI displays error while creating partitions
- The VSS Snapback and Restore wizards incorrectly display "Exchange" in the titles
- For dynamic disk group configured as VMNSDg resource, application component snapshot schedules are not replicated to other nodes in a cluster if created using VSS Snapshot Scheduler Wizard
- In Microsoft failover cluster, if VxSVC is attached to Windows Debugger, it may stop responding when you try to bring offline a service group with VMDg resources
- In some cases, updated VSS components are not displayed in VEA console
- Storage reclamation commands do not work when SFW is run inside Hyper-V virtual machines
- Unknown disk group may be seen after deleting a disk group
- Incorrect disk information is displayed in the Veritas Enterprise Administrator (VEA) console; a single disk is displayed as two disks (a hard disk and a missing disk)
- SFW Configuration Utility for Hyper-V Live Migration support wizard shows the hosts as Configured even if any service fails to be configured properly
- System shutdown or crash of one cluster node and subsequent reboot of other nodes causes the SFW messaging for Live Migration support to fail
- Changing the FastFailover attribute for a VMDg resource from FALSE to TRUE throws an error message
- A remote partition is assumed to be on the local node due to Enterprise Vault DNS alias check
- After performing a restore operation on a COW snapshot, the "Allocated size" shadow storage field value is not updated on the VEA console
- Messaging service does not retain its credentials after upgrading SFW and SFW HA
- Enterprise Vault (EV) snapshots are displayed in the VEA console log as successful even when the snapshots are skipped and no snapshot is created
- On a clustered setup, split-brain might cause the disks to go into a fail state
- Takeover and Failback operation on Sun Controllers cause disk loss
- Microsoft failover cluster disk resource may fail to come online on failover node in case of node crash or storage disconnect if DMP DSMs are installed
- The Veritas Enterprise Administrator (VEA) console cannot remove the Logical Disk Management (LDM) missing disk to basic ones
- After breaking a Logical Disk Manager (LDM) mirror volume through the LDM GUI, LDM shows 2 volumes with the same drive letter
- Unable to failover between cluster nodes. Very slow volume arrival
- VDS errors noticed in the event viewer log
- Restore with -a option for component-based snapshot fails for Exchange mailboxes on a VCS setup
- An extra GUI refresh is required to ensure that changes made to the volumes on a cluster disk group with the Volume Manager Disk Group (VMDg) resource are reflected in the Failover Cluster Manager console
- DR wizard cannot create an RVG that contains more than 32 volumes
- Allow restore using the vxsnap restore command if mailbox is removed or missing
- Scheduled VSS snapshots of an Exchange mailbox database configured under a VCS cluster setup start with a delay of around two to three minutes
- Event viewer shows error message "Could not impersonate Veritas Scheduler Service login user ....." when VSS restore and snapback operations are performed
- For a cluster setup, configure the Veritas Scheduler Services with a domain user account
- Snapshot metadata files are not deleted after VSS Snapback and PIT Restore operation
- If an inaccessible path is mentioned in the vxsnap create CLI, the snapshot gets created and the CLI fails
- If snapshot set files are stored on a Fileshare path, then they are visible and accessible by all nodes in the VCS cluster
- Sharing property of folders not persistent after system reboot
- Microsoft Disk Management console displays an error when a basic disk is encapsulated
- Results of a disk group split query on disks that contain a shadow storage area may not report the complete set of disks
- Extending a simple volume in Microsoft Disk Management Disk Group fails
- SFW cannot merge recovered disk back to RAID5 volume
- Request for format volume occurs when importing dynamic disk group
- Logging on to SFW as a member of the Windows Administrator group requires additional credentials
- Certain operations on a dynamic volume cause a warning
- Avoid encapsulating a disk that contains a system-critical basic volume
- Sharing property of folders in clustering environment is not persistent
- Entries under Task Tab may not be displayed with the correct name
- Attempting to add a gatekeeper device to a dynamic disk group can cause problems with subsequent operations on that disk group until the storage agent is restarted
- ASR fails to restore a disk group that has a missing disk
- Mirrored volume in Microsoft Disk Management Disk Group does not resynchronize
- Expand volume operation not supported for certain types of volumes created by Microsoft Disk Management
- MirrorView resource cannot be brought online because of invalid security file
- Known behavior with disk configuration in campus clusters
- VEA console issues
- VEA GUI does not display the task logs
- VEA may fail to start when launched through the SCC, PowerShell, or Windows Start menu or Apps menu
- On Windows operating systems, non-administrator user cannot log on to VEA GUI if UAC is enabled
- VEA GUI sometimes does not show all the EV components
- VEA GUI incorrectly shows yellow caution symbol on the disk icon
- Reclaim storage space operation may not update progress in GUI
- VEA GUI fails to log on to iSCSI target
- VEA does not display properly when Windows color scheme is set to High Contrast Black
- VEA displays objects incorrectly after Online/Offline disk operations
- Disks displayed in Unknown disk group after system reboot
- Snapshot and restore issues
- Vxsnap restore CLI command fails when specifying a full path name for a volume
- Restoring COW snapshots causes earlier COW snapshots to be deleted
- COW restore wizard does not update selected volumes
- Snapshot operation requires additional time
- Incorrect message displayed when wrong target is specified in vxsnap diffarea command
- Restore operation specifying missing volume for SQL component fails
- Snapshot operation of a remote SharePoint database fails when it resides on the local SharePoint server
- Snapshot of Microsoft Hyper-V virtual machine results in deported disk group on Hyper-V guest
- Enterprise Vault restore operation fails for remote components
- Persistent shadow copies are not supported for FAT and FAT32 volumes
- Copy On Write (COW) snapshots are automatically deleted after shrink volume operation
- Shadow storage settings for a Copy On Write (COW) snapshot persist after shrinking target volume
- Copy On Write (COW) shadow storage settings for a volume persist on newly created volume after breaking its snapshot mirror
- Conflict occurs when VSS snapshot schedules or VSS snapshots have identical snapshot set names
- Microsoft Outlook 2007 Client (caching mode enabled) does not display restore messages after VSS Exchange restore operation completes
- Volume information not displayed correctly in VSS Restore wizard
- VSS Writers cannot be refreshed or contacted
- Time-out errors may occur in Volume Shadow Copy Service (VSS) writers and result in snapshots that are not VSS compliant
- The vxsnapsql restore CLI command may fail when restoring an SQL database
- VSS objects may not display correctly in VEA
- VSS Snapshot of a volume fails after restarting the VSS provider service
- Restoring SQL databases mounted on the same volume
- Snapshot operation fails if components with the same name exist in different Exchange virtual servers
- CLI command, vxsnap prepare, does not create snapshot mirrors in a stripe layout
- After taking a snapshot of a volume, the resize option of the snapshot is disabled
- If the snapshot plex and original plex are of different sizes, the snapback fails
- Snapshot scheduling issues
- Snapshot schedule fails as result of reattach operation error
- Next run date information of snapshot schedule does not get updated automatically
- VEA GUI may not display correct snapshot schedule information after Veritas Scheduler Service configuration update
- Scheduled snapshots affected by transition to Daylight Savings Time
- In a cluster environment, the scheduled snapshot configuration succeeds on the active node but fails on another cluster node
- After a failover occurs, a snapshot operation scheduled within two minutes of the failover does not occur
- Unable to create or delete schedules on a Microsoft failover cluster node while another cluster node is shutting down
- Quick Recovery Wizard schedules are not executed if service group fails over to secondary zone in a replicated data cluster
- On Windows Server, a scheduled snapshot operation may fail due to mounted volumes being locked by the OS
- Multi-pathing issues
- Multi-pathing may be disabled on Windows Server 2016 systems
- Bug check may occur when adding DMP DSM option
- Changes made to a multipathing policy of a LUN using the Microsoft Disk Management console, do not appear on the VEA GUI
- VEA or CLI operations for DMP DSMs fail without providing error message if WMI service is disabled
- vxdmpadm's deviceinfo and pathinfo with disk specified in p#c#t#l# parameter display information for only one path
- After upgrading firmware to version 6.7.x, VCOMPLNT DSM claims DELL Compellent LUN incorrectly
- Replication issues
- Volume Replicator replication may fail if Symantec Endpoint Protection (SEP) version 12.1 is installed
- Volume Replicator replication fails to start on systems where Symantec Endpoint Protection (SEP) version 12.1 or 12.1 RU2 is installed
- RVGPrimary resource fails to come online if VCS engine debug logging is enabled
- "Invalid Arguments" error while performing the online volume shrink
- vxassist shrinkby or vxassist querymax operation fails with "Invalid Arguments"
- In synchronous mode of replication, file system may incorrectly report volumes as raw and show "scan and fix" dialog box for fast failover configurations
- VxSAS configuration wizard fails to discover hosts in IPv6 DNS
- VxSAS configuration wizard doesn't work in NAT environments
- File system may incorrectly report volumes as raw due to I/O failure
- NTFS errors are displayed in Event Viewer if fast-failover DR setup is configured with Volume Replicator
- Volume shrink fails because RLINK cannot resume due to heavy I/Os
- Online volume shrink operation fails for data volumes with multiple Secondaries if I/Os are active
- RLINKs cannot connect after changing the heartbeat port number
- On a DR setup, if Replicated Data Set (RDS) components are browsed for on the secondary site, then the VEA console does not respond
- Secondary host is getting removed and added when scheduled sync snapshots are taken
- Replication may stop if the disks are write cache enabled
- Discrepancy in the Replication Time Lag Displayed in VEA and CLI
- The vxrlink updates command displays inaccurate values
- Some Volume Replicator operations may fail to complete in a cluster environment
- IBC IOCTL Failed Error Message
- Pause and Resume commands take a long time to complete
- Replication keeps switching between the pause and resume state
- VEA GUI has problems in adding secondary if all NICs on primary are DHCP enabled
- Pause secondary operation fails when SQLIO is used for I/Os
- Performance counter cannot be started for Volume Replicator remote hosts in perfmon GUI
- Volume Replicator Graphs get distorted when bandwidth value limit is set very high
- BSOD seen on a Hyper-V setup
- Unable to start statistics collection for Volume Replicator Memory and Volume Replicator remote hosts object in Perfmon
- Bunker primary fails to respond when trying to perform stop replication operation on secondary
- CLI shows the "Volume in use" error when you dismount the ReFS data volumes on the Secondary RVG
- Solution configuration issues
- Exchange 2010 Configuration Wizard does not allow you to select the application databases
- Permission issues after upgrading an Exchange cluster
- Exchange service group does not fail over after installing ScanMail 8.0
- Error while performing Exchange post-installation steps
- Exchange Setup Wizard does not allow a node to be rebuilt and fails during installation
- Resource for Exchange Information Store may take time to come online
- Oracle Enterprise Manager cannot be used for database control
- Unexplained errors with DR wizard and QR wizard
- VCS FD and DR wizards fail to configure application and hardware replication agent settings
- Disaster recovery (DR) configuration issues
- The Disaster Recovery Configuration Wizard or the Fire Drill Wizard cannot proceed when configuring an application in an EMC SRDF replication environment
- The DR Wizard does not provide a separate "GCO only" option for Volume Replicator-based replication
- The Disaster Recovery Wizard fails if the primary and secondary sites are in different domains or if you run the wizard from another domain
- The Disaster Recovery Wizard may fail to bring the RVGPrimary resources online
- The Disaster Recovery Wizard requires that an existing storage layout for an application on a secondary site matches the primary site layout
- The Disaster Recovery Wizard may fail to create the Secondary Replicator Log (SRL) volume
- The Disaster Recovery Wizard may display a failed to discover NIC error on the Secondary system selection page
- Service group cloning fails if you save and close the configuration in the Java Console while cloning is in progress
- If RVGs are created manually with mismatched names, the DR Wizard does not recognize the RVG on the secondary site and attempts to create the secondary RVG
- Cloned service group faults and fails over to another node during DR Wizard execution resulting in errors
- DR wizard may display database constraint exception error after storage validation in EMC SRDF environment
- DR wizard creation of secondary RVGs may fail due to mounted volumes being locked by the OS
- DR wizard with Volume Replicator replication requires configuring the preferred network setting in VEA
- DR wizard displays error message on failure to attach DCM logs for Volume Replicator replication
- Disaster Recovery (DR) Wizard fails to automatically set the correct storage replication option in case of SRDF
- Disaster Recovery (DR) Wizard reports an error during storage cloning operation in case of SRDF
- Fire drill (FD) configuration issues
- Fire Drill Wizard may fail to recognize that a volume fits on a disk if the same disk is being used for another volume
- Fire drill may fail if run again after a restore without exiting the wizard first
- Fire Drill Wizard may time out before completing fire drill service group configuration
- RegRep resource may fault while bringing the fire drill service group online during "Run Fire Drill" operation
- Fire Drill Wizard in an HTC environment is untested in a configuration that uses the same horcm file for both regular and snapshot replication
- FireDrill attribute is not consistently enabled or disabled
- MountV resource state incorrectly set to UNKNOWN
- Quick recovery (QR) configuration issues
- Internationalization and localization issues
- Only US-ASCII characters are supported
- Use only U.S. ASCII characters in the SFW or SFW HA installation directory name
- Language preference in Veritas Enterprise Administrator (VEA) must be set to English (United States) or Japanese (Japan)
- VEA GUI cannot show double-byte characters correctly on (English) Windows operating system
- VEA cannot connect to the remote VEA server on non-English platforms
- Unable to output correct results for Japanese commands
- SSO configuration fails if the system name contains non-English locale characters [2910613]
- VCS cluster may display "stale admin wait" state if the virtual computer name and the VCS cluster name contains non-English locale characters
- Issues faced while configuring application monitoring for a Windows service having non-English locale characters in its name
- Interoperability issues
- In an InfoScale Storage and InfoScale Availability co-existence scenario, an application service group configuration wizard may display an option to configure NetApp SnapMirror resources
- Backup Exec 12 installation fails in a VCS environment
- Symantec Endpoint Protection security policy may block the VCS Cluster Configuration Wizard
- VCS cluster configuration fails if Symantec Endpoint Protection 11.0 MR3 version is installed
- VCS services do not start on systems where Symantec Endpoint Protection (SEP) version 12.1 is installed
- Several issues while you configure VCS on systems where Symantec Endpoint Protection (SEP) version 12.1 is installed
- Miscellaneous issues
- Cluster node may become unresponsive if you try to modify network properties of adapters assigned to the VCS private network
- SharePoint 2010 resource faults with an initialization error
- SharePoint 2010 resource fails to come online on non-central administrator node
- MSMQ resource fails to come online if the MSMQ directory path contains double byte characters
- Error while switching global service groups using Veritas Operations Manager 3.0
- Saving large configuration results in very large file size for main.cf
- AutoStart may violate limits and prerequisites load policy
- Trigger not invoked in REMOTE_BUILD state
- Some alert messages do not display correctly
- If VCS upgrade fails on one or more nodes, HAD fails to start and cluster becomes unusable
- Custom settings in the cluster configuration are lost after an upgrade if attribute values contain double quote characters
- Options on the Domain Selection panel in the VCS Cluster Configuration Wizard are disabled
- Live migration of a VM, which is part of a VCS cluster where LLT is configured over Ethernet, from one Hyper-V host to another may result in inconsistent HAD state
- If a NIC that is configured for LLT protocol is disabled, LLT does not notify clients
- Fibre Channel adapter issues
- Storage agent issues and limitations in VMware virtual environments
- Fixed issues
- Documentation errata
Exchange 2010 Configuration Wizard does not allow you to select the application databases
During the service group configuration, on the Exchange Database Selection panel, the Exchange 2010 Configuration Wizard does not allow you to select the databases even though they are present on the shared storage. (3808895)
This issue occurs if InfoScale Storage and InfoScale Availability co-exist in your deployment, that is, both InfoScale products are installed on the same systems. In such a setup, InfoScale Storage is used to manage the application data, and InfoScale Availability is used for application high availability.
When InfoScale Availability is installed, the Exchange 2010 Configuration Wizard attempts to discover databases that are created on shared NetApp LUNs or on shared volumes created on LDM disks. In the co-existence scenario, the data resides on shared disks managed using SFW instead of NetApp LUNs or LDM disks. As a result, during the service group configuration, the wizard fails to discover the disks and does not allow you to select the databases.
Workaround:
To resolve the issue, perform the following steps; a scripted sketch of the registry key rename is provided after the steps:
- Before you run the service group configuration wizard, rename the following registry key to any other value:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Veritas\VPI\{F834E070-8D71-4c4b-B688-06964B88F3E8}\Solutions\vrts.soln.ha_adv.server
- Run the service group configuration wizard and complete the service group configuration.
- After the service group is configured, reset the registry key value.
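If you prefer to script the registry change, the following Python sketch can serve as a reference. It is not part of the product; it is a minimal illustration that uses the standard winreg module to rename the key by copying its values to a new key and deleting the original. It assumes the key has no subkeys, that the key path shown in the step above is one continuous path, that Python runs with administrative privileges, and that the new key name (vrts.soln.ha_adv.server.disabled) is an arbitrary placeholder.

# Hypothetical helper, not shipped with InfoScale: renames the registry key that
# the Exchange 2010 Configuration Wizard checks. Run from an elevated
# (Administrator) Python prompt; assumes the key has no subkeys.
import sys
import winreg

PARENT = r"SOFTWARE\Wow6432Node\Veritas\VPI\{F834E070-8D71-4c4b-B688-06964B88F3E8}\Solutions"
OLD_NAME = "vrts.soln.ha_adv.server"
NEW_NAME = OLD_NAME + ".disabled"   # any other name works, per the workaround

def rename_key(parent, old_name, new_name):
    """Copy all values from parent\\old_name to parent\\new_name, then delete the old key."""
    src_path = parent + "\\" + old_name
    dst_path = parent + "\\" + new_name
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, src_path) as src, \
         winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, dst_path) as dst:
        index = 0
        while True:
            try:
                name, data, value_type = winreg.EnumValue(src, index)
            except OSError:
                break                      # no more values to copy
            winreg.SetValueEx(dst, name, 0, value_type, data)
            index += 1
    winreg.DeleteKey(winreg.HKEY_LOCAL_MACHINE, src_path)

if __name__ == "__main__":
    if sys.argv[1:] == ["disable"]:
        rename_key(PARENT, OLD_NAME, NEW_NAME)   # before running the wizard
    elif sys.argv[1:] == ["restore"]:
        rename_key(PARENT, NEW_NAME, OLD_NAME)   # after the service group is configured

Run the script with the disable argument before starting the service group configuration wizard, and with the restore argument after the service group is configured.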