Veritas InfoScale™ 7.0 Release Notes - Solaris
Last Published:
2017-12-18
Product(s):
InfoScale & Storage Foundation (7.0)
- About this document
- Important release information
- About the Veritas InfoScale product suite
- Licensing Veritas InfoScale
- About Symantec Operations Readiness Tools
- Changes introduced in 7.0
- Licensing changes for InfoScale 7.0
- Support for SmartIO caching on SSD devices exported by FSS
- VMwareDisks agent
- Stronger security with 2048 bit key and SHA256 signature certificates
- Co-existence of SF 6.0.5 and Availability 7.0
- Inter Process Messaging (IPM) protocol used for secure communication is not supported
- ApplicationHA is not included in the 7.0 Veritas InfoScale product family
- Changes related to installation and upgrade
- Not supported in this release
- Changes related to documents
- System requirements
- Fixed Issues
- Known Issues
- Issues related to installation and upgrade
- In the SF 6.0.5 and Availability 7.0 co-existence scenario, messages are displayed when running the local 6.0.5 installer script [3841305, 3841598]
- In the SF 6.0.5 and Availability 7.0 co-existence scenario, VRTSsfcpi601 cannot be removed [3841218]
- Notify sink and generic application resources move to OFFLINE|UNKNOWN state after VCS upgrade [3806690]
- Switch fencing in enable or disable mode may not take effect if VCS is not reconfigured [3798127]
- After the upgrade to version 7.0, the installer may fail to stop the Asynchronous Monitoring Framework (AMF) process [3781993]
- LLT may fail to start after upgrade on Solaris 11 [3770835]
- On SunOS, drivers may not be loaded after a reboot [3798849]
- On Oracle Solaris, drivers may not be loaded after stop and then reboot [3763550]
- During an upgrade process, the AMF_START or AMF_STOP variable values may be inconsistent [3763790]
- Uninstallation fails on global zone on Solaris 11 if product packages are installed on both global zone and local zone [3762814]
- On Solaris 11, when you install the operating system together with SFHA products using Automated Installer, the local installer scripts do not get generated. (3640805)
- Node panics after upgrade from Solaris 11 to Solaris 11.1 on systems running version 6.0.1 or earlier (3560268)
- Stopping the installer during an upgrade and then resuming the upgrade might freeze the service groups (2574731)
- Installing VRTSvlic package during live upgrade on Solaris system non-global zones displays error messages [3623525]
- On Solaris 10, a flash archive installed through JumpStart may cause a new system to go into maintenance mode on reboot (2379123)
- VCS installation with CPI fails when a non-global zone is in installed state and zone root is not mounted on the node (2731178)
- Log messages are displayed when VRTSvcs is uninstalled on Solaris 11 [2919986]
- Cluster goes into STALE_ADMIN_WAIT state during upgrade from VCS 5.1 to 6.1 or later [2850921]
- Flash Archive installation not supported if the target system's root disk is encapsulated
- The Configure Sybase ASE CE Instance in VCS option creates duplicate service groups for Sybase binary mount points (2560188)
- The installer fails to unload the GAB module during installation of SF packages [3560458]
- On Solaris 11 non-default ODM mount options will not be preserved across package upgrade (2745100)
- Upgrade or uninstallation of SFHA or SFCFSHA may encounter module unload failures (2159652)
- The vxdisksetup command fails to initialize disks in cdsdisk format for disks in logical domains greater than 1 TB (2557072)
- Upgrade fails when a zone is installed on a VxFS file system that is offline; the packages in the zone are not updated (3319753)
- If you choose to upgrade nodes without zones first, the rolling upgrade or phased upgrade is not blocked in the beginning, but fails later (3319961)
- Upgrades from previous SF Oracle RAC versions may fail on Solaris systems (3256400)
- After a locale change restart the vxconfig daemon (2417547, 2116264)
- Verification of Oracle binaries incorrectly reports as failed during Oracle Grid Infrastructure installation
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- Veritas Volume Manager known issues
- (Solaris 11 x64) VxVM commands do not give output within the timeout value specified in the VCS script after disk attach or detach operations [3846006]
- vxassist fails to create volume with maxsize option on FSS disk group that has disk local to one node and exported for FSS disk group [3736095]
- vxdisksetup -if fails on PowerPath disks of sizes 1T to 2T [3752250]
- vxdmpraw creates raw devices for the whole disk, which causes problems on Oracle ASM 11.2.0.4 [3738639]
- VRAS verifydata command fails without cleaning up the snapshots created [3558199]
- Root disk encapsulation fails for root volume and swap volume configured on thin LUNs (3538594)
- The vxdisk resize command does not claim the correct LUN size on Solaris 11 during expansion of the LUN from array side (2858900)
- SmartIO VxVM cache invalidated after relayout operation (3492350)
- Disk greater than 1TB goes into error state [3761474, 3269099]
- Importing an exported zpool can fail when DMP native support is on (3133500)
- vxmirror to a SAN destination fails when a 5-partition layout is present: for example, root, swap, home, var, usr (2815311)
- Server panic after losing connectivity to the voting disk (2787766)
- Performance impact when a large number of disks are reconnected (2802698)
- Veritas Volume Manager (VxVM) might report false serial split brain under certain scenarios (1834513)
- Suppressing the primary path of an encapsulated SAN boot disk from Veritas Volume Manager causes the system reboot to fail (1933631)
- After changing the preferred path from the array side, the secondary path becomes active (2490012)
- Disk group import of BCV LUNs using -o updateid and -o useclonedev options is not supported if the disk group has mirrored volumes with DCO or has snapshots (2831658)
- After devices that are managed by EMC PowerPath lose access to storage, Veritas Volume Manager commands are delayed (2757198)
- vxresize does not work with layered volumes that have multiple plexes at the top level (3301991)
- In a clustered configuration with Oracle ASM and DMP and AP/F array, when all the storage is removed from one node in the cluster, the Oracle DB is unmounted from other nodes of the cluster (3237696)
- Importing a clone disk group fails after splitting pairs (3134882)
- The DMP EMC CLARiiON ASL does not recognize mirror view not ready LUNs (3272940)
- When all Primary/Optimized paths between the server and the storage array are disconnected, ASM disk group dismounts and the Oracle database may go down (3289311)
- Running the vxdisk disk set clone=off command on imported clone disk group luns results in a mix of clone and non-clone disks (3338075)
- The administrator must explicitly enable and disable support for a clone device created from an existing root pool (3110589)
- Restarting the vxconfigd daemon on the slave node after a disk is removed from all nodes may cause the disk groups to be disabled on the slave node (3591019)
- Failback to primary paths does not occur if the node that initiated the failover leaves the cluster (1856723)
- Issues if the storage connectivity to data disks is lost on a CVM slave node while vxconfigd was not running on the node (2562889)
- The vxcdsconvert utility is supported only on the master node (2616422)
- Re-enabling connectivity if the disks are in local failed (lfailed) state (2425977)
- Issues with the disk state on the CVM slave node when vxconfigd is restarted on all nodes (2615680)
- Plex synchronization is not completed after resuming synchronization on a new master when the original master lost connectivity (2788077)
- A master node is not capable of doing recovery if it cannot access the disks belonging to any of the plexes of a volume (2764153)
- CVM fails to start if the first node joining the cluster has no connectivity to the storage (2787713)
- CVMVolDg agent may fail to deport CVM disk group when CVMDeportOnOffline is set to 1
- The vxsnap print command shows incorrect value for percentage dirty [2360780]
- For Solaris 11.1 or later, uninstalling DMP or disabling DMP native support requires steps to enable booting from alternate root pools (3178642)
- For Solaris 11.1 or later, after enabling DMP native support for ZFS, only the current boot environment is bootable (3157394)
- When dmp_native_support is set to on, commands hang for a long time on SAN failures (3084656)
- System hangs on boot after a Boot Environment upgrade to Solaris 11 Update 2 and SF 6.2 from Solaris 11 GA [3628743]
- vxdisk export operation fails if length of hostprefix and device name exceeds 30 characters (3543668)
- Virtualization known issues
- Veritas File System known issues
- Warning messages sometimes appear in the console during system startup (2354829)
- vxresize may fail when you shrink a file system with the "blocks are currently in use" error (3762935)
- On Solaris 11 Update 2, /dev/odm may show 'Device busy' status when the system mounts ODM [3661567]
- Delayed allocation may be turned off automatically when one of the volumes in a multi-volume file system nears 100% (2438368)
- The file system deduplication operation fails with the error message "DEDUP_ERROR Error renaming X checkpoint to Y checkpoint on filesystem Z error 16" (3348534)
- Enabling delayed allocation on a small file system may disable the file system (2389318)
- On the cluster file system, clone dispose may fail [3754906]
- VRTSvxfs verification reports error after upgrading to 7.0 [3463479]
- Taking a FileSnap over NFS multiple times with the same target name can result in the 'File exists' error (2353352)
- On the online cache device you should not perform the mkfs operation, because any subsequent fscache operation panics (3643800)
- Deduplication can fail with error 110 (3741016)
- A restored volume snapshot may be inconsistent with the data in the SmartIO VxFS cache (3760219)
- When in-place and relocate compression rules are in the same policy file, file relocation is unpredictable (3760242)
- The file system may hang when it has compression enabled (3331276)
- Replication known issues
- RVGPrimary agent operation to start replication between the original Primary and the bunker fails during failback (2036605)
- A snapshot volume created on the Secondary, containing a VxFS file system may not mount in read-write mode and performing a read-write mount of the VxFS file systems on the new Primary after a global clustering site failover may fail [3761497]
- In an IPv6-only environment, RVG, data volume, or SRL names cannot contain a colon (1672410, 1672417)
- vxassist relayout removes the DCM (145413)
- vradmin functionality may not work after a master switch operation [2158679]
- Cannot relayout data volumes in an RVG from concat to striped-mirror (2129601)
- vradmin verifydata operation fails when replicating between versions 5.1 and 6.0 or later (2360713)
- vradmin verifydata may report differences in a cross-endian environment (2834424)
- vradmin verifydata operation fails if the RVG contains a volume set (2808902)
- SRL resize followed by a CVM slave node join causes the RLINK to detach (3259732)
- Bunker replay does not occur with volume sets (3329970)
- During moderate to heavy I/O, the vradmin verifydata command may falsely report differences in data (3270067)
- The vradmin repstatus command does not show that the SmartSync feature is running [3343141]
- While vradmin commands are running, vradmind may temporarily lose heartbeats (3347656, 3724338)
- Write I/Os on the primary logowner may take a long time to complete (2622536)
- After performing a CVM master switch on the secondary node, both rlinks detach (3642855)
- The RVGPrimary agent may fail to bring the application service group online on the new Primary site because of a previous primary-elect operation not being run or not completing successfully (3761555, 2043831)
- A snapshot volume created on the Secondary, containing a VxFS file system may not mount in read-write mode and performing a read-write mount of the VxFS file systems on the new Primary after a global clustering site failover may fail (1558257)
- Cluster Server known issues
- Operational issues for VCS
- Some VCS components do not work on the systems where a firewall is configured to block TCP traffic
- Stale legacy_run services seen when VCS is upgraded to support SMF [2431741]
- The hastop -all command on a VCS cluster node with an AlternateIO resource and StorageSG containing service groups may leave the node in the LEAVING state
- Missing characters in system messages [2334245]
- After OS upgrade from Solaris 10 update 8 or 9 to Solaris 10 update 10 or 11, Samba server, SambaShare and NetBios agents fail to come online [3321120]
- CP server does not allow adding and removing HTTPS virtual IP or ports when it is running [3322154]
- CP server does not support IPv6 communication with HTTPS protocol [3209475]
- System encounters multiple VCS resource timeouts and agent core dumps [3424429]
- Some VCS components do not work on the systems where a firewall is configured to block TCP traffic [3545338]
- Issues related to the VCS engine
- Extremely high CPU utilization may cause HAD to fail to heartbeat to GAB [1744854]
- Missing host names in engine_A.log file (1919953)
- The hacf -cmdtocf command generates a broken main.cf file [1919951]
- Character corruption observed when executing the uuidconfig.pl -clus -display -use_llthost command [2350517]
- Trigger does not get executed when there is more than one leading or trailing slash in the triggerpath [2368061]
- Service group is not auto started on the node having incorrect value of EngineRestarted [2653688]
- Group is not brought online if top level resource is disabled [2486476]
- NFS resource goes offline unexpectedly and reports errors when restarted [2490331]
- Parent group does not come online on a node where child group is online [2489053]
- Cannot modify temp attribute when VCS is in LEAVING state [2407850]
- Oracle service group faults on secondary site during failover in a disaster recovery scenario [2653704]
- Service group may fail to come online after a flush and a force flush operation [2616779]
- Elevated TargetCount prevents the online of a service group with hagrp -online -sys command [2871892]
- Auto failover does not happen in case of two successive primary and secondary cluster failures [2858187]
- GCO clusters remain in INIT state [2848006]
- The ha commands may fail for non-root user if cluster is secure [2847998]
- Startup trust failure messages in system logs [2721512]
- Running -delete -keys for any scalar attribute causes core dump [3065357]
- Veritas InfoScale enters into admin_wait state when Cluster Statistics is enabled with load and capacity defined [3199210]
- Agent reports incorrect state if VCS is not set to start automatically and utmp file is empty before VCS is started [3326504]
- VCS crashes if feature tracking file is corrupt [3603291]
- RemoteGroup agent and non-root users may fail to authenticate after a secure upgrade [3649457]
- Issues related to the bundled agents
- Entry points that run inside a zone are not cancelled cleanly [1179694]
- Solaris mount agent fails to mount Linux NFS exported directory
- The zpool command runs into a loop if all storage paths from a node are disabled
- Zone remains stuck in down state if tried to halt with file system mounted from global zone [2326105]
- Process and ProcessOnOnly agent rejects attribute values with white spaces [2303513]
- The zpool commands hang and remain in memory till reboot if storage connectivity is lost [2368017]
- Offline of zone resource may fail if zoneadm is invoked simultaneously [2353541]
- Password changed while using hazonesetup script does not apply to all zones [2332349]
- RemoteGroup agent does not failover in case of network cable pull [2588807]
- CoordPoint agent remains in faulted state [2852872]
- Prevention of Concurrency Violation (PCV) is not supported for applications running in a container [2536037]
- Share resource goes offline unexpectedly causing service group failover [1939398]
- Mount agent does not support all scenarios of loopback mounts
- Invalid Netmask value may display code errors [2583313]
- Zone root configured on ZFS with ForceAttach attribute enabled causes zone boot failure (2695415)
- Error message is seen for Apache resource when zone is in transient state [2703707]
- Monitor falsely reports NIC resource as offline when zone is shutting down (2683680)
- Apache resource does not come online if the directory containing the Apache pid file gets deleted when a node or zone restarts (2680661)
- Online of LDom resource may fail due to incompatibility of LDom configuration file with host OVM version (2814991)
- Online of IP or IPMultiNICB resource may fail if its IP address specified does not fit within the values specified in the allowed-address property (2729505)
- Application resource running in a container with PidFiles attribute reports offline on upgrade to VCS 6.0 or later [2850927]
- NIC resource may fault during group offline or failover on Solaris 11 [2754172]
- NFS client reports error when server is brought down using shutdown command [2872741]
- NFS client reports I/O error because of network split brain [3257399]
- Mount resource does not support spaces in the MountPoint and BlockDevice attribute values [3335304]
- IP Agent fails to detect the online state for the resource in an exclusive-IP zone [3592683]
- SFCache Agent fails to enable caching if cache area is offline [3644424]
- RemoteGroup agent may stop working on upgrading the remote cluster in secure mode [3648886]
- (Solaris 11 x64) Application does not come online after the ESX server crashes or is isolated [3838654]
- (Solaris 11 x64) Application may not failover when a cable is pulled off from the ESX host [3842833]
- (Solaris 11 x64) Disk may not be visible on VM even after the VMwareDisks resource is online [3838644]
- (Solaris 11 x64) Virtual machine may hang when the VMwareDisks resource is trying to come online [3849480]
- Issues related to the VCS database agents
- Netlsnr agent monitoring cannot detect tnslsnr running on Solaris if the entire process name exceeds 79 characters [3784547]
- The ASMInstAgent does not support having pfile/spfile for the ASM Instance on the ASM diskgroups
- VCS agent for ASM: Health check monitoring is not supported for ASMInst agent
- NOFAILOVER action specified for certain Oracle errors
- ASMInstance resource monitoring offline resource configured with OHASD as application resource logs error messages in VCS logs [2846945]
- Oracle agent fails to offline pluggable database (PDB) resource with PDB in backup mode [3592142]
- Clean succeeds for PDB even when the PDB status is UNABLE to OFFLINE [3609351]
- Second level monitoring fails if user and table names are identical [3594962]
- Monitor entry point times out for Oracle PDB resources when CDB is moved to suspended state in Oracle 12.1.0.2 [3643582]
- Oracle agent fails to online and monitor Oracle instance if threaded_execution parameter is set to true [3644425]
- Issues related to the agent framework
- Agent framework cannot handle leading and trailing spaces for the dependent attribute (2027896)
- The agent framework does not detect if service threads hang inside an entry point [1442255]
- IMF related error messages while bringing a resource online and offline [2553917]
- Delayed response to VCS commands observed on nodes with several resources and system has high CPU usage or high swap usage [3208239]
- CFSMount agent may fail to heartbeat with VCS engine and logs an error message in the engine log on systems with high memory load [3060779]
- Logs from scripts other than the agent entry point go into the engine logs [3547329]
- Issues related to Intelligent Monitoring Framework (IMF)
- Registration error while creating a Firedrill setup [2564350]
- IMF does not fault zones if zones are in ready or down state [2290883]
- IMF does not detect the zone state when the zone goes into a maintenance state [2535733]
- IMF does not provide notification for a registered disk group if it is imported using a different name (2730774)
- Direct execution of linkamf displays syntax error [2858163]
- Error messages displayed during reboot cycles [2847950]
- Error message displayed when ProPCV prevents a process from coming ONLINE to prevent concurrency violation does not have I18N support [2848011]
- AMF displays StartProgram name multiple times on the console without a VCS error code or logs [2872064]
- VCS engine shows error for cancellation of reaper when Apache agent is disabled [3043533]
- Terminating the imfd daemon orphans the vxnotify process [2728787]
- Agent cannot become IMF-aware with agent directory and agent file configured [2858160]
- ProPCV fails to prevent a script from running if it is run with relative path [3617014]
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- VCS Cluster Configuration wizard issues
- LLT known issues
- I/O fencing known issues
- The cpsadm command fails after upgrading CP server to 6.0 or above in secure mode (2846727)
- Delay in rebooting Solaris 10 nodes due to vxfen service timeout issues (1897449)
- CP server repetitively logs unavailable IP addresses (2530864)
- Fencing port b is visible for few seconds even if cluster nodes have not registered with CP server (2415619)
- The cpsadm command fails if LLT is not configured on the application cluster (2583685)
- When I/O fencing is not up, the svcs command shows VxFEN as online (2492874)
- In absence of cluster details in CP server, VxFEN fails with pre-existing split-brain message (2433060)
- The vxfenswap utility does not detect failure of coordination points validation due to an RSH limitation (2531561)
- Fencing does not come up on one of the nodes after a reboot (2573599)
- Common product installer cannot setup trust between a client system on release version 5.1SP1 and a server on release version 6.0 or later [3226290]
- Hostname and username are case sensitive in CP server (2846392)
- Server-based fencing comes up incorrectly if default port is not mentioned (2403453)
- Secure CP server does not connect from localhost using 127.0.0.1 as the IP address (2554981)
- Unable to customize the 30-second duration (2551621)
- CoordPoint agent does not report the addition of new disks to a Coordinator disk group [2727672]
- Fencing may show the RFSM state as replaying for some nodes in the cluster (2555191)
- The vxfenswap utility deletes comment lines from the /etc/vxfenmode file if you run the utility with the hacli option (3318449)
- When you configure CP server only for HTTPS-based communication, the engine_A.log displays a misleading message (3321101)
- The vxfentsthdw utility may not run on systems installed with partial SFHA stack [3333914]
- When a client node goes down, for reasons such as node panic, I/O fencing does not come up on that client node after node restart (3341322)
- The vxfenconfig -l command output does not list Coordinator disks that are removed using the vxdmpadm exclude dmpnodename=<dmp_disk/node> command [3644431]
- Stale .vxfendargs file lets hashadow restart vxfend in Sybase mode (2554886)
- CP server configuration fails while setting up secure credentials for CP server hosted on an SFHA cluster (2621029)
- Coordination point server-based fencing may fail if it is configured on 5.1SP1RP1 using 6.0.1 coordination point servers (3226290)
- The CoordPoint agent faults after you detach or reattach one or more coordination disks from a storage array (3317123)
- The upper bound value of FaultTolerance attribute of CoordPoint agent should be less than the majority of the coordination points (2846389)
- GAB known issues
- While deinitializing GAB client, "gabdebug -R GabTestDriver" command logs refcount value 2 (2536373)
- Cluster panics during reconfiguration (2590413)
- GAB may fail to stop during a phased upgrade on Oracle Solaris 11 (2858157)
- Cannot run pfiles or truss files on gablogd (2292294)
- (Oracle Solaris 11) On virtual machines, sometimes the common product installer (CPI) may report that GAB failed to start and may exit (2879262)
- During upgrade, GAB kernel module fails to unload [3560458]
- Operational issues for VCS
- Storage Foundation and High Availability known issues
- Cache area is lost after a disk failure (3158482)
- NFS issues with VxFS Storage Checkpoints (2027492)
- Some SmartTier for Oracle commands do not work correctly in non-POSIX locales (2138030)
- In an IPv6 environment, db2icrt and db2idrop commands return a segmentation fault error during instance creation and instance removal (1602444)
- Boot fails after installing or removing SFHA packages from a Solaris 9 system to a remote Solaris 10 system (1747640)
- Oracle 11gR1 may not work on pure IPv6 environment (1819585)
- Sybase ASE version 15.0.3 causes segmentation fault on some Solaris version (1819595)
- Not all the objects are visible in the VOM GUI (1821803)
- An error message is received when you perform off-host clone for RAC and the off-host node is not part of the CVM cluster (1834860)
- A volume's placement class tags are not visible in the Veritas Enterprise Administrator GUI when creating a SmartTier placement policy (1880081)
- NULL pointer dereference panic with Solaris 10 Update 10 on x64 and Hitachi Data Systems storage (2616044)
- Storage Foundation Cluster File System High Availability known issues
- Write back cache is not supported on the cluster in FSS scenario [3723701]
- CVMVOLDg agent does not go into the FAULTED state [3771283]
- CFS commands might hang when run by non-root (3038283)
- The fsappadm subfilemove command moves all extents of a file (3258678)
- Certain I/O errors during clone deletion may lead to system panic (3331273)
- Panic due to null pointer de-reference in vx_bmap_lookup() (3038285)
- In a CFS cluster that has a small multi-volume file system, the fsadm operation may hang (3348520)
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- ASM disk groups configured with normal or high redundancy are dismounted if the CVM master panics due to network failure in FSS environment or if CVM I/O shipping is enabled (3600155)
- PrivNIC and MultiPrivNIC agents not supported with Oracle RAC 11.2.0.2 and later versions
- CSSD agent forcibly stops Oracle Clusterware if Oracle Clusterware fails to respond (3352269)
- Intelligent Monitoring Framework (IMF) entry point may fail when IMF detects resource state transition from online to offline for CSSD resource type (3287719)
- The vxconfigd daemon fails to start after machine reboot (3566713)
- Health check monitoring fails with policy-managed databases (3609349)
- Issue with format of the last 8-bit number in private IP addresses (1164506)
- CVMVolDg agent may fail to deport CVM disk group
- PrivNIC resource faults in IPMP environments on Solaris 11 systems (2838745)
- Warning message displayed on taking cssd resource offline if LANG attribute is set to "eucJP" (2123122)
- Error displayed on removal of VRTSjadba language package (2569224)
- Veritas Volume Manager can not identify Oracle Automatic Storage Management (ASM) disks (2771637)
- Oracle Universal Installer fails to start on Solaris 11 systems (2784560)
- CVM requires the T10 vendor provided ID to be unique (3191807)
- Preserving Flexible Storage Sharing attributes with vxassist grow and vxresize commands is not supported (3225318)
- FSS Disk group creation with 510 exported disks from master fails with Transaction locks timed out error (3311250)
- vxdisk export operation fails if length of hostprefix and device name exceeds 30 characters (3543668)
- Change in naming scheme is not reflected on nodes in an FSS environment (3589272)
- vxassist does not create data change logs on all mirrored disks, if an FSS volume is created using DM lists (3559362)
- Storage Foundation for Databases (SFDB) tools known issues
- Sometimes SFDB may report the following error message: SFDB remote or privileged command error (2869262)
- SFDB commands do not work in IPV6 environment (2619958)
- When you attempt to move all the extents of a table, the dbdst_obj_move(1M) command fails with an error (3260289)
- Attempt to use SmartTier commands fails (2332973)
- Attempt to use certain names for tiers results in error (2581390)
- Clone operation failure might leave clone database in unexpected state (2512664)
- Clone command fails if PFILE entries have their values spread across multiple lines (2844247)
- Data population fails after datafile corruption, rollback, and restore of offline checkpoint (2869259)
- FileSnap detail listing does not display the details of a particular snap (2846382)
- Flashsnap clone fails under some unusual archivelog configuration on RAC (2846399)
- vxdbd process is online after Flash archive installation (2869269)
- On Solaris 11.1 SPARC, setting up the user-authentication process using the sfae_auth_op command fails with an error message (3556996)
- In the cloned database, the seed PDB remains in the mounted state (3599920)
- Cloning of a container database may fail after a reverse resync commit operation is performed (3509778)
- If one of the PDBs is in the read-write restricted state, then cloning of a CDB fails (3516634)
- Cloning of a CDB fails for point-in-time copies when one of the PDBs is in the read-only mode (3513432)
- If a CDB has a tablespace in the read-only mode, then the cloning fails (3512370)
- If any SFDB installation prior to 6.2 with authentication setup is upgraded to 7.0, the commands fail with an error (3644030)
- Storage Foundation for Sybase ASE CE known issues
- Issues related to installation and upgrade
- Software Limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- DMP does not support devices in the same enclosure that are configured in different modes (2643506)
- DMP support for the Solaris format command (2043956)
- DMP settings for NetApp storage attached environment
- ZFS pool in unusable state if last path is excluded from DMP (1976620)
- When an I/O domain fails, the vxdisk scandisks or vxdctl enable command take a long time to complete (2791127)
- Veritas Volume Manager software limitations
- Snapshot configuration with volumes in shared disk groups and private disk groups is not supported (2801037)
- Storage reclamation does not happen on volumes with break-off snapshot (2798523)
- SmartSync is not supported for Oracle databases running on raw VxVM volumes
- Veritas InfoScale does not support thin reclamation of space on a linked mirror volume (2729563)
- A 1 TB disk that is not labeled using operating system commands goes into an error state after the vxconfigd daemon is restarted
- Converting a multi-pathed disk
- Thin reclamation requests are not redirected even when the ioship policy is enabled (2755982)
- Veritas Operations Manager does not support disk, disk group, and volume state information related to CVM I/O shipping feature (2781126)
- Veritas File System software limitations
- Recommended limit of number of files in a directory
- The vxlist command cannot correctly display numbers greater than or equal to 1 EB
- Limitations with delayed allocation for extending writes feature
- FlashBackup feature of NetBackup 7.5 (or earlier) does not support disk layout Version 8, 9, or 10
- Compressed files that are backed up using NetBackup 7.1 or prior become uncompressed when you restore the files
- SmartIO software limitations
- Dynamic Multi-Pathing software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Programs using networked services may stop responding if the host is disconnected
- Volume agent clean may forcibly stop volume resources
- False concurrency violation when using PidFiles to monitor application resources
- Volumes in a disk group start automatically irrespective of the value of the StartVolumes attribute in VCS [2162929]
- Online for LDom resource fails [2517350]
- Zone agent registered to IMF for Directory Online event
- LDom resource calls clean entry point when primary domain is gracefully shut down
- Application agent limitations
- Interface object name must match net<x>/v4static for VCS network reconfiguration script in Solaris 11 guest domain [2840193]
- Share agent limitation (2717636)
- Campus cluster fire drill does not work when DSM sites are used to mark site boundaries [3073907]
- On Solaris 10, the online operation of IP resource may fail if ifconfig -a returns an error [3609861]
- Mount agent reports resource state as OFFLINE if the configured mount point does not exist [3435266]
- Limitations related to VCS engine
- Loads fail to consolidate and optimize when multiple groups fault [3074299]
- Preferred fencing ignores the forecasted available capacity [3077242]
- Failover occurs within the SystemZone or site when BiggestAvailable policy is set [3083757]
- Load for Priority groups is ignored in groups with BiggestAvailable and Priority in the same group [3074314]
- Cluster configuration wizard limitations
- Limitations related to the VCS database agents
- Engine hangs when you perform a global cluster upgrade from 5.0MP3 in mixed-stack environments [1820327]
- Systems in a cluster must have same system locale setting
- Limitations with DiskGroupSnap agent [1919329]
- Cluster Manager (Java console) limitations
- Limitations related to I/O fencing
- Preferred fencing limitation when VxFEN activates RACER node re-election
- Stopping systems in clusters with I/O fencing configured
- Uninstalling VRTSvxvm causes issues when VxFEN is configured in SCSI3 mode with dmp disk policy (2522069)
- Node may panic if HAD process is stopped by force and then node is shut down or restarted [3640007]
- Limitations related to global clusters
- Clusters must run on VCS 6.0.5 and later to be able to communicate after upgrading to 2048 bit key and SHA256 signature certificates [3812313]
- Limitations related to bundled agents
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Supportability constraints for normal or high redundancy ASM disk groups with CVM I/O shipping and FSS (3600155)
- Limitations of CSSD agent
- Oracle Clusterware/Grid Infrastructure installation fails if the cluster name exceeds 14 characters
- Policy-managed databases not supported by CRSResource agent
- Health checks may fail on clusters that have more than 10 nodes
- Cached ODM not supported in Veritas InfoScale environments
- Storage Foundation for Databases (SFDB) tools software limitations
- Storage Foundation for Sybase ASE CE software limitations
- Storage Foundation software limitations
- Documentation
Cluster panics during reconfiguration (2590413)
While a cluster is reconfiguring, the GAB broadcast protocol can encounter a race condition in the sequence request path. The window for this condition is extremely narrow, but when it is hit, the GAB master panics.
Workaround: There is no workaround for this issue.
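Although there is no workaround, GAB port membership is normally inspected with the gabconfig -a command after a reconfiguration. The following is a minimal sketch, not part of this release; it assumes the conventional "Port <letter> gen <generation> membership <node-ids>" line format of gabconfig -a output, and the sample text is an illustrative example rather than output captured from a real cluster:

```python
# Sketch: parse `gabconfig -a` style output to report which GAB ports
# have formed membership. The sample below assumes the conventional
# output format and is not captured from a real cluster.

def parse_gab_ports(output):
    """Return a dict mapping GAB port letter -> membership string."""
    ports = {}
    for line in output.splitlines():
        fields = line.split()
        # Expected shape: Port <letter> gen <generation> membership <node-ids>
        if len(fields) == 6 and fields[0] == "Port" and fields[4] == "membership":
            ports[fields[1]] = fields[5]
    return ports

if __name__ == "__main__":
    sample = """GAB Port Memberships
===============================================
Port a gen a36e0003 membership 01
Port h gen fd570002 membership 01
"""
    for port, members in parse_gab_ports(sample).items():
        print(f"Port {port}: members {members}")
```

In practice the input would come from running gabconfig -a on each node; port a (GAB) and port h (HAD) memberships that disagree across nodes are a typical sign that a reconfiguration did not complete cleanly.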