Veritas InfoScale™ 7.0 Release Notes - Linux
Last Published: 2017-12-18
Product(s): InfoScale & Storage Foundation (7.0)
- About this document
- Important release information
- About the Veritas InfoScale product suite
- Licensing Veritas InfoScale
- About Symantec Operations Readiness Tools
- Changes introduced in 7.0
- Licensing changes for InfoScale 7.0
- Btrfs support in Mount Agent
- Support for SmartIO caching on SSD devices exported by FSS
- Stronger security with 2048 bit key and SHA256 signature certificates
- Inter Process Messaging (IPM) protocol used for secure communication is not supported
- ApplicationHA is not included in the 7.0 Veritas InfoScale product family
- Changes related to installation and upgrade
- Not supported in this release
- Changes related to documents
- System requirements
- Fixed Issues
- Known Issues
- Issues related to installation and upgrade
- Notify sink and generic application resources move to OFFLINE|UNKNOWN state after VCS upgrade [3806690]
- Switch fencing in enable or disable mode may not take effect if VCS is not reconfigured [3798127]
- During an upgrade process, the AMF_START or AMF_STOP variable values may be inconsistent [3763790]
- Stopping the installer during an upgrade and then resuming the upgrade might freeze the service groups (2574731)
- The uninstaller does not remove all scripts (2696033)
- NetBackup 6.5 or older version is installed on a VxFS file system (2056282)
- Error messages in syslog (1630188)
- Ignore certain errors after an operating system upgrade or after a product upgrade with encapsulated boot disks (2030970)
- After a locale change restart the vxconfig daemon (2417547, 2116264)
- Dependency may get overruled when uninstalling multiple RPMs in a single command [3563254]
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- Veritas Volume Manager known issues
- vxdmpraw creates raw devices for the whole disk, which causes problems on Oracle ASM 11.2.0.4 [3738639]
- After installing DMP 6.0.1 on a host with the root disk under LVM on a cciss controller, the system is unable to boot using the vxdmp_kernel command [3599030]
- VRAS verifydata command fails without cleaning up the snapshots created [3558199]
- SmartIO VxVM cache invalidated after relayout operation (3492350)
- VxVM fails to create volume by the vxassist(1M) command with maxsize parameter on Oracle Linux 6 Update 5 (OEL6U5) [3736647]
- Performance impact when a large number of disks are reconnected (2802698)
- Machine fails to boot after root disk encapsulation on servers with UEFI firmware (1842096)
- Veritas Volume Manager (VxVM) might report false serial split brain under certain scenarios (1834513)
- Root disk encapsulation issue (1603309)
- VxVM starts before OS device scan is done (1635274)
- DMP disables subpaths and initiates failover when an iSCSI link is failed and recovered within 5 seconds (2100039)
- During system boot, some VxVM volumes fail to mount (2622979)
- Unable to upgrade the kernel on an encapsulated boot disk on SLES 11 (2612301)
- Removing an array node from an IBM Storwize V7000 storage system also removes the controller (2816589)
- Continuous trespass loop when a CLARiiON LUN is mapped to a different host than its snapshot (2761567)
- Disk group import of BCV LUNs using -o updateid and -o useclonedev options is not supported if the disk group has mirrored volumes with DCO or has snapshots (2831658)
- After devices that are managed by EMC PowerPath lose access to storage, Veritas Volume Manager commands are delayed (2757198)
- vxresize does not work with layered volumes that have multiple plexes at the top level (3301991)
- Importing a clone disk group fails after splitting pairs (3134882)
- The DMP EMC CLARiiON ASL does not recognize mirror view not ready LUNs (3272940)
- Running the vxdisk disk set clone=off command on imported clone disk group LUNs results in a mix of clone and non-clone disks (3338075)
- vxunroot cannot encapsulate a root disk when the root partition has XFS mounted on it (3614362)
- Restarting the vxconfigd daemon on the slave node after a disk is removed from all nodes may cause the disk groups to be disabled on the slave node (3591019)
- DMP panics if a DDL device discovery is initiated immediately after loss of connectivity to the storage (2040929)
- Failback to primary paths does not occur if the node that initiated the failover leaves the cluster (1856723)
- Issues if the storage connectivity to data disks is lost on a CVM slave node while vxconfigd was not running on the node (2562889)
- The vxcdsconvert utility is supported only on the master node (2616422)
- Re-enabling connectivity if the disks are in local failed (lfailed) state (2425977)
- Issues with the disk state on the CVM slave node when vxconfigd is restarted on all nodes (2615680)
- Plex synchronization is not completed after resuming synchronization on a new master when the original master lost connectivity (2788077)
- A master node is not capable of doing recovery if it cannot access the disks belonging to any of the plexes of a volume (2764153)
- CVM fails to start if the first node joining the cluster has no connectivity to the storage (2787713)
- CVMVolDg agent may fail to deport CVM disk group when CVMDeportOnOffline is set to 1
- cvm_clus resource goes into faulted state after the resource is manually panicked and rebooted in a 32 node cluster (2278894)
- DMP uses OS device physical path to maintain persistence of path attributes from 6.0 [3761441]
- The vxsnap print command shows incorrect value for percentage dirty [2360780]
- device.map must be up to date before doing root disk encapsulation (3761585, 2202047)
- Virtualization known issues
- Configuring application for high availability with storage using VCS wizard may fail on a VMware virtual machine which is configured with more than two storage controllers [3640956]
- Agent kill on source during migration may lead to resource concurrency violation (3042499)
- Host fails to reboot when the resource gets stuck in ONLINE|STATE UNKNOWN state [2738864]
- VM state is in PAUSED state when storage domain is inactive [2747163]
- Switching KVMGuest resource fails due to inadequate swap space on the other host [2753936]
- Policies introduced in SLES 11 SP2 may block graceful shutdown of a VM in SUSE KVM environment [2792889]
- Load on libvirtd may terminate it in SUSE KVM environment [2824952]
- Offline or switch of KVMGuest resource fails if the VM it is monitoring is undefined [2796817]
- Increased memory usage observed even with no VM running [2734970]
- Resource faults when it fails to ONLINE VM because of insufficient swap percentage [2827214]
- Migration of guest VM on native LVM volume may cause libvirtd process to terminate abruptly (2582716)
- Virtual machine may return the not-responding state when the storage domain is inactive and the data center is down (2848003)
- Guest virtual machine may fail on RHEL 6.1 if KVM guest image resides on CVM-CFS [2659944]
- System panics after starting KVM virtualized guest or initiating KVMGuest resource online [2337626]
- KVMGuest agent fails to online the resource in a DR configuration with error 400 [3056096]
- CD ROM with empty file vmPayload found inside the guest when resource comes online [3060910]
- VCS fails to start virtual machine on another node if the first node panics [3042806]
- VM fails to start on the target node if the source node panics or restarts during migration [3042786]
- The High Availability tab does not report LVMVolumeGroup resources as online [2909417]
- Cluster communication breaks when you revert a snapshot in VMware environment [3409586]
- VCS may detect the migration event during the regular monitor cycle due to the timing issue [2827227]
- The vxrhevadm CLI is missing after upgrading the VRTSrhevm package from 6.2 to 6.2.1 [3733178]
- Veritas File System known issues
- Docker does not recognize VxFS backend file system
- On RHEL 7 onwards, Pluggable Authentication Modules (PAM) related error messages for the Samba daemon might occur in system logs [3765921]
- Delayed allocation may be turned off automatically when one of the volumes in a multi-volume file system nears 100% (2438368)
- The file system deduplication operation fails with the error message "DEDUP_ERROR Error renaming X checkpoint to Y checkpoint on filesystem Z error 16" (3348534)
- Enabling delayed allocation on a small file system may disable the file system (2389318)
- After upgrading a file system using the vxupgrade(1M) command, the sfcache(1M) command with the stat option shows garbage value on the secondary node. [3759788]
- XFS file system is not supported for RDE
- When hard links are present in the file system, the sfcache list command shows incorrect cache usage statistics (3059125, 3760172)
- The command tab auto-complete fails for the /dev/vx/ file tree, specifically on RHEL 7 (3602082)
- Task blocked messages display in the console for RHEL5 and RHEL6 (2560357)
- Deduplication can fail with error 110 (3741016)
- System unable to select ext4 from the file system (2691654)
- The system panics with the panic string "kernel BUG at fs/dcache.c:670!" (3323152)
- A restored volume snapshot may be inconsistent with the data in the SmartIO VxFS cache (3760219)
- When in-place and relocate compression rules are in the same policy file, file relocation is unpredictable (3760242)
- During a deduplication operation, the spoold script fails to start (3196423)
- The file system may hang when it has compression enabled (3331276)
- "rpc.statd" in the "nfs-utils" RPM in the various Linux distributions does not properly cleanse the untrusted format strings (3335691)
- Interrupted lazy unmounts can result in a system panic on all the Linux platforms (3736398)
- Replication known issues
- RVGPrimary agent operation to start replication between the original Primary and the bunker fails during failback (2036605)
- A snapshot volume created on the Secondary containing a VxFS file system may not mount in read-write mode, and performing a read-write mount of the VxFS file systems on the new Primary after a global clustering site failover may fail [3761497]
- In an IPv6-only environment RVG, data volumes or SRL names cannot contain a colon (1672410, 1672417, 1825031)
- vxassist relayout removes the DCM (145413)
- vradmin functionality may not work after a master switch operation [2158679]
- Cannot relayout data volumes in an RVG from concat to striped-mirror (2129601)
- vradmin verifydata operation fails when replicating between versions 5.1 and 6.0 or later (2360713)
- vradmin verifydata may report differences in a cross-endian environment (2834424)
- vradmin verifydata operation fails if the RVG contains a volume set (2808902)
- Plex reattach operation fails with unexpected kernel error in configuration update (2791241)
- Bunker replay does not occur with volume sets (3329970)
- SmartIO does not support write-back caching mode for volumes configured for replication by Volume Replicator (3313920)
- During moderate to heavy I/O, the vradmin verifydata command may falsely report differences in data (3270067)
- The vradmin repstatus command does not show that the SmartSync feature is running [3343141]
- While vradmin commands are running, vradmind may temporarily lose heartbeats (3347656, 3724338)
- Write I/Os on the primary logowner may take a long time to complete (2622536)
- After performing a CVM master switch on the secondary node, both rlinks detach (3642855)
- vradmin -g dg repstatus rvg displays the following configuration error: vradmind not reachable on cluster peer (3648854)
- The RVGPrimary agent may fail to bring the application service group online on the new Primary site because of a previous primary-elect operation not being run or not completing successfully (3761555, 2043831)
- A snapshot volume created on the Secondary containing a VxFS file system may not mount in read-write mode, and performing a read-write mount of the VxFS file systems on the new Primary after a global clustering site failover may fail (1558257)
- Cluster Server known issues
- Operational issues for VCS
- LVM SG transition fails in all paths disabled status [2081430]
- SG goes into Partial state if Native LVMVG is imported and activated outside VCS control
- Some VCS components do not work on the systems where a firewall is configured to block TCP traffic
- Switching service group with DiskGroup resource causes reservation conflict with UseFence set to SCSI3 and powerpath environment set [2749136]
- Stale NFS file handle on the client across failover of a VCS service group containing LVMLogicalVolume resource (2016627)
- NFS cluster I/O fails when storage is disabled [2555662]
- VVR configuration may go in a primary-primary configuration when the primary node crashes and restarts [3314749]
- CP server does not allow adding and removing HTTPS virtual IP or ports when it is running [3322154]
- CP server does not support IPv6 communication with HTTPS protocol [3209475]
- VCS fails to stop volume due to a transaction ID mismatch error [3292840]
- Some VCS components do not work on the systems where a firewall is configured to block TCP traffic [3545338]
- Issues related to the VCS engine
- Invalid argument message in the message log due to Red Hat Linux bug (3872083)
- Extremely high CPU utilization may cause HAD to fail to heartbeat to GAB [1744854]
- The hacf -cmdtocf command generates a broken main.cf file [1919951]
- Trigger does not get executed when there is more than one leading or trailing slash in the triggerpath [2368061]
- Service group is not auto started on the node having incorrect value of EngineRestarted [2653688]
- Group is not brought online if top level resource is disabled [2486476]
- NFS resource goes offline unexpectedly and reports errors when restarted [2490331]
- Parent group does not come online on a node where child group is online [2489053]
- Cannot modify temp attribute when VCS is in LEAVING state [2407850]
- Service group may fail to come online after a flush and a force flush operation [2616779]
- Elevated TargetCount prevents the online of a service group with hagrp -online -sys command [2871892]
- Auto failover does not happen in case of two successive primary and secondary cluster failures [2858187]
- GCO clusters remain in INIT state [2848006]
- The ha commands may fail for non-root user if cluster is secure [2847998]
- Running -delete -keys for any scalar attribute causes core dump [3065357]
- Veritas InfoScale enters into admin_wait state when Cluster Statistics is enabled with load and capacity defined [3199210]
- Agent reports incorrect state if VCS is not set to start automatically and utmp file is empty before VCS is started [3326504]
- Log messages are seen on every systemctl transaction on RHEL7 [3609196]
- VCS crashes if feature tracking file is corrupt [3603291]
- RemoteGroup agent and non-root users may fail to authenticate after a secure upgrade [3649457]
- Global Cluster Option (GCO) requires NIC names in a specific format [3641586]
- Issues related to the bundled agents
- LVM Logical Volume will be auto activated during I/O path failure [2140342]
- KVMGuest monitor entry point reports resource ONLINE even for corrupted guest or with no OS installed inside guest [2394235]
- Concurrency violation observed during migration of monitored virtual machine [2755936]
- LVM logical volume may get stuck with reiserfs file system on SLES11 [2120133]
- KVMGuest resource comes online on failover target node when started manually [2394048]
- IMF registration fails for Mount resource if the configured MountPoint path contains spaces [2442598]
- DiskGroup agent is unable to offline the resource if volume is unmounted outside VCS
- RemoteGroup agent does not failover in case of network cable pull [2588807]
- VVR setup with FireDrill in CVM environment may fail with CFSMount Errors [2564411]
- CoordPoint agent remains in faulted state [2852872]
- RVGsnapshot agent does not work with volume sets created using vxvset [2553505]
- No log messages in engine_A.log if VCS does not find the Monitor program [2563080]
- No IPv6 support for NFS [2022174]
- KVMGuest agent fails to recognize paused state of the VM causing KVMGuest resource to fault [2796538]
- Concurrency violation observed when host is moved to maintenance mode [2735283]
- Logical volume resources fail to detect connectivity loss with storage when all paths are disabled in KVM guest [2871891]
- Resource does not appear ONLINE immediately after VM appears online after a restart [2735917]
- Unexpected behavior in VCS observed while taking the disk online [3123872]
- LVMLogicalVolume agent clean entry point fails to stop logical volume if storage connectivity is lost [3118820]
- VM goes into paused state if the source node loses storage connectivity during migration [3085214]
- Virtual machine goes to paused state during migration if the public network cable is pulled on the destination node [3080930]
- NFS resource faults on the node enabled with SELinux and where rpc.statd process may terminate when access is denied to the PID file [3248903]
- NFS client reports I/O error because of network split brain [3257399]
- Mount resource does not support spaces in the MountPoint and BlockDevice attribute values [3335304]
- Manual configuration of RHEVMInfo attribute of KVMGuest agent requires all its keys to be configured [3277994]
- NFS lock failover is not supported on Linux [3331646]
- SambaServer agent may generate core on Linux if LockDir attribute is changed to empty value while agent is running [3339231]
- Independent Persistent disk setting is not preserved during failover of virtual disks in VMware environment [3338702]
- LVMLogicalVolume resource goes in UNABLE TO OFFLINE state if native LVM volume group is exported outside VCS control [3606516]
- DiskGroup resource online may take time if it is configured along with VMwareDisks resource [3638242]
- SFCache Agent fails to enable caching if cache area is offline [3644424]
- RemoteGroup agent may stop working on upgrading the remote cluster in secure mode [3648886]
- VMwareDisks agent may fail to start or storage discovery may fail if SELinux is running in enforcing mode [3106376]
- Issues related to the VCS database agents
- The ASMInstAgent does not support having pfile/spfile for the ASM Instance on the ASM diskgroups
- VCS agent for ASM: Health check monitoring is not supported for ASMInst agent
- NOFAILOVER action specified for certain Oracle errors
- Oracle agent fails to offline pluggable database (PDB) resource with PDB in backup mode [3592142]
- Clean succeeds for PDB even when the PDB status is UNABLE to OFFLINE [3609351]
- Second level monitoring fails if user and table names are identical [3594962]
- Monitor entry point times out for Oracle PDB resources when CDB is moved to suspended state in Oracle 12.1.0.2 [3643582]
- Oracle agent fails to online and monitor Oracle instance if threaded_execution parameter is set to true [3644425]
- Issues related to the agent framework
- Agent framework cannot handle leading and trailing spaces for the dependent attribute (2027896)
- The agent framework does not detect if service threads hang inside an entry point [1442255]
- IMF related error messages while bringing a resource online and offline [2553917]
- Delayed response to VCS commands observed on nodes with several resources and system has high CPU usage or high swap usage [3208239]
- CFSMount agent may fail to heartbeat with VCS engine and logs an error message in the engine log on systems with high memory load [3060779]
- Logs from a script executed outside the agent entry point go into the engine logs [3547329]
- Cluster Server agents for Volume Replicator known issues
- Issues related to Intelligent Monitoring Framework (IMF)
- Registration error while creating a Firedrill setup [2564350]
- IMF does not provide notification for a registered disk group if it is imported using a different name (2730774)
- Direct execution of linkamf displays syntax error [2858163]
- Error messages displayed during reboot cycles [2847950]
- Error message displayed when ProPCV prevents a process from coming ONLINE to prevent concurrency violation does not have I18N support [2848011]
- AMF displays StartProgram name multiple times on the console without a VCS error code or logs [2872064]
- Core dump observed when amfconfig is run with set and reset commands simultaneously [2871890]
- VCS engine shows error for cancellation of reaper when Apache agent is disabled [3043533]
- Terminating the imfd daemon orphans the vxnotify process [2728787]
- Agent cannot become IMF-aware with agent directory and agent file configured [2858160]
- ProPCV fails to prevent a script from running if it is run with relative path [3617014]
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- VCS Cluster Configuration wizard issues
- VCS Cluster Configuration wizard does not automatically close in Mozilla Firefox [3281450]
- Configuration inputs page of VCS Cluster Configuration wizard shows multiple cluster systems for the same virtual machine [3237023]
- VCS Cluster Configuration wizard fails to display mount points on native LVM if volume groups are exported [3341937]
- IPv6 verification fails while configuring generic application using VCS Cluster Configuration wizard [3614680]
- LLT known issues
- LLT may fail to detect when bonded NICs come up (2604437)
- LLT connections are not formed when a vlan is configured on a NIC (2484856)
- LLT port stats sometimes shows recvcnt larger than recvbytes (1907228)
- LLT may incorrectly declare port-level connection for nodes in large cluster configurations [1810217]
- If you manually re-plumb (change) the IP address on a network interface card (NIC) which is used by LLT, then LLT may experience heartbeat loss and the node may panic (3188950)
- A network restart of the network interfaces may cause heartbeat loss for the NIC interfaces used by LLT
- When you execute the /etc/init.d/llt start script to load the LLT module, the syslog file may record messages related to kernel symbols associated with Infiniband (3136418)
- I/O fencing known issues
- One or more nodes in a cluster panic when a node in the cluster is ungracefully shutdown or rebooted [3750577]
- The cpsadm command fails after upgrading CP server to 6.0 or above in secure mode (2846727)
- CP server repetitively logs unavailable IP addresses (2530864)
- Fencing port b is visible for few seconds even if cluster nodes have not registered with CP server (2415619)
- The cpsadm command fails if LLT is not configured on the application cluster (2583685)
- In absence of cluster details in CP server, VxFEN fails with pre-existing split-brain message (2433060)
- The vxfenswap utility does not detect failure of coordination points validation due to an RSH limitation (2531561)
- Fencing does not come up on one of the nodes after a reboot (2573599)
- Common product installer cannot set up trust between a client system on release version 5.1SP1 and a server on release version 6.0 or later [3226290]
- Hostname and username are case sensitive in CP server (2846392)
- Server-based fencing comes up incorrectly if default port is not mentioned (2403453)
- Secure CP server does not connect from localhost using 127.0.0.1 as the IP address (2554981)
- Unable to customize the 30-second duration (2551621)
- CoordPoint agent does not report the addition of new disks to a Coordinator disk group [2727672]
- Fencing may show the RFSM state as replaying for some nodes in the cluster (2555191)
- The vxfenswap utility deletes comment lines from the /etc/vxfenmode file if you run the utility with the hacli option (3318449)
- When you configure CP server only for HTTPS-based communication, the engine_A.log displays a misleading message (3321101)
- The vxfentsthdw utility may not run on systems installed with partial SFHA stack [3333914]
- When a client node goes down, for reasons such as node panic, I/O fencing does not come up on that client node after node restart (3341322)
- VCS fails to take virtual machines offline while restarting a physical host in RHEV and KVM environments (3320988)
- Fencing may panic the node while shut down or restart when LLT network interfaces are under Network Manager control [3627749]
- The vxfenconfig -l command output does not list Coordinator disks that are removed using the vxdmpadm exclude dmpnodename=<dmp_disk/node> command [3644431]
- Coordination point server-based fencing may fail if it is configured on 5.1SP1RP1 using 6.0.1 coordination point servers (3226290)
- The CoordPoint agent faults after you detach or reattach one or more coordination disks from a storage array (3317123)
- The upper bound value of the FaultTolerance attribute of CoordPoint agent should be less than the majority of the coordination points (2846389)
- GAB known issues
- Storage Foundation and High Availability known issues
- Cache area is lost after a disk failure (3158482)
- Installer exits upgrade to 5.1 RP1 with Rolling Upgrade error message (1951825, 1997914)
- In an IPv6 environment, db2icrt and db2idrop commands return a segmentation fault error during instance creation and instance removal (1602444)
- Process start-up may hang during configuration using the installer (1678116)
- Oracle 11gR1 may not work on pure IPv6 environment (1819585)
- Not all the objects are visible in the VOM GUI (1821803)
- An error message is received when you perform off-host clone for RAC and the off-host node is not part of the CVM cluster (1834860)
- A volume's placement class tags are not visible in the Veritas Enterprise Administrator GUI when creating a SmartTier placement policy (1880081)
- Storage Foundation Cluster File System High Availability known issues
- Write back cache is not supported on the cluster in FSS scenario [3723701]
- CVMVolDg agent does not go into the FAULTED state [3771283]
- On CFS, SmartIO is caching writes although the cache appears as nocache on one node (3760253)
- Unmounting the checkpoint using cfsumount(1M) may fail if SELinux is in enforcing mode [3766074]
- tail -f run on a cluster file system file only works correctly on the local node [3741020]
- In SFCFS on Linux, the stack may overflow when the system creates an ODM file [3758102]
- CFS commands might hang when run by non-root (3038283)
- The fsappadm subfilemove command moves all extents of a file (3258678)
- Certain I/O errors during clone deletion may lead to system panic (3331273)
- Panic due to null pointer de-reference in vx_bmap_lookup() (3038285)
- In a CFS cluster that has a small multi-volume file system, the fsadm operation may hang (3348520)
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- ASM disk groups configured with normal or high redundancy are dismounted if the CVM master panics due to network failure in FSS environment or if CVM I/O shipping is enabled (3600155)
- PrivNIC and MultiPrivNIC agents not supported with Oracle RAC 11.2.0.2 and later versions
- CSSD agent forcibly stops Oracle Clusterware if Oracle Clusterware fails to respond (3352269)
- Intelligent Monitoring Framework (IMF) entry point may fail when IMF detects resource state transition from online to offline for CSSD resource type (3287719)
- Node fails to join the SF Oracle RAC cluster if the file system containing Oracle Clusterware is not mounted (2611055)
- The vxconfigd daemon fails to start after machine reboot (3566713)
- Health check monitoring fails with policy-managed databases (3609349)
- Issue with format of the last 8-bit number in private IP addresses (1164506)
- CVMVolDg agent may fail to deport CVM disk group
- Rolling upgrade not supported for upgrades from SF Oracle RAC 5.1 SP1 with fencing configured in dmpmode
- "Configuration must be ReadWrite : Use haconf -makerw" error message appears in VCS engine log when hastop -local is invoked (2609137)
- Veritas Volume Manager can not identify Oracle Automatic Storage Management (ASM) disks (2771637)
- vxdisk resize from slave nodes fails with "Command is not supported for command shipping" error (3140314)
- vxconfigbackup fails on Flexible Storage Sharing disk groups (3079819)
- CVR configurations are not supported for Flexible Storage Sharing (3155726)
- CVM requires the T10 vendor provided ID to be unique (3191807)
- Default volume layout with DAS disks spans across different disks for data plexes and DCO plexes (3087867)
- SG_IO ioctl hang causes disk group creation, CVM node joins, storage connects/disconnects, and vxconfigd to hang in the kernel (3193119)
- Preserving Flexible Storage Sharing attributes with vxassist grow and vxresize commands is not supported (3225318)
- vxdg adddisk operation fails when adding nodes containing disks with the same name (3301085)
- In a Flexible Storage Sharing disk group, the default volume layout is not mirror when creating a volume with the mediatype:ssd attribute (3209064)
- FSS Disk group creation with 510 exported disks from master fails with Transaction locks timed out error (3311250)
- vxconfigrestore is unable to restore FSS cache objects in the pre-commit stage (3461928)
- Change in naming scheme is not reflected on nodes in an FSS environment (3589272)
- vxassist does not create data change logs on all mirrored disks, if an FSS volume is created using DM lists (3559362)
- Intel SSD cannot be initialized and exported (3584762)
- VxVM may report false serial split brain under certain FSS scenarios (3565845)
- Storage Foundation for Databases (SFDB) tools known issues
- Sometimes SFDB may report the following error message: SFDB remote or privileged command error (2869262)
- SFDB commands do not work in IPV6 environment (2619958)
- When you attempt to move all the extents of a table, the dbdst_obj_move(1M) command fails with an error (3260289)
- Attempt to use SmartTier commands fails (2332973)
- Attempt to use certain names for tiers results in error (2581390)
- Clone operation failure might leave clone database in unexpected state (2512664)
- Clone command fails if PFILE entries have their values spread across multiple lines (2844247)
- Clone command errors in a Data Guard environment using the MEMORY_TARGET feature for Oracle 11g (1824713)
- Clone fails with error "ORA-01513: invalid current time returned by operating system" with Oracle 11.2.0.3 (2804452)
- Data population fails after datafile corruption, rollback, and restore of offline checkpoint (2869259)
- FileSnap detail listing does not display the details of a particular snap (2846382)
- Flashsnap clone fails under some unusual archivelog configuration on RAC (2846399)
- In the cloned database, the seed PDB remains in the mounted state (3599920)
- Cloning of a container database may fail after a reverse resync commit operation is performed (3509778)
- If one of the PDBs is in the read-write restricted state, then cloning of a CDB fails (3516634)
- Cloning of a CDB fails for point-in-time copies when one of the PDBs is in the read-only mode (3513432)
- If a CDB has a tablespace in the read-only mode, then the cloning fails (3512370)
- If any SFDB installation prior to 6.2 with authentication setup is upgraded to 7.0, the commands fail with an error (3644030)
- Storage Foundation for Sybase ASE CE known issues
- Sybase Agent Monitor times out (1592996)
- Installer warning (1515503)
- Unexpected node reboot while probing a Sybase resource in transition (1593605)
- Unexpected node reboot when invalid attribute is given (2567507)
- "Configuration must be ReadWrite : Use haconf -makerw" error message appears in VCS engine log when hastop -local is invoked (2609137)
- Issues related to installation and upgrade
- Software Limitations
- Virtualization software limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- Veritas Volume Manager software limitations
- Snapshot configuration with volumes in shared disk groups and private disk groups is not supported (2801037)
- Storage reclamation does not happen on volumes with break-off snapshot (2798523)
- SmartSync is not supported for Oracle databases running on raw VxVM volumes
- Veritas InfoScale does not support thin reclamation of space on a linked mirror volume (2729563)
- Cloned disks operations not supported for FSS disk groups
- Thin reclamation requests are not redirected even when the ioship policy is enabled (2755982)
- Veritas Operations Manager does not support disk, disk group, and volume state information related to CVM I/O shipping feature (2781126)
- Veritas File System software limitations
- Limitations while managing Docker containers
- Linux I/O Scheduler for Database Workloads
- Recommended limit of number of files in a directory
- The vxlist command cannot correctly display numbers greater than or equal to 1 EB
- Limitations with delayed allocation for extending writes feature
- FlashBackup feature of NetBackup 7.5 (or earlier) does not support disk layout Version 8, 9, or 10
- Compressed files that are backed up using NetBackup 7.1 or prior become uncompressed when you restore the files
- On SUSE, creation of a SmartIO cache of VxFS type hangs on Fusion-io device (3200586)
- A NetBackup restore operation on VxFS file systems does not work with SmartIO writeback caching
- VxFS file system writeback operation is not supported with volume level replication or array level replication
- SmartIO software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Programs using networked services may stop responding if the host is disconnected
- Volume agent clean may forcibly stop volume resources
- False concurrency violation when using PidFiles to monitor application resources
- Mount agent limitations
- Share agent limitations
- Volumes in a disk group start automatically irrespective of the value of the StartVolumes attribute in VCS [2162929]
- Application agent limitations
- Campus cluster fire drill does not work when DSM sites are used to mark site boundaries [3073907]
- Mount agent reports resource state as OFFLINE if the configured mount point does not exist [3435266]
- Limitation of VMwareDisks agent to communicate with the vCenter Server [3528649]
- Limitations related to VCS engine
- Loads fail to consolidate and optimize when multiple groups fault [3074299]
- Preferred fencing ignores the forecasted available capacity [3077242]
- Failover occurs within the SystemZone or site when BiggestAvailable policy is set [3083757]
- Load for Priority groups is ignored in groups with BiggestAvailable and Priority in the same group [3074314]
- Cluster configuration wizard limitations
- Limitations related to IMF
- Limitations related to the VCS database agents
- Security-Enhanced Linux is not supported on SLES distributions
- Systems in a cluster must have same system locale setting
- VxVM site for the disk group remains detached after node reboot in campus clusters with fire drill [1919317]
- Limitations with DiskGroupSnap agent [1919329]
- System reboot after panic
- Host on RHEV-M and actual host must match [2827219]
- Cluster Manager (Java console) limitations
- Limitations related to I/O fencing
- Preferred fencing limitation when VxFEN activates RACER node re-election
- Stopping systems in clusters with I/O fencing configured
- Uninstalling VRTSvxvm causes issues when VxFEN is configured in SCSI3 mode with dmp disk policy (2522069)
- Node may panic if HAD process is stopped by force and then node is shut down or restarted [3640007]
- Limitations related to global clusters
- Clusters must run on VCS 6.0.5 and later to be able to communicate after upgrading to 2048 bit key and SHA256 signature certificates [3812313]
- Limitations related to bundled agents
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Supportability constraints for normal or high redundancy ASM disk groups with CVM I/O shipping and FSS (3600155)
- Limitations of CSSD agent
- Oracle Clusterware/Grid Infrastructure installation fails if the cluster name exceeds 14 characters
- SELinux supported in disabled and permissive modes only
- Policy-managed databases not supported by CRSResource agent
- Health checks may fail on clusters that have more than 10 nodes
- Cached ODM not supported in Veritas InfoScale environments
- Storage Foundation for Databases (SFDB) tools software limitations
- Storage Foundation for Sybase ASE CE software limitations
- Documentation
Group is not brought online if top level resource is disabled [2486476]
If a top level resource that does not have any parent dependency is disabled, the other resources in the group do not come online and the following message is displayed:
VCS NOTICE V-16-1-50036 There are no enabled resources in the group cvm to online
Workaround: Bring the child resources of the disabled topmost resource online.
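As a minimal illustration, assume the disabled top level resource in the group cvm has child resources named res_child1 and res_child2, and sysA is the target system (all names here are hypothetical). The children can be brought online directly with the hares command:
# hares -online res_child1 -sys sysA
# hares -online res_child2 -sys sysA
If you are not sure which resources are the children, run hares -dep to list the parent-child dependencies in the configuration and identify the resources beneath the disabled one.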