Veritas InfoScale™ 7.3.1 Release Notes - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (7.3.1)
  1. Introduction
    1. About this document
  2. Changes introduced in 7.3.1
    1. Changes related to installation and upgrades
      1. Change in upgrade path
    2. Changes related to the Cluster Server engine
      1. Support for configuring LLT over TCP
      2. Stale key detection to avoid a false preexisting split brain condition
      3. VCS stop timeout
      4. 256-bit encryption for enhanced security
    3. Changes related to Cluster Server agents
      1. Start-only option for applications
      2. New attributes for Cluster Server agents
    4. Changes related to InfoScale in cloud environments
      1. New High Availability agent for Amazon Web Services (AWS)
      2. Support for configuring applications for HA in Azure cloud using InfoScale Enterprise
      3. New High Availability agents for Azure environment
    5. Changes related to Veritas Volume Manager
      1. Support for InfoScale deployments in Azure cloud
      2. Support for Azure Blob storage connector
      3. Erasure coded volume enhancements
      4. Support for Root Disk Encapsulation (RDE) on all Linux distributions is deprecated
    6. Changes related to Veritas File System
      1. Delayed allocation support extended to clustered file systems
      2. New option [-i] included in fsadm command to exclude the actively used files during file system reorganization
      3. Support for migrating Oracle database from Oracle ASM to Veritas File System (VxFS)
    7. Changes related to replication
      1. Support for configuring volume replication in Azure cloud
      2. Veritas Volume Replicator performance improvements
      3. Support for replication of encrypted volumes
    8. Changes related to Dynamic Multipathing
      1. Support for dynamic multipathing in the KVM guest virtualized machine
  3. System requirements
    1. VCS system requirements
    2. Supported Linux operating systems
      1. Required Linux RPMs for Veritas InfoScale
    3. Storage Foundation for Databases features supported in database environments
    4. Storage Foundation memory requirements
    5. Supported database software
    6. Supported hardware and software
    7. VMware environment
    8. Number of nodes supported
  4. Fixed Issues
    1. Installation and upgrades fixed issues
  5. Known Issues
    1. Issues related to installation and upgrade
      1. InfoScale 7.3.1 behavior on RHEL 7.4 (3929407)
      2. Installer fails on SLES 11 and SLES 12 for older NTP versions (3912493)
      3. Switch fencing in enable or disable mode may not take effect if VCS is not reconfigured [3798127]
      4. During an upgrade process, the AMF_START or AMF_STOP variable values may be inconsistent [3763790]
      5. Stopping the installer during an upgrade and then resuming the upgrade might freeze the service groups (2574731)
      6. The uninstaller does not remove all scripts (2696033)
      7. NetBackup 6.5 or older version is installed on a VxFS file system (2056282)
      8. Error messages in syslog (1630188)
      9. Ignore certain errors after an operating system upgrade - after a product upgrade with encapsulated boot disks (2030970)
      10. After a locale change, restart the vxconfig daemon (2417547, 2116264)
      11. Dependency may get overruled when uninstalling multiple RPMs in a single command [3563254]
      12. Resource faults during rolling upgrade due to Perl changes (3930605)
    2. Issues related to Veritas InfoScale Storage in Amazon Web Services cloud environments
      1. Incorrect media type displayed for AWS EC2 volumes
      2. Inconsistencies in instance store volumes
      3. Stale remote disks on some nodes after failure of vxdisk unexport operation
      4. UDID of AWS volumes not updated after migration
      5. Partial detachment of volumes from AWS console
      6. Crash dump logs not available when EC2 instances crash
      7. vxcloudd daemon fails with a core dump when the bucket name on the target exceeds 32 characters (3916980)
      8. Migration of data to cloud volumes using S3 Connector fails with core dump (3915555)
    3. Storage Foundation known issues
      1. Dynamic Multi-Pathing known issues
        1. kdump functionality does not work when DMP Native Support is enabled on Linux platform [3754715]
        2. On SLES machine, after you enable the switch ports, some paths may not get enabled automatically [3782724]
      2. Veritas Volume Manager known issues
        1. Multiple issues with Root Disk Encapsulation on RHEL
        2. Core dump issue after restoration of disk group backup (3909046)
        3. VxVM tunables not updated on SLES 12 SP2 systems with 4.4 kernel (3916902)
        4. Failed verifydata operation leaves residual cache objects that cannot be removed (3370667)
        5. LUNs claimed but not in use by VxVM may report "Device Busy" when accessed outside VxVM (3667574)
        6. If a disk with the CDS EFI label is used as a remote disk on the cluster node, restarting the vxconfigd daemon on that particular node causes vxconfigd to go into the disabled state (3873123)
        7. Unable to set master on the secondary site in a VVR environment if any pending I/Os are on the secondary site (3874873)
        8. After installing DMP 6.0.1 on a host with the root disk under LVM on a cciss controller, the system is unable to boot using the vxdmp_kernel command [3599030]
        9. VRAS verifydata command fails without cleaning up the snapshots created [3558199]
        10. SmartIO VxVM cache invalidated after relayout operation (3492350)
        11. VxVM fails to create volume by the vxassist(1M) command with maxsize parameter on Oracle Enterprise Linux 6 Update 5 (OEL6U5) [3736647]
        12. Performance impact when a large number of disks are reconnected (2802698)
        13. Machine fails to boot after root disk encapsulation on servers with UEFI firmware (1842096)
        14. device.map must be up to date before doing root disk encapsulation (2202047)
        15. Veritas Volume Manager (VxVM) might report false serial split brain under certain scenarios (1834513)
        16. VxVM starts before OS device scan is done (1635274)
        17. DMP disables subpaths and initiates failover when an iSCSI link is failed and recovered within 5 seconds (2100039)
        18. During system boot, some VxVM volumes fail to mount (2622979)
        19. Removing an array node from an IBM Storwize V7000 storage system also removes the controller (2816589)
        20. Continuous trespass loop when a CLARiiON LUN is mapped to a different host than its snapshot (2761567)
        21. Disk group import of BCV LUNs using -o updateid and -o useclonedev options is not supported if the disk group has mirrored volumes with DCO or has snapshots (2831658)
        22. After devices that are managed by EMC PowerPath lose access to storage, Veritas Volume Manager commands are delayed (2757198)
        23. vxresize does not work with layered volumes that have multiple plexes at the top level (3301991)
        24. Running the vxdisk disk set clone=off command on imported clone disk group LUNs results in a mix of clone and non-clone disks (3338075)
        25. vxunroot cannot encapsulate a root disk when the root partition has XFS mounted on it (3614362)
        26. Restarting the vxconfigd daemon on the slave node after a disk is removed from all nodes may cause the disk groups to be disabled on the slave node (3591019)
        27. DMP panics if a DDL device discovery is initiated immediately after loss of connectivity to the storage (2040929)
        28. Failback to primary paths does not occur if the node that initiated the failover leaves the cluster (1856723)
        29. Issues if the storage connectivity to data disks is lost on a CVM slave node while vxconfigd was not running on the node (2562889)
        30. The vxcdsconvert utility is supported only on the master node (2616422)
        31. Re-enabling connectivity if the disks are in local failed (lfailed) state (2425977)
        32. Issues with the disk state on the CVM slave node when vxconfigd is restarted on all nodes (2615680)
        33. Plex synchronization is not completed after resuming synchronization on a new master when the original master lost connectivity (2788077)
        34. A master node is not capable of doing recovery if it cannot access the disks belonging to any of the plexes of a volume (2764153)
        35. CVM fails to start if the first node joining the cluster has no connectivity to the storage (2787713)
        36. CVMVolDg agent may fail to deport CVM disk group when CVMDeportOnOffline is set to 1
        37. cvm_clus resource goes into faulted state after the resource is manually panicked and rebooted in a 32-node cluster (2278894)
        38. DMP uses OS device physical path to maintain persistence of path attributes from 6.0 [3761441]
        39. The vxsnap print command shows incorrect value for percentage dirty [2360780]
        40. Systems may panic after GPT disk resize operation (3930664)
        41. If an LVM volume group has a mirror volume, the conversion operation to VxVM fails (3930536)
        42. If recovery of columns on EC volumes fails, recovery of columns on the other volumes also fails (3930435)
      3. Virtualization known issues
        1. Configuring an application for high availability with storage using the VCS wizard may fail on a VMware virtual machine that is configured with more than two storage controllers [3640956]
        2. Host fails to reboot when the resource gets stuck in ONLINE|STATE UNKNOWN state [2738864]
        3. VM state is in PAUSED state when storage domain is inactive [2747163]
        4. Switching KVMGuest resource fails due to inadequate swap space on the other host [2753936]
        5. Policies introduced in SLES 11 SP2 may block graceful shutdown of a VM in SUSE KVM environment [2792889]
        6. Load on libvirtd may terminate it in SUSE KVM environment [2824952]
        7. Offline or switch of KVMGuest resource fails if the VM it is monitoring is undefined [2796817]
        8. Increased memory usage observed even with no VM running [2734970]
        9. Resource faults when it fails to ONLINE VM because of insufficient swap percentage [2827214]
        10. Migration of guest VM on native LVM volume may cause libvirtd process to terminate abruptly (2582716)
        11. Virtual machine may return the not-responding state when the storage domain is inactive and the data center is down (2848003)
        12. Guest virtual machine may fail on RHEL 6.1 if KVM guest image resides on CVM-CFS [2659944]
        13. System panics after starting KVM virtualized guest or initiating KVMGuest resource online [2337626]
        14. CD ROM with empty file vmPayload found inside the guest when resource comes online [3060910]
        15. VCS fails to start virtual machine on another node if the first node panics [3042806]
        16. VM fails to start on the target node if the source node panics or restarts during migration [3042786]
        17. High Availability tab does not report LVMVolumeGroup resources as online [2909417]
        18. Cluster communication breaks when you revert a snapshot in VMware environment [3409586]
        19. VCS may detect the migration event during the regular monitor cycle due to a timing issue [2827227]
      4. Veritas File System known issues
        1. Cfsmount test fails with error logs indicating an inaccessible block device path for the file system (3873325)
        2. FSMount fails to mount a file system with or without SmartIO options (3870190)
        3. Docker does not recognize VxFS backend file system
        4. On RHEL 7 onwards, Pluggable Authentication Modules (PAM) related error messages for Samba daemon might occur in system logs [3765921]
        5. Delayed allocation may be turned off automatically when one of the volumes in a multi-volume file system nears 100% (2438368)
        6. The file system deduplication operation fails with the error message "DEDUP_ERROR Error renaming X checkpoint to Y checkpoint on filesystem Z error 16" (3348534)
        7. After upgrading a file system using the vxupgrade(1M) command, the sfcache(1M) command with the stat option shows garbage value on the secondary node [3759788]
        8. XFS file system is not supported for RDE
        9. The command tab auto-complete fails for the /dev/vx/ file tree, specifically for RHEL 7 (3602082)
        10. Task blocked messages display in the console for RHEL5 and RHEL6 (2560357)
        11. Deduplication can fail with error 110 (3741016)
        12. System unable to select ext4 from the file system (2691654)
        13. The system panics with the panic string "kernel BUG at fs/dcache.c:670!" (3323152)
        14. A restored volume snapshot may be inconsistent with the data in the SmartIO VxFS cache (3760219)
        15. When in-place and relocate compression rules are in the same policy file, file relocation is unpredictable (3760242)
        16. During a deduplication operation, the spoold script fails to start (3196423)
        17. The file system may hang when it has compression enabled (3331276)
        18. "rpc.statd" in the "nfs-utils" RPM in the various Linux distributions does not properly cleanse the untrusted format strings (3335691)
    4. Replication known issues
      1. After the product upgrade on the secondary site, replication may fail to resume with "Secondary SRL missing" error [3931763]
      2. vradmin repstatus command reports secondary host as "unreachable" (3896588)
      3. RVGPrimary agent operation to start replication between the original Primary and the bunker fails during failback (2036605)
      4. A snapshot volume created on the Secondary, containing a VxFS file system may not mount in read-write mode and performing a read-write mount of the VxFS file systems on the new Primary after a global clustering site failover may fail [3761497]
      5. In an IPv6-only environment, RVG, data volume, or SRL names cannot contain a colon (1672410, 1672417, 1825031)
      6. vxassist relayout removes the DCM (145413)
      7. vradmin functionality may not work after a master switch operation [2158679]
      8. Cannot relayout data volumes in an RVG from concat to striped-mirror (2129601)
      9. vradmin verifydata operation fails when replicating between versions 5.1 and 6.0 or later (2360713)
      10. vradmin verifydata may report differences in a cross-endian environment (2834424)
      11. vradmin verifydata operation fails if the RVG contains a volume set (2808902)
      12. Plex reattach operation fails with unexpected kernel error in configuration update (2791241)
      13. Bunker replay does not occur with volume sets (3329970)
      14. SmartIO does not support write-back caching mode for volumes configured for replication by Volume Replicator (3313920)
      15. During moderate to heavy I/O, the vradmin verifydata command may falsely report differences in data (3270067)
      16. The vradmin repstatus command does not show that the SmartSync feature is running [3343141]
      17. While vradmin commands are running, vradmind may temporarily lose heartbeats (3347656, 3724338)
      18. Write I/Os on the primary logowner may take a long time to complete (2622536)
      19. DCM logs on a disassociated layered data volume results in configuration changes or CVM node reconfiguration issues (3582509)
      20. After performing a CVM master switch on the secondary node, both rlinks detach (3642855)
      21. vradmin -g dg repstatus rvg displays the following configuration error: vradmind not reachable on cluster peer (3648854)
      22. The RVGPrimary agent may fail to bring the application service group online on the new Primary site because of a previous primary-elect operation not being run or not completing successfully (3761555, 2043831)
      23. A snapshot volume created on the Secondary, containing a VxFS file system may not mount in read-write mode and performing a read-write mount of the VxFS file systems on the new Primary after a global clustering site failover may fail (1558257)
      24. DCM plex becomes inaccessible and goes into DISABLED (SPARSE) state in case of node failure (3931775)
    5. Cluster Server known issues
      1. Operational issues for VCS
        1. LVM SG transition fails in all paths disabled status [2081430]
        2. SG goes into Partial state if Native LVMVG is imported and activated outside VCS control
        3. Switching a service group with a DiskGroup resource causes a reservation conflict with UseFence set to SCSI3 in a PowerPath environment [2749136]
        4. Stale NFS file handle on the client across failover of a VCS service group containing LVMLogicalVolume resource (2016627)
        5. NFS cluster I/O fails when storage is disabled [2555662]
        6. VVR configuration may go into a primary-primary configuration when the primary node crashes and restarts [3314749]
        7. CP server does not allow adding and removing HTTPS virtual IP or ports when it is running [3322154]
        8. CP server does not support IPv6 communication with HTTPS protocol [3209475]
        9. VCS fails to stop volume due to a transaction ID mismatch error [3292840]
        10. Some VCS components do not work on the systems where a firewall is configured to block TCP traffic [3545338]
      2. Issues related to the VCS engine
        1. Invalid argument message in the message log due to Red Hat Linux bug (3872083)
        2. Extremely high CPU utilization may cause HAD to fail to heartbeat to GAB [1744854]
        3. The hacf -cmdtocf command generates a broken main.cf file [1919951]
        4. Trigger does not get executed when there is more than one leading or trailing slash in the triggerpath [2368061]
        5. Service group is not auto started on the node having incorrect value of EngineRestarted [2653688]
        6. Group is not brought online if top level resource is disabled [2486476]
        7. NFS resource goes offline unexpectedly and reports errors when restarted [2490331]
        8. Parent group does not come online on a node where child group is online [2489053]
        9. Cannot modify temp attribute when VCS is in LEAVING state [2407850]
        10. Service group may fail to come online after a flush and a force flush operation [2616779]
        11. Elevated TargetCount prevents the online of a service group with hagrp -online -sys command [2871892]
        12. Auto failover does not happen in case of two successive primary and secondary cluster failures [2858187]
        13. GCO clusters remain in INIT state [2848006]
        14. The ha commands may fail for non-root user if cluster is secure [2847998]
        15. Running -delete -keys for any scalar attribute causes core dump [3065357]
        16. Veritas InfoScale enters into admin_wait state when Cluster Statistics is enabled with load and capacity defined [3199210]
        17. Agent reports incorrect state if VCS is not set to start automatically and utmp file is empty before VCS is started [3326504]
        18. VCS crashes if feature tracking file is corrupt [3603291]
        19. RemoteGroup agent and non-root users may fail to authenticate after a secure upgrade [3649457]
        20. Global Cluster Option (GCO) requires NIC names in a specific format [3641586]
        21. If you disable security before upgrading VCS to version 7.0.1 or later on secured clusters, the security certificates will not be upgraded to 2048-bit SHA2 [3812313]
        22. Clusters with VCS versions earlier than 6.0.5 cannot form cross-cluster communication (like GCO, STEWARD) with clusters installed with SHA256 signature certificates [3812313]
        23. Java console and CLI do not allow adding VCS user names starting with the '_' character (3870470)
      3. Issues related to the bundled agents
        1. KVMGuest resource fails to work on VCS agent for RHEV 3.5 (3873800)
        2. LVM Logical Volume will be auto activated during I/O path failure [2140342]
        3. KVMGuest monitor entry point reports resource ONLINE even for corrupted guest or with no OS installed inside guest [2394235]
        4. Concurrency violation observed during migration of monitored virtual machine [2755936]
        5. LVM logical volume may get stuck with reiserfs file system on SLES11 [2120133]
        6. KVMGuest resource comes online on failover target node when started manually [2394048]
        7. IMF registration fails for Mount resource if the configured MountPoint path contains spaces [2442598]
        8. DiskGroup agent is unable to offline the resource if volume is unmounted outside VCS
        9. RemoteGroup agent does not failover in case of network cable pull [2588807]
        10. VVR setup with FireDrill in CVM environment may fail with CFSMount errors [2564411]
        11. CoordPoint agent remains in faulted state [2852872]
        12. RVGsnapshot agent does not work with volume sets created using vxvset [2553505]
        13. No log messages in engine_A.log if VCS does not find the Monitor program [2563080]
        14. KVMGuest agent fails to recognize paused state of the VM causing KVMGuest resource to fault [2796538]
        15. Concurrency violation observed when host is moved to maintenance mode [2735283]
        16. Logical volume resources fail to detect connectivity loss with storage when all paths are disabled in KVM guest [2871891]
        17. Resource does not appear ONLINE immediately after VM appears online after a restart [2735917]
        18. NFS resource faults on the node enabled with SELinux and where rpc.statd process may terminate when access is denied to the PID file [3248903]
        19. On cluster nodes that run RHEL 7.4, the NFS agent fails to come online or go offline when NFSSecurity is enabled (3932288)
        20. Unexpected behavior in VCS observed while taking the disk online [3123872]
        21. LVMLogicalVolume agent clean entry point fails to stop logical volume if storage connectivity is lost [3118820]
        22. VM goes into paused state if the source node loses storage connectivity during migration [3085214]
        23. Virtual machine goes to paused state during migration if the public network cable is pulled on the destination node [3080930]
        24. NFS client reports I/O error because of network split brain [3257399]
        25. Manual configuration of RHEVMInfo attribute of KVMGuest agent requires all its keys to be configured [3277994]
        26. SambaServer agent may generate core on Linux if LockDir attribute is changed to empty value while agent is running [3339231]
        27. Independent Persistent disk setting is not preserved during failover of virtual disks in VMware environment [3338702]
        28. LVMLogicalVolume resource goes in UNABLE TO OFFLINE state if native LVM volume group is exported outside VCS control [3606516]
        29. DiskGroup resource online may take time if it is configured along with VMwareDisks resource [3638242]
        30. SFCache agent fails to enable caching if cache area is offline [3644424]
        31. RemoteGroup agent may stop working on upgrading the remote cluster in secure mode [3648886]
        32. VMwareDisks agent may fail to start or storage discovery may fail if SELinux is running in enforcing mode [3106376]
      4. Issues related to the VCS database agents
        1. Unsupported startup options with systemD enabled [3901204]
        2. ASMDG agent does not go offline if the management DB is running on the same node (3856460)
        3. ASMDG on a particular node does not go offline if its instances are being used by other database instances (3856450)
        4. Sometimes ASMDG reports as offline instead of faulted (3856454)
        5. The ASMInstAgent does not support having pfile/spfile for the ASM Instance on the ASM diskgroups
        6. VCS agent for ASM: Health check monitoring is not supported for ASMInst agent
        7. NOFAILOVER action specified for certain Oracle errors
        8. Oracle agent fails to offline pluggable database (PDB) resource with PDB in backup mode [3592142]
        9. Clean succeeds for PDB even as PDB status is UNABLE to OFFLINE [3609351]
        10. Second level monitoring fails if user and table names are identical [3594962]
        11. Monitor entry point times out for Oracle PDB resources when CDB is moved to suspended state in Oracle 12.1.0.2 [3643582]
        12. Oracle agent fails to online and monitor Oracle instance if threaded_execution parameter is set to true [3644425]
      5. Issues related to the agent framework
        1. Agent framework cannot handle leading and trailing spaces for the dependent attribute (2027896)
        2. The agent framework does not detect if service threads hang inside an entry point [1442255]
        3. IMF related error messages while bringing a resource online and offline [2553917]
        4. Delayed response to VCS commands observed on nodes with several resources and system has high CPU usage or high swap usage [3208239]
        5. CFSMount agent may fail to heartbeat with VCS engine and logs an error message in the engine log on systems with high memory load [3060779]
        6. Logs from a script executed outside the agent entry point go into the engine logs [3547329]
        7. VCS fails to process the hares -add command if the resource is deleted and subsequently added just after the VCS process or the agent's process starts (3813979)
      6. Cluster Server agents for Volume Replicator known issues
        1. fdsetup cannot correctly parse disk names containing characters such as "-" (1949294)
        2. Stale entries observed in the sample main.cf file for RVGLogowner and RVGPrimary agent [2872047]
      7. Issues related to Intelligent Monitoring Framework (IMF)
        1. Registration error while creating a Firedrill setup [2564350]
        2. IMF does not provide notification for a registered disk group if it is imported using a different name (2730774)
        3. Direct execution of linkamf displays syntax error [2858163]
        4. Error messages displayed during reboot cycles [2847950]
        5. Error message displayed when ProPCV prevents a process from coming ONLINE to prevent concurrency violation does not have I18N support [2848011]
        6. AMF displays StartProgram name multiple times on the console without a VCS error code or logs [2872064]
        7. Core dump observed when amfconfig is run with set and reset commands simultaneously [2871890]
        8. VCS engine shows error for cancellation of reaper when Apache agent is disabled [3043533]
        9. Terminating the imfd daemon orphans the vxnotify process [2728787]
        10. Agent cannot become IMF-aware with agent directory and agent file configured [2858160]
        11. ProPCV fails to prevent a script from running if it is run with relative path [3617014]
      8. Issues related to global clusters
        1. The engine log file receives too many log messages on the secure site in global cluster environments [1919933]
        2. Application group attempts to come online on primary site before fire drill service group goes offline on the secondary site (2107386)
      9. Issues related to the Cluster Manager (Java Console)
        1. Cluster Manager (Java Console) may display an error while loading templates (1433844)
        2. Some Cluster Manager features fail to work in a firewall setup [1392406]
      10. VCS Cluster Configuration wizard issues
        1. VCS Cluster Configuration wizard does not automatically close in Mozilla Firefox [3281450]
        2. Configuration inputs page of VCS Cluster Configuration wizard shows multiple cluster systems for the same virtual machine [3237023]
        3. VCS Cluster Configuration wizard fails to display mount points on native LVM if volume groups are exported [3341937]
        4. IPv6 verification fails while configuring generic application using VCS Cluster Configuration wizard [3614680]
        5. InfoScale Enterprise: Unable to configure clusters through the VCS Cluster Configuration wizard (3911694)
      11. LLT known issues
        1.  
          LLT may fail to detect when bonded NICs come up (2604437)
        2.  
          LLT connections are not formed when a vlan is configured on a NIC (2484856)
        3.  
          LLT port stats sometimes shows recvcnt larger than recvbytes (1907228)
        4.  
          LLT may incorrectly declare port-level connection for nodes in large cluster configurations [1810217]
        5.  
          If you manually re-plumb (change) the IP address on a network interface card (NIC) which is used by LLT, then LLT may experience heartbeat loss and the node may panic (3188950)
        6.  
          A network restart of the network interfaces may cause heartbeat loss for the NIC interfaces used by LLT
        7.  
          When you execute the unit service file (RHEL 7 or SLES 12 onwards) or the LSB file to load the LLT module, the syslog file may record messages related to kernel symbols associated with Infiniband (3136418)
        8.  
          Performance degradation occurs when RDMA connection between nodes is down [3877863]
        9.  
          After configuring LLT over UDP using IPv6, one of the configured links may show DOWN status in the lltstat command output [3916374]
        10.  
          When using FSS over RDMA links during heavy IO, LLT may face link fluctuations [3907179]
        11.  
          The LLT window may drop to a very low value in CVM/FSS or CFS environment [3914954]
      12. I/O fencing known issues
        1.  
          After a node reboots, it fails to join its cluster if the fencing module fails to start (3931526)
        2.  
          One or more nodes in a cluster panic when a node in the cluster is ungracefully shut down or rebooted [3750577]
        3.  
          CP server repetitively logs unavailable IP addresses (2530864)
        4.  
          Fencing port b is visible for few seconds even if cluster nodes have not registered with CP server (2415619)
        5.  
          The cpsadm command fails if LLT is not configured on the application cluster (2583685)
        6.  
          In absence of cluster details in CP server, VxFEN fails with pre-existing split-brain message (2433060)
        7.  
          The vxfenswap utility does not detect failure of coordination points validation due to an RSH limitation (2531561)
        8.  
          Fencing does not come up on one of the nodes after a reboot (2573599)
        9.  
          Hostname and username are case sensitive in CP server (2846392)
        10.  
          Server-based fencing comes up incorrectly if default port is not mentioned (2403453)
        11.  
          Fencing may show the RFSM state as replaying for some nodes in the cluster (2555191)
        12.  
          The vxfenswap utility deletes comment lines from the /etc/vxfenmode file, if you run the utility with hacli option (3318449)
        13.  
          The vxfentsthdw utility may not run on systems installed with partial SFHA stack [3333914]
        14.  
          When a client node goes down, for reasons such as node panic, I/O fencing does not come up on that client node after node restart (3341322)
        15.  
          VCS fails to take virtual machines offline while restarting a physical host in RHEV and KVM environments (3320988)
        16.  
          Fencing may panic the node during shutdown or restart when LLT network interfaces are under Network Manager control [3627749]
        17.  
          The vxfenconfig -l command output does not list Coordinator disks that are removed using the vxdmpadm exclude dmpnodename=<dmp_disk/node> command [3644431]
        18.  
          The CoordPoint agent faults after you detach or reattach one or more coordination disks from a storage array (3317123)
        19.  
          The upper bound value of the FaultTolerance attribute of the CoordPoint agent should be less than the majority of the coordination points (2846389)
    6. Storage Foundation and High Availability known issues
      1.  
        Cache area is lost after a disk failure (3158482)
      2.  
        Installer exits upgrade to 5.1 RP1 with Rolling Upgrade error message (1951825, 1997914)
      3.  
        In an IPv6 environment, db2icrt and db2idrop commands return a segmentation fault error during instance creation and instance removal (1602444)
      4.  
        Process start-up may hang during configuration using the installer (1678116)
      5.  
        Oracle 11gR1 may not work on pure IPv6 environment (1819585)
      6.  
        Not all the objects are visible in the VOM GUI (1821803)
      7.  
        An error message is received when you perform off-host clone for RAC and the off-host node is not part of the CVM cluster (1834860)
      8.  
        A volume's placement class tags are not visible in the Veritas Enterprise Administrator GUI when creating a dynamic storage tiering placement policy (1880081)
    7. Storage Foundation Cluster File System High Availability known issues
      1.  
        In an FSS environment, creation of mirrored volumes may fail for SSD media [3932494]
      2.  
        Mount command may fail to mount the file system (3913246)
      3.  
          After the local node restarts or panics, the FSS service group cannot come online successfully on the local node and the remote node when the local node is up again (3865289)
      4.  
        In the FSS environment, if DG goes to the dgdisable state and deep volume monitoring is disabled, successive node joins fail with error 'Slave failed to create remote disk: retry to add a node failed' (3874730)
      5.  
        DG creation fails with error "V-5-1-585 Disk group punedatadg: cannot create: SCSI-3 PR operation failed" on the VSCSI disks (3875044)
      6.  
        Write back cache is not supported on the cluster in FSS scenario [3723701]
      7.  
          The CVMVOLDg agent does not go into the FAULTED state [3771283]
      8.  
        On CFS, SmartIO is caching writes although the cache appears as nocache on one node (3760253)
      9.  
          Unmounting the checkpoint using cfsumount(1M) may fail if SELinux is in enforcing mode [3766074]
      10.  
        tail -f run on a cluster file system file only works correctly on the local node [3741020]
      11.  
        In SFCFS on Linux, stack may overflow when the system creates ODM file [3758102]
      12.  
        CFS commands might hang when run by non-root (3038283)
      13.  
        The fsappadm subfilemove command moves all extents of a file (3258678)
      14.  
        Certain I/O errors during clone deletion may lead to system panic. (3331273)
      15.  
        Panic due to null pointer de-reference in vx_bmap_lookup() (3038285)
      16.  
          In a CFS cluster that has a small multi-volume file system, the fsadm operation may hang (3348520)
    8. Storage Foundation for Oracle RAC known issues
      1. Oracle RAC known issues
        1.  
          Oracle Grid Infrastructure installation may fail with internal driver error
        2.  
          During installation or system startup, Oracle Grid Infrastructure may fail to start
      2. Storage Foundation Oracle RAC issues
        1.  
          CSSD configuration fails if OCR and voting disk volumes are located on Oracle ASM (3914497)
        2.  
          When you upgrade to SF Oracle RAC 7.1, VxFS may fail to stop (3872605)
        3.  
          ASM disk groups configured with normal or high redundancy are dismounted if the CVM master panics due to network failure in FSS environment or if CVM I/O shipping is enabled (3600155)
        4.  
          PrivNIC and MultiPrivNIC agents not supported with Oracle RAC 11.2.0.2 and later versions
        5.  
          CSSD agent forcibly stops Oracle Clusterware if Oracle Clusterware fails to respond (3352269)
        6.  
          Intelligent Monitoring Framework (IMF) entry point may fail when IMF detects resource state transition from online to offline for CSSD resource type (3287719)
        7.  
          Node fails to join the SF Oracle RAC cluster if the file system containing Oracle Clusterware is not mounted (2611055)
        8.  
          The vxconfigd daemon fails to start after machine reboot (3566713)
        9.  
          Health check monitoring fails with policy-managed databases (3609349)
        10.  
          Issue with format of the last 8-bit number in private IP addresses (1164506)
        11.  
          CVMVolDg agent may fail to deport CVM disk group
        12.  
          Rolling upgrade not supported for upgrades from SF Oracle RAC 5.1 SP1 with fencing configured in dmpmode.
        13.  
          "Configuration must be ReadWrite : Use haconf -makerw" error message appears in VCS engine log when hastop -local is invoked (2609137)
        14.  
          Veritas Volume Manager cannot identify Oracle Automatic Storage Management (ASM) disks (2771637)
        15.  
          vxdisk resize from slave nodes fails with "Command is not supported for command shipping" error (3140314)
        16.  
          CVR configurations are not supported for Flexible Storage Sharing (3155726)
        17.  
          CVM requires the T10 vendor provided ID to be unique (3191807)
        18.  
          An SG_IO ioctl hang causes disk group creation, CVM node joins, storage connects/disconnects, and vxconfigd to hang in the kernel (3193119)
        19.  
          vxdg adddisk operation fails when adding nodes containing disks with the same name (3301085)
        20.  
          FSS Disk group creation with 510 exported disks from master fails with Transaction locks timed out error (3311250)
        21.  
          vxconfigrestore is unable to restore FSS cache objects in the pre-commit stage (3461928)
        22.  
          Change in naming scheme is not reflected on nodes in an FSS environment (3589272)
        23.  
          Intel SSD cannot be initialized and exported (3584762)
        24.  
          VxVM may report false serial split brain under certain FSS scenarios (3565845)
    9. Storage Foundation for Databases (SFDB) tools known issues
      1.  
        Clone operations fail for instant mode snapshot (3916053)
      2.  
        Sometimes SFDB may report the following error message: SFDB remote or privileged command error (2869262)
      3.  
        SFDB commands do not work in IPV6 environment (2619958)
      4.  
        When you attempt to move all the extents of a table, the dbdst_obj_move(1M) command fails with an error (3260289)
      5.  
        Attempt to use SmartTier commands fails (2332973)
      6.  
        Attempt to use certain names for tiers results in error (2581390)
      7.  
        Clone operation failure might leave clone database in unexpected state (2512664)
      8.  
        Clone command fails if PFILE entries have their values spread across multiple lines (2844247)
      9.  
        Clone command errors in a Data Guard environment using the MEMORY_TARGET feature for Oracle 11g (1824713)
      10.  
        Clone fails with error "ORA-01513: invalid current time returned by operating system" with Oracle 11.2.0.3 (2804452)
      11.  
        Data population fails after datafile corruption, rollback, and restore of offline checkpoint (2869259)
      12.  
        Flashsnap clone fails under some unusual archivelog configuration on RAC (2846399)
      13.  
        In the cloned database, the seed PDB remains in the mounted state (3599920)
      14.  
        Cloning of a container database may fail after a reverse resync commit operation is performed (3509778)
      15.  
        If one of the PDBs is in the read-write restricted state, then cloning of a CDB fails (3516634)
      16.  
        Cloning of a CDB fails for point-in-time copies when one of the PDBs is in the read-only mode (3513432)
      17.  
        If a CDB has a tablespace in the read-only mode, then the cloning fails (3512370)
      18.  
        If any SFDB installation with authentication setup is upgraded to 7.3.1, the commands fail with an error (3644030)
      19.  
        Error message displayed when you use the vxsfadm -a oracle -s filesnap -o destroyclone command (3901533)
    10. Storage Foundation for Sybase ASE CE known issues
      1.  
        Sybase Agent Monitor times out (1592996)
      2.  
        Installer warning (1515503)
      3.  
        Unexpected node reboot while probing a Sybase resource in transition (1593605)
      4.  
        Unexpected node reboot when invalid attribute is given (2567507)
      5.  
        "Configuration must be ReadWrite : Use haconf -makerw" error message appears in VCS engine log when hastop -local is invoked (2609137)
    11. Application isolation feature known issues
      1.  
        Addition of an Oracle instance using the Oracle GUI (dbca) does not work with the Application Isolation feature enabled
      2.  
        Auto reattach of detached plexes may not happen for FSS disk groups when auto-mapping feature is used (3902004)
      3.  
        CPI is not supported for configuring the application isolation feature (3902023)
      4.  
        Thin reclamation does not happen for remote disks if the storage node or the disk owner does not have the file system mounted on it (3902009)
    12. Cloud deployment known issues
      1.  
        In an Azure environment, the systems under InfoScale control may panic due to CPU soft lockup [3929534]
      2.  
        In an Azure environment, an InfoScale cluster node may panic if any of the nodes is rebooted using the Azure portal [3930926]
      3.  
        If you disable a public IP from the Azure portal, the corresponding AzureIP resource goes into UNKNOWN state [3928222]
  6. Software Limitations
    1. Virtualization software limitations
      1.  
        Paths cannot be enabled inside a KVM guest if the devices have been previously removed from and re-attached to the host
      2.  
        Application component fails to come online [3489464]
    2. Storage Foundation software limitations
      1. Dynamic Multi-Pathing software limitations
        1.  
          DMP settings for NetApp storage attached environment
        2.  
          LVM volume group in unusable state if last path is excluded from DMP (1976620)
      2. Veritas Volume Manager software limitations
        1.  
          Snapshot configuration with volumes in shared disk groups and private disk groups is not supported (2801037)
        2.  
          SmartSync is not supported for Oracle databases running on raw VxVM volumes
        3.  
          Veritas InfoScale does not support thin reclamation of space on a linked mirror volume (2729563)
        4.  
          Cloned disk operations are not supported for FSS disk groups
        5.  
          Thin reclamation requests are not redirected even when the ioship policy is enabled (2755982)
        6.  
          Veritas Operations Manager does not support disk, disk group, and volume state information related to CVM I/O shipping feature (2781126)
      3. Veritas File System software limitations
        1.  
          Limitations while managing Docker containers
        2.  
          Linux I/O Scheduler for Database Workloads
        3.  
          Recommended limit of number of files in a directory
        4.  
          The vxlist command cannot correctly display numbers greater than or equal to 1 EB
        5.  
          Limitations with delayed allocation for extending writes feature
        6.  
          FlashBackup feature of NetBackup 7.5 (or earlier) does not support disk layout Version 8, 9, or 10
        7.  
          Compressed files that are backed up using NetBackup 7.1 or prior become uncompressed when you restore the files
        8.  
          On SUSE, creation of a SmartIO cache of VxFS type hangs on Fusion-io device (3200586)
        9.  
          A NetBackup restore operation on VxFS file systems does not work with SmartIO writeback caching
        10.  
          VxFS file system writeback operation is not supported with volume level replication or array level replication
      4. SmartIO software limitations
        1.  
          Cache is not online after a reboot
        2.  
          Writeback caching limitations
        3.  
          The sfcache operations may display error messages in the caching log even when the operation completes successfully (3611158)
    3. Replication software limitations
      1.  
        Softlink access and modification times are not replicated on RHEL5 for VFR jobs
      2.  
        VVR Replication in a shared environment
      3.  
        VVR IPv6 software limitations
      4.  
        VVR support for replicating across Storage Foundation versions
    4. Cluster Server software limitations
      1. Limitations related to bundled agents
        1.  
          Programs using networked services may stop responding if the host is disconnected
        2.  
          Volume agent clean may forcibly stop volume resources
        3.  
          False concurrency violation when using PidFiles to monitor application resources
        4.  
          Mount agent limitations
        5.  
          Share agent limitations
        6.  
          Volumes in a disk group start automatically irrespective of the value of the StartVolumes attribute in VCS [2162929]
        7.  
          Application agent limitations
        8.  
          Campus cluster fire drill does not work when DSM sites are used to mark site boundaries [3073907]
        9.  
          Mount agent reports resource state as OFFLINE if the configured mount point does not exist [3435266]
        10.  
          Limitation of VMwareDisks agent to communicate with the vCenter Server [3528649]
        11.  
          NFSRestart agent: In NFSv3, lock recovery is not supported with multiple NFS share service groups
      2. Limitations related to VCS engine
        1.  
          Loads fail to consolidate and optimize when multiple groups fault [3074299]
        2.  
          Preferred fencing ignores the forecasted available capacity [3077242]
        3.  
          Failover occurs within the SystemZone or site when BiggestAvailable policy is set [3083757]
        4.  
          Load for Priority groups is ignored in groups with BiggestAvailable and Priority in the same group [3074314]
      3. Veritas cluster configuration wizard limitations
        1.  
          Wizard fails to configure VCS resources if storage resources have the same name [3024460]
        2.  
          Environment variable used to change log directory cannot redefine the log path of the wizard [3609791]
      4.  
        Limitations related to IMF
      5. Limitations related to the VCS database agents
        1.  
          DB2 RestartLimit value [1234959]
        2.  
          Sybase agent does not perform qrmutil based checks if Quorum_dev is not set (2724848)
        3.  
          Pluggable database (PDB) online may timeout when started after container database (CDB) [3549506]
      6.  
        Security-Enhanced Linux is not supported on SLES distributions
      7.  
        Systems in a cluster must have same system locale setting
      8.  
        VxVM site for the disk group remains detached after node reboot in campus clusters with fire drill [1919317]
      9.  
        Limitations with DiskGroupSnap agent [1919329]
      10.  
        System reboot after panic
      11.  
        Host on RHEV-M and actual host must match [2827219]
      12. Cluster Manager (Java console) limitations
        1.  
          Cluster Manager does not work if the hosts file contains IPv6 entries
        2.  
          VCS Simulator does not support I/O fencing
        3.  
          Using the KDE desktop
      13. Limitations related to LLT
        1.  
          Limitation of LLT support over UDP or RDMA using alias IP [3622175]
      14. Limitations related to I/O fencing
        1.  
          Preferred fencing limitation when VxFEN activates RACER node re-election
        2.  
          Stopping systems in clusters with I/O fencing configured
        3.  
          Uninstalling VRTSvxvm causes issues when VxFEN is configured in SCSI3 mode with dmp disk policy (2522069)
        4.  
          Node may panic if HAD process is stopped by force and then node is shut down or restarted [3640007]
      15.  
        Limitations related to global clusters
      16.  
        Clusters must run on VCS 6.0.5 and later to be able to communicate after upgrading to 2048 bit key and SHA256 signature certificates [3812313]
    5. Storage Foundation Cluster File System High Availability software limitations
      1.  
        cfsmntadm command does not verify the mount options (2078634)
      2.  
        Obtaining information about mounted file system states (1764098)
      3.  
        Stale SCSI-3 PR keys remain on disk after stopping the cluster and deporting the disk group
      4.  
        Unsupported FSS scenarios
    6. Storage Foundation for Oracle RAC software limitations
      1.  
        Supportability constraints for normal or high redundancy ASM disk groups with CVM I/O shipping and FSS (3600155)
      2.  
        Limitations of CSSD agent
      3.  
        Oracle Clusterware/Grid Infrastructure installation fails if the cluster name exceeds 14 characters
      4.  
        SELinux supported in disabled and permissive modes only
      5.  
        Policy-managed databases not supported by CRSResource agent
      6.  
        Health checks may fail on clusters that have more than 10 nodes
      7.  
        Cached ODM not supported in Veritas InfoScale environments
    7. Storage Foundation for Databases (SFDB) tools software limitations
      1.  
        Parallel execution of vxsfadm is not supported (2515442)
      2.  
        Creating point-in-time copies during database structural changes is not supported (2496178)
      3.  
        Oracle Data Guard in an Oracle RAC environment
    8. Storage Foundation for Sybase ASE CE software limitations
      1.  
        Only one Sybase instance is supported per node
      2.  
        SF Sybase CE is not supported in the Campus cluster environment
      3.  
        Hardware-based replication technologies are not supported for replication in the SF Sybase CE environment

Re-enabling connectivity if the disks are in local failed (lfailed) state (2425977)

In a Cluster Volume Manager (CVM) cluster, you can disable connectivity to the disks at the controller or enclosure level with the vxdmpadm disable command. In this case, CVM may place the disks into the lfailed state. When you restore connectivity with the vxdmpadm enable command, CVM may not automatically clear the lfailed state. After enabling the controller or enclosure, you must run disk discovery to clear the locally failed state.

To run disk discovery

  • Run the following command:
    # vxdisk scandisks
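
For example, if connectivity was disabled at the controller level, the complete recovery sequence might look like the following. The controller name c2 is a hypothetical placeholder; substitute the controller or enclosure name reported by the vxdmpadm listctlr all command in your environment.

  • Re-enable the controller, run disk discovery, and verify the disk states:
    # vxdmpadm enable ctlr=c2
    # vxdisk scandisks
    # vxdisk list

The vxdisk list output lets you confirm that the disks no longer show the locally failed (lfailed) state.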