IS 8.0.2 Update 6 on RHEL8 Platform

Cumulative Patch

Abstract

InfoScale 8.0.2 Update 6 on RHEL8 Platform

Description

This is a cumulative patch release for InfoScale 8.0.2 on the RHEL8 platform.

 

SORT ID: 22328

 

Patch Name:

InfoScale 8.0.2 Patch 3100
(RHEL8 Support on IS 8.0.2)

 

Supported Platforms:

RHEL8.8, RHEL8.10

 

Patch IDs:

VRTSamf-8.0.2.1600-0789_RHEL8 for VRTSamf
VRTSaslapm-8.0.2.2600-0292_RHEL8 for VRTSaslapm
VRTScavf-8.0.2.2700-0339_RHEL8 for VRTScavf
VRTScps-8.0.2.2300-0935_RHEL8 for VRTScps
VRTSdbac-8.0.2.1400-0066_RHEL8 for VRTSdbac
VRTSdbed-8.0.2.1400-0050_RHEL for VRTSdbed
VRTSfsadv-8.0.2.2500-0313_RHEL8 for VRTSfsadv
VRTSgab-8.0.2.1600-0789_RHEL8 for VRTSgab
VRTSglm-8.0.2.1900-0171_RHEL8 for VRTSglm
VRTSgms-8.0.2.1900-0171_RHEL8 for VRTSgms
VRTSllt-8.0.2.2300-0933_RHEL8 for VRTSllt
VRTSodm-8.0.2.2700-0343_RHEL8 for VRTSodm
VRTSpython-3.9.16.7-RHEL8 for VRTSpython
VRTSrest-3.0.10-linux for VRTSrest
VRTSsfcpi-8.0.2.1500-GENERIC for VRTSsfcpi
VRTSsfmh-8.0.2.551-0259_Linux for VRTSsfmh
VRTSspt-8.0.2.1300-0027_RHEL8 for VRTSspt
VRTSvbs-8.0.2.1200-0032-RHEL8.x86_64.rpm for VRTSvbs
VRTSvcs-8.0.2.2300-0933_RHEL8 for VRTSvcs
VRTSvcsag-8.0.2.2300-0933_RHEL8 for VRTSvcsag
VRTSvcsea-8.0.2.2300-0933_RHEL8 for VRTSvcsea
VRTSveki-8.0.2.1600-0169_RHEL8 for VRTSveki
VRTSvlic-4.01.802.002-RHEL8 for VRTSvlic
VRTSvxfen-8.0.2.2300-0933_RHEL8 for VRTSvxfen
VRTSvxfs-8.0.2.2700-0343_RHEL8 for VRTSvxfs
VRTSvxvm-8.0.2.2600-0292_RHEL8 for VRTSvxvm 

 

Prerequisites:
You must be at least at InfoScale 8.0.2 GA to install this update.
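The version floor above can be checked with a simple version comparison. The sketch below stubs the installed version string for illustration; on a live node it could be derived from an rpm query such as `rpm -q --qf '%{VERSION}' VRTSvxvm` (the choice of VRTSvxvm as the package to query is an assumption, not something this notice prescribes):

```shell
# Hedged sketch: verify the node is at least at the 8.0.2 GA baseline.
# "installed" is stubbed here; on a real node it might come from
# `rpm -q --qf '%{VERSION}' VRTSvxvm` (illustrative assumption).
baseline="8.0.2"
installed="8.0.2.1200"

# sort -V -C exits 0 when the two lines are already in version order,
# i.e. when baseline <= installed.
if printf '%s\n%s\n' "$baseline" "$installed" | sort -V -C; then
    echo "OK: at or above InfoScale ${baseline} GA"
else
    echo "Below ${baseline} GA: install the base release first"
fi
```

The same comparison works for any dotted version string, since `sort -V` understands numeric version components.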

 

SPECIAL NOTES:

1. Ensure that the hotfixes VRTSvxfs-8.0.2.2701 and VRTSodm-8.0.2.2701 are installed in conjunction with this patch.

2. If internet access is not available, install this patch together with the latest CPI patch downloaded from the Download Center.
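Note 1 above can be verified after installation with a quick package-list check. The list below is stubbed for illustration; on a real node it would typically come from `rpm -qa 'VRTS*'` (an assumption about the verification method, not an instruction from this notice):

```shell
# Hedged sketch: confirm the companion hotfixes from the SPECIAL NOTES
# are present. "installed" is stubbed; on a real node it might come from
# `rpm -qa 'VRTS*'` (illustrative assumption).
installed="VRTSvxfs-8.0.2.2701
VRTSodm-8.0.2.2701
VRTSvxvm-8.0.2.2600"

missing=0
for hf in VRTSvxfs-8.0.2.2701 VRTSodm-8.0.2.2701; do
    if printf '%s\n' "$installed" | grep -q "^${hf}"; then
        echo "found: ${hf}"
    else
        echo "MISSING: ${hf}"
        missing=1
    fi
done

[ "$missing" -eq 0 ] && echo "hotfix prerequisites satisfied"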

                          * * * READ ME * * *
                      * * * InfoScale 8.0.2 * * *
                         * * * Patch 3100 * * *
                         Patch Date: 2025-07-02


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 8.0.2 Patch 3100


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL8 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTScavf
VRTScps
VRTSdbac
VRTSdbed
VRTSfsadv
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSpython
VRTSrest
VRTSsfcpi
VRTSsfmh
VRTSspt
VRTSvbs
VRTSvcs
VRTSvcsag
VRTSvcsea
VRTSveki
VRTSvlic
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 8.0.2
   * InfoScale Enterprise 8.0.2
   * InfoScale Foundation 8.0.2
   * InfoScale Storage 8.0.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxvm-8.0.2.2600
* 4188358 (4188399) Adjust the default values of tunables
* 4189232 (4189556) VxVM support on RHEL9.6
* 4189350 (4188549) vxconfigd died due to a floating point exception.
* 4189351 (4188560) Volume Manager Encryption Service repeatedly dies
* 4189564 (4189567) Panic seen in VVRCert due to incorrect value of blksize
* 4189695 (4188763) Stale and incorrect symbolic links to VxDMP devices in "/dev/disk/by-uuid".
* 4189698 (4189447) VxVM (Veritas Volume Manager) creates some required files under the /tmp and
/var/tmp directories. These directories can be modified by non-root users, which
may affect Veritas Volume Manager functioning.
* 4189751 (4189428) Security vulnerabilities exist in third-party components [curl, libxml].
* 4189773 (4189301) Frequent IPM handle purging causes VVR SG to switch over
Patch ID: VRTSaslapm 8.0.2.2600
* 4189576 (4185193) UDID mismatch error occurs when using VxVM/ASL 7.4.2.5300 with RHEL 8 and NVMe devices.
* 4189696 (4188831) Adding support for Hitachi VSPOne array.
* 4189772 (4189561) Added support for Netapp ASA r2 array
Patch ID: VRTSvxvm-8.0.2.2400
* 4189251 (4189428) Security vulnerabilities exist in third-party components [curl, libxml].
Patch ID: VRTSvxvm-8.0.2.2300
* 4184100 (4183777) System log is flooding with the fake alarms "VxVM vxio V-5-0-0 read/write on disk: xxx took longer to complete".
* 4184316 (4177113) The vxdisk() function in the dmpcert overload_fun.sh script was found to be going into an infinite loop when executed.
* 4184318 (4155324) If any one of the systemd services fails during installation, the modules remain loaded in the system, hampering subsequent installation retries.
* 4187579 (4187459) Plex attach operations are taking an excessive amount of time to sync when Azure 4K Native disks are configured.
Patch ID: VRTSaslapm 8.0.2.2300
* 4188105 (4188104) dummy incident for archival.
Patch ID: VRTSvxvm-8.0.2.2100
* 4124889 (4090828) Enhancement to track plex att/recovery data synced in past to debug corruption
* 4138279 (4116496) System panic at dmp_process_errbp+47.
* 4165431 (4160809) [Cosmote][NBFS][media-only]CVM Failed to join after reboot operation from GUI
* 4175713 (4175712) Security vulnerabilities exist in third-party components [curl, libxml and openssl].
* 4178207 (4118809) System panic at dmp_process_errbp.
* 4178260 (4175390) When adding mirror plexes to a volume, the plexes may get stuck in the TEMPRMSD state.
* 4179072 (4178449) vxconfigd thread stack corrupted for segfault when writing to translog.
* 4179370 (4179002) VxFS got corrupted after dynamic LUN expansion on rhel9.
* 4179818 (4178920) "vxdmp V-5-0-0 failed to get request for devno for IO offset" continuously appears in the system log.
Patch ID: VRTSaslapm 8.0.2.2100
* 4184813 (4184814) Dummy incident for readme
Patch ID: VRTSvxvm-8.0.2.1700
* 4128883 (4112687) DLE (Dynamic Lun Expansion) of single path GPT disk may corrupt disk public region.
* 4137508 (4066310) Added BLK-MQ feature for DMP driver
* 4137995 (4117350) Import operation on disk group created on Hitachi ShadowImage (SI) disks is failing.
* 4143558 (4141890) TUTIL0 field may not get cleared sometimes after cluster reboot.
* 4153377 (4152445) Replication failed to start due to vxnetd threads not running
* 4153566 (4090410) VVR secondary node panics during replication.
* 4153570 (4134305) Collecting ilock stats for admin SIO causes buffer overrun.
* 4153597 (4146424) CVM Failed to join after power off and Power on from ILO
* 4153874 (4010288) [Cosmote][NBFS]ECV:DR:Replace Node on Primary failed with error"Rebuild data for faulted node failed"
* 4154104 (4142772) Error mask NM_ERR_DCM_ACTIVE on rlink may not be cleared resulting in the rlink being unable to get into DCM again.
* 4154107 (3995831) System hung: A large number of SIOs got queued in FMR.
* 4155091 (4118510) Volume manager tunable to control log file permissions
* 4155719 (4154921) system is stuck in zio_wait() in FC-IOV environment after reboot the primary control domain when dmp_native_support is on.
* 4157012 (4145715) Secondary SRL log error while reading data from log
* 4157643 (4159198) vxfmrmap coredump.
* 4158517 (4159199) AIX 7.3 TL2 - Memory fault(coredump) while running "./scripts/admin/vxtune/vxdefault.tc"
* 4158662 (4159200) AIX 7.3 - Script error while installing VXVM Patch -"VRTSvxvm.post_u[289]: -q: not found"
* 4158920 (4159680) set_proc_oom_score: not found while /usr/lib/vxvm/bin/vxconfigbackupd gets executed
* 4161646 (4149528) Cluster wide hang after faulting nodes one by one
* 4162053 (4132221) Supportability requirement for easier path link to dmpdr utility
* 4162055 (4116024) machine panic due to access illegal address.
* 4162058 (4046560) vxconfigd aborts on Solaris if device's hardware path is too long.
* 4162665 (4162664) VxVM support on Rocky Linux 8 and 9.
* 4162917 (4139166) Enable VVR Bunker feature for shared diskgroups.
* 4162966 (4146885) Restarting syncrvg after termination will start sync from start
* 4164114 (4162873) disk reclaim is slow.
* 4164250 (4154121) add a new tunable use_hw_replicatedev to enable Volume Manager to import the hardware replicated disk group.
* 4164252 (4159403) add clearclone option automatically when import the hardware replicated disk group.
* 4164254 (4160883) clone_flag was set on srdf-r1 disks after reboot.
* 4164944 (4165970) LOG_FILE_PERM & perm_change command not found
* 4164947 (4165971) /var/tmp/rpm-tmp.Kl3ycu: line 657: [: missing `]'
* 4165431 (4160809) [Cosmote][NBFS][media-only]CVM Failed to join after reboot operation from GUI
* 4165889 (4165158) Disk associated with CATLOG showing in RECOVER State after rebooting nodes.
* 4166881 (4164734) Disable support for TLS1.1
* 4166882 (4161852) [BankOfAmerica][Infoscale][Upgrade] Post InfoScale upgrade, command "vxdg upgrade" succeeds but generates apparent error "RLINK is not encypted"
* 4172377 (4172033) Data corruption due to stale DRL agenodes
* 4173722 (4158303) Panic at dmpsvc_da_analyze_error+417
* 4174239 (4171979) Panic occurs with message "kernel BUG at fs/inode.c:1578!"
Patch ID: VRTSaslapm 8.0.2.1700
* 4137995 (4117350) Import operation on disk group created on Hitachi ShadowImage (SI) disks is failing.
Patch ID: VRTSvxvm-8.0.2.1500
* 4161828 (4161827) RHEL8.10 Platform Support in VxVM
Patch ID: VRTSvxvm-8.0.2.1400
* 4132775 (4132774) VxVM support on SLES15 SP5
* 4133930 (4100646) Recoveries of dcl objects not happening due to ATT, RELOCATE flags are set on DCL subdisks
Patch ID: VRTSaslapm 8.0.2.1400
* 4137995 (4117350) Import operation on disk group created on Hitachi ShadowImage (SI) disks is failing.
Patch ID: VRTSvxvm-8.0.2.1200
* 4119267 (4113582) In VVR environments, reboot on VVR primary nodes results in RVG going into passthru mode.
* 4123065 (4113138) 'vradmin repstatus' invoked on the secondary site shows stale information
* 4123069 (4116609) VVR Secondary logowner change is not reflected with virtual hostnames.
* 4123080 (4111789) VVR does not utilize the network provisioned for it.
* 4123345 (4113323) VxVM Support on RHEL 8.8
* 4124291 (4111254) vradmind dumps core while associating a rlink to rvg because of NULL pointer reference.
* 4124794 (4114952) With virtual hostnames, pause replication operation fails.
* 4124796 (4108913) Vradmind dumps core because of memory corruption.
* 4125392 (4114193) 'vradmin repstatus' incorrectly shows replication status as inconsistent.
* 4125811 (4090772) vxconfigd/vx commands hung if fdisk opened secondary volume and secondary logowner panic'd
* 4128127 (4132265) Machine attached with NVMe devices may panic.
* 4128835 (4127555) Unable to configure replication using diskgroup id.
* 4129766 (4128380) With virtual hostnames, 'vradmin resync' command may fail if invoked from DR site.
* 4130402 (4107801) /dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.
* 4130827 (4098391) Continuous system crash is observed during VxVM installation.
* 4130947 (4124725) With virtual hostnames, 'vradmin delpri' command may hang.
Patch ID: VRTSaslapm 8.0.2.1200
* 4132966 (4116868) ASLAPM rpm Support on RHEL 8.8
Patch ID: VRTSvxvm-8.0.2.1100
* 4125322 (4119950) Security vulnerabilities exist in third-party components [curl and libxml].
Patch ID: VRTSaslapm 8.0.2.1100
* 4125322 (4119950) Security vulnerabilities exist in third-party components [curl and libxml].
Patch ID: VRTSveki-8.0.2.1600
* 4184573 (4184575) Veki failed to install on rocky machine due to improper version.
Patch ID: VRTSveki-8.0.2.1400
* 4135795 (4135683) Enhancing debugging capability of VRTSveki package installation
* 4140468 (4152368) Some incidents do not appear in changelog because their cross-references are not properly processed
Patch ID: VRTSveki-8.0.2.1200
* 4120300 (4110457) Veki packaging was failing due to a dependency
* 4130816 (4130815) Generate and add changelog in VEKI rpm
* 4132635 (4113340) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 8 (RHEL8.8).
Patch ID: VRTSveki-8.0.2.1100
* 4118568 (4110457) Veki packaging was failing due to a dependency
Patch ID: VRTSgms-8.0.2.1900
* 4181486 (4181235) FS components(GLM,GMS,ODM,FSadv) packaging were failing due to the wrong secure boot keys location.
Patch ID: VRTSgms-8.0.2.1500
* 4140460 (4152373) Some incidents do not appear in changelog because their cross-references are not properly processed
Patch ID: VRTSgms-8.0.2.1200
* 4126266 (4125932) no symbol version warning for ki_get_boot in dmesg after SFCFSHA configuration.
* 4127527 (4107112) When finding GMS module with version same as kernel version, need to consider kernel-build number.
* 4127528 (4107753) If GMS module with version same as kernel version is not present, need to consider kernel build number to calculate best fit module.
* 4129708 (4129707) Generate and add changelog in GMS rpm
Patch ID: VRTSglm-8.0.2.1900
* 4181484 (4181235) FS components(GLM,GMS,ODM,FSadv) packaging were failing due to the wrong secure boot keys location.
Patch ID: VRTSglm-8.0.2.1500
* 4138274 (4126298) System may panic due to an unhandled kernel paging request, and memory corruption could happen.
Patch ID: VRTSglm-8.0.2.1200
* 4124912 (4118297) GLM support for RHEL9.2.
* 4127524 (4107114) When finding GLM module with version same as kernel version, need to consider kernel-build number.
* 4127525 (4107754) If GLM module with version same as kernel version is not present, need to consider kernel build number to calculate best fit module.
* 4127626 (4127627) GLM support for RHEL9.0 minor kernel 5.14.0-70.36.1
* 4129715 (4129714) Generate and add changelog in GLM rpm
Patch ID: VRTSfsadv-8.0.2.2500
* 4188577 (4188576) Security vulnerabilities exist in the Curl third-party components used by VxFS.
Patch ID: VRTSspt-8.0.2.1300
* 4139975 (4149462) A new script, list_missing_incidents.py, is provided, which compares the changelogs of rpms and lists incidents missing in the new version.
* 4146957 (4149448) A new script, check_incident_inchangelog.py, is provided, which checks whether an incident abstract is present in the changelog.
Patch ID: VRTSrest-3.0.10
* 4124960 (4130028) GET apis of vm and filesystem were failing because of datatype mismatch in spec and original output, if the client generates the client code from specs
* 4124963 (4127170) While modifying the system list for service group when dependency is there, the api would fail
* 4124964 (4127167) -force option is used by default in delete of rvg and a new -online option is used in patch of rvg
* 4124966 (4127171) While getting excluded disks on Systems API we were getting nodelist instead of nodename in href
* 4124968 (4127168) In GET request on rvgs all datavolumes in RVGs not listed correctly
* 4125162 (4127169) Get disks api failing when cvm is down on any node
Patch ID: VRTSpython-3.9.16 P07
* 4179488 (4179487) Upgrading Multiple vulnerable thirdparty modules and cleaning up .pyenv directory unused files under VRTSPython for IS 8.0.2.
Patch ID: VRTSsfcpi-8.0.2.1500
* 4115603 (4115601) On Solaris, the publisher list gets displayed during the Infoscale start, stop, and uninstall process, and a unique publisher list is not displayed during install and upgrade.
* 4115707 (4126025) While performing full upgrade of the secondary site, SRL missing & RLINK dissociated error observed.
* 4115874 (4124871) Configuration of Vxfen fails for a three-node cluster on VMs in different AZs
* 4116368 (4123645) During the Rolling upgrade response file creation, CPI is asking to unmount the VxFS filesystem.
* 4116406 (4123654) Removed unnecessary swap space message.
* 4116879 (4126018) During addnode, installer fails to mount resources.
* 4116995 (4123657) Installer retries upgrading protocol version post-upgrade.
* 4117956 (4104627) Providing multiple-patch support up to 10 patches.
* 4121961 (4123908) Installer does not register InfoScale hosts to the VIOM Management Server after InfoScale configuration.
* 4122442 (4122441) CPI is displaying licensing information after starting the product through the response file.
* 4122749 (4122748) On Linux, had service fails to start during rolling upgrade from InfoScale 7.4.1 or lower to higher InfoScale version.
* 4126470 (4130003) Installer failed to start vxfs_replication while performing Configuration of Enterprise on OEL 9.2
* 4127111 (4127117) On a Linux system, you can configure the GCO(Global Cluster option) with a hostname by using InfoScale installer.
* 4130377 (4131703) Installer performs dmp include/exclude operations if /etc/vx/vxvm.exclude is present on the system.
* 4131315 (4131314) Environment="VCS_ENABLE_PUBSEC_LOG=0" is added from cpi in install section of service file instead of Service section.
* 4131684 (4131682) On SunOS, installer prompts the user to install 'bourne' package if it is not available.
* 4132411 (4139946) Rolling upgrade fails if the recommended upgrade path is not followed.
* 4133019 (4135602) Installer failed to update main.cf file with VCS user during reconfiguring a secured cluster to non-secured cluster.
* 4133469 (4136432) add node to higher version of infoscale node fails.
* 4135015 (4135014) CPI installer should not ask to install InfoScale after "./installer -precheck" is done.
* 4136211 (4139940) Installer failed to get the package version and failed due to PADV mismatch.
* 4139609 (4142877) Missing HF list not displayed during upgrade by using the patch release.
* 4140512 (4140542) Rolling upgrade failed for the patch installer
* 4157440 (4158841) VRTSrest version changes support.
* 4157696 (4157695) During upgrade from IS 7.4.2 U7 to IS 8.0.2, the VRTSpython version upgrade fails.
* 4158650 (4164760) The installer will check for dvd pkg version with the available patch pkg version to install the latest pkgs.
* 4159940 (4159942) The installer will not update existing file permissions.
* 4161937 (4160983) In Solaris, the vxfs modules are getting removed from current BE while upgrading the Infoscale to ABE.
* 4164945 (4164958) The installer will check for pkg version to allow EO tunable changes in config files.
* 4165118 (4171259) The installer will add the new node in cluster.
* 4165727 (4165726) Getting error message like "A more recent version of InfoScale Availability, 8.0.2.*, is already installed" when user tries to upgrade the patch on GA using RU and when user tries to upgrade the product from InfoScale Availability to InfoScale Enterprise.
* 4165730 (4165726) Getting error message like "A more recent version of InfoScale Availability, 8.0.2.*, is already installed" when user tries to upgrade the patch on GA using RU and when user tries to upgrade the product from InfoScale Availability to InfoScale Enterprise.
* 4165840 (4165833) InfoScale installer does not support installation using IPS repository on Solaris.
* 4166659 (4171256) The installer eases checking the RVG role while upgrading the VVR host.
* 4166980 (4166979) [VCS] - VMwareDisks agent is unable to start and run after upgrade to RHEL 8.10 and Infoscale 8.0.2.1700
* 4167308 (4171253) IS 8.0.2U3 : CPI doesn't ask to set EO tunable in case of Infoscale upgrade
* 4177618 (4184454) Added dbed-related checks.
* 4178007 (4177807) Changed the message for CPC fencing not being written to the /etc/vxenviron env file.
* 4181039 (4181037) Made the /etc/vx/vxdbed/dbedenv file accessible.
* 4181282 (4181279) Configuration fails if dbed is not installed.
* 4181787 (4181786) VCS configuration with responsefile changing the interface bootproto from dhcp to none when we configure the LLT over UDP.
* 4184438 (4186642) The installer will not ask for VIOM registration in case of start option.
Patch ID: VRTSvlic-4.01.802.002
* 4173483 (4173483) Providing Patch Release for VRTSvlic
Patch ID: VRTSsfmh-vom-HF0802551
* 4189545 (4189544) VIOM 8.0.2.551 VRTSsfmh package for InfoScale 8.0.2 Update releases
Patch ID: VRTSdbac-8.0.2.1400
* 4161967 (4157901) vcsmmconfig.log file permission is hardcoded, but permission should be set as per EO-tunable VCS_ENABLE_PUBSEC_LOG_PERM.
Patch ID: VRTSdbac-8.0.2.1300
* 4153145 (4153140) Veritas Infoscale Availability does not qualify latest kernels for sles/rhel platforms.
Patch ID: VRTSdbac-8.0.2.1200
* 4132631 (4113340) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 8 (RHEL8.8).
Patch ID: VRTSvcsea-8.0.2.2300
* 4189548 (4189547) Invalid details mentioned while executing fire drill of oracle agent with oracle21c.
Patch ID: VRTSvcsea-8.0.2.1700
* 4180094 (4180091) Offline script not able to exit as fuser check is being run on all disks, even the ones not under VCS control.
Patch ID: VRTSvcsea-8.0.2.1600
* 4088599 (4088595) hapdbmigrate utility fails to online the oracle service group
Patch ID: VRTSvcsea-8.0.2.1400
* 4058775 (4073508) Oracle virtual fire-drill is failing.
Patch ID: VRTSamf-8.0.2.1600
* 4161436 (4161644) System panics when AMF enabled and there are Process/Application resources.
* 4162305 (4168084) AMF caused kernel BUG: scheduling while atomic when umount file system.
Patch ID: VRTSamf-8.0.2.1400
* 4137600 (4136003) A cluster node panics when the AMF module overruns internal buffer to analyze arguments of an executable binary.
Patch ID: VRTSamf-8.0.2.1200
* 4132471 (4113340) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 8 (RHEL8.8).
Patch ID: VRTSgab-8.0.2.1600
* 4161908 (4160398) Veritas Infoscale Availability does not support Rocky Linux release.
Patch ID: VRTSgab-8.0.2.1400
* 4153142 (4153140) Veritas Infoscale Availability does not qualify latest kernels for sles/rhel platforms.
Patch ID: VRTSgab-8.0.2.1200
* 4123487 (4113340) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 8 (RHEL8.8).
Patch ID: VRTSvcsag-8.0.2.2300
* 4189572 (4188318) KVMGuest agent: automatic removal of an invalid env file once the environment becomes valid.
* 4189590 (4075950) IPv6 neighbor flush logic needs to be added to IP/MultiNIC agents
* 4189594 (4189392) Added support for attaching GCP regional disks in read-only mode to multiple instances.
Patch ID: VRTSvcsag-8.0.2.2100
* 4180582 (4180581) AWSIP agent does not fail over in case system gets faulted and when IPv6 is used.
Patch ID: VRTSvcsag-8.0.2.1700
* 4177815 (4175426) VMwareDisk Agent taking longer time to failover.
Patch ID: VRTSvcsag-8.0.2.1600
* 4149272 (4164374) VCS DNS Agent monitor gets timeout if multiple DNS servers are added as Stealth Masters and if few of them get hung.
* 4156630 (4156628) Getting message "Uninitialized value $version in string eq at /opt/VRTSvcs/bin/NIC/monitor line 317" constantly.
* 4162102 (4163518) Apache (httpd) agent hangs on reboot due to ordering dependency deadlock between vcs and httpd.
* 4162659 (4162658) LVMVolumeGroup resource fails to offline/clean in cloud environment after path failure.
* 4162753 (4142040) While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs.conf/config/types.cf' file on Veritas Cluster Server(VCS) might be incorrectly updated.
Patch ID: VRTSvcsag-8.0.2.1500
* 4157581 (4157580) There are security vulnerabilities present in the current version of third-party component, OpenSSL, that is utilized by VCS.
Patch ID: VRTSvcsag-8.0.2.1400
* 4114880 (4152700) When Private DNS Zone resource ID is passed, the AzureDNSZone Agent returns an error saying that the resource cannot be found.
* 4135534 (4152812) AWS EBSVol agent takes long time to perform online and offline operations on resources.
* 4137215 (4094539) Agent resource monitor not parsing process name correctly.
* 4137376 (4122001) NIC resource remain online after unplug network cable on ESXi server.
* 4137377 (4113151) VMwareDisksAgent reports resource online before VMware disk to be online is present into vxvm/dmp database.
* 4137602 (4121270) EBSvol agent error in attach disk : RHEL 7.9 + Infoscale 8.0 on AWS instance type c6i.large with NVME devices.
* 4137618 (4152886) AWSIP agent fails to bring OverlayIP resources online and offline on the instances in a shared VPC.
* 4143918 (4152815) AWS EBS Volume in-use with other AWS instance is getting used by cluster nodes through AWS EBSVol agent.
Patch ID: VRTSvcsag-8.0.2.1200
* 4130206 (4127320) The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.
Patch ID: VRTSllt-8.0.2.2300
* 4189272 (4189271) LLT service is unable to start due to LLT_IRQBALANCE
* 4189571 (4167108) replace yield() with cond_resched()
* 4189853 (4189566) In an InfoScale FSS environment where LLT links are configured over RDMA, the CVM slave node panics whilst joining the cluster with the CVM master.
Patch ID: VRTSllt-8.0.2.2100
* 4186647 (4186645) Enable LLT_IRQBALANCE by default, and add a check on hpe_irqbalance to detect any anomalies
Patch ID: VRTSllt-8.0.2.1800
* 4187795 (4180026) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 5(RHEL9.5).
Patch ID: VRTSllt-8.0.2.1700
* 4179383 (3989372) When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.
Patch ID: VRTSllt-8.0.2.1600
* 4162744 (4139781) Unexpected or corrupted skb, memory type missing in buffer header.
* 4173093 (4164328) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).
Patch ID: VRTSllt-8.0.2.1400
* 4132209 (4124759) Panic happened with llt_ioship_recv on a server running in AWS.
* 4137611 (4135825) Once the root file system is full during LLT start, the LLT module fails to load forever.
* 4153057 (4137325) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).
Patch ID: VRTSllt-8.0.2.1200
* 4124138 (4122405) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).
* 4128886 (4128887) During rmmod of llt package, warning trace is observed on kernel versions higher than 5.14 on RHEL9 and SLES15.
* 4132621 (4125118) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).
Patch ID: VRTScps-8.0.2.2300
* 4189591 (4188652) After configuring CP server, getting EO related error in CP server logs.
* 4189990 (4189584) Security vulnerabilities exist in the SQLite third-party component used by VCS.
Patch ID: VRTScps-8.0.2.1600
* 4152885 (4152882) vxcpserv process received SIGABRT signal due to invalid pointer access in acvsc_lib while writing logs.
Patch ID: VRTSdbed-8.0.2.1400
* 4188986 (4188985) Checkpoint creation fails for oracle database application using dbed, if archive log is set as a directory inside the mountpoint
Patch ID: VRTSdbed-8.0.2.1200
* 4163136 (4136146) Update security component libraries.
Patch ID: VRTSdbed-8.0.2.1100
* 4153061 (4092588) SFAE failed to start with systemd.
Patch ID: VRTSvbs-8.0.2.1200
* 4189595 (4188647) Virtual Business Services feature will not work with latest Linux platform.
Patch ID: VRTSvbs-8.0.2.1100
* 4163135 (4136146) Update security component libraries.
Patch ID: VRTSvxfen-8.0.2.2300
* 4189905 (4189906) vxfendisk fails due to ksh overwriting positional parameters by default after executing subsequent scripts inside it.
Patch ID: VRTSvxfen-8.0.2.1900
* 4187629 (4187897) Security vulnerabilities exist in the Curl third-party components used by VCS.
Patch ID: VRTSvxfen-8.0.2.1800
* 4180027 (4180026) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 5(RHEL9.5).
Patch ID: VRTSvxfen-8.0.2.1700
* 4176111 (4176110) vxfentsthdw failed to verify fencing disks compatibility in KVM environment
* 4177677 (4176592) Flooding of 'vxfen.log' file with the error message - "VXFEN already configured".
Patch ID: VRTSvxfen-8.0.2.1600
* 4164329 (4164328) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).
Patch ID: VRTSvxfen-8.0.2.1400
* 4137326 (4137325) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).
Patch ID: VRTSvxfen-8.0.2.1200
* 4124086 (4124084) Security vulnerabilities exist in the Curl third-party components used by VCS.
* 4124644 (4122405) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).
* 4125891 (4113847) Support for even number of coordination disks for CVM-based disk-based fencing
* 4125895 (4108561) Reading vxfen reservation not working
* 4132625 (4125118) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).
Patch ID: VRTSvcs-8.0.2.2300
* 4189593 (4188662) App group faulted during upgrade
Patch ID: VRTSvcs-8.0.2.2200
* 4189253 (4189252) Upgrading Netsnmp component to fix security vulnerabilities.
Patch ID: VRTSvcs-8.0.2.1600
* 4162755 (4136359) When upgrading InfoScale with latest Public Patch Bundle or VRTSvcsag package, types.cf is updated.
Patch ID: VRTSvcs-8.0.2.1500
* 4157581 (4157580) There are security vulnerabilities present in the current version of third-party component, OpenSSL, that is utilized by VCS.
Patch ID: VRTSvcs-8.0.2.1400
* 4133677 (4129493) Tenable security scan kills the Notifier resource.
Patch ID: VRTSvcs-8.0.2.1200
* 4113391 (4124956) GCO configuration with hostname is not working.
Patch ID: VRTScavf-8.0.2.2700
* 4177247 (4177245) CVMVolDg resource takes many minutes to online with CPS fencing.
Patch ID: VRTScavf-8.0.2.2100
* 4162683 (4153873) Deport decision depended only on the local system, not on all systems in the cluster
Patch ID: VRTScavf-8.0.2.1500
* 4133969 (4074274) DR test and failover activity might not succeed for hardware-replicated disk groups; EMC SRDF hardware-replicated disk groups fail with a 'PR operation failed' message.
* 4137640 (4088479) The EMC SRDF-managed diskgroup import failed with the below error. This failure is specific to EMC storage only on AIX with Fencing.
Patch ID: VRTSodm-8.0.2.2700
* 4175626 (4175627) ODM module failed to load with latest VxFS.
Patch ID: VRTSodm-8.0.2.2400
* 4188098 (4188097) ODM module failed to load with latest VxFS.
Patch ID: VRTSodm-8.0.2.2100
* 4175626 (4175627) ODM module failed to load with latest VxFS.
Patch ID: VRTSodm-8.0.2.1700
* 4154116 (4118154) System may panic in simple_unlock_mem() when errcheckdetail enabled.
* 4159290 (4159291) ODM module is not getting loaded with newly rebuilt VxFS.
Patch ID: VRTSodm-8.0.2.1500
* 4153091 (4153090) After installing VRTSvxfs-8.0.2.1400 ODM fails to start.
Patch ID: VRTSodm-8.0.2.1400
* 4144274 (4144269) After installing VRTSvxfs-8.0.2.1400 ODM fails to start.
Patch ID: VRTSodm-8.0.2.1200
* 4123834 (4113118) ODM support for RHEL 8.8.
* 4126262 (4126256) no symbol version warning for VEKI's symbol in dmesg after SFCFSHA configuration
* 4127518 (4107017) When finding ODM module with version same as kernel version, need to consider kernel-build number.
* 4127519 (4107778) If ODM module with version same as kernel version is not present, need to consider kernel build number to calculate best fit module.
* 4129838 (4129837) Generate and add changelog in ODM rpm
Patch ID: VRTSvxfs-8.0.2.2700
* 4135608 (4086287) VxFS mount command may panic the system with "scheduling while atomic: mount/453834/0x00000002" bug for cluster file system
* 4159938 (4155961) Panic in vx_rwlock during force unmount.
* 4164927 (4187385) Handling for IFAULOG type inode in fsck pass1b.
* 4177631 (4177630) Save the fsck progress status report to a file by default.
* 4177641 (4135900) LM Stress Worm fails hitting "Oops: 0002 [#1] PREEMPT SMP PTI"
* 4177643 (4085144) Implemented a fix to address the 'scheduling while atomic' bug in VxFS affecting FEL-enabled file systems.
* 4177650 (4164503) Fix to resolve a memory leak present in an internal VxFS library function.
* 4188062 (4188063) Internal assert was seen during sles15sp6 support for VxFS.
* 4188390 (4188391) There was a mismatch between the setfacl and getfacl command outputs for an empty ACL.
* 4189348 (4188888) The fstrim service faults on VxFS file systems present in /etc/fstab. Running the fstrim command manually results in the error "FITRIM ioctl failed: Invalid argument" for VxFS file systems.
* 4189349 (4188107) Softlockup occurred while shrinking a VxFS file system.
* 4189423 (4189424) FSQA binary freezeit fails with the error "ioctl VX_FREEZE failed"
* 4189586 (4189587) The setfacl operation failed with the error: Operation not supported.
* 4189598 (4187406) Panic in locked_inode_to_wb_and_lock_list during OS writeback.
* 4189599 (3743572) File system may hang when reaching the 1-billion-inode limit
* 4189600 (4189333) Fixed inode size mismatch after truncate/fallocate with vx_falloc_clear=1.
* 4189601 (4120787) Data corruption issues with parallel direct IO on ZFOD extents.
* 4189603 (4187096) Orphaned symlinks were not getting replicated in VFR.
* 4189604 (4184953) mkfs may generate a core dump with signal SIGSEGV
* 4189605 (4188417) NULL pointer dereference while trying to dereference fs_fel_info pointer in recovery context.
* 4189607 (4189606) SecureFS failed to create checkpoint as per schedule
* 4189642 (4127771) Full fsck fails and generates core dump.
* 4189648 (4142106) fsck -n shows warnings after successful log replay.
* 4189650 (4155954) Attribute data mismatch even if the node is owner while doing reverse name lookup in vxuditlogadm.
* 4189652 (4188805) Online migration process hangs
* 4189655 (4189654) VxFS mount binary does not support multi-category security SELinux contexts like "system_u:object_r:container_file_t:s0:c7,c28"
* 4189656 (4179548) Potential missed buffer flush during audit log file grow in vx_multi_bufflush when f_bsize is less than 8K.
* 4189659 (4180012) fsck utility was generating coredump due to a race between multiple threads of fsck.
* 4189663 (4181952) cfs.stress.worm hits assert "vx_fcl_bufinval:1a"
* 4189664 (4181957) SELinux denied messages seen while running LM Conformance on RHEL9.4
* 4189665 (4182162) Implemented a fix to allow creation of modifiable checkpoint of WORM checkpoint using "fsckptadm createall" command on multiple FS.
* 4189668 (4189667) VxFS medium impact coverity issues
* 4189669 (4182897) Failures seen while running LM CMDS->metasave testcase
* 4189672 (4188816) Migration fails when doing direct write and file system is disabled.
* 4189675 (4187574) Fix to address a core dump issue in the 'vxfstaskd' daemon caused by an internal race condition.
* 4189676 (4187819) Fix to avoid unnecessary execution of "vxsnap" command on every mounted vxfs filesystem.
* 4189677 (4188282) [RHEL9.4] LM Noise Replay Worm test exits with Failed to full fsck cleanly.
* 4189686 (4188813) Online migration process hangs.
* 4189702 (4189180) System hang due to global LRU lock contention.
* 4189792 (4189761) Use-after-free memory corruption occurred.
* 4190077 (4116377) File system check shows warnings for audit log record files
* 4190275 (4190241) vx_clear_zfod_extent triggers vx_dataioerr due to improper 64-bit offset alignment with 32-bit block size value
Patch ID: VRTSvxfs-8.0.2.2500
* 4189227 (4189228) Security vulnerabilities exist in the third-party components [zlib, libexpat] used by VxFS.
Patch ID: VRTSvxfs-8.0.2.2400
* 4141853 (4141854) Conformance->fsadm hits coredump.
* 4164637 (4164638) Fixing thread local memory leak in FSCK binary.
* 4177627 (4160991) Accessing the address which is freed and still present in the mlink.
* 4177656 (4167362) Memory leak observed in fsck through valgrind.
* 4177657 (4144669) Hitting an assert for clone inodes when the file system is not frozen and the full-fsck flag is set.
Patch ID: VRTSvxfs-8.0.2.2200
* 4177662 (4171368) Node panicked while unmounting filesystem.
* 4177663 (4168443) System panicked at vx_clonemap.
* 4177664 (4175488) DB2 thread hang seen while trying to acquire vx_rwsleep_rec lock.
Patch ID: VRTSvxfs-8.0.2.2100
* 4144078 (4142349) Using sendfile() on VxFS file system might result in hang.
* 4162063 (4136858) Added a basic sanity check for directory inodes in ncheck codepath.
* 4162064 (4121580) WORM flag is getting set on a checkpoint mounted in RW mode.
* 4162065 (4158238) vxfsrecover command exits with error if the previous invocation terminated abnormally.
* 4162066 (4156650) Older checkpoints remain, if SecureFS is recovered from newer checkpoint.
* 4162220 (4099775) System might panic if ownership change operations are performed on a quota-enabled file system
* 4163183 (4158381) Server panicked with "Kernel panic - not syncing: Fatal exception"
* 4164090 (4163498) Veritas File System df command logging does not have sufficient permissions while validating tunable configuration
* 4164270 (4156384) Filesystem's metadata can get corrupted due to missing transaction in the intent log
* 4165966 (4165967) mount and fsck commands face a few SELinux permission denials.
* 4166501 (4163862) Mutex lock contention is observed in cluster file system under massive file creation workload
* 4166502 (4163127) Spinlock contention observed during inode allocation for massive file creation operation on cluster file system.
* 4166503 (4162810) Spikes in CPU usage of glm threads were observed in output of "top" command during massive file creation workload on cluster file system.
* 4168357 (4076646) Unprivileged memory can get corrupted by VxFS in case the inode size is 512 bytes and the inode's attributes reside in its immediate area.
* 4172054 (4162316) FS migration to VxFS might hit a kernel panic if the CrowdStrike Falcon sensor is running.
* 4172753 (4173685) fsck command faces a few SELinux permission denials.
* 4173064 (4163337) Intermittent df slowness seen across the cluster.
* 4174242 (4174538) mount and fsck commands face a few SELinux permission denials.
* 4174244 (4174539) fsck command faces a few SELinux permission denials.
Patch ID: VRTSvxfs-8.0.2.1700
* 4158756 (4158757) VxFS support for RHEL-8.10.
* 4159284 (4145203) Invoking veki through systemctl inside vxfs-startup script.
* 4159938 (4155961) Panic in vx_rwlock during force unmount.
* 4160325 (4160740) fsck command faces a few SELinux permission denials.
* 4160326 (4160742) mount and fsck commands face a few SELinux permission denials.
* 4161120 (4161121) A non-root user is unable to access log files under the /var/log/vx directory
Patch ID: VRTSvxfs-8.0.2.1600
* 4157410 (4157409) Security vulnerabilities exist in the current versions of the third-party components sqlite and expat used by VxFS.
Patch ID: VRTSvxfs-8.0.2.1500
* 4119626 (4119627) fsck command faces a few SELinux permission denials.
* 4146580 (4141876) Parallel invocation of command vxschadm might delete previous SecureFS configuration.
* 4148734 (4148732) get_dg_vol_names is leaking memory.
* 4150065 (4149581) VxFS secure clock runs behind the expected time by a huge margin.
Patch ID: VRTSvxfs-8.0.2.1400
* 4141666 (4141665) Security vulnerabilities exist in the Zlib third-party components used by VxFS.
Patch ID: VRTSvxfs-8.0.2.1200
* 4121230 (4119990) Recovery stuck while flushing and invalidating the buffers
* 4123715 (4113121) VxFS support for RHEL 8.8.
* 4125870 (4120729) Incorrect file replication (VFR) job status at the VFR target site while replication is running at the source.
* 4125871 (4114176) After failover, job sync fails with error "Device or resource busy".
* 4125873 (4108955) VFR job hangs on source if thread creation fails on target.
* 4125875 (4112931) vxfsrepld consumes a lot of virtual memory when it has been running for long time.
* 4125878 (4096267) Veritas File Replication jobs might fail when a large number of jobs run in parallel.
* 4126104 (4122331) Enhancement in the VxFS error messages that are logged while marking a bitmap or inode as "BAD".
* 4127509 (4107015) When finding a VxFS module with the same version as the kernel, the kernel build number also needs to be considered.
* 4127510 (4107777) If a VxFS module with the same version as the kernel is not present, the kernel build number needs to be considered to calculate the best-fit module.
* 4127594 (4126957) System crashes with VxFS stack.
* 4127720 (4127719) Added fallback logic in fsdb binary and made changes to fstyp binary such that it now dumps uuid.
* 4127785 (4127784) Earlier, the fsppadm binary only gave a warning for an invalid UID/GID number. After this change, providing an invalid UID/GID, e.g.
"1ABC" (UIDs/GIDs are always numbers), results in an error and parsing stops.
* 4128249 (4119965) VxFS mount binary failed to mount VxFS with SELinux context.
* 4129494 (4129495) Kernel panic observed in internal VxFS LM conformance testing.
* 4129681 (4129680) Generate and add changelog in VxFS rpm
* 4131312 (4128895) On servers with SELinux enabled, VxFS mount command may throw error.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxvm-8.0.2.2600

* 4188358 (Tracking ID: 4188399)

SYMPTOM:
Customers experienced frequent performance degradations and escalations due to a few default tunable settings.

DESCRIPTION:
The new default values of the following tunables are:

vol_max_nmpool_sz       512M
vol_max_rdback_sz       512M
vol_rvio_maxpool_sz     512M

RESOLUTION:
The tunable defaults have been changed to prevent further potential escalations caused by performance degradation.
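The retuned values can be inspected with the vxtune utility; a minimal sketch, assuming the standard vxtune display syntax (tunable names taken from the list above):

```shell
# Display the current values of the retuned VVR memory tunables.
vxtune vol_max_nmpool_sz
vxtune vol_max_rdback_sz
vxtune vol_rvio_maxpool_sz
```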

* 4189232 (Tracking ID: 4189556)

SYMPTOM:
On RHEL9.6, module compilation fails and module insertion errors occur.

DESCRIPTION:
There are changes in the kernel API that needed to be reflected in the VxVM code.

RESOLUTION:
Necessary changes have been done to support VxVM on RHEL9.6 platform.

* 4189350 (Tracking ID: 4188549)

SYMPTOM:
vxconfigd died due to a floating point exception with below stack:
#0  in get_geometry ()
#1  in getpart ()
#2  in devintf_setup_slices () 
#3  in devintf_online_setup () 
#4  in auto_online () 
#5  in da_online () 
#6  in da_thread_online_disk ()

DESCRIPTION:
After getting the geometry from system, volume manager failed to perform a sanity check of disk header and sector number when calculating the cylinder number, hence the issue.

RESOLUTION:
Code changes have been made to perform a sanity check of disk header and sector number before calculating the cylinder number.

* 4189351 (Tracking ID: 4188560)

SYMPTOM:
The vxvm-encrypt service enters a failed state, and a core dump for vxencryptd may be generated.

DESCRIPTION:
VxVM encryption was unable to handle I/Os greater than 1 MB in size, which leaves the vxvm-encrypt service in a failed state.

RESOLUTION:
Code changes have been made to handle I/Os greater than 1 MB.

* 4189564 (Tracking ID: 4189567)

SYMPTOM:
System panics while running VVRCert

DESCRIPTION:
logical_block_size is being set to 0 instead of a default value. This causes incorrect queue limits to be set, which can lead to a system panic. Previously, we defaulted to a valid size if the value was too low, but that check is now missing.

RESOLUTION:
The logical_block_size value is now populated correctly.

* 4189695 (Tracking ID: 4188763)

SYMPTOM:
Stale and incorrect symbolic links to VxDMP devices in "/dev/disk/by-uuid".

DESCRIPTION:
On some systems with InfoScale installed, there can be stale symbolic links for /boot and /boot/efi pointing to "VxDMP" devices instead of "SD" devices.
DMP uses the "blkid" command to get the OS device based on UUID, but on some systems "blkid" takes a long time to complete.
In this scenario, a stale symbolic link to a VxDMP device can remain.

RESOLUTION:
Code changes are done to use "udevadm info" command instead of "blkid".
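To check whether a system is affected, the /boot entry's by-uuid link can be resolved directly; a hedged sketch (the mount point and expected device names are illustrative):

```shell
# Resolve the by-uuid symlink that /boot is mounted through.
uuid=$(findmnt -no UUID /boot)
readlink -f "/dev/disk/by-uuid/$uuid"
# A healthy link resolves to an SD device such as /dev/sda1;
# a stale one resolves under /dev/vx/dmp/.
```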

* 4189698 (Tracking ID: 4189447)

SYMPTOM:
VxVM (Veritas Volume Manager) creates some required files under /tmp
and /var/tmp directories.

DESCRIPTION:
VxVM (Veritas Volume Manager) creates some .lock files under /etc/vx directory. 

The non-root users have access to these .lock files, and they may accidentally
modify, move or delete those files.
Such actions may interfere with the normal functioning of the Veritas Volume
Manager.

RESOLUTION:
This fix addresses the issue by masking the write permission on these .lock files
for non-root users.

* 4189751 (Tracking ID: 4189428)

SYMPTOM:
Vulnerabilities have been reported in third party components, [curl , libxml] that are used by VxVM.

DESCRIPTION:
Third-party components [curl, libxml], in the versions currently used by VxVM, have been reported with security vulnerabilities that need to be addressed.

RESOLUTION:
[curl , libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

* 4189773 (Tracking ID: 4189301)

SYMPTOM:
Frequent IPM handle purging causes VVR service group switchover to fail.

DESCRIPTION:
Frequent IPM handle purging causes VVR service group switchover to fail. The switchover passes only in those iterations where a new IPM handle happens to be opened while VCS performs the switchover.

RESOLUTION:
The frequency of VVR IPM heartbeat handle creation has been enhanced: as soon as a command is fired, handle availability is checked in the local list of handles and, if a handle is not found, a new one is created. This handles the frequent purging of IPM handles.

Patch ID: VRTSaslapm 8.0.2.2600

* 4189576 (Tracking ID: 4185193)

SYMPTOM:
Customers experience UDID mismatch error on one of the NVMe devices, while test setup and in-house setup do not show the issue.

DESCRIPTION:
When using VxVM/ASL 7.4.2.5300 with RHEL 8 and 4 NVMe devices, customers encounter a UDID mismatch error on one of the devices. However, identical configurations on test setups and in-house environments work as expected. After debugging, it was discovered that the issue lies on the ioctl() vendor side.

RESOLUTION:
A sysfs approach has been adopted to resolve the UDID mismatch issue.

* 4189696 (Tracking ID: 4188831)

SYMPTOM:
Hitachi VSPOne array support is not present.

DESCRIPTION:
Hitachi VSPOne array support is not present.
Adding support for Hitachi VSPOne array.
Support for VSP One Block 20 Series
---- Firmware details
A3-XX, Standard Inquiry Byte[32-35]
---- Vendor and model name
Vendor: HITACHI, Standard Inquiry Byte[8-15]

RESOLUTION:
Added support for Hitachi VSPOne array.

* 4189772 (Tracking ID: 4189561)

SYMPTOM:
The NetApp ASA r2 array is not supported.

DESCRIPTION:
NetApp ASA r2 is a new array and the current ASL does not support it, so the array is not claimed and a core dump can occur with the previous ASL. Support for this array has been added in the current ASL.

RESOLUTION:
Added support for Netapp ASA r2 array

Patch ID: VRTSvxvm-8.0.2.2400

* 4189251 (Tracking ID: 4189428)

SYMPTOM:
Vulnerabilities have been reported in third party components, [curl , libxml] that are used by VxVM.

DESCRIPTION:
Third-party components [curl, libxml], in the versions currently used by VxVM, have been reported with security vulnerabilities that need to be addressed.

RESOLUTION:
[curl , libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

Patch ID: VRTSvxvm-8.0.2.2300

* 4184100 (Tracking ID: 4183777)

SYMPTOM:
The system log is flooded with fake alarms: "VxVM vxio V-5-0-0 read/write on disk: xxx took longer to complete".

DESCRIPTION:
When vol_ioship_stats_enable is disabled, the volume layer uses jiffies to initialize the I/O's start time, but DMP later uses the current time of day to reset the start time. Comparing the two different time formats causes a big discrepancy, hence the issue.

RESOLUTION:
Code changes have been done to set the IO's start and end time using the same format.

* 4184316 (Tracking ID: 4177113)

SYMPTOM:
dmpcert overload_fun.sh script was found to be going into an infinite loop when executed.

DESCRIPTION:
The original check within the vxdisk() function was necessary for older RHEL and SLES versions due to their lack of support for automatic device tree updates after fabric changes. On current releases, this outdated check caused the script to loop indefinitely.
RESOLUTION:
The vxdisk() function has been modified to remove the outdated check. After implementing the changes, the function now operates correctly on RHEL 8 and RHEL 9, without entering a loop. The modification has been validated, and the function performs as expected across all relevant environments.

* 4184318 (Tracking ID: 4155324)

SYMPTOM:
System state hampers subsequent installation retries.

DESCRIPTION:
In a VIKE deployment, if any one of our systemd services fails during installation, the service is not stopped correctly and does not
unload its kernel module. This keeps the modules loaded in the system and hampers subsequent installation retries.
A fix has been identified in the service files for our systemd services, covering all kernel modules coming from VxVM: the service files that load the kernel modules must also unload them on stop.

RESOLUTION:
ExecStopPost directives have been added to the required service files.
Kernel modules: vxio, vxdmp, vxspec, dmpaa

* 4187579 (Tracking ID: 4187459)

SYMPTOM:
Plex attach operations are taking an excessive amount of time to sync when Azure 4K Native disks are configured. All VxVM commands hang.

DESCRIPTION:
A defect in the DCO (Data Change Object) has been identified, which may generate non-4K-aligned I/Os. Customers may encounter this issue if the underlying disks are unable to handle unaligned I/Os.

RESOLUTION:
Code changes have been made to make sure IO is aligned with disk sector size.

Patch ID: VRTSaslapm 8.0.2.2300

* 4188105 (Tracking ID: 4188104)

SYMPTOM:
dummy incident for archival.

DESCRIPTION:
dummy incident for archival.

RESOLUTION:
Creating dummy incident for archival.

Patch ID: VRTSvxvm-8.0.2.2100

* 4124889 (Tracking ID: 4090828)

SYMPTOM:
FMR map data is dumped for better debuggability of corruption issues.

DESCRIPTION:
The vxplex att and vxvol recover CLIs internally fetch FMR maps from the kernel using an existing ioctl before starting the attach operation, get the data in binary
format, and dump it to a file named with a specific format like volname_taskid_date.

RESOLUTION:
Changes now dump the FMR map data into a binary file.

* 4138279 (Tracking ID: 4116496)

SYMPTOM:
System panic at dmp_process_errbp+47 with the following call stack.
machine_kexec
__crash_kexec
crash_kexec
oops_end
no_context
__bad_area_nosemaphore
do_page_fault
page_fault
[exception RIP: dmp_process_errbp+47]
dmp_daemons_loop
kthread
ret_from_fork

DESCRIPTION:
When a LUN is detached, bio->bi_disk is set to NULL, which causes a NULL pointer dereference panic when VxDMP calls bio_dev(bio).

RESOLUTION:
Code changes have been made to avoid panic.

* 4165431 (Tracking ID: 4160809)

SYMPTOM:
vxconfigd hangs during a VxVM transaction, causing a cluster-wide hang.

DESCRIPTION:
A VxVM volume configured with a Data Change Object (DCO) pre-allocates memory to perform bitmap read/write operations. This memory is pre-allocated at volume create/start time using a KMEM cache (via kmem_cache_alloc()). If the system is under memory pressure, this KMEM cache allocation can get stuck for a long time waiting for memory to be grabbed. This leads to a VxVM transaction hang and eventually to I/O slowness cluster-wide for a long time, causing application I/O timeouts.

RESOLUTION:
The FMR memory buffer allocation logic has been changed to use __get_free_pages() and vmalloc() based allocation instead of kmem_cache_alloc() calls to avoid hangs. Code has been added to ensure the allocation quickly falls back to vmalloc() if __get_free_pages() is unable to allocate memory, thus avoiding a hang.

* 4175713 (Tracking ID: 4175712)

SYMPTOM:
Vulnerabilities have been reported in third party components, [curl , libxml , openssl] that are used by VxVM.

DESCRIPTION:
Third-party components [curl, libxml, openssl], in the versions currently used by VxVM, have been reported with security vulnerabilities that need to be addressed.

RESOLUTION:
[curl , libxml , openssl] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

* 4178207 (Tracking ID: 4118809)

SYMPTOM:
System panic at dmp_process_errbp with the following call stack.
machine_kexec
__crash_kexec
crash_kexec
oops_end
no_context
__bad_area_nosemaphore
do_page_fault
page_fault
[exception RIP: dmp_process_errbp+203]
dmp_daemons_loop
kthread
ret_from_fork

DESCRIPTION:
When a LUN is detached, VxDMP may invoke its error handler to process the error buffer. During that period the OS SCSI device node could have been removed, so VxDMP cannot find the corresponding path node, which introduces a pointer dereference panic.

RESOLUTION:
Code changes have been made to avoid panic.

* 4178260 (Tracking ID: 4175390)

SYMPTOM:
Plexes will have STATE field as TEMPRMSD and the TUTIL0 field as NEW.

DESCRIPTION:
When adding multiple mirrors to a volume in the background in parallel, mirror plexes can go into the TEMPRMSD state. 'vxtask list' shows a task for only one plex; no tasks are created for the
remaining plexes.

# vxassist -g mirrordg -b mirror vol1 ibm_shark0_11
# vxassist -g mirrordg -b mirror vol1 ibm_shark0_12

# vxprint
<snip>
v  vol1         fsgen        ENABLED  4194304  -        ACTIVE   -       -
pl vol1-01      vol1         ENABLED  4194304  -        ACTIVE   -       -
sd ibm_shark0_10-01 vol1-01  ENABLED  4194304  0        -        -       -
pl vol1-02      vol1         ENABLED  4194304  -        ACTIVE   -       -
sd ibm_shark0_11-01 vol1-02  ENABLED  4194304  0        -        -       -
pl vol1-03      vol1         DISABLED 4194304  -        TEMPRMSD NEW     -
sd ibm_shark0_12-01 vol1-03  ENABLED  4194304  0        -        -       -

RESOLUTION:
Code changes have been made to fix the issue.

* 4179072 (Tracking ID: 4178449)

SYMPTOM:
vxconfigd aborts with a segfault; the vold core file shows thread stack corruption.

DESCRIPTION:
In vxconfigd multi-threaded mode, two threads were writing the translog in parallel
using a static buffer, which can be realloc'd to a bigger buffer, resulting in one thread
accessing the buffer after it was freed.

RESOLUTION:
Use thread-safe popen() and a mutex to protect the static buffer from use-after-free.

* 4179370 (Tracking ID: 4179002)

SYMPTOM:
After dynamic LUN expansion on RHEL 9, the resize operation failed and VxFS got corrupted.

DESCRIPTION:
While performing actions like Dynamic LUN Expansion (DLE), the OS removes partitions when it detects a disk change. The partition was not being added back by the OS because Dynamic Multipathing (DMP) didn't "revalidate" the disk during the disk open process in RHEL9 and upstream. This loss of the partition causes device open failures, resulting in corruption, as the entire device is used for I/O operations instead of the previously defined partition, which contains the file system data.

RESOLUTION:
Code change has been made to revalidate disk unconditionally.

* 4179818 (Tracking ID: 4178920)

SYMPTOM:
"vxdmp V-5-0-0 failed to get request for devno for IO offset" continuously appears in the system log.

DESCRIPTION:
DMP sets BLK_MQ_REQ_NOWAIT when allocating a request from the OS. This means that the OS might not be able to allocate a request for the I/O operation when the system is busy. As a result, DMP reports a warning message. When this occurs, DMP will retry the allocation later. This message is harmless.

RESOLUTION:
Code change has been made to reduce the log level of this message.

Patch ID: VRTSaslapm 8.0.2.2100

* 4184813 (Tracking ID: 4184814)

SYMPTOM:
No

DESCRIPTION:
Creating dummy incident only for readme

RESOLUTION:
None

Patch ID: VRTSvxvm-8.0.2.1700

* 4128883 (Tracking ID: 4112687)

SYMPTOM:
vxdisk resize corrupts the disk public region and causes the file system mount to fail.

DESCRIPTION:
For single path disk, during two transactions of resize operation, the private region IOs could be incorrectly sent to partition 3 of the GPT disk, which would cause 48 more sectors shift.  This may make the private region data written to public region and cause corruption.

RESOLUTION:
Code changes have been made to fix the problem.

* 4137508 (Tracking ID: 4066310)

SYMPTOM:
New feature for performance improvement

DESCRIPTION:
The Linux block subsystem has two types of block drivers: (1) block multi-queue drivers and (2) bio-based drivers. DMP has been a bio-based driver since day one; a block multi-queue mode has now been added for DMP.

RESOLUTION:
Added block multi-queue support for DMP.

* 4137995 (Tracking ID: 4117350)

SYMPTOM:
The below error is observed when trying to import:

# vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg
VxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]

Please refer to system log for details.

DESCRIPTION:
The REPLICATED flag is used to identify a hardware-replicated device, so to import a disk group on REPLICATED disks the usereplicatedev option must be used. Because that option was not provided, the issue was observed.

RESOLUTION:
REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.
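As the error text above hints, such disk groups can also be imported by passing the replicated-device option explicitly; a hedged sketch assembled from the options shown in the error message (disk group names taken from the example above):

```shell
# Import a DG residing on hardware-replicated LUNs, per the hint in the error:
# "Try vxdg [-o usereplicatedev=only] import option with -c[s]"
vxdg -n SVOL_SIdg -c -o useclonedev=on -o usereplicatedev=only -o updateid import SIdg
```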

* 4143558 (Tracking ID: 4141890)

SYMPTOM:
The TUTIL0 field may sometimes not get cleared after a cluster reboot.

DESCRIPTION:
The TUTIL0 field may sometimes not get cleared after a cluster reboot due to a cleanup issue in the volume start operation.

RESOLUTION:
A fix has been checked in for this. Autofix can also clean this up and trigger recovery.

* 4153377 (Tracking ID: 4152445)

SYMPTOM:
Replication failed to start due to vxnetd threads not running on secondary site.

DESCRIPTION:
vxnetd was waiting to start the "nmcomudpsrv" and "nmcomlistenserver" threads. Due to a race condition on a resource shared between those two threads, vxnetd was stuck
in a loop until the maximum retry count was reached.

RESOLUTION:
Code changes have been made to add lock protection to avoid the race condition.

* 4153566 (Tracking ID: 4090410)

SYMPTOM:
PID: 19769  TASK: ffff8fd2f619b180  CPU: 31  COMMAND: "vxiod"
 #0 [ffff8fcef196bbf0] machine_kexec at ffffffffbb2662f4
 #1 [ffff8fcef196bc50] __crash_kexec at ffffffffbb322a32
 #2 [ffff8fcef196bd20] panic at ffffffffbb9802cc
 #3 [ffff8fcef196bda0] volrv_seclog_bulk_cleanup_verification at ffffffffc09f099a [vxio]
 #4 [ffff8fcef196be18] volrv_seclog_write1_done at ffffffffc09f0a41 [vxio]
 #5 [ffff8fcef196be48] voliod_iohandle at ffffffffc0827688 [vxio]
 #6 [ffff8fcef196be88] voliod_loop at ffffffffc082787c [vxio]
 #7 [ffff8fcef196bec8] kthread at ffffffffbb2c5e61

DESCRIPTION:
This panic on secondary node is explicitly triggered when unexpected data is detected during data verification process. This is due to incorrect data sent by 
primary for a specific network failure scenario.

RESOLUTION:
The source has been changed to fix this problem on primary.

* 4153570 (Tracking ID: 4134305)

SYMPTOM:
Illegal memory access is detected when an admin SIO is trying to lock a volume.

DESCRIPTION:
While locking a volume, an admin SIO is converted to an incompatible SIO, on which collecting ilock stats causes memory overrun.

RESOLUTION:
The code changes have been made to fix the problem.

* 4153597 (Tracking ID: 4146424)

SYMPTOM:
CVM node join operation may hang with vxconfigd on the master node stuck in the following code path.
 
ioctl ()
 kernel_ioctl ()
 kernel_get_cvminfo_all ()
 send_slaves ()
 master_send_dg_diskids ()
 dg_balance_copies ()
 client_abort_records ()
 client_abort ()
 dg_trans_abort ()
 dg_check_kernel ()
 vold_check_signal ()
 request_loop ()
 main ()

DESCRIPTION:
During vxconfigd level communication between master and slave nodes, if GAB returns EAGAIN,
vxconfigd code does a poll on the GAB fd. In normal circumstances, the GAB will return the poll call
with appropriate return value. If however, the poll timeout occurs (poll returning 0), it was 
erroneously treated as success and the caller assumes that message was sent, when in fact it
had failed. This resulted in the hang in the message exchange between master and slave
vxconfigd.

RESOLUTION:
The fix is to retry the send operation on the GAB fd after some delay if the poll times out in the context
of an EAGAIN or ENOMEM error. The fix is applicable to both master-side and slave-side functions.

* 4153874 (Tracking ID: 4010288)

SYMPTOM:
During setup, the Replace Node operation fails because the DCM log plex does not get recovered.

DESCRIPTION:
This happens because the DCM log plex kstate becomes ENABLED with state RECOVER and the stale flag set on it. Plex attach expects the plex kstate to not be ENABLED, so the attach fails in this case. Due to a race, the plex state of the DCM log plex was getting set to ENABLED.

RESOLUTION:
Changes have been made to detect such a problematic DCM plex state, correct it, and then trigger normal plex attach transactions.

* 4154104 (Tracking ID: 4142772)

SYMPTOM:
In case SRL overflow happens frequently, the SRL reaches 99% full but the rlink is unable to get into DCM mode.

DESCRIPTION:
When starting DCM mode, the error mask NM_ERR_DCM_ACTIVE is checked to prevent duplicate triggers. This flag should have been reset after DCM mode was activated by reconnecting the rlink. Because of a race condition, the rlink reconnect may complete before DCM is activated, so the flag cannot be cleared.

RESOLUTION:
The code changes have been made to fix the issue.

* 4154107 (Tracking ID: 3995831)

SYMPTOM:
System hung: A large number of SIOs got queued in FMR.

DESCRIPTION:
When the I/O load is high, there may not be enough chunks available. In that case, the DRL flushsio needs to drive the fwait queue, which may free up some chunks. Due to a race condition and a bug inside DRL, DRL may queue the flushsio and fail to trigger it again; DRL then ends up permanently hung, unable to flush the dirty regions. The queued SIOs fail to be driven further, hence the system hang.

RESOLUTION:
Code changes have been made to drive SIOs which got queued in FMR.

* 4155091 (Tracking ID: 4118510)

SYMPTOM:
Volume manager tunable to control log file permissions

DESCRIPTION:
With US President Executive Order 14028 compliance changes, all product log file permissions were changed to 600. The tunable "log_file_permissions" was introduced to control the log file permissions: 600 (default), 640, or 644. The tunable can be changed at install time, or at any time followed by a reboot.

RESOLUTION:
Added the log_file_permissions tunable.
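For reference, the three accepted modes correspond to the following octal permission bits; a generic, product-independent sketch using a temporary file (paths are illustrative, not part of the product):

```shell
# Show what the 600 / 640 / 644 modes accepted by log_file_permissions mean.
f=$(mktemp)
chmod 600 "$f"; stat -c '%a %A' "$f"   # owner read/write only
chmod 640 "$f"; stat -c '%a %A' "$f"   # adds group read
chmod 644 "$f"; stat -c '%a %A' "$f"   # adds world read
rm -f "$f"
```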

* 4155719 (Tracking ID: 4154921)

SYMPTOM:
The system is stuck in zio_wait() in an FC-IOV environment after rebooting the primary control domain when dmp_native_support is on.

DESCRIPTION:
For various reasons, DMP might disable its subpaths. In a particular scenario, DMP might fail to reset the IO QUIESCE flag on its subpaths, which caused I/Os to be queued in the DMP defer queue. If the upper layer, like ZFS, keeps waiting for those I/Os to complete, this bug can hang the whole system.

RESOLUTION:
Code changes have been made to reset the IO quiesce flag properly after a DMP path is disabled.

* 4157012 (Tracking ID: 4145715)

SYMPTOM:
Replication disconnects.

DESCRIPTION:
There was an issue with dummy update handling on the secondary side when temp logging is enabled.
It was observed that the update next to a dummy update was not found on the secondary site. The dummy update
was being written with incorrect metadata about the size of the VVR update.

RESOLUTION:
Fixed the dummy update size metadata written to disk.

* 4157643 (Tracking ID: 4159198)

SYMPTOM:
The vxfmrmap utility generated a core dump on Solaris due to a missing id in pfmt()

DESCRIPTION:
The core dump was caused by a missing id in pfmt().

RESOLUTION:
Added id in pfmt() statement.

* 4158517 (Tracking ID: 4159199)

SYMPTOM:
A core dump was generated while running the TC "./scripts/admin/vxtune/vxdefault.tc" on AIX 7.3 TL2:
gettimeofday(??, ??) at 0xd02a7dfc
get_exttime(), line 532 in "vm_utils.c"
cbr_cmdlog(argc = 2, argv = 0x2ff224e0, a_client_id = 0), line 275 in "cbr_cmdlog.c"
main(argc = 2, argv = 0x2ff224e0), line 296 in "vxtune.c"

DESCRIPTION:
Passing a NULL parameter to the gettimeofday() function caused the coredump.

RESOLUTION:
Code changes have been made to pass a timeval parameter instead of NULL to the gettimeofday() function.
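The corrected calling convention, passing a real struct timeval rather than NULL, can be demonstrated from Python via ctypes (a sketch assuming a Unix libc; this is not the vxtune code itself):

```python
import ctypes

class Timeval(ctypes.Structure):
    # Matches struct timeval: seconds and microseconds since the epoch.
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_usec", ctypes.c_long)]

libc = ctypes.CDLL(None, use_errno=True)  # process-global libc symbols

tv = Timeval()
# Correct: a pointer to a timeval for the first argument, not NULL.
rc = libc.gettimeofday(ctypes.byref(tv), None)
assert rc == 0 and tv.tv_sec > 0
print(tv.tv_sec)  # seconds since the epoch
```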

* 4158662 (Tracking ID: 4159200)

SYMPTOM:
. . . . . << Copyright notice for VRTSvxvm >> . . . . . . .
Copyright (c) 2023 Veritas Technologies LLC. All rights reserved. Veritas and the Veritas Logo are trademarks or registered trademarks of Veritas Technologies LLC or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

The Licensed Software and Documentation are deemed to be commercial computer software as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19 "Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq. "Commercial Computer Software and Commercial Computer Software Documentation," as applicable, and any successor regulations, whether delivered by Veritas as on premises or hosted services. Any use, modification, reproduction release, performance, display or disclosure of the Licensed Software and Documentation by the U.S. Government shall be solely in accordance with the terms of this Agreement.
. . . . . << End of copyright notice for VRTSvxvm >>. . . .

sysck: 3001-024 The file /sbin/vxconfigd
    is the wrong file type.
VRTSvxvm.post_u[289]: -q: not found                               <<<<<<<<<<<<<< Script error
Finished processing all filesets. (Total time: 26 secs)

DESCRIPTION:
The script error is caused by use of an undeclared variable.

RESOLUTION:
Defined the variable before use.

* 4158920 (Tracking ID: 4159680)

SYMPTOM:
0 Fri Apr  5 20:32:30 IST 2024 + read bd_dg bd_dgid
         0 Fri Apr  5 20:32:30 IST 2024 +          0 Fri Apr  5 20:32:30 IST 2024 first_time=1
+ clean_tempdir
         0 Fri Apr  5 20:32:30 IST 2024 + whence -v set_proc_oom_score
         0 Fri Apr  5 20:32:30 IST 2024 set_proc_oom_score not found
         0 Fri Apr  5 20:32:30 IST 2024 +          0 Fri Apr  5 20:32:30 IST 2024 1> /dev/null
+ set_proc_oom_score 17695012
         0 Fri Apr  5 20:32:30 IST 2024 /usr/lib/vxvm/bin/vxconfigbackupd[295]: set_proc_oom_score:  not found
         0 Fri Apr  5 20:32:30 IST 2024 + vxnotify

DESCRIPTION:
The script contains:

type set_proc_oom_score &>/dev/null && set_proc_oom_score $$

Here stdout and stderr are not redirected to /dev/null, because "&>" is not POSIX-compliant.
The POSIX-compliant way to redirect both standard output and standard error is ">/dev/null 2>&1",
which also works in pre-POSIX Bourne shells.

RESOLUTION:
The redirection has been changed to the POSIX-compliant form:

type set_proc_oom_score >/dev/null 2>&1 && set_proc_oom_score $$

* 4161646 (Tracking ID: 4149528)

SYMPTOM:
vxconfigd and vx commands hang. The vxconfigd stack is seen as follows.

        volsync_wait
        volsiowait
        voldco_read_dco_toc
        voldco_await_shared_tocflush
        volcvm_ktrans_fmr_cleanup
        vol_ktrans_commit
        volconfig_ioctl
        volsioctl_real
        vols_ioctl
        vols_unlocked_ioctl
        do_vfs_ioctl
        ksys_ioctl
        __x64_sys_ioctl
        do_syscall_64
        entry_SYSCALL_64_after_hwframe

DESCRIPTION:
There is a hang in the CVM reconfig and DCO-TOC protocol, which causes vxconfigd and VxVM commands to hang.
In case of overlapping reconfigs, the rebuild seqno on the master and slave can end up with different values.
If a DCO-TOC protocol exchange is also in progress at that point, it hangs due to the difference in the
rebuild seqno (messages are dropped).

Messages similar to the following can be found in /etc/vx/log/logger.txt on the master node, showing the
mismatch in the rebuild seqno between the two messages. Compare the strings "rbld_seq: 1" and
"fsio-rbld_seqno: 0": the seqno received from the slave is 1 while the one present on the master is 0.

    Jan 16 11:57:56:329170 1705386476329170 38ee  FMR dco_toc_req: mv: masterfsvol1-1  rcvd req withold_seq: 0  rbld_seq: 1
    Jan 16 11:57:56:329171 1705386476329171 38ee  FMR dco_toc_req: mv: masterfsvol1-1  pend rbld, retry rbld_seq: 1  fsio-rbld_seqno: 0  old: 0  cur: 3  new: 3 
flag: 0xc10d  st

RESOLUTION:
Instead of using the rebuild seqno to determine whether the DCO-TOC protocol is running in the same
reconfig, the reconfig seqno is now used as the rebuild seqno. Since the reconfig seqno is the same on
all nodes in the cluster, the DCO-TOC protocol finds a consistent rebuild seqno during CVM reconfig,
and no node drops the DCO-TOC protocol messages.
A CVM protocol version check was added while using the reconfig seqno as the rebuild seqno, so the new
behavior takes effect only if the CVM protocol version is >= 300.
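The effect of the fix can be sketched in a few lines (hypothetical names, not VxVM code): deriving the rebuild seqno from the cluster-wide reconfig seqno guarantees that master and slave agree within one reconfig, so no DCO-TOC message is dropped:

```python
def accept_dco_toc_msg(local_rbld_seq, msg_rbld_seq):
    """A DCO-TOC message is dropped when the rebuild seqnos differ."""
    return local_rbld_seq == msg_rbld_seq

# Old scheme: each node maintains its own rebuild seqno, which can
# diverge during overlapping reconfigs ("rbld_seq: 1" from the slave
# vs "fsio-rbld_seqno: 0" on the master), so the message is dropped.
assert not accept_dco_toc_msg(local_rbld_seq=0, msg_rbld_seq=1)

# New scheme (CVM protocol version >= 300): both nodes use the shared
# reconfig seqno as the rebuild seqno, so they always match.
reconfig_seqno = 7
assert accept_dco_toc_msg(reconfig_seqno, reconfig_seqno)
```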

* 4162053 (Tracking ID: 4132221)

SYMPTOM:
Supportability requirement for easier path link to dmpdr utility

DESCRIPTION:
The current path of the DMPDR utility is long and hard for customers to remember, so a symbolic link to the utility was requested for easier access.

RESOLUTION:
Code changes have been made to create a symlink to the utility for easier access.
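A convenience link of this kind is an ordinary symlink; the sketch below uses hypothetical paths (the actual installed locations are not shown in this entry):

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
# Hypothetical long utility path and a short, easy-to-remember alias.
target = os.path.join(workdir, "opt", "tools", "diag.d", "dmpdr")
alias = os.path.join(workdir, "dmpdr")

os.makedirs(os.path.dirname(target))
open(target, "w").close()

os.symlink(target, alias)  # the short entry point
print(os.path.realpath(alias) == os.path.realpath(target))  # True
```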

* 4162055 (Tracking ID: 4116024)

SYMPTOM:
The kernel panicked at gab_ifreemsg with the following stack:
gab_ifreemsg
gab_freemsg
kmsg_gab_send
vol_kmsg_sendmsg
vol_kmsg_sender

DESCRIPTION:
In a CVR environment with an RVG of more than 600 data volumes, enabling the vxvvrstatd daemon through the vxvm-recover service causes vxvvrstatd to call ioctl(VOL_RV_APPSTATS). That ioctl generates a kmsg longer than 64k, which triggers a kernel panic because GAB/LLT do not support any message longer than 64k.

RESOLUTION:
Code changes have been made to limit the maximum number of data volumes for which ioctl(VOL_RV_APPSTATS) can request VVR statistics.

* 4162058 (Tracking ID: 4046560)

SYMPTOM:
vxconfigd aborts on Solaris if a device's hardware path is longer than 128 characters.

DESCRIPTION:
When vxconfigd starts, it claims the devices present on the node and updates the VxVM device database.
During this process, devices excluded from VxVM are removed from the VxVM device database. To decide
whether a device should be excluded, its full hardware path is examined. If the hardware path is longer
than 128 characters, vxconfigd aborts, because the code could not handle hardware path strings beyond
128 characters.

RESOLUTION:
The required code changes have been made to handle long hardware path strings.
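The failure is the classic fixed-size buffer problem. An illustrative Python sketch using a ctypes fixed buffer (not the vxconfigd code) contrasts the old 128-character limit with sizing storage from the string itself:

```python
import ctypes

def copy_path_fixed(path):
    """Old behavior: a fixed buffer of 128 chars (+1 for the NUL)."""
    buf = ctypes.create_string_buffer(129)
    buf.value = path.encode()  # raises ValueError when it does not fit
    return buf.value.decode()

def copy_path_dynamic(path):
    """The fix: storage sized from the string, any length works."""
    return str(path)

short = "/devices/pci@0/scsi@1/disk@0,0"
long_path = "/devices/" + "x" * 200  # > 128 characters

assert copy_path_fixed(short) == short
try:
    copy_path_fixed(long_path)
except ValueError:
    print("a 128-character buffer cannot hold this hardware path")
assert copy_path_dynamic(long_path) == long_path
```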

* 4162665 (Tracking ID: 4162664)

SYMPTOM:
VxVM fails to install on Rocky linux 8 and 9

DESCRIPTION:
VxVM fails to install on Rocky Linux and throws the following error:

This release of VxVM is for Red Hat Enterprise Linux 8
and CentOS Linux 8.
Please install the appropriate OS
and then restart this installation of VxVM.
error: %prein(VRTSvxvm-9.0.0.0000-0802_RHEL8.x86_64) scriptlet failed, exit status 1
error: VRTSvxvm-9.0.0.0000-0802_RHEL8.x86_64: install failed

RESOLUTION:
The required code changes have been made to make the package compatible with Rocky Linux 8 and 9.

* 4162917 (Tracking ID: 4139166)

SYMPTOM:
Enable VVR Bunker feature for shared diskgroups.

DESCRIPTION:
VVR Bunker feature was not supported for shared diskgroup configurations.

RESOLUTION:
Enable VVR Bunker feature for shared diskgroups.

* 4162966 (Tracking ID: 4146885)

SYMPTOM:
Restarting syncrvg after termination restarts the synchronization from the beginning.

DESCRIPTION:
'vradmin syncrvg' would terminate after 2 minutes of inactivity, for example due to a network error. If run again, it would restart from scratch.

RESOLUTION:
The 'vradmin syncrvg' operation now continues from where it was terminated.

* 4164114 (Tracking ID: 4162873)

SYMPTOM:
Disk reclaim is slow.

DESCRIPTION:
The disk reclaim length should be determined by the storage's maximum reclaim length, but Volume Manager split the reclaim request into segments smaller than that maximum, which led to a performance regression.

RESOLUTION:
A code change has been made to avoid splitting the reclaim request at the Volume Manager level.

* 4164250 (Tracking ID: 4154121)

SYMPTOM:
When the replicated disks are in SPLIT mode, importing its disk group on target node failed with "Device is a hardware mirror".

DESCRIPTION:
When the replicated disks are in SPLIT mode, in which they are readable and writable, importing their disk group on the target node failed with "Device is a hardware mirror". The third-party array does not expose a disk attribute indicating that a disk is in SPLIT mode. With this enhancement, the replicated disk group can be imported when use_hw_replicatedev is enabled.

RESOLUTION:
The code is enhanced to import the replicated disk group on the target node when use_hw_replicatedev is enabled.

* 4164252 (Tracking ID: 4159403)

SYMPTOM:
When the replicated disks are in SPLIT mode and use_hw_replicatedev is on, disks are marked as cloned disks after the hardware replicated disk group gets imported.

DESCRIPTION:
The clearclone option is now added automatically when importing a hardware replicated disk group, so that the cloned flag is cleared on the disks.

RESOLUTION:
The code is enhanced to import the replicated disk group with clearclone option.

* 4164254 (Tracking ID: 4160883)

SYMPTOM:
The clone_flag was set on SRDF-R1 disks after a reboot.

DESCRIPTION:
The clean-clone state was reset in the AUTOIMPORT case, which ultimately caused the clone_flag to be set on the disk.

RESOLUTION:
A code change has been made to correct the behavior of setting the clone_flag on a disk.

* 4164944 (Tracking ID: 4165970)

SYMPTOM:
Warnings are seen while installing VxVM packages.

DESCRIPTION:
A syntactical error prevented two variables from being read properly.

RESOLUTION:
The issue is fixed with a code change.

* 4164947 (Tracking ID: 4165971)

SYMPTOM:
Unexpected messages are seen while installing VxVM packages.

DESCRIPTION:
An unexpected message appears on the console while installing the VxVM package on RHEL9 machines.

The following unexpected message is generated while installing the /root/patch/VRTSvxvm* package:

/var/tmp/rpm-tmp.Kl3ycu: line 657: [: missing `]'

RESOLUTION:
The issue is fixed with a code change.

* 4165431 (Tracking ID: 4160809)

SYMPTOM:
vxconfigd hangs during a VxVM transaction, causing a cluster-wide hang.

DESCRIPTION:
A VxVM volume configured with a Data Change Object (DCO) pre-allocates memory to perform bitmap read/write operations. This memory is pre-allocated at volume create/start time from a KMEM cache (via kmem_cache_alloc()). If the system is under memory pressure, this allocation can block for a long time waiting for memory to be grabbed. This leads to a VxVM transaction hang and eventually to IO slowness or a cluster-wide stall lasting long enough to cause application IO timeouts.

RESOLUTION:
The FMR memory buffer allocation logic has been changed to use __get_free_pages() and vmalloc() instead of kmem_cache_alloc(). The allocation code quickly falls back to vmalloc() if __get_free_pages() cannot allocate memory, thus avoiding the hang.
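The shape of the fix is: try the fast allocator without blocking, and fall back immediately instead of waiting. A userspace Python sketch of that fallback pattern (the kernel code uses __get_free_pages() and vmalloc(); the names below are illustrative stand-ins):

```python
def get_free_pages(nbytes, memory_pressure):
    """Stand-in for the fast page allocator: fails fast, never blocks."""
    if memory_pressure:
        raise MemoryError("no free contiguous pages")
    return bytearray(nbytes)

def vmalloc(nbytes):
    """Stand-in for the slower but more flexible allocator."""
    return bytearray(nbytes)

def alloc_fmr_buffer(nbytes, memory_pressure=False):
    """Fall back quickly rather than blocking inside one allocator."""
    try:
        return get_free_pages(nbytes, memory_pressure)
    except MemoryError:
        return vmalloc(nbytes)

assert len(alloc_fmr_buffer(4096)) == 4096                        # fast path
assert len(alloc_fmr_buffer(4096, memory_pressure=True)) == 4096  # fallback
```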

* 4165889 (Tracking ID: 4165158)

SYMPTOM:
Plexes of layered volumes in a VVR environment remain in STALE state even after a manual or vxattachd-driven vxrecover operation.

DESCRIPTION:
Stale TUTILs are not detected and cleared by vxattachd under two specific conditions:
        1. The volume is under an RVG (VVR).
        2. The volume is layered.
This happens because vxattachd relies on the output of "vxprint -a", which does not include layered
volumes. When "vxprint -a" is given the name of a layered volume (vxprint -a vol-L01), it does print
the layered volume correctly.

RESOLUTION:
The "-h" option, when used with "-a", does show layered volumes without requiring the object name.
vxattachd has been modified to use "vxprint -ah" instead of "vxprint -a", which allows it to recover
the volumes. The logic that clears stale TUTILs has also been extended to private disk groups.

* 4166881 (Tracking ID: 4164734)

SYMPTOM:
Support for TLS1.1 is not disabled.

DESCRIPTION:
VxVM has already disabled support for TLS 1.0, SSLv2, and SSLv3, but support for TLS 1.1, which has known security vulnerabilities, was not disabled.

RESOLUTION:
Made the required code change to disable support for TLS 1.1.
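Disabling TLS 1.1 alongside the already-disabled protocols amounts to requiring TLS 1.2 as the floor. Python's ssl module illustrates the setting (the VxVM change itself is in its own SSL configuration, not shown here):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# Refuse SSLv3, TLS 1.0 and TLS 1.1 by requiring at least TLS 1.2.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.minimum_version > ssl.TLSVersion.TLSv1_1
print("TLS floor:", ctx.minimum_version.name)
```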

* 4166882 (Tracking ID: 4161852)

SYMPTOM:
Post InfoScale upgrade, the "vxdg upgrade" command succeeds but throws the error "RLINK is not encrypted".

DESCRIPTION:
In "vxdg upgrade" codepath we need to regenerate the encryption keys if encrypted Rlinks are present in VxVM configuration. But key regeneration code was getting called even if Rlinks are not encrypted. And so further code was throwing error that "VxVM vxencrypt ERROR V-5-1-20484 Rlink is not encrypted!"

RESOLUTION:
The necessary code changes have been made to invoke encryption key regeneration only for RLinks that are encrypted.

* 4172377 (Tracking ID: 4172033)

SYMPTOM:
Data corruption after recovery of volume

DESCRIPTION:
When a disabled/detached volume was started after the storage came back, stale agenodes were left
in memory, which prevented detach tracking for subsequent IOs on the same region as a stale agenode.

RESOLUTION:
Cleaned up the stale agenodes at the appropriate stage.

* 4173722 (Tracking ID: 4158303)

SYMPTOM:
The vxvm-boot service fails to start in the allotted time after applying patch 8.0.2.1500.

DESCRIPTION:
ESCALATION JIRA:
https://jira.community.veritas.com/browse/STESC-8721

Panic:

PID: 155809  TASK: ffff9abc40e08000  CPU: 11  COMMAND: "dmpdaemon"
#0 [ffffc13ca29a7b30] machine_kexec at ffffffffb4e6da33
#1 [ffffc13ca29a7b88] __crash_kexec at ffffffffb4fb757a
#2 [ffffc13ca29a7c48] crash_kexec at ffffffffb4fb84b1
#3 [ffffc13ca29a7c60] oops_end at ffffffffb4e2be31
#4 [ffffc13ca29a7c80] no_context at ffffffffb4e7f923
#5 [ffffc13ca29a7cd8] __bad_area_nosemaphore at ffffffffb4e7fc9c
#6 [ffffc13ca29a7d20] do_page_fault at ffffffffb4e808b7
#7 [ffffc13ca29a7d50] page_fault at ffffffffb5a011ae
    [exception RIP: dmpsvc_da_analyze_error+417]
    RIP: ffffffffc0ecc411  RSP: ffffc13ca29a7e08  RFLAGS: 00010246
    RAX: 0000000000000000  RBX: ffff9abd96d1f800  RCX: 0000000000000000
    RDX: 0000000000000000  RSI: 68d38a89bb1d8221  RDI: ffffc13ca29a7e48
    RBP: ffff9abf201b1400   R8: ffffc13ca29a7e08   R9: ffffc13ca29a7e4e
    R10: 0000000000000000  R11: 0000000000000000  R12: ffff9abd96d1f100
    R13: 0000000000000000  R14: ffff9abd588c3938  R15: 00000000085000b0
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
#8 [ffffc13ca29a7e90] dmp_error_analysis_callback at ffffffffc10397fa [vxdmp]
#9 [ffffc13ca29a7ed0] dmp_daemons_loop at ffffffffc104b3a4 [vxdmp]
#10 [ffffc13ca29a7f10] kthread at ffffffffb4f1e974
#11 [ffffc13ca29a7f50] ret_from_fork at ffffffffb5a0028f

RESOLUTION:
BLK-MQ code processes IO as requests and does not deal with bios; the bio seen here appears to be a dummy
bio added only for compatibility. Code has been added to check whether an IO is request-based or bio-based
and, if it is request-based, to handle it accordingly here, as is already done in all other places.

* 4174239 (Tracking ID: 4171979)

SYMPTOM:
The system panics with the following message: "kernel BUG at fs/inode.c:1578!"

DESCRIPTION:
Panic Stack:
PID: 68713    TASK: ff1dea3fc85c4000  CPU: 17   COMMAND: "blkid"
 #0 [ff6a453d122e3958] machine_kexec at ffffffff8a06da63
 #1 [ff6a453d122e39b0] __crash_kexec at ffffffff8a1b86ca
 #2 [ff6a453d122e3a70] crash_kexec at ffffffff8a1b9601
 #3 [ff6a453d122e3a88] oops_end at ffffffff8a02be31
 #4 [ff6a453d122e3aa8] do_trap at ffffffff8a028047
 #5 [ff6a453d122e3af0] do_invalid_op at ffffffff8a028d86
 #6 [ff6a453d122e3b10] invalid_op at ffffffff8ac00da4
    [exception RIP: iput+436]
    RIP: ffffffff8a389804  RSP: ff6a453d122e3bc8  RFLAGS: 00010202
    RAX: ff1deac36eec3b88  RBX: ff1dea3ed45e4ec0  RCX: 000000000800005d
    RDX: ff1dea3ed45e4ec0  RSI: ff1dea649674d300  RDI: ff1dea3ed45e4fb8
    RBP: ffffffff8ba096c0   R8: 0000000000000000   R9: 0000000000000000
    R10: ff6a453d122e3c28  R11: 0000000000000007  R12: ff1deac36eec3a10
    R13: ffffffff8a3b32e0  R14: 0000000000000000  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #7 [ff6a453d122e3be8] bd_acquire at ffffffff8a3b3261
 #8 [ff6a453d122e3c08] blkdev_open at ffffffff8a3b332e
 #9 [ff6a453d122e3c20] do_dentry_open at ffffffff8a364fd3
#10 [ff6a453d122e3c50] path_openat at ffffffff8a37acab
#11 [ff6a453d122e3d28] do_filp_open at ffffffff8a37d153

RESOLUTION:
An inode reference leak in the code causes the inode reference count to increase until it reaches the maximum value of a 32-bit unsigned int (4294967295). Once it hits that value, it wraps back to 0, an invalid value that causes the system panic.
Code changes have been made to fix the inode reference count leaks.
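The wraparound described above is plain 32-bit unsigned arithmetic, easily demonstrated with ctypes:

```python
import ctypes

count = ctypes.c_uint32(4294967295)  # the maximum 32-bit unsigned value
count.value += 1                     # one more leaked reference...
print(count.value)                   # wraps to 0, the invalid count
assert count.value == 0
```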

Patch ID: VRTSaslapm 8.0.2.1700

* 4137995 (Tracking ID: 4117350)

SYMPTOM:
The following error is observed when trying to import a disk group:

# vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg
VxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]

Please refer to system log for details.

DESCRIPTION:
The REPLICATED flag identifies a hardware replicated device, so the usereplicatedev option must be used to import a disk group on REPLICATED disks. Because that option was not provided, the issue was observed.

RESOLUTION:
REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.

Patch ID: VRTSvxvm-8.0.2.1500

* 4161828 (Tracking ID: 4161827)

SYMPTOM:
RHEL8.10 Platform Support in VxVM

DESCRIPTION:
A few changes and compilation with the RHEL8.10 kernel are required.

RESOLUTION:
The necessary changes have been made to make VxVM compatible with RHEL8.10.

Patch ID: VRTSvxvm-8.0.2.1400

* 4132775 (Tracking ID: 4132774)

SYMPTOM:
Existing VxVM package fails to load on SLES15SP5

DESCRIPTION:
This kernel has multiple changes related to the handling of SCSI passthrough requests, the initialization of bio routines, and the ways of obtaining block requests. Hence the existing code is not compatible with SLES15 SP5.

RESOLUTION:
Required changes have been done to make VxVM compatible with SLES15SP5.

* 4133930 (Tracking ID: 4100646)

SYMPTOM:
Recoveries of DCL objects do not happen because the ATT and RELOCATE flags are set on DCL subdisks.

DESCRIPTION:
For multiple reasons, a stale tutil may remain stamped on DCL subdisks, which may prevent subsequent
vxrecover instances from recovering the DCL plex.

RESOLUTION:
The issue is resolved by having the vxattachd daemon detect these stale tutils, clear them, and trigger recoveries after a 10-minute interval.

Patch ID: VRTSaslapm 8.0.2.1400

* 4137995 (Tracking ID: 4117350)

SYMPTOM:
The following error is observed when trying to import a disk group:

# vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg
VxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]

Please refer to system log for details.

DESCRIPTION:
The REPLICATED flag identifies a hardware replicated device, so the usereplicatedev option must be used to import a disk group on REPLICATED disks. Because that option was not provided, the issue was observed.

RESOLUTION:
REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.

Patch ID: VRTSvxvm-8.0.2.1200

* 4119267 (Tracking ID: 4113582)

SYMPTOM:
In VVR environments, reboot on VVR primary nodes results in RVG going into passthru mode.

DESCRIPTION:
Reboot of primary nodes resulted in missing write completions of updates on the primary SRL volume. After the node came up, last update received by VVR secondary was incorrectly compared with the missing updates.

RESOLUTION:
Fixed the check to correctly compare the last received update by VVR secondary.

* 4123065 (Tracking ID: 4113138)

SYMPTOM:
In CVR environments configured with a virtual hostname, after node reboots on the VVR Primary and Secondary, 'vradmin repstatus' invoked on the secondary site shows stale information with the following warning message:
VxVM VVR vradmin INFO V-5-52-1205 Primary is unreachable or RDS has configuration error. Displayed status information is from Secondary and can be out-of-date.

DESCRIPTION:
This issue occurs when an explicit RVG logowner is set on the CVM master, due to which the old connection of vradmind with its remote peer disconnects and a new connection is not formed.

RESOLUTION:
Fixed the issue with the vradmind connection with its remote peer.

* 4123069 (Tracking ID: 4116609)

SYMPTOM:
In CVR environments where replication is configured using virtual hostnames, vradmind on VVR primary loses connection with its remote peer after a planned RVG logowner change on the VVR secondary site.

DESCRIPTION:
vradmind on VVR primary was unable to detect a RVG logowner change on the VVR secondary site.

RESOLUTION:
Enabled primary vradmind to detect RVG logowner change on the VVR secondary site.

* 4123080 (Tracking ID: 4111789)

SYMPTOM:
In VVR/CVR environments, VVR would use any IP/NIC/network to replicate the data and may not utilize the high performance NIC/network configured for VVR.

DESCRIPTION:
The default value of tunable was set to 'any_ip'.

RESOLUTION:
The default value of tunable is set to 'replication_ip'.

* 4123345 (Tracking ID: 4113323)

SYMPTOM:
Existing package failed to load on RHEL 8.8 server.

DESCRIPTION:
RHEL 8.8 is a new release, and hence the VxVM module is compiled with this new kernel along with a few other changes.

RESOLUTION:
Compiled VxVM code against 8.8 kernel and made changes to make it compatible.

* 4124291 (Tracking ID: 4111254)

SYMPTOM:
vradmind dumps core with the following stack:

#3  0x00007f3e6e0ab3f6 in __assert_fail () from /root/cores/lib64/libc.so.6
#4  0x000000000045922c in RDS::getHandle ()
#5  0x000000000056ec04 in StatsSession::addHost ()
#6  0x000000000045d9ef in RDS::addRVG ()
#7  0x000000000046ef3d in RDS::createDummyRVG ()
#8  0x000000000044aed7 in PriRunningState::update ()
#9  0x00000000004b3410 in RVG::update ()
#10 0x000000000045cb94 in RDS::update ()
#11 0x000000000042f480 in DBMgr::update ()
#12 0x000000000040a755 in main ()

DESCRIPTION:
vradmind was trying to access a NULL pointer (Remote Host Name) in a rlink object, as the Remote Host attribute of the rlink hasn't been set.

RESOLUTION:
The issue has been fixed by making code changes.

* 4124794 (Tracking ID: 4114952)

SYMPTOM:
With VVR configured with a virtual hostname, after node reboots on DR site, 'vradmin pauserep' command failed with following error:
VxVM VVR vradmin ERROR V-5-52-421 vradmind server on host <host> not responding or hostname cannot be resolved.

DESCRIPTION:
The virtual host mapped to multiple IP addresses, and vradmind was using incorrectly mapped IP address.

RESOLUTION:
Fixed by using the correct mapping of IP address from the virtual host.

* 4124796 (Tracking ID: 4108913)

SYMPTOM:
Vradmind dumps core with the following stacks:
#3  0x00007f2c171be3f6 in __assert_fail () from /root/coredump/lib64/libc.so.6
#4  0x00000000005d7a90 in VList::concat () at VList.C:1017
#5  0x000000000059ae86 in OpMsg::List2Msg () at Msg.C:1280
#6  0x0000000000441bf6 in OpMsg::VList2Msg () at ../../include/Msg.h:389
#7  0x000000000043ec33 in DBMgr::processStatsOpMsg () at DBMgr.C:2764
#8  0x00000000004093e9 in process_message () at srvmd.C:418
#9  0x000000000040a66d in main () at srvmd.C:733

#0  0x00007f4d23470a9f in raise () from /root/core.Jan18/lib64/libc.so.6
#1  0x00007f4d23443e05 in abort () from /root/core.Jan18/lib64/libc.so.6
#2  0x00007f4d234b3037 in __libc_message () from /root/core.Jan18/lib64/libc.so.6
#3  0x00007f4d234ba19c in malloc_printerr () from /root/core.Jan18/lib64/libc.so.6
#4  0x00007f4d234bba9c in _int_free () from /root/core.Jan18/lib64/libc.so.6
#5  0x00000000005d5a0a in ValueElem::_delete_val () at Value.C:491
#6  0x00000000005d5990 in ValueElem::~ValueElem () at Value.C:480
#7  0x00000000005d7244 in VElem::~VElem () at VList.C:480
#8  0x00000000005d8ad9 in VList::~VList () at VList.C:1167
#9  0x000000000040a71a in main () at srvmd.C:743

#0  0x000000000040b826 in DList::head () at ../include/DList.h:82
#1  0x00000000005884c1 in IpmHandle::send () at Ipm.C:1318
#2  0x000000000056e101 in StatsSession::sendUCastStatsMsgToPrimary () at StatsSession.C:1157
#3  0x000000000056dea1 in StatsSession::sendStats () at StatsSession.C:1117
#4  0x000000000046f610 in RDS::collectStats () at RDS.C:6011
#5  0x000000000043f2ef in DBMgr::collectStats () at DBMgr.C:2799
#6  0x00007f98ed9131cf in start_thread () from /root/core.Jan26/lib64/libpthread.so.0
#7  0x00007f98eca4cdd3 in clone () from /root/core.Jan26/lib64/libc.so.6

DESCRIPTION:
There is a race condition in vradmind that may cause memory corruption and unpredictable result. Vradmind periodically forks a child thread to collect VVR statistic data and send them to the remote site. While the main thread may also be sending data using the same handler object, thus member variables in the handler object are accessed in parallel from multiple threads and may become corrupted.

RESOLUTION:
The code changes have been made to fix the issue.
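The corrected pattern is to serialize access to the shared handler so that the stats thread and the main thread never mutate it concurrently. A minimal Python sketch (illustrative of the pattern only, not vradmind code):

```python
import threading

class StatsSession:
    """A handler shared by the main thread and the stats thread."""
    def __init__(self):
        self._lock = threading.Lock()
        self._pending = []

    def send(self, msg):
        # Holding the lock keeps append/pop atomic across both senders.
        with self._lock:
            self._pending.append(msg)
            return self._pending.pop()

session = StatsSession()
results = []

def stats_collector():
    for i in range(1000):
        results.append(session.send(("stats", i)))

t = threading.Thread(target=stats_collector)
t.start()
for i in range(1000):
    results.append(session.send(("ctl", i)))
t.join()

print(len(results))  # 2000: every message was handled exactly once
```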

* 4125392 (Tracking ID: 4114193)

SYMPTOM:
'vradmin repstatus' command showed replication data status incorrectly as 'inconsistent'.

DESCRIPTION:
vradmind was relying on replication data status from both primary as well as DR site.

RESOLUTION:
Fixed replication data status to rely on the primary data status.

* 4125811 (Tracking ID: 4090772)

SYMPTOM:
vxconfigd/vx commands hang on secondary site in a CVR environment.

DESCRIPTION:
Due to a window with unmatched SRL positions, any application (e.g. fdisk) trying to open the secondary
RVG volume acquires a lock and waits for the SRL positions to match. Any VxVM transaction started during
this time must also wait for the same lock. The logowner node then panicked, which triggered the
logowner-change protocol; that protocol hung because the earlier transaction was stuck. As the
logowner-change protocol could not complete, there was no valid logowner, the SRL positions could not
match, and the resulting deadlock led to the vxconfigd and vx command hang.

RESOLUTION:
Changes were added to allow read operations on the volume even if the SRL positions are unmatched.
Write IOs are still blocked; only open() calls for read-only operations are allowed, so there are
no data consistency or integrity issues.

* 4128127 (Tracking ID: 4132265)

SYMPTOM:
Machine with NVMe disks panics with following stack: 
blk_update_request
blk_mq_end_request
dmp_kernel_nvme_ioctl
dmp_dev_ioctl
dmp_send_nvme_passthru_cmd_over_node
dmp_pr_do_nvme_read
dmp_pgr_read
dmpioctl
dmp_ioctl
blkdev_ioctl
__x64_sys_ioctl
do_syscall_64

DESCRIPTION:
Issue was applicable to setups with NVMe devices which do not support SCSI3-PR as an ioctl was called without checking correctly if SCSI3-PR was supported.

RESOLUTION:
Fixed the check to avoid calling the ioctl on devices which do not support SCSI3-PR.

* 4128835 (Tracking ID: 4127555)

SYMPTOM:
While adding secondary site using the 'vradmin addsec' command, the command fails with following error if diskgroup id is used in place of diskgroup name:
VxVM vxmake ERROR V-5-1-627 Error in field remote_dg=<dgid>: name is too long

DESCRIPTION:
Diskgroup names can be up to 32 characters long, whereas diskgroup ids can be up to 64 characters long. This was not handled by the vradmin commands.

RESOLUTION:
Fixed the vradmin commands to handle the longer diskgroup ids used in place of diskgroup names.
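The distinction being handled is just a length limit. A small sketch of the validation involved (limits as stated in the description; function and variable names are hypothetical):

```python
MAX_DG_NAME_LEN = 32  # diskgroup names
MAX_DG_ID_LEN = 64    # diskgroup ids can be twice as long

def valid_dg_reference(ref):
    """Accept either a diskgroup name or the longer diskgroup id."""
    return 0 < len(ref) <= MAX_DG_ID_LEN

name = "proddg"
dgid = "1689123456.12." + "h" * 48  # 62 chars: valid as an id, too long as a name

assert valid_dg_reference(name)
assert len(dgid) > MAX_DG_NAME_LEN and valid_dg_reference(dgid)
```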

* 4129766 (Tracking ID: 4128380)

SYMPTOM:
If VVR is configured using virtual hostname and 'vradmin resync' command is invoked from a DR site node, it fails with following error:
VxVM VVR vradmin ERROR V-5-52-405 Primary vradmind server disconnected.

DESCRIPTION:
When the virtual hostname maps to multiple IPs, the vradmind service on the DR site was not able to reach the VVR logowner node on the primary site due to an incorrect IP address mapping.

RESOLUTION:
Fixed vradmind to use correct mapped IP address of the primary vradmind.

* 4130402 (Tracking ID: 4107801)

SYMPTOM:
/dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.

DESCRIPTION:
vxpath-links is responsible for creating the hardware paths under /dev/vx/.dmp.
This script is invoked from /lib/udev/vxpath_links. The "/lib/udev" folder is not present in SLES15 SP3;
it is explicitly removed from SLES15 SP3 onwards, and Veritas-specific scripts/libraries are expected to be installed in a vendor-specific folder.

RESOLUTION:
Code changes have been made to invoke "/etc/vx/vxpath-links" instead of "/lib/udev/vxpath-links".

* 4130827 (Tracking ID: 4098391)

SYMPTOM:
Kernel panic is observed with following stack:

#6 [ffffa479c21cf6f0] page_fault at ffffffffb240130e
    [exception RIP: bfq_bio_bfqg+37]
    RIP: ffffffffb1e78135  RSP: ffffa479c21cf7a0  RFLAGS: 00010002
    RAX: 000000000000001f  RBX: 0000000000000000  RCX: ffffa479c21cf860
    RDX: ffff8bd779775000  RSI: ffff8bd795b2fa00  RDI: ffff8bd795b2fa00
    RBP: ffff8bd78f136000   R8: 0000000000000000   R9: ffff8bd793a5b800
    R10: ffffa479c21cf828  R11: 0000000000001000  R12: ffff8bd7796b6e60
    R13: ffff8bd78f136000  R14: ffff8bd795b2fa00  R15: ffff8bd7946ad0bc
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
#7 [ffffa479c21cf7b0] bfq_bic_update_cgroup at ffffffffb1e78458
#8 [ffffa479c21cf7e8] bfq_bio_merge at ffffffffb1e6f47f
#9 [ffffa479c21cf840] blk_mq_submit_bio at ffffffffb1e48c09
#10 [ffffa479c21cf8c8] submit_bio_noacct at ffffffffb1e3c7e3
#11 [ffffa479c21cf958] submit_bio at ffffffffb1e3c87b
#12 [ffffa479c21cf9a8] submit_bh_wbc at ffffffffb1d2536a
#13 [ffffa479c21cf9e0] block_read_full_page at ffffffffb1d27ac1
#14 [ffffa479c21cfa90] do_read_cache_page at ffffffffb1c2f7e5
#15 [ffffa479c21cfb48] read_part_sector at ffffffffb1e546b5
#16 [ffffa479c21cfb60] read_lba at ffffffffb1e595d2
#17 [ffffa479c21cfba8] efi_partition at ffffffffb1e59f4d
#18 [ffffa479c21cfcb8] blk_add_partitions at ffffffffb1e54377
#19 [ffffa479c21cfcf8] bdev_disk_changed at ffffffffb1d2a8fa
#20 [ffffa479c21cfd30] __blkdev_get at ffffffffb1d2c16c
#21 [ffffa479c21cfda0] blkdev_get at ffffffffb1d2c2b4
#22 [ffffa479c21cfdb8] __device_add_disk at ffffffffb1e5107e
#23 [ffffa479c21cfe20] dmp_register_disk at ffffffffc0e68ae7 [vxdmp]
#24 [ffffa479c21cfe50] dmp_reconfigure_db at ffffffffc0e8d8bd [vxdmp]
#25 [ffffa479c21cfe80] dmpioctl at ffffffffc0e75cd5 [vxdmp]
#26 [ffffa479c21cfe90] dmp_ioctl at ffffffffc0e9d469 [vxdmp]
#27 [ffffa479c21cfea8] blkdev_ioctl at ffffffffb1e4ed19
#28 [ffffa479c21cfef0] block_ioctl at ffffffffb1d2a719
#29 [ffffa479c21cfef8] ksys_ioctl at ffffffffb1cfb262
#30 [ffffa479c21cff30] __x64_sys_ioctl at ffffffffb1cfb296
#31 [ffffa479c21cff38] do_syscall_64 at ffffffffb1a0538b
#32 [ffffa479c21cff50] entry_SYSCALL_64_after_hwframe at ffffffffb240008c

DESCRIPTION:
VxVM causes a kernel panic because of a null pointer dereference in kernel code when the BFQ disk IO scheduler is used. This is observed on SLES15 SP3 with minor kernel >= 5.3.18-150300.59.68.1 and SLES15 SP4 with minor kernel >= 5.14.21-150400.24.11.1.

RESOLUTION:
It is recommended to use mq-deadline as the IO scheduler. Code changes have been made to automatically change the disk IO scheduler to mq-deadline.

* 4130947 (Tracking ID: 4124725)

SYMPTOM:
With VVR configured using virtual hostnames, 'vradmin delpri' command could hang after doing the RVG cleanup.

DESCRIPTION:
The 'vradmin delsec' command executed prior to 'vradmin delpri' had left the cleanup in an incomplete state, causing the next cleanup command to hang.

RESOLUTION:
Fixed to make sure that 'vradmin delsec' command executes its workflow correctly.

Patch ID: VRTSaslapm 8.0.2.1200

* 4132966 (Tracking ID: 4116868)

SYMPTOM:
Support for ASLAPM on RHEL 8.8

DESCRIPTION:
RHEL 8.8 is a new release, and hence the APM module should be recompiled with the new kernel.

RESOLUTION:
Compiled APM with RHEL 8.8 kernel.

Patch ID: VRTSvxvm-8.0.2.1100

* 4125322 (Tracking ID: 4119950)

SYMPTOM:
Vulnerabilities have been reported in third party components, [curl and libxml] that are used by VxVM.

DESCRIPTION:
The current versions of the third-party components [curl and libxml] used by VxVM have been reported with security vulnerabilities that need to be addressed.

RESOLUTION:
[curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

Patch ID: VRTSaslapm 8.0.2.1100

* 4125322 (Tracking ID: 4119950)

SYMPTOM:
Vulnerabilities have been reported in third party components, [curl and libxml] that are used by VxVM.

DESCRIPTION:
The current versions of the third-party components [curl and libxml] used by VxVM have been reported with security vulnerabilities that need to be addressed.

RESOLUTION:
[curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

Patch ID: VRTSveki-8.0.2.1600

* 4184573 (Tracking ID: 4184575)

SYMPTOM:
Veki failed to install on Rocky Linux machines due to an improper package version.

DESCRIPTION:
The Veki package version for Rocky Linux is greater than the IS 8.0.2 U4 version.

RESOLUTION:
The Veki package version is incremented to resolve the issue.

Patch ID: VRTSveki-8.0.2.1400

* 4135795 (Tracking ID: 4135683)

SYMPTOM:
Enhancing debugging capability of VRTSveki package installation

DESCRIPTION:
Enhancing debugging capability of VRTSveki package installation using temporary debug logs for SELinux policy file installation.

RESOLUTION:
Code is changed to store output of VRTSveki SELinux policy file installation in temporary debug logs.

* 4140468 (Tracking ID: 4152368)

SYMPTOM:
Some incidents do not appear in changelog because their cross-references are not properly processed

DESCRIPTION:
Not every cross-reference is a parent-child relationship. In such cases, 'top' is not present and the changelog script ends execution.

RESOLUTION:
All cross-references are now traversed; the parent-child relationship is checked only if it is present, and then 'top' is found.

Patch ID: VRTSveki-8.0.2.1200

* 4120300 (Tracking ID: 4110457)

SYMPTOM:
Veki packaging failure due to missing storageapi-specific files.

DESCRIPTION:
While creating the build area for components such as GLM, GMS, ORAODM, unixvm, and VxFS, the Veki build area creation was failing because the storageapi changes were not handled in the Veki mk-symlink and build scripts.

RESOLUTION:
Added support for creation of the storageapi build area, storageapi packaging changes via Veki, and storageapi build via Veki from the Veki makefiles. This helps package the storageapi along with Veki and resolves all interdependencies.

* 4130816 (Tracking ID: 4130815)

SYMPTOM:
VEKI rpm does not have changelog

DESCRIPTION:
A changelog in the rpm helps find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to VEKI rpm.

* 4132635 (Tracking ID: 4113340)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 8(RHEL8.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
8(RHEL8.8) is now introduced.

Patch ID: VRTSveki-8.0.2.1100

* 4118568 (Tracking ID: 4110457)

SYMPTOM:
Veki packaging failure due to missing storageapi-specific files.

DESCRIPTION:
While creating the build area for components such as GLM, GMS, ORAODM, unixvm, and VxFS, the Veki build area creation was failing because the storageapi changes were not handled in the Veki mk-symlink and build scripts.

RESOLUTION:
Added support for creation of the storageapi build area, storageapi packaging changes via Veki, and storageapi build via Veki from the Veki makefiles. This helps package the storageapi along with Veki and resolves all interdependencies.

Patch ID: VRTSgms-8.0.2.1900

* 4181486 (Tracking ID: 4181235)

SYMPTOM:
Packaging of the FS components (GLM, GMS, ODM, FSadv) was failing due to a wrong secure boot keys location.

DESCRIPTION:
Packaging of the FS components (GLM, GMS, ODM, FSadv) was failing due to a wrong secure boot keys location.

RESOLUTION:
Code change has been done to use the correct secure boot keys location.

Patch ID: VRTSgms-8.0.2.1500

* 4140460 (Tracking ID: 4152373)

SYMPTOM:
Some incidents do not appear in changelog because their cross-references are not properly processed

DESCRIPTION:
Not every cross-reference is a parent-child relationship. In such cases, 'top' is not present and the changelog script ends execution.

RESOLUTION:
All cross-references are now traversed; the parent-child relationship is checked only if it is present, and then 'top' is found.

Patch ID: VRTSgms-8.0.2.1200

* 4126266 (Tracking ID: 4125932)

SYMPTOM:
A 'no symbol version' warning for ki_get_boot appears in dmesg after SFCFSHA configuration.

DESCRIPTION:
A 'no symbol version' warning for ki_get_boot appears in dmesg after SFCFSHA configuration.

RESOLUTION:
Updated the code to build GMS with the correct kbuild symbols.

* 4127527 (Tracking ID: 4107112)

SYMPTOM:
The GMS module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified the existing modinst-gms script to consider the kernel build version in the exact-version module calculation.

* 4127528 (Tracking ID: 4107753)

SYMPTOM:
The GMS module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified the existing modinst-gms script to consider the kernel build version in the best-fit module version calculation if the exact-version module is not present.
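
The best-fit calculation described above can be sketched roughly as follows. The selection rule (highest available module build that is not newer than the running kernel) and the version strings are illustrative assumptions, not the actual modinst-gms logic.

```shell
# Pick the best-fit module version: the highest available build that does not
# exceed the running kernel version (sample version strings, GNU sort -V).
kernel="5.14.0-70.36.1"
available="5.14.0-70.13.1
5.14.0-70.30.1
5.14.0-162.6.1"
best=$(printf '%s\n%s\n' "$available" "$kernel" | sort -V |
       awk -v k="$kernel" '$0 == k { print prev; exit } { prev = $0 }')
echo "$best"   # prints: 5.14.0-70.30.1
```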

* 4129708 (Tracking ID: 4129707)

SYMPTOM:
GMS rpm does not have changelog

DESCRIPTION:
A changelog in the rpm helps find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to GMS rpm.

Patch ID: VRTSglm-8.0.2.1900

* 4181484 (Tracking ID: 4181235)

SYMPTOM:
Packaging of the FS components (GLM, GMS, ODM, FSadv) was failing due to a wrong secure boot keys location.

DESCRIPTION:
Packaging of the FS components (GLM, GMS, ODM, FSadv) was failing due to a wrong secure boot keys location.

RESOLUTION:
Code change has been done to use the correct secure boot keys location.

Patch ID: VRTSglm-8.0.2.1500

* 4138274 (Tracking ID: 4126298)

SYMPTOM:
The system may panic due to an 'unable to handle kernel paging request' error, and memory corruption could occur.

DESCRIPTION:
A panic may occur due to a race between a spurious wakeup and the normal wakeup of a thread waiting for a GLM lock grant. Due to the race, the spurious wakeup would have already freed memory, and the normal wakeup thread might then pass that freed and reused memory to the wake_up function, causing memory corruption and a panic.

RESOLUTION:
Fixed the race between the spurious wakeup and normal wakeup threads by making wake_up lock-protected.

Patch ID: VRTSglm-8.0.2.1200

* 4124912 (Tracking ID: 4118297)

SYMPTOM:
The GLM module fails to load on RHEL9.2.

DESCRIPTION:
This issue occurs due to changes in the RHEL9.2 kernel.

RESOLUTION:
GLM module is updated to accommodate the changes in the kernel and load as expected on RHEL9.2.

* 4127524 (Tracking ID: 4107114)

SYMPTOM:
The GLM module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified the existing modinst-glm script to consider the kernel build version in the exact-version module calculation.

* 4127525 (Tracking ID: 4107754)

SYMPTOM:
The GLM module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified the existing modinst-glm script to consider the kernel build version in the best-fit module version calculation if the exact-version module is not present.

* 4127626 (Tracking ID: 4127627)

SYMPTOM:
The GLM module fails to load on RHEL9.0 minor kernel 5.14.0-70.36.1

DESCRIPTION:
This issue occurs due to changes in the RHEL9.0 minor kernel.

RESOLUTION:
Updated GLM to support RHEL9.0 minor kernel 5.14.0-70.36.1

* 4129715 (Tracking ID: 4129714)

SYMPTOM:
GLM rpm does not have changelog

DESCRIPTION:
A changelog in the rpm helps find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to GLM rpm.

Patch ID: VRTSfsadv-8.0.2.2500

* 4188577 (Tracking ID: 4188576)

SYMPTOM:
Security vulnerabilities exist in the Curl third-party components used by VxFS.

DESCRIPTION:
VxFS uses the Curl third-party component, in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use a newer version (8.12.1v) of this third-party component, in which the security vulnerabilities have been addressed.

Patch ID: VRTSspt-8.0.2.1300

* 4139975 (Tracking ID: 4149462)

SYMPTOM:
A new script, list_missing_incidents.py, is provided, which compares rpm changelogs and lists the incidents missing in the new version.

DESCRIPTION:
list_missing_incidents.py compares the changelog of the old-version rpm with that of the new-version rpm and lists the incidents missing in the new-version rpm, if any. For details of the script, refer to README.list_missing_incidents in the VRTSspt package.

RESOLUTION:
list_missing_incidents.py compares the changelog of the old-version rpm with that of the new-version rpm and lists the incidents missing in the new-version rpm, if any. For details of the script, refer to README.list_missing_incidents in the VRTSspt package.
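
The comparison idea can be illustrated with a toy example. This is not the actual list_missing_incidents.py logic; the incident IDs and file names are made up for the sketch.

```shell
# Incident IDs extracted from the old and new rpm changelogs (canned data):
printf '4107112\n4107753\n4129707\n' > old_incidents.txt
printf '4107753\n4129707\n' > new_incidents.txt
# Lines of old_incidents.txt with no exact match in new_incidents.txt are
# the incidents missing from the new version:
missing=$(grep -Fxv -f new_incidents.txt old_incidents.txt)
echo "$missing"   # prints: 4107112
```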

* 4146957 (Tracking ID: 4149448)

SYMPTOM:
A new script, check_incident_inchangelog.py, is provided, which checks whether an incident abstract is present in the changelog.

DESCRIPTION:
If a changelog is present in the rpm or the installed package, the script provided in VRTSspt can check whether an incident abstract is present in the changelog. For details of the script, refer to README.check_incident_inchangelog in the VRTSspt package.

RESOLUTION:
If a changelog is present in the rpm or the installed package, the script provided in VRTSspt can check whether an incident abstract is present in the changelog. For details of the script, refer to README.check_incident_inchangelog in the VRTSspt package.
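
The abstract check can be sketched as a simple text search. This is illustrative only, not the actual check_incident_inchangelog.py logic; the changelog text is canned.

```shell
# Check whether an incident's abstract string appears in a changelog dump.
changelog='* 4129707 GMS rpm does not have changelog
* 4127627 The GLM module fails to load on RHEL9.0 minor kernel'
abstract='GMS rpm does not have changelog'
if printf '%s\n' "$changelog" | grep -qF "$abstract"; then
    result=present
else
    result=absent
fi
echo "$result"   # prints: present
```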

Patch ID: VRTSrest-3.0.10

* 4124960 (Tracking ID: 4130028)

SYMPTOM:
GET APIs of vm and filesystem were failing because of a datatype mismatch between the spec and the actual output when the client generated client code from the specs.

DESCRIPTION:
The GET API was returning a response different from what was specified in the specs.

RESOLUTION:
Changed the response of the GET APIs of vm and fs to match the specs. After this change, client-generated code does not get an error.

* 4124963 (Tracking ID: 4127170)

SYMPTOM:
While modifying the system list for a service group that has a dependency, the API would fail.

DESCRIPTION:
While modifying the system list for a service group that has a dependency, the API would fail. As a result, the system list could not be modified if the service group had a dependency on another service group.

RESOLUTION:
The API code is now modified to allow modifying the system list for a service group when a dependency exists.

* 4124964 (Tracking ID: 4127167)

SYMPTOM:
DELETE rvg was failing when replication was in progress

DESCRIPTION:
DELETE rvg was failing when replication was in progress, so the -force option is now used in deletion so that the RVG is deleted successfully. Also, a new 'online' option is added in PATCH of rvg so that the user can explicitly request an online add-volume operation.

RESOLUTION:
DELETE rvg was failing when replication was in progress, so the -force option is now used in deletion so that the RVG is deleted successfully. Also, a new 'online' option is added in PATCH of rvg so that the user can explicitly request an online add-volume operation.

* 4124966 (Tracking ID: 4127171)

SYMPTOM:
While getting excluded disks through the Systems API, the href contained the node list instead of the node name. When the user tried a GET on that link, the request failed.

DESCRIPTION:
The GET system list API was returning wrong reference links for excluded disks. When the user tried a GET on such a link, the request failed.

RESOLUTION:
The correct href for excluded disks is now returned from the GET system API.

* 4124968 (Tracking ID: 4127168)

SYMPTOM:
In a GET request on rvgs, not all data volumes in the RVGs were listed correctly.

DESCRIPTION:
The command used to get the list of data volumes of an RVG was not returning all data volumes, because of which the API did not return all the data volumes of the RVG.

RESOLUTION:
Changed the command used to get the data volumes of an RVG. A GET on an rvg now returns all the data volumes associated with that RVG.

* 4125162 (Tracking ID: 4127169)

SYMPTOM:
The GET disks API fails when CVM is down on any node.

DESCRIPTION:
When a node is out of the CVM cluster, the GET disks API fails and does not give proper output.

RESOLUTION:
Added the appropriate checks to get the proper list of disks from the GET disks API.

Patch ID: VRTSpython-3.9.16 P07

* 4179488 (Tracking ID: 4179487)

SYMPTOM:
Multiple vulnerable third-party modules need to be upgraded, and unused files cleaned up, in the .pyenv directory under VRTSpython for IS 8.0.2.

DESCRIPTION:
Multiple vulnerable third-party modules need to be upgraded, and unused files cleaned up, in the .pyenv directory under VRTSpython for IS 8.0.2.

RESOLUTION:
Upgraded multiple vulnerable third-party modules and cleaned up unused files in the .pyenv directory under VRTSpython for IS 8.0.2.

Patch ID: VRTSsfcpi-8.0.2.1500

* 4115603 (Tracking ID: 4115601)

SYMPTOM:
On Solaris, the publisher list gets displayed during the InfoScale start, stop, and uninstall processes, and the publisher list displayed during install and upgrade is not unique.

DESCRIPTION:
On Solaris, the publisher list gets displayed during the InfoScale start, stop, and uninstall processes, and the publisher list displayed during install and upgrade is not unique.

RESOLUTION:
Installer code modified to skip the publisher list during the start, stop, and uninstall processes and to display a unique publisher list during install and upgrade.

* 4115707 (Tracking ID: 4126025)

SYMPTOM:
While performing a full upgrade of the secondary site, 'SRL missing' and 'RLINK dissociated' errors are observed.

DESCRIPTION:
While performing a full upgrade of the secondary site, 'SRL missing' and 'RLINK dissociated' errors are observed. The SRL volume[s] is[are] in a recovery state, which leads to a failure in associating the SRL volume with the RVG.

RESOLUTION:
Installer code modified to wait for recovery tasks to complete on all volumes before associating the SRL with the RVG.

* 4115874 (Tracking ID: 4124871)

SYMPTOM:
Configuration of Vxfen fails for a three-node cluster on VMs in different AZs.

DESCRIPTION:
Configuration of Vxfen fails for a three-node cluster on VMs in different AZs.

RESOLUTION:
Added the 'set-strictsrc 0' tunable to the llttab file in the installer.
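
The resulting llttab entry looks roughly like the fragment below; the surrounding set-node and link lines are omitted, and the file location /etc/llttab is the usual one.

```
# Line appended by the installer to /etc/llttab:
set-strictsrc 0
```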

* 4116368 (Tracking ID: 4123645)

SYMPTOM:
During rolling upgrade response file creation, CPI asks to unmount the VxFS file system.

DESCRIPTION:
During rolling upgrade response file creation, CPI asks to unmount the VxFS file system.

RESOLUTION:
Installer code modified to exclude the VxFS file system unmount process.

* 4116406 (Tracking ID: 4123654)

SYMPTOM:
The installer gives a null swap space message.

DESCRIPTION:
The installer gives a null swap space message because the swap space requirement is no longer needed.

RESOLUTION:
Installer code modified and removed swap space message.

* 4116879 (Tracking ID: 4126018)

SYMPTOM:
During addnode, the installer fails to mount resources.

DESCRIPTION:
During addnode, the installer fails to mount resources because the new node was not added to the child service groups of the resources.

RESOLUTION:
Installer code modified to add the new node to all service groups.

* 4116995 (Tracking ID: 4123657)

SYMPTOM:
While performing a full upgrade, the installer did not upgrade the Cluster Protocol version post upgrade.

DESCRIPTION:
While performing a full upgrade, the installer did not upgrade the Cluster Protocol version post upgrade.

RESOLUTION:
Installer code modified to retry the Cluster Protocol version upgrade until it completes.

* 4117956 (Tracking ID: 4104627)

SYMPTOM:
The installer supports a maximum of 5 patches. The user cannot provide more than 5 patches for installation.

DESCRIPTION:
The latest bundle package installer supports a maximum of 5 patches. The user cannot provide more than 5 patches for installation.

RESOLUTION:
The installer code is modified to support a maximum of 10 patches for installation.

* 4121961 (Tracking ID: 4123908)

SYMPTOM:
Installer does not register InfoScale hosts to the VIOM Management Server after InfoScale configuration.

DESCRIPTION:
Installer does not register InfoScale hosts to the VIOM Management Server after InfoScale configuration.

RESOLUTION:
Installer is enhanced to register InfoScale hosts to VIOM Management Server by using both menu-driven program and responsefile. To register InfoScale hosts to VIOM Management Server by using responsefile, $CFG{gendeploy_path} parameter must be used. The value for $CFG{gendeploy_path} is the absolute path of gendeploy script from the local node.
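
A responsefile entry for this would look like the following sketch; the path shown is a placeholder for the actual location of the gendeploy script on the local node.

```
# In the installer responsefile (path is illustrative):
$CFG{gendeploy_path} = "/tmp/gendeploy.pl";
```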

* 4122442 (Tracking ID: 4122441)

SYMPTOM:
When the product starts through the response file, installer displays keyless licensing information on screen.

DESCRIPTION:
When the product starts through the response file, installer displays keyless licensing information on screen.

RESOLUTION:
Licensing code modified to skip licensing information during the product start process.

* 4122749 (Tracking ID: 4122748)

SYMPTOM:
On Linux, the had service fails to start during rolling upgrade from InfoScale 7.4.1 or lower to a higher InfoScale version.

DESCRIPTION:
The VCS protocol version is supported from InfoScale 7.4.2 onwards. During the rolling upgrade process from 7.4.1 or lower to a higher InfoScale version, due to wrong release matrix data, the installer tries to perform a single-phase rolling upgrade instead of a two-phase rolling upgrade, and the had service fails to start.

RESOLUTION:
The installer is enhanced to perform a two-phase rolling upgrade if the installed InfoScale version is 7.4.1 or older.

* 4126470 (Tracking ID: 4130003)

SYMPTOM:
Installer failed to start vxfs_replication while performing configuration of Enterprise on OEL 9.2.

DESCRIPTION:
Installer failed to start vxfs_replication while performing configuration of Enterprise on OEL 9.2.

RESOLUTION:
Installer code modified to retry starting vxfs_replication during configuration of Enterprise.

* 4127111 (Tracking ID: 4127117)

SYMPTOM:
On a Linux system, the InfoScale installer configures the GCO(Global Cluster option) only with a virtual IP address.

DESCRIPTION:
On a Linux system, the GCO (Global Cluster Option) needs to be configurable with a hostname by using the InfoScale installer on different cloud platforms.

RESOLUTION:
Installer prompts for the hostname to configure the GCO.

* 4130377 (Tracking ID: 4131703)

SYMPTOM:
Installer performs dmp include/exclude operations if /etc/vx/vxvm.exclude is present on the system.

DESCRIPTION:
Installer performs dmp include/exclude operations if /etc/vx/vxvm.exclude is present on the system, even when this is not required.

RESOLUTION:
Removed unnecessary dmp include/exclude operations which are launched after starting services in the container environment.

* 4131315 (Tracking ID: 4131314)

SYMPTOM:
Environment="VCS_ENABLE_PUBSEC_LOG=0" was added by CPI in the [Install] section of the service file instead of the [Service] section.

DESCRIPTION:
Environment="VCS_ENABLE_PUBSEC_LOG=0" was added by CPI in the [Install] section of the service file instead of the [Service] section.

RESOLUTION:
Environment="VCS_ENABLE_PUBSEC_LOG=0" is now added in the [Service] section of the service file.
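
The corrected placement in the systemd unit file looks like the fragment below; the other lines of the unit are omitted for brevity.

```
# Corrected placement in the service unit file:
[Service]
Environment="VCS_ENABLE_PUBSEC_LOG=0"
```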

* 4131684 (Tracking ID: 4131682)

SYMPTOM:
On SunOS, installer prompts the user to install 'bourne' package if it is not available.

DESCRIPTION:
The installer had a dependency on '/usr/sunos/bin/sh', which is from the 'bourne' package. The 'bourne' package is deprecated with the latest SRUs.

RESOLUTION:
Installer code is updated to use '/usr/bin/sh' instead of '/usr/sunos/bin/sh', thus removing the bourne package dependency.

* 4132411 (Tracking ID: 4139946)

SYMPTOM:
Rolling upgrade fails if the recommended upgrade path is not followed.

DESCRIPTION:
Rolling upgrade fails if the recommended upgrade path is not followed.

RESOLUTION:
Installer code fixed to resolve rolling upgrade issues if recommended upgrade path is not followed.

* 4133019 (Tracking ID: 4135602)

SYMPTOM:
Installer failed to update the main.cf file with the VCS user while reconfiguring a secured cluster to a non-secured cluster.

DESCRIPTION:
Installer failed to update the main.cf file with the VCS user while reconfiguring a secured cluster to a non-secured cluster.

RESOLUTION:
Installer code checks are modified to update the VCS user in the main.cf file during reconfiguration of the cluster from secured to non-secured.

* 4133469 (Tracking ID: 4136432)

SYMPTOM:
Installer failed to add a node to a cluster running a higher version of InfoScale.

DESCRIPTION:
Installer failed to add a node to a cluster running a higher version of InfoScale.

RESOLUTION:
Installer code modified to enable adding a node to a cluster running a higher version of InfoScale.

* 4135015 (Tracking ID: 4135014)

SYMPTOM:
The CPI installer asks "Would you like to install InfoScale" after "./installer -precheck" is done.

DESCRIPTION:
The CPI installer asks "Would you like to install InfoScale" after "./installer -precheck" is done. It should not ask for installation after the precheck is completed.

RESOLUTION:
Installer code modified to skip the installation question after the precheck is completed.

* 4136211 (Tracking ID: 4139940)

SYMPTOM:
Installer failed to get the package version and failed due to a PADV mismatch.

DESCRIPTION:
Installer failed to get the package version and failed due to a PADV mismatch.

RESOLUTION:
Installer code modified to retrieve the proper package version.

* 4139609 (Tracking ID: 4142877)

SYMPTOM:
The missing HF list is not displayed during an upgrade by using the patch release.

DESCRIPTION:
The missing HF list is not displayed during an upgrade by using the patch release.

RESOLUTION:
Added prechecks in the installer to identify missing HFs and accept an action from the customer.

* 4140512 (Tracking ID: 4140542)

SYMPTOM:
Installer failed during rolling upgrade for the patch.

DESCRIPTION:
Rolling upgrade failed in the patch installer due to the build version during the mixed RU check.

RESOLUTION:
Installer code modified to handle the build version during the mixed RU check.

* 4157440 (Tracking ID: 4158841)

SYMPTOM:
The installer supports VRTSrest version changes.

DESCRIPTION:
The installer now supports VRTSrest version changes.

RESOLUTION:
The installer code has been modified to enable the support for VRTSrest version changes.

* 4157696 (Tracking ID: 4157695)

SYMPTOM:
During upgrade from IS 7.4.2 U7 to IS 8.0.2, the VRTSpython version upgrade fails.

DESCRIPTION:
During upgrade from IS 7.4.2 U7 to IS 8.0.2, the VRTSpython version upgrade fails.

RESOLUTION:
Validation changes introduced for the VRTSpython package version.

* 4158650 (Tracking ID: 4164760)

SYMPTOM:
The installer does not check the DVD package version against the available patch package version.

DESCRIPTION:
The installer does not check the DVD package version against the available patch package version, due to which it fails to install the latest packages.

RESOLUTION:
The installer code has been modified to check the DVD package version against the available patch package version so that the latest packages are installed.

* 4159940 (Tracking ID: 4159942)

SYMPTOM:
The installer used to update file permissions.

DESCRIPTION:
The installer used to update existing file permissions, which is not desired.

RESOLUTION:
The installer code has been modified so that it does not update existing file permissions.

* 4161937 (Tracking ID: 4160983)

SYMPTOM:
On Solaris, after upgrading InfoScale to an ABE, the VxFS modules do not load properly if the current BE is booted.

DESCRIPTION:
On Solaris, the VxFS modules are getting removed from the current BE while upgrading InfoScale to the ABE.

RESOLUTION:
Installer code modified to address this issue.

* 4164945 (Tracking ID: 4164958)

SYMPTOM:
The installer makes entries in the config files of EO tunables for the security patch.

DESCRIPTION:
The installer does not check the package version for the security patch, which results in entries being made in the config files of EO tunables.

RESOLUTION:
The installer code has been modified to check the package version for the security patch before making entries in the config files of EO tunables.

* 4165118 (Tracking ID: 4171259)

SYMPTOM:
The installer fails to add a new node to the cluster due to a protocol version mismatch.

DESCRIPTION:
The installer fails to add a new node to the cluster due to a protocol version mismatch.

RESOLUTION:
The installer code has been modified to allow the node to be added to the cluster.

* 4165727 (Tracking ID: 4165726)

SYMPTOM:
An error message such as "A more recent version of InfoScale Availability, 8.0.2.*, is already installed" appears when the user tries to upgrade the patch on GA using RU, and when the user tries to upgrade the product from InfoScale Availability to InfoScale Enterprise.

DESCRIPTION:
An error message such as "A more recent version of InfoScale Availability, 8.0.2.*, is already installed" appears when the user tries to upgrade the patch on GA using RU, and when the user tries to upgrade the product from InfoScale Availability to InfoScale Enterprise.

RESOLUTION:
Installer code modified.

* 4165730 (Tracking ID: 4165726)

SYMPTOM:
An error message such as "A more recent version of InfoScale Availability, 8.0.2.*, is already installed" appears when the user tries to upgrade the patch on GA using RU, and when the user tries to upgrade the product from InfoScale Availability to InfoScale Enterprise.

DESCRIPTION:
An error message such as "A more recent version of InfoScale Availability, 8.0.2.*, is already installed" appears when the user tries to upgrade the patch on GA using RU, and when the user tries to upgrade the product from InfoScale Availability to InfoScale Enterprise.

RESOLUTION:
Installer code modified.

* 4165840 (Tracking ID: 4165833)

SYMPTOM:
Unable to install InfoScale packages on Solaris using IPS repository.

DESCRIPTION:
InfoScale installer on Solaris did not support IPS repository-based installation. This feature has now been added to InfoScale on Solaris.

RESOLUTION:
InfoScale installer is modified to support IPS repository on Solaris.
Use the new option named "-ipsrepo" to define the IPS repository path or the repo name for performing IPS-based installation on Solaris.
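
An invocation would look like the following sketch; the repository path and name are placeholders, not values from the release.

```
# IPS-based installation using a repository path or a repository name:
./installer -ipsrepo /path/to/ips-repo
./installer -ipsrepo myrepo
```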

* 4166659 (Tracking ID: 4171256)

SYMPTOM:
The installer does not allow upgrading a VVR host that has the primary RVG role.

DESCRIPTION:
The installer does not allow upgrading a VVR host that has the primary RVG role.

RESOLUTION:
The installer code has been modified to relax the check on the VVR RVG role during upgrade.

* 4166980 (Tracking ID: 4166979)

SYMPTOM:
The VMwareDisks agent is unable to start and run after an upgrade.

DESCRIPTION:
The VMwareDisks agent is unable to start and run after an upgrade.

RESOLUTION:
Installer code modified.

* 4167308 (Tracking ID: 4171253)

SYMPTOM:
IS 8.0.2 U3: CPI does not ask to set the EO tunable during an InfoScale upgrade.

DESCRIPTION:
The InfoScale installer CPI does not ask to set the EO tunable during an InfoScale upgrade.

RESOLUTION:
The InfoScale installer is modified to prompt for the EO tunable during upgrade.

* 4177618 (Tracking ID: 4184454)

SYMPTOM:
DBED-related checks are added to the installer code.

DESCRIPTION:
DBED-related checks are added to the installer code.

RESOLUTION:
Installer code modified.

* 4178007 (Tracking ID: 4177807)

SYMPTOM:
The message for CPC fencing not being written to the environment file /etc/vxenviron is changed.

DESCRIPTION:
The message for CPC fencing not being written to the environment file /etc/vxenviron is changed.

RESOLUTION:
Installer code modified.

* 4181039 (Tracking ID: 4181037)

SYMPTOM:
The /etc/vx/vxdbed/dbedenv file is made accessible.

DESCRIPTION:
The /etc/vx/vxdbed/dbedenv file is made accessible.

RESOLUTION:
Installer code modified.

* 4181282 (Tracking ID: 4181279)

SYMPTOM:
When the user installs the rpm packages via yum, configuring a cluster fails after installation.

DESCRIPTION:
When the user installs the rpm packages via yum, configuring a cluster fails after installation.

RESOLUTION:
Installer code modified.

* 4181787 (Tracking ID: 4181786)

SYMPTOM:
In the network interface configuration file, the interface method is changed from dhcp to manual if LLT over UDP is configured.

DESCRIPTION:
If a public (dhcp) interface is used for configuring LLT over UDP, CPI changes the interface method to manual.

RESOLUTION:
Installer code modified.

* 4184438 (Tracking ID: 4186642)

SYMPTOM:
The installer asks the VIOM registration question in case of the start option.

DESCRIPTION:
The installer asks the VIOM registration question in case of the start option.

RESOLUTION:
The installer code has been modified to not ask VIOM registration questions in case of the start option.

Patch ID: VRTSvlic-4.01.802.002

* 4173483 (Tracking ID: 4173483)

SYMPTOM:
Security vulnerability in SLIC component

DESCRIPTION:
Security vulnerability in the SLIC component version 3.5.

RESOLUTION:
Upgraded the SLIC component to version 3.7.

Patch ID: VRTSsfmh-vom-HF0802551

* 4189545 (Tracking ID: 4189544)

SYMPTOM:
N/A

DESCRIPTION:
VIOM 8.0.2.551 VRTSsfmh package for InfoScale 8.0.2 Update releases

RESOLUTION:
N/A

Patch ID: VRTSdbac-8.0.2.1400

* 4161967 (Tracking ID: 4157901)

SYMPTOM:
The vcsmmconfig.log file does not show file permissions 600 when the EO tunable VCS_ENABLE_PUBSEC_LOG_PERM is set to 0.

DESCRIPTION:
The vcsmmconfig.log file does not show file permissions 600 when the EO tunable VCS_ENABLE_PUBSEC_LOG_PERM is set to 0.

RESOLUTION:
Changes made to set the file permission of vcsmmconfig.log as per the EO tunable VCS_ENABLE_PUBSEC_LOG_PERM.

Patch ID: VRTSdbac-8.0.2.1300

* 4153145 (Tracking ID: 4153140)

SYMPTOM:
Qualification of Veritas InfoScale Availability on the latest kernels for the RHEL/SLES platforms is needed.

DESCRIPTION:
Recompilation of the Veritas InfoScale Availability packages with the latest changes was needed.

RESOLUTION:
Recompiled the Veritas InfoScale Availability packages with the latest changes and confirmed qualification on the latest kernels of the RHEL/SLES platforms.

Patch ID: VRTSdbac-8.0.2.1200

* 4132631 (Tracking ID: 4113340)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 8(RHEL8.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
8(RHEL8.8) is now introduced.

Patch ID: VRTSvcsea-8.0.2.2300

* 4189548 (Tracking ID: 4189547)

SYMPTOM:
Invalid details are mentioned while executing a fire drill of the Oracle agent with Oracle 21c.

DESCRIPTION:
In the Oracle fire drill scenario, the Filesystem column entry in the output of 'df -k' was supposed to be compared with "/" but was instead compared with a different value ($fs).

RESOLUTION:
$basedir, i.e. the Filesystem column entry of 'df -k', is now compared with "/".
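
The corrected comparison can be illustrated with a canned 'df -k' output. The sample device name and sizes are made up, and the choice of the mount-point column is an assumption for illustration; this is not the agent's actual code.

```shell
# Extract the last column of the root filesystem line of a canned df -k
# output and compare it with "/", mirroring the fixed fire-drill check.
df_out='Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/rootvg-root 52403200 8123456 44279744 16% /'
basedir=$(printf '%s\n' "$df_out" | awk 'NR == 2 { print $NF }')
if [ "$basedir" = "/" ]; then echo match; fi   # prints: match
```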

Patch ID: VRTSvcsea-8.0.2.1700

* 4180094 (Tracking ID: 4180091)

SYMPTOM:
The offline script times out due to the delay introduced by the fuser check.

DESCRIPTION:
The ASMDG resource times out while offlining under VCS control, although the offline happens quickly outside of VCS control. After executing the query "alter diskgroup <DISKGROUPS> dismount;", the offline script runs a fuser check on the underlying disks to see if any device is still in use, and hangs.

RESOLUTION:
The SQL query used to get the list of disks on which to run the fuser check is altered, and the operator precedence in the SQL statement is corrected.

Patch ID: VRTSvcsea-8.0.2.1600

* 4088599 (Tracking ID: 4088595)

SYMPTOM:
The hapdbmigrate utility fails to online the Oracle service group due to a timing issue.

DESCRIPTION:
The hapdbmigrate utility fails to online the Oracle service group due to a timing issue.
For example:
./hapdbmigrate -pdbres pdb1_res -cdbres cdb2_res -XMLdirectory /oracle_xml
Cluster prechecks and validation                                 Done
Taking PDB resource [pdb1_res] offline                           Done
Modification of cluster configuration                            Done
VCS ERROR V-16-41-39 Group [CDB2_grp] is not ONLINE after 300 seconds on %vcs_node%

VCS ERROR V-16-41-41 Group [CDB2_grp] is not ONLINE on some nodes in the cluster

Bringing PDB resource [pdb1_res] online on CDB resource [cdb2_res]Done

For further details, see '/var/VRTSvcs/log/hapdbmigrate.log'

RESOLUTION:
hapdbmigrate utility modified to ensure enough time elapses between probe of PDB resource and online of CDB group.

Patch ID: VRTSvcsea-8.0.2.1400

* 4058775 (Tracking ID: 4073508)

SYMPTOM:
Oracle virtual fire-drill is failing due to Oracle password file location changes from Oracle version 21c.

DESCRIPTION:
Oracle password file has been moved to $ORACLE_BASE/dbs from Oracle version 21c.

RESOLUTION:
Environment variables are used to point to the updated path of the password file.

From Oracle 21c onwards, it is mandatory for a client to configure the .env file path in the EnvFile attribute. This file must include the ORACLE_BASE path for the Oracle virtual fire-drill feature to work.

Sample EnvFile content with ORACLE_BASE path for Oracle 21c:
[root@inaqalnx013 Oracle]# cat /opt/VRTSagents/ha/bin/Oracle/envfile
ORACLE_BASE="/u02/app/oracle/product/21.0.0/dbhome_1/"; export ORACLE_BASE;

Sample attribute value EnvFile = "/opt/VRTSagents/ha/bin/Oracle/envfile"

Patch ID: VRTSamf-8.0.2.1600

* 4161436 (Tracking ID: 4161644)

SYMPTOM:
The system panics when VCS enables the AMF module.

DESCRIPTION:
The panic indicates that after amf_prexec_hook extracted an argument longer than 4K spanning two pages, it read a third page; that is illegal because all arguments are loaded in two pages.

RESOLUTION:
AMF now continues to extract arguments from its internal buffer before moving to the next page.

* 4162305 (Tracking ID: 4168084)

SYMPTOM:
The system panics when VCS enables the AMF module to monitor a mount point.

DESCRIPTION:
AMF calls a sleepable function while holding a spin lock for a mount point event, resulting in a system panic.

RESOLUTION:
A busy flag is used to synchronize multiple threads so that the spin lock can be released.

Patch ID: VRTSamf-8.0.2.1400

* 4137600 (Tracking ID: 4136003)

SYMPTOM:
A cluster node panics when VCS enables the AMF module that monitors process online/offline events.

DESCRIPTION:
A cluster node panics, indicating that the AMF module overran into a user-space buffer while analyzing an argument of 8K size. The AMF module cannot load that length of data into its internal buffer, which eventually leads to access into the user buffer; that access is not allowed when kernel SMAP is in effect.

RESOLUTION:
The AMF module is constrained to ignore an argument of 8K or bigger size to avoid internal buffer overrun.

Patch ID: VRTSamf-8.0.2.1200

* 4132471 (Tracking ID: 4113340)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 8(RHEL8.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
8(RHEL8.8) is now introduced.

Patch ID: VRTSgab-8.0.2.1600

* 4161908 (Tracking ID: 4160398)

SYMPTOM:
Veritas Infoscale Availability does not support Rocky Linux release.

DESCRIPTION:
Veritas Infoscale Availability does not support Rocky Linux release.

RESOLUTION:
Veritas Infoscale Availability support for Rocky Linux release is now introduced.

Patch ID: VRTSgab-8.0.2.1400

* 4153142 (Tracking ID: 4153140)

SYMPTOM:
Qualification for Veritas Infoscale Availability on latest kernels for rhel/sles platform is needed.

DESCRIPTION:
Needed recompilation of Veritas Infoscale Availability packages with latest changes.

RESOLUTION:
Recompiled Veritas Infoscale Availability packages with latest changes and confirmed qualification on latest kernels of RHEL/SLES platform.

Patch ID: VRTSgab-8.0.2.1200

* 4123487 (Tracking ID: 4113340)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 8(RHEL8.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
8(RHEL8.8) is now introduced.

Patch ID: VRTSvcsag-8.0.2.2300

* 4189572 (Tracking ID: 4188318)

SYMPTOM:
Repro steps:
1) Run hastart on the node.
2) The KVM agent OPEN entry point is called; if the environment is invalid, the KVM VCS resource is put into the UNKNOWN state and an invalid-environment file is created.
3) The user corrects the environment.
4) Probe the resource. The resource state does not change because the invalid-environment file is still present; the user has to remove it manually.

DESCRIPTION:
The resource remains in the UNKNOWN state even after the environment is corrected, because the invalid-environment file created by the OPEN entry point is still present and must be removed manually.

RESOLUTION:
The agent monitor is enhanced to automatically remove the invalid-environment file if the environment is valid.

* 4189590 (Tracking ID: 4075950)

SYMPTOM:
When the IPv6 VIP switches from node1 to node2 in a cluster, it takes longer for the neighboring information to be updated and for traffic to reach node2 on the reassigned address.

DESCRIPTION:
After the service group switches from node1 to node2, the IPv6 VIP is not reachable from the network switch. The MAC address changes after the node switch, but the network is not updated. Just as an IPv4 VIP announces itself through gratuitous ARP, when an IPv6 VIP switches from node1 to node2, the network must be updated for the MAC address change.

RESOLUTION:
The network devices that communicate with the VIP cannot establish a connection with it unless the VIP is pinged from the switch or the 'ip -6 neighbor flush all' command is run on the cluster nodes. Neighbor-flush logic is added to the IP/MultiNIC agents so that the MAC address change during a floating-VIP switchover is propagated to the network.
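The flush step described above can be sketched as a shell fragment. This is a hedged illustration, not the agents' actual logic; the `run` dry-run wrapper and `flush_ipv6_neighbors` name are hypothetical, and the dry-run mode lets the sketch run without root privileges.

```shell
# Dry-run wrapper: print the command instead of executing it, so this
# sketch needs no root privileges. Set DRY_RUN=0 to actually execute.
DRY_RUN=1
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

# After the IPv6 VIP moves to this node, flush stale neighbor entries
# so peers relearn the VIP's new MAC address (per the fix description).
flush_ipv6_neighbors() {
    run ip -6 neighbor flush all
}

flush_ipv6_neighbors
```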

* 4189594 (Tracking ID: 4189392)

SYMPTOM:
Earlier, GCP Disk agent did not support attaching a regional disk in read-only mode to more than one instance.

DESCRIPTION:
The existing GoogleDisk agent did not support attaching GCP regional disks in Read-Only (RO) mode to multiple instances, both inside and outside the cluster, within the same region; this flexibility is needed for scenarios requiring simultaneous read access. Read-Write (RW) mode continues to be restricted to one instance at a time.

RESOLUTION:
Changes have been made in the GCP Disk agent to recognize and support multi-attach of regional disks in read-only (RO) mode across multiple instances.

Patch ID: VRTSvcsag-8.0.2.2100

* 4180582 (Tracking ID: 4180581)

SYMPTOM:
The agent is unable to detach the IPv6 address from a node when the node gets faulted, and hence is unable to attach it on another node.

DESCRIPTION:
For IPv6, in case of a kernel panic, the system gets faulted. During the Online operation on the AWSIP resource on the failover node, the stale entry is not cleared from the faulted node's network interface, so the AWSIP agent fails to fail over.

RESOLUTION:
Code changes are done to remove stale entry from the faulted node network interface.

Patch ID: VRTSvcsag-8.0.2.1700

* 4177815 (Tracking ID: 4175426)

SYMPTOM:
The VMwareDisk agent takes a longer time to fail over.

DESCRIPTION:
The VMwareDisk agent relies on VxVM to work fast; otherwise, it spends time waiting for a return from vxdisk, which never comes when the VxVM package is not installed (for example, when the customer has only Availability configured).

RESOLUTION:
The agent now verifies vxdctl mode and invokes vxdisk only if it is enabled.

Patch ID: VRTSvcsag-8.0.2.1600

* 4149272 (Tracking ID: 4164374)

SYMPTOM:
The VCS DNS agent monitor times out if multiple DNS servers are added as Stealth Masters and a few of them hang.

DESCRIPTION:
This is because the VCS monitor calls the nsupdate and dig commands sequentially for each DNS server. The default monitor routine timeout (60s) is not enough to complete the nsupdate and dig calls for all servers, as nsupdate can have a minimum timeout of 20s. So, if more than 3 DNS servers are configured in the environment and 3 of them are in a hung state, the monitor routine times out and fails over the resource even though the 4th DNS server might be working fine.

RESOLUTION:
Changes are made to call nsupdate for all DNS servers in parallel, with similar changes for the dig command.
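The parallel approach can be sketched in shell with background jobs. This is a minimal sketch, not the monitor's actual code; `query_server` is a stub standing in for a real `dig @server` or nsupdate call.

```shell
# Stub resolver call; in the real monitor this would be `dig @server`
# or an nsupdate invocation, and a hung server would block here.
query_server() {
    echo "queried $1"
}

# Query all servers in parallel so one hung server no longer delays
# the others past the monitor timeout.
parallel_query() {
    pids=""
    for server in "$@"; do
        query_server "$server" &
        pids="$pids $!"
    done
    # Wait for all background queries; a per-job timeout could be
    # layered on top so a hung server cannot stall the whole monitor.
    for pid in $pids; do
        wait "$pid"
    done
}

parallel_query ns1.example.com ns2.example.com ns3.example.com
```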

* 4156630 (Tracking ID: 4156628)

SYMPTOM:
The message "Use of uninitialized value $version in string eq at /opt/VRTSvcs/bin/NIC/monitor line 317" is reported constantly.

DESCRIPTION:
The following message is constantly being reported in the NIC_A.log as $version is not getting initialized.

2024/02/05 15:32:00 VCS INFO V-16-2-13716 Thread(1312) Resource(csgnic): Output of the completed operation (monitor)
==============================================
Use of uninitialized value $version in string eq at /opt/VRTSvcs/bin/NIC/monitor line 317, <IFCONFIG> line 1.
Use of uninitialized value $version in string eq at /opt/VRTSvcs/bin/NIC/monitor line 317, <IFCONFIG> line 2.
Use of uninitialized value $version in string eq at /opt/VRTSvcs/bin/NIC/monitor line 317, <IFCONFIG> line 3.

RESOLUTION:
During the ping test, $version was not initialized; the code is updated to handle this problem.

* 4162102 (Tracking ID: 4163518)

SYMPTOM:
Apache (httpd) agent hangs on reboot for over 10 minutes.

DESCRIPTION:
Apache hangs because VCS waits for httpd to stop while httpd waits for VCS to stop. On a node that is booting up, the Apache resource comes online due to this dependency even though it has already failed over to another node. This causes a concurrency violation and an attempt to bring down httpd.

RESOLUTION:
The dependency between VCS and httpd is removed.

* 4162659 (Tracking ID: 4162658)

SYMPTOM:
LVMVolumeGroup resource fails to offline/clean in cloud environment after path failure.

DESCRIPTION:
If a disk is detached, the LVMVolumeGroup resource is unable to fail over.

RESOLUTION:
The PanicSystemOnVGLoss attribute is implemented:
0 - Default value and behaviour; does not fail over (does not halt the system).
1 - Halt the system if deactivation of the volume group fails.
2 - Do not halt the system; allow failover (note the risk of data corruption).

* 4162753 (Tracking ID: 4142040)

SYMPTOM:
While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs/conf/config/types.cf' file on Veritas Cluster Server (VCS) might be incorrectly updated.

DESCRIPTION:
While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs/conf/config/types.cf' file on Veritas Cluster Server (VCS) might be incorrectly updated.
In some instances, the user might be told to manually copy '/etc/VRTSvcs/conf/types.cf' over the existing '/etc/VRTSvcs/conf/config/types.cf' file. The message "Implement /etc/VRTSvcs/conf/types.cf to utilize resource type updates" needs to be removed when updating the VRTSvcsag rpm.

RESOLUTION:
To ensure that the '/etc/VRTSvcs/conf/config/types.cf' file is updated correctly following VRTSvcsag updates, the user can manually run the user_trigger_update_types script. The following message displays:
Leaving existing /etc/VRTSvcs/conf/config/types.cf configuration file unmodified
Copy /opt/VRTSvcs/bin/sample_triggers/VRTSvcs/user_trigger_update_types to /opt/VRTSvcs/bin/triggers
To manually update the types.cf, execute the command "hatrigger -user_trigger_update_types 0".

Patch ID: VRTSvcsag-8.0.2.1500

* 4157581 (Tracking ID: 4157580)

SYMPTOM:
Security vulnerabilities have been identified in the current version of the third-party component OpenSSL, which is utilized by VCS.

DESCRIPTION:
There are security vulnerabilities present in the current version of third-party component, OpenSSL, that is utilized by VCS.

RESOLUTION:
VCS is updated to use newer versions of OpenSSL in which the security vulnerabilities have been addressed.

Patch ID: VRTSvcsag-8.0.2.1400

* 4114880 (Tracking ID: 4152700)

SYMPTOM:
When Private DNS Zone resource ID is passed, the AzureDNSZone Agent returns an error saying that the resource cannot be found.

DESCRIPTION:
Azure Private DNS Zone with AzureDNSZone Agent is not supported.

RESOLUTION:
The Azure Private DNS Zone is supported by the AzureDNSZone Agent by installing the Azure library for Private DNS Zone(azure-mgmt-privatedns). 
This library has functions that can be utilized for Private DNS zone operations. The resource ID is differentiated based on the Public and the Private DNS zones, and the corrective actions are taken accordingly. 
For DNS zones, the resource ID differs between Public and Private DNS zones. The resource ID can be parsed, and the resource type can be checked to determine whether it is a Public or Private DNS zone.

* 4135534 (Tracking ID: 4152812)

SYMPTOM:
AWS EBSVol agent takes long time to perform online and offline operations on resources.

DESCRIPTION:
When a large number of AWS EBSVol resources are configured, it takes a long time to perform online and offline operations on these resources. 
EBSVol is a single threaded agent and hence prevents parallel execution of attach and detach EBS volume commands.

RESOLUTION:
To resolve the issue, the default value of the 'NumThreads' attribute of the EBSVol agent is modified from 1 to 10, and the agent is enhanced to use a locking mechanism to avoid conflicting resource configuration. 
This results in enhanced response time for parallel execution of attach and detach commands. 
Also, the default value of MonitorTimeout attribute is modified from 60 to 120. This avoids timeout of monitor entry point when response of AWS CLI/server is unexpectedly slow.

* 4137215 (Tracking ID: 4094539)

SYMPTOM:
The MonitorProcesses argument in the resource ArgListValues passed to the agent (the bundled Application agent) incorrectly removes a needed extra space from the monitored process, as found via the recommended CLI process test.

DESCRIPTION:
In the ArgListValues under MonitorProcesses with the extra space it even shows up when displaying the resource.

RESOLUTION:
For the monitored process (not program) only remove leading and trailing spaces. Do not remove extra spaces between words.

* 4137376 (Tracking ID: 4122001)

SYMPTOM:
The NIC resource remains online after the network cable is unplugged on an ESXi server.

DESCRIPTION:
Previously, MII checked network statistics and performed a ping test, but the agent now directly marks the NIC state ONLINE by checking the NIC status in operstate, with no ping check beforehand; only if it fails to detect the operstate file does it fall back to the ping test. In an ESXi server environment, however, the NIC is already marked ONLINE because the operstate file is available with state UP and the carrier bit set. So even if no "NetworkHosts" entry is reachable, the NIC resource is still marked ONLINE.

RESOLUTION:
The NIC agent already has a "PingOptimize" attribute. A new value (2) is introduced for "PingOptimize" to control whether the ping test is performed: if "PingOptimize = 2", the agent performs the ping test; otherwise, it works as per the previous design.

* 4137377 (Tracking ID: 4113151)

SYMPTOM:
Dependent DiskGroupAgent fails to get its resource online due to disk group import failure.

DESCRIPTION:
VMwareDisksAgent reports its resource online just after the VMware disk is attached to the virtual machine. If a dependent DiskGroup resource starts to come online at that moment, it fails because the VMware disk is not yet present in the vxdmp database due to VxVM transaction latency. Customers used to work around this problem by adding retry times, but the same workaround cannot be applied to every environment.

RESOLUTION:
A finite wait period is added for the VMware disk to be present in the vxdmp database before the online operation completes.

* 4137602 (Tracking ID: 4121270)

SYMPTOM:
EBSvol agent errors in attach disk: RHEL 7.9 + InfoScale 8.0 on AWS instance type c6i.large with NVMe devices.

DESCRIPTION:
After attaching a volume to an instance, it takes some time for its device mapping to be updated in the system. Because of this, if lsblk -d -o +SERIAL is run immediately after attaching the volume, the volume details are not shown in the output, and $native_device ends up blank/uninitialized.

So, we need to wait for some time for the device mapping to be updated in the system.

RESOLUTION:
Logic is added to retry the same command once after an interval if the expected volume device mapping is not found in the first run. The NativeDevice attribute is now updated properly.
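The retry logic can be sketched as follows. This is an illustrative sketch only: `lookup_native_device`, the stub volume table, and the volume ID are hypothetical stand-ins for the agent's real `lsblk -d -o +SERIAL` lookup.

```shell
# Stub device lookup; in the agent this would grep the SERIAL column
# of `lsblk -d -o NAME,SERIAL` for the attached volume's ID.
lookup_native_device() {
    case "$1" in
        vol0a1b2c3) echo "nvme1n1" ;;   # mapping already visible
        *) return 1 ;;                  # mapping not yet in the system
    esac
}

# Retry once after a short delay if the mapping is not yet visible,
# mirroring the fix described above.
native_device=""
for attempt in 1 2; do
    native_device=$(lookup_native_device "vol0a1b2c3") && break
    sleep 1   # give the kernel time to update the device mapping
done

echo "NativeDevice=$native_device"
```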

* 4137618 (Tracking ID: 4152886)

SYMPTOM:
AWSIP agent fails to bring OverlayIP resources online and offline on the instances in a VPC that is shared across multiple AWS accounts.

DESCRIPTION:
When a VPC is shared across multiple AWS accounts, the route table associated with the subnets is exclusively owned by the owner account, and AWS restricts modification of the route table from any other account. When the AWSIP agent tries to bring an OverlayIP resource online on an instance owned by a different account, it may not have the privileges to update the route table. In such cases, the AWSIP agent fails to edit the route table, and fails to bring the OverlayIP resource online and offline.

RESOLUTION:
To support cross-account deployment, assign appropriate privileges on shared resources. Create an AWS profile to grant permissions to update Route Table of VPC through different nodes belonging to different AWS accounts. This profile is used to update route tables accordingly.
A new attribute "Profile" is introduced in AWSIP agent. Use this attribute to configure the above created profile.

* 4143918 (Tracking ID: 4152815)

SYMPTOM:
An AWS EBS volume in use by another AWS instance gets used by cluster nodes through the AWS EBSVol agent.

DESCRIPTION:
An AWS EBS volume attached to an AWS instance that is not part of the cluster gets attached to a cluster node during the online event.

Instead, an 'Unable to detach volume' message should be logged by the AWS EBSVol agent, as the volume is already in use by another AWS instance.

RESOLUTION:
AWS EBSVol agent is enhanced to avoid attachment of in-use EBS volumes whose instances are not part of cluster.

Patch ID: VRTSvcsag-8.0.2.1200

* 4130206 (Tracking ID: 4127320)

SYMPTOM:
The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.

DESCRIPTION:
The agent fails to bring online a resource when the shell for the user is set to /sbin/nologin.

RESOLUTION:
The ProcessOnOnly agent is enhanced to support the /sbin/nologin shell. If the shell is set to /sbin/nologin, the agent uses /bin/bash as shell to start the process.

Patch ID: VRTSllt-8.0.2.2300

* 4189272 (Tracking ID: 4189271)

SYMPTOM:
The LLT service is unable to start due to LLT_IRQBALANCE.

DESCRIPTION:
Depending on the status of the irqbalance and hpe_irqbalance services, LLT either applied the irqbalance changes or stopped LLT.

RESOLUTION:
There is no need to stop LLT. With LLT_IRQBALANCE corrected, LLT works accordingly.

* 4189571 (Tracking ID: 4167108)

SYMPTOM:
This is a code improvement; the user does not experience any functional change.

DESCRIPTION:
yield() needs to be replaced with cond_resched().

RESOLUTION:
yield() is replaced with cond_resched().

* 4189853 (Tracking ID: 4189566)

SYMPTOM:
In an InfoScale FSS environment where LLT links are configured over RDMA, the CVM slave node panics whilst joining the cluster with the CVM master.

DESCRIPTION:
The panic can occur on any running thread, but the system will typically crash with "BUG: unable to handle kernel paging request" or "general protection fault: 0000 [#1] SMP". The fault could come from a stack involving the kmem_cache family functions.

RESOLUTION:
Compare the contents of the /etc/sysconfig/llt file on the nodes and make sure the same tunable settings are in place.

Patch ID: VRTSllt-8.0.2.2100

* 4186647 (Tracking ID: 4186645)

SYMPTOM:
lltdlv hang causes fencing to panic a node due to transient network issue.

DESCRIPTION:
The lltdlv hang is caused by a temporary network issue, and fencing panics the node. This works fine when LLT_IRQBALANCE is enabled; hence, the requirement is to enable this tunable by default to prevent panics due to transient issues. LLT irqbalance does not work in conjunction with hpe_irqbalance, so a check is added for that.

RESOLUTION:
Enabled LLT_IRQBALANCE by default, to make clusters more resilient to transient issues and avoid fencing to panic the server in such cases.

Patch ID: VRTSllt-8.0.2.1800

* 4187795 (Tracking ID: 4180026)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 5(RHEL9.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 5(RHEL9.5) is now introduced.

Patch ID: VRTSllt-8.0.2.1700

* 4179383 (Tracking ID: 3989372)

SYMPTOM:
When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.

DESCRIPTION:
Occasionally, in a VMware environment, the operating system may not schedule LLT contexts on time. Consequently, heartbeats from some of the cluster nodes may be lost, and those nodes may get fenced out. This situation typically occurs when the CPU load or the memory usage is high or when the VMDK snapshot or vMotion operations are in progress.

RESOLUTION:
This fix attempts to make clusters more resilient to transient issues by heartbeating using threads bound to every vCPU.

Patch ID: VRTSllt-8.0.2.1600

* 4162744 (Tracking ID: 4139781)

SYMPTOM:
System panics occasionally in LLT stack where LLT over ether enabled.

DESCRIPTION:
LLT allocates skb memory from its own cache for messages larger than 4K and sets a field of skb_shared_info to point to an LLT function; it later uses this field to determine whether an skb was allocated from its own cache. When receiving a packet, the OS also allocates an skb from the system cache, does not reset the field, and then passes the skb to LLT. Occasionally, a stale pointer in memory can mislead LLT into thinking an skb is from its own cache, and it uses the LLT free API by mistake.

RESOLUTION:
LLT now uses a hash table to record skbs allocated from its own cache and no longer sets the field of skb_shared_info.

* 4173093 (Tracking ID: 4164328)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 4 (RHEL9.4) is now introduced.

Patch ID: VRTSllt-8.0.2.1400

* 4132209 (Tracking ID: 4124759)

SYMPTOM:
A panic occurred in llt_ioship_recv on a server running in AWS.

DESCRIPTION:
In an AWS environment, packets can be duplicated even though LLT is configured over UDP, where duplication is not expected.

RESOLUTION:
To avoid the panic, LLT now checks whether the packet is already in the send queue of the bucket and treats such a packet as invalid/duplicate.

* 4137611 (Tracking ID: 4135825)

SYMPTOM:
Once the root file system becomes full during LLT start, the LLT module fails to load forever after.

DESCRIPTION:
When the disk is full and the user reboots the system or restarts the product, LLT, while loading, deletes the llt links and tries to create new ones using the link names and "/bin/ln -f -s". As the disk is full, it is unable to create the links. Even after space is freed, it still fails to create the links because they were deleted, so the LLT module fails to load.

RESOLUTION:
If the existing links are not present, logic is added to derive the file names needed to create new links.
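The link-recreation idea can be sketched as follows. This is a minimal sketch under a temporary directory, not the real LLT code; the `ensure_link` helper and the file names are hypothetical stand-ins for the actual LLT link names.

```shell
# Work in a throwaway directory; paths are illustrative only.
workdir=$(mktemp -d)
target="$workdir/eth-device"
link="$workdir/llt-link0"
: > "$target"   # create a dummy target file

# If the expected symlink is missing (e.g. deleted while the disk was
# full), recreate it from the derived target name.
ensure_link() {
    # $1 = link path, $2 = target path
    if [ ! -L "$1" ]; then
        /bin/ln -f -s "$2" "$1"
    fi
}

ensure_link "$link" "$target"
```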

* 4153057 (Tracking ID: 4137325)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 3(RHEL9.3) is now introduced.

Patch ID: VRTSllt-8.0.2.1200

* 4124138 (Tracking ID: 4122405)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 0.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 2(RHEL9.2) is now introduced.

* 4128886 (Tracking ID: 4128887)

SYMPTOM:
Below warning trace is observed while unloading llt module:
[171531.684503] Call Trace:
[171531.684505]  <TASK>
[171531.684509]  remove_proc_entry+0x45/0x1a0
[171531.684512]  llt_mod_exit+0xad/0x930 [llt]
[171531.684533]  ? find_module_all+0x78/0xb0
[171531.684536]  __do_sys_delete_module.constprop.0+0x178/0x280
[171531.684538]  ? exit_to_user_mode_loop+0xd0/0x130

DESCRIPTION:
While unloading llt module, vxnet/llt dir is not removed properly due to which warning trace is observed .

RESOLUTION:
Proc_remove api is used which cleans up the whole subtree.

* 4132621 (Tracking ID: 4125118)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions for RHEL9 Update 0 for EUS kernel(released later on 5.14.0-70.30.1.el9).

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 0 for EUS kernel(5.14.0-70.36.1.el9) is now introduced.

Patch ID: VRTScps-8.0.2.2300

* 4189591 (Tracking ID: 4188652)

SYMPTOM:
After configuring CP server, getting EO related error in CP server logs.

DESCRIPTION:
After configuring the CP server, EO-related errors appear in the CP server logs. The code errored out if the flag value was not 0 or 1.

RESOLUTION:
The unnecessary error log message is no longer emitted when the tunable value is set to 0.

* 4189990 (Tracking ID: 4189584)

SYMPTOM:
Security vulnerabilities exist in the Sqlite third-party component used by VCS.

DESCRIPTION:
VCS uses the Sqlite third-party component, in which some security vulnerabilities exist.

RESOLUTION:
VCS is updated to use newer versions of Sqlite third-party component in which the security vulnerabilities have been addressed.

Patch ID: VRTScps-8.0.2.1600

* 4152885 (Tracking ID: 4152882)

SYMPTOM:
Access to the CP servers is intermittently lost.

DESCRIPTION:
Logs are written to each log file (vxcpserve_[A|B|C].log) up to maxlen; when a file grows beyond that length, a new file is opened and the old one is closed. In this code path, fptr still used the old pointer, resulting in fwrite() on a closed FILE pointer.

RESOLUTION:
A new file is now opened before the assignment of fptr, so that fptr points to a valid FILE pointer.
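The rotation scheme can be sketched in shell. This is an illustrative sketch only, assuming a size-based rotation across the three files; `log_write`, `maxlen`, and the tiny threshold are hypothetical, and the key point mirrored from the fix is that the active handle is switched to the new file before any further write.

```shell
# Throwaway directory and a deliberately tiny maxlen for illustration.
workdir=$(mktemp -d)
maxlen=20
active="$workdir/vxcpserve_A.log"

log_write() {
    msg="$1"
    # Rotate when the active file has reached maxlen: switch to the
    # next file FIRST, then write, so we never write to a retired file.
    if [ -f "$active" ] && [ "$(wc -c < "$active")" -ge "$maxlen" ]; then
        case "$active" in
            *_A.log) active="$workdir/vxcpserve_B.log" ;;
            *_B.log) active="$workdir/vxcpserve_C.log" ;;
            *_C.log) active="$workdir/vxcpserve_A.log" ;;
        esac
        : > "$active"
    fi
    printf '%s\n' "$msg" >> "$active"
}

log_write "first message, long enough to rotate"
log_write "second message"
```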

Patch ID: VRTSdbed-8.0.2.1400

* 4188986 (Tracking ID: 4188985)

SYMPTOM:
Checkpoint creation fails for an Oracle database application using dbed if the archive log is set to a directory inside the mount point.

DESCRIPTION:
Checkpoint creation fails because fsckptadm doesn't support directory level checkpoint.

RESOLUTION:
Added code change to create checkpoint of the filesystem containing the archive log directory instead of trying to take checkpoint of the directory.

Patch ID: VRTSdbed-8.0.2.1200

* 4163136 (Tracking ID: 4136146)

SYMPTOM:
Old version v6.1.14.26

DESCRIPTION:
New version available v6.1.14.27.

RESOLUTION:
Use New version available v6.1.14.27 libraries.

Patch ID: VRTSdbed-8.0.2.1100

* 4153061 (Tracking ID: 4092588)

SYMPTOM:
SFAE failed to start with systemd.

DESCRIPTION:
SFAE failed to start with systemd because the SFAE service currently runs in backward-compatibility mode using an init script.

RESOLUTION:
Added systemd support for SFAE, including the systemctl commands stop/start/status/restart/enable/disable.

Patch ID: VRTSvbs-8.0.2.1200

* 4189595 (Tracking ID: 4188647)

SYMPTOM:
A Virtual Business Operations instance is created and configured but is not able to perform any of its operations, as the operation commands hang for a very long period of time.

DESCRIPTION:
The vbsd server hangs if the SSS service is configured on the Linux platform.

RESOLUTION:
User checks are skipped to fix the issue. The VBS service now works as expected on the latest Linux platform.

Patch ID: VRTSvbs-8.0.2.1100

* 4163135 (Tracking ID: 4136146)

SYMPTOM:
Old version v6.1.14.26

DESCRIPTION:
New version available v6.1.14.27.

RESOLUTION:
Use New version available v6.1.14.27 libraries.

Patch ID: VRTSvxfen-8.0.2.2300

* 4189905 (Tracking ID: 4189906)

SYMPTOM:
vxfendisk fails due to ksh overwriting positional parameters by default after executing subsequent scripts inside it.

DESCRIPTION:
vxfenswap invokes vxfendisk to list the disks. vxfendisk sources vxfen_scriptlib.sh to set the environment. Within vxfen_scriptlib.sh, vcs_eo_perm.sh is executed with two parameters. On AIX, due to ksh behavior, this execution overwrites the original positional parameters of vxfendisk, causing unexpected failures.

RESOLUTION:
Saved the original positional parameters in vxfendisk before environment setup and restored them afterward.
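The save-and-restore approach can be sketched in shell. This is a minimal sketch, not the actual vxfendisk code: the sourced stand-in file (which clobbers "$@" via `set`, mimicking the ksh behavior described above) and the `list_disks` function are hypothetical; note that saving via "$*" loses quoting distinctions, which is acceptable for this illustration.

```shell
# Stand-in for the environment setup script; sourcing it overwrites
# the caller's positional parameters, as vcs_eo_perm.sh did under ksh.
setup=$(mktemp)
echo 'set -- clobbered args' > "$setup"

list_disks() {
    saved_args="$*"          # save the original positional parameters
    . "$setup"               # environment setup clobbers "$@" here
    set -- $saved_args       # restore the caller's arguments afterward
    echo "$1"                # first original argument survives
}

first=$(list_disks disk1 disk2)
echo "$first"
```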

Patch ID: VRTSvxfen-8.0.2.1900

* 4187629 (Tracking ID: 4187897)

SYMPTOM:
Security vulnerabilities exist in the Curl third-party components used by VCS.

DESCRIPTION:
Security vulnerabilities exist in the Curl third-party components used by VCS.

RESOLUTION:
Curl is upgraded in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfen-8.0.2.1800

* 4180027 (Tracking ID: 4180026)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 5(RHEL9.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 5(RHEL9.5) is now introduced.

Patch ID: VRTSvxfen-8.0.2.1700

* 4176111 (Tracking ID: 4176110)

SYMPTOM:
vxfentsthdw failed to verify fencing disks compatibility in KVM environment

DESCRIPTION:
vxfentsthdw failed to read the key buffer because it exceeds the maximum buffer size in the KVM hypervisor.

RESOLUTION:
A new macro with a smaller number of keys is added to support the KVM environment.

* 4177677 (Tracking ID: 4176592)

SYMPTOM:
Continuous ERROR message in 'vxfen.log' file - "VXFEN already configured" after system startup, despite fencing working correctly.

DESCRIPTION:
The vxfen-startup script enters a loop trying to configure the vxfen driver, which is already configured, due to an incorrect exit value. This results in the 'vxfen.log' file being flooded with error messages.

RESOLUTION:
Correct the exit code to ensure the vxfen-startup script exits the loop properly, and handles vxfen already configured as a success.

Patch ID: VRTSvxfen-8.0.2.1600

* 4164329 (Tracking ID: 4164328)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 4 (RHEL9.4) is now introduced.

Patch ID: VRTSvxfen-8.0.2.1400

* 4137326 (Tracking ID: 4137325)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 3(RHEL9.3) is now introduced.

Patch ID: VRTSvxfen-8.0.2.1200

* 4124086 (Tracking ID: 4124084)

SYMPTOM:
Security vulnerabilities exist in the Curl third-party components used by VCS.

DESCRIPTION:
Security vulnerabilities exist in the Curl third-party components used by VCS.

RESOLUTION:
Curl is upgraded in which the security vulnerabilities have been addressed.

* 4124644 (Tracking ID: 4122405)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 0.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 2(RHEL9.2) is now introduced.

* 4125891 (Tracking ID: 4113847)

SYMPTOM:
An even number of coordination point (CP) disks is not supported by design. This enhancement is part of AFA, wherein a faulted disk must be replaced as soon as the number of coordination disks becomes even while fencing is up and running.

DESCRIPTION:
Arbitration for a regular split or network partition requires an odd number of CP disks. Support for an even number of CP disks is provided through the cp_count setting; with cp_count/2 + 1, fencing is not allowed to come up. Also, if cp_count is not defined in the vxfenmode file, a minimum of 3 CP disks is required by default; otherwise vxfen does not start.

RESOLUTION:
When an even number of CP disks is detected, another disk is added so that the number of CP disks becomes odd and fencing keeps running.
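As a minimal sketch of the majority rule described above (function names are illustrative, not the vxfen source), an odd coordination-point count guarantees that a majority of cp_count/2 + 1 is always well defined:

```c
#include <assert.h>

/* Illustrative sketch, assuming the rules stated in this entry:
 * fencing arbitration needs an odd CP-disk count of at least 3 so
 * that a strict majority always exists; with an even count, a
 * replacement disk must first bring the count back to odd. */
static int cp_count_usable(int cp_count)
{
    return cp_count >= 3 && (cp_count % 2) == 1;
}

static int cp_majority(int cp_count)
{
    return cp_count / 2 + 1;   /* e.g. 3 disks -> majority of 2 */
}
```

With 4 disks, two partitioned sub-clusters could each win 2 disks and neither would hold a majority, which is why the even count is repaired by adding a disk.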

* 4125895 (Tracking ID: 4108561)

SYMPTOM:
The vxfen print-keys internal utility did not work because of an internal array overrun.

DESCRIPTION:
The vxfen print-keys internal utility returns garbage values when the number of keys exceeds 8: the 8-byte array keylist[i].key is overrun at byte offset 8 using index y (which evaluates to 8).

RESOLUTION:
The internal loop is now restricted to VXFEN_KEYLEN; reading reservations works correctly.
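The fix can be sketched as a bounds-checked copy loop (copy_key and struct keyent are hypothetical stand-ins, not the utility's real code): limiting the index to VXFEN_KEYLEN means byte offset 8 of the 8-byte key can never be written.

```c
#include <assert.h>
#include <string.h>

#define VXFEN_KEYLEN 8   /* 8-byte key length, as described in this entry */

struct keyent {
    char key[VXFEN_KEYLEN];
};

/* Sketch of the fix under assumptions: the copy loop is bounded by
 * VXFEN_KEYLEN, so index y stops at 7 even when the source provides
 * more than 8 key bytes (the overrun condition in the incident). */
static void copy_key(struct keyent *ent, const char *src, int srclen)
{
    int y;
    for (y = 0; y < srclen && y < VXFEN_KEYLEN; y++)
        ent->key[y] = src[y];
}
```

An unbounded `y < srclen` loop would write past `ent->key` for srclen > 8, which is exactly the keylist overrun the incident describes.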

* 4132625 (Tracking ID: 4125118)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).

DESCRIPTION:
Veritas Infoscale Availability does not support RHEL9 Update 0 EUS kernels released after 5.14.0-70.30.1.el9.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 0 for EUS kernel(5.14.0-70.36.1.el9) is now introduced.

Patch ID: VRTSvcs-8.0.2.2300

* 4189593 (Tracking ID: 4188662)

SYMPTOM:
While performing a VVR rolling upgrade from IS 7.4.2, the application group present on the secondary site went into a faulted state after the upgrade.

DESCRIPTION:
On upgraded secondary nodes, the newly added attribute EnableSingleWriter was not updated in types.cf. The trigger checks whether the attribute already exists by running '$haattr -display $type | grep $attr' and skips the update when the command succeeds; however, the command also succeeds when the attribute merely appears in RegList, so the update was skipped incorrectly.

RESOLUTION:
A conditional check is added so that a match coming only from RegList is not treated as an existing attribute.

Patch ID: VRTSvcs-8.0.2.2200

* 4189253 (Tracking ID: 4189252)

SYMPTOM:
Security vulnerabilities are present in the existing version of Netsnmp.

DESCRIPTION:
The Netsnmp component needs to be upgraded to fix security vulnerabilities.

RESOLUTION:
The Netsnmp component is upgraded to fix security vulnerabilities for security patch IS 8.0.2U5_SP2.

Patch ID: VRTSvcs-8.0.2.1600

* 4162755 (Tracking ID: 4136359)

SYMPTOM:
When upgrading InfoScale with latest Public Patch Bundle or VRTSvcsag package, types.cf is updated.

DESCRIPTION:
To use newly added types and attributes (such as PanicSystemOnVGLoss), the user needs to copy /etc/VRTSvcs/conf/types.cf to /etc/VRTSvcs/conf/config/types.cf. This copying may fault resources whose types (such as HTC) are missing from the new types.cf.

RESOLUTION:
A new external trigger is implemented to manually update /etc/VRTSvcs/conf/config/types.cf. Follow the post-installation instructions of the VRTSvcsag RPM.

Patch ID: VRTSvcs-8.0.2.1500

* 4157581 (Tracking ID: 4157580)

SYMPTOM:
Security vulnerabilities have been identified in the current version of the third-party component OpenSSL, which is utilized by VCS.

DESCRIPTION:
There are security vulnerabilities present in the current version of third-party component, OpenSSL, that is utilized by VCS.

RESOLUTION:
VCS is updated to use newer versions of OpenSSL in which the security vulnerabilities have been addressed.

Patch ID: VRTSvcs-8.0.2.1400

* 4133677 (Tracking ID: 4129493)

SYMPTOM:
Tenable security scan kills the Notifier resource.

DESCRIPTION:
When an nmap port scan is performed on port 14144 (on which the notifier process is listening), the notifier is killed because of the connection request.

RESOLUTION:
The required code changes have been made to prevent the Notifier agent from crashing when an nmap port scan is performed on notifier port 14144.

Patch ID: VRTSvcs-8.0.2.1200

* 4113391 (Tracking ID: 4124956)

SYMPTOM:
Traditionally, virtual IP addresses are used as cluster addresses. The cluster address is also used for peer-to-peer communication in a GCO-DR deployment. The gcoconfig utility is therefore accustomed to IPv4 and IPv6 addresses, and it gives an error if a hostname is provided as the cluster address.

DESCRIPTION:
In cloud ecosystems, hostnames are widely used. The gcoconfig utility must therefore accept both hostnames and virtual IPs as the cluster address.

RESOLUTION:
To address the limitation (gcoconfig does not accept a hostname as the cluster address), the gcoconfig utility is enhanced to support the following:
1. NIC and IP configuration:
   i. Continue using NIC and IP configuration.
2. Hostname as cluster address along with the corresponding DNS:
   i. On-premise DNS:
      a. The utility takes the following inputs: Domain, Resource Records, TSIGKeyFile (if secured DNS is opted for), and StealthMasters (optional).
      b. Accordingly, the gcoconfig utility creates a DNS resource in the cluster service group.
   ii. AWSRoute53 DNS: the Amazon DNS web service.
      a. The utility takes the following inputs: Hosted Zone ID, Resource Records, AWS Binaries, and Directory Path.
      b. Accordingly, the gcoconfig utility creates an AWSRoute53 DNS-type resource in the cluster service group.
   iii. AzureDNSZone: the Microsoft DNS web service.
      a. The utility takes the following inputs: Azure DNS Zone Resource ID and Resource Records (mandatory). Additionally, the user must provide either a Managed Identity Client ID or an Azure Auth Resource.
      b. Accordingly, the gcoconfig utility creates an AzureDNSZone-type resource in the cluster service group.

For the endpoints mentioned in Resource Records, the gcoconfig utility can neither ensure their accessibility nor manage their lifecycle; hence, these are not within the scope of the gcoconfig utility.

Patch ID: VRTScavf-8.0.2.2700

* 4177247 (Tracking ID: 4177245)

SYMPTOM:
The CVMVolDg resource takes many minutes to come online with CPS fencing.

DESCRIPTION:
When fencing is configured as CP-server based rather than disk-based SCSI3PR, disk groups are still imported with SCSI3 reservations. This causes SCSI3 PR errors during import, which takes a long time due to retries.

RESOLUTION:
Code changes have been done to import Diskgroup without SCSI3 reservations when SCSI3 PR is disabled.

Patch ID: VRTScavf-8.0.2.2100

* 4162683 (Tracking ID: 4153873)

SYMPTOM:
A CVM master reboot resulted in volumes being disabled on the slave node.

DESCRIPTION:
The InfoScale stack exhibits unpredictable behaviour during reboots: sometimes the node hangs while coming online, the working node goes into the faulted state, or CVM does not start on the rebooted node.

RESOLUTION:
A mechanism has been added for making decisions about the deport, and the code has been integrated with the offline routine.

Patch ID: VRTScavf-8.0.2.1500

* 4133969 (Tracking ID: 4074274)

SYMPTOM:
DR test and failover activity might not succeed for hardware-replicated disk groups; EMC SRDF hardware-replicated disk groups fail with a "PR operation failed" message.

DESCRIPTION:
In the case of hardware-replicated disks such as EMC SRDF, failover of disk groups might not succeed automatically and manual intervention might be needed. After failover, disks at the new primary site have the 'udid_mismatch' flag, which needs to be updated manually for a successful failover. The SCSI-3 error message also needs to be changed to "PR operation failed".

RESOLUTION:
For DMP environments, the VxVM and DMP extended attributes need to be refreshed by using 'vxdisk scandisks' prior to import. VxVM also provides a new vxdg import option, '-o usereplicatedev=only', with DMP. This option selects only the hardware-replicated disks during the LUN selection process.
Before VxVM 8.0.x, the error was reported as "SCSI-3 PR operation failed", as shown below, and changes were made accordingly.

Sample syntax
# /usr/sbin/vxdg -s -o groupreserve=VCS -o clearreserve -cC -t import AIXSRDF
VxVM vxdg ERROR V-5-1-19179 Disk group AIXSRDF: import failed:
SCSI-3 PR operation failed

The VRTScavf (CVM) 7.4.2.2201 agent is enhanced on AIX to handle EMC SRDF "VxVM vxdg ERROR V-5-1-19179 Disk group AIXSRDF: import failed: SCSI-3 PR operation failed" failures.

NEW 8.0.x VxVM error message format:
2023/09/27 12:44:02 VCS INFO V-16-20007-1001 CVMVolDg:<RESOURCE-NAME>:online:VxVM vxdg ERROR V-5-1-19179 Disk group <DISKGROUP-NAME>: import failed:
PR operation failed

* 4137640 (Tracking ID: 4088479)

SYMPTOM:
The import of an EMC SRDF-managed disk group fails with the error shown below. This failure is specific to EMC storage on AIX with fencing.

DESCRIPTION:
The import of an EMC SRDF-managed disk group fails with the following error. This failure is specific to EMC storage on AIX with fencing.
#/usr/sbin/vxdg -o groupreserve=VCS -o clearreserve -c -tC import srdfdg
VxVM vxdg ERROR V-5-1-19179 Disk group srdfdg: import failed:
SCSI-3 PR operation failed

RESOLUTION:
06/16 14:31:49:  VxVM vxconfigd DEBUG  V-5-1-7765 /dev/vx/rdmp/emc1_0c93: pgr_register: setting pgrkey: AVCS
06/16 14:31:49:  VxVM vxconfigd DEBUG  V-5-1-5762 prdev_open(/dev/vx/rdmp/emc1_0c93): open failure: 47                           //#define EWRPROTECT 47 /* Write-protected media */
06/16 14:31:49:  VxVM vxconfigd ERROR  V-5-1-18444 vold_pgr_register: /dev/vx/rdmp/emc1_0c93: register failed:errno:47 Make sure the disk supports SCSI-3 PR.
 
AIX differentiates between read-write and read-only opens. When the underlying device state changed, the device open failed because of the pending open count (dmp_cache_open feature).

Patch ID: VRTSodm-8.0.2.2700

* 4175626 (Tracking ID: 4175627)

SYMPTOM:
ODM module failed to load with latest VxFS.

DESCRIPTION:
The ODM package must be built with the latest VxFS because ODM has an internal dependency on VxFS.

RESOLUTION:
The ODM package is rebuilt with the latest VxFS.

Patch ID: VRTSodm-8.0.2.2400

* 4188098 (Tracking ID: 4188097)

SYMPTOM:
ODM module failed to load with latest VxFS.

DESCRIPTION:
The ODM package must be built with the latest VxFS because ODM has an internal dependency on VxFS.

RESOLUTION:
The ODM package is rebuilt with the latest VxFS.

Patch ID: VRTSodm-8.0.2.2100

* 4175626 (Tracking ID: 4175627)

SYMPTOM:
ODM module failed to load with latest VxFS.

DESCRIPTION:
The ODM package must be built with the latest VxFS because ODM has an internal dependency on VxFS.

RESOLUTION:
The ODM package is rebuilt with the latest VxFS.

Patch ID: VRTSodm-8.0.2.1700

* 4154116 (Tracking ID: 4118154)

SYMPTOM:
The system may panic in simple_unlock_mem() when errcheckdetail is enabled, with a stack trace as follows.
        simple_unlock_mem()
        odm_io_waitreq()
        odm_io_waitreqs()
        odm_request_wait()
        odm_io()
        odm_io_stat()
        vxodmioctl()

DESCRIPTION:
odm_io_waitreq() takes a lock and waits for the I/O request to complete, but it is interrupted by odm_iodone(), which performs the I/O and releases the lock taken by odm_io_waitreq(). When odm_io_waitreq() then tries to release the lock, the system panics because the lock was already unlocked.

RESOLUTION:
Code has been modified to resolve this issue.

* 4159290 (Tracking ID: 4159291)

SYMPTOM:
ODM module is not getting loaded with newly rebuilt VxFS.

DESCRIPTION:
The ODM module does not load with the newly rebuilt VxFS; ODM needs to be recompiled against it.

RESOLUTION:
Recompiled the ODM with newly rebuilt VxFS.

Patch ID: VRTSodm-8.0.2.1500

* 4153091 (Tracking ID: 4153090)

SYMPTOM:
After installing VRTSvxfs-8.0.2.1500, ODM fails to start.

DESCRIPTION:
Because of the VxFS version update, the ODM module needs to be repackaged due to an
internal dependency on VxFS version.

RESOLUTION:
As part of this fix, the ODM module has been repackaged to support the updated
VxFS version.

Patch ID: VRTSodm-8.0.2.1400

* 4144274 (Tracking ID: 4144269)

SYMPTOM:
After installing VRTSvxfs-8.0.2.1400, ODM fails to start.

DESCRIPTION:
Because of the VxFS version update, the ODM module needs to be repackaged due to an
internal dependency on VxFS version.

RESOLUTION:
As part of this fix, the ODM module has been repackaged to support the updated
VxFS version.

Patch ID: VRTSodm-8.0.2.1200

* 4123834 (Tracking ID: 4113118)

SYMPTOM:
The ODM module fails to load on RHEL8.8.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.8.

RESOLUTION:
Updated ODM to support RHEL 8.8.

* 4126262 (Tracking ID: 4126256)

SYMPTOM:
A "no symbol version" warning for "ki_get_boot" appears in dmesg after SFCFSHA configuration.

DESCRIPTION:
modpost is unable to read VEKI's Module.symvers while building the ODM module, which results in a "no symbol version" warning for VEKI's "ki_get_boot" symbol.

RESOLUTION:
Modified the code to make sure that modpost picks all the dependent symbols while building ODM module.

* 4127518 (Tracking ID: 4107017)

SYMPTOM:
The ODM module fails to load on Linux minor kernel versions.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-odm script to consider kernel-build version in exact-version-module version calculation.

* 4127519 (Tracking ID: 4107778)

SYMPTOM:
The ODM module fails to load on Linux minor kernel versions.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-odm script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not 
present.

* 4129838 (Tracking ID: 4129837)

SYMPTOM:
The ODM RPM does not have a changelog.

DESCRIPTION:
A changelog in the RPM helps identify incidents missing with respect to other versions.

RESOLUTION:
A changelog is generated and added to the ODM RPM.

Patch ID: VRTSvxfs-8.0.2.2700

* 4135608 (Tracking ID: 4086287)

SYMPTOM:
The VxFS mount command may panic the system with a "scheduling while atomic: mount/453834/0x00000002" bug for a cluster file system:
schedule()
vxg_svar_sleep_unlock()
vxg_get_block()
vxg_do_initlock()
vxg_api_initlock()
vx_glm_init_blocklock()
vx_cbuf_lookup()
vx_getblk_clust()
vx_getblk_cmn()
vx_getblk()
vx_badmap_rdwr()
vx_emap_lookup()
vx_reorg_excl()
vx_fsadm_query()
vx_cfs_fset_mnt()
vx_domount()
vx_fill_super()
mount_bdev()
vx_get_sb_bdev_v2()
vx_get_sb_impl()
vx_get_sb_v2()
legacy_get_tree()
vfs_get_tree()
do_new_mount()

DESCRIPTION:
The VxFS mount code for a cluster file system takes a spinlock to check for exclusion zones. This operation may require additional I/O to fetch information from disk, or may need a sleep lock. Scheduling out a thread under spinlock protection is not safe; hence Linux may panic the system with the "scheduling while atomic: mount/453834/0x00000002" bug.

RESOLUTION:
The spinlock protecting VxFS exclusion zones is converted to a sleep lock (semaphore) to solve this issue.
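The resolution above can be sketched as a userspace analogue (names like zone_lock and lookup_exclusion_zone are hypothetical, not the VxFS source): the lookup that may block on I/O is guarded by a sleepable lock rather than a spinlock, under which the kernel forbids scheduling out.

```c
#include <assert.h>
#include <pthread.h>

/* Userspace sketch of the fix, under assumptions: a pthread mutex
 * stands in for the kernel semaphore. Unlike a spinlock, a holder of
 * this lock may legally sleep (e.g. waiting for disk I/O), so the
 * "scheduling while atomic" condition cannot arise. */
static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER;
static int zone_state;

static int lookup_exclusion_zone(void)
{
    pthread_mutex_lock(&zone_lock);   /* sleepable lock */
    zone_state = 1;                   /* stand-in for I/O that can block */
    pthread_mutex_unlock(&zone_lock);
    return zone_state;
}
```

The trade-off is that a sleep lock has higher acquisition cost than a spinlock, which is acceptable on a mount path that already performs disk I/O.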

* 4159938 (Tracking ID: 4155961)

SYMPTOM:
System panic due to null i_fset in vx_rwlock().

DESCRIPTION:
A race between vx_rwlock() and vx_inode_deinit() causes the panic in vx_rwlock().
Panic stack:
[exception RIP: vx_rwlock+174]
.
.
#10  __schedule
#11  vx_write
#12  vfs_write
#13  sys_pwrite64
#14  system_call_fastpath

RESOLUTION:
Code changes have been done to fix this issue.

* 4164927 (Tracking ID: 4187385)

SYMPTOM:
If the same blocks get allocated to different inodes, one of them being the IFAULOG inode, the auditlog file inode is marked bad in pass1b; it later gets freed and the auditlog file is removed.

DESCRIPTION:
For structural files, in the case of a duplicate block allocation, the inode should not be marked bad for removal; instead, the other files' references to the same block should be removed.

RESOLUTION:
Code changes skip marking IFAULOG-type inodes as bad during pass1b.

* 4177631 (Tracking ID: 4177630)

SYMPTOM:
The fsck progress status report should be saved to a file by default.

DESCRIPTION:
The fsck progress status report log is saved at /etc/vx/log/.
 - The filename is derived from the device path name, e.g. fsck_dev_vx_dsk_pvtdg_vol1
A message specifying the full file name is printed to syslog.
 - e.g. Fsck progress status will be saved in /etc/vx/log/fsck_dev_vx_dsk_pvtdg_vol1
If the progress status file cannot be opened, or if there is an error while flushing the file, an error is printed to syslog.
 - e.g. Failed to open file for logging fsck progress status output
With the -o status option, in addition to being saved in the file, the progress status is printed on screen.

RESOLUTION:
Code changes done to save the fsck progress status report to a file by default.

* 4177641 (Tracking ID: 4135900)

SYMPTOM:
An asm_exc_page_fault occurred while running the LM stress worm test.

DESCRIPTION:
Multiple uses of a variable before taking the spinlock caused a compiler-optimisation issue in the xted_free function.

RESOLUTION:
Since the variable is not needed before the spinlock, its use there is omitted.

* 4177643 (Tracking ID: 4085144)

SYMPTOM:
Remount operation with "smartiomode=writeback" triggers kernel panic with message: "BUG: scheduling while atomic: mount.vxfs"

DESCRIPTION:
While remounting the filesystem with the "smartiomode=writeback" option, the kernel takes a spin lock but does not release it before requesting operations that might sleep. This results in a kernel panic.

RESOLUTION:
Code changes have been done to release the spin lock before such operations.

* 4177650 (Tracking ID: 4164503)

SYMPTOM:
Some internal VxFS user-space library functions may leak memory in certain cases.

DESCRIPTION:
Some internal VxFS user-space library functions may leak memory in certain cases. Most of the consumers are short-lived binaries, but a few are daemons.

RESOLUTION:
Code changes have been done to resolve the issue.

* 4188062 (Tracking ID: 4188063)

SYMPTOM:
Pages were getting unlocked even though VxFS locks them.

DESCRIPTION:
VxFS maintains the dirty pages to be flushed in an array embedded in the iowr structure. In vx_pvn_range_dirty(), VxFS uses kernel APIs to look up the dirty pages in the given flush range and populates this array in order of increasing index/offset. There is a performance optimisation in which VxFS tries to find dirty pages before the start offset of the given flush range and flush all of these accumulated pages. This optimisation called the same kernel lookup API on the same iowr structure that was passed to vx_pvn_range_dirty(). The new lookup reshuffled the dirty page array and caused an extra reference count decrease on the pages. As a result, pages were unlocked in the kernel even though they were locked by VxFS, which hit a VxFS internal assert.

RESOLUTION:
The code is modified to avoid reshuffling the array while looking up dirty pages in vx_pvn_backward_cluster().

* 4188390 (Tracking ID: 4188391)

SYMPTOM:
There was a mismatch between the setfacl and getfacl command outputs for an empty ACL.

DESCRIPTION:
There is code in VxFS that instructs the kernel to cache the ACL that was passed in, even though no ACL is saved on disk in the case of an empty ACL.

RESOLUTION:
Modified VxFS code to cache NULL when the ACL is empty and cache the ACL otherwise.

* 4189348 (Tracking ID: 4188888)

SYMPTOM:
systemctl status fstrim
 fstrim.service - Discard unused blocks on filesystems from /etc/fstab
     Loaded: loaded (/usr/lib/systemd/system/fstrim.service; static)
     Active: failed (Result: exit-code) since Mon 2025-03-03 08:47:18 PST; 2s ago
       Docs: man:fstrim(8)
    Process: 15347 ExecStart=/usr/sbin/fstrim --listed-in /etc/fstab:/proc/self/mountinfo --verbose --quiet-unsupported (code=exited, status=64)
   Main PID: 15347 (code=exited, status=64)
        CPU: 59ms
Mar 03 08:47:18 systemd[1]: Starting Discard unused blocks on filesystems from /etc/fstab...
Mar 03 08:47:18 fstrim[15347]: fstrim: /t2: FITRIM ioctl failed: Invalid argument
Mar 03 08:47:18 fstrim[15347]: fstrim: /t1: FITRIM ioctl failed: Invalid argument
Mar 03 08:47:18 systemd[1]: fstrim.service: Main process exited, code=exited, status=64/USAGE
Mar 03 08:47:18 systemd[1]: fstrim.service: Failed with result 'exit-code'.
Mar 03 08:47:18 systemd[1]: Failed to start Discard unused blocks on filesystems from /etc/fstab.

# /usr/sbin/fstrim --listed-in /etc/fstab:/proc/self/mountinfo --verbose --quiet-unsupported
fstrim: /t2: FITRIM ioctl failed: Invalid argument
fstrim: /t1: FITRIM ioctl failed: Invalid argument

DESCRIPTION:
VxFS does not support the fstrim utility. When fstrim is executed, the FITRIM ioctl returns EINVAL, which misleadingly implies that the feature is supported but that an argument is invalid. Instead of returning a misleading error, a valid error code should be returned to indicate that fstrim is not supported on VxFS filesystems.

RESOLUTION:
Code changes return a valid error code indicating that fstrim is not supported on the VxFS filesystem.
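The error-code distinction behind this fix can be sketched as follows (demo_ioctl and the command code are illustrative, not the VxFS handler): an unsupported operation should report EOPNOTSUPP, which callers such as `fstrim --quiet-unsupported` silently skip, rather than EINVAL, which implies a bad argument to a supported ioctl.

```c
#include <assert.h>
#include <errno.h>

#define DEMO_FITRIM 0x1277   /* hypothetical command code for illustration */

/* Sketch of the fix: return -EOPNOTSUPP for a recognised but
 * unsupported command, -ENOTTY for an unknown command. Before the
 * fix, the unsupported case returned -EINVAL, producing the
 * "FITRIM ioctl failed: Invalid argument" messages shown above. */
static int demo_ioctl(unsigned int cmd)
{
    switch (cmd) {
    case DEMO_FITRIM:
        return -EOPNOTSUPP;  /* previously -EINVAL */
    default:
        return -ENOTTY;      /* not an ioctl this handler knows */
    }
}
```

With EOPNOTSUPP, the systemd fstrim.service run over /etc/fstab no longer fails with status 64 on VxFS mounts; the filesystem is simply skipped.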

* 4189349 (Tracking ID: 4188107)

SYMPTOM:
A soft lockup occurred while shrinking a VxFS file system; the stack trace of the task that caused it is as follows:

vx_multi_bufinval+0x21d/0x230 [vxfs]
vx_reorg_start_io+0x32e/0x3d0 [vxfs]
vx_reorg_copy+0x235/0x4c0 [vxfs]
vx_reorg_dostruct+0x3b6/0x960 [vxfs]
vx_trancommit+0x32f/0x1220 [vxfs]
vx_extmap_reorg+0xdd2/0xe90 [vxfs]
vx_ilock+0x18/0x50 [vxfs]
vx_struct_reorg+0xb5f/0xb90 [vxfs]
vx_aioctl_full+0x107d/0x1160 [vxfs]
vx_aioctl_common+0x1ba9/0x2410 [vxfs]

DESCRIPTION:
This issue occurs during reorg of the extent map file. As part of reorg, vx_reorg_copy()/vx_reorg_start_io() is called to swap the extent map between the IFEMAP and reorg IFEMAP inodes; during this swap, the buffers containing the original emap data need to be invalidated. An incorrect buffer length passed to the invalidation caused excess buffers to be invalidated.

RESOLUTION:
Modified the code to pass the correct length of the buffer to be invalidated.

* 4189423 (Tracking ID: 4189424)

SYMPTOM:
FSQA binary freezeit fails with the error "ioctl VX_FREEZE failed"

DESCRIPTION:
The ioctl call with the command VX_FREEZE fails with error code 25 (ENOTTY).

RESOLUTION:
Modified the code to ensure all the vxfs ioctl based commands are handled properly.

* 4189586 (Tracking ID: 4189587)

SYMPTOM:
The setfacl operation failed with the error: Operation not supported.

DESCRIPTION:
Newer kernels use dedicated APIs for interacting with POSIX ACLs. The VxFS code also needs to use these newly implemented APIs instead of the generic APIs that are used for all extended attributes.

RESOLUTION:
Modified VxFS code to use dedicated POSIX ACL APIs for setting and retrieving ACL extended attributes on newer kernels.

* 4189598 (Tracking ID: 4187406)

SYMPTOM:
Panic in locked_inode_to_wb_and_lock_list during OS writeback.

DESCRIPTION:
There is a race condition between OS writeback, inode eviction, and VxFS lookup. VxFS lookup initialises the OS inode unconditionally, even when it is held or exposed by the OS; as a result, i_state is cleared, which breaks the serialisation (implemented by checking whether I_SYNC is set in i_state) between OS writeback and inode eviction, hence the race. The panic stack trace is as follows:

machine_kexec at ffffffffac867cfe
__crash_kexec at ffffffffac9ad94d
crash_kexec at ffffffffac9ae881
oops_end at ffffffffac8274f1
no_context at ffffffffac879a03
__bad_area_nosemaphore at ffffffffac879d64
do_page_fault at ffffffffac87a617
page_fault at ffffffffad20111e
    [exception RIP: locked_inode_to_wb_and_lock_list+28]
writeback_sb_inodes at ffffffffacb858b4
__writeback_inodes_wb at ffffffffacb85b0f
wb_writeback at ffffffffacb85dcb
get_nr_inodes at ffffffffacb6d765
wb_workfn at ffffffffacb86c8a
process_one_work at ffffffffac910167
worker_thread at ffffffffac910820

RESOLUTION:
Modified the code so that the OS inode is not re-initialised when it is under writeback or exposed by the OS.

* 4189599 (Tracking ID: 3743572)

SYMPTOM:
When the number of inodes (regular files and directories together) of a clustered file system exceeds the 1-billion limit, the CFS secondary node may hang indefinitely when trying to allocate more inodes with the following stack traces:

vx_svar_sleep_unlock
vx_event_wait
vx_async_waitmsg
vx_msg_send
llt_msgalloc
vx_cfs_getias
vx_update_ilist
vx_find_partial_au
vx_cfs_noinode
vx_noinode
vx_dircreate_tran
vx_pd_create
vx_dirlook
vx_create1_pd
vx_create1
vx_create_vp
vx_create

DESCRIPTION:
The maximum number of inodes supported by VxFS is 1 billion.
And the maximum number of inode allocation units (IAU) is 16384.
When the file system is running out of inodes, and the maximum inode 
allocation unit(IAU) limit is reached, VxFS can still create two extra IAUs 
if there is a hole in the last IAU. Because of the hole, when a CFS secondary 
requests more inodes, the CFS primary still thinks there is a hole available and 
notifies the secondary to retry. However, the secondary fails to find a slot 
since the 1 billion limit is hit, then it goes back to the primary to 
request free inodes again, and this loops infinitely, hence the hang.

RESOLUTION:
When the maximum IAU number is reached, the primary is prevented from creating the extra IAUs.

* 4189600 (Tracking ID: 4189333)

SYMPTOM:
File size reported incorrectly after fallocate/truncate operations.

DESCRIPTION:
With vx_falloc_clear=1, file size inconsistencies occurred due to improper block deallocation after truncate/fallocate.

RESOLUTION:
Code fixed to ensure correct inode size updates and block handling after truncate/fallocate operations.

* 4189601 (Tracking ID: 4120787)

SYMPTOM:
Data corruption issues with parallel direct IO on ZFOD extents.

DESCRIPTION:
1. Data loss occurs when a direct IO spans over more than two adjacent ZFOD extents and those extents are not split correctly in vx_pdio_wait().
2. Single IO can get split into multiple I/Os without block alignment if the required pages exceed a predefined limit, leading to data corruption due to improper handling of adjacent IOs and extent clearing.

RESOLUTION:
Code changes are done to ensure that even if the IO spans multiple ZFOD extents, all extents are correctly split.
Eliminated redundant calls to the function which clear the zfod extent hence preventing the unnecessary clearing of data from other IOs.

* 4189603 (Tracking ID: 4187096)

SYMPTOM:
Orphaned symlinks were not getting replicated in VFR.

DESCRIPTION:
During replication sync, an optimisation makes VxFS send attributes/mtime/ACLs during pass2 itself, in the last data record of the inode. A normal symlink has no data, so symlink inodes should not be processed during the data pass (pass2); with this optimisation, however, VFR was sending attributes/mtime for symlinks in pass2. Fetching attributes/ACLs works for a normal symlink but fails for a broken one, because the lookup follows the path to get the ACLs. VFR should not check ACLs on any symlink, as they are the same as those of the original file.

RESOLUTION:
Code is modified to avoid checking acls on symlinks.

* 4189604 (Tracking ID: 4184953)

SYMPTOM:
mkfs may generate coredump with signal SIGSEGV.
Stack trace looks like following:
#0  find_space
#1  place_extents
#2  make_fs
#3  main

DESCRIPTION:
The simple extent allocator in mkfs requires alignment on an 8-block boundary for requests larger than 8 blocks, but the actual allocation of the aux bitmap is not always 8-block aligned. Hence, the extent allocator can search beyond the actual size of the aux bitmap and cause a segfault.

RESOLUTION:
Code changes have been made to align the block calculation to an 8-block boundary.

* 4189605 (Tracking ID: 4188417)

SYMPTOM:
Improper handling of null fel pointer in recovery context.

DESCRIPTION:
Improper handling of null fel pointer in recovery context.

RESOLUTION:
Code changes have been done to properly handle null fel pointer.

* 4189607 (Tracking ID: 4189606)

SYMPTOM:
SecureFS failed to create checkpoints as per the schedule.

DESCRIPTION:
SecureFS can be configured to create checkpoints at a specified interval. The user can also specify how many checkpoints should exist at any point in time, so that the filesystem does not reach ENOSPC. To preserve that number of checkpoints, each scheduled creation also deletes the oldest checkpoint. This deletion was synchronous; because it involves inode processing and disk I/O, it could take long enough with a large number of inodes to miss the next schedule. Deleting the checkpoint asynchronously lets the activity run in the background, even across a node reboot.

RESOLUTION:
Code changes remove the checkpoint asynchronously, which unblocks the next scheduled checkpoint creation.

* 4189642 (Tracking ID: 4127771)

SYMPTOM:
A full fsck hits an assert and fails with "run_fsck : Failed to full fsck cleanly, exiting".

DESCRIPTION:
During fileset removal, if the file system gets disabled before the freeze (while marking inodes as PTI), FCL transactions can be seen followed by a superblock free transaction.

RESOLUTION:
The assert was not true in certain cases and has been removed.

* 4189648 (Tracking ID: 4142106)

SYMPTOM:
When running fsck -n, a warning appears about an incorrect allocation unit (AU) state, even though the on-disk AU state is correct.

DESCRIPTION:
This issue happens when fsck -n processes a snapshot inode before the original file inode. Because of this, fsck marked some AUs as dirty even though their on-disk state was fully allocated and clean.

RESOLUTION:
Code changes were added to fsck to correctly verify and update the AU state when processing inodes. This ensures the in-memory state of fully allocated AUs is accurate, preventing false warnings during fsck -n.

* 4189650 (Tracking ID: 4155954)

SYMPTOM:
Attribute data mismatch even if the node is owner.

DESCRIPTION:
Hlock may skip copying attribute data if the inode is bad; later, while taking the RWLOCK, the code assumes that if the node is the owner of the inode it has correct attribute data.

RESOLUTION:
If the inode is bad or the file system is disabled, copying the attribute data is skipped.

* 4189652 (Tracking ID: 4188805)

SYMPTOM:
Online migration process threads hung in allocation.

DESCRIPTION:
During migration, no more than 2MB of memory can be allocated. If some threads are consuming that much memory, other threads must wait until memory becomes available.

RESOLUTION:
The buffer is now freed after the whole file is copied; this was missing in a certain case.

* 4189655 (Tracking ID: 4189654)

SYMPTOM:
The mount command returns its usage help because it is not able to parse multi-category security SELinux contexts.

DESCRIPTION:
The VxFS mount binary does not support multi-category security SELinux contexts such as "system_u:object_r:container_file_t:s0:c7,c28".

RESOLUTION:
Code changes allow the mount command to take a comma-separated multi-category security SELinux context as a mount suboption and mount the filesystem accordingly.
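The parsing problem can be sketched as follows (get_context_value is a hypothetical illustration, not the actual mount.vxfs parser): a quoted context value must be treated as a single token, so that the comma inside the MCS category list "s0:c7,c28" is not mistaken for a suboption separator.

```c
#include <assert.h>
#include <string.h>

/* Sketch under assumptions: extract the value of a context= suboption,
 * honouring the surrounding quotes so embedded commas survive. A naive
 * comma-split parser would truncate the context at "...:s0:c7". */
static int get_context_value(const char *opts, char *out, size_t outlen)
{
    const char *p = strstr(opts, "context=\"");
    const char *q;

    if (p == NULL)
        return -1;
    p += strlen("context=\"");
    q = strchr(p, '"');               /* closing quote ends the value */
    if (q == NULL || (size_t)(q - p) >= outlen)
        return -1;
    memcpy(out, p, (size_t)(q - p));
    out[q - p] = '\0';
    return 0;
}
```

For example, given the option string `rw,context="system_u:object_r:container_file_t:s0:c7,c28",noatime`, the full context including both categories is recovered as one value.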

* 4189656 (Tracking ID: 4179548)

SYMPTOM:
A few in-core buffers may remain unflushed, leading to potential audit log inconsistency in the event of a system panic or crash.

DESCRIPTION:
During audit log file growth, the vx_multi_bufflush function may skip flushing some buffers if the filesystem block size (f_bsize) is less than 8KB. This is due to the use of a fixed threshold (IADDREXTSIZE, set to 8KB) instead of the actual block size, which can leave dirty buffers in memory. If the system panics before the next scheduled flush, these buffers may never be written to disk, resulting in audit log inconsistencies.

RESOLUTION:
fs->f_bsize is used instead of IADDREXTSIZE (8KB) in vx_multi_bufflush so that buffer flushing aligns with the actual filesystem block size.

* 4189659 (Tracking ID: 4180012)

SYMPTOM:
The fsck utility generated a core dump due to a race between multiple fsck threads. This happens when one thread is printing progress status to the psr file while another has already finished.

DESCRIPTION:
There is a race between multiple fsck threads: one thread is printing progress status while another has already finished and closed the psr file, so the printing thread accesses an already closed file pointer.

RESOLUTION:
Cancelled the thread that prints progress status before closing the psr file.
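The ordering that the fix enforces can be sketched as follows. All names are hypothetical, not the fsck source, and the sketch uses a stop flag plus pthread_join rather than cancellation; the actual fix cancels the thread, which achieves the same "stop before close" ordering.

```c
#include <pthread.h>
#include <stdio.h>

static FILE *psr_fp;                 /* the progress (psr) file */
static volatile int stop_requested;  /* set by the shutdown path */

static void *progress_printer(void *arg)
{
    (void)arg;
    while (!stop_requested)
        fprintf(psr_fp, "progress...\n");  /* safe: file closed only after join */
    return NULL;
}

/* Stop and join the printer thread BEFORE closing the psr file, so it
 * can never write through a closed FILE pointer. */
int shutdown_progress(pthread_t tid)
{
    stop_requested = 1;
    if (pthread_join(tid, NULL) != 0)   /* thread can no longer touch psr_fp */
        return -1;
    fclose(psr_fp);
    psr_fp = NULL;
    return 0;
}
```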

* 4189663 (Tracking ID: 4181952)

SYMPTOM:
An assert is hit in vx_bufinval when the first extent (offset 0) of an FCL file has more than one block associated with it.

DESCRIPTION:
vx_bufinval expects the offset to be non-zero because the first extent holds the FCL header. If the offset is zero, it hits an assert.

RESOLUTION:
Added a check in vx_fcl_truncate for offset == 0 to update the offset to fs_bsize.

* 4189664 (Tracking ID: 4181957)

SYMPTOM:
SELinux denied messages seen while running LM Conformance on RHEL9.4

DESCRIPTION:
SELinux denied messages (avc: denied) seen while running LM Conformance on RHEL9.4 with operations related to fsadm and mount binaries.

RESOLUTION:
Added code changes to grant the fsadm and mount binaries the permissions required to access the necessary files and directories.

* 4189665 (Tracking ID: 4182162)

SYMPTOM:
Creating a modifiable checkpoint of a WORM checkpoint using the "fsckptadm createall" command on multiple file systems fails with EROFS and ENOTSUP.

DESCRIPTION:
Creating a modifiable checkpoint of a WORM checkpoint using the "fsckptadm createall" command on multiple file systems fails with EROFS and ENOTSUP.

RESOLUTION:
Code changes have been done to allow creation of a modifiable checkpoint of a WORM checkpoint using the "fsckptadm createall" command on multiple file systems.

* 4189668 (Tracking ID: 4189667)

SYMPTOM:
VxFS medium-impact Coverity issues.

DESCRIPTION:
Integer overflow in vx_aioctl_device() and do_restore()

RESOLUTION:
Added checks for integer overflow before accessing the affected variables.
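The general pattern behind such fixes is to validate that an arithmetic result cannot overflow before computing it. The helper below is a generic sketch, not the VxFS code inside vx_aioctl_device() or do_restore():

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Returns non-zero if count * size would overflow size_t. Checking
 * against SIZE_MAX / size BEFORE multiplying avoids the overflow the
 * check is meant to detect.
 */
int mul_would_overflow(size_t count, size_t size)
{
    return size != 0 && count > SIZE_MAX / size;
}
```

A caller would reject the request (e.g. return EINVAL) when the helper reports an overflow, instead of allocating or indexing with a wrapped-around value.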

* 4189669 (Tracking ID: 4182897)

SYMPTOM:
The LM CMDS->metasave test case fails while running metasave restore.

DESCRIPTION:
Invalid superblock read is encountered during metasave restore for metaversion 8 and earlier.

RESOLUTION:
Added superblock read based on metaversion.

* 4189672 (Tracking ID: 4188816)

SYMPTOM:
The file system is disabled and migration fails while doing direct writes to a file when migration is in progress.

DESCRIPTION:
The dentry_open code was setting the flags argument to 0 instead of O_RDWR, so FMODE_WRITE was not set in f_mode.

RESOLUTION:
Clear the O_ACCMODE bits before setting the O_RDWR flag.
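The masking step can be shown in a one-function sketch (the helper name is illustrative, not the VxFS symbol): clearing O_ACCMODE first ensures a stale access mode of 0 (O_RDONLY) or O_WRONLY cannot combine with O_RDWR.

```c
#include <fcntl.h>

/* Force the access mode to read-write while preserving all other flags
 * (O_APPEND, O_DIRECT, ...). O_ACCMODE masks the access-mode bits. */
int force_rdwr(int flags)
{
    flags &= ~O_ACCMODE;    /* drop any existing access mode */
    return flags | O_RDWR;  /* the kernel then sets FMODE_READ|FMODE_WRITE */
}
```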

* 4189675 (Tracking ID: 4187574)

SYMPTOM:
'vxfstaskd' generates a core dump intermittently when a user unmounts a VxFS file system where SecureFS was configured.

DESCRIPTION:
'vxfstaskd' generates a core dump intermittently when a user unmounts a VxFS file system where SecureFS was configured. It happens due to an internal race between the respective worker threads, causing the "vxfstaskd" daemon to access invalid/freed memory.

RESOLUTION:
Code changes have been made in the "vxfstaskd" daemon to resolve the issue.

* 4189676 (Tracking ID: 4187819)

SYMPTOM:
vxlist times out or takes around 5 minutes on 8.0.2.1500 VxVM/8.0.2.520 VRTSsfmh (RHEL 8.10).

DESCRIPTION:
vxlist times out or takes around 5 minutes on 8.0.2.1500 VxVM/8.0.2.520 VRTSsfmh (RHEL 8.10). This can happen because dcli is overloaded by unnecessary executions of the "vxsnap" command by the "vxfstaskd" daemon for every mounted file system.

RESOLUTION:
Code changes have been made to stop the unnecessary invocation of the "vxsnap" binary on mounted VxFS file systems where SecureFS is not configured.

* 4189677 (Tracking ID: 4188282)

SYMPTOM:
While performing the LM Noise replay worm test, the system exits due to a failure to complete a full fsck cleanly.

DESCRIPTION:
In the function vx_upg16_fill_aulog, nblks = IADDREXTSIZE >> fs->fs_bshift. Later in the code, nblks is used in both vx_read_blk and vx_write_blk, which can cause problems when the extents are not aligned to 8KB.

RESOLUTION:
Fixed the length passed to vx_read_blk and vx_write_blk.

* 4189686 (Tracking ID: 4188813)

SYMPTOM:
The online migration process hangs when file operations such as remove, modify, and ftrunc are run in a loop on random files from a client while migration is running.

DESCRIPTION:
There was a deadlock: one thread was waiting for a lock on the inode, while the other thread, which held the lock, was waiting for a file copy that the first thread was to perform.

RESOLUTION:
Fixed the deadlock by adding a new migration flag.

* 4189702 (Tracking ID: 4189180)

SYMPTOM:
System hangs due to global LRU lock contention.

DESCRIPTION:
Parallel file system stat calls on a large system can cause heavy lock contention on the VxFS global LRU list, which is protected by a single global lock.

RESOLUTION:
To avoid the contention, multiple LRU lists are introduced. An unused dentry is placed on an LRU list based on the hash of its file name, and the lists are pruned in a round-robin manner.
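The hashing and round-robin scheme can be sketched as follows; the list count, hash function, and names are illustrative assumptions, not the VxFS implementation:

```c
#define NLRU 16u  /* illustrative number of LRU lists, not the VxFS value */

/* Pick an LRU list for a dentry from a hash of its file name (djb2).
 * Stat-heavy workloads then contend on NLRU locks instead of one. */
unsigned lru_index(const char *name)
{
    unsigned h = 5381u;
    while (*name)
        h = h * 33u + (unsigned char)*name++;
    return h % NLRU;
}

/* Round-robin pruning cursor: each prune pass takes the next list. */
unsigned next_prune_list(unsigned *cursor)
{
    unsigned idx = *cursor % NLRU;
    *cursor = (idx + 1) % NLRU;
    return idx;
}
```

Any stable hash works here; the point is only that the same file name always maps to the same list, while different names spread across all NLRU lists.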

* 4189792 (Tracking ID: 4189761)

SYMPTOM:
A use-after-free memory corruption occurred.

DESCRIPTION:
VxFS added set_acl()/get_acl() support that uses the local ACL cache on SLES15 SP6 and RHEL9. After an ACL cache is created, the Linux kernel releases it when the inode is destroyed, but without clearing the inode's ACL fields, leaving a dangling pointer. A later, redundant clearing of the ACL cache during VxFS OS-inode deinit then accesses the dangling pointer and releases the ACL again, causing the memory corruption.

RESOLUTION:
The fix removes the unnecessary clearing of the ACL cache during VxFS inode free and resets the inode's ACL fields after the cached ACL is released by the Linux kernel.

* 4190077 (Tracking ID: 4116377)

SYMPTOM:
When a file system check is run on a volume, it shows an invalid offset for audit log records.

DESCRIPTION:
The file system provides a way to audit certain types of operations and records that data in a specific format. A file system check verifies that the written records are in a sane format. Due to a wrong calculation during this check, correct records were reported as invalid, producing warning messages like the following:

  fix invalid aulog end offset? prev end offset: 1572800 new offset: 1507328 (ynq)n

RESOLUTION:
Corrected the calculation performed during format validation.

* 4190275 (Tracking ID: 4190241)

SYMPTOM:
Customer is seeing I/O errors:
===============
Jun 11 13:58:52 kernel: attempt to access beyond end of device

Jun 11 13:58:52 kernel: VxVM22002: rw=1, want=36028797016252418, limit=209639424

Jun 11 13:58:52 kernel: vxfs: msgcnt 4 mesg 038: V-2-38: vx_dataioerr - /dev/vx/dsk/vgd_share/sw_lv file system file data write error in dev/block 0/36028797016252416

DESCRIPTION:
During direct I/O writes, incorrect alignment of a 64-bit file offset with a 32-bit block size leads to the calculation of an invalid block number, resulting in I/O attempts beyond the device boundary and triggering vx_dataioerr.

RESOLUTION:
Type cast the block size value to 64-bit in the alignment calculation to ensure correct offset alignment and prevent invalid block access.
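The truncation and the cast can be shown in a small, self-contained sketch; the function names are hypothetical, not the VxFS symbols. In the buggy form the alignment mask is computed in 32 bits, so the high half of the offset is discarded, producing a wildly wrong block address (compare the "want=36028797016252418" value in the log above).

```c
#include <stdint.h>

/* Buggy: ~(bsize - 1) is evaluated as a 32-bit mask, so the & operation
 * zeroes the upper 32 bits of the 64-bit offset. */
uint64_t align_down_buggy(uint64_t off, uint32_t bsize)
{
    return off & ~(bsize - 1);
}

/* Fixed: casting the block size to 64-bit widens the mask, preserving
 * the high half of the offset. */
uint64_t align_down_fixed(uint64_t off, uint32_t bsize)
{
    return off & ~((uint64_t)bsize - 1);
}
```

For an offset above 4 GiB, the buggy version returns only the low 32 bits of the aligned offset, which a later byte-to-block conversion turns into a block number far outside the device.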

Patch ID: VRTSvxfs-8.0.2.2500

* 4189227 (Tracking ID: 4189228)

SYMPTOM:
Security vulnerabilities exist in the third-party components [zlib, libexpat] used by VxFS.

DESCRIPTION:
VxFS uses the third-party components [zlib, libexpat] with some security vulnerabilities.

RESOLUTION:
VxFS is updated to use a newer version of third-party components [zlib, libexpat] in which the security vulnerabilities have been 
addressed.

Patch ID: VRTSvxfs-8.0.2.2400

* 4141853 (Tracking ID: 4141854)

SYMPTOM:
Conformance->fsadm hits coredump.

DESCRIPTION:
The issue was caused by recursion in the EO logging code.

RESOLUTION:
The fix adds a variable that prevents the code from recursing.

* 4164637 (Tracking ID: 4164638)

SYMPTOM:
The VxFS fsck binary consumes a lot of memory.

DESCRIPTION:
Thread-local heap memory was not freed, causing fsck to unnecessarily hold a large amount of user-space memory, which might degrade overall system performance.

RESOLUTION:
Made code changes to free the thread-local heap memory.

* 4177627 (Tracking ID: 4160991)

SYMPTOM:
A freed address that is still present in the mlink is accessed.

DESCRIPTION:
If an error occurs while flushing, writing transactions to the disk is skipped, which can ultimately lead to accessing freed memory.

RESOLUTION:
Code changes made to resolve the issue.

* 4177656 (Tracking ID: 4167362)

SYMPTOM:
Memory leak observed in fsck through valgrind.

DESCRIPTION:
A memory leak is observed when running the fsck binary under valgrind.

RESOLUTION:
Code changes made to handle the memory leaks.

* 4177657 (Tracking ID: 4144669)

SYMPTOM:
An assert is hit for clone inodes when the file system is not frozen and the fullfsck flag is set.

DESCRIPTION:
An assert is hit for clone inodes when the file system is not frozen and the fullfsck flag is set.

RESOLUTION:
Code changes made to resolve the issue.

Patch ID: VRTSvxfs-8.0.2.2200

* 4177662 (Tracking ID: 4171368)

SYMPTOM:
Node panicked while unmounting filesystem.

DESCRIPTION:
Panic in iput() due to an invalid address in i_sb. If a nested unmount is stuck and the parent mount is unmounted simultaneously, the parent is force-unmounted and its superblock is cleared. The superblock address may then be reused and freed by another module, leaving i_sb with an invalid address.

iput 
vx_os_iput_enqueue  [vxfs]
vx_do_unmount  [vxfs]
vx_unmount  [vxfs]
generic_shutdown_super 
kill_block_super 
vx_kill_sb  [vxfs]

RESOLUTION:
Code changes have been made to resolve this issue.

* 4177663 (Tracking ID: 4168443)

SYMPTOM:
System panicked at vx_clonemap after a smap corruption. Following is the panic stack

[exception RIP: vx_clonemap+317]
#0 vx_tflush_map at ffffffffc0dd7b58 [vxfs]
#1 vx_fsq_flush at ffffffffc0ff05c9 [vxfs]
#2 vx_fsflush_fsq at ffffffffc0ff2c5f [vxfs]
#3 vx_workitem_process at ffffffffc0eef9ea [vxfs]
#4 vx_worklist_process at ffffffffc0ef8775 [vxfs]
#5 vx_worklist_thread at ffffffffc0ef8828 [vxfs]
#6 vx_kthread_init at ffffffffc0f7f8e4 [vxfs]
#7 kthread at ffffffff810b3fb1
#8 kthread_create_on_node at ffffffff810b3ee0
#9 ret_from_fork at ffffffff816c0537

DESCRIPTION:
There was no proper error handling for the case where an smap is marked bad while changing its state in vx_smapchange. This can lead to a system panic while trying to flush an emap, as the emap buffer will not be present in-core.

RESOLUTION:
Code changes have been done to handle smap mapbad error properly.

* 4177664 (Tracking ID: 4175488)

SYMPTOM:
DB2 hang seen with following stacktrace

 #0 __schedule
 #1 schedule
 #2 vx_svar_sleep_unlock
 #3 vx_rwsleep_rec_lock
 #4 vx_recsmp_rangelock
 #5 vx_irwlock
 #6 vx_irwglock
 #7 vx_setcache
 #8 vx_uioctl
 #9 vx_unlocked_ioctl

DESCRIPTION:
The VxFS CIO advisory is set to improve performance by enabling concurrent reads and writes on a file. If CIO advisory is being set on a file while another thread is doing a read on the same file/inode (by locking it in SHARED mode) then there can be a condition where the read thread can incorrectly miss unlocking the file and do its processing and exit. As the read thread misses releasing the lock, the inode remains locked in SHARED mode. Later when another thread tries to set CIO advisory to the same file, it needs to lock the inode in EXCLUSIVE mode and it conflicts as the lock is already taken in SHARED mode and never released. This could cause this thread to hang indefinitely.

RESOLUTION:
Code changes have been done to fix the missing unlock.

Patch ID: VRTSvxfs-8.0.2.2100

* 4144078 (Tracking ID: 4142349)

SYMPTOM:
Using sendfile() on VxFS file system might result in hang with following stack trace.
schedule()
mutex_lock()
vx_splice_to_pipe()
vx_splice_read()
splice_file_to_pipe()
do_sendfile()
do_syscall()

DESCRIPTION:
VxFS code erroneously tries to take pipe lock twice in the splice read code path, which might result in hang when sendfile() system call is used.

RESOLUTION:
VxFS now uses generic_file_splice_read() instead of own implementation for splice read.

* 4162063 (Tracking ID: 4136858)

SYMPTOM:
The ncheck utility generated a core dump while running on a corrupted file system due to the absence of a sanity check for directory inodes.

DESCRIPTION:
The ncheck utility generated a core dump while running on a corrupted file system due to the absence of a sanity check for directory inodes.

RESOLUTION:
Added a basic sanity check for directory inodes. The utility no longer generates a core dump on a corrupted file system and instead exits gracefully on error.

* 4162064 (Tracking ID: 4121580)

SYMPTOM:
Modification operations are allowed on a checkpoint despite the WORM flag being set.

DESCRIPTION:
If a checkpoint is mounted on one node in RW (read-write) mode and the WORM flag is being set from another node, the modification is allowed.

RESOLUTION:
Issue is fixed with the code change.

* 4162065 (Tracking ID: 4158238)

SYMPTOM:
vxfsrecover command exits with error if the previous invocation terminated abnormally.

DESCRIPTION:
vxfsrecover command exits with error if the previous invocation terminated abnormally due to missing cleanup in the binary.

RESOLUTION:
Code changes have been done to perform cleanup properly in case of abnormal termination of "vxfsrecover" process.

* 4162066 (Tracking ID: 4156650)

SYMPTOM:
Stale checkpoint entries will remain.

DESCRIPTION:
For example, if there are checkpoints T1 (newest), T2, T3 ... TN (oldest) and recovery happens from T3, then T1 and T2 will never be deleted because their information has been lost.

RESOLUTION:
Code changes have been done in the vxfstaskd binary to avoid the above issue.

* 4162220 (Tracking ID: 4099775)

SYMPTOM:
System panic.

DESCRIPTION:
The panic is caused by a race between two or more threads trying to extend the per-node quota file.

RESOLUTION:
Code is modified to handle this race condition.

* 4163183 (Tracking ID: 4158381)

SYMPTOM:
Server panicked with "Kernel panic - not syncing: Fatal exception"

DESCRIPTION:
The server panicked due to access of a freed dentry; the dentry's hlist had also been corrupted.
There is a difference between the VxFS dentry implementation and the kernel's equivalent: the VxFS implementations of find_alias and splice_alias are based on old kernel versions of d_find_alias and d_splice_alias.
They need to be kept in sync with the newer kernel code to avoid such issues.

RESOLUTION:
Addressed the difference between the VxFS dentry functions (splice_alias, find_alias) and their kernel equivalents by making the VxFS implementations match the current kernel code.

* 4164090 (Tracking ID: 4163498)

SYMPTOM:
Veritas File System df command logging does not have sufficient permission while validating the tunable configuration.

DESCRIPTION:
While updating Veritas File System df command logging, the code checks the permission mode against the tunable configuration to update the log with the respective permission. The permission used for checking the tunable configuration is not correct: when logging to vxfs_cmdlog, the df_vxfs utility checks the eo_perm tunable configuration in read-write mode, and the tunable configuration file does not have sufficient permission for that mode. This surfaces as an EO df_vxfs error with the flex metrics-storage service when the root file system is mounted read-only, because the vxtunefs command tries to update the tunable value in the configuration file located in /etc/vx/vxfssystemdirectory based on the eo_perm tunable.

[root@nbapp862 vx]# mount | grep /
/dev/dm-8 on / type ext4 (ro,relatime,seclabel,stripe=16)
/dev/dm-8 on /etc/opt/veritas type ext4 (rw,relatime,seclabel,stripe=16)

RESOLUTION:
Updated the code to set the permission correctly.

* 4164270 (Tracking ID: 4156384)

SYMPTOM:
The file system's metadata can get corrupted due to a missing transaction in the intent log, which can result in a mount failure in some scenarios.

DESCRIPTION:
The file system's metadata can get corrupted due to a missing transaction in the intent log. This can result in a mount failure in some scenarios, and is not limited to mount failure: it may cause further corruption in the file system metadata.

RESOLUTION:
Code changes have been done in reconfig code path to add a missing transaction which will prevent redundant replay of already done transactions which was causing the issue.

* 4165966 (Tracking ID: 4165967)

SYMPTOM:
The mount and fsck commands face a few SELinux permission denials.

DESCRIPTION:
The mount and fsck commands face SELinux permission denials when reading files with the default file type and managing files in the /etc directory.

RESOLUTION:
Required SELinux permissions are added so that the mount and fsck commands can read files with the default file type and manage files in the /etc directory.

* 4166501 (Tracking ID: 4163862)

SYMPTOM:
Mutex lock contention is observed in cluster file system under massive file creation workload

DESCRIPTION:
Mutex lock contention is observed in a cluster file system under a massive file creation workload. This mutex lock is used to access and modify the delegated inode allocation unit list on cluster file system nodes. As multiple processes creating new files need to read this list, they contend for this mutex lock. A stack trace like the following is observed in the file creation code path.

__mutex_lock
vx_dalist_getau
vx_cfs_inofindau
vx_ialloc
vx_dircreate_tran
vx_int_create
vx_do_create
vx_create1
vx_create_vp
vx_create
vfs_create

RESOLUTION:
The mutex lock for the delegated inode allocation unit list is converted to a read-write fast sleep lock, allowing file-creation processes to access the list in parallel.

* 4166502 (Tracking ID: 4163127)

SYMPTOM:
Spinlock contention observed during inode allocation for massive file creation operation on cluster file system.

DESCRIPTION:
While a file creation operation happens on a cluster node, the flag for the inode allocation unit is accessed under the protection of the inode allocation spinlock. Hence the contention is seen with the following code path:
 
vx_ismapdelfull
vx_get_map_dele     
vx_mdele_tryhold
vx_cfs_inofindau
vx_ialloc
vx_dircreate_tran
vx_int_create
vx_do_create
vx_create1
vx_create_vp
vx_create
vfs_create

RESOLUTION:
Code changes have been done to reduce the contention on inode allocation spinlock.

* 4166503 (Tracking ID: 4162810)

SYMPTOM:
Spikes in CPU usage of glm threads were observed in output of "top" command during massive file creation workload on cluster file system.

DESCRIPTION:
When running a massive file creation workload from cluster nodes, a huge number of GLM threads were observed in the top command output. These GLM threads contend for a global spinlock and consume CPU cycles with the following stack trace.
native_queued_spin_lock_slowpath()
_raw_spin_lock_irqsave()
vx_glmlist_thread()
vx_kthread_init()
kthread()

RESOLUTION:
Code is modified to split the heavily contended global spinlock by priority level.

* 4168357 (Tracking ID: 4076646)

SYMPTOM:
Unprivileged memory can get corrupted by VxFS in case the inode size is 512 bytes and the inode's attributes reside in its immediate area.

DESCRIPTION:
Unprivileged memory can get corrupted by VxFS in case the inode size is 512 bytes and the inode's attributes reside in its immediate area.

RESOLUTION:
Code changes have been done in the attribute code path to ensure that the free space in the attribute area never exceeds the length of that area.

* 4172054 (Tracking ID: 4162316)

SYMPTOM:
System PANIC

DESCRIPTION:
CrowdStrike Falcon might trigger a kernel panic during migration from a native file system to VxFS.

RESOLUTION:
The required fix has been added to the VxFS code.

* 4172753 (Tracking ID: 4173685)

SYMPTOM:
The fsck command faces a few SELinux permission denials.

DESCRIPTION:
The fsck command faces SELinux permission denials when writing to user temporary files.

RESOLUTION:
Required SELinux permissions are added so that the fsck command can write to user temporary files.

* 4173064 (Tracking ID: 4163337)

SYMPTOM:
Intermittent df slowness seen across cluster due to slow cluster-wide file system freeze.

DESCRIPTION:
For certain workload, intent log reset can happen relatively frequently and whenever it happens it will trigger cluster-wide freeze. If there are a lot of dirty buffers that need flushing and invalidation, then the freeze might take long time to finish. The slowest part in the invalidation of cluster buffers is the de-initialisation of its glm lock which requires lots of lock release messages to be sent to the master lock node. This can cause flowcontrol to be set at LLT layer and slow down the cluster-wide freeze and block commands like df, ls for that entire duration.

RESOLUTION:
Code is modified to avoid buffer flushing and invalidation in case freeze is triggered by intent log reset.

* 4174242 (Tracking ID: 4174538)

SYMPTOM:
The mount and fsck commands face a few SELinux permission denials.

DESCRIPTION:
The mount and fsck commands face SELinux permission denials when writing to user temporary files.

RESOLUTION:
Required SELinux permissions are added so that the mount and fsck commands can write to user temporary files.

* 4174244 (Tracking ID: 4174539)

SYMPTOM:
The fsck command faces a few SELinux permission denials.

DESCRIPTION:
The fsck command faces SELinux permission denials when managing files with the default file type.

RESOLUTION:
Required SELinux permissions are added so that the fsck command can manage files with the default file type.

Patch ID: VRTSvxfs-8.0.2.1700

* 4158756 (Tracking ID: 4158757)

SYMPTOM:
VxFS module failed to load on RHEL-8.10 kernel.

DESCRIPTION:
This issue occurs due to changes in the RHEL-8.10 kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the kernel and load as expected on RHEL-8.10 kernel.

* 4159284 (Tracking ID: 4145203)

SYMPTOM:
The VxFS startup script fails to invoke Veki for kernel versions higher than 3.

DESCRIPTION:
The VxFS startup script failed to start Veki because it called the System V init script instead of the systemctl interface.

RESOLUTION:
The code now checks whether the kernel version is greater than 3.x; if systemd is present, the systemctl interface is used, otherwise the System V interface.

* 4159938 (Tracking ID: 4155961)

SYMPTOM:
System panic due to null i_fset in vx_rwlock().

DESCRIPTION:
Panic in vx_rwlock() due to a race between the vx_rwlock() and vx_inode_deinit() functions.
Panic stack
[exception RIP: vx_rwlock+174]
.
.
#10  __schedule
#11  vx_write
#12  vfs_write
#13  sys_pwrite64
#14  system_call_fastpath

RESOLUTION:
Code changes have been done to fix this issue.

* 4160325 (Tracking ID: 4160740)

SYMPTOM:
The fsck command faces a few SELinux permission denials.

DESCRIPTION:
The fsck command faces SELinux permission denials when managing generic files in /etc.

RESOLUTION:
Required SELinux permissions are added so that the fsck command can manage generic files in /etc.

* 4160326 (Tracking ID: 4160742)

SYMPTOM:
The mount and fsck commands face a few SELinux permission denials.

DESCRIPTION:
The mount and fsck commands face SELinux permission denials when managing and setting the attributes of /var directories.

RESOLUTION:
Required SELinux permissions are added so that the mount and fsck commands can manage and set the attributes of /var directories.

* 4161120 (Tracking ID: 4161121)

SYMPTOM:
A non-root user is unable to access log files under the /var/log/vx directory.

DESCRIPTION:
In the VxFS post-install script, the /var/log/vx directory is created with 0600 permissions, so non-root users are unable to read log files in this location.
As part of the EO log file permission tunable changes, the log file permissions change as expected, but because of the 0600 directory permission, non-root users are still unable to access these log files.

RESOLUTION:
Changed the /var/log/vx directory permission to 0755.

Patch ID: VRTSvxfs-8.0.2.1600

* 4157410 (Tracking ID: 4157409)

SYMPTOM:
Security vulnerabilities were observed in the current versions of the third-party components [sqlite and expat] used by VxFS.

DESCRIPTION:
In an internal security scan, security vulnerabilities in [sqlite and expat] were observed.

RESOLUTION:
Upgrading the third party components [sqlite and expat] to address these vulnerabilities.

Patch ID: VRTSvxfs-8.0.2.1500

* 4119626 (Tracking ID: 4119627)

SYMPTOM:
The fsck command faces a few SELinux permission denials.

DESCRIPTION:
The fsck command faces SELinux permission denials when managing var_log_t files and searching init_var_run_t directories.

RESOLUTION:
Required SELinux permissions are added so that the fsck command can manage var_log_t files and search init_var_run_t directories.

* 4146580 (Tracking ID: 4141876)

SYMPTOM:
The old SecureFS configuration is getting deleted.

DESCRIPTION:
Multiple instances of the vxschadm binary may run concurrently to update the configuration file, and the last updater is likely to overwrite the changes made by earlier instances.

RESOLUTION:
Added a synchronisation mechanism between vxschadm processes running across the InfoScale cluster.

* 4148734 (Tracking ID: 4148732)

SYMPTOM:
Memory leak in binaries/daemons that call this API, e.g. the vxfstaskd daemon.

DESCRIPTION:
On every call, get_dg_vol_names() fails to free 8192 bytes of memory, which increases the total virtual memory consumption of vxfstaskd.

RESOLUTION:
Free the unused memory.

* 4150065 (Tracking ID: 4149581)

SYMPTOM:
WORM checkpoints and files are not deleted even after their retention period has expired.

DESCRIPTION:
Frequent FS freeze operations, such as checkpoint creation, may cause the SecureClock to drift from its regular update cycle.

RESOLUTION:
Fixed the SecureClock drift issue.

Patch ID: VRTSvxfs-8.0.2.1400

* 4141666 (Tracking ID: 4141665)

SYMPTOM:
Security vulnerabilities exist in the Zlib third-party components used by VxFS.

DESCRIPTION:
VxFS uses Zlib third-party components with some security vulnerabilities.

RESOLUTION:
VxFS is updated to use a newer version of Zlib third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.2.1200

* 4121230 (Tracking ID: 4119990)

SYMPTOM:
Some nodes in cluster are in hang state and recovery is stuck.

DESCRIPTION:
There is a deadlock in which one thread locks the buffer and waits for recovery to complete, while recovery may get stuck flushing and invalidating buffers from the buffer cache because it cannot lock that buffer.

RESOLUTION:
If recovery is in progress, release the buffer and return VX_ERETRY; callers will retry the operation. In cases where a lock is taken on two buffers, pass the VX_NORECWAIT flag, which retries the operation after releasing both buffers.

* 4123715 (Tracking ID: 4113121)

SYMPTOM:
The VxFS module fails to load on RHEL8.8.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.8 kernel.

RESOLUTION:
Updated VXFS to support RHEL 8.8.

* 4125870 (Tracking ID: 4120729)

SYMPTOM:
Incorrect file replication (VFR) job status at the VFR target site while replication is in the running state at the source.

DESCRIPTION:
If a full sync is started in recovery mode, the state on the target is not updated at the start of replication (from failed to full-sync running). This missed state change causes issues with the states of subsequent incremental syncs.

RESOLUTION:
Updated the code to set the correct state on the target when a VFR full sync is started in recovery mode.

* 4125871 (Tracking ID: 4114176)

SYMPTOM:
After failover, job sync fails with error "Device or resource busy".

DESCRIPTION:
If a job was in the failed state on the target because of a job failure on the source side, repld did not update the job's state when it was restarted in recovery mode. As a result, the job state remained "running" even after successful replication on the target. With this state on the target, if the job was promoted, the replication process did not create a new checkpoint for the first sync after failover, which corrupted the state file on the new source. Because of this corrupt state file, job sync from the new source failed with the error "Device or resource busy".

RESOLUTION:
Code is modified to correct the state on the target when the job is started in recovery mode.

* 4125873 (Tracking ID: 4108955)

SYMPTOM:
VFR job hangs on source if thread creation fails on target.

DESCRIPTION:
On the target, if thread creation for pass completion fails because of high memory usage, the repld daemon does not send that failure reply to the source. This can leave the vxfsreplicate process waiting indefinitely for the pass-completion reply from the target, hanging the job on the source and requiring manual intervention to kill it.

RESOLUTION:
Code is modified to retry thread creation on the target; if it still fails after 5 retries, the target replies to the source with an appropriate error.

* 4125875 (Tracking ID: 4112931)

SYMPTOM:
vxfsrepld consumes a lot of virtual memory when it has been running for long time.

DESCRIPTION:
The current VxFS thread pool is not efficient for daemon processes like vxfsrepld. It does not release the underlying resources of newly created threads, which increases the virtual memory consumption of the process. Thread resources are released either when pthread_join() is called on them or when the threads are created with the detached attribute. In the current implementation, pthread_join() is called only when the thread pool is destroyed as part of cleanup, but vxfsrepld is not expected to call pool_destroy() every time a job succeeds; it is called only when repld is stopped. This led to thread resources accumulating and increasing the VM usage of the process.

RESOLUTION:
Code is modified to detach threads when they exit.
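The detach-on-exit pattern can be sketched as follows; the function names are illustrative, not the vxfsrepld source. A worker that detaches itself has its stack and descriptor reclaimed as soon as it returns, instead of accumulating until a final pthread_join() at pool destruction.

```c
#include <pthread.h>

/* Hypothetical pool worker: detach self so thread resources are freed
 * immediately on exit, with no pthread_join() required later. */
static void *pool_worker(void *arg)
{
    (void)arg;
    /* ... do the replication job's work here ... */
    pthread_detach(pthread_self());  /* resources reclaimed on return */
    return NULL;
}

int pool_spawn(pthread_t *tid)
{
    return pthread_create(tid, NULL, pool_worker, NULL);
}
```

An equivalent design creates the threads with PTHREAD_CREATE_DETACHED in the attributes, which the description also mentions; either way, never-joined threads stop accumulating joinable state.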

* 4125878 (Tracking ID: 4096267)

SYMPTOM:
Veritas File Replication jobs might fail when a large number of jobs run in parallel.

DESCRIPTION:
File replication jobs might fail when a large number of jobs are configured and running in parallel with Veritas File Replication. With a large number of jobs, there is a chance of referring to a job that has already been freed, which generates a core in the replication service and may fail the job.

RESOLUTION:
Updated the code to take a hold on the job while checking for an invalid job configuration.

* 4126104 (Tracking ID: 4122331)

SYMPTOM:
Block number, device id information, in-core inode state are missing from the error messages logged in syslog while marking a bitmap/inode as "BAD".

DESCRIPTION:
Block number, device id information, in-core inode state are missing from the error messages logged in syslog upon encountering bitmap corruption or while marking an inode "BAD".

RESOLUTION:
Code changes have been done to include required missing information in corresponding error messages.

* 4127509 (Tracking ID: 4107015)

SYMPTOM:
The VxFS module fails to load on linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-vxfs script to consider kernel-build version in exact-version-module version calculation.

* 4127510 (Tracking ID: 4107777)

SYMPTOM:
The VxFS module fails to load on linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-vxfs script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not present.

* 4127594 (Tracking ID: 4126957)

SYMPTOM:
If "fsadm -o mntunlock=<string> <mountpoint>" and "umount -f <mountpoint>" operations are run in parallel,
system may crash with following stack:

 vx_aioctl_unsetmntlock+0xd3/0x2a0 [vxfs]
 vx_aioctl_vfs+0x256/0x2d0 [vxfs]
 vx_admin_ioctl+0x156/0x2f0 [vxfs]
 vxportalunlockedkioctl+0x529/0x660 [vxportal]
 do_vfs_ioctl+0xa4/0x690
 ksys_ioctl+0x64/0xa0
 __x64_sys_ioctl+0x16/0x20
 do_syscall_64+0x5b/0x1b0

DESCRIPTION:
There is a race condition between these two operations: by the time the fsadm thread tries to access
the FS data structures, the umount operation may have already freed them, which leads to a panic.

RESOLUTION:
As a fix, the fsadm thread first checks if the umount operation is in progress. If so, it fails rather than continuing.
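Conceptually, the check amounts to testing an unmount-in-progress flag before touching per-filesystem state. A minimal sketch follows; fsinfo_t and FS_UNMOUNTING are hypothetical names, and the real code would also synchronize under a lock:

```c
#include <assert.h>

/* Illustrative version of the fix: before touching per-filesystem
 * state, the fsadm ioctl path bails out if a (forced) unmount has
 * already started. fsinfo_t and FS_UNMOUNTING are hypothetical
 * names; the actual code also takes a lock around this check. */
enum { FS_UNMOUNTING = 0x1 };

typedef struct fsinfo {
    int flags;
} fsinfo_t;

static int fs_ioctl_enter(const fsinfo_t *fs)
{
    if (fs->flags & FS_UNMOUNTING)
        return -1;   /* fail (e.g. EBUSY) instead of racing with umount */
    return 0;        /* safe to dereference the FS data structures */
}
```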

* 4127720 (Tracking ID: 4127719)

SYMPTOM:
The fsdb binary fails to open the device on a VVR secondary volume in read-write mode although it has write permissions. The fstyp binary could not dump the fs_uuid value.

DESCRIPTION:
We have observed that fsdb, when run on a VVR secondary volume, bails out.
At the file system level, the volume has write permission, but since it is a secondary from the VVR perspective, the block layer does not allow it to be opened in write mode.
In addition, the fstyp binary could not dump the fs_uuid value along with the other superblock fields.

RESOLUTION:
Fallback logic has been added to fsdb: if fs_open fails to open the device in read-write mode, it will try to open it in read-only mode. The fstyp binary has been fixed to dump the fs_uuid value along with the other superblock fields.
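The fallback pattern can be sketched in a few lines of C. open_dev_with_fallback is a hypothetical name; fsdb's actual fs_open logic is internal to the product:

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Sketch of the fsdb fallback: try read-write first, and if the
 * block layer refuses (as it does on a VVR secondary), retry
 * read-only. open_dev_with_fallback is an illustrative name. */
static int open_dev_with_fallback(const char *path, int *readonly)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) {
        fd = open(path, O_RDONLY);    /* fall back to read-only */
        if (fd >= 0 && readonly)
            *readonly = 1;            /* caller knows writes are off */
    } else if (readonly) {
        *readonly = 0;
    }
    return fd;
}
```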

* 4127785 (Tracking ID: 4127784)

SYMPTOM:
/opt/VRTS/bin/fsppadm validate /mnt4 invalid_uid.xml
UX:vxfs fsppadm: WARNING: V-3-26537: Invalid USER id 1xx specified at or near line 10

DESCRIPTION:
Before this fix, the fsppadm command did not stop parsing and treated an invalid UID/GID as a warning only. Here, an invalid UID/GID means one that is
not an integer number. If the given UID/GID does not exist, that is still only a warning.

RESOLUTION:
Code has been added to give the user a proper error when invalid user/group IDs are provided.
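The stricter check amounts to rejecting any ID string that is not entirely numeric. A minimal sketch, where is_numeric_id is a hypothetical helper name:

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the stricter validation: an ID such as "1xx" is now a
 * hard error because strtoul stops at the first non-digit character.
 * A numeric ID that maps to no existing user/group remains a warning
 * only. is_numeric_id is an illustrative name. */
static int is_numeric_id(const char *s)
{
    char *end;
    if (s == NULL || *s == '\0')
        return 0;
    (void)strtoul(s, &end, 10);
    return *end == '\0';   /* the whole string must be digits */
}
```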

* 4128249 (Tracking ID: 4119965)

SYMPTOM:
The VxFS mount binary fails to mount VxFS with an SELinux context.

DESCRIPTION:
Mounting the file system using the VxFS binary with a specific SELinux context shows the following error:
/FSQA/fsqa/vxfsbin/mount -t vxfs /dev/vx/dsk/testdg/vol1 /mnt1 -ocontext="system_u:object_r:httpd_sys_content_t:s0"
UX:vxfs mount: ERROR: V-3-28681: Selinux context is invalid or option/operation is not supported. Please look into the syslog for more information.

RESOLUTION:
The VxFS mount command is modified to pass context options to the kernel only if SELinux is enabled.
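One common way to make that decision is to probe for selinuxfs. This is a hedged sketch only; the real mount binary may instead use is_selinux_enabled() from libselinux:

```c
#include <assert.h>
#include <unistd.h>

/* Rough equivalent of the mount fix: forward -ocontext=... options
 * to the kernel only when SELinux is actually enabled. Probing the
 * selinuxfs mount point is one common heuristic; the real binary
 * may use is_selinux_enabled() from libselinux instead. */
static int selinux_enabled(void)
{
    return access("/sys/fs/selinux/enforce", F_OK) == 0;
}
```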

* 4129494 (Tracking ID: 4129495)

SYMPTOM:
Kernel panic observed in internal VxFS LM conformance testing.

DESCRIPTION:
A kernel panic has been observed in internal VxFS testing, with the OS writeback thread marking an inode for writeback and then calling the file system hook vx_writepages.
The OS writeback thread is not expected to get inside iput(), as it would self-deadlock while waiting on writeback. This deadlock caused the tsrapi command to hang, which in turn caused the kernel panic.

RESOLUTION:
The code has been modified to avoid deallocation of an inode while its writeback is in progress.

* 4129681 (Tracking ID: 4129680)

SYMPTOM:
The VxFS RPM does not have a changelog.

DESCRIPTION:
A changelog in the RPM helps to find missing incidents with respect to other versions.

RESOLUTION:
A changelog is generated and added to the VxFS RPM.

* 4131312 (Tracking ID: 4128895)

SYMPTOM:
On servers with SELinux enabled, the VxFS mount command may throw an error such as the following.
Error message: UX:vxfs mount: ERROR: V-3-21264: <volume> is already mounted, <mount_point> is busy,
                 or the allowable number of mount points has been exceeded.

DESCRIPTION:
VxFS mount commands now run with the vxfs_mount_t SELinux context. This context was missing the permissions needed to execute VxVM commands, so the mount command was not able to confirm whether the file system was already mounted elsewhere. Hence it may report that the volume is already mounted.

RESOLUTION:
Permissions to run VxVM commands under the vxfs_mount_t SELinux context have been added.

INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel8_x86_64-Patch-8.0.2.3100.tar.gz to /tmp
2. Untar infoscale-rhel8_x86_64-Patch-8.0.2.3100.tar.gz to /tmp/patch
    # mkdir /tmp/patch
    # cd /tmp/patch
    # gunzip /tmp/infoscale-rhel8_x86_64-Patch-8.0.2.3100.tar.gz
    # tar xf /tmp/infoscale-rhel8_x86_64-Patch-8.0.2.3100.tar
3. Install the patch (note that the installation of this P-Patch will cause downtime):
    # cd /tmp/patch
    # ./installVRTSinfoscale802P3100 [<host1> <host2>...]

You can also install this patch together with the 8.0.2 base release using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 8.0.2 directory and invoke the installer script
   with the -patch_path option, where -patch_path should point to the patch directory:
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE
