IS 8.0.2 Update 6 on SLES15 Platform
Abstract
Description
This is a cumulative patch release for InfoScale 8.0.2 on the SLES15 platform.
SORT ID: 22397
Patch Name:
InfoScale 8.0.2 Patch 3200
(SLES15 Support on IS 8.0.2)
Supported Platforms:
SLES15 SP5, SLES15 SP6
Patch IDs:
VRTSamf-8.0.2.2100-0898_SLES15 for VRTSamf
VRTSaslapm-8.0.2.2600-0292_SLES15 for VRTSaslapm
VRTScavf-8.0.2.2400-0300_SLES15 for VRTScavf
VRTScps-8.0.2.2300-0935_SLES15 for VRTScps
VRTSdbac-8.0.2.1600-0096_SLES15 for VRTSdbac
VRTSdbed-8.0.2.1400-0050_SLES for VRTSdbed
VRTSfsadv-8.0.2.2500-0313_SLES15 for VRTSfsadv
VRTSgab-8.0.2.2100-0898_SLES15 for VRTSgab
VRTSglm-8.0.2.2400-0300_SLES15 for VRTSglm
VRTSgms-8.0.2.2400-0300_SLES15 for VRTSgms
VRTSllt-8.0.2.2300-0933_SLES15 for VRTSllt
VRTSodm-8.0.2.2800-0002_SLES15 for VRTSodm
VRTSpython-3.9.16.7-SLES15 for VRTSpython
VRTSrest-3.0.10-linux for VRTSrest
VRTSsfcpi-8.0.2.1500-GENERIC for VRTSsfcpi
VRTSsfmh-8.0.2.551-0259_Linux for VRTSsfmh
VRTSspt-8.0.2.1300-0027_SLES15 for VRTSspt
VRTSvbs-8.0.2.1200-0032-SLES15.x86_64.rpm for VRTSvbs
VRTSvcs-8.0.2.2300-0933_SLES15 for VRTSvcs
VRTSvcsag-8.0.2.2300-0933_SLES15 for VRTSvcsag
VRTSvcsea-8.0.2.2300-0933_SLES15 for VRTSvcsea
VRTSveki-8.0.2.2400-0300_SLES15 for VRTSveki
VRTSvlic-4.01.802.002-SLES for VRTSvlic
VRTSvxfen-8.0.2.2300-0933_SLES15 for VRTSvxfen
VRTSvxfs-8.0.2.2800-0002_SLES15 for VRTSvxfs
VRTSvxvm-8.0.2.2600-0292_SLES15 for VRTSvxvm
Pre-requisites:
1. You must be at IS 8.0.2 GA or later to install this update (see the version check after this list).
2. Install the VRTSvxvm hotfix 8.0.2.2607 on top of this patch.
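For pre-requisite 1, one way to confirm the currently installed InfoScale version before applying this update; showversion is the standard InfoScale utility, and the path assumes a default installation:

# /opt/VRTS/install/showversion    # reports the installed InfoScale product and version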
SPECIAL NOTES:
1. This refreshed 8.0.2 U6 patch includes updated VxFS (8.0.2.2800) and ODM (8.0.2.2800) packages to resolve a system panic caused by a VxFS LRU list inconsistency. Customers who have already installed the original U6 bundle can upgrade by applying the updated VxFS and ODM packages. New deployments targeting U6 should use this refreshed patch bundle directly.
2. As part of a recent fix in LLT, the LLT_IRQBALANCE parameter is now enabled by default to prevent fencing panics. When LLT_IRQBALANCE=1, the irqbalance service must be configured on the system; otherwise, the LLT service fails to start and the module does not load. This introduces a new dependency on the irqbalance RPM, which customers need to install before installing LLT. Additionally, the irqbalance service cannot coexist with hpe_irqbalance; in that case, disable the hpe_irqbalance service and then restart the LLT service (see the sketch after this list).
3. In SLES15 SP6, libnsl.so.1 is shipped in a separate RPM, whereas it was earlier part of glibc. Because of this, the InfoScale stack now has a dependency on the libnsl1 RPM (libnsl1-2.38-150600.12.1.x86_64); a quick check appears after this list.
4. If internet access is not available, installation of this patch must be performed together with the latest CPI patch downloaded from the Download Center (see the example after this list).
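For special note 2, a minimal sketch of preparing irqbalance before starting LLT. The package and service names are the standard SLES ones; the exact sequence is an assumption drawn from the note, not an official procedure:

# zypper install irqbalance              # install the irqbalance RPM that LLT_IRQBALANCE=1 depends on
# systemctl enable --now irqbalance      # ensure the irqbalance service is configured and running
# systemctl disable --now hpe_irqbalance # only if present; hpe_irqbalance cannot coexist with irqbalance
# systemctl restart llt                  # restart LLT once irqbalance is in place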
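For special note 3, a quick way to verify the new library dependency on SLES15 SP6; these are standard rpm/zypper invocations, with repository availability assumed:

# rpm -q libnsl1          # check whether the libnsl1 RPM is already installed
# zypper install libnsl1  # install it if missing, before installing or upgrading the InfoScale stack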
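For special note 4, installation with a separately downloaded CPI patch is typically driven through the installer's -require option; the patch file path below is a placeholder, not a real location:

# ./installer -require /tmp/CPI_8.0.2_P15.pl    # point the installer at the downloaded CPI patch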
* * * READ ME * * *
* * * InfoScale 8.0.2 * * *
* * * Patch 3200 * * *
Patch Date: 2025-08-12

This document provides the following information:

* PATCH NAME
* OPERATING SYSTEMS SUPPORTED BY THE PATCH
* PACKAGES AFFECTED BY THE PATCH
* BASE PRODUCT VERSIONS FOR THE PATCH
* SUMMARY OF INCIDENTS FIXED BY THE PATCH
* DETAILS OF INCIDENTS FIXED BY THE PATCH
* INSTALLATION PRE-REQUISITES
* INSTALLING THE PATCH
* REMOVING THE PATCH

PATCH NAME
----------
InfoScale 8.0.2 Patch 3200

OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES15 x86-64

PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf VRTSaslapm VRTScavf VRTScps VRTSdbac VRTSdbed VRTSfsadv VRTSgab VRTSglm VRTSgms VRTSllt VRTSodm VRTSpython VRTSrest VRTSsfcpi VRTSsfmh VRTSspt VRTSvbs VRTSvcs VRTSvcsag VRTSvcsea VRTSveki VRTSvlic VRTSvxfen VRTSvxfs VRTSvxvm

BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
* InfoScale Availability 8.0.2
* InfoScale Enterprise 8.0.2
* InfoScale Foundation 8.0.2
* InfoScale Storage 8.0.2

SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxvm-8.0.2.2600
* 4188358 (4188399) Adjust the default values of tunables
* 4189232 (4189556) VxVM support on RHEL9.6
* 4189350 (4188549) vxconfigd died due to a floating point exception.
* 4189351 (4188560) Volume Manager Encryption Service repeatedly dies
* 4189564 (4189567) Panic seen in VVRCert due to incorrect value of blksize
* 4189695 (4188763) Stale and incorrect symbolic links to VxDMP devices in "/dev/disk/by-uuid".
* 4189698 (4189447) VxVM (Veritas Volume Manager) creates some required files under /tmp and /var/tmp directories. These directories could be modified by non-root users and will affect the Veritas Volume Manager functioning.
* 4189751 (4189428) Security vulnerabilities exists in third party components [curl, libxml].
* 4189773 (4189301) Frequent IPM handle purging cause VVR SG to switchover

Patch ID: VRTSaslapm 8.0.2.2600
* 4189576 (4185193) UDID mismatch error occurs when using VxVM/ASL 7.4.2.5300 with RHEL 8 and NVMe devices.
* 4189696 (4188831) Adding support for Hitachi VSPOne array.
* 4189772 (4189561) Added support for Netapp ASA r2 array

Patch ID: VRTSvxvm-8.0.2.2400
* 4189251 (4189428) Security vulnerabilities exists in third party components [curl, libxml].

Patch ID: VRTSvxvm-8.0.2.2300
* 4124889 (4090828) Enhancement to track plex att/recovery data synced in past to debug corruption
* 4128883 (4112687) DLE (Dynamic Lun Expansion) of single path GPT disk may corrupt disk public region.
* 4137508 (4066310) Added BLK-MQ feature for DMP driver
* 4137995 (4117350) Import operation on disk group created on Hitachi ShadowImage (SI) disks is failing.
* 4138279 (4116496) System panic at dmp_process_errbp+47.
* 4143558 (4141890) TUTIL0 field may not get cleared sometimes after cluster reboot.
* 4152907 (4132751) Site disaster occurs during the VVR online add volume operation and it results in partial state of the volume.
* 4153377 (4152445) Replication failed to start due to vxnetd threads not running
* 4153566 (4090410) VVR secondary node panics during replication.
* 4153570 (4134305) Collecting ilock stats for admin SIO causes buffer overrun.
* 4153597 (4146424) CVM Failed to join after power off and Power on from ILO
* 4153874 (4010288) [Cosmote][NBFS]ECV:DR:Replace Node on Primary failed with error "Rebuild data for faulted node failed"
* 4154104 (4142772) Error mask NM_ERR_DCM_ACTIVE on rlink may not be cleared resulting in the rlink being unable to get into DCM again.
* 4154107 (3995831) System hung: A large number of SIOs got queued in FMR.
* 4155091 (4118510) Volume manager tunable to control log file permissions
* 4155719 (4154921) system is stuck in zio_wait() in FC-IOV environment after reboot the primary control domain when dmp_native_support is on.
* 4157012 (4145715) Secondary SRL log error while reading data from log
* 4157643 (4159198) vxfmrmap coredump.
* 4158517 (4159199) AIX 7.3 TL2 - Memory fault(coredump) while running "./scripts/admin/vxtune/vxdefault.tc"
* 4161646 (4149528) Cluster wide hang after faulting nodes one by one
* 4162049 (4156271) Selinux Validation failed on NBFS-3.2 setup.
* 4162053 (4132221) Supportability requirement for easier path link to dmpdr utility
* 4162055 (4116024) machine panic due to access illegal address.
* 4162058 (4046560) vxconfigd aborts on Solaris if device's hardware path is too long.
* 4162917 (4139166) Enable VVR Bunker feature for shared diskgroups.
* 4162966 (4146885) Restarting syncrvg after termination will start sync from start
* 4163010 (4146080) SELinux denials observed in CVR while testing slave reboot
* 4164114 (4162873) disk reclaim is slow.
* 4164250 (4154121) add a new tunable use_hw_replicatedev to enable Volume Manager to import the hardware replicated disk group.
* 4164252 (4159403) add clearclone option automatically when import the hardware replicated disk group.
* 4164254 (4160883) clone_flag was set on srdf-r1 disks after reboot.
* 4164475 (4164474) Detached plex of an encrypted volume is not ENABLED post reboot.
* 4165276 (4161331) vxresize fails to perform ioctl VX_GETFSOPT
* 4165431 (4160809) [Cosmote][NBFS][media-only]CVM Failed to join after reboot operation from GUI
* 4165889 (4165158) Disk associated with CATLOG showing in RECOVER State after rebooting nodes.
* 4166881 (4164734) Disable support for TLS1.1
* 4166882 (4161852) [BankOfAmerica][Infoscale][Upgrade] Post InfoScale upgrade, command "vxdg upgrade" succeeds but generates apparent error "RLINK is not encypted"
* 4172377 (4172033) Data corruption due to stale DRL agenodes
* 4172424 (4168552) Escalation : CFS file system hangs after applying 8.0.2.1500 patch, because I/Os are failing from DMP layer with below error: [ 2492.765405] blk_insert_cloned_request: over max segments limit. (272 > 256)
* 4173722 (4158303) System panic at dmpsvc_da_analyze_error+417
* 4174239 (4171979) Panic occurs with message "kernel BUG at fs/inode.c:1578!"
* 4175713 (4175712) Security vulnerabilities exists in third party components [curl, libxml and openssl].
* 4177400 (4173284) dmpdr command failing
* 4177791 (4167359) EMC DeviceGroup missing SRDF SYMDEV leads to DG corruption.
* 4177793 (4168665) use_hw_replicatedev logic unable to import CVMVolDg resource unless vxdg -c is specified after EMC SRDF devices are closed and rescan on CVM Master.
* 4178101 (4113841) VVR panic in replication connection handshake request from network scan tool.
* 4178106 (4176336) VVR heartbeat timeout due to unknown 0 length UDP packets on port 4145.
* 4178177 (4153457) When using Dell/EMC PowerFlex ScaleIO storage, Veritas File System(VxFS) on Veritas Volume Manager(VxVM) volumes fail to mount after reboot.
* 4178186 (4152014) the excluded dmpnodes are visible after system reboot when SELinux is disabled.
* 4178201 (4120878) After enabling the dmp_native_support, system failed to boot.
* 4178207 (4118809) System panic at dmp_process_errbp.
* 4178260 (4175390) When adding mirror plexes to a volume, the plexes go into and are stuck at TEMPRMSD state.
* 4179072 (4178449) vxconfigd thread stack corrupted for segfault when writing to translog.
* 4179818 (4178920) "vxdmp V-5-0-0 failed to get request for devno for IO offset" continuously appears in the system log.
* 4183337 (4184198) In CVR environment, improve the SRL log write performance during heavy application writes
* 4184100 (4183777) System log is flooding with the fake alarms "VxVM vxio V-5-0-0 read/write on disk: xxx took longer to complete".
* 4185142 (4185141) Support VxVM on SLES15 SP6
* 4187579 (4187459) Plex attach operations are taking an excessive amount of time to sync when Azure 4K Native disks are configured.
* 4188380 (4067191) IS8.0_SUSE15_CVR After rebooted slave node master node got panic

Patch ID: VRTSaslapm 8.0.2.2300
* 4137497 (4011780) Add support for DELL EMC PowerStore plus PP
* 4188382 (4188381) Support ASLAPM on SLES15 SP6

Patch ID: VRTSvxvm-8.0.2.1400
* 4124889 (4090828) Enhancement to track plex att/recovery data synced in past to debug corruption
* 4129765 (4111978) Replication failed to start due to vxnetd threads not running on secondary site.
* 4130858 (4128351) System hung observed when switching log owner.
* 4130861 (4122061) Observing hung after resync operation, vxconfigd was waiting for slaves' response
* 4132775 (4132774) VxVM support on SLES15 SP5
* 4133930 (4100646) Recoveries of dcl objects not happening due to ATT, RELOCATE flags are set on DCL subdisks
* 4133946 (3972344) vxrecover returns an error - 'ERROR V-5-1-11150' Volume <vol_name> not found'
* 4135127 (4134023) vxconfigrestore(Diskgroup configuration restoration) for H/W Replicated diskgroup failed.
* 4135388 (4131202) In VVR environment, changeip command may fail.
* 4136419 (4089696) In FSS environment, with DCO log attached to VVR SRL volume, reboot of the cluster may result into panic on the CVM master node.
* 4136428 (4131449) In CVR environment, the restriction of four RVGs per diskgroup has been removed.
* 4136429 (4077944) In VVR environment, application I/O operation may get hung.
* 4136802 (4136751) Added selinux permissions for fcontext: aide_t, support_t, mdadm_t
* 4136859 (4117568) vradmind dumps core due to invalid memory access.
* 4136866 (4090476) SRL is not draining to secondary.
* 4136868 (4120068) A standard disk was added to a cloned diskgroup successfully which is not expected.
* 4136870 (4117957) During a phased reboot of a two node Veritas Access cluster, mounts would hang.
* 4137174 (4081740) vxdg flush command slow due to too many luns needlessly access /proc/partitions.
* 4137175 (4124223) Core dump is generated for vxconfigd in TC execution.
* 4137508 (4066310) Added BLK-MQ feature for DMP driver
* 4137615 (4087628) CVM goes into faulted state when slave node of primary is rebooted.
* 4137753 (4128271) In CVR environment, a node is not able to join the CVM cluster if RVG recovery is taking place.
* 4137757 (4136458) In CVR environment, the DCM resync may hang with 0% sync remaining.
* 4137986 (4133793) vxsnap restore failed with DCO IO errors during the operation when run in loop for multiple VxVM volumes.
* 4138051 (4090943) VVR Primary RLink cannot connect as secondary reports SRL log is full.
* 4138069 (4139703) Panic due to wrong use of OS API (HUNZA issue)
* 4138075 (4129873) In CVR environment, if CVM slave node is acting as logowner, then I/Os may hang when data volume is grown.
* 4138101 (4114867) systemd-udevd[2224]: invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 103 ('D')
* 4138107 (4065490) VxVM udev rules consumes more CPU and appears in "top" output when system has thousands of storage devices attached.
* 4138224 (4129489) With VxVM installed in AWS cloud environment, disk devices may intermittently disappear from 'vxdisk list' output.
* 4138236 (4134069) VVR replication was not using VxFS SmartMove feature if filesystem was not mounted on RVG Logowner node.
* 4138237 (4113240) In CVR environment, with hostname binding configured, Rlink on VVR secondary may have incorrect VVR primary IP.
* 4138251 (4132799) No detailed error messages while joining CVM fail.
* 4138348 (4121564) Memory leak for volcred_t could be observed in vxio.
* 4138537 (4098144) vxtask list shows the parent process without any sub-tasks which never progresses for SRL volume
* 4138538 (4085404) Huge perf drop after Veritas Volume Replicator (VVR) entered Data Change Map (DCM) mode, when a large size of Storage Replicator Log (SRL) is configured.
* 4140598 (4141590) Some incidents do not appear in changelog because their cross-references are not properly processed
* 4143580 (4142054) primary master got panicked with ted assert during the run.
* 4143857 (4130393) vxencryptd crashed repeatedly due to segfault.
* 4145064 (4145063) unknown symbol message logged in syslogs while inserting vxio module.
* 4146550 (4108235) System wide hang due to memory leak in VVR vxio kernel module
* 4149499 (4149498) Getting unsupported .ko files not found warning while upgrading VM packages.
* 4150099 (4150098) vxconfigd goes down after few VxVM operations and System file system becomes read-only.
* 4150459 (4150160) Panic due to less memory allocation than required

Patch ID: VRTSaslapm 8.0.2.1400
* 4137995 (4117350) Import operation on disk group created on Hitachi ShadowImage (SI) disks is failing.

Patch ID: VRTSvxvm-8.0.2.1300
* 4132775 (4132774) VxVM support on SLES15 SP5
* 4133312 (4128451) A hardware replicated disk group fails to be auto-imported after reboot.
* 4133315 (4130642) node failed to rejoin the cluster after this node switched from master to slave due to the failure of the replicated diskgroup import.
* 4133946 (3972344) vxrecover returns an error - 'ERROR V-5-1-11150' Volume <vol_name> not found'
* 4135127 (4134023) vxconfigrestore(Diskgroup configuration restoration) for H/W Replicated diskgroup failed.

Patch ID: VRTSaslapm 8.0.2.1300
* 4137283 (4137282) ASLAPM rpm Support on SLES15SP5

Patch ID: VRTSvxvm-8.0.2.1200
* 4119267 (4113582) In VVR environments, reboot on VVR primary nodes results in RVG going into passthru mode.
* 4123065 (4113138) 'vradmin repstatus' invoked on the secondary site shows stale information
* 4123069 (4116609) VVR Secondary logowner change is not reflected with virtual hostnames.
* 4123080 (4111789) VVR does not utilize the network provisioned for it.
* 4124291 (4111254) vradmind dumps core while associating a rlink to rvg because of NULL pointer reference.
* 4124794 (4114952) With virtual hostnames, pause replication operation fails.
* 4124796 (4108913) Vradmind dumps core because of memory corruption.
* 4125392 (4114193) 'vradmin repstatus' incorrectly shows replication status as inconsistent.
* 4125811 (4090772) vxconfigd/vx commands hung if fdisk opened secondary volume and secondary logowner panic'd
* 4128127 (4132265) Machine attached with NVMe devices may panic.
* 4128835 (4127555) Unable to configure replication using diskgroup id.
* 4129766 (4128380) With virtual hostnames, 'vradmin resync' command may fail if invoked from DR site.
* 4130402 (4107801) /dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.
* 4130827 (4098391) Continuous system crash is observed during VxVM installation.
* 4130947 (4124725) With virtual hostnames, 'vradmin delpri' command may hang.

Patch ID: VRTSvxvm-8.0.2.1100
* 4125322 (4119950) Security vulnerabilities exists in third party components [curl and libxml].

Patch ID: VRTSveki-8.0.2.2400
* 4182361 (4182362) Veritas Infoscale Availability does not support SLES15SP6.

Patch ID: VRTSveki-8.0.2.1500
* 4135795 (4135683) Enhancing debugging capability of VRTSveki package installation
* 4140468 (4152368) Some incidents do not appear in changelog because their cross-references are not properly processed

Patch ID: VRTSveki-8.0.2.1300
* 4134084 (4134083) VEKI support for SLES15 SP5.

Patch ID: VRTSveki-8.0.2.1200
* 4130816 (4130815) Generate and add changelog in VEKI rpm

Patch ID: VRTSveki-8.0.2.1100
* 4118568 (4110457) Veki packaging were failing due to dependency

Patch ID: VRTSgms-8.0.2.2400
* 4184621 (4184622) GMS support for SLES15-SP6.

Patch ID: VRTSgms-8.0.2.1500
* 4133279 (4133278) GMS support for SLES15 SP5.
* 4134948 (4134947) GMS support for azure SLES15 SP5.

Patch ID: VRTSgms-8.0.2.1300
* 4133279 (4133278) GMS support for SLES15 SP5.

Patch ID: VRTSgms-8.0.2.1200
* 4126266 (4125932) no symbol version warning for ki_get_boot in dmesg after SFCFSHA configuration.
* 4127527 (4107112) When finding GMS module with version same as kernel version, need to consider kernel-build number.
* 4127528 (4107753) If GMS module with version same as kernel version is not present, need to consider kernel build number to calculate best fit module.
* 4129708 (4129707) Generate and add changelog in GMS rpm

Patch ID: VRTSglm-8.0.2.2400
* 4174551 (4171246) vxglm status shows active even if it fails to load module.
* 4184619 (4184620) GLM support for SLES15-SP6.
* 4186391 (4117910) VRTSglm support for SAP SLES15.

Patch ID: VRTSglm-8.0.2.1500
* 4133277 (4133276) GLM support for SLES15 SP5.
* 4134946 (4134945) GLM support for azure SLES15 SP5.
* 4138274 (4126298) System may panic due to unable to handle kernel paging request and memory corruption could happen.

Patch ID: VRTSglm-8.0.2.1300
* 4133277 (4133276) GLM support for SLES15 SP5.

Patch ID: VRTSglm-8.0.2.1200
* 4127524 (4107114) When finding GLM module with version same as kernel version, need to consider kernel-build number.
* 4127525 (4107754) If GLM module with version same as kernel version is not present, need to consider kernel build number to calculate best fit module.
* 4129715 (4129714) Generate and add changelog in GLM rpm

Patch ID: VRTSfsadv-8.0.2.2500
* 4188577 (4188576) Security vulnerabilities exist in the Curl third-party components used by VxFS.

Patch ID: VRTSspt-8.0.2.1300
* 4139975 (4149462) New script is provided list_missing_incidents.py which compares changelogs of rpm and lists missing incidents in new version.
* 4146957 (4149448) New script is provided check_incident_inchangelog.py which will check if incident abstract is present in changelog.
Patch ID: VRTSrest-3.0.10
* 4124960 (4130028) GET apis of vm and filesystem were failing because of datatype mismatch in spec and original output, if the client generates the client code from specs
* 4124963 (4127170) While modifying the system list for service group when dependency is there, the api would fail
* 4124964 (4127167) -force option is used by default in delete of rvg and a new -online option is used in patch of rvg
* 4124966 (4127171) While getting excluded disks on Systems API we were getting nodelist instead of nodename in href
* 4124968 (4127168) In GET request on rvgs all datavolumes in RVGs not listed correctly
* 4125162 (4127169) Get disks api failing when cvm is down on any node

Patch ID: VRTSpython-3.9.16 P07
* 4179488 (4179487) Upgrading Multiple vulnerable thirdparty modules and cleaning up .pyenv directory unused files under VRTSpython for IS 8.0.2.

Patch ID: VRTSsfcpi-8.0.2.1500
* 4115603 (4115601) On Solaris, Publisher list gets displayed during Infoscale start, stop, and uninstall process and does not display a unique publisher list during install and upgrade.
* 4115707 (4126025) While performing full upgrade of the secondary site, SRL missing & RLINK dissociated error observed.
* 4115874 (4124871) Configuration of Vxfen fails for a three-node cluster on VMs in different AZs
* 4116368 (4123645) During the Rolling upgrade response file creation, CPI is asking to unmount the VxFS filesystem.
* 4116406 (4123654) Removed unnecessary swap space message.
* 4116879 (4126018) During addnode, installer fails to mount resources.
* 4116995 (4123657) Installer retries upgrading protocol version post-upgrade.
* 4117956 (4104627) Providing multiple-patch support up to 10 patches.
* 4121961 (4123908) Installer does not register InfoScale hosts to the VIOM Management Server after InfoScale configuration.
* 4122442 (4122441) CPI is displaying licensing information after starting the product through the response file.
* 4122749 (4122748) On Linux, had service fails to start during rolling upgrade from InfoScale 7.4.1 or lower to higher InfoScale version.
* 4126470 (4130003) Installer failed to start vxfs_replication while performing Configuration of Enterprise on OEL 9.2
* 4127111 (4127117) On a Linux system, you can configure the GCO(Global Cluster option) with a hostname by using InfoScale installer.
* 4130377 (4131703) Installer performs dmp include/exclude operations if /etc/vx/vxvm.exclude is present on the system.
* 4131315 (4131314) Environment="VCS_ENABLE_PUBSEC_LOG=0" is added from cpi in install section of service file instead of Service section.
* 4131684 (4131682) On SunOS, installer prompts the user to install 'bourne' package if it is not available.
* 4132411 (4139946) Rolling upgrade fails if the recommended upgrade path is not followed.
* 4133019 (4135602) Installer failed to update main.cf file with VCS user during reconfiguring a secured cluster to non-secured cluster.
* 4133469 (4136432) add node to higher version of infoscale node fails.
* 4135015 (4135014) CPI installer should not ask for install InfoScale after "./installer -precheck" is done.
* 4136211 (4139940) Installer failed to get the package version and failed due to PADV missmatch.
* 4139609 (4142877) Missing HF list not displayed during upgrade by using the patch release.
* 4140512 (4140542) Rolling upgrade failed for the patch installer
* 4157440 (4158841) VRTSrest verison changes support.
* 4157696 (4157695) In IS 7.4.2 U7 to IS 8.0.2 upgradation, VRTSpython version upgradation fails.
* 4158650 (4164760) The installer will check for dvd pkg version with the available patch pkg version to install the latest pkgs.
* 4159940 (4159942) The installer will not update existing file permissions.
* 4161937 (4160983) In Solaris, the vxfs modules are getting removed from current BE while upgrading the Infoscale to ABE.
* 4164945 (4164958) The installer will check for pkg version to allow EO tunable changes in config files.
* 4165118 (4171259) The installer will add the new node in cluster.
* 4165727 (4165726) Getting error message like "A more recent version of InfoScale Availability, 8.0.2.*, is already installed" when user tries to upgrade the patch on GA using RU and when user tries to upgrade the product from InfoScale Availability to InfoScale Enterprise.
* 4165730 (4165726) Getting error message like "A more recent version of InfoScale Availability, 8.0.2.*, is already installed" when user tries to upgrade the patch on GA using RU and when user tries to upgrade the product from InfoScale Availability to InfoScale Enterprise.
* 4165840 (4165833) InfoScale installer does not support installation using IPS repository on Solaris.
* 4166659 (4171256) The installer will ease in checking rvg role while upgrading the VVR host.
* 4166980 (4166979) [VCS] - VMwareDisks agent is unable to start and run after upgrade to RHEL 8.10 and Infoscale 8.0.2.1700
* 4167308 (4171253) IS 8.0.2U3 : CPI doesn't ask to set EO tunable in case of Infoscale upgrade
* 4177618 (4184454) We are changing code by adding dbed related checks
* 4178007 (4177807) We are changing message for CPC fencing not writing in env file /etc/vxenviron file
* 4181039 (4181037) We are making the etc/vx/vxdbed/dbedenv file accessible
* 4181282 (4181279) Configuration fails if dbed is not installed.
* 4181787 (4181786) VCS configuration with responsefile changing the interface bootproto from dhcp to none when we configure the LLT over UDP.
* 4184438 (4186642) The installer will not ask for VIOM registration in case of start option.

Patch ID: VRTSvlic-4.01.802.002
* 4173483 (4173483) Providing Patch Release for VRTSvlic

Patch ID: VRTSsfmh-vom-HF0802551
* 4189545 (4189544) VIOM 8.0.2.551 VRTSsfmh package for InfoScale 8.0.2 Update releases

Patch ID: VRTSdbac-8.0.2.1600
* 4161967 (4157901) vcsmmconfig.log file permission is hardcoded, but permission should be set as per EO-tunable VCS_ENABLE_PUBSEC_LOG_PERM.
* 4182976 (4182977) Veritas Infoscale does not support SLES15SP6.

Patch ID: VRTSdbac-8.0.2.1300
* 4153146 (4153140) Veritas Infoscale Availability does not qualify latest kernels for sles/rhel platforms.

Patch ID: VRTSdbac-8.0.2.1200
* 4133167 (4131368) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 5 (SLES 15 SP5).

Patch ID: VRTSdbac-8.0.2.1100
* 4133133 (4133130) Veritas Infoscale Availability qualification for latest sles15 kernels is provided.

Patch ID: VRTSvcsea-8.0.2.2300
* 4189548 (4189547) Invalid details mentioned while executing fire drill of oracle agent with oracle21c.

Patch ID: VRTSvcsea-8.0.2.2100
* 4180094 (4180091) Offline script not able to exit as fuser check is being run on all disks, even the ones not under VCS control.

Patch ID: VRTSvcsea-8.0.2.1400
* 4058775 (4073508) Oracle virtual fire-drill is failing.

Patch ID: VRTSamf-8.0.2.2100
* 4161436 (4161644) System panics when AMF enabled and there are Process/Application resources.
* 4162305 (4168084) AMF caused kernel BUG: scheduling while atomic when umount file system.
* 4182736 (4182737) Veritas Infoscale does not support SLES15SP6.

Patch ID: VRTSamf-8.0.2.1400
* 4137600 (4136003) A cluster node panics when the AMF module overruns internal buffer to analyze arguments of an executable binary.

Patch ID: VRTSamf-8.0.2.1300
* 4137165 (4131368) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 5 (SLES 15 SP5).

Patch ID: VRTSamf-8.0.2.1200
* 4133131 (4133130) Veritas Infoscale Availability qualification for latest sles15 kernels is provided.

Patch ID: VRTSgab-8.0.2.2100
* 4182383 (4182384) Veritas Infoscale Availability does not support SLES15SP6.

Patch ID: VRTSgab-8.0.2.1400
* 4153142 (4153140) Veritas Infoscale Availability does not qualify latest kernels for sles/rhel platforms.

Patch ID: VRTSgab-8.0.2.1300
* 4137164 (4131368) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 5 (SLES 15 SP5).

Patch ID: VRTSgab-8.0.2.1200
* 4133132 (4133130) Veritas Infoscale Availability qualification for latest sles15 kernels is provided.

Patch ID: VRTSvcsag-8.0.2.2300
* 4189572 (4188318) KVMGuest agent, auto removal of invalid env file once environment becomes valid.
* 4189590 (4075950) IPv6 neighbor flush logic needs to be added to IP/MultiNIC agents
* 4189594 (4189392) Added support for attaching GCP regional disks in read-only mode to multiple instances.

Patch ID: VRTSvcsag-8.0.2.2100
* 4149272 (4164374) VCS DNS Agent monitor gets timeout if multiple DNS servers are added as Stealth Masters and if few of them get hung.
* 4162659 (4162658) LVMVolumeGroup resource fails to offline/clean in cloud environment after path failure.
* 4162753 (4142040) While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs.conf/config/types.cf' file on Veritas Cluster Server(VCS) might be incorrectly updated.
* 4177815 (4175426) VMwareDisk Agent taking longer time to failover.
* 4180582 (4180581) AWSIP agent does not fail over in case system gets faulted and when IPv6 is used.

Patch ID: VRTSvcsag-8.0.2.1500
* 4157581 (4157580) There are security vulnerabilities present in the current version of third-party component, OpenSSL, that is utilized by VCS.

Patch ID: VRTSvcsag-8.0.2.1400
* 4114880 (4152700) When Private DNS Zone resource ID is passed, the AzureDNSZone Agent returns an error saying that the resource cannot be found.
* 4135534 (4152812) AWS EBSVol agent takes long time to perform online and offline operations on resources.
* 4137215 (4094539) Agent resource monitor not parsing process name correctly.
* 4137376 (4122001) NIC resource remain online after unplug network cable on ESXi server.
* 4137377 (4113151) VMwareDisksAgent reports resource online before VMware disk to be online is present into vxvm/dmp database.
* 4137602 (4121270) EBSvol agent error in attach disk : RHEL 7.9 + Infoscale 8.0 on AWS instance type c6i.large with NVME devices.
* 4137618 (4152886) AWSIP agent fails to bring OverlayIP resources online and offline on the instances in a shared VPC.
* 4143918 (4152815) AWS EBS Volume in-use with other AWS instance is getting used by cluster nodes through AWS EBSVol agent.

Patch ID: VRTSvcsag-8.0.2.1200
* 4130206 (4127320) The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.
Patch ID: VRTSllt-8.0.2.2300
* 4189272 (4189271) LLT service is unable to start due to LLT_IRQBALANCE
* 4189571 (4167108) replace yield() with cond_resched()
* 4189853 (4189566) In an InfoScale FSS environment where LLT links are configured over RDMA, the CVM slave node panics whilst joining the cluster with the CVM master.

Patch ID: VRTSllt-8.0.2.2100
* 4132209 (4124759) Panic happened with llt_ioship_recv on a server running in AWS.
* 4162744 (4139781) Unexpected or corrupted skb, memory type missing in buffer header.
* 4166061 (4167791) Recursive mutex_enter from LLT - void llt:llt_msg_recv1 on Solaris.
* 4179383 (3989372) When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.
* 4182383 (4182384) Veritas Infoscale Availability does not support SLES15SP6.
* 4186647 (4186645) Enable LLT_IRQBALANCE by default, and add a check on hpe_irqbalance to detect any anomalies

Patch ID: VRTSllt-8.0.2.1400
* 4137611 (4135825) Once root file system is full during llt start, llt module failing to load forever.

Patch ID: VRTSllt-8.0.2.1300
* 4132209 (4124759) Panic happened with llt_ioship_recv on a server running in AWS.
* 4137163 (4131368) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 5 (SLES 15 SP5).

Patch ID: VRTSllt-8.0.2.1200
* 4128886 (4128887) During rmmod of llt package, warning trace is observed on kernel versions higher than 5.14 on RHEL9 and SLES15.

Patch ID: VRTScps-8.0.2.2300
* 4189591 (4188652) After configuring CP server, getting EO related error in CP server logs.
* 4189990 (4189584) Security vulnerabilities exists in Sqlite third-party components used by VCS.

Patch ID: VRTScps-8.0.2.2100
* 4157581 (4157580) There are security vulnerabilities present in the current version of third-party component, OpenSSL, that is utilized by VCS.

Patch ID: VRTScps-8.0.2.1500
* 4161971 (4161970) Security vulnerabilities exists in Sqlite third-party components used by VCS.

Patch ID: VRTSdbed-8.0.2.1400
* 4188986 (4188985) Checkpoint creation fails for oracle database application using dbed, if archive log is set as a directory inside the mountpoint

Patch ID: VRTSdbed-8.0.2.1300
* 4155837 (4137171) In case of EO setting file permissions as per tunable.

Patch ID: VRTSdbed-8.0.2.1100
* 4153061 (4092588) SFAE failed to start with systemd.

Patch ID: VRTSvbs-8.0.2.1200
* 4189595 (4188647) Virtual Business Services feature will not work with latest Linux platform.

Patch ID: VRTSvbs-8.0.2.1100
* 4157581 (4157580) There are security vulnerabilities present in the current version of third-party component, OpenSSL, that is utilized by VCS.

Patch ID: VRTSvxfen-8.0.2.2300
* 4189905 (4189906) vxfendisk fails due to ksh overwriting positional parameters by default after executing subsequent scripts inside it.

Patch ID: VRTSvxfen-8.0.2.2100
* 4156076 (4156075) EO changes file permission tunable
* 4156379 (4156075) EO changes file permission tunable
* 4166076 (4166666) Failed to configure Disk based fencing on rdm mapped devices from KVM host to kvm guest
* 4169032 (4166666) Failed to configure Disk based fencing on rdm mapped devices from KVM host to kvm guest
* 4176111 (4176110) vxfentsthdw failed to verify fencing disks compatibility in KVM environment
* 4177677 (4176592) Flooding of 'vxfen.log' file with the error message - "VXFEN already configured".
* 4182722 (4182723) Veritas Infoscale does not support SLES15SP6.
Patch ID: VRTSvxfen-8.0.2.1400
* 4153144 (4153140) Veritas Infoscale Availability does not qualify latest kernels for sles/rhel platforms.

Patch ID: VRTSvxfen-8.0.2.1300
* 4131369 (4131368) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 5 (SLES 15 SP5).

Patch ID: VRTSvxfen-8.0.2.1200
* 4124086 (4124084) Security vulnerabilities exist in the Curl third-party components used by VCS.
* 4125891 (4113847) Support for even number of coordination disks for CVM-based disk-based fencing
* 4125895 (4108561) Reading vxfen reservation not working

Patch ID: VRTSvcs-8.0.2.2300
* 4189593 (4188662) App group faulted during upgrade.

Patch ID: VRTSvcs-8.0.2.2200
* 4189253 (4189252) Upgrading Netsnmp component to fix security vulnerabilities.

Patch ID: VRTSvcs-8.0.2.2100
* 4162755 (4136359) When upgrading InfoScale with latest Public Patch Bundle or VRTSvcsag package, types.cf is updated.

Patch ID: VRTSvcs-8.0.2.1500
* 4157581 (4157580) There are security vulnerabilities present in the current version of third-party component, OpenSSL, that is utilized by VCS.

Patch ID: VRTScavf-8.0.2.2400
* 4162683 (4153873) Deport decision was being dependent on local system only, not on all systems in the cluster

Patch ID: VRTScavf-8.0.2.1500
* 4133969 (4074274) DR test and failover activity might not succeed for hardware-replicated disk groups and EMC SRDF hardware-replicated disk groups are failing with PR operation failed message.
* 4137640 (4088479) The EMC SRDF managed diskgroup import failed with below error. This failure is specific to EMC storage only on AIX with Fencing.

Patch ID: VRTSodm-8.0.2.2800
* 4175626 (4175627) ODM module failed to load with latest VxFS.

Patch ID: VRTSodm-8.0.2.2700
* 4175626 (4175627) ODM module failed to load with latest VxFS.

Patch ID: VRTSodm-8.0.2.2400
* 4186392 (4117909) VRTSodm support for SAP SLES15.
* 4187361 (4187362) ODM support for SLES15-SP6.

Patch ID: VRTSodm-8.0.2.1700
* 4154116 (4118154) System may panic in simple_unlock_mem() when errcheckdetail enabled.
* 4159290 (4159291) ODM module is not getting loaded with newly rebuilt VxFS.

Patch ID: VRTSodm-8.0.2.1500
* 4133286 (4133285) ODM support for SLES15 SP5.
* 4134950 (4134949) ODM support for azure SLES15 SP5.

Patch ID: VRTSodm-8.0.2.1400
* 4144274 (4144269) After installing VRTSvxfs-8.0.2.1400 ODM fails to start.

Patch ID: VRTSodm-8.0.2.1300
* 4133286 (4133285) ODM support for SLES15 SP5.

Patch ID: VRTSodm-8.0.2.1200
* 4126262 (4126256) no symbol version warning for VEKI's symbol in dmesg after SFCFSHA configuration
* 4127518 (4107017) When finding ODM module with version same as kernel version, need to consider kernel-build number.
* 4127519 (4107778) If ODM module with version same as kernel version is not present, need to consider kernel build number to calculate best fit module.
* 4129838 (4129837) Generate and add changelog in ODM rpm

Patch ID: VRTSvxfs-8.0.2.2800
* 4190914 (4190078) System panicked due to VxFS LRU list inconsistency.

Patch ID: VRTSvxfs-8.0.2.2700
* 4135608 (4086287) VxFS mount command may panic the system with "scheduling while atomic: mount/453834/0x00000002" bug for cluster file system
* 4159938 (4155961) Panic in vx_rwlock during force unmount.
* 4164927 (4187385) Handling for IFAULOG type inode in fsck pass1b.
* 4177631 (4177630) Save the fsck progress status report to a file by default.
* 4177641 (4135900) LM Stress Worm fails hitting "Oops: 0002 [#1] PREEMPT SMP PTI"
* 4177643 (4085144) Implemented a fix to address the 'scheduling while atomic' bug in VxFS affecting FEL-enabled file systems.
* 4177650 (4164503) Fix to resolve Memory leak present in internal VxFS library function.
* 4188062 (4188063) Internal assert was seen during sles15sp6 support for VxFS.
* 4188390 (4188391) There was a mismatch between the setfacl and getfacl command outputs for an empty ACL.
* 4189348 (4188888) Fstrim service fault on VxFS Filesystems present in /etc/fstab. Running the fstrim command manually results in the error: 'FITRIM ioctl failed: Invalid argument' for VxFS filesystems.
* 4189349 (4188107) Softlockup occurred during Shrinking VxFS file system.
* 4189423 (4189424) FSQA binary freezeit fails with the error "ioctl VX_FREEZE failed"
* 4189586 (4189587) The setfacl operation failed with the error: Operation not supported.
* 4189598 (4187406) Panic in locked_inode_to_wb_and_lock_list during OS writeback.
* 4189599 (3743572) File system may get hang when reaching 1 billion inode limit
* 4189600 (4189333) Fixed inode size mismatch after truncate/fallocate with vx_falloc_clear=1.
* 4189601 (4120787) Data corruption issues with parallel direct IO on ZFOD extents.
* 4189603 (4187096) Orphaned symlinks were not getting replicated in VFR.
* 4189604 (4184953) mkfs may generate coredump with signal SIGSEGV
* 4189605 (4188417) NULL pointer dereference while trying to dereference fs_fel_info pointer in recovery context.
* 4189607 (4189606) SecureFS failed to create checkpoint as per schedule
* 4189642 (4127771) Full fsck fails and generates core dump.
* 4189648 (4142106) fsck -n shows warnings after successful log replay.
* 4189650 (4155954) Attribute data mismatch even if the node is owner while doing reverse name lookup in vxuditlogadm.
* 4189652 (4188805) Online migration process goes in hang state
* 4189655 (4189654) VxFS mount binary code does not support multi category security SElinux context like: "system_u:object_r:container_file_t:s0:c7,c28"
* 4189656 (4179548) Potential missed buffer flush during audit log file grow in vx_multi_bufflush when f_bsize is less than 8K.
* 4189659 (4180012) fsck utility was generating coredump due to a race between multiple threads of fsck.
* 4189663 (4181952) cfs.stress.worm hits assert "vx_fcl_bufinval:1a
* 4189665 (4182162) Implemented a fix to allow creation of modifiable checkpoint of WORM checkpoint using "fsckptadm createall" command on multiple FS.
* 4189668 (4189667) VxFS medium impact coverity issues
* 4189669 (4182897) Failures seen while running LM CMDS->metasave testcase
* 4189672 (4188816) Migration fails when doing direct write and file system is disabled.
* 4189675 (4187574) Fix to address a core dump issue in the 'vxfstaskd' daemon caused by an internal race condition.
* 4189676 (4187819) Fix to avoid unnecessary execution of "vxsnap" command on every mounted vxfs filesystem.
* 4189677 (4188282) [RHEL9.4] LM Noise Replay Worm test exits with Failed to full fsck cleanly.
* 4189686 (4188813) Online migration process in hang state.
* 4189702 (4189180) System got hang due to global lru lock contention.
* 4189792 (4189761) used-after-free memory corruption occurred.
* 4190077 (4116377) filesystem check showing warning for audit log record files
* 4190275 (4190241) vx_clear_zfod_extent triggers vx_dataioerr due to improper 64-bit offset alignment with 32-bit block size value

Patch ID: VRTSvxfs-8.0.2.2500
* 4189227 (4189228) Security vulnerabilities exist in the third-party components [zlib, libexpat] used by VxFS.

Patch ID: VRTSvxfs-8.0.2.2400
* 4144078 (4142349) Using sendfile() on VxFS file system might result in hang.
* 4162063 (4136858) Added a basic sanity check for directory inodes in ncheck codepath.
* 4162064 (4121580) WORM flag is getting set on checkpoint mounted in RW mode.
* 4162065 (4158238) vxfsrecover command exits with error if the previous invocation terminated abnormally.
* 4162066 (4156650) Older checkpoints remain, if SecureFS is recovered from newer checkpoint.
* 4162220 (4099775) System might panic if ownership change operations are done for a quota enabled Filesystem
* 4163183 (4158381) Server panicked with "Kernel panic - not syncing: Fatal exception"
* 4164090 (4163498) Veritas File System df command logging doesn't have sufficient permission while validating tunable configuration
* 4164270 (4156384) Filesystem's metadata can get corrupted due to missing transaction in the intent log
* 4166501 (4163862) Mutex lock contention is observed in cluster file system under massive file creation workload
* 4166502 (4163127) Spinlock contention observed during inode allocation for massive file creation operation on cluster file system.
* 4166503 (4162810) Spikes in CPU usage of glm threads were observed in output of "top" command during massive file creation workload on cluster file system.
* 4168357 (4076646) Unprivileged memory can get corrupted by VxFS in case inode size is 512 Byte and inode's attribute resides in its immediate area.
* 4172054 (4162316) FS migration to VxFS might hit the kernel PANIC if CrowdStrike falcon sensor is running.
* 4173064 (4163337) Intermittent df slowness seen across cluster.
* 4177627 (4160991) Accessing the address which is freed and still present in the mlink.
* 4177635 (4165264) Memory leak happened inside vxfs kernel driver.
* 4177636 (4160978) Error messages are seen during recovering data for a FCL enabled filesystem
* 4177638 (4157349) PANIC happened due to sleeping in atomic context.
* 4177640 (4164638) Fixing thread local memory leak in FSCK binary.
* 4177656 (4167362) Memory leak observed in fsck through valgrind.
* 4177657 (4144669) Setting FULLFSCK flag on File System while processing IFPTI Inode.
* 4177661 (4141854) Conformance->fsadm hits coredump.
* 4177662 (4171368) Node panicked while unmounting filesystem.
* 4177663 (4168443) System panicked at vx_clonemap.
* 4177664 (4175488) DB2 thread hang seen while trying to acquire vx_rwsleep_rec lock.
* 4177785 (4171380) Memory leak observed during code walk through.
* 4186376 (4117908) VRTSvxfs and VRTSfsadv support for SAP SLES15.
* 4187359 (4187360) VxFS support for SLES15-SP6.
* 4188020 (4178929) ext3 filesystem creation failed with force unmount.

Patch ID: VRTSvxfs-8.0.2.1700
* 4159284 (4145203) Invoking veki through systemctl inside vxfs-startup script.
* 4159938 (4155961) Panic in vx_rwlock during force unmount.
* 4161120 (4161121) Non root user is unable to access log files under /var/log/vx directory

Patch ID: VRTSvxfs-8.0.2.1600
* 4157410 (4157409) Security Vulnerabilities exists in the current versions of third party components, sqlite and expat, used by VxFS.
Patch ID: VRTSvxfs-8.0.2.1500
* 4119626 (4119627) Command fsck is facing few SELinux permission denials issue.
* 4133481 (4133480) VxFS support for SLES15 SP5.
* 4134952 (4134951) VxFS support for azure SLES15 SP5.
* 4146580 (4141876) Parallel invocation of command vxschadm might delete previous SecureFS configuration.
* 4148734 (4148732) get_dg_vol_names is leaking memory.
* 4150065 (4149581) VxFS Secure clock is running behind than expected by huge margin.

Patch ID: VRTSvxfs-8.0.2.1400
* 4141666 (4141665) Security vulnerabilities exist in the Zlib third-party components used by VxFS.

Patch ID: VRTSvxfs-8.0.2.1300
* 4133481 (4133480) VxFS support for SLES15 SP5.
* 4133965 (4116329) While checking FS sanity with the help of "fsck -o full -n" command, we tried to correct the FS flag value (WORM/Softworm), but failed because -n (read-only) option was given.
* 4134040 (3979756) kfcntl/vx_cfs_ifcntllock performance is very bad on CFS.

Patch ID: VRTSvxfs-8.0.2.1200
* 4121230 (4119990) Recovery stuck while flushing and invalidating the buffers
* 4125870 (4120729) Incorrect file replication(VFR) job status at VFR target site, while replication is in running state at source.
* 4125871 (4114176) After failover, job sync fails with error "Device or resource busy".
* 4125873 (4108955) VFR job hangs on source if thread creation fails on target.
* 4125875 (4112931) vxfsrepld consumes a lot of virtual memory when it has been running for long time.
* 4125878 (4096267) Veritas File Replication jobs might fail when there are a large number of jobs run in parallel.
* 4126104 (4122331) Enhancement in vxfs error messages which are logged while marking the bitmap or inode as "BAD".
* 4127509 (4107015) When finding VxFS module with version same as kernel version, need to consider kernel-build number.
* 4127510 (4107777) If VxFS module with version same as kernel version is not present, need to consider kernel build number to calculate best fit module.
* 4127594 (4126957) System crashes with VxFS stack.
* 4127720 (4127719) Added fallback logic in fsdb binary and made changes to fstyp binary such that it now dumps uuid.
* 4127785 (4127784) Earlier fsppadm binary was just giving warning in case of invalid UID, GID number. After this change providing invalid UID / GID e.g. "1ABC" (UID/GID are always numbers) will result in an error and parsing will stop.
* 4128249 (4119965) VxFS mount binary failed to mount VxFS with SELinux context.
* 4128723 (4114127) Hang in VxFS internal LM Conformance - inotify test
* 4129494 (4129495) Kernel panic observed in internal VxFS LM conformance testing.
* 4129681 (4129680) Generate and add changelog in VxFS rpm

DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxvm-8.0.2.2600

* 4188358 (Tracking ID: 4188399)

SYMPTOM:
Customers experienced frequent performance degradations and escalations due to a few default tunable settings.

DESCRIPTION:
The default values of the following tunables have been raised:
vol_max_nmpool_sz 512M
vol_max_rdback_sz 512M
vol_rvio_maxpool_sz 512M

RESOLUTION:
Tunable values were changed to prevent further potential escalations caused by performance degradation. (A sketch of checking these values with vxtune follows incident 4189232 below.)

* 4189232 (Tracking ID: 4189556)

SYMPTOM:
On RHEL9.6, module compilation fails and module insertion errors occur.

DESCRIPTION:
There are changes in the kernel API that needed to be reflected in the VxVM code.

RESOLUTION:
Necessary changes have been made to support VxVM on the RHEL9.6 platform.
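For incident 4188358 above, a minimal sketch of checking and adjusting the affected tunables with vxtune; the invocation shown is an assumption, so verify the exact syntax against the vxtune(1M) manual page for your release:

# vxtune vol_max_nmpool_sz         # display the current value of a tunable
# vxtune vol_max_nmpool_sz 512M    # align with the new default if an older value persists
# vxtune vol_max_rdback_sz 512M
# vxtune vol_rvio_maxpool_sz 512M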
* 4189350 (Tracking ID: 4188549)

SYMPTOM:
vxconfigd died due to a floating point exception with the below stack:
#0 in get_geometry ()
#1 in getpart ()
#2 in devintf_setup_slices ()
#3 in devintf_online_setup ()
#4 in auto_online ()
#5 in da_online ()
#6 in da_thread_online_disk ()

DESCRIPTION:
After getting the geometry from the system, Volume Manager failed to perform a sanity check of the disk header and sector number when calculating the cylinder number, hence the issue.

RESOLUTION:
Code changes have been made to perform a sanity check of the disk header and sector number before calculating the cylinder number.

* 4189351 (Tracking ID: 4188560)

SYMPTOM:
The vxvm-encrypt service will be in failed state and a core dump for vxencryptd may be generated.

DESCRIPTION:
The vxvm-encrypt service will be in failed state:
[root@server101 /]# systemctl status vxvm-encrypt
vxvm-encrypt.service - VERITAS Volume Manager Encryption Service
Loaded: loaded (/usr/lib/systemd/system/vxvm-encrypt.service; enabled; preset: disabled)
Active: failed (Result: core-dump) since Tue 2025-02-25 14:27:20 IST; 1s ago
Duration: 208ms
Process: 162772 ExecStart=/sbin/vxencryptd -m (code=dumped, signal=SEGV)
Main PID: 162772 (code=dumped, signal=SEGV)
CPU: 201ms
Feb 25 14:27:20 server101 systemd[1]: vxvm-encrypt.service: Scheduled restart job, restart counter is at 10.
Feb 25 14:27:20 server101 systemd[1]: Stopped VERITAS Volume Manager Encryption Service.
Feb 25 14:27:20 server101 systemd[1]: vxvm-encrypt.service: Start request repeated too quickly.
Feb 25 14:27:20 server101 systemd[1]: vxvm-encrypt.service: Failed with result 'core-dump'.
Feb 25 14:27:20 server101 systemd[1]: Failed to start VERITAS Volume Manager Encryption Service.

A core dump for vxencryptd may be generated:
[root@server101 /]# coredumpctl list
TIME PID UID GID SIG COREFILE EXE SIZE
Tue 2025-02-25 15:18:21 IST 172573 0 0 SIGSEGV present /usr/sbin/vxencryptd 2.0M
Tue 2025-02-25 15:18:21 IST 172599 0 0 SIGSEGV present /usr/sbin/vxencryptd 1.9M

RESOLUTION:
VxVM encryption was unable to handle I/O greater than 1MB in size. Code changes have been made to fix this issue.

* 4189564 (Tracking ID: 4189567)

SYMPTOM:
System panics during VVRCert.

DESCRIPTION:
logical_block_size is being set to 0 instead of a default value. This causes incorrect queue limits to be set, which can lead to a system panic. Previously, the code defaulted to a valid size if the value was too low, but that check is now missing.

RESOLUTION:
The logical_block_size value is now populated correctly.

* 4189695 (Tracking ID: 4188763)

SYMPTOM:
Stale and incorrect symbolic links to VxDMP devices in "/dev/disk/by-uuid".

DESCRIPTION:
On some systems with InfoScale installed, there can be stale symbolic links of /boot and /boot/efi to "VxDMP" devices instead of "SD" devices. DMP uses the "blkid" command to get the OS device based on UUID, but on some systems "blkid" takes a long time to complete. In this scenario there can be a stale symbolic link to a VxDMP device.

RESOLUTION:
Code changes have been made to use the "udevadm info" command instead of "blkid".

* 4189698 (Tracking ID: 4189447)

SYMPTOM:
VxVM (Veritas Volume Manager) creates some required files under the /tmp and /var/tmp directories.

DESCRIPTION:
VxVM (Veritas Volume Manager) creates some .lock files under the /etc/vx directory. Non-root users have access to these .lock files, and they may accidentally modify, move or delete those files. Such actions may interfere with the normal functioning of Veritas Volume Manager.
RESOLUTION:
This fix addresses the issue by masking the write permission for non-root users on these .lock files.

* 4189751 (Tracking ID: 4189428)

SYMPTOM:
Vulnerabilities have been reported in third-party components [curl, libxml] that are used by VxVM.

DESCRIPTION:
The current versions of the third-party components [curl, libxml] used by VxVM have been reported with security vulnerabilities which need to be addressed.

RESOLUTION:
[curl, libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

* 4189773 (Tracking ID: 4189301)

SYMPTOM:
Frequent IPM handle purging causes VVR SG switchover to fail.

DESCRIPTION:
Frequent IPM handle purging causes VVR SG switchover to fail. The switchover passes in those iterations where a new IPM handle was opened while VCS attempts the switchover. To handle this scenario, the frequency of VVR IPM heartbeat handle creation has been enhanced: as soon as a command is fired, handle availability is checked in the local list of handles, and if no handle is found, a new one is created. This copes with the frequent purging of IPM handles by creating them when and if required.

RESOLUTION:
Enhancing the checks for IPM handle availability in the local list of handles, and creating a handle when needed, addresses the IPM handle purging.

Patch ID: VRTSaslapm 8.0.2.2600

* 4189576 (Tracking ID: 4185193)

SYMPTOM:
Customers experience a UDID mismatch error on one of the NVMe devices, while test setups and in-house setups do not show the issue.

DESCRIPTION:
When using VxVM/ASL 7.4.2.5300 with RHEL 8 and 4 NVMe devices, customers encounter a UDID mismatch error on one of the devices. However, identical configurations on test setups and in-house environments work as expected. After debugging, it was discovered that the issue lies on the ioctl() vendor side.

RESOLUTION:
A sysfs approach has been adopted to resolve the UDID mismatch issue.

* 4189696 (Tracking ID: 4188831)

SYMPTOM:
Hitachi VSPOne array support is not present.

DESCRIPTION:
Hitachi VSPOne array support is not present; support for the Hitachi VSPOne array is added.
Support for VSP One Block 20 Series ---- Firmware details A3-XX, Standard Inquiry Byte[32-35] ---- Vendor and model name Vendor: HITACHI, Standard Inquiry Byte[8-15]

RESOLUTION:
Added support for Hitachi VSPOne array.

* 4189772 (Tracking ID: 4189561)

SYMPTOM:
Support for the Netapp ASA r2 array was missing.

DESCRIPTION:
Netapp ASA r2 is a new array and the current ASL does not support it, so it will not be claimed (and may core dump) with the previous ASL. Support for this array has been added in the current ASL.

RESOLUTION:
Added support for Netapp ASA r2 array.

Patch ID: VRTSvxvm-8.0.2.2400

* 4189251 (Tracking ID: 4189428)

SYMPTOM:
Vulnerabilities have been reported in third-party components [curl, libxml] that are used by VxVM.

DESCRIPTION:
The current versions of the third-party components [curl, libxml] used by VxVM have been reported with security vulnerabilities which need to be addressed.

RESOLUTION:
[curl, libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

Patch ID: VRTSvxvm-8.0.2.2300

* 4124889 (Tracking ID: 4090828)

SYMPTOM:
Dumped fmrmap data for better debuggability of corruption issues.

DESCRIPTION:
The vxplex att/vxvol recover CLI will internally fetch fmrmaps from the kernel using the existing ioctl before starting the attach operation, get the data in binary format, dump it to a file, and store it with a specific format like volname_taskid_date.
RESOLUTION:
The changes now dump the fmrmap data into a binary file.

* 4128883 (Tracking ID: 4112687)

SYMPTOM:
vxdisk resize corrupts the disk public region and causes file system mount to fail.

DESCRIPTION:
For a single-path disk, during the two transactions of a resize operation, the private region I/Os could be incorrectly sent to partition 3 of the GPT disk, which would cause a shift of 48 more sectors. This may cause private region data to be written to the public region, resulting in corruption.

RESOLUTION:
Code changes have been made to fix the problem.

* 4137508 (Tracking ID: 4066310)

SYMPTOM:
New feature for performance improvement.

DESCRIPTION:
The Linux block subsystem has two types of block drivers: 1) block multiqueue drivers and 2) bio-based block drivers. DMP has been a bio-based driver since day one; the block multiqueue feature has now been added for DMP.

RESOLUTION:
Resolved.

* 4137995 (Tracking ID: 4117350)

SYMPTOM:
The below error is observed when trying to import:
# vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg
VxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]
Please refer to system log for details.

DESCRIPTION:
The REPLICATED flag is used to identify a hardware replicated device, so to import a disk group on REPLICATED disks the usereplicatedev option must be used. As that option was not provided, the issue was observed.

RESOLUTION:
The REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks. (An example of the suggested import command appears after incident 4153377 below.)

* 4138279 (Tracking ID: 4116496)

SYMPTOM:
System panic at dmp_process_errbp+47 with the following call stack:
machine_kexec
__crash_kexec
crash_kexec
oops_end
no_context
__bad_area_nosemaphore
do_page_fault
page_fault
[exception RIP: dmp_process_errbp+47]
dmp_daemons_loop
kthread
ret_from_fork

DESCRIPTION:
When a LUN is detached, bio->bi_disk is set to NULL, which causes a NULL pointer dereference panic when VxDMP calls bio_dev(bio).

RESOLUTION:
Code changes have been made to avoid the panic.

* 4143558 (Tracking ID: 4141890)

SYMPTOM:
The TUTIL0 field may not get cleared sometimes after a cluster reboot.

DESCRIPTION:
The TUTIL0 field may not get cleared sometimes after a cluster reboot due to a cleanup issue in the volume start operation.

RESOLUTION:
Autofix can clean this up and trigger recovery. A fix has also been checked in for this.

* 4152907 (Tracking ID: 4132751)

SYMPTOM:
If a disaster occurs on the VVR primary site during a VVR online add volume operation, and a takeover is performed, then the online addvol fails if it is performed after the original primary site is back up.

DESCRIPTION:
If a disaster occurs on the VVR primary site during a VVR online add volume operation, and a takeover is performed, then the online addvol fails if it is performed after the original primary site is back up.

RESOLUTION:
The required changes have been made to resolve the issue.

* 4153377 (Tracking ID: 4152445)

SYMPTOM:
Replication failed to start due to vxnetd threads not running on the secondary site.

DESCRIPTION:
vxnetd was waiting to start the "nmcomudpsrv" and "nmcomlistenserver" threads. Due to a race condition on some resource between those two threads, vxnetd was stuck in a dead loop until the max retry count was reached.

RESOLUTION:
Code changes have been made to add lock protection to avoid the race condition.
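For incident 4137995 above, the import command suggested by the error text would take the following form; the disk group name SIdg comes from the example in the symptom, and the -cs and usereplicatedev flags are taken directly from the message, so adapt them to your configuration:

# vxdg -cs -o usereplicatedev=only import SIdg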
* 4153566 (Tracking ID: 4090410)
SYMPTOM:
The VVR secondary node panics during replication with a stack like:
PID: 19769 TASK: ffff8fd2f619b180 CPU: 31 COMMAND: "vxiod"
machine_kexec
__crash_kexec
panic
volrv_seclog_bulk_cleanup_verification [vxio]
volrv_seclog_write1_done [vxio]
voliod_iohandle [vxio]
voliod_loop [vxio]
kthread
DESCRIPTION:
This panic on the secondary node is explicitly triggered when unexpected data is detected during the data verification process. It is caused by incorrect data sent by the primary in a specific network failure scenario.
RESOLUTION:
The source has been changed to fix this problem on the primary.

* 4153570 (Tracking ID: 4134305)
SYMPTOM:
An illegal memory access is detected when an admin SIO is trying to lock a volume.
DESCRIPTION:
While locking a volume, an admin SIO is converted to an incompatible SIO, and collecting ilock stats on it causes a memory overrun.
RESOLUTION:
The code changes have been made to fix the problem.

* 4153597 (Tracking ID: 4146424)
SYMPTOM:
A CVM node join operation may hang with vxconfigd on the master node stuck in the following code path:
ioctl()
kernel_ioctl()
kernel_get_cvminfo_all()
send_slaves()
master_send_dg_diskids()
dg_balance_copies()
client_abort_records()
client_abort()
dg_trans_abort()
dg_check_kernel()
vold_check_signal()
request_loop()
main()
DESCRIPTION:
During vxconfigd-level communication between the master and slave nodes, if GAB returns EAGAIN, the vxconfigd code polls on the GAB fd. In normal circumstances, GAB returns the poll call with an appropriate return value. If, however, the poll times out (poll returning 0), that was erroneously treated as success and the caller assumed the message was sent, when in fact it had failed. This resulted in a hang in the message exchange between the master and slave vxconfigd.
RESOLUTION:
The fix is to retry the send operation on the GAB fd after some delay if the poll times out in the context of an EAGAIN or ENOMEM error. The fix is applicable to both the master-side and slave-side functions.

* 4153874 (Tracking ID: 4010288)
SYMPTOM:
On a setup, the replace-node operation fails because the DCM log plex does not get recovered.
DESCRIPTION:
This happens because the DCM log plex kstate goes to ENABLED with the state RECOVER and the stale flag set on it. A plex attach expects the plex kstate to be not ENABLED, so the attach operation fails in this case. Due to a race, the plex state of the DCM log plex gets set to ENABLED.
RESOLUTION:
Changes have been made to detect such a problematic DCM plex state and correct it, after which the normal plex attach transactions are triggered.

* 4154104 (Tracking ID: 4142772)
SYMPTOM:
In case SRL overflow happens frequently, the SRL reaches 99% filled but the rlink is unable to get into DCM mode.
DESCRIPTION:
When starting DCM mode, a check is needed on whether the error mask NM_ERR_DCM_ACTIVE has been set, to prevent duplicate triggers. This flag should have been reset after DCM mode was activated by reconnecting the rlink. Due to a race condition, the rlink reconnect may complete before DCM is activated, so the flag cannot be cleared.
RESOLUTION:
The code changes have been made to fix the issue.

* 4154107 (Tracking ID: 3995831)
SYMPTOM:
System hung: a large number of SIOs got queued in FMR.
DESCRIPTION:
When the I/O load is high, there may not be enough chunks available. In that case, the DRL flushsio needs to drive the fwait queue, which may free up some chunks. Due to a race condition and a bug inside DRL, DRL may queue the flushsio and fail to trigger it again; DRL then ends up permanently hung, unable to flush the dirty regions. The queued SIOs fail to be driven further, hence the system hang.
RESOLUTION:
Code changes have been made to drive the SIOs that got queued in FMR.

* 4155091 (Tracking ID: 4118510)
SYMPTOM:
A Volume Manager tunable is needed to control log file permissions.
DESCRIPTION:
With the US Presidential Executive Order 14028 compliance changes, all product log file permissions changed to 600. The tunable "log_file_permissions" has been introduced to control the log file permissions: 600 (default), 640, or 644. The tunable can be set at install time or changed at any time with a reboot (see the usage sketch below).
RESOLUTION:
Added the log_file_permissions tunable.
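A minimal usage sketch for the log_file_permissions tunable referenced above, assuming the usual vxtune name/value form; the exact invocation may differ by release:

# vxtune log_file_permissions
# vxtune log_file_permissions 640

Per the incident text, the value can be set at install time, and a change made later takes effect with a reboot.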
* 4155719 (Tracking ID: 4154921)
SYMPTOM:
The system is stuck in zio_wait() in an FC-IOV environment after rebooting the primary control domain when dmp_native_support is on.
DESCRIPTION:
For various reasons, DMP might disable its subpaths. In a particular scenario, DMP might fail to reset the IO QUIESCE flag on its subpaths, which caused I/Os to be queued in the DMP defer queue. If the upper layer, such as ZFS, kept waiting for the I/Os to complete, this bug could hang the whole system.
RESOLUTION:
Code changes have been made to reset the I/O quiesce flag properly after a DMP path is disabled.

* 4157012 (Tracking ID: 4145715)
SYMPTOM:
Replication disconnects.
DESCRIPTION:
There was an issue with dummy update handling on the secondary side when temp logging is enabled. It was observed that the update next to a dummy update is not found on the secondary site. The dummy update was being written with incorrect metadata about the size of the VVR update.
RESOLUTION:
Fixed the dummy update size metadata being written to disk.

* 4157643 (Tracking ID: 4159198)
SYMPTOM:
The vxfmrmap utility generated a core dump on Solaris due to a missing id in pfmt.
DESCRIPTION:
The core dump was seen due to a missing id in pfmt.
RESOLUTION:
Added the id in the pfmt() statement.

* 4158517 (Tracking ID: 4159199)
SYMPTOM:
A core dump was generated while running the TC "./scripts/admin/vxtune/vxdefault.tc" on AIX 7.3 TL2:
gettimeofday(??, ??) at 0xd02a7dfc
get_exttime(), line 532 in "vm_utils.c"
cbr_cmdlog(argc = 2, argv = 0x2ff224e0, a_client_id = 0), line 275 in "cbr_cmdlog.c"
main(argc = 2, argv = 0x2ff224e0), line 296 in "vxtune.c"
DESCRIPTION:
Passing a NULL parameter to the gettimeofday function was causing the core dump.
RESOLUTION:
Code changes have been made to pass a timeval parameter instead of NULL to the gettimeofday function.

* 4161646 (Tracking ID: 4149528)
SYMPTOM:
vxconfigd and vx commands hang. The vxconfigd stack is seen as follows:
volsync_wait
volsiowait
voldco_read_dco_toc
voldco_await_shared_tocflush
volcvm_ktrans_fmr_cleanup
vol_ktrans_commit
volconfig_ioctl
volsioctl_real
vols_ioctl
vols_unlocked_ioctl
do_vfs_ioctl
ksys_ioctl
__x64_sys_ioctl
do_syscall_64
entry_SYSCALL_64_after_hwframe
DESCRIPTION:
There is a hang in CVM reconfig and the DCO-TOC protocol, which causes vxconfigd and VxVM commands to hang. In case of overlapping reconfigs, it is possible that the rebuild seqno on the master and a slave end up with different values. If some DCO-TOC protocol is in progress at that point, the protocol gets hung due to the difference in the rebuild seqno (messages are dropped). Messages similar to the following can be found in /etc/vx/log/logger.txt on the master node. Note the mismatch in the rebuild seqno in the two messages, in the strings "rbld_seq: 1" and "fsio-rbld_seqno: 0": the seqno received from the slave is 1 while the one present on the master is 0.
Jan 16 11:57:56:329170 1705386476329170 38ee FMR dco_toc_req: mv: masterfsvol1-1 rcvd req withold_seq: 0 rbld_seq: 1
Jan 16 11:57:56:329171 1705386476329171 38ee FMR dco_toc_req: mv: masterfsvol1-1 pend rbld, retry rbld_seq: 1 fsio-rbld_seqno: 0 old: 0 cur: 3 new: 3 flag: 0xc10d st
RESOLUTION:
Instead of using the rebuild seqno to determine whether the DCO-TOC protocol is running in the same reconfig, the reconfig seqno is now used as the rebuild seqno. Since the reconfig seqno is the same on all nodes in the cluster, the DCO-TOC protocol finds a consistent rebuild seqno during CVM reconfig, and no node drops the DCO-TOC protocol messages. A CVM protocol version check has been added while using the reconfig seqno as the rebuild seqno, so the new functionality takes effect only if the CVM protocol version is >= 300.

* 4162049 (Tracking ID: 4156271)
SYMPTOM:
An add node operation failed due to missing SELinux permissions.
DESCRIPTION:
The following vxvm-related denials are seen during the add node operation:
{'vxvm_t': 'allow vxvm_t dhcpc_t:file read;'}, {'vxvm_t': 'allow vxvm_t unconfined_service_t:sem associate;'}
Steps:
1. Configure the cluster on a physical 5561 machine.
2. Perform the add node operation.
3. SELinux validation fails.
On a media-only VM setup these denials are not observed.
RESOLUTION:
Code changes have been committed for the issue.
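When diagnosing SELinux denials like the ones in incident 4162049 above, the standard auditd tooling (not VxVM-specific) can be used to list recent AVC records and derive the missing permissions; a minimal sketch:

# ausearch -m avc -ts recent | grep vxvm_t
# audit2allow -a

The audit2allow output is only a diagnostic aid here; the fixed packages ship the corrected policy.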
* 4162053 (Tracking ID: 4132221)
SYMPTOM:
Supportability requirement for an easier path link to the dmpdr utility.
DESCRIPTION:
The current paths of the DMPDR utility are long and hard for customers to remember, so a symbolic link to this utility was requested for easier access.
RESOLUTION:
Code changes have been made to create a symlink to this utility for easier access.

* 4162055 (Tracking ID: 4116024)
SYMPTOM:
The kernel panicked at gab_ifreemsg with the following stack:
gab_ifreemsg
gab_freemsg
kmsg_gab_send
vol_kmsg_sendmsg
vol_kmsg_sender
DESCRIPTION:
In a CVR environment with an RVG of more than 600 data volumes, enabling the vxvvrstatd daemon through the vxvm-recover service causes vxvvrstatd to call ioctl(VOL_RV_APPSTATS). The latter generates a kmsg longer than 64k and triggers a kernel panic, because GAB/LLT do not support any message longer than 64k.
RESOLUTION:
Code changes have been made to limit the maximum number of data volumes for which ioctl(VOL_RV_APPSTATS) can request VVR statistics.

* 4162058 (Tracking ID: 4046560)
SYMPTOM:
vxconfigd aborts on Solaris if a device's hardware path is more than 128 characters.
DESCRIPTION:
When vxconfigd starts, it claims the devices existing on the node and updates the VxVM device database. During this process, devices excluded from VxVM are excluded from the VxVM device database. To check whether a device should be excluded, the device's full hardware path is considered. If the hardware path is longer than 128 characters, vxconfigd aborts, because the code could not handle hardware path strings beyond 128 characters.
RESOLUTION:
The required code changes have been made to handle long hardware path strings.

* 4162917 (Tracking ID: 4139166)
SYMPTOM:
Enable the VVR Bunker feature for shared diskgroups.
DESCRIPTION:
The VVR Bunker feature was not supported for shared diskgroup configurations.
RESOLUTION:
Enabled the VVR Bunker feature for shared diskgroups.

* 4162966 (Tracking ID: 4146885)
SYMPTOM:
Restarting syncrvg after termination restarts the synchronization from the beginning.
DESCRIPTION:
vradmin syncrvg would terminate after 2 minutes of inactivity, such as during a network error. If run again, it would restart from scratch.
RESOLUTION:
The vradmin syncrvg operation now continues from where it was terminated.

* 4163010 (Tracking ID: 4146080)
SYMPTOM:
SELinux denials are observed on the primary master, causing operation failures.
DESCRIPTION:
Detailed steps:
1. Configured a 2+2 RHEL9-node, shared storage, CVR setup with 3 RVGs (2 with async and 1 with sync replication) and 3-4 volumes each (some EC volumes and some non-EC).
2. Started fio on all volumes from the primary master.
3. An online EC volume (sdg1_rvg1_vol3) of 500G was added in RVG1.
4. While the online volume was being added, the primary slave (pundl360g10-15v153) was rebooted.
5. Observed ET 4146079 and SELinux denial messages (vxvm) on the primary master.
The below SELinux denial is observed repeatedly:
time->Fri Jan 5 16:06:40 2024
type=PROCTITLE msg=audit(1704451000.811:292): proctitle=7073002D6566
type=SYSCALL msg=audit(1704451000.811:292): arch=c000003e syscall=257 success=no exit=-13 a0=ffffff9c a1=7ffe2d59f3f0 a2=0 a3=0 items=0 ppid=15022 pid=15023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ps" exe="/usr/bin/ps" subj=system_u:system_r:vxvm_t:s0 key=(null)
type=AVC msg=audit(1704451000.811:292): avc: denied { open } for pid=15023 comm="ps" path="/proc/3444/stat" dev="proc" ino=21284 scontext=system_u:system_r:vxvm_t:s0 tcontext=system_u:system_r:nscd_t:s0 tclass=file permissive=0
(A second, near-identical denial is logged for the subsequent ps invocation.)
RESOLUTION:
Code changes have been committed.

* 4164114 (Tracking ID: 4162873)
SYMPTOM:
Disk reclaim is slow.
DESCRIPTION:
The disk reclaim length should be decided by the storage's maximum reclaim length, but Volume Manager split the reclaim request into segments smaller than the maximum reclaim length, which led to a performance regression.
RESOLUTION:
A code change has been made to avoid splitting the reclaim request at the Volume Manager level.

* 4164250 (Tracking ID: 4154121)
SYMPTOM:
When replicated disks are in SPLIT mode, importing their disk group on the target node failed with "Device is a hardware mirror".
DESCRIPTION:
When replicated disks are in SPLIT mode, they are readable and writable, yet importing their disk group on the target node failed with "Device is a hardware mirror". The third-party array does not expose a disk attribute that shows when a disk is in SPLIT mode. With this enhancement, the replicated disk group can be imported when use_hw_replicatedev is enabled.
RESOLUTION:
The code is enhanced to import the replicated disk group on the target node when use_hw_replicatedev is enabled (see the sketch below).

* 4164252 (Tracking ID: 4159403)
SYMPTOM:
When replicated disks are in SPLIT mode and use_hw_replicatedev is on, disks are marked as cloned disks after the hardware replicated disk group gets imported.
DESCRIPTION:
The clearclone option is now added automatically when importing the hardware replicated disk group, to clear the cloned flag on the disks.
RESOLUTION:
The code is enhanced to import the replicated disk group with the clearclone option.

* 4164254 (Tracking ID: 4160883)
SYMPTOM:
The clone_flag was set on SRDF-R1 disks after a reboot.
DESCRIPTION:
The clean clone state got reset in the AUTOIMPORT case, which ultimately led to the clone_flag being set on the disk.
RESOLUTION:
A code change has been made to correct the behavior of setting the clone_flag on a disk.
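The use_hw_replicatedev behavior referenced in the incidents above is controlled through the vxdefault interface; a minimal sketch, assuming this interface form, with srdfdg as a hypothetical disk group name:

# vxdefault list | grep use_hw_replicatedev
# vxdefault set use_hw_replicatedev on
# vxdg import srdfdg

With the fixes above, the import also applies clearclone automatically, so the imported disks do not end up flagged as clones.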
* 4164475 (Tracking ID: 4164474)
SYMPTOM:
A detached plex of an encrypted volume does not reattach after a node reboot.
DESCRIPTION:
If a plex is detached before a reboot, it is expected to be automatically attached post reboot, assuming no issues in storage connectivity. This was not working for encrypted volumes, mainly due to a race between the starting of the vxencryptd threads and vxrecover attaching the plexes.
RESOLUTION:
Code has been added to explicitly trigger vxrecover in the vxencryptd context once all the worker threads are spawned.

* 4165276 (Tracking ID: 4161331)
SYMPTOM:
vxresize is failing to perform an IOCTL, so the volume size cannot be changed.
DESCRIPTION:
The following vxresize command is failing due to a missing SELinux permission:
r6515-001v026.vxindia.veritas.com:~ # /usr/sbin/vradmin -g sfsdg resizevol rvg_MASTER_FS MASTER_FS_1 25091144
VxVM VVR vradmin INFO V-5-52-1206 Volume length rounded-up to 25091152 so that it is a multiple of the disk group alignment.
Message from Primary:
VxVM vxresize WARNING V-5-1-2527 VX_GETFSOPT ioctl failed -- errno 13
Refer to https://jira.community.veritas.com/browse/IA-54270 for more details.
[pid 290834] openat(AT_FDCWD, "/vx/MASTER_FS", O_RDONLY) = 9
[pid 290834] ioctl(9, _IOC(_IOC_WRITE, 0x46, 0x5, 0x1658), 0x7fffc0e6c5ec) = -1 EACCES (Permission denied)
[pid 290834] write(2, "VxVM vxresize WARNING V-5-1-2527"..., 33VxVM vxresize WARNING V-5-1-2527 ) = 33
[pid 290834] write(2, "VX_GETFSOPT ioctl failed -- errn"..., 37VX_GETFSOPT ioctl failed -- errno 13
RESOLUTION:
Code changes have been committed.

* 4165431 (Tracking ID: 4160809)
SYMPTOM:
vxconfigd hangs during a VxVM transaction, causing a cluster-wide hang.
DESCRIPTION:
A VxVM volume configured with a Data Change Object (DCO) pre-allocates memory to perform bitmap read/write operations. This memory is pre-allocated at volume create/start time using the KMEM cache (the kmem_cache_alloc() call). If the system is under memory pressure, this allocation through the KMEM cache can get stuck for a long time waiting for memory. This leads to a VxVM transaction hang and eventually to cluster-wide I/O slowness for a long time, causing application I/O timeouts.
RESOLUTION:
The FMR memory buffer allocation logic has been changed to use _get_free_pages() and vmalloc() based allocation instead of going through kmem_cache_alloc() calls. The code now quickly falls back to vmalloc() if _get_free_pages() is unable to allocate memory, thus avoiding the hang.

* 4165889 (Tracking ID: 4165158)
SYMPTOM:
Plexes of layered volumes in a VVR environment remain in the STALE state even after a manual or vxattachd-driven vxrecover operation.
DESCRIPTION:
The issue is that stale TUTILs are not detected and cleared by vxattachd under two specific conditions together: 1) the volume is under an RVG (VVR), and 2) the volume is layered. vxattachd relies on "vxprint -a" output, which does not capture layered volumes unless the layered volume's name is given explicitly (for example, vxprint -a vol-L01 prints the layered volume correctly).
RESOLUTION:
The -h option, when combined with -a, does show layered volumes without naming the object. vxattachd has been modified to use "-ah" instead of "-a", after which it recovers the volumes (see the sketch below). The logic to clear stale TUTILs has also been extended to private disk groups.
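A quick way to see the difference that drove this fix; mydg is a hypothetical disk group name, and vol-L01 is the layered-volume name from the incident text:

# vxprint -g mydg -a  | grep vol-L01
# vxprint -g mydg -ah | grep vol-L01

The first form may omit the layered volume entirely; the second lists it, which is what the updated vxattachd relies on.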
* 4166881 (Tracking ID: 4164734)
SYMPTOM:
Support for TLS 1.1 is not disabled.
DESCRIPTION:
In VxVM, support for TLS 1.0, SSLv2, and SSLv3 is already disabled, but support for TLS 1.1, which has known security vulnerabilities, is not.
RESOLUTION:
Made the required code change to disable support for TLS 1.1.

* 4166882 (Tracking ID: 4161852)
SYMPTOM:
After an InfoScale upgrade, the "vxdg upgrade" command succeeds but throws the error "Rlink is not encrypted".
DESCRIPTION:
In the "vxdg upgrade" code path, the encryption keys need to be regenerated if encrypted rlinks are present in the VxVM configuration. However, the key regeneration code was called even when the rlinks were not encrypted, so further code threw the error "VxVM vxencrypt ERROR V-5-1-20484 Rlink is not encrypted!"
RESOLUTION:
The necessary code changes have been made to invoke encryption key regeneration only for rlinks that are encrypted.

* 4172377 (Tracking ID: 4172033)
SYMPTOM:
Data corruption after recovery of a volume.
DESCRIPTION:
When a disabled/detached volume was started after the storage came back, stale agenodes were left in memory, which prevented detach tracking for subsequent I/Os on the same region as the stale agenode.
RESOLUTION:
Stale agenodes are now cleaned up at the appropriate stage.

* 4172424 (Tracking ID: 4168552)
SYMPTOM:
A CFS file system hangs, with the following error in syslog:
[ 2492.765405] blk_insert_cloned_request: over max segments limit. (272 > 256)
DESCRIPTION:
As vxdmp is a layered (stackable) block device driver, it has to inherit all the device queue_limits of the underlying device; this was not happening properly, hence the issue.
RESOLUTION:
The fix uses the proper API, blk_stack_limits, to set the device queue_limits; it has been merged into the 8.0.2 branch.

* 4173722 (Tracking ID: 4158303)
SYMPTOM:
NULL pointer dereference at dmp_error_analysis_callback with the below stack:
__bad_area_nosemaphore
do_page_fault
page_fault
[exception RIP: dmpsvc_da_analyze_error+417]
dmp_error_analysis_callback [vxdmp]
dmp_daemons_loop [vxdmp]
DESCRIPTION:
The BLK-MQ code, which processes I/O as requests, failed to deal with the bio. The bio was a dummy bio, which might be added just for compatibility, and this resulted in a system panic.
RESOLUTION:
A code change has been made to check whether the I/O is request-based and, if so, handle it differently.

* 4174239 (Tracking ID: 4171979)
SYMPTOM:
The system panics with the message "kernel BUG at fs/inode.c:1578!" and the following stack:
do_invalid_op
invalid_op
[exception RIP: iput+436]
bd_acquire
blkdev_open
do_dentry_open
path_openat
do_filp_open
DESCRIPTION:
There is an inode reference leak in the code, due to which the inode reference count keeps increasing until it reaches its maximum permissible value (the maximum of a 32-bit unsigned int, 4294967295). Once it hits this, it wraps back to 0, which is an invalid value and causes the system panic.
RESOLUTION:
Code changes have been made to fix the inode reference count leaks.

* 4175713 (Tracking ID: 4175712)
SYMPTOM:
Vulnerabilities have been reported in the third-party components [curl, libxml, openssl] that are used by VxVM.
DESCRIPTION:
The current versions of the third-party components [curl, libxml, openssl] used by VxVM have been reported with security vulnerabilities that need to be fixed.
RESOLUTION:
[curl, libxml, openssl] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

* 4177400 (Tracking ID: 4173284)
SYMPTOM:
The "dmpdr -o refresh" command is failing with the error:
# /usr/lib/vxvm/voladm.d/bin/dmpdr -o refresh
Global symbol "$mask" requires explicit package name (did you forget to declare "my $mask"?) at /usr/lib/vxvm/voladm.d/lib/Comm.pm line 186.
Global symbol "$mask" requires explicit package name (did you forget to declare "my $mask"?) at /usr/lib/vxvm/voladm.d/lib/Comm.pm line 190.
Compilation failed in require at /usr/lib/vxvm/voladm.d/bin/dmpdr line 14.
BEGIN failed--compilation aborted at /usr/lib/vxvm/voladm.d/bin/dmpdr line 14.
DESCRIPTION:
A local Perl variable was used where a global one was needed, which caused this issue.
RESOLUTION:
Code changes have been made to fix the problem.

* 4177791 (Tracking ID: 4167359)
SYMPTOM:
An EMC DeviceGroup is missing an SRDF SYMDEV. After the disk group import, the import failed with "Disk write failure" and corrupted disk headers.
DESCRIPTION:
SRDF does not make all disks read-writable (RW) on the remote side during an SRDF failover. When an SRDF SYMDEV is missing, the missing disk of the pair on the remote side remains in a write-disabled (WD) state. This leads to write errors, which can further cause disk header corruption.
RESOLUTION:
A code change has been made to fail the disk group import if any disks in the group are detected as WD.

* 4177793 (Tracking ID: 4168665)
SYMPTOM:
The use_hw_replicatedev logic is unable to import a CVMVolDg resource unless vxdg -c is specified, after EMC SRDF devices are closed and rescanned on the CVM master.
DESCRIPTION:
The use_hw_replicatedev logic is unable to import a CVMVolDg resource unless vxdg -c is specified, after EMC SRDF devices are closed and rescanned on the CVM master.
RESOLUTION:
Reset "ret" before making another attempt at the dg import.

* 4178101 (Tracking ID: 4113841)
SYMPTOM:
A VVR panic happened in the below code path:
kmsg_sys_poll()
nmcom_get_next_mblk()
nmcom_get_hdr_msg()
nmcom_get_next_msg()
nmcom_wait_msg_tcp()
nmcom_server_main_tcp()
DESCRIPTION:
When a network scan tool sends an unexpected request to VVR during the VVR connection handshake, the TCP connection may be terminated immediately by the tool, which may lead to the sock being released. VVR then panics on a NULL pointer when it tries to refer to the sock during processing.
RESOLUTION:
The code has been changed to check that the sock is valid; otherwise, it returns without continuing the VVR connection.
* 4178106 (Tracking ID: 4176336)
SYMPTOM:
VVR replication pauses due to a network disconnection (VVR heartbeat timeout).
DESCRIPTION:
VVR UDP heartbeat packets are sent on port 4145, which is also used for replicating data. While the replication itself is done over TCP, the heartbeats are sent over UDP, from port 4145 on the source to port 4145 on the destination. However, a port scanner such as nmap, if run in parallel in the VVR environment, produces 0-length UDP packets from a random port to replication port 4145. VVR failed to handle these 0-length UDP packets, hence the issue.
RESOLUTION:
A code change has been made to handle 0-length rogue UDP packets.
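For reference, the trigger described above can be reproduced with a standard UDP port scan against the replication port; this only illustrates the failure scenario and should not be run against an unpatched production secondary:

# nmap -sU -p 4145 <secondary-host>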
* 4178177 (Tracking ID: 4153457)
SYMPTOM:
When using Dell/EMC PowerFlex ScaleIO storage, Veritas File System (VxFS) on Veritas Volume Manager (VxVM) volumes fails to mount after a reboot.
DESCRIPTION:
During system boot, the ScaleIO devices are detected after VxVM has completed its auto-discovery of storage devices. Hence VxVM does not auto-detect the ScaleIO devices and fails to auto-import the diskgroup and mount the filesystem.
RESOLUTION:
Appropriate code changes have been made to auto-discover the ScaleIO devices.

* 4178186 (Tracking ID: 4152014)
SYMPTOM:
The excluded dmpnodes are visible after a system reboot when SELinux is disabled.
DESCRIPTION:
During system reboot, the disks' hardware soft links failed to be created before the DMP exclusion function ran, so DMP failed to recognize the excluded dmpnodes.
RESOLUTION:
Code changes have been made to reduce the latency in the creation of hardware soft links and to remove the tmpfs /dev/vx on an SELinux-disabled platform.

* 4178201 (Tracking ID: 4120878)
SYMPTOM:
The system does not come up after a reboot taken after enabling dmp_native_support; it goes into maintenance mode.
DESCRIPTION:
"vxio.ko" depends on the new "storageapi.ko" module. "storageapi.ko" was missing from the VxDMP_initrd file, which is created when dmp_native_support is enabled. So on reboot, without "storageapi.ko" present, "vxio.ko" fails to load.
RESOLUTION:
Code changes have been made to include "storageapi.ko" in VxDMP_initrd.

* 4178207 (Tracking ID: 4118809)
SYMPTOM:
System panic at dmp_process_errbp with the following call stack:
machine_kexec __crash_kexec crash_kexec oops_end no_context __bad_area_nosemaphore do_page_fault page_fault [exception RIP: dmp_process_errbp+203] dmp_daemons_loop kthread ret_from_fork
DESCRIPTION:
When a LUN is detached, VxDMP may invoke its error handler to process the error buffer. During that period, the OS SCSI device node may already have been removed, so VxDMP cannot find the corresponding path node, which introduces a pointer dereference panic.
RESOLUTION:
Code changes have been made to avoid the panic.

* 4178260 (Tracking ID: 4175390)
SYMPTOM:
Plexes have the STATE field as TEMPRMSD and the TUTIL0 field as NEW.
DESCRIPTION:
When multiple mirrors are added to a volume in parallel in the background, the mirror plexes may go into the TEMPRMSD state. vxtask list shows a task for only one plex; no tasks are created for the remaining plexes (see the sketch below).
# vxassist -g mirrordg -b mirror vol1 ibm_shark0_11
# vxassist -g mirrordg -b mirror vol1 ibm_shark0_12
# vxprint
<snip>
v vol1 fsgen ENABLED 4194304 - ACTIVE - -
pl vol1-01 vol1 ENABLED 4194304 - ACTIVE - -
sd ibm_shark0_10-01 vol1-01 ENABLED 4194304 0 - - -
pl vol1-02 vol1 ENABLED 4194304 - ACTIVE - -
sd ibm_shark0_11-01 vol1-02 ENABLED 4194304 0 - - -
pl vol1-03 vol1 DISABLED 4194304 - TEMPRMSD NEW -
sd ibm_shark0_12-01 vol1-03 ENABLED 4194304 0 - - -
RESOLUTION:
Code changes have been done to fix the issue.
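Until the fix is applied, a sketch of how the scenario can be avoided: let each background mirror task finish before starting the next one, using the task list to watch for completion (object names taken from the incident output above):

# vxassist -g mirrordg -b mirror vol1 ibm_shark0_11
# vxtask -l list
# vxassist -g mirrordg -b mirror vol1 ibm_shark0_12
# vxprint -g mirrordg vol1

Run the second mirror only after vxtask shows no pending mirror task for vol1, then verify that no plex is left in the TEMPRMSD/NEW state.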
* 4179072 (Tracking ID: 4178449)
SYMPTOM:
vxconfigd aborts with a segfault; the vold core file shows thread stack corruption.
DESCRIPTION:
In vxconfigd multi-threaded mode, two threads were writing the translog in parallel using a static buffer, which could be reallocated to a bigger buffer, resulting in a thread accessing it after free.
RESOLUTION:
Use the thread-safe popen() and a mutex to protect the static buffer from use-after-free.

* 4179818 (Tracking ID: 4178920)
SYMPTOM:
"vxdmp V-5-0-0 failed to get request for devno for IO offset" continuously appears in the system log.
DESCRIPTION:
DMP sets BLK_MQ_REQ_NOWAIT when allocating a request from the OS. This means that the OS might not be able to allocate a request for the I/O operation when the system is busy, and DMP reports a warning message. When this occurs, DMP retries the allocation later; the message is harmless.
RESOLUTION:
A code change has been made to reduce the log level of this message.

* 4183337 (Tracking ID: 4184198)
SYMPTOM:
In CVR environments, during heavy application writes on CVM slave nodes, the number of I/Os written to the SRL per second reduces.
DESCRIPTION:
In CVR environments, during heavy application writes on CVM slave nodes, the I/Os get throttled at the logowner node, causing the overall IOPS on the data volume to reduce dramatically. The throttling is due to lock contention at the logowner during bursty workloads on slave nodes. This can be validated by looking at vxstat output on the data volumes.
RESOLUTION:
VxVM changes have been made to remove the lock contention at the logowner for writes from CVM slave nodes, and the code has been modified to request SRL positions for writes in FIFO order.

* 4184100 (Tracking ID: 4183777)
SYMPTOM:
The system log is flooded with the spurious alarms "VxVM vxio V-5-0-0 read/write on disk: xxx took longer to complete".
DESCRIPTION:
When vol_ioship_stats_enable is disabled, the volume layer uses jiffies to initialize an I/O's start time, while DMP later uses the current time of day to reset the start time. Comparing the different time formats causes the big discrepancy, hence the issue.
RESOLUTION:
Code changes have been made to set the I/O's start and end times using the same format.

* 4185142 (Tracking ID: 4185141)
SYMPTOM:
Support VxVM on SLES15 SP6.
DESCRIPTION:
VxVM encountered breakages with SLES15 SP6.
RESOLUTION:
Code changes have been made to support VxVM and ASL-APM on SLES15 SP6.

* 4187579 (Tracking ID: 4187459)
SYMPTOM:
Plex attach operations take an excessive amount of time to sync when Azure 4K native disks are configured, and all VxVM commands hang.
DESCRIPTION:
A defect has been identified in the DCO (Data Change Object), which may generate non-4K-aligned I/Os. Customers may encounter this issue if the underlying disks cannot handle unaligned I/Os.
RESOLUTION:
Code changes have been made to ensure I/O is aligned with the disk sector size.

* 4188380 (Tracking ID: 4067191)
SYMPTOM:
In a CVR environment, after rebooting a slave node, the master node may panic in volrv_free_mu.
DESCRIPTION:
As part of a CVM master switch, an rvg_recovery is triggered. In this step, a race condition can occur between the VVR objects, due to which an object value is not updated properly and can cause a panic.
RESOLUTION:
Code changes have been made to handle the race condition between the VVR objects.

Patch ID: VRTSaslapm 8.0.2.2300

* 4137497 (Tracking ID: 4011780)
SYMPTOM:
EMC PowerStore plus PP is a new array, and support for it needs to be added.
DESCRIPTION:
EMC PowerStore is a new array and the earlier ASL does not support it, so it would not be claimed. Support for this array has now been added in the current ASL.
RESOLUTION:
Code changes to support EMC PowerStore plus PP have been made.

* 4188382 (Tracking ID: 4188381)
SYMPTOM:
Support ASLAPM on SLES15 SP6.
DESCRIPTION:
ASLAPM encountered breakages with SLES15 SP6.
RESOLUTION:
Code changes have been made to support VxVM and ASL-APM on SLES15 SP6.

Patch ID: VRTSvxvm-8.0.2.1400

* 4124889 (Tracking ID: 4090828)
SYMPTOM:
Dumped fmrmap data for better debuggability of corruption issues.
DESCRIPTION:
The vxplex att/vxvol recover CLIs internally fetch fmrmaps from the kernel using an existing ioctl before starting the attach operation, get the data in binary format, and dump it to a file stored with a specific name format such as volname_taskid_date.
RESOLUTION:
Changes done now dump the fmrmap data into a binary file.

* 4129765 (Tracking ID: 4111978)
SYMPTOM:
Replication failed to start because the vxnetd threads were not running on the secondary site.
DESCRIPTION:
vxnetd was waiting to start the "nmcomudpsrv" and "nmcomlistenserver" threads. Due to a race condition on a resource shared between those two threads, vxnetd was stuck in a dead loop until the maximum retry count was reached.
RESOLUTION:
Code changes have been made to add lock protection to avoid the race condition.

* 4130858 (Tracking ID: 4128351)
SYMPTOM:
A system hang was observed when switching the log owner.
DESCRIPTION:
VVR mdship SIOs might be throttled due to reaching the maximum allocation count, etc. These SIOs hold the I/O count. When a log owner change kicked in and quiesced the RVG, the VVR log owner change SIO waited for the I/O count to drop to zero before proceeding. VVR mdship requests from the log client were returned with EAGAIN because the RVG was quiesced, but the throttled mdship SIOs needed to be driven by the incoming mdship requests; hence the deadlock, which caused the system hang.
RESOLUTION:
Code changes have been made to flush the mdship queue before the VVR log owner change SIO waits for the I/O drain.

* 4130861 (Tracking ID: 4122061)
SYMPTOM:
A hang was observed after a resync operation; vxconfigd was waiting for the slaves' responses.
DESCRIPTION:
The VVR logowner was in a transaction and returned VOLKMSG_EAGAIN to CVM_MSG_GET_METADATA, which is expected. Once the client received VOLKMSG_EAGAIN, it would sleep 10 jiffies and retry the kmsg. In a busy cluster, the retried kmsgs plus the new kmsgs could build up and hit the kmsg flow control before the VVR logowner transaction completed. Once the client refused any kmsgs due to the flow control, the transaction on the VVR logowner could get stuck, because it required kmsg responses from all the slave nodes.
RESOLUTION:
Code changes have been made to increase the kmsg flow control and, instead of letting the kmsg receiver fall asleep, handle the kmsg in a restart function.
* 4132775 (Tracking ID: 4132774)
SYMPTOM:
The existing VxVM package fails to load on SLES15 SP5.
DESCRIPTION:
This kernel includes multiple changes related to the handling of SCSI passthrough requests, the initialization of bio routines, and the ways of obtaining blk requests. Hence the existing code is not compatible with SLES15 SP5.
RESOLUTION:
The required changes have been made to make VxVM compatible with SLES15 SP5.

* 4133930 (Tracking ID: 4100646)
SYMPTOM:
Recoveries of DCL objects do not happen because the ATT and RELOCATE flags are set on the DCL subdisks.
DESCRIPTION:
For multiple reasons, a stale tutil may remain stamped on the DCL subdisks, which can prevent subsequent vxrecover instances from recovering the DCL plex.
RESOLUTION:
The issue is resolved by having the vxattachd daemon detect these stale tutils, clear them, and trigger recoveries after a 10-minute interval.

* 4133946 (Tracking ID: 3972344)
SYMPTOM:
After a reboot of a node on a setup where multiple diskgroups / volumes within diskgroups are present, the error 'vxrecover ERROR V-5-1-11150 Volume <volume_name> does not exist' is sometimes logged in /var/log/messages.
DESCRIPTION:
In the volume_startable function (volrecover.c), dgsetup is called to set the current default diskgroup. This does not update the current_group variable, leading to inappropriate mappings: volumes are searched in an incorrect diskgroup, which is what the error message reports. The vxrecover command works fine if the diskgroup name associated with the volume is specified [vxrecover -g <dg_name> -s] (see the sketch below).
RESOLUTION:
Changed the code to use switch_diskgroup() instead of dgsetup, so current_group is updated and the current_dg is set. vxrecover then finds the volume correctly.
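A sketch of the workaround noted above for unpatched systems; mydg stands in for the diskgroup that owns the volume:

# vxrecover -s
# vxrecover -g mydg -s

The first form may search the wrong diskgroup on affected builds; the second names the diskgroup explicitly and works regardless.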
* 4135127 (Tracking ID: 4134023)
SYMPTOM:
vxconfigrestore (diskgroup configuration restoration) for a H/W replicated diskgroup failed with the below error:
# vxconfigrestore -p LINUXSRDF
VxVM vxconfigrestore INFO V-5-2-6198 Diskgroup LINUXSRDF configuration restoration started ......
VxVM vxdg ERROR V-5-1-0 Disk group LINUXSRDF: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]
Please refer to system log for details.
... ...
VxVM vxconfigrestore ERROR V-5-2-3706 Diskgroup configuration restoration for LINUXSRDF failed.
DESCRIPTION:
A H/W replicated diskgroup can be imported only with the option "-o usereplicatedev=only". vxconfigrestore did not check for a H/W replicated diskgroup, and without the proper import option the diskgroup import failed.
RESOLUTION:
The code changes have been made to perform the H/W replicated diskgroup check in vxconfigrestore.

* 4135388 (Tracking ID: 4131202)
SYMPTOM:
In a VVR environment, 'vradmin changeip' would fail with the following error message:
VxVM VVR vradmin ERROR V-5-52-479 Host <host> not reachable.
DESCRIPTION:
An existing heartbeat to the new secondary host is assumed, whereas it only starts after the changeip operation.
RESOLUTION:
The heartbeat assumption is fixed.

* 4136419 (Tracking ID: 4089696)
SYMPTOM:
In an FSS environment, with a DCO log attached to the VVR SRL volume, a reboot of the cluster may result in a panic on the CVM master node:
voldco_get_mapid
voldco_get_detach_mapid
voldco_get_detmap_offset
voldco_recover_detach_map
volmv_recover_dcovol
vol_mv_fmr_precommit
vol_mv_precommit
vol_ktrans_precommit_parallel
volobj_ktrans_sio_start
voliod_iohandle
voliod_loop
DESCRIPTION:
If a DCO is configured on the SRL volume, and both the SRL volume plexes and the DCO plexes get an I/O error, this panic occurs in the recovery path.
RESOLUTION:
The recovery path has been fixed to manage this condition.

* 4136428 (Tracking ID: 4131449)
SYMPTOM:
In CVR environments, there was a restriction to configure up to four RVGs per diskgroup, as more RVGs resulted in degraded I/O performance during VxVM transactions.
DESCRIPTION:
In CVR environments, a VxVM transaction on an RVG also impacted I/O operations on the other RVGs in the same diskgroup, resulting in I/O performance degradation when a higher number of RVGs is configured in a diskgroup.
RESOLUTION:
The VxVM transaction impact has been isolated to each RVG, making it possible to scale beyond four RVGs in a diskgroup.

* 4136429 (Tracking ID: 4077944)
SYMPTOM:
In a VVR environment, when I/O throttling gets activated and deactivated by VVR, it may result in an application I/O hang.
DESCRIPTION:
When VVR throttles and unthrottles I/O, the driving of throttled I/O is not done in one of the cases.
RESOLUTION:
Resolved the issue by making sure the throttled application I/Os get driven in all cases.

* 4136802 (Tracking ID: 4136751)
SYMPTOM:
SELinux denies access to files where support_t permissions are required.
DESCRIPTION:
SELinux denies access to files where support_t permissions are required; this fix was added to address such denials.
RESOLUTION:
Code changes have been made for this issue.

* 4136859 (Tracking ID: 4117568)
SYMPTOM:
vradmind dumps core with the following stack:
#1 std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string (this=0x7ffdc380d810, __str=<error reading variable: Cannot access memory at address 0x3736656436303563>)
#2 0x000000000040e02b in ClientMgr::closeStatsSession
#3 0x000000000040d0d7 in ClientMgr::client_ipm_close
#4 0x000000000058328e in IpmHandle::~IpmHandle
#5 0x000000000057c509 in IpmHandle::events
#6 0x0000000000409f5d in main
DESCRIPTION:
After terminating vrstat, the StatSession in vradmind was closed and the corresponding Client object was deleted. When closing the IPM object of vrstat, the removed Client was accessed, hence the core dump.
RESOLUTION:
Code changes have been made to fix the issue.

* 4136866 (Tracking ID: 4090476)
SYMPTOM:
The Storage Replicator Log (SRL) is not draining to the secondary. The rlink status shows that the outstanding writes never reduce over several hours, with the following message repeating in the log:
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
DESCRIPTION:
In a poor network environment, VVR may appear to stop syncing. If another reconfiguration happened before the VVR state became clean, the VVR atomic window got set to a large size. VVR could not complete all the atomic updates before the next reconfiguration and kept sending atomic updates from the VVR pending position. Hence VVR appears to be stuck.
RESOLUTION:
Code changes have been made to update the VVR pending position accordingly.

* 4136868 (Tracking ID: 4120068)
SYMPTOM:
A standard disk could be added to a cloned diskgroup, which is not expected.
DESCRIPTION:
When a disk is added to a disk group, a pre-check is made to avoid ending up with a mixed diskgroup. In a cluster, the local node might fail to use the latest record for this pre-check, resulting in a mixed diskgroup in the cluster, which further caused node join failures.
RESOLUTION:
Code changes have been made to use the latest record for the mixed-diskgroup pre-check.
* 4136870 (Tracking ID: 4117957)
SYMPTOM:
During a phased reboot of a two-node Veritas Access cluster, mounts would hang, with a transaction aborted waiting for I/O drain:
VxVM vxio V-5-3-1576 commit: Timedout waiting for Cache XXXX to quiesce, iocount XX msg 0
DESCRIPTION:
The transaction on the cache object failed because there were I/Os waiting on the cache object. Those queued I/Os could not proceed due to the missing flag VOLOBJ_CACHE_RECOVERED on the cache object. A transaction might kick in while the old cache was doing recovery, so the new cache object might fail to inherit VOLOBJ_CACHE_RECOVERED, further causing the I/O hang.
RESOLUTION:
Code changes have been made to fail the new cache creation if the old cache is in recovery.

* 4137174 (Tracking ID: 4081740)
SYMPTOM:
The vxdg flush command is slow because too many LUNs needlessly access /proc/partitions.
DESCRIPTION:
Linux BLOCK_EXT_MAJOR (block major 259) is used as the extended devt for block devices. When the partition number of a device is more than 15, the partition device gets assigned under major 259 to overcome the sd limitation of 16 minors per device, allowing more partitions per sd device. During "vxdg flush", for each LUN in the disk group, vxconfigd reads /proc/partitions line by line through fgets() to find all the partition devices with major number 259, which makes vxconfigd respond sluggishly when there is a large number of LUNs in the disk group.
RESOLUTION:
The code has been changed to remove the needless access of /proc/partitions for LUNs that do not use the extended devt.

* 4137175 (Tracking ID: 4124223)
SYMPTOM:
A core dump is generated for vxconfigd during TC execution.
DESCRIPTION:
The TC creates a scenario where zeros are written to the first block of the disk. In such a case, a NULL check is necessary before a certain variable is accessed in the code. This NULL check was missing, which caused the vxconfigd core dump during TC execution.
RESOLUTION:
The necessary NULL checks have been added to avoid the vxconfigd core dump.

* 4137508 (Tracking ID: 4066310)
SYMPTOM:
New feature for performance improvement.
DESCRIPTION:
The Linux block subsystem has two types of block drivers: 1) block multi-queue drivers and 2) bio-based block drivers. DMP has historically been a bio-based driver; block multi-queue support has now been added for DMP.
RESOLUTION:
Added the BLK-MQ feature for the DMP driver.

* 4137615 (Tracking ID: 4087628)
SYMPTOM:
When DCM is in replication mode with mounted volumes having large regions for DCM to sync, a slave node reboot may cause CVM to go into a faulted state.
DESCRIPTION:
During resiliency tests, the following sequence of operations was performed:
1. On an AWS FSS-CVR setup, replication is started across the sites for two RVGs.
2. The logowner service groups for both RVGs are online on a slave node.
3. Another slave node, where the logowner is not online, is rebooted.
4. After the slave node comes back from the reboot, it is unable to join the CVM cluster.
5. vx commands also hang on the CVM master and the logowner slave node.
RESOLUTION:
In the RU SIO, before requesting vxfs_free_region(), drop the I/O count and hold it again afterwards. Because the transaction has been locked (vol_ktrans_locked = 1) right before calling vxfs_free_region(), the iocount is not needed to keep the RVG from being removed.

* 4137753 (Tracking ID: 4128271)
SYMPTOM:
In a CVR environment, a node is not able to join the CVM cluster if RVG recovery is taking place.
DESCRIPTION:
If there has been an SRL overflow, RVG recovery takes more time because it is loaded with more work than required, as the recovery-related metadata was not updated.
RESOLUTION:
The metadata is now updated correctly to reduce the RVG recovery time.

* 4137757 (Tracking ID: 4136458)
SYMPTOM:
In a CVR environment, if the CVM slave node is acting as logowner, a DCM resync issued after a snapshot restore may hang, showing 0% sync remaining.
DESCRIPTION:
The DCM resync completion is not correctly communicated to the CVM master, resulting in the hang.
RESOLUTION:
The DCM resync operation has been enhanced to correctly communicate resync completion to the CVM master.

* 4137986 (Tracking ID: 4133793)
SYMPTOM:
The DCO experiences I/O errors while doing a vxsnap restore on VxVM volumes.
DESCRIPTION:
The dirty flag was getting set in the context of an SIO with the VOLSIO_AUXFLAG_NO_FWKLOG flag set. This led to transaction errors while running the vxsnap restore command in a loop on VxVM volumes, causing a transaction abort. As a result, VxVM tried to clean up by removing the newly added BMs, and then tried to access the deleted BMs, which it could not since they had been deleted. This ultimately led to the DCO I/O error.
RESOLUTION:
First-write klogging is now skipped in the context of an I/O with the VOLSIO_AUXFLAG_NO_FWKLOG flag set.

* 4138051 (Tracking ID: 4090943)
SYMPTOM:
On the primary, the rlink continuously gets connected/disconnected, with the below message seen in the secondary syslog:
VxVM VVR vxio V-5-3-0 Disconnecting replica <rlink_name> since log is full on secondary.
DESCRIPTION:
When the RVG logowner node panics, RVG recovery happens in three phases. At the end of the second phase of recovery, the in-memory and on-disk SRL positions remain incorrect, and if there is a logowner change during this time, the rlink won't get connected.
RESOLUTION:
The in-memory and on-disk SRL positions are now handled correctly.
* 4138069 (Tracking ID: 4139703)
SYMPTOM:
The system panics on RHEL 9.2 in an AWS environment while registering the PGR key.
DESCRIPTION:
On RHEL 9.2 (kernel 5.14.0-284.11.1.el9_2.x86_64), a panic is observed while reading PGR keys on an AWS VM. Reproduction step: run "/etc/vx/diag.d/vxdmppr read /dev/vx/dmp/ip-10-20-2-49_nvme4_0" on an AWS NVMe RHEL 9.2 setup. Failure signature:
PID: 8250 TASK: ffffa0e882ca1c80 CPU: 1 COMMAND: "vxdmppr"
machine_kexec
__crash_kexec
crash_kexec
oops_end
do_trap
do_error_trap
exc_invalid_op
asm_exc_invalid_op
[exception RIP: kfree+1074]
blk_update_request
blk_mq_end_request
dmp_kernel_nvme_ioctl [vxdmp]
dmp_dev_ioctl [vxdmp]
dmp_send_nvme_passthru_cmd_over_node [vxdmp]
dmp_pr_do_nvme_read.constprop.0 [vxdmp]
dmp_pr_read [vxdmp]
dmpioctl [vxdmp]
dmp_ioctl [vxdmp]
blkdev_ioctl
__x64_sys_ioctl
do_syscall_64
entry_SYSCALL_64_after_hwframe
RESOLUTION:
The issue has been resolved.

* 4138075 (Tracking ID: 4129873)
SYMPTOM:
In a CVR environment, application I/O may hang if the CVM slave node is acting as the RVG logowner and a data volume grow operation is triggered, followed by a logclient node leaving the cluster.
DESCRIPTION:
When the logowner is not the CVM master and a data volume grow operation is taking place, the CVM master controls the region locking for I/O operations. If a logclient node leaves the cluster, the I/O operations initiated by it are not cleaned up correctly due to a lack of coordination between the CVM master and the RVG logowner node.
RESOLUTION:
Coordination between the CVM master and the RVG logowner node has been fixed to manage the I/O cleanup correctly.
* 4138101 (Tracking ID: 4114867)
SYMPTOM:
The following error messages appear while adding new disks:
[root@server101 ~]# cat /etc/udev/rules.d/41-VxVM-selinux.rules | tail -1
KERNEL=="VxVM*", SUBSYSTEM=="block", ACTION=="add", RUN+="/bin/sh -c 'if [ `/usr/sbin/getenforce` != "Disabled" -a `/usr/sbin/
[root@server101 ~]# systemctl restart systemd-udevd.service
[root@server101 ~]# udevadm test /block/sdb 2>&1 | grep "invalid"
invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 104 ('D')
DESCRIPTION:
In /etc/udev/rules.d/41-VxVM-selinux.rules, the double quotation marks around "Disabled" and "disable" are the issue: nested double quotes inside the RUN value break the rule parsing (see the sketch below).
RESOLUTION:
Code changes have been made to correct the problem.
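The shape of the problem and of the fix can be illustrated with a reduced rule. This is only a sketch: the restorecon action stands in for whatever the shipped rule actually runs, and is not the actual rule content.

# Broken shape: the inner double quotes around "Disabled" terminate the RUN value early
KERNEL=="VxVM*", SUBSYSTEM=="block", ACTION=="add", RUN+="/bin/sh -c 'if [ $(/usr/sbin/getenforce) != "Disabled" ]; then /usr/sbin/restorecon /dev/%k; fi'"
# Fixed shape: keep the quoted RUN value free of nested double quotes
KERNEL=="VxVM*", SUBSYSTEM=="block", ACTION=="add", RUN+="/bin/sh -c 'if [ $(/usr/sbin/getenforce) != Disabled ]; then /usr/sbin/restorecon /dev/%k; fi'"

After editing such a rule, running udevadm test against a block device (as in the symptom output above) verifies that udev no longer reports an invalid key/value pair.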
* 4138107 (Tracking ID: 4065490)
SYMPTOM:
systemd-udev threads consume more CPU during system bootup or device discovery.
DESCRIPTION:
During disk discovery, when new storage devices are discovered, the VxVM udev rules are invoked to create a hardware path symbolic link and to set the SELinux security context on Veritas device files. To create the hardware path symbolic link for each storage device, the "find" command was used internally, which is a CPU-intensive operation. If many storage devices are attached to the system, the use of "find" causes high CPU consumption. Also, restorecon was run on VxVM device files to set the SELinux security context regardless of whether SELinux was enabled or disabled.
RESOLUTION:
The use of the "find" command has been replaced with the "udevadm" command, and the SELinux security context on VxVM device files is now set only when SELinux is enabled on the system.

* 4138224 (Tracking ID: 4129489)
SYMPTOM:
With VxVM installed in an AWS cloud environment, disk devices may intermittently disappear from the 'vxdisk list' output.
DESCRIPTION:
There was an issue with disk discovery at the OS and DDL layers.
RESOLUTION:
The integration issue with disk discovery has been resolved.

* 4138236 (Tracking ID: 4134069)
SYMPTOM:
VVR replication was not using the VxFS SmartMove feature if the filesystem was not mounted on the RVG logowner node.
DESCRIPTION:
Initial synchronization and DCM replay in VVR required the filesystem to be mounted locally on the logowner node, as VVR could not fetch the required information from a remotely mounted filesystem mount point.
RESOLUTION:
VVR has been updated to fetch the required SmartMove-related information from a remotely mounted filesystem mount point.

* 4138237 (Tracking ID: 4113240)
SYMPTOM:
In a CVR environment with hostname binding configured, the rlink on the VVR secondary may have an incorrect VVR primary IP.
DESCRIPTION:
The VVR secondary rlink picks up a wrong IP randomly, since the replication is configured using a virtual host that maps to multiple IPs.
RESOLUTION:
The VVR primary IP is corrected on the VVR secondary rlink.

* 4138251 (Tracking ID: 4132799)
SYMPTOM:
If GLM is not loaded, starting CVM fails with the following errors:
# vxclustadm -m gab startnode
VxVM vxclustadm INFO V-5-2-9687 vxclustadm: Fencing driver is in disabled mode -
VxVM vxclustadm ERROR V-5-1-9743 errno 3
DESCRIPTION:
Only the error number, not the error message, is printed when joining CVM fails.
RESOLUTION:
The code changes have been made to fix the issue.

* 4138348 (Tracking ID: 4121564)
SYMPTOM:
A memory leak for volcred_t could be observed in vxio.
DESCRIPTION:
A memory leak could occur if some private region I/Os hang on a disk and there are duplicate entries for the disk in vxio.
RESOLUTION:
The code has been changed to avoid the memory leak.

* 4138537 (Tracking ID: 4098144)
SYMPTOM:
vxtask list shows the parent process without any sub-tasks, and it never progresses for the SRL volume.
DESCRIPTION:
The vxtask remains stuck because the parent process does not exit. It was seen that all children completed, but the parent was not able to exit:
(gdb) p active_jobs
$1 = 1
Active jobs are decremented as children complete. Somehow one count remained pending, and it is not known which child exited without decrementing the count. Instrumentation messages have been added to capture the issue.
RESOLUTION:
Code has been added to create a log file in /etc/vx/log/. This file is deleted when vxrecover exits successfully and remains present when the vxtask parent hang issue is seen.

* 4138538 (Tracking ID: 4085404)
SYMPTOM:
A huge performance drop is seen after Veritas Volume Replicator (VVR) enters Data Change Map (DCM) mode when a large Storage Replicator Log (SRL) is configured.
DESCRIPTION:
The active map flush caused RVG serialization. Once the RVG gets serialized, all I/Os are queued in the restart queue until the active map flush is finished. The overly frequent active map flush caused the huge I/O drop while flushing the SRL to the DCM.
RESOLUTION:
The code has been modified to adjust the frequency of the active map flush and balance the application I/O and the SRL flush.

* 4140598 (Tracking ID: 4141590)
SYMPTOM:
Some incidents do not appear in the changelog because their cross-references are not properly processed.
DESCRIPTION:
Not every cross-reference is parent-child. In such cases, 'top' is not present and the changelog script ends execution.
RESOLUTION:
All cross-references are traversed to find the parent-child relation only if it is present, and then to find the top.

* 4143580 (Tracking ID: 4142054)
SYMPTOM:
The system panicked with the following stack:
[ 9543.195915] Call Trace:
[ 9543.195938] dump_stack+0x41/0x60
[ 9543.195954] panic+0xe7/0x2ac
[ 9543.195974] vol_rv_inactive+0x59/0x790 [vxio]
[ 9543.196578] vol_rvdcm_flush_done+0x159/0x300 [vxio]
[ 9543.196955] voliod_iohandle+0x294/0xa40 [vxio]
[ 9543.197327] ? volted_getpinfo+0x15/0xe0 [vxio]
[ 9543.197694] voliod_loop+0x4b6/0x950 [vxio]
[ 9543.198003] ? voliod_kiohandle+0x70/0x70 [vxio]
[ 9543.198364] kthread+0x10a/0x120
[ 9543.198385] ? set_kthread_struct+0x40/0x40
[ 9543.198389] ret_from_fork+0x1f/0x40
DESCRIPTION:
From the SIO stack, this is a case of "done" being called twice. In vol_rvdcm_flush_start(), when a child SIO is created, it is directly added to the global SIO queue. This can cause a child SIO to start while vol_rvdcm_flush_start() is still generating other child SIOs. It means that, for example, when the first child SIO completes, it can find the children count dropping to zero and call done; the next child SIO can then also independently find the children count to be zero and call done again.
RESOLUTION:
The code changes have been done to fix the problem.

* 4143857 (Tracking ID: 4130393)
SYMPTOM:
vxencryptd crashed repeatedly due to a segfault.
DESCRIPTION:
Linux can pass large I/Os of 2MB size to the VxVM layer, but vxencryptd only expects I/Os with a maximum size of 1MB from the kernel and pre-allocates only 1MB buffers for encryption/decryption. This caused vxencryptd to crash when processing large I/Os.
RESOLUTION:
Code changes have been made to allocate a large enough buffer.

* 4145064 (Tracking ID: 4145063)
SYMPTOM:
The vxio module fails to load after VxVM package installation.
DESCRIPTION:
The following message is seen in dmesg:
[root@dl360g10-115-v23 ~]# dmesg | grep symbol
[ 2410.561682] vxio: no symbol version for storageapi_associate_blkg
RESOLUTION:
Because of incorrectly nested IF blocks in src/linux/kernel/vxvm/Makefile.target, the code for the RHEL 9 block was not being executed, so certain symbols were not present in the vxio.mod.c file. This in turn caused the above symbol warning in dmesg. The improper nesting of the IF conditions has been fixed.
DESCRIPTION: The following message is seen in dmesg:
[root@dl360g10-115-v23 ~]# dmesg | grep symbol
[ 2410.561682] vxio: no symbol version for storageapi_associate_blkg
RESOLUTION: Because of incorrectly nested IF blocks in "src/linux/kernel/vxvm/Makefile.target", the code for the RHEL 9 block was not executed, so certain symbols were missing from the vxio.mod.c file. This in turn caused the above symbol warning in dmesg. The improper nesting of the IF conditions has been fixed.
* 4146550 (Tracking ID: 4108235)
SYMPTOM: A system-wide hang causes all application and config IOs to hang.
DESCRIPTION: Memory pools are used in the vxio driver to manage kernel memory for different purposes. One of the pools, the 'NMCOM pool' used on the VVR secondary, was leaking memory. The leak was not detectable from the pool statistics because the metadata referring to the pool itself was being freed.
RESOLUTION: The bug causing the memory leak is fixed. There was a race condition in the VxVM transaction code path on the secondary side of VVR where memory was not freed when certain conditions were hit.
* 4149499 (Tracking ID: 4149498)
SYMPTOM: While upgrading the VxVM package, a number of warnings are seen about .ko files not being found for various modules.
DESCRIPTION: These warnings are seen because all the unwanted .ko files have been removed.
RESOLUTION: Code changes have been done so that these warnings are no longer emitted.
* 4150099 (Tracking ID: 4150098)
SYMPTOM: After a few VxVM operations, if a reboot is taken, the file system goes into read-only mode and vxconfigd does not come up.
DESCRIPTION: The SELinux context of /etc/fstab was being updated, which caused the issue.
RESOLUTION: Fixed the SELinux context of /etc/fstab.
* 4150459 (Tracking ID: 4150160)
SYMPTOM: The system panics in the dmp code path.
DESCRIPTION: The CMDS-fsmigadm test hits "Oops: 0003 [#1] PREEMPT SMP PTI". Reproduction steps: run the cmds-fsmigadm test. Build details:
# rpm -qi VRTSvxvm
Name : VRTSvxvm
Version : 8.0.3.0000
Release : 0716_RHEL9
Architecture: x86_64
Install Date: Wed 10 Jan 2024 11:46:24 AM IST
Group : Applications/System
Size : 414813743
License : Veritas Proprietary
Signature : RSA/SHA256, Thu 04 Jan 2024 04:24:23 PM IST, Key ID 4e84af75cc633953
Source RPM : VRTSvxvm-8.0.3.0000-0716_RHEL9.src.rpm
Build Date : Thu 04 Jan 2024 06:35:01 AM IST
Build Host : vmrsvrhel9bld.rsv.ven.veritas.com
Packager : enterprise_technical_support@veritas.com
Vendor : Veritas Technologies LLC
URL : www.veritas.com/support
Summary : Veritas Volume Manager
RESOLUTION: The buggy code has been removed and fixed.
Patch ID: VRTSaslapm 8.0.2.1400
* 4137995 (Tracking ID: 4117350)
SYMPTOM: The following error is observed when trying to import:
# vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg
VxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:
Replicated dg record is found. Did you want to import hardware replicated LUNs? Try vxdg [-o usereplicatedev=only] import option with -c[s]
Please refer to system log for details.
DESCRIPTION: The REPLICATED flag is used to identify a hardware replicated device, so to import a disk group on REPLICATED disks, the usereplicatedev option must be used. As that option was not provided, the issue was observed.
RESOLUTION: The REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.
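For disk groups that do legitimately reside on hardware-replicated LUNs (see also incidents 4133312, 4133315, and 4135127 below), the import must be performed with the usereplicatedev option, as the error text above suggests. A minimal sketch, assuming a disk group named SIdg; add -c or -cs as the error message indicates for your configuration:
# vxdg -o usereplicatedev=only import SIdg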
Patch ID: VRTSvxvm-8.0.2.1300
* 4132775 (Tracking ID: 4132774)
SYMPTOM: The existing VxVM package fails to load on SLES15SP5.
DESCRIPTION: This kernel introduces multiple changes to the handling of SCSI passthrough requests, the initialization of bio routines, and the ways of obtaining blk requests, so the existing code is not compatible with SLES15SP5.
RESOLUTION: The required changes have been done to make VxVM compatible with SLES15SP5.
* 4133312 (Tracking ID: 4128451)
SYMPTOM: A hardware replicated disk group fails to be auto-imported after reboot.
DESCRIPTION: Currently only standard and cloned disk groups are supported with auto-import; hardware replicated disk groups are not supported yet.
RESOLUTION: Code changes have been made to support hardware replicated disk groups with auto-import.
* 4133315 (Tracking ID: 4130642)
SYMPTOM: A node failed to rejoin the cluster after switching from master to slave because the replicated disk group import failed. The following error message can be found in /var/VRTSvcs/log/CVMCluster_A.log:
CVMCluster:cvm_clus:monitor:vxclustadm nodestate return code:[101] with output: [state: out of cluster reason: Replicated dg record is found: retry to add a node failed]
DESCRIPTION: The flag showing that the disk group had been imported with usereplicatedev=only was not set at the last import. The missing flag caused the replicated disk group import to fail, which in turn caused the node rejoin failure.
RESOLUTION: The code changes have been done to flag the disk group after it is imported with usereplicatedev=only.
* 4133946 (Tracking ID: 3972344)
SYMPTOM: After a reboot of a node on a setup where multiple disk groups / volumes within disk groups are present, an error 'vxrecover ERROR V-5-1-11150 Volume <volume_name> does not exist' is sometimes logged in /var/log/messages.
DESCRIPTION: In the volume_startable function (volrecover.c), dgsetup is called to set the current default disk group. This does not update the current_group variable, leading to inappropriate mappings: volumes are searched for in an incorrect disk group, which is what the error message reports. The vxrecover command works fine if the disk group name associated with the volume is specified [vxrecover -g <dg_name> -s].
RESOLUTION: Changed the code to use switch_diskgroup() instead of dgsetup, so that current_group is updated and the current_dg is set. Thus vxrecover finds the volume correctly.
* 4135127 (Tracking ID: 4134023)
SYMPTOM: vxconfigrestore (disk group configuration restoration) for a H/W replicated disk group failed with the following error:
# vxconfigrestore -p LINUXSRDF
VxVM vxconfigrestore INFO V-5-2-6198 Diskgroup LINUXSRDF configuration restoration started ......
VxVM vxdg ERROR V-5-1-0 Disk group LINUXSRDF: import failed:
Replicated dg record is found. Did you want to import hardware replicated LUNs? Try vxdg [-o usereplicatedev=only] import option with -c[s]
Please refer to system log for details. ... ...
VxVM vxconfigrestore ERROR V-5-2-3706 Diskgroup configuration restoration for LINUXSRDF failed.
DESCRIPTION: A H/W replicated disk group can be imported only with the option "-o usereplicatedev=only". vxconfigrestore did not check for H/W replicated disk groups, and without the proper import option the disk group import failed.
RESOLUTION: The code changes have been made to check for H/W replicated disk groups in vxconfigrestore.
Patch ID: VRTSaslapm 8.0.2.1300
* 4137283 (Tracking ID: 4137282)
SYMPTOM: Support for ASLAPM on SLES15SP5 is required.
DESCRIPTION: SLES15SP5 is a new release, so the APM module must be recompiled with the new kernel.
RESOLUTION: Compiled the APM with the new kernel.
Patch ID: VRTSvxvm-8.0.2.1200
* 4119267 (Tracking ID: 4113582)
SYMPTOM: In VVR environments, a reboot of VVR primary nodes results in the RVG going into passthru mode.
DESCRIPTION: The reboot of primary nodes resulted in missing write completions of updates on the primary SRL volume. After the node came up, the last update received by the VVR secondary was incorrectly compared with the missing updates.
RESOLUTION: Fixed the check to correctly compare the last update received by the VVR secondary.
* 4123065 (Tracking ID: 4113138)
SYMPTOM: In CVR environments configured with a virtual hostname, after node reboots on the VVR primary and secondary, 'vradmin repstatus' invoked on the secondary site shows stale information with the following warning message:
VxVM VVR vradmin INFO V-5-52-1205 Primary is unreachable or RDS has configuration error. Displayed status information is from Secondary and can be out-of-date.
DESCRIPTION: This issue occurs when an explicit RVG logowner is set on the CVM master, due to which the old connection between vradmind and its remote peer is dropped and a new connection is not formed.
RESOLUTION: Fixed the issue with the vradmind connection to its remote peer.
* 4123069 (Tracking ID: 4116609)
SYMPTOM: In CVR environments where replication is configured using virtual hostnames, vradmind on the VVR primary loses its connection with the remote peer after a planned RVG logowner change on the VVR secondary site.
DESCRIPTION: vradmind on the VVR primary was unable to detect an RVG logowner change on the VVR secondary site.
RESOLUTION: Enabled the primary vradmind to detect an RVG logowner change on the VVR secondary site.
* 4123080 (Tracking ID: 4111789)
SYMPTOM: In VVR/CVR environments, VVR would use any IP/NIC/network to replicate the data and might not utilize the high-performance NIC/network configured for VVR.
DESCRIPTION: The default value of the tunable was set to 'any_ip'.
RESOLUTION: The default value of the tunable is set to 'replication_ip'.
* 4124291 (Tracking ID: 4111254)
SYMPTOM: vradmind dumps core with the following stack:
#3 0x00007f3e6e0ab3f6 in __assert_fail () from /root/cores/lib64/libc.so.6
#4 0x000000000045922c in RDS::getHandle ()
#5 0x000000000056ec04 in StatsSession::addHost ()
#6 0x000000000045d9ef in RDS::addRVG ()
#7 0x000000000046ef3d in RDS::createDummyRVG ()
#8 0x000000000044aed7 in PriRunningState::update ()
#9 0x00000000004b3410 in RVG::update ()
#10 0x000000000045cb94 in RDS::update ()
#11 0x000000000042f480 in DBMgr::update ()
#12 0x000000000040a755 in main ()
DESCRIPTION: vradmind was trying to access a NULL pointer (the remote host name) in an rlink object, as the Remote Host attribute of the rlink had not been set.
RESOLUTION: The issue has been fixed by making code changes.
* 4124794 (Tracking ID: 4114952)
SYMPTOM: With VVR configured with a virtual hostname, after node reboots on the DR site, the 'vradmin pauserep' command failed with the following error:
VxVM VVR vradmin ERROR V-5-52-421 vradmind server on host <host> not responding or hostname cannot be resolved.
DESCRIPTION: The virtual host mapped to multiple IP addresses, and vradmind was using an incorrectly mapped IP address.
RESOLUTION: Fixed by using the correct mapping of the IP address from the virtual host.
* 4124796 (Tracking ID: 4108913)
SYMPTOM: vradmind dumps core with the following stacks:
#3 0x00007f2c171be3f6 in __assert_fail () from /root/coredump/lib64/libc.so.6
#4 0x00000000005d7a90 in VList::concat () at VList.C:1017
#5 0x000000000059ae86 in OpMsg::List2Msg () at Msg.C:1280
#6 0x0000000000441bf6 in OpMsg::VList2Msg () at ../../include/Msg.h:389
#7 0x000000000043ec33 in DBMgr::processStatsOpMsg () at DBMgr.C:2764
#8 0x00000000004093e9 in process_message () at srvmd.C:418
#9 0x000000000040a66d in main () at srvmd.C:733
#0 0x00007f4d23470a9f in raise () from /root/core.Jan18/lib64/libc.so.6
#1 0x00007f4d23443e05 in abort () from /root/core.Jan18/lib64/libc.so.6
#2 0x00007f4d234b3037 in __libc_message () from /root/core.Jan18/lib64/libc.so.6
#3 0x00007f4d234ba19c in malloc_printerr () from /root/core.Jan18/lib64/libc.so.6
#4 0x00007f4d234bba9c in _int_free () from /root/core.Jan18/lib64/libc.so.6
#5 0x00000000005d5a0a in ValueElem::_delete_val () at Value.C:491
#6 0x00000000005d5990 in ValueElem::~ValueElem () at Value.C:480
#7 0x00000000005d7244 in VElem::~VElem () at VList.C:480
#8 0x00000000005d8ad9 in VList::~VList () at VList.C:1167
#9 0x000000000040a71a in main () at srvmd.C:743
#0 0x000000000040b826 in DList::head () at ../include/DList.h:82
#1 0x00000000005884c1 in IpmHandle::send () at Ipm.C:1318
#2 0x000000000056e101 in StatsSession::sendUCastStatsMsgToPrimary () at StatsSession.C:1157
#3 0x000000000056dea1 in StatsSession::sendStats () at StatsSession.C:1117
#4 0x000000000046f610 in RDS::collectStats () at RDS.C:6011
#5 0x000000000043f2ef in DBMgr::collectStats () at DBMgr.C:2799
#6 0x00007f98ed9131cf in start_thread () from /root/core.Jan26/lib64/libpthread.so.0
#7 0x00007f98eca4cdd3 in clone () from /root/core.Jan26/lib64/libc.so.6
DESCRIPTION: There is a race condition in vradmind that may cause memory corruption and unpredictable results. vradmind periodically forks a child thread to collect VVR statistics and send them to the remote site, while the main thread may also be sending data using the same handler object. Member variables in the handler object are thus accessed in parallel from multiple threads and may become corrupted.
RESOLUTION: The code changes have been made to fix the issue.
* 4125392 (Tracking ID: 4114193)
SYMPTOM: The 'vradmin repstatus' command showed the replication data status incorrectly as 'inconsistent'.
DESCRIPTION: vradmind was relying on the replication data status from both the primary and the DR site.
RESOLUTION: Fixed the replication data status to rely on the primary data status.
* 4125811 (Tracking ID: 4090772)
SYMPTOM: vxconfigd/vx commands hang on the secondary site in a CVR environment.
DESCRIPTION: During a window with unmatched SRL positions, any application (e.g. fdisk) trying to open the secondary RVG volume acquires a lock and waits for the SRL positions to match. Any VxVM transaction kicked in during this window also has to wait for the same lock. In this case the logowner node panicked, which triggered the logownership change protocol; that protocol hung because the earlier transaction was stuck. As the logowner change protocol could not complete, there was no valid logowner, the SRL positions could not match, and a deadlock resulted. That led to the vxconfigd and vx command hang.
RESOLUTION: Added changes to allow read operations on the volume even if the SRL positions are unmatched. Write IOs are still blocked and only open() calls for read-only operations are allowed, so there are no data consistency or integrity issues.
* 4128127 (Tracking ID: 4132265)
SYMPTOM: A machine with NVMe disks panics with the following stack:
blk_update_request
blk_mq_end_request
dmp_kernel_nvme_ioctl
dmp_dev_ioctl
dmp_send_nvme_passthru_cmd_over_node
dmp_pr_do_nvme_read
dmp_pgr_read
dmpioctl
dmp_ioctl
blkdev_ioctl
__x64_sys_ioctl
do_syscall_64
DESCRIPTION: The issue applied to setups with NVMe devices that do not support SCSI3-PR; an ioctl was called without correctly checking whether SCSI3-PR was supported.
RESOLUTION: Fixed the check to avoid calling the ioctl on devices that do not support SCSI3-PR.
* 4128835 (Tracking ID: 4127555)
SYMPTOM: While adding a secondary site using the 'vradmin addsec' command, the command fails with the following error if a disk group ID is used in place of the disk group name:
VxVM vxmake ERROR V-5-1-627 Error in field remote_dg=<dgid>: name is too long
DESCRIPTION: Disk group names can be 32 characters long, whereas disk group IDs can be 64 characters long. This was not handled by the vradmin commands.
RESOLUTION: Fixed the vradmin commands to handle the case where longer disk group IDs are used in place of disk group names.
* 4129766 (Tracking ID: 4128380)
SYMPTOM: If VVR is configured using a virtual hostname and the 'vradmin resync' command is invoked from a DR site node, it fails with the following error:
VxVM VVR vradmin ERROR V-5-52-405 Primary vradmind server disconnected.
DESCRIPTION: When the virtual hostname maps to multiple IPs, the vradmind service on the DR site could not reach the VVR logowner node on the primary site because an incorrect IP address mapping was used.
RESOLUTION: Fixed vradmind to use the correctly mapped IP address of the primary vradmind.
* 4130402 (Tracking ID: 4107801)
SYMPTOM: /dev/vx/.dmp hardware path entries are not created on SLES15SP3 onwards.
DESCRIPTION: vxpath-links is responsible for creating the hardware paths under /dev/vx/.dmp. This script was invoked from /lib/udev/vxpath_links, but the "/lib/udev" folder is explicitly removed from SLES15SP3 onwards, and vendor-specific scripts/libraries are expected to live in vendor-specific folders.
RESOLUTION: Code changes have been made to invoke "/etc/vx/vxpath-links" instead of "/lib/udev/vxpath-links".
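A quick way to confirm the fix for incident 4130402 on an affected system is to check which vxpath-links path the installed udev rules reference and whether the hardware-path entries are now being created. A minimal sketch (rule file names and device entries will vary by system):
# grep -r vxpath-links /etc/udev/rules.d/
# ls -l /dev/vx/.dmp | head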
* 4130827 (Tracking ID: 4098391)
SYMPTOM: A kernel panic is observed with the following stack:
#6 [ffffa479c21cf6f0] page_fault at ffffffffb240130e
[exception RIP: bfq_bio_bfqg+37]
RIP: ffffffffb1e78135 RSP: ffffa479c21cf7a0 RFLAGS: 00010002
RAX: 000000000000001f RBX: 0000000000000000 RCX: ffffa479c21cf860
RDX: ffff8bd779775000 RSI: ffff8bd795b2fa00 RDI: ffff8bd795b2fa00
RBP: ffff8bd78f136000 R8: 0000000000000000 R9: ffff8bd793a5b800
R10: ffffa479c21cf828 R11: 0000000000001000 R12: ffff8bd7796b6e60
R13: ffff8bd78f136000 R14: ffff8bd795b2fa00 R15: ffff8bd7946ad0bc
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
#7 [ffffa479c21cf7b0] bfq_bic_update_cgroup at ffffffffb1e78458
#8 [ffffa479c21cf7e8] bfq_bio_merge at ffffffffb1e6f47f
#9 [ffffa479c21cf840] blk_mq_submit_bio at ffffffffb1e48c09
#10 [ffffa479c21cf8c8] submit_bio_noacct at ffffffffb1e3c7e3
#11 [ffffa479c21cf958] submit_bio at ffffffffb1e3c87b
#12 [ffffa479c21cf9a8] submit_bh_wbc at ffffffffb1d2536a
#13 [ffffa479c21cf9e0] block_read_full_page at ffffffffb1d27ac1
#14 [ffffa479c21cfa90] do_read_cache_page at ffffffffb1c2f7e5
#15 [ffffa479c21cfb48] read_part_sector at ffffffffb1e546b5
#16 [ffffa479c21cfb60] read_lba at ffffffffb1e595d2
#17 [ffffa479c21cfba8] efi_partition at ffffffffb1e59f4d
#18 [ffffa479c21cfcb8] blk_add_partitions at ffffffffb1e54377
#19 [ffffa479c21cfcf8] bdev_disk_changed at ffffffffb1d2a8fa
#20 [ffffa479c21cfd30] __blkdev_get at ffffffffb1d2c16c
#21 [ffffa479c21cfda0] blkdev_get at ffffffffb1d2c2b4
#22 [ffffa479c21cfdb8] __device_add_disk at ffffffffb1e5107e
#23 [ffffa479c21cfe20] dmp_register_disk at ffffffffc0e68ae7 [vxdmp]
#24 [ffffa479c21cfe50] dmp_reconfigure_db at ffffffffc0e8d8bd [vxdmp]
#25 [ffffa479c21cfe80] dmpioctl at ffffffffc0e75cd5 [vxdmp]
#26 [ffffa479c21cfe90] dmp_ioctl at ffffffffc0e9d469 [vxdmp]
#27 [ffffa479c21cfea8] blkdev_ioctl at ffffffffb1e4ed19
#28 [ffffa479c21cfef0] block_ioctl at ffffffffb1d2a719
#29 [ffffa479c21cfef8] ksys_ioctl at ffffffffb1cfb262
#30 [ffffa479c21cff30] __x64_sys_ioctl at ffffffffb1cfb296
#31 [ffffa479c21cff38] do_syscall_64 at ffffffffb1a0538b
#32 [ffffa479c21cff50] entry_SYSCALL_64_after_hwframe at ffffffffb240008c
DESCRIPTION: VxVM causes a kernel panic due to a NULL pointer dereference in kernel code when the BFQ disk IO scheduler is used. This is observed on SLES15 SP3 minor kernels >= 5.3.18-150300.59.68.1 and SLES15 SP4 minor kernels >= 5.14.21-150400.24.11.1.
RESOLUTION: It is recommended to use mq-deadline as the IO scheduler. Code changes have been done to automatically change the disk IO scheduler to mq-deadline (see the sketch following this group of incidents).
* 4130947 (Tracking ID: 4124725)
SYMPTOM: With VVR configured using virtual hostnames, the 'vradmin delpri' command could hang after doing the RVG cleanup.
DESCRIPTION: The 'vradmin delsec' command used prior to the 'vradmin delpri' command had left the cleanup in an incomplete state, causing the next cleanup command to hang.
RESOLUTION: Fixed to make sure that the 'vradmin delsec' command executes its workflow correctly.
Patch ID: VRTSvxvm-8.0.2.1100
* 4125322 (Tracking ID: 4119950)
SYMPTOM: Vulnerabilities have been reported in third-party components [curl and libxml] that are used by VxVM.
DESCRIPTION: The current versions of the third-party components [curl and libxml] used by VxVM have reported security vulnerabilities that need to be addressed.
RESOLUTION: [curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.
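As an interim workaround for incident 4130827 on systems where the patch has not yet applied the change automatically, the IO scheduler can be switched manually per device. A minimal sketch, assuming a hypothetical device sdb (the active scheduler is shown in brackets; the exact list varies by kernel):
# cat /sys/block/sdb/queue/scheduler
[bfq] mq-deadline kyber none
# echo mq-deadline > /sys/block/sdb/queue/scheduler
Note that a manual change via sysfs does not persist across reboots; the patched VxVM performs the switch itself.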
Patch ID: VRTSveki-8.0.2.2400
* 4182361 (Tracking ID: 4182362)
SYMPTOM: Veritas InfoScale Availability does not support SLES15SP6.
DESCRIPTION: Veritas InfoScale Availability does not support SLES15SP6.
RESOLUTION: Veritas InfoScale Availability support for SLES15SP6 is now introduced.
Patch ID: VRTSveki-8.0.2.1500
* 4135795 (Tracking ID: 4135683)
SYMPTOM: Enhance the debugging capability of VRTSveki package installation.
DESCRIPTION: The debugging capability of VRTSveki package installation is enhanced by using temporary debug logs for the SELinux policy file installation.
RESOLUTION: Code is changed to store the output of the VRTSveki SELinux policy file installation in temporary debug logs.
* 4140468 (Tracking ID: 4152368)
SYMPTOM: Some incidents do not appear in the changelog because their cross-references are not properly processed.
DESCRIPTION: Not every cross-reference is parent-child. In such cases 'top' is not present and the changelog script ends execution.
RESOLUTION: All cross-references are now traversed; the parent-child relation is looked up only if present, and then the top is found.
Patch ID: VRTSveki-8.0.2.1300
* 4134084 (Tracking ID: 4134083)
SYMPTOM: The VEKI module fails to load on SLES15 SP5.
DESCRIPTION: This issue occurs due to changes in the SLES15 SP5 kernel.
RESOLUTION: The VEKI module is updated to accommodate the changes in the kernel and load as expected on SLES15SP5.
Patch ID: VRTSveki-8.0.2.1200
* 4130816 (Tracking ID: 4130815)
SYMPTOM: The VEKI rpm does not have a changelog.
DESCRIPTION: A changelog in the rpm helps to find missing incidents with respect to other versions.
RESOLUTION: A changelog is generated and added to the VEKI rpm.
Patch ID: VRTSveki-8.0.2.1100
* 4118568 (Tracking ID: 4110457)
SYMPTOM: Veki packaging failure due to missing storageapi-specific files.
DESCRIPTION: While creating the build area for different components like GLM, GMS, ORAODM, unixvm, and VxFS, the Veki build area creation was failing because the storageapi changes were not accounted for in the Veki mk-symlink and build scripts.
RESOLUTION: Added support for creating the storageapi build area, for storageapi packaging via Veki, and for the storageapi build via Veki from the Veki makefiles. This packages storageapi along with Veki and resolves all interdependencies.
Patch ID: VRTSgms-8.0.2.2400
* 4184621 (Tracking ID: 4184622)
SYMPTOM: The GMS module failed to load on the SLES15-SP6 kernel.
DESCRIPTION: This issue occurs due to changes in the SLES15-SP6 kernel.
RESOLUTION: The GMS module is updated to accommodate the changes in the kernel and load as expected on the SLES15-SP6 kernel.
Patch ID: VRTSgms-8.0.2.1500
* 4133279 (Tracking ID: 4133278)
SYMPTOM: The GMS module fails to load on SLES15 SP5.
DESCRIPTION: This issue occurs due to changes in the SLES15 SP5 kernel.
RESOLUTION: The GMS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.
* 4134948 (Tracking ID: 4134947)
SYMPTOM: The GMS module fails to load on Azure SLES15 SP5.
DESCRIPTION: This issue occurs due to changes in the Azure SLES15 SP5 kernel.
RESOLUTION: The GMS module is updated to accommodate the changes in the kernel and load as expected on Azure SLES15 SP5.
Patch ID: VRTSgms-8.0.2.1300
* 4133279 (Tracking ID: 4133278)
SYMPTOM: The GMS module fails to load on SLES15 SP5.
DESCRIPTION: This issue occurs due to changes in the SLES15 SP5 kernel.
RESOLUTION: The GMS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.
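After installing the updated kernel-module packages above, a quick sanity check is to confirm that the modules loaded against the running kernel. A minimal sketch; the module names matched here (veki, plus the GMS/GLM modules) are assumptions based on the package names, so adjust the pattern to what lsmod actually reports on your system:
# uname -r
# lsmod | grep -E 'veki|gms|glm'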
Patch ID: VRTSgms-8.0.2.1200
* 4126266 (Tracking ID: 4125932)
SYMPTOM: A "no symbol version" warning for ki_get_boot is seen in dmesg after SFCFSHA configuration.
DESCRIPTION: A "no symbol version" warning for ki_get_boot is seen in dmesg after SFCFSHA configuration.
RESOLUTION: Updated the code to build GMS with the correct kbuild symbols.
* 4127527 (Tracking ID: 4107112)
SYMPTOM: The GMS module fails to load on a Linux minor kernel.
DESCRIPTION: This issue occurs due to changes in the minor kernel.
RESOLUTION: Modified the existing modinst-gms script to consider the kernel-build version in the exact-version-module calculation.
* 4127528 (Tracking ID: 4107753)
SYMPTOM: The GMS module fails to load on a Linux minor kernel.
DESCRIPTION: This issue occurs due to changes in the minor kernel.
RESOLUTION: Modified the existing modinst-gms script to consider the kernel-build version in the best-fit-module-version calculation if the exact-version module is not present.
* 4129708 (Tracking ID: 4129707)
SYMPTOM: The GMS rpm does not have a changelog.
DESCRIPTION: A changelog in the rpm helps to find missing incidents with respect to other versions.
RESOLUTION: A changelog is generated and added to the GMS rpm.
Patch ID: VRTSglm-8.0.2.2400
* 4174551 (Tracking ID: 4171246)
SYMPTOM: vxglm status shows active even if it fails to load the module.
DESCRIPTION: The 'systemctl status vxglm' command shows the vxglm service as active even after it failed to load the module.
RESOLUTION: Code changes have been done to fix this issue.
* 4184619 (Tracking ID: 4184620)
SYMPTOM: The GLM module failed to load on the SLES15-SP6 kernel.
DESCRIPTION: This issue occurs due to changes in the SLES15-SP6 kernel.
RESOLUTION: The GLM module is updated to accommodate the changes in the kernel and load as expected on the SLES15-SP6 kernel.
* 4186391 (Tracking ID: 4117910)
SYMPTOM: VRTSglm failed to load on SAP SLES15.
DESCRIPTION: VRTSglm was failing to install on SAP SLES15.
RESOLUTION: VRTSglm is updated to support SAP SLES15.
Patch ID: VRTSglm-8.0.2.1500
* 4133277 (Tracking ID: 4133276)
SYMPTOM: The GLM module fails to load on SLES15 SP5.
DESCRIPTION: This issue occurs due to changes in the SLES15 SP5 kernel.
RESOLUTION: The GLM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.
* 4134946 (Tracking ID: 4134945)
SYMPTOM: The GLM module fails to load on Azure SLES15 SP5.
DESCRIPTION: This issue occurs due to changes in the Azure SLES15 SP5 kernel.
RESOLUTION: The GLM module is updated to accommodate the changes in the kernel and load as expected on Azure SLES15 SP5.
* 4138274 (Tracking ID: 4126298)
SYMPTOM: The system may panic with an "unable to handle kernel paging request" error, and memory corruption can occur.
DESCRIPTION: The panic may occur due to a race between a spurious wakeup and the normal wakeup of a thread waiting for a GLM lock grant. Due to the race, the spurious wakeup may have already freed memory, and the normal wakeup thread may then pass that freed and reused memory to the wake_up function, causing memory corruption and a panic.
RESOLUTION: Fixed the race between the spurious and normal wakeup threads by making wake_up lock-protected.
Patch ID: VRTSglm-8.0.2.1300
* 4133277 (Tracking ID: 4133276)
SYMPTOM: The GLM module fails to load on SLES15 SP5.
DESCRIPTION: This issue occurs due to changes in the SLES15 SP5 kernel.
RESOLUTION: The GLM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.
Patch ID: VRTSglm-8.0.2.1200
* 4127524 (Tracking ID: 4107114)
SYMPTOM: The GLM module fails to load on a Linux minor kernel.
DESCRIPTION: This issue occurs due to changes in the minor kernel.
RESOLUTION: Modified the existing modinst-glm script to consider the kernel-build version in the exact-version-module calculation.
* 4127525 (Tracking ID: 4107754)
SYMPTOM: The GLM module fails to load on a Linux minor kernel.
DESCRIPTION: This issue occurs due to changes in the minor kernel.
RESOLUTION: Modified the existing modinst-glm script to consider the kernel-build version in the best-fit-module-version calculation if the exact-version module is not present.
* 4129715 (Tracking ID: 4129714)
SYMPTOM: The GLM rpm does not have a changelog.
DESCRIPTION: A changelog in the rpm helps to find missing incidents with respect to other versions.
RESOLUTION: A changelog is generated and added to the GLM rpm.
Patch ID: VRTSfsadv-8.0.2.2500
* 4188577 (Tracking ID: 4188576)
SYMPTOM: Security vulnerabilities exist in the Curl third-party component used by VxFS.
DESCRIPTION: VxFS uses the Curl third-party component, in which some security vulnerabilities exist.
RESOLUTION: VxFS is updated to use a newer version (8.12.1v) of this third-party component, in which the security vulnerabilities have been addressed.
Patch ID: VRTSspt-8.0.2.1300
* 4139975 (Tracking ID: 4149462)
SYMPTOM: A new script, list_missing_incidents.py, is provided, which compares rpm changelogs and lists incidents missing in the new version.
DESCRIPTION: list_missing_incidents.py compares the changelog of the old-version rpm with that of the new-version rpm and lists incidents missing in the new-version rpm, if any (the changelogs it consumes can be inspected as shown after this group of incidents). For details of the script, refer to README.list_missing_incidents in the VRTSspt package.
RESOLUTION: The list_missing_incidents.py script is included in the VRTSspt package.
* 4146957 (Tracking ID: 4149448)
SYMPTOM: A new script, check_incident_inchangelog.py, is provided, which checks whether an incident abstract is present in a changelog.
DESCRIPTION: If a changelog is present in an rpm or installed package, this script provided in VRTSspt can check whether an incident abstract is present in the changelog. For details of the script, refer to README.check_incident_inchangelog in the VRTSspt package.
RESOLUTION: The check_incident_inchangelog.py script is included in the VRTSspt package.
Patch ID: VRTSrest-3.0.10
* 4124960 (Tracking ID: 4130028)
SYMPTOM: The GET APIs for vm and filesystem were failing because of a datatype mismatch between the spec and the actual output when the client generated client code from the specs.
DESCRIPTION: The GET API was returning a different response from what was specified in the specs.
RESOLUTION: Changed the response of the GET vm and fs APIs to match the specs. After this change, client-generated code does not get an error.
* 4124963 (Tracking ID: 4127170)
SYMPTOM: Modifying the system list for a service group would fail when a dependency existed.
DESCRIPTION: The API to modify the system list for a service group would fail when the service group had a dependency on another service group, so the system list could not be modified in that case.
RESOLUTION: The API code has been modified so that the system list can be modified even when a service group dependency exists.
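The VRTSspt scripts above (incidents 4139975 and 4146957) work against the rpm changelogs that several components in this patch now ship (see the VEKI/GMS/GLM changelog incidents). The raw changelog that such a script consumes can also be inspected directly with standard rpm tooling, for example:
# rpm -q --changelog VRTSvxvm | head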
* 4124964 (Tracking ID: 4127167)
SYMPTOM: DELETE rvg was failing when replication was in progress.
DESCRIPTION: DELETE rvg was failing when replication was in progress, so the deletion was not possible without first stopping replication.
RESOLUTION: The -force option is now used in the deletion so that the rvg is deleted successfully even while replication is in progress. A new 'online' option was also added to PATCH of rvg so that the user can explicitly request an online add-volume operation.
* 4124966 (Tracking ID: 4127171)
SYMPTOM: The Systems API returned 'nodelist' instead of 'nodename' in the href for excluded disks; when the user performed a GET on that link, the request would fail.
DESCRIPTION: The GET system list API was returning wrong reference links for excluded disks, so a GET on such a link would fail.
RESOLUTION: The GET system API now returns the correct href for excluded disks.
* 4124968 (Tracking ID: 4127168)
SYMPTOM: A GET request on rvgs did not correctly list all data volumes in the RVGs.
DESCRIPTION: The command used to fetch the list of data volumes of an rvg was not returning all data volumes, so the API did not return all the data volumes of the rvg.
RESOLUTION: Changed the command used to get the data volumes of an rvg. A GET on an rvg now returns all the data volumes associated with that rvg.
* 4125162 (Tracking ID: 4127169)
SYMPTOM: The GET disks API fails when CVM is down on any node.
DESCRIPTION: When a node was out of the cluster from CVM, the GET disks API failed and did not give proper output.
RESOLUTION: Used the appropriate checks to get the proper list of disks from the GET disks API.
Patch ID: VRTSpython-3.9.16 P07
* 4179488 (Tracking ID: 4179487)
SYMPTOM: Multiple vulnerable third-party modules need upgrading, and unused files in the .pyenv directory under VRTSpython need cleanup for IS 8.0.2.
DESCRIPTION: Multiple third-party modules used by VRTSpython have reported vulnerabilities, and the .pyenv directory contains unused files.
RESOLUTION: Upgraded the vulnerable third-party modules and cleaned up the unused files in the .pyenv directory under VRTSpython for IS 8.0.2.
Patch ID: VRTSsfcpi-8.0.2.1500
* 4115603 (Tracking ID: 4115601)
SYMPTOM: On Solaris, the publisher list is displayed during the InfoScale start, stop, and uninstall processes, and the publisher list displayed during install and upgrade is not unique.
DESCRIPTION: The publisher list is not needed during start, stop, and uninstall, and the list displayed during install and upgrade contains duplicates.
RESOLUTION: Installer code modified to skip the publisher list during the start, stop, and uninstall processes and to display a unique publisher list during install and upgrade.
* 4115707 (Tracking ID: 4126025)
SYMPTOM: While performing a full upgrade of the secondary site, SRL missing and RLINK dissociated errors are observed.
DESCRIPTION: While performing a full upgrade of the secondary site, SRL volumes may be in the recovery state, which leads to a failure when associating the SRL volume with the RVG.
RESOLUTION: Installer code modified to wait for recovery tasks to complete on all volumes and then proceed with associating the SRL with the RVG (see the check below).
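For incident 4115707, the recovery activity that the installer now waits for can also be checked manually before the SRL is re-associated; a minimal sketch:
# vxtask list
An empty task list indicates that no recovery or resynchronization tasks are still running on the node.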
* 4115874 (Tracking ID: 4124871)
SYMPTOM: Configuration of vxfen fails for a three-node cluster on VMs in different AZs.
DESCRIPTION: Configuration of vxfen fails for a three-node cluster on VMs in different availability zones.
RESOLUTION: The installer now adds the set-strictsrc 0 tunable to the llttab file (see the sketch following this group of incidents).
* 4116368 (Tracking ID: 4123645)
SYMPTOM: During rolling-upgrade response file creation, CPI asks to unmount the VxFS filesystem.
DESCRIPTION: During rolling-upgrade response file creation, CPI asks to unmount the VxFS filesystem, which is unnecessary.
RESOLUTION: Installer code modified to exclude the VxFS file system unmount process.
* 4116406 (Tracking ID: 4123654)
SYMPTOM: The installer gives a null swap space message.
DESCRIPTION: The installer gives a null swap space message; the swap space requirement is no longer needed.
RESOLUTION: Installer code modified to remove the swap space message.
* 4116879 (Tracking ID: 4126018)
SYMPTOM: During addnode, the installer fails to mount resources.
DESCRIPTION: During addnode, the installer fails to mount resources because the new node system was not added to the child service groups of the resources.
RESOLUTION: Installer code modified to add the new node to all service groups.
* 4116995 (Tracking ID: 4123657)
SYMPTOM: The cluster protocol version is not upgraded after a full upgrade.
DESCRIPTION: While the installer performs a full upgrade, the cluster protocol version is not upgraded once the full upgrade completes.
RESOLUTION: Installer code modified to retry the upgrade so that the cluster protocol version upgrade completes.
* 4117956 (Tracking ID: 4104627)
SYMPTOM: The installer supports a maximum of 5 patches; the user cannot provide more than 5 patches for installation.
DESCRIPTION: The latest bundle package installer supports a maximum of 5 patches, so the user cannot provide more than 5 patches for installation.
RESOLUTION: The installer code is modified to support a maximum of 10 patches for installation.
* 4121961 (Tracking ID: 4123908)
SYMPTOM: The installer does not register InfoScale hosts to the VIOM Management Server after InfoScale configuration.
DESCRIPTION: The installer does not register InfoScale hosts to the VIOM Management Server after InfoScale configuration.
RESOLUTION: The installer is enhanced to register InfoScale hosts to the VIOM Management Server by using both the menu-driven program and the responsefile. To register InfoScale hosts to the VIOM Management Server by using a responsefile, the $CFG{gendeploy_path} parameter must be used. The value of $CFG{gendeploy_path} is the absolute path of the gendeploy script on the local node.
* 4122442 (Tracking ID: 4122441)
SYMPTOM: When the product is started through the response file, the installer displays keyless licensing information on screen.
DESCRIPTION: When the product is started through the response file, the installer displays keyless licensing information on screen.
RESOLUTION: Licensing code modified to skip the licensing information during the product start process.
* 4122749 (Tracking ID: 4122748)
SYMPTOM: On Linux, the had service fails to start during a rolling upgrade from InfoScale 7.4.1 or lower to a higher InfoScale version.
DESCRIPTION: The VCS protocol version is supported from InfoScale 7.4.2 onwards. During a rolling upgrade from 7.4.1 or lower to a higher InfoScale version, due to wrong release matrix data, the installer tried to perform a single-phase rolling upgrade instead of a two-phase rolling upgrade, and the had service failed to start.
RESOLUTION: The installer is enhanced to perform a two-phase rolling upgrade if the installed InfoScale version is 7.4.1 or older.
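For incident 4115874, the installer now writes the set-strictsrc 0 directive into /etc/llttab during configuration. On an already-configured cluster, the equivalent manual step would look like the following sketch; apply it on each node during a maintenance window (the systemd service name is assumed to be llt here):
# echo "set-strictsrc 0" >> /etc/llttab
# systemctl restart llt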
* 4126470 (Tracking ID: 4130003)
SYMPTOM: The installer failed to start vxfs_replication while configuring Enterprise on OEL 9.2.
DESCRIPTION: The installer failed to start vxfs_replication while configuring Enterprise on OEL 9.2.
RESOLUTION: Installer code modified to retry starting vxfs_replication during Enterprise configuration.
* 4127111 (Tracking ID: 4127117)
SYMPTOM: On a Linux system, the InfoScale installer configures the GCO (Global Cluster Option) only with a virtual IP address.
DESCRIPTION: On a Linux system, it should be possible to configure the GCO (Global Cluster Option) with a hostname by using the InfoScale installer on different cloud platforms.
RESOLUTION: The installer now prompts for the hostname to configure the GCO.
* 4130377 (Tracking ID: 4131703)
SYMPTOM: The installer performs dmp include/exclude operations if /etc/vx/vxvm.exclude is present on the system.
DESCRIPTION: The installer performs dmp include/exclude operations if /etc/vx/vxvm.exclude is present on the system, which is not required.
RESOLUTION: Removed the unnecessary dmp include/exclude operations that were launched after starting services in the container environment.
* 4131315 (Tracking ID: 4131314)
SYMPTOM: Environment="VCS_ENABLE_PUBSEC_LOG=0" is added by CPI in the Install section of the service file instead of the Service section.
DESCRIPTION: Environment="VCS_ENABLE_PUBSEC_LOG=0" is added by CPI in the Install section of the service file instead of the Service section.
RESOLUTION: Environment="VCS_ENABLE_PUBSEC_LOG=0" is now added in the Service section of the service file.
* 4131684 (Tracking ID: 4131682)
SYMPTOM: On SunOS, the installer prompts the user to install the 'bourne' package if it is not available.
DESCRIPTION: The installer had a dependency on '/usr/sunos/bin/sh', which is from the 'bourne' package; the 'bourne' package is deprecated with the latest SRUs.
RESOLUTION: The installer code is updated to use '/usr/bin/sh' instead of '/usr/sunos/bin/sh', removing the bourne package dependency.
* 4132411 (Tracking ID: 4139946)
SYMPTOM: Rolling upgrade fails if the recommended upgrade path is not followed.
DESCRIPTION: Rolling upgrade fails if the recommended upgrade path is not followed.
RESOLUTION: Installer code fixed to resolve rolling upgrade issues when the recommended upgrade path is not followed.
* 4133019 (Tracking ID: 4135602)
SYMPTOM: The installer failed to update the main.cf file with the VCS user while reconfiguring a secured cluster to a non-secured cluster.
DESCRIPTION: The installer failed to update the main.cf file with the VCS user while reconfiguring a secured cluster to a non-secured cluster.
RESOLUTION: Installer code checks modified to update the VCS user in the main.cf file during reconfiguration of a cluster from secured to non-secured.
* 4133469 (Tracking ID: 4136432)
SYMPTOM: The installer failed to add a node to a node running a higher InfoScale version.
DESCRIPTION: The installer failed to add a node to a node running a higher InfoScale version.
RESOLUTION: Installer code modified to enable adding a node to a node running a higher InfoScale version.
* 4135015 (Tracking ID: 4135014)
SYMPTOM: The CPI installer asks "Would you like to install InfoScale" after "./installer -precheck" is done.
DESCRIPTION: The CPI installer asks "Would you like to install InfoScale" after "./installer -precheck" is done; it should not ask for installation once the precheck is completed.
RESOLUTION: Installer code modified to skip the installation question after the precheck is completed.
* 4136211 (Tracking ID: 4139940)
SYMPTOM: The installer failed to get the package version and failed due to a PADV mismatch.
DESCRIPTION: The installer failed to get the package version and failed due to a PADV mismatch.
RESOLUTION: Installer code modified to retrieve the proper package version.
* 4139609 (Tracking ID: 4142877)
SYMPTOM: The missing HF list is not displayed during an upgrade by using the patch release.
DESCRIPTION: The missing HF list is not displayed during an upgrade by using the patch release.
RESOLUTION: Added prechecks in the installer to identify missing HFs and accept an action from the customer.
* 4140512 (Tracking ID: 4140542)
SYMPTOM: The installer failed to perform a rolling upgrade for the patch.
DESCRIPTION: The rolling upgrade failed in the patch installer for the build version during the mixed-RU check.
RESOLUTION: Installer code modified to handle the build version during the mixed-RU check.
* 4157440 (Tracking ID: 4158841)
SYMPTOM: The installer needs to support VRTSrest version changes.
DESCRIPTION: The installer now supports VRTSrest version changes.
RESOLUTION: The installer code has been modified to enable support for VRTSrest version changes.
* 4157696 (Tracking ID: 4157695)
SYMPTOM: During an upgrade from IS 7.4.2 U7 to IS 8.0.2, the VRTSpython version upgrade fails.
DESCRIPTION: During an upgrade from IS 7.4.2 U7 to IS 8.0.2, the VRTSpython version upgrade fails.
RESOLUTION: Validation changes introduced for the VRTSpython package version.
* 4158650 (Tracking ID: 4164760)
SYMPTOM: The installer does not check the DVD package version against the available patch package version.
DESCRIPTION: The installer does not check the DVD package version against the available patch package version, and therefore fails to install the latest packages.
RESOLUTION: The installer code has been modified to check the DVD package version against the available patch package version so that the latest packages are installed.
* 4159940 (Tracking ID: 4159942)
SYMPTOM: The installer updates existing file permissions.
DESCRIPTION: The installer updates existing file permissions, which it should not do.
RESOLUTION: The installer code has been modified so that it does not update existing file permissions.
* 4161937 (Tracking ID: 4160983)
SYMPTOM: On Solaris, after upgrading InfoScale to an ABE, if the current BE is booted, the vxfs modules do not load properly.
DESCRIPTION: On Solaris, the vxfs modules were getting removed from the current BE while upgrading InfoScale to an ABE.
RESOLUTION: Installer code modified.
* 4164945 (Tracking ID: 4164958)
SYMPTOM: The installer makes entries in the EO tunable config files for the security patch.
DESCRIPTION: The installer did not check the package version for the security patch, which caused entries to be made in the EO tunable config files.
RESOLUTION: The installer code has been modified to check the package version for the security patch before making entries in the EO tunable config files.
* 4165118 (Tracking ID: 4171259)
SYMPTOM: The installer fails to add a new node to the cluster due to a protocol version mismatch.
DESCRIPTION: The installer fails to add a new node to the cluster due to a protocol version mismatch.
RESOLUTION: The installer code has been modified to allow the node to be added to the cluster (the add-node entry point is shown below).
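The add-node fixes above (incidents 4116879, 4133469, and 4165118) all concern the installer's add-node flow, which is invoked through the standard CPI option; a minimal sketch:
# ./installer -addnode
The installer then prompts for the cluster and the new node details interactively.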
* 4165727 (Tracking ID: 4165726)
SYMPTOM: An error message such as "A more recent version of InfoScale Availability, 8.0.2.*, is already installed" appears when the user tries to upgrade the patch on GA using RU, and when the user tries to upgrade the product from InfoScale Availability to InfoScale Enterprise.
DESCRIPTION: The upgrade is incorrectly blocked with the "more recent version is already installed" message in both scenarios.
RESOLUTION: Installer code modified.
* 4165730 (Tracking ID: 4165726)
SYMPTOM, DESCRIPTION, and RESOLUTION: same as incident 4165727 above.
* 4165840 (Tracking ID: 4165833)
SYMPTOM: Unable to install InfoScale packages on Solaris using an IPS repository.
DESCRIPTION: The InfoScale installer on Solaris did not support IPS repository-based installation. This feature has now been added to InfoScale on Solaris.
RESOLUTION: The InfoScale installer is modified to support IPS repositories on Solaris. Use the new "-ipsrepo" option to define the IPS repository path or the repository name for performing an IPS-based installation on Solaris.
* 4166659 (Tracking ID: 4171256)
SYMPTOM: The installer does not allow upgrading a VVR host whose RVG is in the primary role.
DESCRIPTION: The installer does not allow upgrading a VVR host whose RVG is in the primary role.
RESOLUTION: The installer code has been modified to relax the VVR RVG role check during upgrade.
* 4166980 (Tracking ID: 4166979)
SYMPTOM: The VMwareDisks agent is unable to start and run after an upgrade.
DESCRIPTION: The VMwareDisks agent is unable to start and run after an upgrade.
RESOLUTION: Installer code modified.
* 4167308 (Tracking ID: 4171253)
SYMPTOM: IS 8.0.2U3: CPI does not ask to set the EO tunable in the case of an InfoScale upgrade.
DESCRIPTION: The InfoScale installer CPI does not ask to set the EO tunable in the case of an InfoScale upgrade.
RESOLUTION: The InfoScale installer is modified.
* 4177618 (Tracking ID: 4184454)
SYMPTOM: dbed-related checks were missing from the installer.
DESCRIPTION: The installer code is changed to add dbed-related checks.
RESOLUTION: Installer code modified.
* 4178007 (Tracking ID: 4177807)
SYMPTOM: The message for CPC fencing not writing to the env file /etc/vxenviron needed to be changed.
DESCRIPTION: The message for CPC fencing not writing to the env file /etc/vxenviron is changed.
RESOLUTION: Installer code modified.
* 4181039 (Tracking ID: 4181037)
SYMPTOM: The etc/vx/vxdbed/dbedenv file needed to be made accessible.
DESCRIPTION: The etc/vx/vxdbed/dbedenv file is made accessible.
RESOLUTION: Installer code modified.
* 4181282 (Tracking ID: 4181279)
SYMPTOM: The user was trying to install rpm packages via yum; after installation, configuring a cluster fails.
DESCRIPTION: The user was trying to install rpm packages via yum.
After installation, configuring a cluster fails.
RESOLUTION: Installer code modified.
* 4181787 (Tracking ID: 4181786)
SYMPTOM: In the network interface configuration file, the interface method (bootproto) is changed from dhcp when LLT over UDP is configured.
DESCRIPTION: If a public (dhcp) interface is used for configuring LLT over UDP, CPI changes the interface method to manual.
RESOLUTION: Installer code modified.
* 4184438 (Tracking ID: 4186642)
SYMPTOM: The installer asks the VIOM registration question in the case of the start option.
DESCRIPTION: The installer asks the VIOM registration question in the case of the start option.
RESOLUTION: The installer code has been modified to not ask VIOM registration questions in the case of the start option.
Patch ID: VRTSvlic-4.01.802.002
* 4173483 (Tracking ID: 4173483)
SYMPTOM: Security vulnerability in the SLIC component.
DESCRIPTION: Security vulnerability in SLIC component version 3.5.
RESOLUTION: Upgraded the SLIC component to 3.7.
Patch ID: VRTSsfmh-vom-HF0802551
* 4189545 (Tracking ID: 4189544)
SYMPTOM: N/A
DESCRIPTION: VIOM 8.0.2.551 VRTSsfmh package for InfoScale 8.0.2 Update releases.
RESOLUTION: N/A
Patch ID: VRTSdbac-8.0.2.1600
* 4161967 (Tracking ID: 4157901)
SYMPTOM: vcsmmconfig.log does not show file permissions 600 if the EO tunable VCS_ENABLE_PUBSEC_LOG_PERM is set to 0.
DESCRIPTION: vcsmmconfig.log does not show file permissions 600 if the EO tunable VCS_ENABLE_PUBSEC_LOG_PERM is set to 0.
RESOLUTION: Changes made to set the file permission of vcsmmconfig.log as per the EO tunable VCS_ENABLE_PUBSEC_LOG_PERM.
* 4182976 (Tracking ID: 4182977)
SYMPTOM: Veritas InfoScale does not support SLES15SP6.
DESCRIPTION: Veritas InfoScale does not support SLES15SP6.
RESOLUTION: Veritas InfoScale support for SLES15SP6 is now introduced.
Patch ID: VRTSdbac-8.0.2.1300
* 4153146 (Tracking ID: 4153140)
SYMPTOM: Qualification of Veritas InfoScale Availability on the latest kernels for the RHEL/SLES platforms is needed.
DESCRIPTION: Recompilation of the Veritas InfoScale Availability packages with the latest changes was needed.
RESOLUTION: Recompiled the Veritas InfoScale Availability packages with the latest changes and confirmed qualification on the latest kernels of the RHEL/SLES platforms.
Patch ID: VRTSdbac-8.0.2.1200
* 4133167 (Tracking ID: 4131368)
SYMPTOM: Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 5 (SLES 15 SP5).
DESCRIPTION: Veritas InfoScale Availability does not support SUSE Linux Enterprise Server versions released after SLES 15 SP4.
RESOLUTION: Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP5 is now introduced.
Patch ID: VRTSdbac-8.0.2.1100
* 4133133 (Tracking ID: 4133130)
SYMPTOM: Veritas InfoScale Availability does not qualify the latest SLES15 kernels.
DESCRIPTION: Veritas InfoScale Availability qualification for the latest SLES15 kernels has been provided.
RESOLUTION: Veritas InfoScale Availability qualifies the latest SLES15 kernels.
Patch ID: VRTSvcsea-8.0.2.2300
* 4189548 (Tracking ID: 4189547)
SYMPTOM: Invalid details were reported while executing a fire drill of the Oracle agent with Oracle 21c.
DESCRIPTION: In the Oracle fire drill scenario, the Filesystem column entry in the output of df -k was supposed to be compared with "/" but was instead compared with a different value ($fs).
RESOLUTION: $basedir, i.e. the Filesystem column entry of df -k, is now compared with "/".
Patch ID: VRTSvcsea-8.0.2.2100
* 4180094 (Tracking ID: 4180091)
SYMPTOM: The offline script times out due to the delay introduced by the fuser check.
DESCRIPTION: The ASMDG resource times out while offlining under VCS, although the offline completes quickly outside of VCS control. After executing the query "alter diskgroup <DISKGROUPS> dismount;", the offline script runs a fuser check on the underlying disks to see whether any device is still in use, and hangs.
RESOLUTION: Altered the SQL query that builds the list of disks on which to run the fuser check, and corrected the operator precedence in the SQL statement.
Patch ID: VRTSvcsea-8.0.2.1400
* 4058775 (Tracking ID: 4073508)
SYMPTOM: The Oracle virtual fire drill fails because the Oracle password file location changed from Oracle version 21c.
DESCRIPTION: The Oracle password file has moved to $ORACLE_BASE/dbs from Oracle version 21c.
RESOLUTION: Environment variables are used to point to the updated path of the password file. From Oracle 21c onwards, it is mandatory for a client to configure the .env file path in the EnvFile attribute. This file must have the ORACLE_BASE path added for the Oracle virtual fire-drill feature to work. Sample EnvFile content with the ORACLE_BASE path for Oracle 21c:
[root@inaqalnx013 Oracle]# cat /opt/VRTSagents/ha/bin/Oracle/envfile
ORACLE_BASE="/u02/app/oracle/product/21.0.0/dbhome_1/"; export ORACLE_BASE;
Sample attribute value: EnvFile = "/opt/VRTSagents/ha/bin/Oracle/envfile"
Patch ID: VRTSamf-8.0.2.2100
* 4161436 (Tracking ID: 4161644)
SYMPTOM: The system panics when VCS has the AMF module enabled.
DESCRIPTION: The panic indicates that after amf_prexec_hook extracted an argument longer than 4K, which spans two pages, it read a third page; that is illegal because all arguments are loaded in two pages.
RESOLUTION: AMF now continues to extract arguments from its internal buffer before moving to the next page.
* 4162305 (Tracking ID: 4168084)
SYMPTOM: The system panics when VCS has the AMF module enabled to monitor a mount point.
DESCRIPTION: AMF called a sleepable function while holding a spin lock for a mount point event, resulting in a system panic.
RESOLUTION: A busy flag is used to synchronize multiple threads so that the spin lock can be released.
* 4182736 (Tracking ID: 4182737)
SYMPTOM: Veritas InfoScale does not support SLES15SP6.
DESCRIPTION: Veritas InfoScale does not support SLES15SP6.
RESOLUTION: Veritas InfoScale support for SLES15SP6 is now introduced.
Patch ID: VRTSamf-8.0.2.1400
* 4137600 (Tracking ID: 4136003)
SYMPTOM: A cluster node panics when VCS has the AMF module enabled to monitor process on/off events.
DESCRIPTION: The panic indicates that the AMF module overran into a user-space buffer while analyzing an argument of 8K size. The AMF module cannot load that length of data into its internal buffer, which eventually misled it into accessing the user buffer; that access is not allowed when kernel SMAP is in effect.
RESOLUTION: The AMF module is constrained to ignore arguments of 8K or larger to avoid the internal buffer overrun.
Patch ID: VRTSamf-8.0.2.1300
* 4137165 (Tracking ID: 4131368)
SYMPTOM: Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 5 (SLES 15 SP5).
DESCRIPTION: Veritas InfoScale Availability does not support SUSE Linux Enterprise Server versions released after SLES 15 SP4.
RESOLUTION: Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP5 is now introduced.
Patch ID: VRTSamf-8.0.2.1200
* 4133131 (Tracking ID: 4133130)
SYMPTOM: Veritas InfoScale Availability does not qualify the latest SLES15 kernels.
DESCRIPTION: Veritas InfoScale Availability qualification for the latest SLES15 kernels has been provided.
RESOLUTION: Veritas InfoScale Availability qualifies the latest SLES15 kernels.
Patch ID: VRTSgab-8.0.2.2100
* 4182383 (Tracking ID: 4182384)
SYMPTOM: Veritas InfoScale Availability does not support SLES15SP6.
DESCRIPTION: Veritas InfoScale Availability does not support SLES15SP6.
RESOLUTION: Veritas InfoScale Availability support for SLES15SP6 is now introduced.
Patch ID: VRTSgab-8.0.2.1400
* 4153142 (Tracking ID: 4153140)
SYMPTOM: Qualification of Veritas InfoScale Availability on the latest kernels for the RHEL/SLES platforms is needed.
DESCRIPTION: Recompilation of the Veritas InfoScale Availability packages with the latest changes was needed.
RESOLUTION: Recompiled the Veritas InfoScale Availability packages with the latest changes and confirmed qualification on the latest kernels of the RHEL/SLES platforms.
Patch ID: VRTSgab-8.0.2.1300
* 4137164 (Tracking ID: 4131368)
SYMPTOM: Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 5 (SLES 15 SP5).
DESCRIPTION: Veritas InfoScale Availability does not support SUSE Linux Enterprise Server versions released after SLES 15 SP4.
RESOLUTION: Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP5 is now introduced.
Patch ID: VRTSgab-8.0.2.1200
* 4133132 (Tracking ID: 4133130)
SYMPTOM: Veritas InfoScale Availability does not qualify the latest SLES15 kernels.
DESCRIPTION: Veritas InfoScale Availability qualification for the latest SLES15 kernels has been provided.
RESOLUTION: Veritas InfoScale Availability qualifies the latest SLES15 kernels.
Patch ID: VRTSvcsag-8.0.2.2300
* 4189572 (Tracking ID: 4188318)
SYMPTOM: Reproduction steps: 1) run hastart on the node; 2) the KVM agent OPEN entry point is called, and if the environment is invalid, the KVM VCS resource is put into the UNKNOWN state and an invalid-environment file is created; 3) the user corrects the environment; 4) the resource is probed. The resource state does not change, because the invalid-environment file is still present and the user has to remove it manually.
DESCRIPTION: Same as the symptom above.
RESOLUTION: The agent monitor is enhanced to automatically remove the invalid-environment file if the environment is valid.
* 4189590 (Tracking ID: 4075950)
SYMPTOM: When an IPv6 VIP switches from node1 to node2 in a cluster, it takes a long time for the neighboring information to be updated and for traffic to reach node2 on the reassigned address.
DESCRIPTION: After the service group switches from node1 to node2, the IPv6 VIP is not reachable from the network switch. The MAC address changes after the node switch, but the network is not updated. Just as IPv4 VIPs use gratuitous ARP, when an IPv6 VIP switches from node1 to node2 the network must be updated with the MAC address change.
RESOLUTION: Network devices that communicate with the VIP are unable to establish a connection with it until the VIP is pinged from the switch or the 'ip -6 neighbor flush all' command is run on the cluster nodes. Neighbor-flush logic has been added to the IP/MultiNIC agents so that the MAC address change during a floating VIP switchover is propagated to the network.
* 4189594 (Tracking ID: 4189392)
SYMPTOM: Earlier, the GCP Disk agent did not support attaching a regional disk in read-only mode to more than one instance.
DESCRIPTION: The GoogleDisk agent now supports attaching GCP regional disks in read-only (RO) mode to multiple instances, both inside and outside the cluster, provided they are in the same region. This adds flexibility for scenarios requiring simultaneous read access, while read-write (RW) mode continues to be restricted to one instance at a time.
RESOLUTION: Changes have been made in the GCP disk agent to recognize and support multi-attach of regional disks in read-only (RO) mode across multiple instances.
Patch ID: VRTSvcsag-8.0.2.2100
* 4149272 (Tracking ID: 4164374)
SYMPTOM: The VCS DNS agent monitor times out if multiple DNS servers are added as stealth masters and a few of them hang.
DESCRIPTION: The VCS monitor calls the nsupdate and dig commands sequentially for each DNS server. The default monitor routine timeout (60s) is not enough to complete the nsupdate and dig calls for all servers, as nsupdate has a minimum timeout of 20s. So, if more than 3 DNS servers are configured in the environment and 3 of them are in a hung state, the monitor routine times out and fails over the resource even though the 4th DNS server might be working fine.
RESOLUTION: Changes are made to call nsupdate for all DNS servers in parallel, with similar changes for the dig command.
* 4162659 (Tracking ID: 4162658)
SYMPTOM: The LVMVolumeGroup resource fails to offline/clean in a cloud environment after a path failure.
DESCRIPTION: If a disk is detached, the LVMVolumeGroup resource cannot be failed over.
RESOLUTION: Implemented the PanicSystemOnVGLoss attribute (see the sample command after this patch section). 0 - Default value and behaviour; does not fail over (does not halt the system). 1 - Halt the system if deactivation of the volume group fails. 2 - Do not halt the system; allow failover (note the risk of data corruption).
* 4162753 (Tracking ID: 4142040)
SYMPTOM: While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs/conf/config/types.cf' file on Veritas Cluster Server (VCS) might be incorrectly updated.
DESCRIPTION: While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs/conf/config/types.cf' file might be incorrectly updated. In some instances, the user might be told to manually copy '/etc/VRTSvcs/conf/types.cf' over the existing '/etc/VRTSvcs/conf/config/types.cf' file. The message "Implement /etc/VRTSvcs/conf/types.cf to utilize resource type updates" needed to be removed when updating the VRTSvcsag rpm.
RESOLUTION: To ensure that the '/etc/VRTSvcs/conf/config/types.cf' file is updated correctly following VRTSvcsag updates, the script user_trigger_update_types can be manually triggered by the user. The following message displays: Leaving existing /etc/VRTSvcs/conf/config/types.cf configuration file unmodified Copy /opt/VRTSvcs/bin/sample_triggers/VRTSvcs/user_trigger_update_types to /opt/VRTSvcs/bin/triggers To manually update the types.cf, execute the command "hatrigger -user_trigger_update_types 0"
* 4177815 (Tracking ID: 4175426)
SYMPTOM: The VMwareDisk agent takes a long time to fail over.
DESCRIPTION: The VMwareDisk agent waits for a return from vxdisk even when the VxVM package is not installed (for example, when the customer has only Availability configured), which delays failover.
RESOLUTION: The agent now verifies 'vxdctl mode' and queries vxdisk only if VxVM is enabled.
* 4180582 (Tracking ID: 4180581)
SYMPTOM: The agent is unable to detach the IPv6 address from a node when the node faults, and is hence unable to attach it on another node.
DESCRIPTION: For IPv6, in case of a kernel panic, the system faults. During the online operation on the AWSIP resource on the failover node, a stale entry is not cleared from the faulted node's network interface, so the AWSIP agent fails to fail over.
RESOLUTION: Code changes are done to remove the stale entry from the faulted node's network interface.
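The PanicSystemOnVGLoss attribute introduced above (incident 4162659) can be set per resource with the standard VCS commands. A minimal sketch; the resource name 'lvmvg_res' is illustrative:
# haconf -makerw
# hares -modify lvmvg_res PanicSystemOnVGLoss 1
# haconf -dump -makero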
Patch ID: VRTSvcsag-8.0.2.1500
* 4157581 (Tracking ID: 4157580)
SYMPTOM: Security vulnerabilities have been identified in the current version of the third-party component OpenSSL, which is utilized by VCS.
DESCRIPTION: There are security vulnerabilities present in the current version of the third-party component OpenSSL that is utilized by VCS.
RESOLUTION: VCS is updated to use newer versions of OpenSSL in which the security vulnerabilities have been addressed.
Patch ID: VRTSvcsag-8.0.2.1400
* 4114880 (Tracking ID: 4152700)
SYMPTOM: When a Private DNS Zone resource ID is passed, the AzureDNSZone agent returns an error saying that the resource cannot be found.
DESCRIPTION: Azure Private DNS Zone is not supported with the AzureDNSZone agent.
RESOLUTION: The Azure Private DNS Zone is supported by the AzureDNSZone agent by installing the Azure library for Private DNS Zone (azure-mgmt-privatedns), whose functions can be utilized for Private DNS zone operations. The resource ID differs between Public and Private DNS zones; it is parsed and the resource type is checked to determine whether it is a Public or Private DNS zone, and the corrective actions are taken accordingly.
* 4135534 (Tracking ID: 4152812)
SYMPTOM: The AWS EBSVol agent takes a long time to perform online and offline operations on resources.
DESCRIPTION: When a large number of AWS EBSVol resources are configured, it takes a long time to perform online and offline operations on these resources. EBSVol is a single-threaded agent, which prevents parallel execution of attach and detach EBS volume commands.
RESOLUTION: To resolve the issue, the default value of the 'NumThreads' attribute of the EBSVol agent is changed from 1 to 10, and the agent is enhanced to use a locking mechanism to avoid conflicting resource configuration. This improves response time through parallel execution of attach and detach commands. Also, the default value of the MonitorTimeout attribute is changed from 60 to 120, which avoids a timeout of the monitor entry point when the response of the AWS CLI/server is unexpectedly slow (see the verification commands below).
* 4137215 (Tracking ID: 4094539)
SYMPTOM: The MonitorProcesses argument in the resource ArgListValues passed to the agent (bundled Application agent) incorrectly removes a needed extra space from the monitored process, as found via the recommended CLI process test.
DESCRIPTION: The extra space in the ArgListValues under MonitorProcesses even shows up when displaying the resource.
RESOLUTION: For the monitored process (not program), only leading and trailing spaces are removed; extra spaces between words are preserved.
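A quick way to confirm the new EBSVol agent defaults described above (incident 4135534), using standard VCS type-attribute queries:
# hatype -value EBSVol NumThreads
# hatype -value EBSVol MonitorTimeout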
If "PingOptimize = 2" then only it will do PING test or else it will work as per previous design. * 4137377 (Tracking ID: 4113151) SYMPTOM: Dependent DiskGroupAgent fails to get its resource online due to disk group import failure. DESCRIPTION: VMwareDisksAgent reports its resource online just after VMware disk is attached to virutal machine, if dependent DiskGroup resource starts to online at the moment it must fail because VMware disk is not yet present into vxdmp database due to VxVM transaction latency. Customer used to add retry times to work around this problem but cannot apply the same to every environment. RESOLUTION: Added a finite period of wait for VMware disk is present into vxdmp database before online is complete. * 4137602 (Tracking ID: 4121270) SYMPTOM: EBSvol agent error in attach disk : RHEL 7.9 + Infoscale 8.0 on AWS instance type c6i.large with NVME devices. DESCRIPTION: After attaching volume to instance its taking some time to update its device mapping in system. Due to which if we run lsblk -d -o +SERIAL immediately after attaching volume then its not showing that volume details in output. Due to which $native_device was getting blank/uninitialized. So, we need to wait for some time to get device mapping updated in system. RESOLUTION: We have added logic to retry once for same command after some interval if in first run we didnt find expected volume device mapping. And now its properly updating attribute NativeDevice. * 4137618 (Tracking ID: 4152886) SYMPTOM: AWSIP agent fails to bring OverlayIP resources online and offline on the instances in a VPC that is shared across multiple AWS accounts. DESCRIPTION: When VPC is shared across multiple AWS accounts, route table associated with the subnets is exclusively owned by the owner account. AWS restricts the modification in the route table from any other account. When AWSIP agent tries to bring OverlayIP resource online on the Instance owned by a different account, may not have privileges to update the route table. In such cases, AWSIP agent fails to edit the route table, and fails to bring OverlayIP resource online and offline. RESOLUTION: To support cross-account deployment, assign appropriate privileges on shared resources. Create an AWS profile to grant permissions to update Route Table of VPC through different nodes belonging to different AWS accounts. This profile is used to update route tables accordingly. A new attribute "Profile" is introduced in AWSIP agent. Use this attribute to configure the above created profile. * 4143918 (Tracking ID: 4152815) SYMPTOM: AWS EBS Volume in-use with other AWS instance is getting used by cluster nodes through AWS EBSVol agent. DESCRIPTION: AWS EBS Volume which is attached to AWS instance that is not part of cluster is getting attach to the node of cluster during online event. Instead, Unable to detach volume' message should be logged in log file as volume is already in use by another AWS instance in AWS EBSVol agent. RESOLUTION: AWS EBSVol agent is enhanced to avoid attachment of in-use EBS volumes whose instances are not part of cluster. Patch ID: VRTSvcsag-8.0.2.1200 * 4130206 (Tracking ID: 4127320) SYMPTOM: The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin. DESCRIPTION: The agent fails to bring online a resource when the shell for the user is set to /sbin/nologin. RESOLUTION: The ProcessOnOnly agent is enhanced to support the /sbin/nologin shell. 
Patch ID: VRTSllt-8.0.2.2300
* 4189272 (Tracking ID: 4189271)
SYMPTOM: The LLT service is unable to start due to LLT_IRQBALANCE.
DESCRIPTION: Depending on the irqbalance and hpe_irqbalance service status, LLT either applied the irqbalance changes or stopped LLT.
RESOLUTION: There is no need to stop LLT; with LLT_IRQBALANCE corrected, LLT works accordingly.
* 4189571 (Tracking ID: 4167108)
SYMPTOM: This is a code improvement; the user does not experience any functional change.
DESCRIPTION: yield() is replaced with cond_resched().
RESOLUTION: yield() is replaced with cond_resched().
* 4189853 (Tracking ID: 4189566)
SYMPTOM: In an InfoScale FSS environment where LLT links are configured over RDMA, the CVM slave node panics while joining the cluster with the CVM master.
DESCRIPTION: The panic can occur on any running thread, but the system typically crashes with "BUG: unable to handle kernel paging request" or "general protection fault: 0000 [#1] SMP". The fault could come from a stack involving the kmem_cache family of functions.
RESOLUTION: Compare the contents of the /etc/sysconfig/llt file on the nodes and make sure the same tunable settings are in place.
Patch ID: VRTSllt-8.0.2.2100
* 4132209 (Tracking ID: 4124759)
SYMPTOM: A panic happened in llt_ioship_recv on a server running in AWS.
DESCRIPTION: In an AWS environment, packets can be duplicated even though LLT is configured over UDP, where duplication is not expected.
RESOLUTION: To avoid the panic, LLT checks whether the packet is already in the bucket's send queue and treats it as an invalid/duplicate packet.
* 4162744 (Tracking ID: 4139781)
SYMPTOM: The system panics occasionally in the LLT stack where LLT over Ethernet is enabled.
DESCRIPTION: LLT allocates skb memory from its own cache for messages larger than 4K and sets a field of skb_shared_info pointing to an LLT function; it later uses this field to determine whether an skb was allocated from its own cache. When receiving a packet, the OS also allocates an skb from the system cache and does not reset the field before passing the skb to LLT. Occasionally, the stale pointer in memory can mislead LLT into thinking the skb is from its own cache and freeing it with the LLT free API by mistake.
RESOLUTION: LLT uses a hash table to record skbs allocated from its own cache and no longer sets the skb_shared_info field.
* 4166061 (Tracking ID: 4167791)
SYMPTOM: The system panics when LLT loses the heartbeat from a peer node.
DESCRIPTION: When an LLT node received an LLT_NULL packet from a peer node requesting a heartbeat, the LLT xmit lock missed one unlock while handling the LLT_NULL packet. The next incoming LLT packet from the same node tries to acquire the same lock again, incurring a recursive mutex_enter.
RESOLUTION: Release the xmit lock after handling the LLT_NULL packet.
* 4179383 (Tracking ID: 3989372)
SYMPTOM: When the CPU load and memory consumption are high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.
DESCRIPTION: Occasionally, in a VMware environment, the operating system may not schedule LLT contexts on time. Consequently, heartbeats from some of the cluster nodes may be lost, and those nodes may get fenced out. This situation typically occurs when the CPU load or the memory usage is high or when VMDK snapshot or vMotion operations are in progress.
RESOLUTION: This fix attempts to make clusters more resilient to transient issues by heartbeating using threads bound to every vCPU.
* 4182383 (Tracking ID: 4182384)
SYMPTOM: Veritas Infoscale Availability does not support SLES15SP6.
DESCRIPTION: Veritas Infoscale Availability does not support SLES15SP6.
RESOLUTION: Veritas Infoscale Availability support for SLES15SP6 is now introduced.
* 4186647 (Tracking ID: 4186645)
SYMPTOM: An lltdlv hang causes fencing to panic a node due to a transient network issue.
DESCRIPTION: The lltdlv hang is caused by a temporary network issue, and fencing panics the node. This works fine when LLT_IRQBALANCE is enabled; hence the requirement is to enable this tunable by default to prevent panics due to transient issues. LLT irqbalance does not work in conjunction with hpe_irqbalance, so a check for that is added.
RESOLUTION: LLT_IRQBALANCE is enabled by default to make clusters more resilient to transient issues and to avoid fencing panicking the server in such cases (see the sample check below).
Patch ID: VRTSllt-8.0.2.1400
* 4137611 (Tracking ID: 4135825)
SYMPTOM: Once the root file system becomes full during LLT start, the LLT module fails to load forever.
DESCRIPTION: With the disk full, the user rebooted the system or restarted the product. While loading, LLT deletes its links and tries to create new ones using the link names and "/bin/ln -f -s". As the disk is full, it is unable to create the links; even after space is freed, link creation fails because the original names were deleted, so the LLT module fails to load.
RESOLUTION: If the existing links are not present, logic is added to derive the file names needed to create new links.
Patch ID: VRTSllt-8.0.2.1300
* 4132209 (Tracking ID: 4124759)
SYMPTOM: A panic happened in llt_ioship_recv on a server running in AWS.
DESCRIPTION: In an AWS environment, packets can be duplicated even though LLT is configured over UDP, where duplication is not expected.
RESOLUTION: To avoid the panic, LLT checks whether the packet is already in the bucket's send queue and treats it as an invalid/duplicate packet.
* 4137163 (Tracking ID: 4131368)
SYMPTOM: Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 5 (SLES 15 SP5).
DESCRIPTION: Veritas Infoscale Availability does not support SUSE Linux Enterprise Server versions released after SLES 15 SP4.
RESOLUTION: Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP5 is now introduced.
Patch ID: VRTSllt-8.0.2.1200
* 4128886 (Tracking ID: 4128887)
SYMPTOM: The warning trace below is observed while unloading the llt module: [171531.684503] Call Trace: [171531.684505] <TASK> [171531.684509] remove_proc_entry+0x45/0x1a0 [171531.684512] llt_mod_exit+0xad/0x930 [llt] [171531.684533] ? find_module_all+0x78/0xb0 [171531.684536] __do_sys_delete_module.constprop.0+0x178/0x280 [171531.684538] ? exit_to_user_mode_loop+0xd0/0x130
DESCRIPTION: While unloading the llt module, the vxnet/llt proc directory is not removed properly, due to which the warning trace is observed.
RESOLUTION: The proc_remove API is used, which cleans up the whole subtree.
Patch ID: VRTScps-8.0.2.2300
* 4189591 (Tracking ID: 4188652)
SYMPTOM: After configuring the CP server, EO-related errors appear in the CP server logs.
DESCRIPTION: After configuring the CP server, EO-related errors appear in the CP server logs; the code errors out if the flag value is not 0 or 1.
RESOLUTION: Resolved the unnecessary error log message when the tunable value is set to 0.
* 4189990 (Tracking ID: 4189584)
SYMPTOM: Security vulnerabilities exist in the Sqlite third-party component used by VCS.
DESCRIPTION: VCS uses the Sqlite third-party component in which some security vulnerabilities exist.
RESOLUTION: VCS is updated to use a newer version of the Sqlite third-party component in which the security vulnerabilities have been addressed.
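Relating to the LLT_IRQBALANCE entries above (incidents 4189272, 4189853 and 4186647), a minimal sketch for verifying the tunable and its new irqbalance dependency, assuming the standard service names on SLES15 (run the grep on every node; the values must match across nodes):
# grep LLT_IRQBALANCE /etc/sysconfig/llt
LLT_IRQBALANCE=1
# rpm -q irqbalance
# systemctl disable --now hpe_irqbalance    (only if present; it cannot coexist with irqbalance)
# systemctl enable --now irqbalance
# systemctl restart llt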
Patch ID: VRTScps-8.0.2.2100
* 4157581 (Tracking ID: 4157580)
SYMPTOM: Security vulnerabilities have been identified in the current version of the third-party component OpenSSL, which is utilized by VCS.
DESCRIPTION: There are security vulnerabilities present in the current version of the third-party component OpenSSL that is utilized by VCS.
RESOLUTION: VCS is updated to use newer versions of OpenSSL in which the security vulnerabilities have been addressed.
Patch ID: VRTScps-8.0.2.1500
* 4161971 (Tracking ID: 4161970)
SYMPTOM: Security vulnerabilities exist in the Sqlite third-party component used by VCS.
DESCRIPTION: VCS uses the Sqlite third-party component in which some security vulnerabilities exist.
RESOLUTION: VCS is updated to use a newer version of the Sqlite third-party component in which the security vulnerabilities have been addressed.
Patch ID: VRTSdbed-8.0.2.1400
* 4188986 (Tracking ID: 4188985)
SYMPTOM: Checkpoint creation fails for an Oracle database application using dbed if the archive log is set as a directory inside the mount point.
DESCRIPTION: Checkpoint creation fails because fsckptadm does not support directory-level checkpoints.
RESOLUTION: Added a code change to create a checkpoint of the file system containing the archive log directory instead of trying to take a checkpoint of the directory.
Patch ID: VRTSdbed-8.0.2.1300
* 4155837 (Tracking ID: 4137171)
SYMPTOM: Traditionally, file permissions had a fixed default, so there was no control to set file permissions as required.
DESCRIPTION: Per the EO requirement, it was decided to set log file permissions according to the value of the VCS_ENABLE_PUBSEC_LOG_PERM tunable.
RESOLUTION: To address this, the VCS_ENABLE_PUBSEC_LOG_PERM tunable is introduced as part of the VRTSdbed rpm; from here onwards the log file permissions are based on its value: VCS_ENABLE_PUBSEC_LOG_PERM = 0 (default), file permission 0600; VCS_ENABLE_PUBSEC_LOG_PERM = 1, file permission 0640; VCS_ENABLE_PUBSEC_LOG_PERM = 2, file permission 0644; VCS_ENABLE_PUBSEC_LOG_PERM = 3, file permission 0666.
Patch ID: VRTSdbed-8.0.2.1100
* 4153061 (Tracking ID: 4092588)
SYMPTOM: SFAE failed to start with systemd.
DESCRIPTION: SFAE failed to start with systemd because the SFAE service was used in backward-compatibility mode through an init script.
RESOLUTION: Added systemd support for SFAE: the systemctl stop/start/status/restart/enable/disable commands (see the usage sketch below).
Patch ID: VRTSvbs-8.0.2.1200
* 4189595 (Tracking ID: 4188647)
SYMPTOM: A Virtual Business Services instance is created and configured, but none of its operations can be performed; the operation commands hang for a very long time.
DESCRIPTION: The vbsd server was hanging when the SSS service is configured on the Linux platform.
RESOLUTION: User checks are skipped to fix the issue. The VBS service now works as expected on the latest Linux platform.
Patch ID: VRTSvbs-8.0.2.1100
* 4157581 (Tracking ID: 4157580)
SYMPTOM: Security vulnerabilities have been identified in the current version of the third-party component OpenSSL, which is utilized by VCS.
DESCRIPTION: There are security vulnerabilities present in the current version of the third-party component OpenSSL that is utilized by VCS.
RESOLUTION: VCS is updated to use newer versions of OpenSSL in which the security vulnerabilities have been addressed.
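A usage sketch for the SFAE systemd support added above (incident 4153061). The unit name is a placeholder; substitute the SFAE service unit installed on your system:
# systemctl status <sfae-service>
# systemctl enable --now <sfae-service>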
Patch ID: VRTSvxfen-8.0.2.2300
* 4189905 (Tracking ID: 4189906)
SYMPTOM: vxfendisk fails because ksh, by default, overwrites the positional parameters after subsequent scripts are executed inside it.
DESCRIPTION: vxfenswap invokes vxfendisk to list the disks. vxfendisk sources vxfen_scriptlib.sh to set up the environment. Within vxfen_scriptlib.sh, vcs_eo_perm.sh is executed with two parameters. On AIX, due to ksh behavior, this execution overwrites the original positional parameters of vxfendisk, causing unexpected failures.
RESOLUTION: Saved the original positional parameters in vxfendisk before environment setup and restored them afterward (a generic sketch of this pattern follows the vxfen entries below).
Patch ID: VRTSvxfen-8.0.2.2100
* 4156076 (Tracking ID: 4156075)
SYMPTOM: EO changes file permission tunable
DESCRIPTION: EO changes file permission tunable
RESOLUTION: EO changes file permission tunable
* 4156379 (Tracking ID: 4156075)
SYMPTOM: EO changes file permission tunable
DESCRIPTION: EO changes file permission tunable
RESOLUTION: EO changes file permission tunable
* 4166076 (Tracking ID: 4166666)
SYMPTOM: Failed to configure disk-based fencing on RDM-mapped devices from a KVM host to a KVM guest.
DESCRIPTION: While configuring, the read-key buffer exceeded the maximum buffer size in the KVM hypervisor.
RESOLUTION: Reduced the maximum number of keys to 1022 to support reading keys in the KVM hypervisor.
* 4169032 (Tracking ID: 4166666)
SYMPTOM: Failed to configure disk-based fencing on RDM-mapped devices from a KVM host to a KVM guest.
DESCRIPTION: While configuring, the read-key buffer exceeded the maximum buffer size in the KVM hypervisor.
RESOLUTION: Reduced the maximum number of keys to 1022 to support reading keys in the KVM hypervisor.
* 4176111 (Tracking ID: 4176110)
SYMPTOM: vxfentsthdw failed to verify fencing disk compatibility in a KVM environment.
DESCRIPTION: vxfentsthdw failed to read the key buffer, as it exceeds the maximum buffer size in the KVM hypervisor.
RESOLUTION: Added a new macro with a smaller number of keys to support the KVM environment.
* 4177677 (Tracking ID: 4176592)
SYMPTOM: Continuous ERROR messages in the 'vxfen.log' file - "VXFEN already configured" after system startup, despite fencing working correctly.
DESCRIPTION: The vxfen-startup script enters a loop trying to configure the vxfen driver, which is already configured, due to an incorrect exit value. This results in the 'vxfen.log' file being flooded with error messages.
RESOLUTION: Corrected the exit code to ensure the vxfen-startup script exits the loop properly and treats an already-configured vxfen as success.
* 4182722 (Tracking ID: 4182723)
SYMPTOM: Veritas Infoscale does not support SLES15SP6.
DESCRIPTION: Veritas Infoscale does not support SLES15SP6.
RESOLUTION: Veritas Infoscale support for SLES15SP6 is now introduced.
Patch ID: VRTSvxfen-8.0.2.1400
* 4153144 (Tracking ID: 4153140)
SYMPTOM: Qualification of Veritas Infoscale Availability on the latest kernels for the RHEL/SLES platforms is needed.
DESCRIPTION: Recompilation of the Veritas Infoscale Availability packages with the latest changes was needed.
RESOLUTION: Recompiled the Veritas Infoscale Availability packages with the latest changes and confirmed qualification on the latest kernels of the RHEL/SLES platforms.
Patch ID: VRTSvxfen-8.0.2.1300
* 4131369 (Tracking ID: 4131368)
SYMPTOM: Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 5 (SLES 15 SP5).
DESCRIPTION: Veritas Infoscale Availability does not support SUSE Linux Enterprise Server versions released after SLES 15 SP4.
RESOLUTION: Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP5 is now introduced.
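A generic ksh illustration of the fix pattern described in incident 4189905 above; this is a sketch of the technique, not the actual vxfendisk source:
#!/bin/ksh
set -A saved_args "$@"       # save the original positional parameters
# Environment setup that clobbers $1, $2, ... (stands in for sourcing
# vxfen_scriptlib.sh, which executes vcs_eo_perm.sh with two parameters).
set -- tmp1 tmp2
set -- "${saved_args[@]}"    # restore the original positional parameters
print "restored arguments: $*"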
Patch ID: VRTSvxfen-8.0.2.1200
* 4124086 (Tracking ID: 4124084)
SYMPTOM: Security vulnerabilities exist in the Curl third-party components used by VCS.
DESCRIPTION: Security vulnerabilities exist in the Curl third-party components used by VCS.
RESOLUTION: Curl is upgraded to a version in which the security vulnerabilities have been addressed.
* 4125891 (Tracking ID: 4113847)
SYMPTOM: An even number of coordination point (CP) disks is not supported by design. This enhancement is part of AFA, wherein a faulted disk needs to be replaced as soon as the number of coordination disks becomes even while fencing is up and running.
DESCRIPTION: Regular split/network partitioning requires an odd number of disks. Support for an even number of CP disks is provided through cp_count; with cp_count/2+1, fencing is not allowed to come up. Also, if cp_count is not defined in the vxfenmode file, a minimum of 3 CP disks is needed by default, otherwise vxfen does not start (see the sample vxfenmode check below).
RESOLUTION: In the case of an even number of CP disks, another disk is added so that the number of CP disks is odd and fencing keeps running.
* 4125895 (Tracking ID: 4108561)
SYMPTOM: The vxfen print-keys internal utility was not working because of an internal array overrun.
DESCRIPTION: The utility returns garbage values when the number of keys exceeds 8: the array keylist[i].key of 8 bytes is overrun at byte offset 8 using an index that evaluates to 8.
RESOLUTION: Restricted the internal loop to VXFEN_KEYLEN. Reading reservations now works fine.
Patch ID: VRTSvcs-8.0.2.2300
* 4189593 (Tracking ID: 4188662)
SYMPTOM: While performing a VVR rolling upgrade from IS 7.4.2, the application group present on the secondary site went into the FAULTED state after the upgrade.
DESCRIPTION: In types.cf on the upgraded secondary nodes, the newly added attribute EnableSingleWriter was not getting updated, because the trigger checks whether such an attribute already exists by running the command '$haattr -display $type | grep $attr' and skips the update if it does. However, this command also succeeds when the attribute is merely part of RegList.
RESOLUTION: Added a conditional check so that a match coming only from RegList does not cause the update to be skipped.
Patch ID: VRTSvcs-8.0.2.2200
* 4189253 (Tracking ID: 4189252)
SYMPTOM: Security vulnerabilities are present in the existing version of Net-SNMP.
DESCRIPTION: The Net-SNMP component needs to be upgraded to fix security vulnerabilities.
RESOLUTION: The Net-SNMP component is upgraded to fix security vulnerabilities for security patch IS 8.0.2U5_SP2.
Patch ID: VRTSvcs-8.0.2.2100
* 4162755 (Tracking ID: 4136359)
SYMPTOM: When upgrading InfoScale with the latest Public Patch Bundle or VRTSvcsag package, types.cf is updated.
DESCRIPTION: To use new types and attributes (like PanicSystemOnVGLoss), the user needs to copy /etc/VRTSvcs/conf/types.cf to /etc/VRTSvcs/conf/config/types.cf; this copying may fault resources due to types (like HTC) missing from the new types.cf.
RESOLUTION: Implemented a new external trigger to manually update /etc/VRTSvcs/conf/config/types.cf. Follow the post-installation instructions of the VRTSvcsag rpm.
Patch ID: VRTSvcs-8.0.2.1500
* 4157581 (Tracking ID: 4157580)
SYMPTOM: Security vulnerabilities have been identified in the current version of the third-party component OpenSSL, which is utilized by VCS.
DESCRIPTION: There are security vulnerabilities present in the current version of the third-party component OpenSSL that is utilized by VCS.
RESOLUTION: VCS is updated to use newer versions of OpenSSL in which the security vulnerabilities have been addressed.
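A minimal check of the cp_count setting referenced in incident 4125891 above; the values shown are illustrative:
# grep -E '^(vxfen_mode|cp_count)' /etc/vxfenmode
vxfen_mode=scsi3
cp_count=4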
Patch ID: VRTScavf-8.0.2.2400
* 4162683 (Tracking ID: 4153873)
SYMPTOM: A CVM master reboot resulted in volumes being disabled on the slave node.
DESCRIPTION: The InfoScale stack exhibited unpredictable behaviour during reboots: sometimes the node hangs while coming online, the working node goes into the FAULTED state, and sometimes CVM does not start on the rebooted node.
RESOLUTION: A mechanism has been added for making decisions about deport, and the code has been integrated with the offline routine.
Patch ID: VRTScavf-8.0.2.1500
* 4133969 (Tracking ID: 4074274)
SYMPTOM: DR test and failover activity might not succeed for hardware-replicated disk groups, and EMC SRDF hardware-replicated disk groups fail with a "PR operation failed" message.
DESCRIPTION: In the case of hardware-replicated disks like EMC SRDF, failover of disk groups might not succeed automatically and manual intervention might be needed. After failover, disks at the new primary site carry the 'udid_mismatch' flag, which needs to be updated manually for a successful failover. The SCSI-3 error message also needed to be changed to "PR operation failed".
RESOLUTION: For DMP environments, the VxVM and DMP extended attributes need to be refreshed by using 'vxdisk scandisks' prior to import. VxVM also provides a new vxdg import option '-o usereplicatedev=only' with DMP, which selects only the hardware-replicated disks during the LUN selection process (see the sample commands below). The VRTScavf (CVM) 7.4.2.2201 agent is enhanced on AIX to handle EMC SRDF failures. Before VxVM 8.0.x, the error was reported as "SCSI-3 PR operation failed"; the message has been changed accordingly. Sample syntax # /usr/sbin/vxdg -s -o groupreserve=VCS -o clearreserve -cC -t import AIXSRDF VxVM vxdg ERROR V-5-1-19179 Disk group AIXSRDF: import failed: SCSI-3 PR operation failed New 8.0.x VxVM error message format: 2023/09/27 12:44:02 VCS INFO V-16-20007-1001 CVMVolDg:<RESOURCE-NAME>:online:VxVM vxdg ERROR V-5-1-19179 Disk group <DISKGROUP-NAME>: import failed: PR operation failed
* 4137640 (Tracking ID: 4088479)
SYMPTOM: An EMC SRDF-managed disk group import failed with the error below. This failure is specific to EMC storage, only on AIX with fencing. #/usr/sbin/vxdg -o groupreserve=VCS -o clearreserve -c -tC import srdfdg VxVM vxdg ERROR V-5-1-19179 Disk group srdfdg: import failed: SCSI-3 PR operation failed
DESCRIPTION: AIX differentiates between read-write and read-only opens. When the underlying device state changed, the device open failed because of the pending open count (dmp_cache_open feature), as the debug log shows: 06/16 14:31:49: VxVM vxconfigd DEBUG V-5-1-7765 /dev/vx/rdmp/emc1_0c93: pgr_register: setting pgrkey: AVCS 06/16 14:31:49: VxVM vxconfigd DEBUG V-5-1-5762 prdev_open(/dev/vx/rdmp/emc1_0c93): open failure: 47 //#define EWRPROTECT 47 /* Write-protected media */ 06/16 14:31:49: VxVM vxconfigd ERROR V-5-1-18444 vold_pgr_register: /dev/vx/rdmp/emc1_0c93: register failed:errno:47
RESOLUTION: Make sure the disk supports SCSI-3 PR; the open handling around the pending open count (dmp_cache_open feature) is addressed.
Patch ID: VRTSodm-8.0.2.2800
* 4175626 (Tracking ID: 4175627)
SYMPTOM: The ODM module failed to load with the latest VxFS.
DESCRIPTION: The ODM package built with the latest VxFS must be used because ODM has an internal dependency on VxFS.
RESOLUTION: Use the ODM package built with the latest VxFS.
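A sample import sequence for hardware-replicated disk groups using the new option described in incident 4133969 above; the disk group name is illustrative:
# vxdisk scandisks
# vxdg -o usereplicatedev=only import srdf_dg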
Patch ID: VRTSodm-8.0.2.2700
* 4175626 (Tracking ID: 4175627)
SYMPTOM: The ODM module failed to load with the latest VxFS.
DESCRIPTION: The ODM package built with the latest VxFS must be used because ODM has an internal dependency on VxFS.
RESOLUTION: Use the ODM package built with the latest VxFS.
Patch ID: VRTSodm-8.0.2.2400
* 4186392 (Tracking ID: 4117909)
SYMPTOM: VRTSodm failed to load on SAP SLES15.
DESCRIPTION: VRTSodm fails to install on SAP SLES15.
RESOLUTION: VRTSodm is updated to support SAP SLES15.
* 4187361 (Tracking ID: 4187362)
SYMPTOM: The ODM module failed to load on the SLES15-SP6 kernel.
DESCRIPTION: This issue occurs due to changes in the SLES15-SP6 kernel.
RESOLUTION: The ODM module is updated to accommodate the changes in the kernel and load as expected on the SLES15-SP6 kernel.
Patch ID: VRTSodm-8.0.2.1700
* 4154116 (Tracking ID: 4118154)
SYMPTOM: The system may panic in simple_unlock_mem() when errcheckdetail is enabled, with a stack trace as follows. simple_unlock_mem() odm_io_waitreq() odm_io_waitreqs() odm_request_wait() odm_io() odm_io_stat() vxodmioctl()
DESCRIPTION: odm_io_waitreq() takes a lock and waits for the I/O request to complete, but it is interrupted by odm_iodone(), which performs the I/O and unlocks the lock taken by odm_io_waitreq(). When odm_io_waitreq() then tries to unlock the lock, the system panics because the lock was already unlocked.
RESOLUTION: Code has been modified to resolve this issue.
* 4159290 (Tracking ID: 4159291)
SYMPTOM: The ODM module does not load with the newly rebuilt VxFS.
DESCRIPTION: The ODM module does not load with the newly rebuilt VxFS; recompilation of ODM against the newly rebuilt VxFS is needed.
RESOLUTION: Recompiled ODM against the newly rebuilt VxFS.
Patch ID: VRTSodm-8.0.2.1500
* 4133286 (Tracking ID: 4133285)
SYMPTOM: The ODM module fails to load on SLES15 SP5.
DESCRIPTION: This issue occurs due to changes in the SLES15 SP5 kernel.
RESOLUTION: The ODM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.
* 4134950 (Tracking ID: 4134949)
SYMPTOM: The ODM module fails to load on Azure SLES15 SP5.
DESCRIPTION: This issue occurs due to changes in the Azure SLES15 SP5 kernel.
RESOLUTION: The ODM module is updated to accommodate the changes in the kernel and load as expected on Azure SLES15 SP5.
Patch ID: VRTSodm-8.0.2.1400
* 4144274 (Tracking ID: 4144269)
SYMPTOM: After installing VRTSvxfs-8.0.2.1400, ODM fails to start.
DESCRIPTION: Because of the VxFS version update, the ODM module needs to be repackaged due to an internal dependency on the VxFS version.
RESOLUTION: As part of this fix, the ODM module has been repackaged to support the updated VxFS version.
Patch ID: VRTSodm-8.0.2.1300
* 4133286 (Tracking ID: 4133285)
SYMPTOM: The ODM module fails to load on SLES15 SP5.
DESCRIPTION: This issue occurs due to changes in the SLES15 SP5 kernel.
RESOLUTION: The ODM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.
Patch ID: VRTSodm-8.0.2.1200
* 4126262 (Tracking ID: 4126256)
SYMPTOM: A "no symbol version" warning for "ki_get_boot" appears in dmesg after SFCFSHA configuration.
DESCRIPTION: modpost is unable to read VEKI's Module.symvers while building the ODM module, which results in the "no symbol version" warning for VEKI's "ki_get_boot" symbol.
RESOLUTION: Modified the code to make sure that modpost picks up all the dependent symbols while building the ODM module.
* 4127518 (Tracking ID: 4107017)
SYMPTOM: The ODM module fails to load on a Linux minor kernel.
DESCRIPTION: This issue occurs due to changes in the minor kernel.
RESOLUTION: Modified the existing modinst-odm script to consider the kernel build version in the exact-version-module calculation.
* 4127519 (Tracking ID: 4107778)
SYMPTOM: The ODM module fails to load on a Linux minor kernel.
DESCRIPTION: This issue occurs due to changes in the minor kernel.
RESOLUTION: Modified the existing modinst-odm script to consider the kernel build version in the best-fit-module-version calculation when an exact-version module is not present.
* 4129838 (Tracking ID: 4129837)
SYMPTOM: The ODM rpm does not have a changelog.
DESCRIPTION: A changelog in the rpm helps to find missing incidents with respect to other versions.
RESOLUTION: A changelog is generated and added to the ODM rpm.
Patch ID: VRTSvxfs-8.0.2.2800
* 4190914 (Tracking ID: 4190078)
SYMPTOM: The system panicked due to a VxFS LRU list inconsistency.
DESCRIPTION: Multiple-LRU-list changes were introduced to avoid the single LRU lock contention seen during parallel lookups. With these changes, d_name.hash is used to pick which LRU list a dentry is put on, and the corresponding LRU list lock is taken when the dentry is taken off the LRU list, again based on its d_name.hash. However, a dentry's d_name can change between adding it to and removing it from the LRU list that was picked based on its initial d_name.hash; as a result, the wrong LRU list lock is taken unexpectedly, based on the current (already changed) d_name.hash. This can cause a panic because the lists become inconsistent when the wrong lock is taken. One scenario in which d_name.hash can change is a file system rename operation. This is a file system metadata-only inconsistency and has no effect on the data.
RESOLUTION: Once an LRU list is picked for a dentry based on its initial d_name.hash, the dentry now stays on the same LRU list until it is deleted.
Patch ID: VRTSvxfs-8.0.2.2700
* 4135608 (Tracking ID: 4086287)
SYMPTOM: The VxFS mount command may panic the system with the "scheduling while atomic: mount/453834/0x00000002" bug for a cluster file system: schedule() vxg_svar_sleep_unlock() vxg_get_block() vxg_do_initlock() vxg_api_initlock() vx_glm_init_blocklock() vx_cbuf_lookup() vx_getblk_clust() vx_getblk_cmn() vx_getblk() vx_badmap_rdwr() vx_emap_lookup() vx_reorg_excl() vx_fsadm_query() vx_cfs_fset_mnt() vx_domount() vx_fill_super() mount_bdev() vx_get_sb_bdev_v2() vx_get_sb_impl() vx_get_sb_v2() legacy_get_tree() vfs_get_tree() do_new_mount()
DESCRIPTION: The VxFS mount code for a cluster file system takes a spinlock to check for exclusion zones. This operation may require additional I/O to fetch information from disk or may need a sleep lock. Scheduling out a thread while it holds a spinlock is not safe, hence Linux may panic the system with the "scheduling while atomic: mount/453834/0x00000002" bug.
RESOLUTION: The spinlock protecting the VxFS exclusion zones is converted to a sleep lock (semaphore) to solve this issue.
* 4159938 (Tracking ID: 4155961)
SYMPTOM: System panic due to a null i_fset in vx_rwlock().
DESCRIPTION: The panic in vx_rwlock is due to a race between the vx_rwlock() and vx_inode_deinit() functions. Panic stack [exception RIP: vx_rwlock+174] . . #10 __schedule #11 vx_write #12 vfs_write #13 sys_pwrite64 #14 system_call_fastpath
RESOLUTION: Code changes have been done to fix this issue.
* 4164927 (Tracking ID: 4187385)
SYMPTOM: If the same blocks get allocated to different inodes, one of them being the IFAULOG inode, the audit log file inode is marked bad in pass1b and later gets freed, and the audit log file is removed.
DESCRIPTION: For structural files, in case of a duplicate block allocation, the inode should not be marked bad for removal; instead, the other files' references to the same block should be removed.
RESOLUTION: Changes are made to skip marking IFAULOG-type inodes as bad during pass1b.
* 4177631 (Tracking ID: 4177630)
SYMPTOM: Save the fsck progress status report to a file by default.
DESCRIPTION: The fsck progress status report log is saved under /etc/vx/log/. The file name is the device path name, e.g. fsck_dev_vx_dsk_pvtdg_vol1. A message is printed to syslog specifying the full file name, e.g. "Fsck progress status will be saved in /etc/vx/log/fsck_dev_vx_dsk_pvtdg_vol1". If the progress status file cannot be opened, or if there is an error while flushing the file, an error is printed to syslog, e.g. "Failed to open file for logging fsck progress status output". With the -o status option, in addition to saving the progress status in the file, it is printed on screen (see the usage sketch below).
RESOLUTION: Code changes were done to save the fsck progress status report to a file by default.
* 4177641 (Tracking ID: 4135900)
SYMPTOM: asm_exc_page_fault occurred while running the LM stress worm test.
DESCRIPTION: Use of a variable before taking the spinlock caused a compiler optimisation issue in the xted_free function.
RESOLUTION: Since the variable is not used before the spinlock, it is omitted.
* 4177643 (Tracking ID: 4085144)
SYMPTOM: A remount operation with "smartiomode=writeback" triggers a kernel panic with the message: "BUG: scheduling while atomic: mount.vxfs"
DESCRIPTION: While remounting the file system with the "smartiomode=writeback" option, the kernel takes a spin lock but does not release it before requesting operations that might sleep, which results in a kernel panic.
RESOLUTION: Code changes have been done to release the spin lock before such operations.
* 4177650 (Tracking ID: 4164503)
SYMPTOM: Some internal VxFS user-space library functions may leak memory in some cases.
DESCRIPTION: Some internal VxFS user-space library functions may leak memory in some cases. Most of the consumers are short-lived binaries, but a few are daemons.
RESOLUTION: Code changes have been done to resolve the issue.
* 4188062 (Tracking ID: 4188063)
SYMPTOM: Pages were getting unlocked even though VxFS had locked them.
DESCRIPTION: VxFS maintains the dirty pages to be flushed in an array embedded in the iowr structure. In vx_pvn_range_dirty(), VxFS uses kernel APIs to look up the dirty pages in the given flush range and populates this array in order of increasing index/offset. There is a performance optimisation where VxFS tries to find dirty pages before the start offset of the given flush range and flush all of the accumulated pages. This optimisation called the same kernel lookup API on the same iowr structure that was passed to vx_pvn_range_dirty(). The new lookup reshuffled the dirty-page array and caused an extra reference count decrease on the pages, resulting in pages being unlocked in the kernel even though they were locked by VxFS, and an internal VxFS assert was hit.
RESOLUTION: The code is modified to avoid reshuffling the array while looking up dirty pages in vx_pvn_backward_cluster().
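A usage sketch for the fsck progress log described in incident 4177631 above; the device path matches the example file name in that entry:
# fsck -t vxfs -o full /dev/vx/rdsk/pvtdg/vol1
# cat /etc/vx/log/fsck_dev_vx_dsk_pvtdg_vol1
# fsck -t vxfs -o full,status /dev/vx/rdsk/pvtdg/vol1    (also prints the progress on screen)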
* 4188390 (Tracking ID: 4188391)
SYMPTOM: There was a mismatch between the setfacl and getfacl command outputs for an empty ACL.
DESCRIPTION: VxFS code instructed the kernel to cache the ACL that was passed in, even though no ACL is saved on disk in the case of an empty ACL.
RESOLUTION: Modified the VxFS code to cache NULL when the ACL is empty and to cache the ACL otherwise.
* 4189348 (Tracking ID: 4188888)
SYMPTOM: systemctl status fstrim fstrim.service - Discard unused blocks on filesystems from /etc/fstab Loaded: loaded (/usr/lib/systemd/system/fstrim.service; static) Active: failed (Result: exit-code) since Mon 2025-03-03 08:47:18 PST; 2s ago Docs: man:fstrim(8) Process: 15347 ExecStart=/usr/sbin/fstrim --listed-in /etc/fstab:/proc/self/mountinfo --verbose --quiet-unsupported (code=exited, status=64) Main PID: 15347 (code=exited, status=64) CPU: 59ms Mar 03 08:47:18 drserver201 systemd[1]: Starting Discard unused blocks on filesystems from /etc/fstab... Mar 03 08:47:18 drserver201 fstrim[15347]: fstrim: /t2: FITRIM ioctl failed: Invalid argument Mar 03 08:47:18 drserver201 fstrim[15347]: fstrim: /t1: FITRIM ioctl failed: Invalid argument Mar 03 08:47:18 drserver201 systemd[1]: fstrim.service: Main process exited, code=exited, status=64/USAGE Mar 03 08:47:18 drserver201 systemd[1]: fstrim.service: Failed with result 'exit-code'. Mar 03 08:47:18 drserver201 systemd[1]: Failed to start Discard unused blocks on filesystems from /etc/fstab. [root@drserver201 ~]# /usr/sbin/fstrim --listed-in /etc/fstab:/proc/self/mountinfo --verbose --quiet-unsupported fstrim: /t2: FITRIM ioctl failed: Invalid argument fstrim: /t1: FITRIM ioctl failed: Invalid argument
DESCRIPTION: The VxFS file system does not support the fstrim utility. When the fstrim command is executed, it returns the error code EINVAL, which misleadingly implies that the feature is supported but that an argument or ioctl function is invalid. Instead of returning a misleading error, a valid error code should be returned indicating that fstrim is not supported on VxFS file systems.
RESOLUTION: Code changes have been added to return a valid error code stating that fstrim is not supported on the VxFS file system.
* 4189349 (Tracking ID: 4188107)
SYMPTOM: A softlockup occurred while shrinking a VxFS file system; the stack trace of the task that caused the softlockup is as follows: vx_multi_bufinval+0x21d/0x230 [vxfs] vx_reorg_start_io+0x32e/0x3d0 [vxfs] vx_reorg_copy+0x235/0x4c0 [vxfs] vx_reorg_dostruct+0x3b6/0x960 [vxfs] vx_trancommit+0x32f/0x1220 [vxfs] vx_extmap_reorg+0xdd2/0xe90 [vxfs] vx_ilock+0x18/0x50 [vxfs] vx_struct_reorg+0xb5f/0xb90 [vxfs] vx_aioctl_full+0x107d/0x1160 [vxfs] vx_aioctl_common+0x1ba9/0x2410 [vxfs]
DESCRIPTION: This issue occurs during reorg of the extent map file. As part of the reorg, vx_reorg_copy()/vx_reorg_start_io() is called to swap the extent map between IFEMAP and the reorg IFEMAP; during this swap, the buffers containing the original emap data need to be invalidated. An excess buffer invalidation occurred because an incorrect buffer length was passed to invalidate the buffers.
RESOLUTION: Modified the code to pass the correct length of the buffer to be invalidated.
* 4189423 (Tracking ID: 4189424)
SYMPTOM: The FSQA binary freezeit fails with the error "ioctl VX_FREEZE failed".
DESCRIPTION: The ioctl call with the command VX_FREEZE fails with error code 25 (ENOTTY).
RESOLUTION: Modified the code to ensure all the VxFS ioctl-based commands are handled properly.
* 4189586 (Tracking ID: 4189587)
SYMPTOM: The setfacl operation failed with the error: Operation not supported.
DESCRIPTION: Newer kernels use dedicated APIs for interacting with POSIX ACLs.
The VxFS code also needs to use these newly implemented APIs instead of the generic APIs that are used for all extended attributes.
RESOLUTION: Modified the VxFS code to use the dedicated POSIX ACL APIs for setting and retrieving ACL extended attributes on newer kernels.
* 4189598 (Tracking ID: 4187406)
SYMPTOM: Panic in locked_inode_to_wb_and_lock_list during OS writeback.
DESCRIPTION: There is a race condition between OS writeback, inode eviction, and VxFS lookup. VxFS lookup initializes the OS inode unconditionally, even when it is held or exposed by the OS; as a result, i_state is cleared, which breaks the serialisation (checking whether i_state has I_SYNC set) between OS writeback and inode eviction, hence the race. The panic stack trace is as follows: machine_kexec at ffffffffac867cfe __crash_kexec at ffffffffac9ad94d crash_kexec at ffffffffac9ae881 oops_end at ffffffffac8274f1 no_context at ffffffffac879a03 __bad_area_nosemaphore at ffffffffac879d64 do_page_fault at ffffffffac87a617 page_fault at ffffffffad20111e [exception RIP: locked_inode_to_wb_and_lock_list+28] writeback_sb_inodes at ffffffffacb858b4 __writeback_inodes_wb at ffffffffacb85b0f wb_writeback at ffffffffacb85dcb get_nr_inodes at ffffffffacb6d765 wb_workfn at ffffffffacb86c8a process_one_work at ffffffffac910167 worker_thread at ffffffffac910820
RESOLUTION: Modified the code to not re-initialize the OS inode when it is under writeback or exposed by the OS.
* 4189599 (Tracking ID: 3743572)
SYMPTOM: When the number of inodes (regular files and directories together) of a clustered file system exceeds the 1-billion limit, the CFS secondary node may hang indefinitely when trying to allocate more inodes, with the following stack trace: vx_svar_sleep_unlock vx_event_wait vx_async_waitmsg vx_msg_send llt_msgalloc vx_cfs_getias vx_update_ilist vx_find_partial_au vx_cfs_noinode vx_noinode vx_dircreate_tran vx_pd_create vx_dirlook vx_create1_pd vx_create1 vx_create_vp vx_create
DESCRIPTION: The maximum number of inodes supported by VxFS is 1 billion, and the maximum number of inode allocation units (IAUs) is 16384. When the file system is running out of inodes and the maximum IAU limit is reached, VxFS can still create two extra IAUs if there is a hole in the last IAU. Because of the hole, when a CFS secondary requests more inodes, the CFS primary still thinks a hole is available and notifies the secondary to retry. However, the secondary fails to find a slot since the 1-billion limit is hit, goes back to the primary to request free inodes again, and this loops infinitely, hence the hang.
RESOLUTION: When the maximum IAU number is reached, the primary is prevented from creating the extra IAUs.
* 4189600 (Tracking ID: 4189333)
SYMPTOM: The file size is reported incorrectly after fallocate/truncate operations.
DESCRIPTION: With vx_falloc_clear=1, file size inconsistencies occurred due to improper block deallocation after truncate/fallocate.
RESOLUTION: The code is fixed to ensure correct inode size updates and block handling after truncate/fallocate operations.
* 4189601 (Tracking ID: 4120787)
SYMPTOM: Data corruption issues with parallel direct I/O on ZFOD extents.
DESCRIPTION: 1. Data loss occurs when a direct I/O spans more than two adjacent ZFOD extents and those extents are not split correctly in vx_pdio_wait(). 2. A single I/O can get split into multiple I/Os without block alignment if the required pages exceed a predefined limit, leading to data corruption due to improper handling of adjacent I/Os and extent clearing.
RESOLUTION: Code changes are done to ensure that even if an I/O spans multiple ZFOD extents, all extents are correctly split. Redundant calls to the function that clears the ZFOD extent are eliminated, preventing unnecessary clearing of data from other I/Os.
* 4189603 (Tracking ID: 4187096)
SYMPTOM: Orphaned symlinks were not getting replicated in VFR.
DESCRIPTION: During replication sync, there is an optimisation whereby VxFS sends attributes/mtime/ACLs during pass2 itself, in the last data record of the inode. A normal symlink has no data, so symlink inodes should not be processed during the data pass (pass2); with this optimisation, however, VFR was sending attributes/mtime for symlinks in pass2. Getting attributes/ACLs works fine for any normal symlink, but it fails for a broken symlink because the path is followed to get the ACLs. For any symlink, VFR should not check for ACLs, as these are the same as for the original file.
RESOLUTION: The code is modified to avoid checking ACLs on symlinks.
* 4189604 (Tracking ID: 4184953)
SYMPTOM: mkfs may generate a coredump with signal SIGSEGV. The stack trace looks like the following: #0 find_space #1 place_extents #2 make_fs #3 main
DESCRIPTION: The simple extent allocator in mkfs requires alignment on an 8-block boundary for requests bigger than 8 blocks, but the actual allocation of the aux bitmap is not always 8-block aligned. Hence, the extent allocator can search beyond the actual size of the aux bitmap and cause a segfault.
RESOLUTION: Code changes have been done to make the block calculation 8-block aligned.
* 4189605 (Tracking ID: 4188417)
SYMPTOM: Improper handling of a null fel pointer in the recovery context.
DESCRIPTION: Improper handling of a null fel pointer in the recovery context.
RESOLUTION: Code changes have been done to properly handle the null fel pointer.
* 4189607 (Tracking ID: 4189606)
SYMPTOM: SecureFS failed to create checkpoints per the schedule.
DESCRIPTION: SecureFS allows configuring checkpoint creation at a specified interval. The user can also specify the number of checkpoints to be present at a given point in time, so that the file system does not reach ENOSPC. To preserve that number of checkpoints, the scheduled checkpoint creation also deletes the oldest checkpoint. This deletion was synchronous; as it involves inode processing and disk I/O, with a large number of inodes it could take long enough to miss the next schedule. The checkpoint should therefore be deleted asynchronously, since this activity can be done in the background even in the node-reboot scenario.
RESOLUTION: Code changes are made to remove the checkpoint asynchronously, which unblocks the next schedule so it kicks in to create the checkpoint.
* 4189642 (Tracking ID: 4127771)
SYMPTOM: A full fsck hits an assert and fails with "run_fsck : Failed to full fsck cleanly, exiting".
DESCRIPTION: In the case of fileset removal, if the file system gets disabled before the freeze (while marking inodes as PTI), FCL transactions can be seen followed by the superblock free transaction.
RESOLUTION: The assert was not true in certain cases and has been removed.
* 4189648 (Tracking ID: 4142106)
SYMPTOM: When running fsck -n, a warning appears about an incorrect allocation unit (AU) state, even though the on-disk AU state is correct.
DESCRIPTION: This issue happens when fsck -n processes a snapshot inode before the original file inode. Because of this, fsck marked some AUs as dirty even though they had a fully allocated and clean on-disk state.
RESOLUTION: Added code changes to fsck to correctly verify and update the AU state when processing inodes. This ensures the in-memory AU state is accurate for fully allocated AUs, preventing false warnings during fsck -n.
* 4189650 (Tracking ID: 4155954)
SYMPTOM: Attribute data mismatches occur even when the node is the owner.
DESCRIPTION: Hlock may skip copying the attribute data if the inode is bad; later, while taking the RWLOCK, it assumes that if the node is the owner of the inode, it has correct attribute data.
RESOLUTION: If the inode is bad or the file system is disabled, the attribute data copy is skipped.
* 4189652 (Tracking ID: 4188805)
SYMPTOM: Online migration process threads hang in allocation.
DESCRIPTION: In migration, no more than 2 MB of memory can be allocated. If threads are consuming that much memory, other threads need to wait until memory becomes available.
RESOLUTION: The buffer is freed after copying the whole file, which was missing in a certain case.
* 4189655 (Tracking ID: 4189654)
SYMPTOM: The mount command returns its help output because it is not able to parse multi-category security SELinux contexts.
DESCRIPTION: The VxFS mount binary does not support multi-category security SELinux contexts like "system_u:object_r:container_file_t:s0:c7,c28".
RESOLUTION: Added a code change to allow the mount command to take a comma-separated multi-category security SELinux context as a mount suboption and mount the file system accordingly (see the sample command below).
* 4189656 (Tracking ID: 4179548)
SYMPTOM: A few in-core buffers may remain unflushed, leading to potential audit log inconsistency in the event of a system panic or crash.
DESCRIPTION: During audit log file grow, the vx_multi_bufflush function may skip flushing some buffers if the file system block size (f_bsize) is less than 8KB. This is due to the use of a fixed threshold (IADDREXTSIZE, set to 8KB) instead of the actual block size, which can cause dirty buffers to remain in memory. If the system panics before the next scheduled flush, these buffers may never be written to disk, resulting in inconsistencies in the audit log.
RESOLUTION: fs->f_bsize is used instead of IADDREXTSIZE (8KB) in vx_multi_bufflush so that the buffer-flushing behavior aligns with the actual file system block size.
* 4189659 (Tracking ID: 4180012)
SYMPTOM: The fsck utility generated a coredump due to a race between multiple fsck threads. This happens when one thread is printing the progress status to the psr file while another has already finished.
DESCRIPTION: There is a race between multiple fsck threads: the file pointer accessed by the thread printing the progress status has already been closed.
RESOLUTION: The thread printing the progress status is cancelled before the psr file is closed.
* 4189663 (Tracking ID: 4181952)
SYMPTOM: An assert is hit in vx_bufinval when the first extent (offset 0) of an FCL file has more than one block associated with it.
DESCRIPTION: vx_bufinval expects the offset to be non-zero, as the first extent holds the FCL header; if not, it hits an assert.
RESOLUTION: Added a check in vx_fcl_truncate for offset == 0, to update the offset to fs_bsize.
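A sample mount invocation for the multi-category SELinux support described in incident 4189655 above; the device and mount point are illustrative, and the context value is the one from that entry:
# mount -t vxfs -o context="system_u:object_r:container_file_t:s0:c7,c28" /dev/vx/dsk/testdg/vol1 /mnt1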
* 4189665 (Tracking ID: 4182162)
SYMPTOM: Creating a modifiable checkpoint of a WORM checkpoint using the "fsckptadm createall" command on multiple file systems fails with EROFS and ENOTSUP.
DESCRIPTION: Creating a modifiable checkpoint of a WORM checkpoint using the "fsckptadm createall" command on multiple file systems fails with EROFS and ENOTSUP.
RESOLUTION: Code changes have been done to allow creation of a modifiable checkpoint of a WORM checkpoint using the "fsckptadm createall" command on multiple file systems.
* 4189668 (Tracking ID: 4189667)
SYMPTOM: VxFS medium-impact Coverity issues.
DESCRIPTION: Integer overflow in vx_aioctl_device() and do_restore().
RESOLUTION: Integer overflow is now checked before the variables are accessed.
* 4189669 (Tracking ID: 4182897)
SYMPTOM: The LM CMDS->metasave test case fails while running metasave restore.
DESCRIPTION: An invalid superblock read is encountered during metasave restore for metaversion 8 and earlier.
RESOLUTION: Added a superblock read based on the metaversion.
* 4189672 (Tracking ID: 4188816)
SYMPTOM: The file system is disabled and migration fails when direct writes are issued to a file while migration is in progress.
DESCRIPTION: The dentry_open code was not setting the correct flags (O_RDWR); the flag was set to 0 and FMODE_WRITE was not set in f_mode.
RESOLUTION: The O_ACCMODE flags are removed before setting the O_RDWR flag.
* 4189675 (Tracking ID: 4187574)
SYMPTOM: 'vxfstaskd' generates a core intermittently when the user unmounts a VxFS file system where SecureFS is configured.
DESCRIPTION: This happens due to an internal race between the respective worker threads, causing invalid/freed memory access by the "vxfstaskd" daemon.
RESOLUTION: Code changes have been made in the "vxfstaskd" daemon to resolve the issue.
* 4189676 (Tracking ID: 4187819)
SYMPTOM: vxlist times out or takes around 5 minutes on 8.0.2.1500 VxVM/8.0.2.520 VRTSsfmh (RHEL 8.10).
DESCRIPTION: This can happen because dcli is overloaded by unnecessary execution of the "vxsnap" command by the "vxfstaskd" daemon for every mounted file system.
RESOLUTION: Code changes have been done to stop the unnecessary invocation of the "vxsnap" binary on mounted VxFS file systems where SecureFS is not configured.
* 4189677 (Tracking ID: 4188282)
SYMPTOM: While performing the LM Noise replay worm test, the system exits due to a failure to complete a full fsck cleanly.
DESCRIPTION: In the function vx_upg16_fill_aulog, nblks = IADDREXTSIZE >> fs->fs_bshift, and nblks is later used in both vx_read_blk and vx_write_blk. This approach can cause problems when the extents are not aligned to 8KB.
RESOLUTION: Fixed the length passed to vx_read_blk and vx_write_blk.
* 4189686 (Tracking ID: 4188813)
SYMPTOM: The online migration process hangs when file operations such as remove, modify, and ftrunc are performed on random files in a loop from a client while migration is running.
DESCRIPTION: There was a deadlock where one thread was waiting for a lock on the inode, and the other thread, which held the lock, was waiting for the file copy to be completed by the first thread.
RESOLUTION: Fixed the deadlock by adding a new migration flag.
* 4189702 (Tracking ID: 4189180)
SYMPTOM: The system hung due to global LRU lock contention.
DESCRIPTION: Parallel file system stat calls on a large system can cause heavy lock contention on the VxFS global LRU list, which is protected by a single global lock.
RESOLUTION: To avoid the contention, multiple LRU lists are introduced; an unused dentry is put on an LRU list based on the hash of its file name, and the LRU lists are pruned in a round-robin manner.
* 4189792 (Tracking ID: 4189761)
SYMPTOM: A use-after-free memory corruption occurred.
* 4189792 (Tracking ID: 4189761)

SYMPTOM: Use-after-free memory corruption occurred.

DESCRIPTION: VxFS added set_acl()/get_acl() support that uses the local ACL cache on SLES15 SP6/RHEL9. After an ACL cache is created, it is released by the Linux kernel when the inode is destroyed, but the kernel does not clear the inode ACL fields, leaving a dangling pointer. A later, redundant clearing of the ACL cache during VxFS OS inode deinit accesses the dangling pointer and releases the ACL again, hence the memory corruption.

RESOLUTION: The fix removes the unnecessary clearing of the ACL cache during VxFS inode free and resets the inode ACL fields after the ACL cache is released by the Linux kernel.

* 4190077 (Tracking ID: 4116377)

SYMPTOM: When a filesystem check is run on a volume, it reports an invalid offset for audit log records.

DESCRIPTION: The filesystem provides a way to audit certain types of operations and records the data in a specific format. During a filesystem check, the written records are verified for sanity. Due to an incorrect calculation in this check, correct records were reported as invalid, producing a warning such as:
fix invalid aulog end offset? prev end offset: 1572800 new offset: 1507328 (ynq)n

RESOLUTION: Corrected the calculation performed during format validation.
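A non-destructive check of this kind can be run with fsck's -n option, which reports inconsistencies without modifying the file system (the device path below is a placeholder):

# fsck -t vxfs -o full -n /dev/vx/rdsk/testdg/vol1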
* 4190275 (Tracking ID: 4190241)

SYMPTOM: Customers see I/O errors such as:
Jun 11 13:58:52 pvhomdb01-new kernel: attempt to access beyond end of device
Jun 11 13:58:52 pvhomdb01-new kernel: VxVM22002: rw=1, want=36028797016252418, limit=209639424
Jun 11 13:58:52 pvhomdb01-new kernel: vxfs: msgcnt 4 mesg 038: V-2-38: vx_dataioerr - /dev/vx/dsk/vgd_share/sw_lv file system file data write error in dev/block 0/36028797016252416

DESCRIPTION: During direct I/O writes, incorrect alignment of a 64-bit file offset against a 32-bit block size leads to calculation of an invalid block number, resulting in I/O attempts beyond the device boundary and triggering vx_dataioerr.

RESOLUTION: Type cast the block size value to 64-bit in the alignment calculation to ensure correct offset alignment and prevent invalid block access.
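As a simplified, hypothetical illustration of this class of bug (shell arithmetic only, not the actual driver code), aligning a 64-bit file offset against a block-size mask that is effectively only 32 bits wide discards the upper bits of the offset and yields a wrong block address:

# bsize=1024; off=$(( (5 << 32) + 1536 ))         # a file offset beyond 4 GiB
# echo $(( off & (0xFFFFFFFF & ~(bsize - 1)) ))   # mask truncated to 32 bits: high bits lost
1024
# echo $(( off & ~(bsize - 1) ))                  # 64-bit mask: correct alignment
21474837504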
Patch ID: VRTSvxfs-8.0.2.2500

* 4189227 (Tracking ID: 4189228)

SYMPTOM: Security vulnerabilities exist in the third-party components [zlib, libexpat] used by VxFS.

DESCRIPTION: VxFS uses the third-party components [zlib, libexpat], which have some security vulnerabilities.

RESOLUTION: VxFS is updated to use newer versions of the third-party components [zlib, libexpat] in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.2.2400

* 4144078 (Tracking ID: 4142349)

SYMPTOM: Using sendfile() on a VxFS file system might result in a hang with the following stack trace:
schedule()
mutex_lock()
vx_splice_to_pipe()
vx_splice_read()
splice_file_to_pipe()
do_sendfile()
do_syscall()

DESCRIPTION: VxFS code erroneously tries to take the pipe lock twice in the splice read code path, which might result in a hang when the sendfile() system call is used.

RESOLUTION: VxFS now uses generic_file_splice_read() instead of its own implementation for splice read.

* 4162063 (Tracking ID: 4136858)

SYMPTOM: The ncheck utility generated a coredump while running on a corrupted filesystem due to the absence of a sanity check for directory inodes.

DESCRIPTION: The ncheck utility generated a coredump while running on a corrupted filesystem because there was no sanity check for directory inodes.

RESOLUTION: Added a basic sanity check for directory inodes. The utility no longer dumps core on a corrupted FS and instead exits gracefully on error.

* 4162064 (Tracking ID: 4121580)

SYMPTOM: Modification operations are allowed on a checkpoint even though the WORM flag is set.

DESCRIPTION: If a checkpoint is mounted on one node in RW (read-write) mode and the WORM flag is set from another node, modifications are still allowed.

RESOLUTION: The issue is fixed with a code change.

* 4162065 (Tracking ID: 4158238)

SYMPTOM: The vxfsrecover command exits with an error if the previous invocation terminated abnormally.

DESCRIPTION: The vxfsrecover command exits with an error if the previous invocation terminated abnormally, due to missing cleanup in the binary.

RESOLUTION: Code changes have been done to perform cleanup properly in case of abnormal termination of the "vxfsrecover" process.

* 4162066 (Tracking ID: 4156650)

SYMPTOM: Stale checkpoint entries will remain.

DESCRIPTION: For example, with checkpoints T1 (newest), T2, T3 ... TN (oldest), if recovery happened from T3, then T1 and T2 will never be deleted, as their information has been lost.

RESOLUTION: Code changes have been done in the vxfstaskd binary to avoid the above issue.

* 4162220 (Tracking ID: 4099775)

SYMPTOM: System panic.

DESCRIPTION: The panic is caused by a race between two or more threads trying to extend the per-node quota file.

RESOLUTION: Code is modified to handle this race condition.

* 4163183 (Tracking ID: 4158381)

SYMPTOM: Server panicked with "Kernel panic - not syncing: Fatal exception".

DESCRIPTION: The server panicked due to access of a freed dentry whose hlist had been corrupted. There is a difference between VxFS's dentry implementation and the kernel equivalent: the VxFS implementations of find_alias and splice_alias are based on old kernel versions of d_find_alias and d_splice_alias, and they need to be kept in sync with newer kernel code to avoid such issues.

RESOLUTION: Addressed the difference between the VxFS dentry-related functions (splice_alias, find_alias) and their kernel equivalents by making kernel-equivalent code changes in the VxFS find_alias and splice_alias functions.

* 4164090 (Tracking ID: 4163498)

SYMPTOM: Veritas File System df command logging does not have sufficient permission while validating the tunable configuration.

DESCRIPTION: When logging to vxfs_cmdlog, the df_vxfs utility checks the eo_perm tunable configuration in read-write mode. The tunable configuration file, located in the /etc/vx/vxfssystem directory and updated by the vxtunefs command based on the eo_perm tunable, does not have sufficient permission for this mode, for example when the root file system is mounted read-only and only /etc/opt/veritas is writable.

RESOLUTION: Updated the code to set the permission correctly.

* 4164270 (Tracking ID: 4156384)

SYMPTOM: Filesystem metadata can get corrupted due to a missing transaction in the intent log, which in some scenarios results in a mount failure.

DESCRIPTION: Filesystem metadata can get corrupted due to a missing transaction in the intent log. This can result in a mount failure in some scenarios, and it is not limited to mount failure; it may cause further corruption of the filesystem metadata.

RESOLUTION: Code changes have been done in the reconfig code path to add the missing transaction, which prevents the redundant replay of already-completed transactions that was causing the issue.

* 4166501 (Tracking ID: 4163862)

SYMPTOM: Mutex lock contention is observed in a cluster file system under a massive file-creation workload.

DESCRIPTION: This mutex lock is used to access/modify the delegated inode allocation unit list on cluster file system nodes. Because multiple processes creating new files need to read this list, they contend on this mutex lock. A stack trace like the following is observed in the file-creation code path:
__mutex_lock
vx_dalist_getau
vx_cfs_inofindau
vx_ialloc
vx_dircreate_tran
vx_int_create
vx_do_create
vx_create1
vx_create_vp
vx_create
vfs_create

RESOLUTION: The mutex lock for the delegated inode allocation unit list is converted to a read-write fast sleep lock. This allows file-creation processes to access the delegated inode allocation unit list in parallel.
* 4166502 (Tracking ID: 4163127)

SYMPTOM: Spinlock contention is observed during inode allocation under a massive file-creation workload on a cluster file system.

DESCRIPTION: When a file-creation operation happens on a cluster node, the flag for the inode allocation unit is accessed under the protection of the inode allocation spinlock. Hence the contention is seen with the following code path:
vx_ismapdelfull
vx_get_map_dele
vx_mdele_tryhold
vx_cfs_inofindau
vx_ialloc
vx_dircreate_tran
vx_int_create
vx_do_create
vx_create1
vx_create_vp
vx_create
vfs_create

RESOLUTION: Code changes have been done to reduce the contention on the inode allocation spinlock.

* 4166503 (Tracking ID: 4162810)

SYMPTOM: Spikes in CPU usage by GLM threads were observed in the output of the "top" command during a massive file-creation workload on a cluster file system.

DESCRIPTION: When running a massive file-creation workload from cluster node(s), a huge number of GLM threads was observed in the top command output. These GLM threads contend for a global spinlock and consume CPU cycles with the following stack trace:
native_queued_spin_lock_slowpath()
_raw_spin_lock_irqsave()
vx_glmlist_thread()
vx_kthread_init()
kthread()

RESOLUTION: Code is modified to split the heavy global spinlock across different priority levels.

* 4168357 (Tracking ID: 4076646)

SYMPTOM: Unprivileged memory can get corrupted by VxFS in case the inode size is 512 bytes and the inode's attributes reside in its immediate area.

DESCRIPTION: Unprivileged memory can get corrupted by VxFS in case the inode size is 512 bytes and the inode's attributes reside in its immediate area.

RESOLUTION: Code changes have been done in the attribute code path to make sure the free space in the attribute area never exceeds the length of that area.

* 4172054 (Tracking ID: 4162316)

SYMPTOM: System panic.

DESCRIPTION: CrowdStrike Falcon might generate a kernel panic during migration from a native FS to VxFS.

RESOLUTION: The required fix has been added to the VxFS code.

* 4173064 (Tracking ID: 4163337)

SYMPTOM: Intermittent df slowness is seen across the cluster due to a slow cluster-wide file system freeze.

DESCRIPTION: For certain workloads, intent log reset can happen relatively frequently, and whenever it happens it triggers a cluster-wide freeze. If there are many dirty buffers that need flushing and invalidation, the freeze might take a long time to finish. The slowest part of invalidating cluster buffers is the de-initialisation of their GLM locks, which requires many lock-release messages to be sent to the master lock node. This can cause flow control to be set at the LLT layer, slowing the cluster-wide freeze and blocking commands like df and ls for that entire duration.

RESOLUTION: Code is modified to avoid buffer flushing and invalidation when the freeze is triggered by an intent log reset.

* 4177627 (Tracking ID: 4160991)

SYMPTOM: An address that has been freed but is still present in the mlink is accessed.

DESCRIPTION: In case of an error while flushing, writing transactions to the disk is skipped, which can ultimately lead to accessing freed memory.

RESOLUTION: Code changes done to resolve the issue.

* 4177635 (Tracking ID: 4165264)

SYMPTOM: Kernel-space memory is wasted.

DESCRIPTION: The memory is not freed when the VX_CLONEFSET ioctl does not have the WORM flag set but the retention is non-zero.

RESOLUTION: Fixed the leak with code changes.
* 4177636 (Tracking ID: 4160978)

SYMPTOM: Error messages are seen while recovering data for an FCL-enabled filesystem.

DESCRIPTION: During recovery, the vxfsrecover utility tries to recover "lost+found/changelog". Any modification to this file is blocked for security reasons, so the utility displays error messages.

RESOLUTION: As critical application data should not reside in the "lost+found" location, the whole lost+found directory is now skipped while recovering data through vxfsrecover.

* 4177638 (Tracking ID: 4157349)

SYMPTOM: Undefined behaviour, usually resulting in either a panic or a hang.

DESCRIPTION: Routines that can block (for example, because of memory allocation) were being called under spinlock protection.

RESOLUTION: The spinlock is now released before calling routines that can block.

* 4177640 (Tracking ID: 4164638)

SYMPTOM: The VxFS fsck binary consumes a lot of memory.

DESCRIPTION: Not freeing thread-local heap memory causes fsck to unnecessarily keep holding a lot of user-space memory, which might degrade overall system performance.

RESOLUTION: Code changes were done to fix the issue.

* 4177656 (Tracking ID: 4167362)

SYMPTOM: A memory leak is observed in fsck through valgrind.

DESCRIPTION: When running the fsck binary under valgrind, memory leaks are observed.

RESOLUTION: Code changes were done to handle the memory leaks.

* 4177657 (Tracking ID: 4144669)

SYMPTOM: The file system will be marked for FULLFSCK.

DESCRIPTION: Inside the vx_dirlook function, a pass-through inode whose fileset is marked for deletion is looked up. An error should be returned rather than marking the FS for full fsck.

RESOLUTION: Code changes are done to fix the issue.

* 4177661 (Tracking ID: 4141854)

SYMPTOM: Conformance->fsadm hits a coredump.

DESCRIPTION: The issue is caused by recursion in the EO logging code.

RESOLUTION: Fixed by adding a variable that prevents the code from recursing.

* 4177662 (Tracking ID: 4171368)

SYMPTOM: Node panicked while unmounting a filesystem.

DESCRIPTION: Panic in iput() due to an invalid address in i_sb. If a nested unmount is stuck and the parent mount is unmounted simultaneously, the parent will be force unmounted and its superblock will be cleared. This superblock address may then be reused and freed by another module, leaving an invalid address in i_sb.
iput
vx_os_iput_enqueue [vxfs]
vx_do_unmount [vxfs]
vx_unmount [vxfs]
generic_shutdown_super
kill_block_super
vx_kill_sb [vxfs]

RESOLUTION: Code changes have been made to resolve this issue.

* 4177663 (Tracking ID: 4168443)

SYMPTOM: System panicked at vx_clonemap after an smap corruption. Following is the panic stack:
[exception RIP: vx_clonemap+317]
#0 vx_tflush_map at ffffffffc0dd7b58 [vxfs]
#1 vx_fsq_flush at ffffffffc0ff05c9 [vxfs]
#2 vx_fsflush_fsq at ffffffffc0ff2c5f [vxfs]
#3 vx_workitem_process at ffffffffc0eef9ea [vxfs]
#4 vx_worklist_process at ffffffffc0ef8775 [vxfs]
#5 vx_worklist_thread at ffffffffc0ef8828 [vxfs]
#6 vx_kthread_init at ffffffffc0f7f8e4 [vxfs]
#7 kthread at ffffffff810b3fb1
#8 kthread_create_on_node at ffffffff810b3ee0
#9 ret_from_fork at ffffffff816c0537

DESCRIPTION: There was no proper error handling in case an smap is marked bad while changing its state in vx_smapchange. This can lead to a system panic while trying to flush an emap, as the emap buffer will not be present in core.

RESOLUTION: Code changes have been done to handle the smap mapbad error properly.
* 4177664 (Tracking ID: 4175488)

SYMPTOM: DB2 hang seen with the following stack trace:
#0 __schedule
#1 schedule
#2 vx_svar_sleep_unlock
#3 vx_rwsleep_rec_lock
#4 vx_recsmp_rangelock
#5 vx_irwlock
#6 vx_irwglock
#7 vx_setcache
#8 vx_uioctl
#9 vx_unlocked_ioctl

DESCRIPTION: The VxFS CIO advisory is set to improve performance by enabling concurrent reads and writes on a file. If the CIO advisory is being set on a file while another thread is doing a read on the same file/inode (locking it in SHARED mode), a condition can arise where the read thread incorrectly misses unlocking the file, does its processing, and exits. Because the read thread misses releasing the lock, the inode remains locked in SHARED mode. Later, when another thread tries to set the CIO advisory on the same file, it needs to lock the inode in EXCLUSIVE mode, which conflicts with the SHARED lock that was never released. This can cause the thread to hang indefinitely.

RESOLUTION: Code changes have been done to fix the missing unlock.

* 4177785 (Tracking ID: 4171380)

SYMPTOM: A memory leak was observed during a code walk-through.

DESCRIPTION: A memory leak is seen in the function extract_mountpoint in the error case.

RESOLUTION: Code changes were done to handle the memory leak.

* 4186376 (Tracking ID: 4117908)

SYMPTOM: VRTSvxfs and VRTSfsadv failed to load on SAP SLES15.

DESCRIPTION: VRTSvxfs and VRTSfsadv were failing to install on SAP SLES15.

RESOLUTION: VRTSvxfs and VRTSfsadv are updated to support SAP SLES15.

* 4187359 (Tracking ID: 4187360)

SYMPTOM: The VxFS module failed to load on the SLES15-SP6 kernel.

DESCRIPTION: This issue occurs due to changes in the SLES15-SP6 kernel.

RESOLUTION: The VxFS module is updated to accommodate the changes in the kernel and load as expected on the SLES15-SP6 kernel.

* 4188020 (Tracking ID: 4178929)

SYMPTOM: ext3 filesystem creation failed after a force unmount.

DESCRIPTION: If a VxFS file system is force unmounted and another file system is then created on the same device, the operation fails with the error: device in use by the system; will not make a filesystem here!

RESOLUTION: Code changes are done to end the claim on the devices, as introduced with newer kernels.

Patch ID: VRTSvxfs-8.0.2.1700

* 4159284 (Tracking ID: 4145203)

SYMPTOM: VxFS startup scripts fail to invoke veki for kernel versions higher than 3.x.

DESCRIPTION: The VxFS startup script failed to start veki because it was calling the System V init script instead of the systemctl interface.

RESOLUTION: The code now checks whether the kernel version is greater than 3.x and systemd is present; if so, the systemctl interface is used, otherwise the System V interface is used.

* 4159938 (Tracking ID: 4155961)

SYMPTOM: System panic due to a null i_fset in vx_rwlock().

DESCRIPTION: Panic in vx_rwlock() due to a race between the vx_rwlock() and vx_inode_deinit() functions. Panic stack:
[exception RIP: vx_rwlock+174]
...
#10 __schedule
#11 vx_write
#12 vfs_write
#13 sys_pwrite64
#14 system_call_fastpath

RESOLUTION: Code changes have been done to fix this issue.

* 4161120 (Tracking ID: 4161121)

SYMPTOM: Non-root users are unable to access log files under the /var/log/vx directory.

DESCRIPTION: In the VxFS post-install script, the /var/log/vx directory is created with 0600 permission, so non-root users are unable to read log files from this location. As part of the EO log file permission tunable changes, the log file permissions are changed as expected, but because of the 0600 directory permission, non-root users are still unable to access these log files.

RESOLUTION: Changed the /var/log/vx directory permission to 0755.
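After the updated package is installed, the directory mode can be verified as follows; per the resolution above, it should report 0755 (drwxr-xr-x):

# stat -c '%a' /var/log/vx
755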
Patch ID: VRTSvxfs-8.0.2.1600

* 4157410 (Tracking ID: 4157409)

SYMPTOM: Security vulnerabilities were observed in the current versions of the third-party components [sqlite and expat] used by VxFS.

DESCRIPTION: In an internal security scan, security vulnerabilities in [sqlite and expat] were observed.

RESOLUTION: Upgraded the third-party components [sqlite and expat] to address these vulnerabilities.

Patch ID: VRTSvxfs-8.0.2.1500

* 4119626 (Tracking ID: 4119627)

SYMPTOM: The fsck command faces a few SELinux permission denials.

DESCRIPTION: The fsck command faces a few SELinux permission denials when managing var_log_t files and searching init_var_run_t directories.

RESOLUTION: The required SELinux permissions are added so that the fsck command can manage var_log_t files and search init_var_run_t directories.

* 4133481 (Tracking ID: 4133480)

SYMPTOM: The VxFS module fails to load on SLES15 SP5.

DESCRIPTION: This issue occurs due to changes in the SLES15 SP5 kernel.

RESOLUTION: The VxFS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.

* 4134952 (Tracking ID: 4134951)

SYMPTOM: The VxFS module fails to load on Azure SLES15 SP5.

DESCRIPTION: This issue occurs due to changes in the Azure SLES15 SP5 kernel.

RESOLUTION: The VxFS module is updated to accommodate the changes in the kernel and load as expected on Azure SLES15 SP5.

* 4146580 (Tracking ID: 4141876)

SYMPTOM: The old SecureFS configuration is getting deleted.

DESCRIPTION: Multiple instances of the vxschadm binary may be executed to update the config file, and there is a high chance that the last updater will nullify the changes made by the previous one.

RESOLUTION: Added a synchronisation mechanism between two processes of the vxschadm command running across the InfoScale cluster.

* 4148734 (Tracking ID: 4148732)

SYMPTOM: Memory is leaked by binaries/daemons that call this API, e.g. the vxfstaskd daemon.

DESCRIPTION: On every call, get_dg_vol_names() does not free 8192 bytes of memory, which results in an increase in the total virtual memory consumption of vxfstaskd.

RESOLUTION: Free the unused memory.

* 4150065 (Tracking ID: 4149581)

SYMPTOM: WORM checkpoints and files are not deleted even after their retention period has expired.

DESCRIPTION: Frequent FS freeze operations, such as checkpoint creation, may cause the SecureClock to drift from its regular update cycle.

RESOLUTION: Code changes have been done to fix this bug.

Patch ID: VRTSvxfs-8.0.2.1400

* 4141666 (Tracking ID: 4141665)

SYMPTOM: Security vulnerabilities exist in the zlib third-party components used by VxFS.

DESCRIPTION: VxFS uses zlib third-party components with some security vulnerabilities.

RESOLUTION: VxFS is updated to use a newer version of the zlib third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.2.1300

* 4133481 (Tracking ID: 4133480)

SYMPTOM: The VxFS module fails to load on SLES15 SP5.

DESCRIPTION: This issue occurs due to changes in the SLES15 SP5 kernel.

RESOLUTION: The VxFS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.

* 4133965 (Tracking ID: 4116329)

SYMPTOM: The fsck -o full -n command will fail with the error: "ERROR: V-3-28446: bc_write failure devid = 0, bno = 8, len = 1024"

DESCRIPTION: Previously, when correcting the file system WORM/SoftWORM flags, fsck did not check whether the user wanted to correct the pflags or only wanted to validate whether the flag was missing.
Also, fsck was not capable of handling the SOFTWORM flag.

RESOLUTION: Code added so that fsck does not try to fix the problem when run with the -n option. The SOFTWORM scenario is also handled now.

* 4134040 (Tracking ID: 3979756)

SYMPTOM: Multiple fcntl F_GETLK calls take longer to complete on CFS than on LM. Each call adds more delay, which shows up later as performance degradation.

DESCRIPTION: F_SETLK utilizes the lock caches while taking or invalidating locks, which is why it does not need to broadcast messages to peer nodes. F_GETLK, on the other hand, does not utilize the caches and broadcasts messages to all peer nodes. Therefore F_SETLK operations are not penalized, but F_GETLK is, by almost a factor of 2, when used on CFS as compared to LM.

RESOLUTION: Added a cache for the F_GETLK operation as well, so that broadcast messages are avoided, which saves time.

Patch ID: VRTSvxfs-8.0.2.1200

* 4121230 (Tracking ID: 4119990)

SYMPTOM: Some nodes in the cluster are in a hang state and recovery is stuck.

DESCRIPTION: There is a deadlock in which one thread locks a buffer and waits for recovery to complete, while recovery may get stuck flushing and invalidating buffers from the buffer cache because it cannot lock that buffer.

RESOLUTION: If recovery is in progress, the buffer is released and VX_ERETRY is returned so that callers retry the operation. For cases where locks are taken on two buffers, the VX_NORECWAIT flag is passed, which retries the operation after releasing both buffers.

* 4125870 (Tracking ID: 4120729)

SYMPTOM: Incorrect file replication (VFR) job status at the VFR target site while replication is in the running state at the source.

DESCRIPTION: If a full sync is started in recovery mode, the state on the target is not updated at the start of replication (from failed to full-sync running). This missed state change causes issues with the states for subsequent incremental syncs.

RESOLUTION: Updated the code to set the correct state on the target when a VFR full sync is started in recovery mode.

* 4125871 (Tracking ID: 4114176)

SYMPTOM: After failover, job sync fails with the error "Device or resource busy".

DESCRIPTION: If a job is in the failed state on the target because of a job failure on the source side, repld was not updating its state when it was restarted in recovery mode. Because of this, the job state remained in the running state even after successful replication on the target. With this state on the target, if the job is promoted, the replication process does not create a new ckpt for the first sync after failover, which corrupts the state file on the new source. Because of this incorrect/corrupt state file, job sync from the new source fails with the error "Device or resource busy".

RESOLUTION: Code is modified to correct the state on the target when the job is started in recovery mode.

* 4125873 (Tracking ID: 4108955)

SYMPTOM: A VFR job hangs on the source if thread creation fails on the target.

DESCRIPTION: On the target, if thread creation for pass completion fails because of high memory usage, the repld daemon does not send that failure reply to the source. This can leave the vxfsreplicate process waiting indefinitely for the pass-completion reply from the target, hanging the job on the source and requiring manual intervention to kill the job.

RESOLUTION: Code is modified to retry thread creation on the target; if it still fails after 5 retries, the target replies to the source with an appropriate error.
* 4125875 (Tracking ID: 4112931)

SYMPTOM: vxfsrepld consumes a lot of virtual memory when it has been running for a long time.

DESCRIPTION: The current VxFS thread pool is not efficient when used by a daemon process like vxfsrepld. It did not release the underlying resources used by newly created threads, which in turn increased the virtual memory consumption of the process. The underlying resources of threads are released either when pthread_join() is called on them or when the threads are created with the detached attribute. In the current implementation, pthread_join() is called only when the thread pool is destroyed as part of cleanup, but with vxfsrepld, pool_destroy() is not expected to be called every time a job succeeds; it is called only when repld is stopped. This led to accumulation of thread resources and increased VM usage of the process.

RESOLUTION: Code is modified to detach threads when they exit.

* 4125878 (Tracking ID: 4096267)

SYMPTOM: Veritas File Replication jobs might fail when a large number of jobs run in parallel.

DESCRIPTION: With a large number of jobs configured and running in parallel with Veritas File Replication, there is a chance of referring to a job that has already been freed, due to which a core is generated in the replication service and the job might fail.

RESOLUTION: Updated code to take a hold while checking for an invalid job configuration.

* 4126104 (Tracking ID: 4122331)

SYMPTOM: The block number, device id information, and in-core inode state are missing from the error messages logged in syslog while marking a bitmap/inode as "BAD".

DESCRIPTION: The block number, device id information, and in-core inode state are missing from the error messages logged in syslog upon encountering bitmap corruption or while marking an inode "BAD".

RESOLUTION: Code changes have been done to include the missing information in the corresponding error messages.

* 4127509 (Tracking ID: 4107015)

SYMPTOM: The VxFS module fails to load on a Linux minor kernel.

DESCRIPTION: This issue occurs due to changes in the minor kernel.

RESOLUTION: Modified the existing modinst-vxfs script to consider the kernel-build version in the exact-version-module calculation.

* 4127510 (Tracking ID: 4107777)

SYMPTOM: The VxFS module fails to load on a Linux minor kernel.

DESCRIPTION: This issue occurs due to changes in the minor kernel.

RESOLUTION: Modified the existing modinst-vxfs script to consider the kernel-build version in the best-fit-module-version calculation if the exact-version module is not present.

* 4127594 (Tracking ID: 4126957)

SYMPTOM: If "fsadm -o mntunlock=<string> <mountpoint>" and "umount -f <mountpoint>" operations are run in parallel, the system may crash with the following stack:
vx_aioctl_unsetmntlock+0xd3/0x2a0 [vxfs]
vx_aioctl_vfs+0x256/0x2d0 [vxfs]
vx_admin_ioctl+0x156/0x2f0 [vxfs]
vxportalunlockedkioctl+0x529/0x660 [vxportal]
do_vfs_ioctl+0xa4/0x690
ksys_ioctl+0x64/0xa0
__x64_sys_ioctl+0x16/0x20
do_syscall_64+0x5b/0x1b0

DESCRIPTION: There is a race condition between these two operations: by the time the fsadm thread tries to access the FS data structures, the umount operation may already have freed them, which leads to a panic.

RESOLUTION: As a fix, the fsadm thread first checks whether the umount operation is in progress. If so, it fails rather than continuing.

* 4127720 (Tracking ID: 4127719)

SYMPTOM: The fsdb binary fails to open the device on a VVR secondary volume in RW mode although it has write permissions. The fstyp binary could not dump the fs_uuid value.

DESCRIPTION: fsdb, when run on a VVR secondary volume, bails out. At the FS level the volume has write permission, but since it is a secondary from the VVR perspective, it is not allowed to be opened in write mode at the block layer. The fstyp binary could not dump the fs_uuid value along with the other superblock fields.

RESOLUTION: Added fallback logic to fsdb: if fs_open fails to open the device in read-write mode, it tries to open it in read-only mode. Fixed the fstyp binary to dump the fs_uuid value along with the other superblock fields.
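With the fstyp part of this fix in place, the UUID should appear among the superblock fields printed in verbose mode (the device path below is a placeholder):

# fstyp -v /dev/vx/rdsk/testdg/vol1 | grep -i uuid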
* 4127785 (Tracking ID: 4127784)

SYMPTOM: /opt/VRTS/bin/fsppadm validate /mnt4 invalid_uid.xml
UX:vxfs fsppadm: WARNING: V-3-26537: Invalid USER id 1xx specified at or near line 10

DESCRIPTION: Before this fix, the fsppadm command did not stop parsing and treated an invalid uid/gid as a warning only. Here, an invalid uid/gid means one that is not an integer number; if the given uid/gid merely does not exist, it is still a warning.

RESOLUTION: Code added to give the user a proper error when invalid user/group ids are provided.

* 4128249 (Tracking ID: 4119965)

SYMPTOM: The VxFS mount binary failed to mount VxFS with an SELinux context.

DESCRIPTION: Mounting the file system using the VxFS binary with a specific SELinux context shows the error below:
/FSQA/fsqa/vxfsbin/mount -t vxfs /dev/vx/dsk/testdg/vol1 /mnt1 -ocontext="system_u:object_r:httpd_sys_content_t:s0"
UX:vxfs mount: ERROR: V-3-28681: Selinux context is invalid or option/operation is not supported. Please look into the syslog for more information.

RESOLUTION: The VxFS mount command is modified to pass context options to the kernel only if SELinux is enabled.

* 4128723 (Tracking ID: 4114127)

SYMPTOM: Hang in the VxFS internal LM Conformance - inotify test.

DESCRIPTION: On the SLES15 SP4 kernel, the internal test goes into a hang state with the following process stack:
[<0>] fsnotify_sb_delete+0x19d/0x1e0
[<0>] generic_shutdown_super+0x3f/0x120
[<0>] deactivate_locked_super+0x3c/0x70
[<0>] vx_unmount_cleanup_notify.part.37+0x96/0x150 [vxfs]
[<0>] vx_kill_sb+0x91/0x2b0 [vxfs]
[<0>] deactivate_locked_super+0x3c/0x70
[<0>] cleanup_mnt+0xb8/0x150
[<0>] task_work_run+0x70/0xb0
[<0>] exit_to_user_mode_prepare+0x224/0x230
[<0>] syscall_exit_to_user_mode+0x18/0x40
[<0>] do_syscall_64+0x67/0x80
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xae

RESOLUTION: Code changes are done to resolve the hang.

* 4129494 (Tracking ID: 4129495)

SYMPTOM: Kernel panic observed in internal VxFS LM conformance testing.

DESCRIPTION: A kernel panic was observed in internal VxFS testing: the OS writeback thread marks an inode for writeback and then calls the filesystem hook vx_writepages. The OS writeback thread is not expected to get inside iput(), as it would self-deadlock while waiting on writeback. This deadlock causes the tsrapi command to hang, which further causes a kernel panic.

RESOLUTION: Modified code to avoid deallocation of the inode while inode writeback is in progress.

* 4129681 (Tracking ID: 4129680)

SYMPTOM: The VxFS rpm does not have a changelog.

DESCRIPTION: A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION: A changelog is generated and added to the VxFS rpm.
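With this change in place, the changelog can be inspected with standard rpm tooling, for example:

# rpm -q --changelog VRTSvxfs | head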
INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sles15_x86_64-Patch-8.0.2.3200.tar.gz to /tmp
2. Untar infoscale-sles15_x86_64-Patch-8.0.2.3200.tar.gz to /tmp/hf
# mkdir /tmp/hf
# cd /tmp/hf
# gunzip /tmp/infoscale-sles15_x86_64-Patch-8.0.2.3200.tar.gz
# tar xf /tmp/infoscale-sles15_x86_64-Patch-8.0.2.3200.tar
3. Install the hotfix (note that the installation of this P-Patch will cause downtime):
# pwd
/tmp/hf
# ./installVRTSinfoscale802P3200 [<host1> <host2>...]

You can also install this patch together with the 8.0.2 base release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 8.0.2 directory and invoke the installer script with the -patch_path option, where -patch_path points to the patch directory:
# ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.

REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.

SPECIAL INSTRUCTIONS
--------------------
NONE

OTHERS
------
NONE