
InfoScale 8.0 Update 4 Cumulative Patch on SLES15 Platform


Abstract

InfoScale 8.0 Update 4 Cumulative Patch on SLES15 Platform

Description

This is a cumulative patch release for InfoScale 8.0 on the SLES15 platform.

 

Supported Platforms:

SLES15SP4, SLES15SP6

 

SORT ID: 22240


PATCH ID:

InfoScale 8.0 Patch 3600

(SLES15 Support on IS 8.0)

 

Pre-requisites

You must have at least InfoScale 8.0 GA installed to apply this update.
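As a quick sanity check before applying the patch, the installed base version can be inspected. This is a sketch: the package names are taken from the PACKAGES AFFECTED list in this readme, and the showversion utility path is an assumed standard InfoScale install location; verify both against your deployment.

```shell
# Sketch: confirm the node is at InfoScale 8.0 GA or later before patching.
# Package names are from the PACKAGES AFFECTED list below; the showversion
# path is an assumed standard InfoScale location.
rpm -q VRTSvxvm VRTSvxfs VRTSvcs VRTSllt   # expect versions 8.0.0.x or later
/opt/VRTS/install/showversion              # summarizes installed InfoScale versions
```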

 

1. If internet access is not available, install this patch together with the latest CPI patch downloaded from the Download Center.
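For the offline case above, the downloaded CPI patch can typically be supplied to the installer on the command line. The following is a sketch only: the -require option follows InfoScale installer conventions, and the directory and file names shown are placeholders; verify the exact option and the actual patch file name against the InfoScale installation documentation.

```shell
# Sketch, assuming the patch archive has been extracted and the CPI patch
# (a Perl script) has been downloaded from the Download Center.
# The -require option and all paths/file names here are assumptions.
cd /tmp/infoscale-patch                 # placeholder extraction directory
./installer -require /tmp/CPI_patch.pl  # placeholder CPI patch file name
```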

                          * * * READ ME * * *
                       * * * InfoScale 8.0 * * *
                         * * * Patch 3600 * * *
                         Patch Date: 2025-06-08


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 8.0 Patch 3600


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES15 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTScavf
VRTScps
VRTSdbac
VRTSdbed
VRTSfsadv
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSperl
VRTSpython
VRTSrest
VRTSsfcpi
VRTSsfmh
VRTSspt
VRTSvcs
VRTSvcsag
VRTSvcsea
VRTSveki
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 8.0
   * InfoScale Enterprise 8.0
   * InfoScale Foundation 8.0
   * InfoScale Storage 8.0


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSsfcpi-8.0.0.1500
* 4065168 (4065169) The InfoScale product installer fails to upgrade the VRTSveki package from the base Infoscale version.
* 4065279 (4065280) The product installer uninstalls the VRTSrest package during an upgrade from the base InfoScale version to an InfoScale patch bundle.
* 4068297 (4068298) Installer fails to configure the REST server while using the -rest_server option.
* 4068710 (4071333) On a Solaris system, cluster formation fails after performing a rolling upgrade phase 1 from 6.2.1 to 8.0 with patch_path option and rebooting the system.
* 4068713 (4071331) On a Solaris system, while upgrading to InfoScale 8.0 with a patch by using the patch_path option, some services fail to stop.
* 4070269 (4071337) On Solaris, after upgrading InfoScale from 6.2.1 to 8.0 with the patch_path option, the system panics while starting the vxfen service.
* 4070643 (4070427) After upgrading InfoScale from 6.2.1 to 8.0, installer fails to start services.
* 4074981 (4075597) On a Linux system, Infoscale configuration fails after installation.
* 4075804 (4073591) On Solaris, vxprint command throws error while checking disk group - 'VxVM vxprint ERROR V-5-1-924 Record -g not found'.
* 4079926 (4079922) Installer fails to complete installation after it automatically downloads a required support patch from SORT that contains a VRTSperl package.
* 4080099 (4080098) Installer fails to complete the CP server configuration.
* 4080288 (4079853) Patch installer flashes a false error message with -precheck option.
* 4082266 (4082265) On RHEL 8.6 and above, installer fails to complete the LLT over RDMA configuration due to a missing ipcalc package.
* 4084976 (4084975) Installer fails to complete the CP server configuration.
* 4085000 (4085017) Installer fails to start the vxfen service during CPS-based fencing.
* 4085612 (4087319) On RHEL 7.4.2, Installer fails to uninstall VxVM while upgrading from 7.4.2 to 8.0U1.
* 4085770 (4086045) When Infoscale cluster is reconfigured, LLT, GAB, VXFEN services fail to start after reboot.
* 4086257 (4086533) VRTSfsadv pkg fails to upgrade from 7.4.2 U4 to 8.0 U1 while using yum upgrade.
* 4086661 (4086623) Installer fails to complete the CP server configuration.
* 4086724 (4086742) On InfoScale 8.0 U1, addnode operation fails during symmetry check of a new node with other nodes in the cluster.
* 4087907 (4087906) Executive order compliant logging support was not available on Linux.
* 4088743 (4088698) CPI installer tries to download a must-have patch whose version is lower than the version specified in media path.
* 4089937 (4089934) Installer does not update the '/opt/VRTSvcs/conf/config/types.cf' file after a VRTSvcs patch upgrade.
* 4092409 (4092408) On a Linux platform, CPI installer fails to correctly identify status of vxfs_replication service.
* 4095938 (4095939) For Infoscale 8.0, vxfen service fails to start after product configuration.
* 4097529 (4076583) On a Solaris system, the InfoScale installer runs set/unset publisher several times but does not disable the publisher. The deployment process slows down as a result.
* 4099557 (4099558) On Linux platforms; vxportal, fdd, vxcafs services fail to start.
* 4101461 (4104411) Onenode VCS CPS configuration does not need a Virtual IP address as a single node does not have a failover IP address configured.
* 4111808 (4112806) Shift operation removed as it causes pkg install/upgrade failure.
* 4116696 (4116695) On Solaris, the publisher list gets displayed during the InfoScale start, stop, and uninstall processes, and a unique publisher list is not displayed during install and upgrade.
* 4117196 (4104627) Providing multiple-patch support up to 10 patches.
* 4120655 (4120656) VRTSrest package fails to install with 8.0 update 2 release.
* 4122750 (4122748) On Linux, had service fails to start during rolling upgrade from InfoScale 7.4.1 or lower to higher InfoScale version.
* 4131684 (4131682) On SunOS, installer prompts the user to install 'bourne' package if it is not available.
* 4133017 (4135748) On Linux, the Installer service fails to check "mokutil.x86_64" during installation as prerequisite package.
* 4135632 (4135014) The CPI installer should not prompt to install InfoScale after "./installer -precheck" is done.
* 4136433 (4136432) Adding a node to a higher-version InfoScale node fails.
* 4161806 (4160983) In Solaris, the vxfs modules are getting removed from current BE while upgrading the Infoscale to ABE.
* 4175681 (4177792) The installer will make correct entry for logging tunable for vxdbdctrl.service.
* 4188708 (4188987) The installer will now configure CPS using -fencing option.
Patch ID: VRTSrest-2.0.0.1300
* 4088973 (4089451) When a read-only file system was created on a volume, GET on the mountpoint's details was throwing an error.
* 4089033 (4089453) Some VCS REST APIs were crashing the Gunicorn worker.
* 4089041 (4089449) GET resources API on empty service group was throwing an error.
* 4089046 (4089448) Logging in REST API is not EO-compliant.
Patch ID: VRTSamf-8.0.0.3200
* 4145249 (4136003) A cluster node panics when the AMF module overruns internal buffer to analyze arguments of an executable binary.
* 4162992 (4161644) System panics when AMF enabled and there are Process/Application resources.
* 4177981 (4164328) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).
* 4178828 (4168084) AMF caused kernel BUG: scheduling while atomic when umount file system.
* 4187383 (4180026) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 5(RHEL9.5).
* 4188734 (4182737) Veritas Infoscale does not support SLES15SP6.
* 4188838 (4188625) AMF accesses proc_mounts incorrectly on SLES12SP5 with kernel update 4.12.14-122.231.
Patch ID: VRTSamf-8.0.0.2500
* 4124418 (4124417) Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.
Patch ID: VRTSamf-8.0.0.2300
* 4111444 (4090439) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 4 (SLES 15 SP4).
Patch ID: VRTSamf-8.0.0.1800
* 4089724 (4089722) The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on RHEL and SLES platforms.
Patch ID: VRTSamf-8.0.0.1300
* 4067092 (4056991) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 3 (SLES 15 SP3).
Patch ID: VRTScps-8.0.0.3200
* 4188653 (4188652) After configuring CP server, getting EO related error in CP server logs.
Patch ID: VRTScps-8.0.0.1900
* 4091306 (4088158) Security vulnerabilities exists in Sqlite third-party components used by VCS.
Patch ID: VRTScps-8.0.0.1800
* 4073050 (4018218) Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2
Patch ID: VRTScps-8.0.0.1400
* 4066225 (4056666) The Error writing to database message may intermittently appear in syslogs on CP servers.
Patch ID: VRTSllt-8.0.0.3200
* 4135413 (4084657) RHEL8/7.4.1 new installation, fencing/LLT panic while using TCP over LLT.
* 4135420 (3989372) When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.
* 4136152 (4124759) Panic happened with llt_ioship_recv on a server running in AWS.
* 4145248 (4139781) Unexpected or corrupted skb, memory type missing in buffer header.
* 4156794 (4135825) If the root file system is full during LLT start, the LLT module fails to load thereafter.
* 4156815 (4087543) Node panic observed at llt_rdma_process_ack+189
* 4167861 (4128887) During rmmod of llt package, warning trace is observed on kernel versions higher than 5.14 on RHEL9 and SLES15.
* 4188639 (4167108) Replaced yield() with cond_resched().
* 4188655 (4186645) Enabled LLT_IRQBALANCE by default, and added a check on hpe_irqbalance to detect any anomalies.
* 4188699 (4182384) Veritas Infoscale Availability does not support SLES15SP6.
Patch ID: VRTSllt-8.0.0.2500
* 4124419 (4124417) Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.
Patch ID: VRTSllt-8.0.0.2300
* 4111469 (4090439) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 4 (SLES 15 SP4).
* 4112345 (4087662) During memory fragmentation LLT module may fail to allocate large memory leading to the node eviction or a node not being able to join.
Patch ID: VRTSllt-8.0.0.1800
* 4061158 (4061156) IO error on /sys/kernel/slab folder
* 4079637 (4079636) LLT over IPsec is causing node crash
* 4079662 (3981917) Support LLT UDP multiport on 1500 MTU based networks.
* 4080630 (4046953) Delete / disable confusing messages.
Patch ID: VRTSllt-8.0.0.1400
* 4066063 (4066062) Node panic is observed while using llt udp with multiport enabled.
* 4066667 (4040261) During LLT configuration, if set-verbose is set to 1 in /etc/llttab, an lltconfig core dump is observed.
Patch ID: VRTSvcsea-8.0.0.3200
* 4188599 (4180091) Offline script not able to exit as fuser check is being run on all disks, even the ones not under VCS control.
Patch ID: VRTSvcsea-8.0.0.2500
* 4118769 (4073508) Oracle virtual fire-drill is failing.
Patch ID: VRTSdbed-8.0.0.1800
* 4079372 (4073842) SFAE changes to support Oracle 21c
Patch ID: VRTSvcsag-8.0.0.3200
* 4118448 (4075950) IPv6 neighbor flush logic needs to be added to IP/MultiNIC agents
* 4121275 (4121270) EBSvol agent error in attach disk : RHEL 7.9 + Infoscale 8.0 on AWS instance type c6i.large with NVME devices.
* 4122004 (4122001) NIC resource remain online after unplug network cable on ESXi server.
* 4127323 (4127320) The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.
* 4161344 (4152812) AWS EBSVol agent takes long time to perform online and offline operations on resources.
* 4161350 (4152815) AWS EBS Volume in-use with other AWS instance is getting used by cluster nodes through AWS EBSVol agent.
* 4162952 (4142040) While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs.conf/config/types.cf' file on Veritas Cluster Server(VCS) might be incorrectly updated.
* 4163231 (4152700) When Private DNS Zone resource ID is passed, the AzureDNSZone Agent returns an error saying that the resource cannot be found.
* 4163234 (4152886) AWSIP agent fails to bring OverlayIP resources online and offline on the instances in a shared VPC.
* 4165268 (4162658) LVMVolumeGroup resource fails to offline/clean in cloud environment after path failure.
* 4169950 (4169794) Add a new attribute 'LinkTest' to the NIC agent to monitor a n/w device for which no IP is configured
* 4177815 (4175426) VMwareDisk Agent taking longer time to failover.
* 4178980 (4175426) VMwareDisk Agent taking longer time to failover.
* 4188654 (4188318) KVMGuest agent: automatic removal of an invalid env file once the environment becomes valid.
Patch ID: VRTSvcsag-8.0.0.2500
* 4118318 (4113151) VMwareDisksAgent reports resource online before VMware disk to be online is present into vxvm/dmp database.
* 4118448 (4075950) IPv6 neighbor flush logic needs to be added to IP/MultiNIC agents
* 4118455 (4118454) Process agent fails to come online when root user shell is set to /sbin/nologin.
* 4118767 (4094539) Agent resource monitor not parsing process name correctly.
Patch ID: VRTSgms-8.0.0.3600
* 4189081 (4184622) GMS support for SLES15-SP6.
Patch ID: VRTSgms-8.0.0.2800
* 4057427 (4057176) Rebooting the system results into emergency mode due to corruption of module dependency files.
* 4119111 (4119110) GMS support for SLES15-SP4 azure kernel.
Patch ID: VRTSgms-8.0.0.2400
* 4111346 (4092229) GMS support for SLES15 SP4.
Patch ID: VRTSgms-8.0.0.1800
* 4079190 (4071136) gms.config is not created when installing GMS rpm.
Patch ID: VRTSgms-8.0.0.1200
* 4065686 (4056803) GMS support for SLES15 SP3.
Patch ID: VRTSveki-8.0.0.3600
* 4188696 (4182362) Veritas Infoscale Availability does not support SLES15SP6.
Patch ID: VRTSveki-8.0.0.2800
* 4118568 (4110457) VEKI packaging was failing due to a dependency issue.
* 4119216 (4119215) VEKI support for SLES15-SP4 azure kernel.
Patch ID: VRTSveki-8.0.0.2400
* 4111580 (4111579) VEKI support for SLES15 SP4.
Patch ID: VRTSperl-5.34.0.4
* 4072234 (4069607) Security vulnerability detected on VRTSperl 5.34.0.0 released with Infoscale 8.0.
* 4075150 (4075149) Security vulnerabilities detected in OpenSSL packaged with VRTSperl/VRTSpython released with Infoscale 8.0.
Patch ID: VRTSdbac-8.0.0.2800
* 4178724 (4164328) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).
Patch ID: VRTSdbac-8.0.0.2400
* 4124424 (4124417) Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.
Patch ID: VRTSdbac-8.0.0.2300
* 4111610 (4090439) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 4 (SLES 15 SP4).
Patch ID: VRTSdbac-8.0.0.1800
* 4089728 (4089722) The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on RHEL and SLES platforms.
Patch ID: VRTSdbac-8.0.0.1200
* 4056997 (4056991) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 3 (SLES 15 SP3).
Patch ID: VRTSpython-3.9.2.25
* 4189377 (4154096) Fixing security issue in PyKMIP module under VRTSpython.
Patch ID: VRTSfsadv-8.0.0.2600
* 4103001 (4103002) Replication failures observed in internal testing
Patch ID: VRTSfsadv-8.0.0.2100
* 4092150 (4088024) Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.
Patch ID: VRTSvxfs-8.0.0.1800
* 4078335 (4076412) Addressing Executive Order (EO) 14028: initial requirements intended to improve the Federal Government's investigative and remediation capabilities related to cybersecurity incidents.
Patch ID: VRTSfsadv-8.0.0.1200
* 4066092 (4057644) A warning appears in dmesg: SysV service '/etc/init.d/fsdedupschd' lacks a native systemd unit file.
Patch ID: VRTSsfmh-HF0800510
* 4157270 (4157265) sfmh for IS 8.0 RHEL8.9 Platform Patch
Patch ID: VRTSvxfs-8.0.0.3600
* 4163383 (4155961) Panic in vx_rwlock during force unmount.
* 4188774 (3743572) File system may hang when reaching the 1 billion inode limit.
* 4188801 (4164638) Fixing thread local memory leak in FSCK binary.
* 4188802 (4164888) Fixing bugs in fsckptadm binary regarding WORM and MAXTS options.
* 4188803 (4165264) Memory leak happened inside vxfs kernel driver.
* 4189077 (4187360) VxFS support for SLES15-SP6.
Patch ID: VRTSvxfs-8.0.0.3100
* 4154855 (4141665) Security vulnerabilities exist in the Zlib third-party components used by VxFS.
Patch ID: VRTSvxfs-8.0.0.2900
* 4092518 (4096267) Veritas File Replication jobs might fail when a large number of jobs are run in parallel.
* 4097466 (4114176) After failover, job sync fails with error "Device or resource busy".
* 4107367 (4108955) VFR job hangs on source if thread creation fails on target.
* 4111457 (4117827) For EO compliance, there is a requirement to support three types of log file permissions (600, 640, and 644, with 600 as the default). A new eo_perm tunable is added to the vxtunefs command to manage the log file permissions.
* 4112417 (4094326) mdb invocation displays message "failed to add vx_sl_node_level walker: walk name already in use"
* 4114621 (4113060) On newer linux kernels, executing a binary on a vxfs mountpoint resulted in an EINVAL error.
* 4118795 (4100021) Running setfacl followed by getfacl resulting in "No such device or address" error.
* 4119023 (4116329) While checking FS sanity with the help of "fsck -o full -n" command, we tried to correct the FS flag value (WORM/Softworm), but failed because -n (read-only) option was given.
* 4119107 (4119106) VxFS support for SLES15-SP4 azure kernel.
* 4123143 (4123144) fsck binary generating coredump
Patch ID: VRTSvxfs-8.0.0.2600
* 4084880 (4084542) Enhance fsadm defrag report to display if FS is badly fragmented.
* 4088079 (4087036) The fsck binary has been updated to fix a failure while running with the "-o metasave" option on a shared volume.
* 4111350 (4098085) VxFS support for SLES15 SP4.
* 4111910 (4090127) CFS hang in vx_searchau().
Patch ID: VRTSvxfs-8.0.0.2500
* 4112919 (4110764) Security vulnerability observed in Zlib, a third-party component used by VxFS.
Patch ID: VRTSvxfs-8.0.0.2100
* 4095889 (4095888) Security vulnerabilities exist in the Sqlite third-party components used by VxFS.
* 4068960 (4073203) Veritas file replication might generate a core while replicating the files to target.
* 4071108 (3988752) Use ldi_strategy() routine instead of bdev_strategy() for IO's in solaris.
* 4072228 (4037035) VxFS should have the ability to control the number of inactive processing threads.
* 4078520 (4058444) Loop mounts using files on VxFS fail on Linux systems.
* 4079142 (4077766) VxFS kernel module might leak memory during readahead of directory blocks.
* 4079173 (4070217) Command fsck might fail with 'cluster reservation failed for volume' message for a disabled cluster-mounted filesystem.
* 4082260 (4070814) Security vulnerability observed in Zlib, a third-party component used by VxFS.
* 4082865 (4079622) Existing migration read/write iter operation handling is not fully functional because VxFS uses only the normal read/write file operations.
* 4083335 (4076098) Fix migration issues seen with falcon-sensor.
* 4085623 (4085624) While running fsck, fsck might dump core.
* 4085839 (4085838) Command fsck may generate core due to processing of zero size attribute inode.
* 4086085 (4086084) VxFS mount operation causes system panic.
* 4088341 (4065575) Write operation might be unresponsive on a local mounted VxFS filesystem in a no-space condition
Patch ID: VRTSvxfs-8.0.0.1700
* 4081150 (4079869) Security vulnerability in VxFS third-party components.
* 4083948 (4070814) Security vulnerability in the VxFS third-party component Zlib.
Patch ID: VRTSvxfs-8.0.0.1400
* 4055808 (4062971) Enable partition directory on WORM file system
* 4056684 (4056682) New features information on a filesystem with fsadm(file system administration utility) from a device is not displayed.
* 4062606 (4062605) Minimum retention time cannot be set if the maximum retention time is not set.
* 4065565 (4065669) Creating non-WORM checkpoints fails when the tunables - minimum retention time and maximum retention time are set.
* 4065651 (4065666) Enable partition directory on WORM file system having WORM enabled on files with retention period not expired.
Patch ID: VRTSvxfs-8.0.0.1300
* 4065679 (4056797) VxFS support for SLES15 SP3.
Patch ID: VRTSvxfen-8.0.0.3200
* 4173687 (4166666) Failed to configure Disk based fencing on rdm mapped devices from KVM host to kvm guest
* 4176817 (4176110) vxfentsthdw failed to verify fencing disks compatibility in KVM environment
* 4178096 (4164328) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).
* 4178826 (4176592) Flooding of 'vxfen.log' file with the error message - "VXFEN already configured".
* 4187379 (4180026) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 5(RHEL9.5).
* 4188703 (4182723) Veritas Infoscale does not support SLES15SP6.
Patch ID: VRTSvxfen-8.0.0.2500
* 4117657 (4108561) Reading vxfen reservation not working
* 4124421 (4124417) Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.
Patch ID: VRTSvxfen-8.0.0.2300
* 4111571 (4090439) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 4 (SLES 15 SP4).
Patch ID: VRTSvxfen-8.0.0.1800
* 4087166 (4087134) The error message 'Touch /var/VRTSvcs/log/vxfen/vxfen.log failed' appears after starting vxfen service.
* 4088061 (4089052) On RHEL9, Coordination Point Replacement operation is causing node panic
Patch ID: VRTSvxfen-8.0.0.1400
* 3951882 (4004248) vxfend generates core sometimes during vxfen race in CPS based fencing configuration
Patch ID: VRTSvcs-8.0.0.3200
* 4161822 (4129493) Tenable security scan kills the Notifier resource.
* 4162953 (4136359) When upgrading InfoScale with latest Public Patch Bundle or VRTSvcsag package, types.cf is updated.
* 4188663 (4188662) App group faulted during upgrade
Patch ID: VRTSvcs-8.0.0.2300
* 4038088 (4100720) netstat command is deprecated in SLES15.
Patch ID: VRTSvcs-8.0.0.2100
* 4103077 (4103073) Upgrading the Netsnmp component to fix security vulnerabilities.
Patch ID: VRTSvcs-8.0.0.1800
* 4084675 (4089059) The gcoconfig.log file permission is changed to 0600.
Patch ID: VRTSvcs-8.0.0.1400
* 4065820 (4065819) Protocol version upgrade failed.
Patch ID: VRTSvxvm-8.0.0.3300
* 4080269 (4044898) Copy rlink tags from reprec to info rec, through vxdg upgrade path.
* 4105598 (4107801) /dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.
* 4111442 (4066785) create new option usereplicatedev=only to import the replicated LUN only.
* 4111560 (4098391) Continuous system crash is observed during VxVM installation.
* 4120194 (4120191) IO hang occurred when flushing SRL to DCM because of a deadlock issue.
* 4123243 (4129663) Generate and add changelog in vxvm and aslapm rpm
* 4162732 (4073653) VxVM commands get hung after pause-resume and resync operation in CVR setup.
* 4162734 (4098144) vxtask list shows the parent process without any sub-tasks which never progresses for SRL volume
* 4162735 (4132265) Machine attached with NVMe devices may panic.
* 4162738 (4128451) A hardware replicated disk group fails to be auto-imported after reboot.
* 4162739 (4130642) node failed to rejoin the cluster after this node switched from master to slave due to the failure of the replicated diskgroup import.
* 4162740 (4134023) vxconfigrestore(Diskgroup configuration restoration) for H/W Replicated diskgroup failed.
* 4162741 (4128351) System hung observed when switching log owner.
* 4162742 (4122061) Observing hung after resync operation, vxconfigd was waiting for slaves' response
* 4162743 (4087628) CVM goes into faulted state when slave node of primary is rebooted.
* 4162745 (4145063) unknown symbol message logged in syslogs while inserting vxio module.
* 4162747 (4152014) the excluded dmpnodes are visible after system reboot when SELinux is disabled.
* 4162748 (4132799) No detailed error messages while joining CVM fail.
* 4162749 (4134790) Hardware Replicated DG was marked with clone flag on SLAVEs.
* 4162750 (4077944) In VVR environment, application I/O operation may get hung.
* 4162751 (4132221) Supportability requirement for easier path link to dmpdr utility
* 4162754 (4154121) add a new tunable use_hw_replicatedev to enable Volume Manager to import the hardware replicated disk group.
* 4162756 (4159403) add clearclone option automatically when import the hardware replicated disk group.
* 4162757 (4160883) clone_flag was set on srdf-r1 disks after reboot.
* 4163989 (4162873) disk reclaim is slow.
* 4164137 (3972344) vxrecover returns an error: 'ERROR V-5-1-11150 Volume <vol_name> not found'.
* 4164248 (4162349) vxstat not showing data under MIN and MAX header when using -S option
* 4164276 (4142772) Error mask NM_ERR_DCM_ACTIVE on rlink may not be cleared resulting in the rlink being unable to get into DCM again.
* 4164312 (4133793) vxsnap restore failed with DCO IO errors during the operation when run in loop for multiple VxVM volumes.
* 4164539 (4161852) Post InfoScale upgrade, the command "vxdg upgrade" succeeds but generates a spurious error "RLINK is not encrypted".
* 4164693 (4149498) Getting unsupported .ko files not found warning while upgrading VM packages.
* 4167050 (4134069) VVR replication was not using VxFS SmartMove feature if filesystem was not mounted on RVG Logowner node.
* 4167712 (4166086) Invalid device symbolic links exist under root.
* 4175213 (4153457) When using Dell/EMC PowerFlex ScaleIO storage, Veritas File System(VxFS) on Veritas Volume Manager(VxVM) volumes fail to mount after reboot.
* 4178146 (4168846) Support VxVM on RHEL9.4
* 4178967 (4167359) EMC DeviceGroup missing SRDF SYMDEV leads to DG corruption.
* 4178977 (4173284) dmpdr command failing
* 4178982 (4158316) DMP failed to do thin reclaim on the array which doesn't support WRITESAME.
* 4179185 (4168665) use_hw_replicatedev logic unable to import CVMVolDg resource unless vxdg -c is specified after EMC SRDF devices are closed and rescan on CVM Master.
* 4179379 (4179002) VxFS got corrupted after dynamic LUN expansion on rhel9.
* 4182224 (4176336) VVR heartbeat timeout due to unknown 0 length UDP packets on port 4145.
* 4187376 (4185158) Support VxVM  on RHEL9.5
* 4187887 (4183410) A failed mount causes the vxvm-boot service to fail, which kills the vxconfigd process.
* 4187902 (4080897) Performance drop on raw VxVM volume in RHEL 8.x compared to RHEL7.X
* 4188953 (4178449) vxconfigd thread stack corrupted for segfault when writing to translog.
* 4188954 (4178801) Remove or suppress error message in vxprint output for non-root user.
* 4188970 (4164734) Disable support for TLS1.1
* 4189028 (4185141) Support VxVM on SLES15 SP6
* 4189075 (4132774) VxVM support on SLES15 SP5
* 4189151 (4189376) /dev/vx/.dmp/<device> may point to invalid/stale devices link under "/dev/disk/by-path/"
Patch ID: VRTSaslapm 8.0.0.3300
* 4189390 (4188104) dummy incident for archival.
Patch ID: VRTSvxvm-8.0.0.2700
* 4154821 (4149248) Security vulnerabilities have been discovered in third-party components (OpenSSL, Curl, and libxml) employed by VxVM.
Patch ID: VRTSaslapm 8.0.0.2700
* 4154821 (4149248) Security vulnerabilities have been discovered in third-party components (OpenSSL, Curl, and libxml) employed by VxVM.
Patch ID: VRTSvxvm-8.0.0.2600
* 4121828 (4124457) VxVM Support for SLES15 Azure SP4 kernel
Patch ID: VRTSaslapm 8.0.0.2600
* 4101808 (4101807) VxVM with DMP is failing to detect Hitachi ShadowImage (SI) svol devices.
* 4116688 (4085145) EBSvol agent error in attach disk : RHEL 7.9 + Infoscale 8.0 on AWS instance type c6i.large with NVME devices.
* 4117385 (4117350) Import operation on disk group created on Hitachi ShadowImage (SI) disks is failing .
* 4121828 (4124457) VxVM Support for SLES15 Azure SP4 kernel
Patch ID: VRTSvxvm-8.0.0.2300
* 4108322 (4107083) In case of EMC BCV NR LUNs, vxconfigd taking a long time to start post reboot.
* 4111302 (4092495) VxVM Support on SLES15SP4
* 4111442 (4066785) create new option usereplicatedev=only to import the replicated LUN only.
* 4111560 (4098391) Continuous system crash is observed during VxVM installation.
* 4112219 (4069134) "vxassist maxsize alloc:array:<enclosure_name>" command may fail.
* 4113223 (4093067) System panic occurs because of NULL pointer in block device structure.
* 4113225 (4068090) System panic occurs because of block device inode ref count leaks.
* 4113328 (4102439) Volume Manager Encryption EKM Key Rotation (vxencrypt rekey) Operation Fails on IS 7.4.2/rhel7
* 4113331 (4105565) In CVR environment, system panic due to NULL pointer when VVR was doing recovery.
* 4113342 (4098965) Crash at memset function due to invalid memory access.
* 4115475 (4017334) vxio stack trace warning message kmsg_mblk_to_msg can be seen in systemlog
Patch ID: VRTSaslapm 8.0.0.2300
* 4115481 (4098395) The VRTSaslapm package (rpm) doesn't function correctly on SLES15SP4.
Patch ID: VRTSvxvm-8.0.0.1800
* 4067609 (4058464) vradmin resizevol fails when FS is not mounted on master.
* 4067635 (4059982) vradmind need not check for rlink connect during migrate.
* 4070098 (4071345) Unplanned fallback synchronisation is unresponsive
* 4078531 (4075860) Tutil putil rebalance flag is not getting cleared during +4 or more node addition
* 4079345 (4069940) FS mount failed during Cluster configuration on 24-node physical HP BOM2 setup.
* 4080041 (4056953) 3PAR PE LUNs are reported in error state by 3PAR ASL.
* 4080105 (4045837) Sub disks are in relocate state after exceed fault slave node panic.
* 4080122 (4044068) After disc replacement, Replace Node operation failed at Configure Netbackup stage.
* 4080269 (4044898) Copy rlink tags from reprec to info rec, through vxdg upgrade path.
* 4080276 (4065145) multivolume and vset not able to overwrite encryption tags on secondary.
* 4080277 (3966157) SRL batching feature is broken
* 4080579 (4077876) System is crashed when EC log replay is in progress after node reboot.
* 4080845 (4058166) Increase DCM log size based on volume size without exceeding region size limit of 4mb.
* 4080846 (4058437) Replication between 8.0 and 7.4.x fails due to sector size field.
* 4081790 (4080373) SFCFSHA configuration failed on RHEL 8.4.
* 4083337 (4081890) On RHEL8 NBFS/Access commands like python3, sort, sudo, ssh, etc are generating core dump during execution of the command mkfs.vxfs & mkfs.ext4.
* 4085619 (4086718) VxVM modules fail to load with latest minor kernel of SLES15SP2
* 4087233 (4086856) For Appliance FLEX product using VRTSdocker-plugin, need to add platform-specific dependencies service ( docker.service and podman.service ) change.
* 4087439 (4088934) Kernel Panic while running LM/CFS CONFORMANCE - variant (SLES15SP3)
* 4087791 (4087770) NBFS: Data corruption due to skipped full-resync of detached mirrors of volume after DCO repair operation
* 4088076 (4054685) In case of CVR environment, RVG recovery gets hung in linux platforms.
* 4088483 (4088484) Failed to load DMP_APM NVME modules
* 4088762 (4087099) DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6.
Patch ID: VRTSaslapm 8.0.0.1800
* 4080041 (4056953) 3PAR PE LUNs are reported in error state by 3PAR ASL.
* 4088762 (4087099) DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6.
Patch ID: VRTSvxvm-8.0.0.1700
* 4081684 (4082799) A security vulnerability exists in the third-party component libcurl.
Patch ID: VRTSvxvm-8.0.0.1600
* 4057420 (4060462) Nidmap information is not cleared after a node leaves, resulting in add node failure subsequently.
* 4062799 (4064208) Node failed to join the existing cluster after bits are upgraded to a newer version.
* 4065841 (4065495) Add support for DELL EMC PowerStore.
* 4066213 (4052580) Supporting multipathing for NVMe devices under VxVM.
* 4068407 (4068404) ASL request for HPE 3PAR/Primera/Alletra 9000 ALUA support.
Patch ID: VRTSaslapm 8.0.0.1600
* 4065841 (4065495) Add support for DELL EMC PowerStore.
* 4068407 (4068404) ASL request for HPE 3PAR/Primera/Alletra 9000 ALUA support.
Patch ID: VRTSvxvm-8.0.0.1400
* 4057420 (4060462) Nidmap information is not cleared after a node leaves, resulting in add node failure subsequently.
* 4065569 (4056156) VxVM Support for SLES15 Sp3
* 4066259 (4062576) hastop -local never finishes on RHEL8.4 and RHEL8.5 servers with the latest minor kernels due to a hang in the vxdg deport command.
* 4066735 (4057526) Adding check for init while accessing /var/lock/subsys/ path in vxnm-vxnetd.sh script.
* 4066834 (4046007) The private disk region gets corrupted if the cluster name is changed in FSS environment.
* 4067237 (4058894) Messages in /var/log/messages regarding "ignore_device".
Patch ID: VRTSaslapm 8.0.0.1400
* 4067239 (4057110) ASLAPM rpm support on SLES15 SP3.
Patch ID: VRTSvxvm-8.0.0.1300
* 4065628 (4065627) VxVM modules fail to load after an OS or kernel upgrade.
Patch ID: VRTSodm-8.0.0.3600
* 4189078 (4187362) ODM support for SLES15-SP6.
Patch ID: VRTSodm-8.0.0.3100
* 4154894 (4144269) After installing VRTSvxfs ODM fails to start.
Patch ID: VRTSodm-8.0.0.2900
* 4057432 (4056673) Rebooting the system results in emergency mode due to corruption of the module dependency files; incorrect vxgms dependency in the odm service file.
* 4119105 (4119104) ODM support for SLES15-SP4 azure kernel.
Patch ID: VRTSodm-8.0.0.2600
* 4111349 (4092232) ODM support for SLES15 SP4.
Patch ID: VRTSodm-8.0.0.2500
* 4114322 (4114321) VRTSodm driver will not load with VRTSvxfs 8.0.0.2500 patch.
Patch ID: VRTSodm-8.0.0.1800
* 4089136 (4089135) VRTSodm driver does not load with VRTSvxfs patch.
Patch ID: VRTSodm-8.0.0.1200
* 4065680 (4056799) ODM support for SLES15 SP3.
Patch ID: VRTSgab-8.0.0.3200
* 4187371 (4180026) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 5 (RHEL9.5).
* 4188700 (4188701) Veritas Infoscale Availability does not support SLES15SP6.
Patch ID: VRTSgab-8.0.0.2500
* 4124420 (4124417) Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.
Patch ID: VRTSgab-8.0.0.2300
* 4111469 (4090439) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).
* 4111618 (4106321) After stopping HAD on SFCFHA stack for SLES15 SP4 minor kernel(kernel version > 5.14.21-150400.24.28) panic is observed.
Patch ID: VRTSgab-8.0.0.1800
* 4089723 (4089722) The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on the RHEL and SLES platforms.
Patch ID: VRTSgab-8.0.0.1300
* 4067091 (4056991) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 3 (SLES 15 SP3).
Patch ID: VRTScavf-8.0.0.3600
* 4162960 (4153873) The deport decision depended only on the local system, not on all systems in the cluster.
Patch ID: VRTScavf-8.0.0.2800
* 4118779 (4074274) DR test and failover activity might not succeed for hardware-replicated disk groups.
Patch ID: VRTScavf-8.0.0.2400
* 4112609 (4079285) CVMVolDg resource takes many minutes to online with CPS fencing.
* 4112708 (4054462) In a hardware replication environment, a shared disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.
Patch ID: VRTSglm-8.0.0.3600
* 4189080 (4184620) GLM support for SLES15-SP6.
Patch ID: VRTSglm-8.0.0.2800
* 4119113 (4119112) GLM support for SLES15-SP4 azure kernel.
Patch ID: VRTSglm-8.0.0.2400
* 4087258 (4087259) System panics while upgrading CFS protocol from 90 to 135 (latest).
* 4111341 (4092225) GLM support for SLES15 SP4.
Patch ID: VRTSglm-8.0.0.1800
* 4089163 (4089162) The GLM module fails to load.
Patch ID: VRTSglm-8.0.0.1200
* 4065685 (4056801) GLM support for SLES15 SP3.
Patch ID: VRTSspt-8.0.0.1700
* 4117339 (4132683) A tool is needed to collect InfoScale product-specific information from both standalone nodes and clustered environments.
* 4124177 (4125116) Logs collected from different tools of VRTSspt are stored at different locations.
* 4129889 (4114988) No progress status reporting from the long-running metasave utility.
* 4139975 (4149462) A new script, list_missing_incidents.py, is provided; it compares rpm changelogs and lists incidents missing in the new version.
* 4146957 (4149448) A new script, check_incident_inchangelog.py, is provided; it checks whether an incident abstract is present in the changelog.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSsfcpi-8.0.0.1500

* 4065168 (Tracking ID: 4065169)

SYMPTOM:
The product installer fails to upgrade the VRTSveki package from the base InfoScale version.

DESCRIPTION:
This issue occurs because the product installer does not stop the veki service before it upgrades the base version of its package to a patched version.

RESOLUTION:
The InfoScale product installer is enhanced to stop the veki service before an upgrade so that the patch is installed successfully.

* 4065279 (Tracking ID: 4065280)

SYMPTOM:
The product installer uninstalls the VRTSrest package during an upgrade from the base InfoScale version to an InfoScale patch bundle.

DESCRIPTION:
The base VRTSrest package version (2.0.0.0000) is not defined in the installer scripts. Therefore, the product installer incorrectly assumes the base VRTSrest version to be 8.0.0.0000 and uninstalls the VRTSrest package that has the version 2.0.0.0000.

RESOLUTION:
The product installer scripts are updated to define the correct VRTSrest base version.

* 4068297 (Tracking ID: 4068298)

SYMPTOM:
Installer fails to configure the REST server while using the -rest_server option.

DESCRIPTION:
The installer fails to configure the REST server on the running cluster from any node other than node0, and the following error message is displayed: CPI ERROR V-9-20-1072 Failed to copy <<log directory>>/rest_server.sh from <<system name>> to <<log directory>>/rest_server.sh on <<system name>>.
When the installer is invoked on any node in the cluster other than node0 to configure the REST server, the installer tries to copy 'rest_server.sh' from node0 to that node. 'rest_server.sh' is not generated on node0, causing the installer to fail.

RESOLUTION:
Installer does not copy 'rest_server.sh' from one node to another as installer needs to execute 'rest_server.sh' on node0 only.

* 4068710 (Tracking ID: 4071333)

SYMPTOM:
On a Solaris system, cluster formation fails after performing a rolling upgrade phase 1 from 6.2.1 to 8.0 with patch_path option and rebooting the system.

DESCRIPTION:
The installer tries to stop services after the 8.0 packages are installed, before installing the patches. When the upgrade is performed from 6.2.1 to 8.0 with patches on Solaris, the gab service fails to stop and recommends a reboot. The services that were stopped and disabled by the stop operation are not re-enabled. Because the services remain disabled/unloaded, they do not come online automatically after a reboot, and hence the cluster is not formed.

RESOLUTION:
Code changes are implemented in the installer to check whether a required reboot is due to a failure of services to stop after installation of the 8.0 packages. In such a case, the installer allows the services to start; the modules are enabled, and after a reboot all services come up.

* 4068713 (Tracking ID: 4071331)

SYMPTOM:
On a Solaris system, while upgrading to InfoScale 8.0 with a patch by using the patch_path option, some of the services fail to stop.

DESCRIPTION:
On a Solaris system, while upgrading InfoScale to version 8.0 with a patch by using the patch_path option, the installer first installs the 8.0 packages, stops services, and then installs the patches. When the 8.0 packages are installed, loading modules may take more time on some systems. After the 8.0 installation completes, the installer runs the command to stop the services, which creates a race condition: add_drv/rem_drv goes into a busy state, resulting in a failure to stop the services.

RESOLUTION:
To avoid the race condition, the installer code checks and waits until the needed services are started by the package-level scripts; only then are the stop commands from the installer run.

* 4070269 (Tracking ID: 4071337)

SYMPTOM:
On Solaris, after upgrading InfoScale from 6.2.1 to 8.0 with the patch_path option, the system panics while starting the vxfen service.

DESCRIPTION:
On Solaris, an upgrade from 6.2.1 to 8.0 with the patch_path option tries to stop the services after the 8.0 packages are installed but before installing the patches. While starting the services, vxfen panics and reboots the system. Here, the gab stop fails, which keeps the 6.2.1 module loaded; however, after the vxfen package/patch installation, vxfen is at version 8.0. While starting, vxfen tries to find the needed dependencies from the gab modules but fails because the gab module is not upgraded. This inconsistency causes the system panic.

RESOLUTION:
The installer checks whether the upgrade is from 6.2.1 to 8.0 with the patch_path option; in that case, the installer only enables the services but does not start them, to avoid the panic. A system reboot is recommended.

* 4070643 (Tracking ID: 4070427)

SYMPTOM:
After upgrading InfoScale from 6.2.1 to 8.0, installer fails to start services.

DESCRIPTION:
This issue occurs because the installer does not stop the veki service before upgrading the base version to 8.0.

RESOLUTION:
Installer modified to stop the veki service before upgrading.

* 4074981 (Tracking ID: 4075597)

SYMPTOM:
On a Linux system, Infoscale configuration fails after installation with the following error message:
'Can't use string ("0") as a HASH ref while "strict refs" in use at /opt/VRTSperl/lib/site_perl/UXRT80/CPIP/Prod/VCS80.pm line 4235'

DESCRIPTION:
If the Kubernetes cluster node count differs from the InfoScale cluster node count, the installer tries to collect the Kubernetes cluster information into a Perl hash. If a node is not in the Kubernetes cluster, the hash value is incorrectly set to '0', and the installer exits with an error message.

RESOLUTION:
The installer is modified to correctly set/unset the hash values.

* 4075804 (Tracking ID: 4073591)

SYMPTOM:
On Solaris, the vxprint command throws an error while checking a disk group: 'VxVM vxprint ERROR V-5-1-924 Record -g not found'.

DESCRIPTION:
On Solaris, while checking disk groups, the vxprint command throws the error 'ERROR V-5-1-924 Record -g not found'. This error occurs because the Solaris vxprint command does not accept the -l option before the -g option.

RESOLUTION:
As a fix, CPI corrected the command argument sequence to 'vxprint -g <disk group name> -l', which works on all operating systems.

* 4079926 (Tracking ID: 4079922)

SYMPTOM:
The product installer fails to complete the installation after it automatically downloads a required support patch from SORT that contains a VRTSperl package. The following error is logged:
CPI ERROR V-9-0-0
0 No pkg object defined for pkg VRTSperl530 and padv <<PADV>>

DESCRIPTION:
During installation, the product installer looks for any applicable platform support patches that are available on SORT and automatically downloads them. However, it fails to correctly identify the base version of the VRTSperl package on the system to compare it with the downloaded version. Consequently, even though the appropriate patch is available, the installer fails to complete the installation.

RESOLUTION:
To address this issue, the product installer is updated to correctly identify the base version of the VRTSperl package on a system.
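The underlying pitfall here is comparing dotted version strings lexically instead of numerically. A minimal, hypothetical sketch of a numeric comparison (illustrative only, not the installer's actual code):

```python
def parse_version(ver: str) -> tuple:
    """Split a dotted version string such as '8.0.0.1500' into a numeric tuple."""
    return tuple(int(part) for part in ver.split("."))

def is_newer(downloaded: str, installed: str) -> bool:
    """Return True when the downloaded package version is newer than the installed base."""
    return parse_version(downloaded) > parse_version(installed)

# Lexically '8.0.0.1500' compares as SMALLER than '8.0.0.900' because '1' < '9';
# the numeric tuple comparison gives the correct answer.
print(is_newer("8.0.0.1500", "8.0.0.900"))   # True
print("8.0.0.1500" > "8.0.0.900")            # False
```

Comparing as tuples of integers avoids the string-comparison trap that makes a four-digit build number look "older" than a three-digit one.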

* 4080099 (Tracking ID: 4080098)

SYMPTOM:
Installer fails to complete the CP server configuration. The following error is logged:
CPI ERROR V-9-40-4422 Unable to create CA certificate /var/VRTScps/security/certs/ca.crt on <<system>>
CPI ERROR V-9-40-4427 Unable to create csr file /var/VRTSvxfen/security/certs/client_{<<uuid>>}.csr for <<system>> on <<system>>

DESCRIPTION:
The installer uses the 'openssl req' command to create the CA certificate and csr file. From VRTSperl 5.34.0.2 onwards, the openssl version is updated; the updated version requires a configuration file to be passed to the 'openssl req' command by using the -config parameter. Consequently, the installer fails to create the CA certificate and csr file, causing the CP server configuration failure.

RESOLUTION:
The product installer is updated to pass the configuration file with the 'openssl req' command.

* 4080288 (Tracking ID: 4079853)

SYMPTOM:
The patch installer displays a false error message with the -precheck option.

DESCRIPTION:
While installing a patch with the -precheck option, the installer displays a false error message: 'CPI ERROR V-9-0-0 A more recent version of InfoScale Enterprise, 7.4.2.1100, is already installed on server'.

RESOLUTION:
A check is added for both hotfix upgrade and precheck before performing further tasks.

* 4082266 (Tracking ID: 4082265)

SYMPTOM:
Installer fails to complete the LLT over RDMA configuration due to a missing ipcalc package. The following warning gets logged in the installer logs:
CPI WARNING V-9-40-3396 Failed to configure the IP address for the NIC <<NIC name>> on <<system name>>. Resolve the issue manually and try again.

DESCRIPTION:
The installer uses the '/usr/bin/ipcalc' binary to calculate the broadcast address for a given IP address. On RHEL 8.6 and later, '/usr/bin/ipcalc' is provided by the ipcalc package, which may not be installed on the system.

RESOLUTION:
The ipcalc package is added to the installer's list of required OS libraries.
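For reference, the broadcast-address calculation that the installer delegates to '/usr/bin/ipcalc' can be reproduced with Python's standard ipaddress module; this is an illustrative equivalent, not what the installer actually runs:

```python
import ipaddress

def broadcast_for(ip: str, netmask: str) -> str:
    """Compute the broadcast address for an IP/netmask pair, as 'ipcalc' would."""
    # strict=False accepts a host address (not just a network address) on input.
    network = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    return str(network.broadcast_address)

print(broadcast_for("192.168.10.21", "255.255.255.0"))  # 192.168.10.255
```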

* 4084976 (Tracking ID: 4084975)

SYMPTOM:
Installer fails to complete the CP server configuration. The following error is logged:
CPI ERROR V-9-40-4422 Unable to create CA certificate /var/VRTScps/security/certs/ca.crt on <<system>>
CPI ERROR V-9-40-4427 Unable to create csr file /var/VRTSvxfen/security/certs/client_{<<uuid>>}.csr for <<system>> on <<system>>

DESCRIPTION:
To create the CA certificate and csr file, the installer uses the 'openssl req' command and passes the openssl configuration file '/opt/VRTSperl/non-perl-libs/bin/openssl.cnf' by using the -config parameter. OpenSSL version 1.0.2 does not have an openssl configuration file. Hence, the installer fails to create the CA certificate and csr file, and the CP server configuration fails.

RESOLUTION:
Installer updated to check and pass the openssl configuration file only if the file is present on the system.

* 4085000 (Tracking ID: 4085017)

SYMPTOM:
Installer fails to start the vxfen service during CPS-based fencing.

DESCRIPTION:
To successfully complete CPS-based fencing, the TLS version on the cluster nodes and the TLS version on the CP server must be the same. If the versions differ, the installer fails to start the vxfen service, and CPS-based fencing fails to complete.

RESOLUTION:
The installer is enhanced to check and take appropriate action if the TLS version on the cluster nodes differs from the TLS version on the CP server.

* 4085612 (Tracking ID: 4087319)

SYMPTOM:
Installer fails to uninstall VxVM while upgrading from 7.4.2 to 8.0U1.

DESCRIPTION:
During the upgrade, uninstallation of the previous rpms fails if the vxvm semodule is not loaded before uninstallation.

RESOLUTION:
The installer is enhanced to check and take appropriate action if the vxvm semodule is not loaded before uninstallation.

* 4085770 (Tracking ID: 4086045)

SYMPTOM:
When the InfoScale cluster is reconfigured, the LLT, GAB, and VXFEN services fail to start after a reboot.

DESCRIPTION:
The installer updates the /etc/sysconfig/<<service>> file and incorrectly sets the START_<<service>> and STOP_<<service>> values to '0' in the pre_configure task even when VCS is not set for reconfiguration. These services thus fail to start after a reboot.

RESOLUTION:
The installer is enhanced not to update the /etc/sysconfig/<<service>> files when VCS is not set for reconfiguration.

* 4086257 (Tracking ID: 4086533)

SYMPTOM:
The VRTSfsadv package fails to upgrade from 7.4.2 U4 to 8.0 U1 when using yum upgrade. The following error is observed:
The fsdedupschd.service is running.  Please stop fsdedupschd.service before upgrading.
error: %prein(VRTSfsadv-8.0.0.1700-RHEL8.x86_64) scriptlet failed, exit status 1

DESCRIPTION:
The fsdedupschd service is started as a post-installation task of the VRTSfsadv 7.4.2.2600 package. Before a yum upgrade to 8.0 U1, the installer does not stop the fsdedupschd service, and the VRTSfsadv package fails to upgrade.

RESOLUTION:
The installer is enhanced to handle the start and stop of the VRTSfsadv-related services, that is, fsdedupschd and vxfs_replication.

* 4086661 (Tracking ID: 4086623)

SYMPTOM:
Installer fails to complete the CP server configuration. The following error is logged:
CPI ERROR V-9-40-4422 Unable to create CA certificate /var/VRTScps/security/certs/ca.crt on <<system>>
CPI ERROR V-9-40-4427 Unable to create csr file /var/VRTSvxfen/security/certs/client_{<<uuid>>}.csr for <<system>> on <<system>>

DESCRIPTION:
The installer checks for the openssl_conf file on the client nodes instead of on the CP server. Consequently, even if the openssl_conf file is not present on the CP server, the installer tries to use it and fails to generate the CA certificate and csr files.

RESOLUTION:
The product installer is updated to check for and pass the configuration file from the CP server.

* 4086724 (Tracking ID: 4086742)

SYMPTOM:
On InfoScale 8.0 U1, the addnode operation fails with the following error message: 
"Cluster protocol version mismatch was detected between cluster <<cluster_name>> and <<new node name>>.

DESCRIPTION:
On InfoScale 8.0 U1, the cluster protocol version is changed from 260 to 270 in the VxVM component. However, the cluster protocol version is not updated in the installer. During the addnode operation, when the new node is compared with the other cluster nodes, the cluster protocol versions do not match, and the error is displayed.

RESOLUTION:
The common product installer is enhanced to check the installed VRTSvxvm package version on the new node and accordingly set the cluster protocol version.

* 4087907 (Tracking ID: 4087906)

SYMPTOM:
Executive order compliant logging support was not available.

DESCRIPTION:
As per the executive order, logs of InfoScale components must be in a particular format. This must be enabled on InfoScale.

RESOLUTION:
Executive order compliant logging support is enabled on InfoScale. EO-compliant logging can be enabled or disabled by using the installer.

* 4088743 (Tracking ID: 4088698)

SYMPTOM:
The CPI installer tries to download a must-have patch whose version is lower than the version specified in the media path. If the installer is unable to download it, the following error message is displayed:
CPI ERROR V-9-30-1114 Failed to connect to SORT (https://sort.veritas.com), the patch <<patchname>> is required to deploy this product.

DESCRIPTION:
The CPI installer tries to download a lower-version must-have patch even if patch bundles of equal or higher version for all the patches in the must-have patch are provided in the media path.

RESOLUTION:
The CPI installer no longer downloads the required must-have patch if equal or higher version patches are supplied in the media path.

* 4089937 (Tracking ID: 4089934)

SYMPTOM:
Installer does not update the '/opt/VRTSvcs/conf/config/types.cf' file after a VRTSvcs patch upgrade.

DESCRIPTION:
If the '../conf/types.cf' file is changed with a VRTSvcs patch, the installer does not update the '../conf/config/types.cf' file during the patch upgrade. It then needs to be updated manually to avoid unexpected issues.

RESOLUTION:
The product installer is enhanced to correctly populate the '../conf/config/types.cf' file if the '../conf/types.cf' file is changed with a VRTSvcs patch.

* 4092409 (Tracking ID: 4092408)

SYMPTOM:
The CPI installer fails to correctly identify the status of the vxfs_replication service.

DESCRIPTION:
The installer parses the 'ps -ef' output to determine whether the vxfs_replication service has started. Because of an intermediate 'pidof' process, the installer incorrectly identifies the status of the vxfs_replication service as started, and the post-start check then fails because vxfs_replication has not actually started.

RESOLUTION:
The product installer is updated to skip the 'pidof' process while determining the vxfs_replication service status.
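The subtlety is that a naive substring match over 'ps -ef' output also catches the transient 'pidof vxfs_replication' helper itself. A hypothetical sketch of the corrected filtering (field layout and paths are illustrative, not the installer's code):

```python
def service_started(ps_lines, service="vxfs_replication", ignore=("pidof",)):
    """Return True only if a real service process appears in ps -ef output,
    skipping transient helpers such as 'pidof' that merely mention the
    service name on their command line."""
    for line in ps_lines:
        fields = line.split(None, 7)  # UID PID PPID C STIME TTY TIME CMD
        if len(fields) < 8:
            continue
        cmd = fields[7]
        if any(helper in cmd.split()[0] for helper in ignore):
            continue  # a lookup helper, not the service itself
        if service in cmd:
            return True
    return False

ps = ["root 100 1 0 10:00 ? 00:00:00 pidof vxfs_replication"]
print(service_started(ps))  # False: only the pidof helper matched
ps.append("root 101 1 0 10:00 ? 00:00:00 /opt/VRTS/bin/vxfs_replication -d")
print(service_started(ps))  # True
```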

* 4095938 (Tracking ID: 4095939)

SYMPTOM:
For InfoScale 8.0, the vxfen service fails to start after product configuration.

DESCRIPTION:
The installer allows enabling/disabling EO-compliant logging for product versions lower than 8.0 U1. Even if EO-compliant logging is not selected, vxfen fails to start because the installer adds the 'VCS_ENABLE_PUBSEC_LOG' parameter to the '/etc/sysconfig/vxfen' file; this parameter is not recognized by InfoScale 8.0 and causes the vxfen service to fail.

RESOLUTION:
The product installer is updated to allow enabling/disabling EO-compliant logging only if version 8.0 U1 or higher of the product is installed.

* 4097529 (Tracking ID: 4076583)

SYMPTOM:
On a Solaris system, the InfoScale installer runs set/unset publisher several times but does not disable 
the publisher. The deployment process slows down as a result.

DESCRIPTION:
The CPI installer sets/unsets the Veritas publisher several times, which slows down the deployment process.

RESOLUTION:
The installer now sets all the publishers together. Subsequently, the higher version of the package/patch available in the publishers is selected. Solaris and all other publishers except Veritas are also disabled.

* 4099557 (Tracking ID: 4099558)

SYMPTOM:
On Linux platforms, the vxportal, fdd, and vxcafs services fail to start.

DESCRIPTION:
While installing and configuring the product in a single installer process by using the VM 1200 patch along with the GA DVD, the installer does not start the vxfs services; consequently, the vxportal, fdd, and vxcafs services also fail to start.

RESOLUTION:
The product installer is updated to start the vxfs service while installing and configuring in a single installer process by using the VM 1200 patch along with the GA DVD.

* 4101461 (Tracking ID: 4104411)

SYMPTOM:
A one-node VCS CPS configuration does not need a virtual IP address, because a single node does not have a failover IP address configured.

DESCRIPTION:
Onenode VCS CPS configuration does not need a Virtual IP address as a single node does not have a failover IP address configured.

RESOLUTION:
The installer code is modified to ensure that a virtual IP address is not required for a one-node configuration.

* 4111808 (Tracking ID: 4112806)

SYMPTOM:
The shift operation while setting the publisher skips the VRTSperl and VRTSvcs packages during rolling and normal upgrades.

DESCRIPTION:
The shift operation while setting the publisher, skips VRTSperl and VRTSvcs packages during rolling and normal upgrade.

RESOLUTION:
The installer code is modified to remove the shift operations from the set_publisher_sys function.

* 4116696 (Tracking ID: 4116695)

SYMPTOM:
On Solaris, the publisher list is displayed during the InfoScale start, stop, and uninstall processes, and the publisher list displayed during install and upgrade is not unique.

DESCRIPTION:
On Solaris, the publisher list is displayed during the InfoScale start, stop, and uninstall processes, and the publisher list displayed during install and upgrade is not unique.

RESOLUTION:
The installer code is modified to skip the publisher list during the start, stop, and uninstall processes and to get a unique publisher list during install and upgrade.

* 4117196 (Tracking ID: 4104627)

SYMPTOM:
The installer supports a maximum of 5 patches; the user cannot provide more than 5 patches for installation.

DESCRIPTION:
The latest bundle package installer supports a maximum of 5 patches, so the user cannot provide more than 5 patches for installation.

RESOLUTION:
The installer code is modified to support a maximum of 10 patches for installation.

* 4120655 (Tracking ID: 4120656)

SYMPTOM:
The VRTSrest package fails to install with the 8.0 Update 2 release.

DESCRIPTION:
The VRTSrest package version format is updated from x.x.x.xxxx to x.x.xx.
After installing the VRTSrest package, the installer uses the 'rpm -qp' command to verify the installed package version when the version format is not x.x.x.xxxx; the verification fails because the 'rpm -qp' command displays a warning related to the package signature.

RESOLUTION:
As a fix, the '--nosignature' parameter of the 'rpm -qp' command is used to ignore the warning.

* 4122750 (Tracking ID: 4122748)

SYMPTOM:
On Linux, the had service fails to start during a rolling upgrade from InfoScale 7.4.1 or lower to a higher InfoScale version.

DESCRIPTION:
The VCS protocol version is supported from InfoScale 7.4.2 onwards. During a rolling upgrade from 7.4.1 or lower to a higher InfoScale version, due to wrong release matrix data, the installer tries to perform a single-phase rolling upgrade instead of a two-phase rolling upgrade, and the had service fails to start.

RESOLUTION:
The installer is enhanced to perform a two-phase rolling upgrade if the installed InfoScale version is 7.4.1 or older.

* 4131684 (Tracking ID: 4131682)

SYMPTOM:
On SunOS, the installer prompts the user to install the 'bourne' package if it is not available.

DESCRIPTION:
The installer had a dependency on '/usr/sunos/bin/sh', which comes from the 'bourne' package. The 'bourne' package is deprecated with the latest SRUs.

RESOLUTION:
The installer code is updated to use '/usr/bin/sh' instead of '/usr/sunos/bin/sh', removing the bourne package dependency.

* 4133017 (Tracking ID: 4135748)

SYMPTOM:
On Linux, the installer fails to check for the "mokutil.x86_64" prerequisite package during installation.

DESCRIPTION:
On Linux, the installer fails to check for the "mokutil.x86_64" prerequisite package during installation.

RESOLUTION:
The installer code is enhanced to check for the "mokutil.x86_64" prerequisite package during installation.

* 4135632 (Tracking ID: 4135014)

SYMPTOM:
The CPI installer asks "Would you like to install InfoScale" after "./installer -precheck" completes.

DESCRIPTION:
The CPI installer asks "Would you like to install InfoScale" after "./installer -precheck" completes; it should not ask about installation after the precheck is completed.

RESOLUTION:
The installer code is modified to skip the installation question after the precheck is completed.

* 4136433 (Tracking ID: 4136432)

SYMPTOM:
The installer failed to add a node to a cluster running a higher InfoScale version.

DESCRIPTION:
The installer failed to add a node to a cluster running a higher InfoScale version.

RESOLUTION:
The installer code is modified to enable adding a node to a cluster running a higher InfoScale version.

* 4161806 (Tracking ID: 4160983)

SYMPTOM:
On Solaris, after upgrading InfoScale to an alternate boot environment (ABE), the vxfs modules do not load properly if the current BE is booted.

DESCRIPTION:
On Solaris, the vxfs modules are removed from the current BE while InfoScale is upgraded to the ABE.

RESOLUTION:
The installer code is modified to address this issue.

* 4175681 (Tracking ID: 4177792)

SYMPTOM:
The installer fails to add the correct entry for the logging tunable for vxdbdctrl.service.

DESCRIPTION:
The installer fails to add the correct entry for the logging tunable for vxdbdctrl.service.

RESOLUTION:
The installer code has been modified to make the correct entry for the logging tunable for vxdbdctrl.service.

* 4188708 (Tracking ID: 4188987)

SYMPTOM:
The installer failed to set up CPS-based fencing using the -fencing option.

DESCRIPTION:
The installer failed to set up CPS-based fencing on an InfoScale 8.0 system.

RESOLUTION:
The installer code has been modified to set up CPS-based server fencing using the -fencing option.

Patch ID: VRTSrest-2.0.0.1300

* 4088973 (Tracking ID: 4089451)

SYMPTOM:
When a read-only file system was created on a volume, a GET request for the mount point's details returned an error.

DESCRIPTION:
When a read-only file system was created on a volume, a GET request for the mount point's details returned an error because the command being used did not work for a read-only file system.

RESOLUTION:
The appropriate command is now used to get the details of the mount point.

* 4089033 (Tracking ID: 4089453)

SYMPTOM:
Some VCS REST APIs were crashing the Gunicorn worker.

DESCRIPTION:
Calls to some VCS-related APIs crashed the gunicorn worker handling the request; a new worker was automatically spawned.

RESOLUTION:
Fixed the related foreign function call interface in the source code.

* 4089041 (Tracking ID: 4089449)

SYMPTOM:
The GET resources API on an empty service group was throwing an error.

DESCRIPTION:
When the GET resources API was called on an empty service group, it returned an error because the scenario was not handled.

RESOLUTION:
Handling for this scenario is added to the code to resolve the issue.

* 4089046 (Tracking ID: 4089448)

SYMPTOM:
Logging in REST API was not in EO-compliant format.

DESCRIPTION:
The timestamp format was not EO-compliant, and some attributes required for EO compliance were missing.

RESOLUTION:
The timestamp format is changed, and new attributes, such as node name, response time, source and destination IP addresses, and username, are added to the REST server logs.
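As a sketch of the idea, such attributes can be carried through a log formatter; the field names below are illustrative assumptions, not the actual REST server log schema:

```python
import logging

# Hypothetical attribute names (nodename, username, src_ip, dst_ip,
# response_ms): illustrative only, not the real EO log schema.
formatter = logging.Formatter(
    fmt="%(asctime)s.%(msecs)03dZ %(levelname)s node=%(nodename)s "
        "user=%(username)s src=%(src_ip)s dst=%(dst_ip)s "
        "rt=%(response_ms)dms %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S",  # ISO-8601-style timestamp
)

handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger = logging.getLogger("rest")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# The extra= mapping supplies the custom attributes for this record.
logger.info(
    "GET /api/v1/volumes 200",
    extra={"nodename": "node1", "username": "admin",
           "src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "response_ms": 12},
)
```

Every record formatted this way then carries the node, user, endpoint addresses, and response time alongside the message.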

Patch ID: VRTSamf-8.0.0.3200

* 4145249 (Tracking ID: 4136003)

SYMPTOM:
A cluster node panics when VCS enables the AMF module that monitors process on/off events.

DESCRIPTION:
The cluster node panic indicates an issue in which the AMF module overruns into a user-space buffer while it is analyzing an argument of 8 KB size. The AMF module cannot load data of that length into its internal buffer, which eventually misleads it into accessing the user buffer; such access is not allowed when kernel SMAP is in effect.

RESOLUTION:
The AMF module is constrained to ignore arguments of 8 KB or larger to avoid the internal buffer overrun.

* 4162992 (Tracking ID: 4161644)

SYMPTOM:
The system panics when VCS enables the AMF module.

DESCRIPTION:
The system panic indicates that after amf_prexec_hook extracted an argument longer than 4 KB that spans two pages, it read a third page; this is illegal because all arguments are loaded into two pages.

RESOLUTION:
AMF now continues to extract arguments from the internal buffer before moving to the next page.

* 4177981 (Tracking ID: 4164328)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 4 (RHEL9.4) is now introduced.

* 4178828 (Tracking ID: 4168084)

SYMPTOM:
The system panics when VCS enables the AMF module to monitor a mount point.

DESCRIPTION:
AMF calls a sleepable function while holding a spin lock for a mount point event, resulting in a system panic.

RESOLUTION:
A busy flag is used to synchronize multiple threads so that the spin lock can be released.

* 4187383 (Tracking ID: 4180026)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 5 (RHEL9.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 5 (RHEL9.5) is now introduced.

* 4188734 (Tracking ID: 4182737)

SYMPTOM:
Veritas Infoscale does not support SLES15SP6.

DESCRIPTION:
Veritas Infoscale does not support SLES15SP6.

RESOLUTION:
Veritas Infoscale support for SLES15SP6 is now introduced.

* 4188838 (Tracking ID: 4188625)

SYMPTOM:
The system panics when AMF is enabled for file system mounting.

DESCRIPTION:
AMF used an obsolete way of traversing proc_mounts, which corrupted an internal kernel structure, resulting in a system panic.

RESOLUTION:
MOUNT_CURSOR is enabled to access proc_mounts for kernel update 4.12.14-122.231 and later.

Patch ID: VRTSamf-8.0.0.2500

* 4124418 (Tracking ID: 4124417)

SYMPTOM:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.

DESCRIPTION:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel as SLES community maintains Azure kernel separately.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP4 for Azure kernel is now introduced.

Patch ID: VRTSamf-8.0.0.2300

* 4111444 (Tracking ID: 4090439)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).

DESCRIPTION:
Veritas Infoscale Availability does not support SUSE Linux Enterprise Server
versions released after SLES 15 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP4 is
now introduced.

Patch ID: VRTSamf-8.0.0.1800

* 4089724 (Tracking ID: 4089722)

SYMPTOM:
The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on the RHEL and SLES platforms.

DESCRIPTION:
The VRTSgab, VRTSamf, and VRTSdbed drivers needed to be recompiled with the latest changes.

RESOLUTION:
The VRTSgab, VRTSamf, and VRTSdbed drivers are recompiled.

Patch ID: VRTSamf-8.0.0.1300

* 4067092 (Tracking ID: 4056991)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 3 (SLES 15 SP3).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 15 SP2.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP3 is
now introduced.

Patch ID: VRTScps-8.0.0.3200

* 4188653 (Tracking ID: 4188652)

SYMPTOM:
After configuring the CP server, an EO-related error appears in the CP server logs.

DESCRIPTION:
After the CP server is configured, an EO-related error appears in the CP server logs even when the tunable is set to a valid value. An error should be reported only if the flag value is neither 0 nor 1.

RESOLUTION:
The unnecessary error message is no longer logged when the tunable value is set to 0.

Patch ID: VRTScps-8.0.0.1900

* 4091306 (Tracking ID: 4088158)

SYMPTOM:
Security vulnerabilities exist in the Sqlite third-party component used by VCS.

DESCRIPTION:
VCS uses the Sqlite third-party component, in which some security vulnerabilities exist.

RESOLUTION:
VCS is updated to use a newer version of the Sqlite third-party component in which the security vulnerabilities have been addressed.

Patch ID: VRTScps-8.0.0.1800

* 4073050 (Tracking ID: 4018218)

SYMPTOM:
Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2

DESCRIPTION:
Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2.

RESOLUTION:
This hotfix updates the VRTScps module so that InfoScale CP Client can establish secure communication with a CP server using TLSv1.2. However, to enable TLSv1.2 communication between the CP client and CP server after installing this hotfix, you must perform the following steps:

To configure TLSv1.2 for CP server
1. Stop the process resource that has pathname="/opt/VRTScps/bin/vxcpserv"
   # hares -offline <vxcpserv> -sys <sysname> 
2. Check that the vxcpserv daemon is stopped using the following command:
   # ps -eaf | grep "/opt/VRTScps/bin/vxcpserv"
3. When the vxcpserv daemon is stopped, edit the "/etc/vxcps_ssl.properties" file and make the following changes:
   a. Remove or comment the entry: openSSL.server.requireTLSv1 = true 
   b. Add a new entry: openSSL.server.requireTLSv1.2 = true
4. Start the process resource that has pathname="/opt/VRTScps/bin/vxcpserv"
   # hares -online <vxcpserv> -sys <sysname>

To configure TLSv1.2 for CP Client
Edit the "/etc/vxcps_ssl.properties" file and make the following changes:
   a. Remove or comment the entry: openSSL.server.requireTLSv1 = true 
   b. Add a new entry: openSSL.server.requireTLSv1.2 = true
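
After both edits, the relevant portion of /etc/vxcps_ssl.properties looks like the following fragment (the TLSv1 entry commented out and the TLSv1.2 entry added, exactly as in the steps above; any other settings in the file are left unchanged):

```
# openSSL.server.requireTLSv1 = true
openSSL.server.requireTLSv1.2 = true
```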

Patch ID: VRTScps-8.0.0.1400

* 4066225 (Tracking ID: 4056666)

SYMPTOM:
The Error writing to database message may appear in syslogs intermittently on InfoScale CP servers.

DESCRIPTION:
Typically, when a coordination point server (CP server) is shared among multiple InfoScale clusters, the following messages may intermittently appear in syslogs:
CPS CRITICAL V-97-1400-501 Error writing to database! :database is locked.
These messages appear in the context of the CP server protocol handshake between the clients and the server.

RESOLUTION:
The CP server is updated so that, in addition to its other database write operations, all the ones for the CP server protocol handshake action are also synchronized.

Patch ID: VRTSllt-8.0.0.3200

* 4135413 (Tracking ID: 4084657)

SYMPTOM:
On a new installation of InfoScale 7.4.1 on RHEL8, fencing/LLT panics while using TCP over LLT.

DESCRIPTION:
In a new environment with InfoScale 7.4.1 configured on RHEL8.4 and TCP used for LLT communication, the system panics, although the same configuration works fine on RHEL7.

RESOLUTION:
A code change is made in the LLT binary to resolve the panic issue.

* 4135420 (Tracking ID: 3989372)

SYMPTOM:
When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.

DESCRIPTION:
Occasionally, in a VMware environment, the operating system may not schedule LLT contexts on time. Consequently, heartbeats from some of the cluster nodes may be lost, and those nodes may get fenced out. This situation typically occurs when the CPU load or the memory usage is high or when the VMDK snapshot or vMotion operations are in progress.

RESOLUTION:
This fix attempts to make clusters more resilient to transient issues by heartbeating using threads bound to every vCPU.

* 4136152 (Tracking ID: 4124759)

SYMPTOM:
A panic occurred in llt_ioship_recv on a server running in AWS.

DESCRIPTION:
In an AWS environment, packets can be duplicated even when LLT is configured over UDP, which is not expected for UDP.

RESOLUTION:
To avoid the panic, LLT now checks whether the packet is already present in the send queue of the bucket and, if so, treats it as an invalid/duplicate packet.

* 4145248 (Tracking ID: 4139781)

SYMPTOM:
The system panics occasionally in the LLT stack when LLT over Ethernet is enabled.

DESCRIPTION:
LLT allocates skb memory from its own cache for messages larger than 4 KB and sets a field of skb_shared_info to point to an LLT function; it later uses that field to determine whether an skb was allocated from its own cache. When a packet is received, the OS also allocates an skb from the system cache, does not reset the field, and then passes the skb to LLT. Occasionally a stale pointer in memory can mislead LLT into thinking that an skb is from its own cache, so LLT calls its own free API by mistake.

RESOLUTION:
LLT now uses a hash table to record skbs allocated from its own cache and no longer sets the field of skb_shared_info.

* 4156794 (Tracking ID: 4135825)

SYMPTOM:
If the root file system is full during LLT start, the LLT module fails to load from then on.

DESCRIPTION:
The disk is full and the user reboots the system or restarts the product. While loading, LLT deletes its links and tries to create new ones using the link names and "/bin/ln -f -s". Because the disk is full, it cannot create the links. Even after space is freed, link creation still fails because the original links were already deleted, so the LLT module fails to load.

RESOLUTION:
Logic is added to derive the file names needed to create new links when the existing links are not present.

* 4156815 (Tracking ID: 4087543)

SYMPTOM:
Node panic observed at llt_rdma_process_ack+189

DESCRIPTION:
In the llt_rdma_process_ack() function, LLT tries to access a header mblk that has become invalid because the network/RDMA layer sent an unnecessary acknowledgement. When the ib_post_send() function fails, the OS returns an error, and LLT handles it by sending the packet through the non-RDMA channel. However, even though the OS returned an error, the packet is still sent down and LLT receives an acknowledgement for it. LLT thus receives two acknowledgements for the same buffer: one sent by the RDMA layer although it reported errors while sending, and one that LLT simulates (by design) after delivering the packet through the non-RDMA channel.
The first acknowledgement context frees the buffer, so when LLT calls llt_rdma_process_ack() again for the same buffer, which has already been freed, it panics in that function.

RESOLUTION:
Added a check that prevents the buffer from being freed twice, so the node no longer panics at llt_rdma_process_ack+189.

* 4167861 (Tracking ID: 4128887)

SYMPTOM:
Below warning trace is observed while unloading llt module:
[171531.684503] Call Trace:
[171531.684505]  <TASK>
[171531.684509]  remove_proc_entry+0x45/0x1a0
[171531.684512]  llt_mod_exit+0xad/0x930 [llt]
[171531.684533]  ? find_module_all+0x78/0xb0
[171531.684536]  __do_sys_delete_module.constprop.0+0x178/0x280
[171531.684538]  ? exit_to_user_mode_loop+0xd0/0x130

DESCRIPTION:
While unloading the LLT module, the vxnet/llt directory is not removed properly, due to which the warning trace is observed.

RESOLUTION:
The proc_remove API is now used, which cleans up the whole subtree.

* 4188639 (Tracking ID: 4167108)

SYMPTOM:
This is a code improvement; the user does not experience any functional issue.

DESCRIPTION:
The yield() call in LLT is replaced with cond_resched().

RESOLUTION:
yield() is replaced with cond_resched().

* 4188655 (Tracking ID: 4186645)

SYMPTOM:
An lltdlv hang causes fencing to panic a node due to a transient network issue.

DESCRIPTION:
The lltdlv hang is caused by a temporary network issue, and fencing panics the node. This works fine when LLT_IRQBALANCE is enabled; hence the requirement is to enable this tunable by default to prevent panics due to transient issues. LLT irqbalance does not work in conjunction with hpe_irqbalance, so a check is added for that.

RESOLUTION:
LLT_IRQBALANCE is now enabled by default to make clusters more resilient to transient issues and to avoid fencing panicking the server in such cases.

* 4188699 (Tracking ID: 4182384)

SYMPTOM:
Veritas Infoscale Availability does not support SLES15SP6.

DESCRIPTION:
Veritas Infoscale Availability does not support SLES15SP6.

RESOLUTION:
Veritas Infoscale Availability support for SLES15SP6 is now introduced.
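
As a sketch only: assuming LLT_IRQBALANCE is controlled through the LLT environment file /etc/sysconfig/llt (the file that also carries LLT_START on Linux; verify the exact file and tunable name in the VRTSllt documentation), a node that needs to opt out of the new default might use:

```
# /etc/sysconfig/llt -- hypothetical fragment
LLT_START=1
# LLT_IRQBALANCE is enabled (1) by default with this patch; set 0 only if it
# conflicts with hpe_irqbalance in your environment.
LLT_IRQBALANCE=0
```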

Patch ID: VRTSllt-8.0.0.2500

* 4124419 (Tracking ID: 4124417)

SYMPTOM:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.

DESCRIPTION:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel as SLES community maintains Azure kernel separately.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP4 for Azure kernel is now introduced.

Patch ID: VRTSllt-8.0.0.2300

* 4111469 (Tracking ID: 4090439)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).

DESCRIPTION:
Veritas Infoscale Availability does not support SUSE Linux Enterprise Server
versions released after SLES 15 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP4 is
now introduced.

* 4112345 (Tracking ID: 4087662)

SYMPTOM:
During memory fragmentation, the LLT module may fail to allocate large memory, leading to node eviction or a node not being able to join.

DESCRIPTION:
When system memory is heavily fragmented, LLT module fails to allocate memory in the form of Linux socket buffers (SKB) from the OS. Due to this a 
cluster node may not be able to join the cluster or a node may get evicted from the cluster.

RESOLUTION:
This hotfix updates the LLT module so that memory is allocated from private memory pools maintained inside LLT; if the pools are exhausted, the LLT module tries to allocate memory through vmalloc.

Patch ID: VRTSllt-8.0.0.1800

* 4061158 (Tracking ID: 4061156)

SYMPTOM:
I/O error on the /sys/kernel/slab folder.

DESCRIPTION:
After the LLT module is loaded, the ls command throws an I/O error on the /sys/kernel/slab folder.

RESOLUTION:
The I/O error on the /sys/kernel/slab folder no longer occurs after the LLT module is loaded.

* 4079637 (Tracking ID: 4079636)

SYMPTOM:
The kernel panics with a null pointer dereference in llt_dump_mblk when LLT is configured over IPsec.

DESCRIPTION:
LLT uses the skb's sp pointer to chain socket buffers internally. When LLT is configured over IPsec, LLT receives skbs from the IP layer with the sp pointer set. These skbs were wrongly identified by LLT as chained skbs. The sp pointer field is now reset before being reused for internal chaining.

RESOLUTION:
No panic is observed after applying this patch.

* 4079662 (Tracking ID: 3981917)

SYMPTOM:
LLT UDP multiport was previously supported only on 9000 MTU networks.

DESCRIPTION:
Previously, the LLT UDP multiport configuration required network links to have 9000 MTU. The UDP multiport code has been enhanced so that this LLT feature can now also be configured and run on 1500 MTU links.

RESOLUTION:
LLT UDP multiport can be configured on 1500 MTU based networks.

* 4080630 (Tracking ID: 4046953)

SYMPTOM:
During LLT configuration, messages related to 9000 MTU are printed as errors.

DESCRIPTION:
On Azure, error messages related to 9000 MTU are logged. These messages indicate that 9000 MTU based networks should be used for optimal performance; they are suggestions, not actual errors.

RESOLUTION:
Because Azure does not support 9000 MTU, these messages are removed to avoid confusion.

Patch ID: VRTSllt-8.0.0.1400

* 4066063 (Tracking ID: 4066062)

SYMPTOM:
Node panic

DESCRIPTION:
Node panic observed in an LLT UDP multiport configuration with the VxVM ioship stack.

RESOLUTION:
When LLT receives an acknowledgement, it used to free the packet and the corresponding client frags blindly, without checking the client status. If the client is unregistered, the free functions of the frags are invalid and must not be called; LLT now checks the client status before freeing.

* 4066667 (Tracking ID: 4040261)

SYMPTOM:
During LLT configuration, if set-verbose is set to 1 in /etc/llttab, an lltconfig core dump is observed.

DESCRIPTION:
Some log messages may have IDs like 00000. When such logs are encountered, it may lead to a core dump by the lltconfig process.

RESOLUTION:
VCS is updated to use appropriate message IDs for logs so that such issues do not occur.

Patch ID: VRTSvcsea-8.0.0.3200

* 4188599 (Tracking ID: 4180091)

SYMPTOM:
The offline script times out due to the delay introduced by the fuser check.

DESCRIPTION:
The ASMDG resource times out while being taken offline through VCS, although the same operation completes quickly outside of VCS control. After executing the query "alter diskgroup <DISKGROUPS> dismount;", the offline script runs a fuser check on the underlying disks to see if any device is still in use, and hangs.

RESOLUTION:
The SQL query is altered to get the list of disks on which to run the fuser check, and the operator precedence in the SQL statement is corrected.

Patch ID: VRTSvcsea-8.0.0.2500

* 4118769 (Tracking ID: 4073508)

SYMPTOM:
Oracle virtual fire-drill is failing due to Oracle password file location changes from Oracle version 21c.

DESCRIPTION:
Oracle password file has been moved to $ORACLE_BASE/dbs from Oracle version 21c.

RESOLUTION:
Environment variables are used for pointing the updated path for the password file.

It is mandatory from Oracle 21c and later versions for a client to configure the .env file path in the EnvFile attribute. This file must include the ORACLE_BASE path for the Oracle virtual fire-drill feature to work.

Sample EnvFile content with the ORACLE_BASE path for Oracle 21c (/opt/VRTSagents/ha/bin/Oracle/envfile):
ORACLE_BASE="/u02/app/oracle/product/21.0.0/dbhome_1/"; export ORACLE_BASE;

Sample attribute value: EnvFile = "/opt/VRTSagents/ha/bin/Oracle/envfile"

Patch ID: VRTSdbed-8.0.0.1800

* 4079372 (Tracking ID: 4073842)

SYMPTOM:
Oracle 21c is not supported on earlier product versions.

DESCRIPTION:
Oracle 21c was not supported with Storage Foundation for Databases.

RESOLUTION:
Changes are done to support Oracle 21c with Storage Foundation for Databases.

Patch ID: VRTSvcsag-8.0.0.3200

* 4118448 (Tracking ID: 4075950)

SYMPTOM:
When the IPv6 VIP switches from node1 to node2 in a cluster, it takes a longer time for the neighboring information to be updated and for traffic to reach node2 on the reassigned address.

DESCRIPTION:
After the service group switches from node1 to node2, the IPv6 VIP is not reachable from the network switch. The MAC address changes after the node switch, but the network is not updated. Just as gratuitous ARP is used for an IPv4 VIP, when an IPv6 VIP switches from node1 to node2 the network must be updated with the MAC address change.

RESOLUTION:
The network devices that communicate with the VIP are not able to establish a connection with it. To connect with the VIP, the VIP is pinged from the switch, or the 'ip -6 neighbor flush all' command is run from the cluster nodes. Neighbour flush logic is added to the IP/MultiNIC agents so that the MAC ID changed during floating VIP switchover is updated in the network.

* 4121275 (Tracking ID: 4121270)

SYMPTOM:
EBSvol agent error in attach disk : RHEL 7.9 + Infoscale 8.0 on AWS instance type c6i.large with NVME devices.

DESCRIPTION:
After a volume is attached to an instance, it takes some time for the device mapping to be updated in the system. If 'lsblk -d -o +SERIAL' is run immediately after attaching the volume, the volume details do not appear in the output, and $native_device is left blank/uninitialized.

So, the agent needs to wait for some time for the device mapping to be updated in the system.

RESOLUTION:
Logic is added to retry the command once after an interval if the expected volume device mapping is not found in the first run. The NativeDevice attribute is now updated properly.

* 4122004 (Tracking ID: 4122001)

SYMPTOM:
NIC resource remain online after unplug network cable on ESXi server.

DESCRIPTION:
Previously, MII checked network statistics and performed a ping test, but now the agent marks the NIC state ONLINE directly by checking the NIC status in operstate, with no ping check beforehand. Only if the operstate file cannot be detected does it fall back to the ping test. In an ESXi server environment, the NIC is already marked ONLINE because the operstate file is available with state UP and the carrier bit set. So even if the "NetworkHosts" are not reachable, the NIC resource is marked ONLINE.

RESOLUTION:
The NIC agent already has the "PingOptimize" attribute. A new value (2) is introduced for the "PingOptimize" attribute to decide whether to perform the ping test. Only if "PingOptimize = 2" does the agent perform the ping test; otherwise it works as per the previous design.
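
A hedged main.cf sketch of a NIC resource that opts into the new ping behavior; the resource name, device, and network host are hypothetical examples, not values from this patch:

```
NIC esx_nic (
    Device = eth0
    PingOptimize = 2                  // perform the ping test before marking ONLINE
    NetworkHosts = { "192.168.1.1" }
    )
```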

* 4127323 (Tracking ID: 4127320)

SYMPTOM:
The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.

DESCRIPTION:
The agent fails to bring online a resource when the shell for the user is set to /sbin/nologin.

RESOLUTION:
The ProcessOnOnly agent is enhanced to support the /sbin/nologin shell. If the shell is set to /sbin/nologin, the agent uses /bin/bash as shell to start the process.

* 4161344 (Tracking ID: 4152812)

SYMPTOM:
AWS EBSVol agent takes long time to perform online and offline operations on resources.

DESCRIPTION:
When a large number of AWS EBSVol resources are configured, it takes a long time to perform online and offline operations on these resources. EBSVol is a single-threaded agent, which prevents parallel execution of attach and detach EBS volume commands.

RESOLUTION:
To resolve the issue, the default value of the 'NumThreads' attribute of the EBSVol agent is changed from 1 to 10, and the agent is enhanced to use a locking mechanism to avoid conflicting resource configuration. This improves response time through parallel execution of attach and detach commands.
Also, the default value of the MonitorTimeout attribute is changed from 60 to 120. This avoids a timeout of the monitor entry point when the response of the AWS CLI/server is unexpectedly slow.
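
The new defaults described above correspond to the following static attribute values on the EBSVol resource type; this is a hedged types.cf-style sketch showing only the two changed attributes, not the full type definition:

```
type EBSVol (
    static int NumThreads = 10       // was 1; allows parallel attach/detach
    static int MonitorTimeout = 120  // was 60; tolerates slow AWS CLI responses
    )
```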

* 4161350 (Tracking ID: 4152815)

SYMPTOM:
An AWS EBS volume in use by another AWS instance can be attached to cluster nodes through the AWS EBSVol agent.

DESCRIPTION:
An AWS EBS volume attached to an AWS instance that is not part of the cluster gets attached to a cluster node during the online event.

Instead, an 'Unable to detach volume' message should be logged, because the volume is already in use by another AWS instance.

RESOLUTION:
The AWS EBSVol agent is enhanced to avoid attaching in-use EBS volumes whose instances are not part of the cluster.

* 4162952 (Tracking ID: 4142040)

SYMPTOM:
While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs/conf/config/types.cf' file on Veritas Cluster Server (VCS) might be incorrectly updated.

DESCRIPTION:
While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs/conf/config/types.cf' file on Veritas Cluster Server (VCS) might be incorrectly updated.
In some instances, the user might be informed to manually copy '/etc/VRTSvcs/conf/types.cf' to the existing '/etc/VRTSvcs/conf/config/types.cf' file. The message "Implement /etc/VRTSvcs/conf/types.cf to utilize resource type updates" needed to be removed when updating the VRTSvcsag rpm.

RESOLUTION:
To ensure that the '/etc/VRTSvcs/conf/config/types.cf' file is updated correctly after VRTSvcsag updates, the user_trigger_update_types script can be manually triggered by the user. The following message displays:
Leaving existing /etc/VRTSvcs/conf/config/types.cf configuration file unmodified
Copy /opt/VRTSvcs/bin/sample_triggers/VRTSvcs/user_trigger_update_types to /opt/VRTSvcs/bin/triggers
To manually update the types.cf, execute command "hatrigger -user_trigger_update_types 0"

* 4163231 (Tracking ID: 4152700)

SYMPTOM:
When Private DNS Zone resource ID is passed, the AzureDNSZone Agent returns an error saying that the resource cannot be found.

DESCRIPTION:
Azure Private DNS Zone with AzureDNSZone Agent is not supported.

RESOLUTION:
The Azure Private DNS Zone is supported by the AzureDNSZone agent by installing the Azure library for Private DNS Zone (azure-mgmt-privatedns). This library has functions that can be utilized for Private DNS Zone operations. Because the resource ID differs between Public and Private DNS zones, the resource ID is parsed and the resource type is checked to determine whether the zone is Public or Private, and the corrective actions are taken accordingly.

* 4163234 (Tracking ID: 4152886)

SYMPTOM:
AWSIP agent fails to bring OverlayIP resources online and offline on the instances in a VPC that is shared across multiple AWS accounts.

DESCRIPTION:
When VPC is shared across multiple AWS accounts, route table associated with the subnets is exclusively owned by the owner account. AWS restricts 
the modification in the route table from any other account. When AWSIP agent tries to bring OverlayIP resource online on the instance owned by a different 
account, the account may not have privileges to update the route table. In such cases, AWSIP agent fails to edit the route table, and fails to bring OverlayIP resource 
online and offline.

RESOLUTION:
To support cross-account deployment, assign appropriate privileges on the shared resources. Create an AWS profile to grant permissions to update the 
route table of VPC through different nodes belonging to different AWS accounts. This profile is used by the AWSIP agent to update route tables.

A new attribute "Profile" is introduced in the AWSIP agent. Configure this attribute with using the newly created profile.

Note: Adding "ec2:DescribeVpcs" permission is mandatory to use this patch as this is being used to decide if VPC is shared or not. 
Refer to the following technote for more details. https://sort.veritas.com/DocPortal/pdf/infoscale_802u2_awsiptechnote
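
A hedged main.cf sketch of an AWSIP resource using the new attribute; the resource name, overlay address, device, and profile name are hypothetical, and the named AWS profile (with permission to update the shared VPC route table, including ec2:DescribeVpcs) must already exist on the node:

```
AWSIP overlay_ip (
    OverlayIP = "172.32.0.10/32"
    Device = eth0
    Profile = "cross-account-profile"   // profile allowed to edit the shared route table
    )
```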

* 4165268 (Tracking ID: 4162658)

SYMPTOM:
LVMVolumeGroup resource fails to offline/clean in cloud environment after path failure.

DESCRIPTION:
If a disk is detached, the LVMVolumeGroup resource cannot fail over.

RESOLUTION:
The PanicSystemOnVGLoss attribute is implemented:
0 - Default value and behaviour; does not fail over (does not halt the system).
1 - Halt the system if deactivation of the volume group fails.
2 - Do not halt the system; allow failover (note the risk of data corruption).
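
A hedged main.cf sketch showing the new attribute on an LVMVolumeGroup resource; the resource and volume group names are hypothetical:

```
LVMVolumeGroup lvm_vg (
    VolumeGroup = datavg
    PanicSystemOnVGLoss = 2   // allow failover without halting; note the data corruption risk
    )
```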

* 4169950 (Tracking ID: 4169794)

SYMPTOM:
The NIC resource faults when there is no IP, and attributes like NetworkHosts/PingTest are configured for an interface/aggregation.

DESCRIPTION:
Network aggregations or interfaces that have no IP set but are to be monitored fail checks such as ipadm and PingTest. To monitor a device state at the data link layer, a new attribute 'LinkTest' is added to monitor a network device by using 'dladm' and skip the IP-based checks (such as ipadm and the ping test). So, even if 'NetworkHosts' or an IP is not set, the resource does not fault and reports ONLINE based on the state from dladm.

RESOLUTION:
A new attribute 'LinkTest' is added to monitor a network device for which no IP is configured, so that the resource can be monitored by dladm at the data link layer and/or by ipadm and ping tests at the network layer, as required.

* 4177815 (Tracking ID: 4175426)

SYMPTOM:
VMwareDisk Agent taking longer time to failover.

DESCRIPTION:
The VMwareDisk agent relies on VxVM for a fast response. When the VxVM package is not installed (for example, when only Availability is configured), the agent waits for a return from vxdisk, which does not exist, so failover takes more time.

RESOLUTION:
The agent now verifies vxdctl mode, and only if VxVM is enabled does it invoke vxdisk.

* 4178980 (Tracking ID: 4175426)

SYMPTOM:
VMwareDisk Agent taking longer time to failover.

DESCRIPTION:
The VMwareDisk agent relies on VxVM for a fast response. When the VxVM package is not installed (for example, when only Availability is configured), the agent waits for a return from vxdisk, which does not exist, so failover takes more time.

RESOLUTION:
The agent now verifies vxdctl mode, and only if VxVM is enabled does it invoke vxdisk.

* 4188654 (Tracking ID: 4188318)

SYMPTOM:
Repro steps:
1) Run hastart on the node.
2) The KVM agent OPEN entry point is called; if the environment is invalid, the KVM VCS resource is put into the UNKNOWN state and the invalid-environment file is created.
3) The user corrects the environment.
4) The resource is probed. The resource state does not change because the invalid-environment file is present; the user has to remove it manually.

DESCRIPTION:
Same as above.

RESOLUTION:
The agent monitor is enhanced to automatically remove the invalid-environment file if the environment is valid.

Patch ID: VRTSvcsag-8.0.0.2500

* 4118318 (Tracking ID: 4113151)

SYMPTOM:
Dependent DiskGroupAgent fails to get its resource online due to disk group import failure.

DESCRIPTION:
VMwareDisksAgent reports its resource online just after the VMware disk is attached to the virtual machine. If a dependent DiskGroup resource starts to come online at that moment, it fails because the VMware disk is not yet present in the vxdmp database due to VxVM transaction latency. Customers used to add retry attempts to work around this problem, but the same workaround cannot be applied to every environment.

RESOLUTION:
Added a finite period of wait for the VMware disk to be present in the vxdmp database before the online operation completes.

* 4118448 (Tracking ID: 4075950)

SYMPTOM:
When IPv6 VIP switches from node1 to node2 in a cluster,
a longer time is taken to update its neighboring information and traffic to reach node2 which is on the reassigned address.

DESCRIPTION:
After the Service group switches from node1 to node2, the IPv6 VIP is not reachable from the network switch. The mac address changes after the node switch, but the network is not updated. Similar to IPv4 VIP by gracious ARP, in case of IPV6 VIP switch from node1 to node2; the network must be updated for the mac address change.

RESOLUTION:
The network devices which communicate with the VIP are not able to establish a connection with the VIP. To connect with the VIP, the VIP is pinged from the switch or from the cluster nodes 'ip -6 neighbor flush all' command is run. Neighbour flush logic is added to IP/MultiNIC agents so that the changed mac id during floating VIP switchover is updated in the network.

* 4118455 (Tracking ID: 4118454)

SYMPTOM:
When the root user's login shell is set to /sbin/nologin in the /etc/passwd file, the Process agent resource fails to come online.

DESCRIPTION:
From the engine_A.log,  the below errors were logged:
2023/05/31 11:34:52 VCS NOTICE V-16-10031-20704 Process:Process:imf_getnotification:Received notification for vxamf-group sendmail
2023/05/31 11:35:38 VCS ERROR V-16-10031-9502 Process:sendmail:online:Could not online the resource, make sure user-name is correct.
2023/05/31 11:35:39 VCS INFO V-16-2-13716 Thread(140147853162240) Resource(sendmail): Output of the completed operation (online)
==============================================
This account is currently not available.
==============================================

RESOLUTION:
The Process agent is enhanced to support the nologin shell for the root user. If the user shell is set to /sbin/nologin, the agent starts the process using the /bin/bash shell.

* 4118767 (Tracking ID: 4094539)

SYMPTOM:
The MonitorProcesses argument in the resource ArgListValues passed to the bundled Application agent incorrectly removes a needed extra space from the monitored process, as found via the recommended CLI process test.

DESCRIPTION:
The extra space in the monitored process shows up in the ArgListValues under MonitorProcesses even when the resource is displayed.

RESOLUTION:
For the monitored process (not program), only leading and trailing spaces are now removed; extra spaces between words are preserved.

Patch ID: VRTSgms-8.0.0.3600

* 4189081 (Tracking ID: 4184622)

SYMPTOM:
GMS module failed to load on SLES15-SP6 kernel.

DESCRIPTION:
This issue occurs due to changes in the SLES15-SP6 kernel.

RESOLUTION:
GMS module is updated to accommodate the changes in the kernel and load as expected on SLES15-SP6 kernel.

Patch ID: VRTSgms-8.0.0.2800

* 4057427 (Tracking ID: 4057176)

SYMPTOM:
Rebooting the system results into emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through file lock.

* 4119111 (Tracking ID: 4119110)

SYMPTOM:
GMS module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
GMS module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSgms-8.0.0.2400

* 4111346 (Tracking ID: 4092229)

SYMPTOM:
The GMS module fails to load on SLES15 SP4.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP4 kernel.

RESOLUTION:
GMS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP4.

Patch ID: VRTSgms-8.0.0.1800

* 4079190 (Tracking ID: 4071136)

SYMPTOM:
/etc/vx/gms.config file is not created during GMS rpm installation.

DESCRIPTION:
The /etc/vx/gms.config file is not created when the GMS rpm is installed. It has to be created manually by the user to control GMS start/stop through the GMS_START macro.

RESOLUTION:
The GMS rpm spec is changed to create gms.config during installation of the GMS rpm.

Patch ID: VRTSgms-8.0.0.1200

* 4065686 (Tracking ID: 4056803)

SYMPTOM:
The GMS module fails to load on SLES15 SP3.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP3 kernel.

RESOLUTION:
GMS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP3.

Patch ID: VRTSveki-8.0.0.3600

* 4188696 (Tracking ID: 4182362)

SYMPTOM:
Veritas Infoscale Availability does not support SLES15SP6.

DESCRIPTION:
Veritas Infoscale Availability does not support SLES15SP6.

RESOLUTION:
Veritas Infoscale Availability support for SLES15SP6 is now introduced.

Patch ID: VRTSveki-8.0.0.2800

* 4118568 (Tracking ID: 4110457)

SYMPTOM:
Veki packaging failure due to missing storageapi-specific files.

DESCRIPTION:
While creating the build area for different components such as GLM, GMS, ORAODM, unixvm, and VxFS, the Veki build area creation failed because the storageapi changes were not taken care of in the Veki mk-symlink and build scripts.

RESOLUTION:
Added support for creating the storageapi build area, storageapi packaging via Veki, and the storageapi build via Veki from the Veki makefiles. This helps package storageapi along with Veki and resolves all interdependencies.

* 4119216 (Tracking ID: 4119215)

SYMPTOM:
VEKI module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
VEKI module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSveki-8.0.0.2400

* 4111580 (Tracking ID: 4111579)

SYMPTOM:
The VEKI module fails to load on SLES15 SP4.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP4 kernel.

RESOLUTION:
The VEKI module is updated to accommodate the changes in the kernel and now loads as expected on SLES15 SP4.

Patch ID: VRTSperl-5.34.0.4

* 4072234 (Tracking ID: 4069607)

SYMPTOM:
Security vulnerability detected on VRTSperl 5.34.0.0 released with Infoscale 8.0.

DESCRIPTION:
Security vulnerability detected in the Net::Netmask module.

RESOLUTION:
Upgraded the Net::Netmask module and re-created VRTSperl version 5.34.0.1 to fix the vulnerability.

* 4075150 (Tracking ID: 4075149)

SYMPTOM:
Security vulnerabilities detected in OpenSSL packaged VRTSperl/VRTSpython released with Infoscale 8.0.

DESCRIPTION:
Security vulnerabilities detected in OpenSSL.

RESOLUTION:
Upgraded the OpenSSL version and re-created VRTSperl/VRTSpython to fix the vulnerabilities.

Patch ID: VRTSdbac-8.0.0.2800

* 4178724 (Tracking ID: 4164328)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 4 (RHEL9.4) is now introduced.

Patch ID: VRTSdbac-8.0.0.2400

* 4124424 (Tracking ID: 4124417)

SYMPTOM:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.

DESCRIPTION:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel as SLES community maintains Azure kernel separately.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP4 for Azure kernel is now introduced.

Patch ID: VRTSdbac-8.0.0.2300

* 4111610 (Tracking ID: 4090439)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).

DESCRIPTION:
Veritas Infoscale Availability does not support SUSE Linux Enterprise Server
versions released after SLES 15 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP4 is
now introduced.

Patch ID: VRTSdbac-8.0.0.1800

* 4089728 (Tracking ID: 4089722)

SYMPTOM:
VRTSgab , VRTSamf and VRTSdbed driver does not load on RHEL and SLES platform.

DESCRIPTION:
Need recompilation of VRTSgab , VRTSamf and VRTSdbed with latest changes.

RESOLUTION:
Recompiled the VRTSgab , VRTSamf and VRTSdbed.

Patch ID: VRTSdbac-8.0.0.1200

* 4056997 (Tracking ID: 4056991)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 3 (SLES 15 SP3).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 15 SP2.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP3 is
now introduced.

Patch ID: VRTSpython-3.9.2.25

* 4189377 (Tracking ID: 4154096)

SYMPTOM:
An open, exploitable security issue is seen in the PyKMIP module shipped in the current VRTSpython.

DESCRIPTION:
There are open, exploitable security issues in the PyKMIP module under VRTSpython.

RESOLUTION:
Fixed the security vulnerability in the PyKMIP module under VRTSpython.

Patch ID: VRTSfsadv-8.0.0.2600

* 4103001 (Tracking ID: 4103002)

SYMPTOM:
Replication failures observed in internal testing

DESCRIPTION:
Replication-related code changes were made in the VxFS repository to fix replication failures. The replication binaries are part of VRTSfsadv.

RESOLUTION:
Compiled VRTSfsadv with VxFS changes.

Patch ID: VRTSfsadv-8.0.0.2100

* 4092150 (Tracking ID: 4088024)

SYMPTOM:
Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.

DESCRIPTION:
VxFS uses OpenSSL third-party components in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use a newer version (1.1.1q) of these third-party components, in which the security vulnerabilities have been addressed. To accommodate the changes, vxfs_solutions is updated with libboost_system entries in the Makefile [dedup/pdde/sdk/common/Makefile].

Patch ID: VRTSvxfs-8.0.0.1800

* 4078335 (Tracking ID: 4076412)

SYMPTOM:
Addressing Executive Order (EO) 14028: initial requirements intended to improve the Federal Government's investigative and remediation capabilities related to cybersecurity incidents. The Executive Order helps improve the nation's cybersecurity and also enhances any organization's cybersecurity and software supply chain integrity.

DESCRIPTION:
The Executive Order helps improve the nation's cybersecurity and enhances any organization's cybersecurity and software supply chain integrity. Some of the initial requirements enable logging that is compliant with the Executive Order. This comprises command logging, logging unauthorized access in the filesystem, and logging WORM events on the filesystem. It also includes changes to display IP addresses for Veritas File Replication at the control plane, based on a tunable.

RESOLUTION:
The initial requirements of the EO are addressed in this release.

Per the Executive Order, some requirements are tunable-based: IP logging wherever applicable (for VFR, at the control plane only, not for every data transfer), and logging of certain kernel events, such as WORM events, to syslog.

A new tunable, eo_logging_enable, is introduced. Its introduction involves a protocol change, which also affects update patches.

For VFR, the source and destination IP addresses are logged as part of the EO. The IP addresses are included in the log when starting or resuming a job in VFR.
Log location: /var/VRTSvxfs/replication/log/mount_point-job_name.log

There are two ways to fetch the IP addresses of the source and target. The first is to read the IP addresses stored in the link structure of a session; these are obtained by resolving the source and target hostnames, may contain both IPv4 and IPv6 addresses for a node, and do not indicate which IP the actual connection used. The second is to obtain the socket descriptor from an active connection of the session and fetch the source and target IPs associated with it; this yields the actual IP addresses used for the connection. The change fetches the IP addresses from the socket descriptor after the connections are established.
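The socket-descriptor approach can be sketched with the standard getsockname(2)/getpeername(2) calls. This is a minimal illustration only, not the VxFS implementation; the function names below are hypothetical, and a loopback TCP pair stands in for a replication session:

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Resolve the local ("source") and peer ("target") IPv4 addresses of a
 * connected socket descriptor; returns 0 on success, -1 on failure.
 * Buffers must be at least INET_ADDRSTRLEN bytes. */
int get_conn_ips(int fd, char *src, char *dst)
{
    struct sockaddr_in local, peer;
    socklen_t len = sizeof(local);

    if (getsockname(fd, (struct sockaddr *)&local, &len) != 0)
        return -1;
    len = sizeof(peer);
    if (getpeername(fd, (struct sockaddr *)&peer, &len) != 0)
        return -1;
    if (!inet_ntop(AF_INET, &local.sin_addr, src, INET_ADDRSTRLEN) ||
        !inet_ntop(AF_INET, &peer.sin_addr, dst, INET_ADDRSTRLEN))
        return -1;
    return 0;
}

/* Demo helper: build a connected loopback TCP pair so the lookup can be
 * exercised without a real replication session. */
int make_loopback_pair(int *client, int *accepted)
{
    struct sockaddr_in addr;
    socklen_t alen = sizeof(addr);
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int cli = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                       /* ephemeral port */
    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) != 0 ||
        listen(srv, 1) != 0 ||
        getsockname(srv, (struct sockaddr *)&addr, &alen) != 0 ||
        connect(cli, (struct sockaddr *)&addr, alen) != 0)
        return -1;
    *accepted = accept(srv, NULL, NULL);
    *client = cli;
    close(srv);
    return (*accepted >= 0) ? 0 : -1;
}
```

Unlike hostname resolution, this always reports the addresses the kernel actually used for the connection.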


Patch ID: VRTSfsadv-8.0.0.1200

* 4066092 (Tracking ID: 4057644)

SYMPTOM:
A warning appears in dmesg: SysV service '/etc/init.d/fsdedupschd' lacks a native systemd unit file.

DESCRIPTION:
When the fsdedupschd service is started through init, this warning is raised because init.d scripts will soon be deprecated.

RESOLUTION:
Code changes have been made to make fsdedupschd systemd-compatible.
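A native systemd unit that replaces the SysV script would look roughly like the following. This is a sketch only; the unit path, Type, and Exec lines are assumptions for illustration, not the shipped unit file:

```ini
# /usr/lib/systemd/system/fsdedupschd.service  (illustrative path)
[Unit]
Description=VxFS deduplication scheduler
After=network.target

[Service]
# Wrap the existing init script until it is fully converted.
Type=forking
ExecStart=/etc/init.d/fsdedupschd start
ExecStop=/etc/init.d/fsdedupschd stop
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

With a unit file in place, systemd manages the service directly and the SysV compatibility warning disappears.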

Patch ID: VRTSsfmh-HF0800510

* 4157270 (Tracking ID: 4157265)

SYMPTOM:
NA

DESCRIPTION:
NA

RESOLUTION:
NA

Patch ID: VRTSvxfs-8.0.0.3600

* 4163383 (Tracking ID: 4155961)

SYMPTOM:
System panic due to null i_fset in vx_rwlock().

DESCRIPTION:
Panic in vx_rwlock due to race between vx_rwlock() and vx_inode_deinit() function.
Panic stack
[exception RIP: vx_rwlock+174]
.
.
#10  __schedule
#11  vx_write
#12  vfs_write
#13  sys_pwrite64
#14  system_call_fastpath

RESOLUTION:
Code changes have been done to fix this issue.

* 4188774 (Tracking ID: 3743572)

SYMPTOM:
When the number of inodes (regular files and directories together) of a clustered file system exceeds the 1-billion limit, the CFS secondary node may hang indefinitely when trying to allocate more inodes with the following stack traces:

vx_svar_sleep_unlock
vx_event_wait
vx_async_waitmsg
vx_msg_send
llt_msgalloc
vx_cfs_getias
vx_update_ilist
vx_find_partial_au
vx_cfs_noinode
vx_noinode
vx_dircreate_tran
vx_pd_create
vx_dirlook
vx_create1_pd
vx_create1
vx_create_vp
vx_create

DESCRIPTION:
The maximum number of inodes supported by VxFS is 1 billion.
And the maximum number of inode allocation units (IAU) is 16384.
When the file system is running out of inodes, and the maximum inode 
allocation unit(IAU) limit is reached, VxFS can still create two extra IAUs 
if there is a hole in the last IAU. Because of the hole, when a CFS secondary 
requests more inodes, the CFS primary still thinks there is a hole available and 
notifies the secondary to retry. However, the secondary fails to find a slot 
since the 1 billion limit is hit, then it goes back to the primary to 
request free inodes again, and this loops infinitely, hence the hang.

RESOLUTION:
When the maximum IAU number is reached, prevent the primary from creating the extra IAUs.

* 4188801 (Tracking ID: 4164638)

SYMPTOM:
The VxFS fsck binary consumes a lot of memory.

DESCRIPTION:
Thread-local heap memory is not freed, which causes fsck to unnecessarily hold a large amount of user-space memory and might degrade overall system performance.

RESOLUTION:
Code changes have been made to fix the issue.

* 4188802 (Tracking ID: 4164888)

SYMPTOM:
A WORM checkpoint of a regular checkpoint cannot be created, and incorrect MAXTS error information is reported.

DESCRIPTION:
With the fsckptadm createall option, a larger timestamp is allowed if any one of the file systems has MAXTS set. The default should be to allow the larger timestamp, and to prevent it if any file system does not have MAXTS set. Otherwise the command might try (and fail) to set a timestamp beyond what is supported on one of a set of file systems as long as some other member of the set supports the feature. In addition, because of incorrect string parsing, creation of a WORM checkpoint of a regular checkpoint was prohibited.

RESOLUTION:
Code changes are done to solve both issues.

* 4188803 (Tracking ID: 4165264)

SYMPTOM:
Kernel-space memory is wasted.

DESCRIPTION:
Memory is not freed when the VX_CLONEFSET ioctl does not have the WORM flag set but the retention period is non-zero.

RESOLUTION:
Fixed the leak through code changes.

* 4189077 (Tracking ID: 4187360)

SYMPTOM:
VxFS module failed to load on SLES15-SP6 kernel.

DESCRIPTION:
This issue occurs due to changes in the SLES15-SP6 kernel.

RESOLUTION:
The VxFS module is updated to accommodate the changes in the kernel and now loads as expected on the SLES15-SP6 kernel.

Patch ID: VRTSvxfs-8.0.0.3100

* 4154855 (Tracking ID: 4141665)

SYMPTOM:
Security vulnerabilities exist in the Zlib third-party components used by VxFS.

DESCRIPTION:
VxFS uses Zlib third-party components with some security vulnerabilities.

RESOLUTION:
VxFS is updated to use a newer version of Zlib third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.0.2900

* 4092518 (Tracking ID: 4096267)

SYMPTOM:
Veritas File Replication jobs might fail when a large number of jobs run in parallel.

DESCRIPTION:
File replication jobs might fail when a large number of jobs are configured and running in parallel with Veritas File Replication.
With a large number of jobs, a job that has already been freed might be referenced, which generates a core in the replication service and
may cause the job to fail.

RESOLUTION:
Updated the code to take a hold while checking for an invalid job configuration.

* 4097466 (Tracking ID: 4114176)

SYMPTOM:
After failover, job sync fails with error "Device or resource busy".

DESCRIPTION:
If a job was in the failed state on the target because of a job failure on the source side, repld did not update the job's state when it was restarted in recovery mode. Because of this, the job state remained 'running' even after successful replication on the target. With this state on the target, if the job was promoted, the replication process did not create a new checkpoint for the first sync after failover, which corrupted the state file on the new source. Because of this incorrect/corrupt state file, job sync from the new source failed with the error "Device or resource busy".

RESOLUTION:
Code is modified to correct the state on the target when the job is started in recovery mode.

* 4107367 (Tracking ID: 4108955)

SYMPTOM:
VFR job hangs on source if thread creation fails on target.

DESCRIPTION:
On the target, if thread creation for pass completion fails because of high memory usage, the repld daemon does not send that failure reply to the source. This can leave the vxfsreplicate process waiting indefinitely for the pass-completion reply from the target, causing the job to hang on the source and requiring manual intervention to kill the job.

RESOLUTION:
Code is modified to retry thread creation on the target; if it still fails after 5 retries, the target replies to the source with an appropriate error.

* 4111457 (Tracking ID: 4117827)

SYMPTOM:
Without a tunable change, the log file permissions are always 600, which is EO-compliant.

DESCRIPTION:
Tunable values and behavior:

Value                   Behavior
0 (default)          600 permissions, update existing file permissions on upgrade
1                    640 permissions, update existing file permissions on upgrade
2                    644 permissions, update existing file permissions on upgrade
3                    Inherit umask, update existing file permissions on upgrade
10                   600 permissions, don't touch existing file permissions on upgrade
11                   640 permissions, don't touch existing file permissions on upgrade
12                   644 permissions, don't touch existing file permissions on upgrade
13                   Inherit umask, don't touch existing file permissions on upgrade
--------------------------------------------------------------------------------------

A new tunable is added as part of the vxtunefs command; it is a per-node global tunable (not per filesystem).
For the Executive Order, CPI provides a workflow to update the tunable during installation/upgrade/configuration,
which takes care of updating it on all nodes.

RESOLUTION:
A new tunable is added to the vxtunefs command.
How to set the tunable:
/opt/VRTS/bin/vxtunefs -D eo_perm=1

* 4112417 (Tracking ID: 4094326)

SYMPTOM:
mdb invocation displays message "failed to add vx_sl_node_level walker: walk name already in use"

DESCRIPTION:
In vx_sl_kmcache_init(), kmcache is initialized for each level (in this case it is 8) separately. For passing the cache name as an argument to kmem_cache_create(), we have used a macro.

#define VX_SL_KMCACHE_NAME(level)       "vx_sl_node_"#level
#define VX_SL_KMCACHE_CREATE(level)                                     \
                kmem_cache_create(VX_SL_KMCACHE_NAME(level),            \
                                  VX_KMEM_SIZE(VX_SL_KMCACHE_SIZE(level)),\
                                  0, NULL, NULL, NULL, NULL, NULL, 0);


While using this macro, the variable "level" is passed as an argument, and the preprocessor expands it to "vx_sl_node_level" for all 8 levels of the `for` loop. This causes the caches for all 8 levels to be allocated with the same name.

RESOLUTION:
Pass a separate variable value (as the level value) to VX_SL_KMCACHE_NAME, as is done in vx_wb_sl_kmcache_init().
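The root cause is C preprocessor stringization: the `#` operator stringizes the argument token, not its run-time value, so passing a variable named `level` yields the same name on every loop iteration. A minimal reproduction (the helper function here is illustrative, not VxFS code):

```c
#include <assert.h>
#include <string.h>

/* Same shape as the VxFS macro: # stringizes the argument TOKEN. */
#define VX_SL_KMCACHE_NAME(level) "vx_sl_node_" #level

/* Passing a variable named `level` (as the loop in vx_sl_kmcache_init did)
 * therefore yields the identical string regardless of its value. */
static const char *buggy_name_for(int level)
{
    (void)level;                       /* value is never used by the macro */
    return VX_SL_KMCACHE_NAME(level);  /* always "vx_sl_node_level" */
}
```

Invoking the macro with a distinct literal per level (e.g. `VX_SL_KMCACHE_NAME(3)`) expands to `"vx_sl_node_3"`, which is why the fix passes separate level values as vx_wb_sl_kmcache_init() does.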

* 4114621 (Tracking ID: 4113060)

SYMPTOM:
On SLES15 SP4 & RHEL9, executing a binary on a VxFS mountpoint resulted in an EINVAL error. dmesg showed the error "kernel read not supported for file".

DESCRIPTION:
This was due to changes in recent kernels that required modifications in the way the file operations vector is initialized for VxFS.

RESOLUTION:
Added code to correctly update the file operations vector to fix this issue.

* 4118795 (Tracking ID: 4100021)

SYMPTOM:
Running setfacl followed by getfacl results in a "No such device or address" error.

DESCRIPTION:
When the setfacl command is run on directories that have the VX_ATTR_INDIRECT type of ACL attribute, it does not remove the existing ACL attribute when adding the new one, which ideally should not happen. This results in getfacl failing with the "No such device or address" error.

RESOLUTION:
Code changes have been made to remove the VX_ATTR_INDIRECT type ACL in the setfacl code path.

* 4119023 (Tracking ID: 4116329)

SYMPTOM:
fsck -o full -n command will fail with error:
"ERROR: V-3-28446:  bc_write failure devid = 0, bno = 8, len = 1024"

DESCRIPTION:
Previously, when correcting the file system WORM/SoftWORM flags, fsck did not check whether the user wanted to correct the pflags or only validate whether the flag is missing. fsck was also not capable of handling the SOFTWORM flag.

RESOLUTION:
Code is added so that fsck does not try to fix the problem when the user runs fsck with the -n option. The SOFTWORM scenario is also handled.

* 4119107 (Tracking ID: 4119106)

SYMPTOM:
VxFS module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

* 4123143 (Tracking ID: 4123144)

SYMPTOM:
fsck binary generates a core dump.

DESCRIPTION:
In internal testing, the fsck binary generated a core dump on the assert mentioned below while repairing a corrupted file system using the following command:
./fsck -o full -y /dev/vx/rdsk/testdg/vol1

ASSERT(fset >= VX_FSET_STRUCT_INDEX)

RESOLUTION:
Added code to set default (primary) fileset by scanning the fset header list.

Patch ID: VRTSvxfs-8.0.0.2600

* 4084880 (Tracking ID: 4084542)

SYMPTOM:
Enhance the fsadm defragmentation report to indicate whether the FS is badly fragmented.

DESCRIPTION:
Enhance the fsadm defragmentation report to indicate whether the FS is badly fragmented.

RESOLUTION:
Added a method to identify whether the FS needs defragmentation.

* 4088079 (Tracking ID: 4087036)

SYMPTOM:
FSCK utility exits with an error while running it with the "-o metasave" option on a shared volume.

DESCRIPTION:
FSCK utility exits with an error while running it with the "-o metasave" option on a shared volume. Besides this, while running this utility with "-n" and either "-o metasave" or "-o dumplog", it silently ignores the latter option(s).

RESOLUTION:
Code changes have been done to resolve the above-mentioned failure and also warning messages have been added to inform users regarding mutually exclusive behavior of "-n" and either of "metasave" and "dumplog" options instead of silently ignoring them.

* 4111350 (Tracking ID: 4098085)

SYMPTOM:
The VxFS module fails to load on SLES15 SP4.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP4 kernel.

RESOLUTION:
The VxFS module is updated to accommodate the changes in the kernel and now loads as expected on SLES15 SP4.

* 4111910 (Tracking ID: 4090127)

SYMPTOM:
CFS hang in vx_searchau().

DESCRIPTION:
As part of the SMAP transaction changes, the allocator changed its logic to always call mdele_tryhold when getting the emap for a particular EAU, passing nogetdele as 1. This tells mdele_tryhold not to ask for delegation when it detects a free EAU without delegation. The allocator therefore finds such an EAU in the device summary tree but without delegation, and keeps retrying without ever asking for delegation, hence the hang.

RESOLUTION:
In case a free EAU is found without delegation, delegate it back to the primary.

Patch ID: VRTSvxfs-8.0.0.2500

* 4112919 (Tracking ID: 4110764)

SYMPTOM:
A security vulnerability was observed in Zlib, a third-party component used by VxFS.

DESCRIPTION:
In internal security scans, vulnerabilities were found in Zlib.

RESOLUTION:
Upgraded the third-party component Zlib to address these vulnerabilities.

Patch ID: VRTSvxfs-8.0.0.2100

* 4095889 (Tracking ID: 4095888)

SYMPTOM:
Security vulnerabilities exist in the Sqlite third-party components used by VxFS.

DESCRIPTION:
VxFS uses Sqlite third-party components in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use a newer version of these third-party components, in which the security vulnerabilities have been addressed.

* 4068960 (Tracking ID: 4073203)

SYMPTOM:
Veritas file replication might generate a core while replicating files to the target when rename and unlink operations are performed on a file with FCL (File Change Log) mode on.

DESCRIPTION:
The vxfsreplicate process of Veritas File Replicator might get a segmentation fault, with File Change Log mode on, when rename and unlink operations are performed on a file.

RESOLUTION:
Addressed the issue so that files are replicated correctly in scenarios involving rename and unlink operations with FCL mode on.

* 4071108 (Tracking ID: 3988752)

SYMPTOM:
Use the ldi_strategy() routine instead of bdev_strategy() for I/Os in Solaris.

DESCRIPTION:
bdev_strategy() is deprecated in Solaris code and was causing performance issues when used for I/Os. Solaris recommends using the LDI framework for all I/Os.

RESOLUTION:
Code is modified to use the LDI framework for all I/Os in Solaris.

* 4072228 (Tracking ID: 4037035)

SYMPTOM:
VxFS should have the ability to control the number of inactive processing threads.

DESCRIPTION:
VxFS may spawn a large number of worker threads that become inactive over time. As a result, heavy lock contention occurs during the removal of inactive threads on high-end servers.

RESOLUTION:
To avoid the contention, a new tunable, vx_ninact_proc_threads, is added. You can use vx_ninact_proc_threads to adjust the number of inactive processing threads based on your server configuration and workload.

* 4078520 (Tracking ID: 4058444)

SYMPTOM:
Loop mounts using files on VxFS fail on Linux systems running kernel version 4.1 or higher.

DESCRIPTION:
Starting with the 4.1 version of the Linux kernel, the driver loop.ko uses a new API for read and write requests to the file, which was not previously implemented in VxFS. This causes the virtual disk reads during mount to fail while using the -o loop option, causing the mount to fail as well. The same functionality worked in older kernels (such as the version found in RHEL7).

RESOLUTION:
Implemented a new API for all regular files on VxFS, allowing usage of the loop device driver against files on VxFS as well as any other kernel drivers using the same functionality.

* 4079142 (Tracking ID: 4077766)

SYMPTOM:
VxFS kernel module might leak memory during readahead of directory blocks.

DESCRIPTION:
VxFS kernel module might leak memory during readahead of directory blocks due to missing free operation of readahead-related structures.

RESOLUTION:
Code in readahead of directory blocks is modified to free up readahead-related structures.

* 4079173 (Tracking ID: 4070217)

SYMPTOM:
Command fsck might fail with 'cluster reservation failed for volume' message for a disabled cluster-mounted filesystem.

DESCRIPTION:
On a disabled cluster-mounted filesystem, the release of the cluster reservation might fail during the unmount operation, resulting in failure of the fsck command with the 'cluster reservation failed for volume' message.

RESOLUTION:
Code is modified to properly release the cluster reservation during the unmount operation, even for a cluster-mounted filesystem.

* 4082260 (Tracking ID: 4070814)

SYMPTOM:
A security vulnerability was observed in Zlib, a third-party component used by VxFS.

DESCRIPTION:
In internal security scans, vulnerabilities were found in Zlib.

RESOLUTION:
Upgraded the third-party component Zlib to address these vulnerabilities.

* 4082865 (Tracking ID: 4079622)

SYMPTOM:
Migration uses the normal read/write file operations instead of the read/write iter functions; VxFS requires the read/write iter functions starting with Linux kernel 5.14.

DESCRIPTION:
Starting with the 5.14 version of the Linux kernel, VxFS uses the read/write iter file operations for migration.

RESOLUTION:
Developed a common read/write function that is called for both the normal and the iter read/write file operations.

* 4083335 (Tracking ID: 4076098)

SYMPTOM:
FS migration from ext4 to VxFS on Linux machines with falcon-sensor enabled may fail.

DESCRIPTION:
The falcon-sensor driver installed on the test machines taps system calls such as close and performs additional VFS calls such as read. Because of this, the VxFS driver received a read file-operation call from the fsmigbgcp process context. Read operations were allowed only on special files from the fsmigbgcp process context. Since the file in question was not a special file, the VxFS debug code asserted.

RESOLUTION:
As a fix, reads on non-special files are now allowed from the fsmigbgcp process context.


* 4085623 (Tracking ID: 4085624)

SYMPTOM:
While running fsck with -o full -y on a corrupted FS, fsck may dump core.

DESCRIPTION:
Fsck builds various in-core maps based on on-disk structural files, one such map is dotdotmap (which stores 
info about parent directory). For regular fset (like 999), the dotdotmap is initialized only for primary ilist
(inode list for regular inodes). It is skipped for attribute ilist (inode list for attribute inodes). This is because
attribute inodes do not have parent directories as is the case for regular inodes.

While attempting to resolve inconsistencies in FS metadata, fsck tries to clean up dotdotmap for attribute ilist. 
In the absence of a check, dotdotmap is re-initialized for attribute ilist causing segmentation fault.

RESOLUTION:
In the codepath where fsck attempts to reinitialize the dotdotmap, a check added to skip reinitialization of dotdotmap
for attribute ilist.

* 4085839 (Tracking ID: 4085838)

SYMPTOM:
Command fsck may generate a core due to processing of a zero-size attribute inode.

DESCRIPTION:
Command fsck fails because it allocates memory for a zero-size attribute inode and then dereferences it.

RESOLUTION:
Command fsck is modified to skip processing of zero-size attribute inodes.

* 4086085 (Tracking ID: 4086084)

SYMPTOM:
VxFS mount operation causes system panic when -o context is used.

DESCRIPTION:
VxFS mount operation supports context option to override existing extended attributes, or to specify a different, default context for file systems that do not support extended attributes. System panic observed when -o context is used.

RESOLUTION:
Required code changes are added to avoid panic.

* 4088341 (Tracking ID: 4065575)

SYMPTOM:
Write operations might be unresponsive on a locally mounted VxFS filesystem in a no-space condition.

DESCRIPTION:
Write operations might be unresponsive on a locally mounted VxFS filesystem in a no-space condition due to a race between two writer threads to take the read-write lock on the file to perform a delayed allocation operation on it.

RESOLUTION:
Code is modified to allow the thread that already holds the read-write lock to complete the delayed allocation operation; the other thread skips over that file.

Patch ID: VRTSvxfs-8.0.0.1700

* 4081150 (Tracking ID: 4079869)

SYMPTOM:
Security vulnerabilities found in VxFS while running security scans.

DESCRIPTION:
In our internal security scans, we found some vulnerabilities in VxFS third-party components. Attackers can exploit these security vulnerabilities to attack the system.

RESOLUTION:
Upgraded the third-party components to resolve these vulnerabilities.

* 4083948 (Tracking ID: 4070814)

SYMPTOM:
Security Vulnerability found in VxFS while running security scans.

DESCRIPTION:
In our internal security scans, we found some vulnerabilities in the VxFS third-party component Zlib.

RESOLUTION:
Upgraded the third-party component Zlib to resolve these vulnerabilities.

Patch ID: VRTSvxfs-8.0.0.1400

* 4055808 (Tracking ID: 4062971)

SYMPTOM:
All operations, such as ls and create, are blocked on the file system.

DESCRIPTION:
In a WORM file system, directory renames are not allowed. When partition directories are enabled, new directories are created and files are moved under the leaf directory based on a hash. Due to the WORM FS restriction, this rename operation was blocked and the split could not complete, blocking all operations on the file system.

RESOLUTION:
Allow directory renaming in the context of a partition directory split and merge.

* 4056684 (Tracking ID: 4056682)

SYMPTOM:
Information about new features on a file system is not displayed by fsadm (the file system administration utility) when it is run on the underlying device.

DESCRIPTION:
Information about new features such as WORM (Write Once Read Many) and auditlog is correctly displayed by the fsadm utility on a mounted file system, but not on the underlying device.

RESOLUTION:
Updated the fsadm utility to display the new feature information correctly.

* 4062606 (Tracking ID: 4062605)

SYMPTOM:
Minimum retention time cannot be set if the maximum retention time is not set.

DESCRIPTION:
The tunable - minimum retention time cannot be set if the tunable - maximum retention time is not set. This was implemented to ensure 
that the minimum time is lower than the maximum time.

RESOLUTION:
Setting of minimum and maximum retention time is independent of each other. Minimum retention time can be set without the maximum retention time being set.

* 4065565 (Tracking ID: 4065669)

SYMPTOM:
Creating non-WORM checkpoints fails when the tunables - minimum retention time and maximum retention time are set.

DESCRIPTION:
Creation of non-WORM checkpoints fails as all WORM-related validations are extended to non-WORM checkpoints also.

RESOLUTION:
WORM-related validations restricted to WORM fsets only, allowing non-WORM checkpoints to be created.

* 4065651 (Tracking ID: 4065666)

SYMPTOM:
All operations, such as ls and create, are blocked on a file system directory that contains WORM-enabled files whose retention period has not expired.

DESCRIPTION:
In a WORM file system, files whose retention period has not expired cannot be renamed. When partition directories are enabled, new directories are created and files are moved under the leaf directory based on a hash. Due to the WORM FS restriction, this rename operation was blocked and the split could not complete, blocking all operations on the file system.

RESOLUTION:
Allow directory renaming of files, even if the retention period has not expired, in the context of a partition directory split and merge.

Patch ID: VRTSvxfs-8.0.0.1300

* 4065679 (Tracking ID: 4056797)

SYMPTOM:
The VxFS module fails to load on SLES15 SP3.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP3 kernel.

RESOLUTION:
The VxFS module is updated to accommodate the changes in the kernel and now loads as expected on SLES15 SP3.

Patch ID: VRTSvxfen-8.0.0.3200

* 4173687 (Tracking ID: 4166666)

SYMPTOM:
Disk-based fencing fails to be configured on RDM-mapped devices presented from a KVM host to a KVM guest.

DESCRIPTION:
While configuring fencing, the read-key buffer exceeds the maximum buffer size in the KVM hypervisor.

RESOLUTION:
Reduced the maximum number of keys to 1022 to support reading keys in the KVM hypervisor.

* 4176817 (Tracking ID: 4176110)

SYMPTOM:
vxfentsthdw fails to verify fencing disk compatibility in a KVM environment.

DESCRIPTION:
vxfentsthdw fails to read the key buffer because it exceeds the maximum buffer size in the KVM hypervisor.

RESOLUTION:
Added a new macro with a smaller number of keys to support the KVM environment.

* 4178096 (Tracking ID: 4164328)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 4 (RHEL9.4) is now introduced.

* 4178826 (Tracking ID: 4176592)

SYMPTOM:
Continuous ERROR message in 'vxfen.log' file - "VXFEN already configured" after system startup, despite fencing working correctly.

DESCRIPTION:
The vxfen-startup script enters a loop trying to configure the vxfen driver, which is already configured, due to an incorrect exit value. This results in the 'vxfen.log' file being flooded with error messages.

RESOLUTION:
Corrected the exit code so that the vxfen-startup script exits the loop properly and treats an already configured vxfen driver as success.

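The corrected startup logic can be sketched as follows. This is a hypothetical illustration, not the actual vxfen-startup script: vxfen_configure is a stub standing in for the real driver-configuration call, and its "already configured" message mimics the one flooding the log.

```shell
# Stub standing in for the real vxfen configuration call; exit code 1 plus
# this message mimics the "VXFEN already configured" case from the log.
vxfen_configure() {
    echo "VXFEN already configured"
    return 1
}

retries=0
while [ $retries -lt 5 ]; do
    out=$(vxfen_configure 2>&1)
    rc=$?
    # The fix: treat "already configured" as success instead of retrying forever.
    if [ $rc -eq 0 ] || echo "$out" | grep -q "already configured"; then
        echo "vxfen configured"
        break
    fi
    retries=$((retries + 1))
done
```

With the old logic the non-zero exit code alone drove the loop, so the already configured case retried indefinitely and flooded vxfen.log.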
* 4187379 (Tracking ID: 4180026)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 5(RHEL9.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 5(RHEL9.5) is now introduced.

* 4188703 (Tracking ID: 4182723)

SYMPTOM:
Veritas Infoscale does not support SLES15SP6.

DESCRIPTION:
Veritas Infoscale does not support SLES15SP6.

RESOLUTION:
Veritas Infoscale support for SLES15SP6 is now introduced.

Patch ID: VRTSvxfen-8.0.0.2500

* 4117657 (Tracking ID: 4108561)

SYMPTOM:
The internal vxfen print-keys utility did not work because of an internal array overrun.

DESCRIPTION:
The internal vxfen print-keys utility returns garbage values when the number of keys exceeds 8: the 8-byte array keylist[i].key is overrun at byte offset 8 using index y (which evaluates to 8).

RESOLUTION:
Restricted the internal loop to VXFEN_KEYLEN. Reading reservations now works as expected.

* 4124421 (Tracking ID: 4124417)

SYMPTOM:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.

DESCRIPTION:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel as SLES community maintains Azure kernel separately.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP4 for Azure kernel is now introduced.

Patch ID: VRTSvxfen-8.0.0.2300

* 4111571 (Tracking ID: 4090439)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).

DESCRIPTION:
Veritas Infoscale Availability does not support SUSE Linux Enterprise Server
versions released after SLES 15 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP4 is
now introduced.

Patch ID: VRTSvxfen-8.0.0.1800

* 4087166 (Tracking ID: 4087134)

SYMPTOM:
The error message 'Touch /var/VRTSvcs/log/vxfen/vxfen.log failed' appears after starting the vxfen service if the parent directory path of vxfen.log is not present.

DESCRIPTION:
Typically, if the parent directory path of vxfen.log is not present, the following error message appears after starting the vxfen service:
'Touch /var/VRTSvcs/log/vxfen/vxfen.log failed'.

RESOLUTION:
Create the parent directory path for the vxfen.log file globally if the path is not present.

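The fix amounts to ensuring the log directory exists before touching the log file. A minimal sketch, using a temporary root so it is self-contained (the real change targets /var/VRTSvcs/log/vxfen):

```shell
# Create the parent path before touching the log file, so touch cannot fail
# on a missing directory. A temp root stands in for the real filesystem.
root=$(mktemp -d)
logfile="$root/var/VRTSvcs/log/vxfen/vxfen.log"
mkdir -p "$(dirname "$logfile")"   # create the parent path if missing
touch "$logfile"                   # now succeeds instead of failing
```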
* 4088061 (Tracking ID: 4089052)

SYMPTOM:
On RHEL9, Node panics while running vxfenswap as a part of Online Coordination Point Replacement operation.

DESCRIPTION:
RHEL9 introduced fortify panics, which are triggered when the kernel's static check detects a buffer overflow. This check wrongly identified a buffer overflow where strings are copied using unions.

RESOLUTION:
Switched internally to bcopy for this scenario, so the kernel-side check is skipped.

Patch ID: VRTSvxfen-8.0.0.1400

* 3951882 (Tracking ID: 4004248)

SYMPTOM:
The vxfend process segfaults and dumps core.

DESCRIPTION:
During a fencing race, vxfend sometimes crashes and generates a core dump.

RESOLUTION:
vxfend internally uses fork and exec to execute sub-tasks. The new child process used the same file descriptors as the parent for logging. Simultaneous reads of the same file through a single file descriptor resulted in incorrect reads, causing the process to crash and dump core. This fix creates a new file descriptor for the child process.

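The underlying failure mode can be reproduced in a few lines of shell: a child process that inherits a file descriptor shares its read offset with the parent, so concurrent readers consume the same stream. The fix is for the child to open its own descriptor. This illustrates the mechanism only and is not vxfend's actual code.

```shell
f=$(mktemp)
printf 'line1\nline2\n' > "$f"

exec 3< "$f"
read -r first <&3                          # parent reads line1, advancing the shared offset
shared=$( (read -r l <&3; echo "$l") )     # child inherits fd 3 and sees line2, not line1

exec 4< "$f"                               # the fix: a fresh descriptor with its own offset
own=$( (read -r l <&4; echo "$l") )        # reads from the start again: line1
```

Because the open file description (and its offset) is shared across fork, the child's read starts wherever the parent left off, which is exactly the "incorrect read" that crashed vxfend.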
Patch ID: VRTSvcs-8.0.0.3200

* 4161822 (Tracking ID: 4129493)

SYMPTOM:
Tenable security scan kills the Notifier resource.

DESCRIPTION:
When an nmap port scan is performed on port 14144 (on which the notifier process listens), the notifier gets killed by the connection request.

RESOLUTION:
The required code changes have been made to prevent the Notifier agent from crashing when an nmap port scan is performed on notifier port 14144.

* 4162953 (Tracking ID: 4136359)

SYMPTOM:
When upgrading InfoScale with the latest Public Patch Bundle or the VRTSvcsag package, types.cf is updated.

DESCRIPTION:
To use a new type or attribute (such as PanicSystemOnVGLoss), the user needs to copy /etc/VRTSvcs/conf/types.cf to /etc/VRTSvcs/conf/config/types.cf. This copy may fault resources because of types (such as HTC) missing from the new types.cf.

RESOLUTION:
Implemented a new external trigger to manually update /etc/VRTSvcs/conf/config/types.cf. Follow the post-installation instructions of the VRTSvcsag rpm.

* 4188663 (Tracking ID: 4188662)

SYMPTOM:
While performing a VVR rolling upgrade from IS 7.4.2, the application group on the secondary site went into a faulted state after the upgrade.

DESCRIPTION:
In types.cf on the upgraded secondary nodes, the newly added attribute EnableSingleWriter was not updated. The trigger checks whether such an attribute already exists by running the command '$haattr -display $type | grep $attr' and, if it exists, skips the update. However, the command also succeeded when the attribute merely appeared in the RegList, so the update was wrongly skipped.

RESOLUTION:
Added a conditional check so that a match found only in the RegList does not cause the update to be skipped.

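The difference between the old and corrected checks can be sketched with a stubbed attribute listing. Here haattr_display stands in for `haattr -display $type`, and the output format is hypothetical; in the failing case the attribute name appeared only inside the RegList line.

```shell
haattr_display() {   # stub: the attribute name appears only in the RegList line
    printf '%s\n' \
        "RegList       EnableSingleWriter OtherAttr" \
        "OtherAttr     -string"
}

attr="EnableSingleWriter"
# Old check: a match anywhere (including RegList) wrongly skipped the update.
old_found=$(haattr_display | grep -c "$attr")
# Corrected check: ignore the RegList line and require the attribute itself.
new_found=$(haattr_display | grep -v "^RegList" | grep -c "^$attr")
```

The old check counts one hit (the RegList line) and skips the update; the corrected check counts zero hits, so the attribute is added as intended.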
Patch ID: VRTSvcs-8.0.0.2300

* 4038088 (Tracking ID: 4100720)

SYMPTOM:
HA fire drill failed because the deprecated 'netstat' command was used on a SLES 15 cluster.

DESCRIPTION:
The netstat command is deprecated in SLES 15 from SP3 onwards, so alternative packages must be used.

RESOLUTION:
The ip command is used in place of the deprecated netstat command.

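The replacements are roughly the iproute2 equivalents below. The exact flags depend on what each fire-drill check needs, so treat this mapping as indicative rather than the script's actual substitution table.

```shell
# Indicative netstat-to-iproute2 substitutions (flag choices are assumptions):
netstat_to_ip() {
    case "$1" in
        -rn)   echo "ip route show" ;;   # routing table
        -i)    echo "ip -s link" ;;      # interface statistics
        -tlnp) echo "ss -tlnp" ;;        # listening TCP sockets with PIDs
        *)     echo "ss" ;;              # general socket listing
    esac
}
```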
Patch ID: VRTSvcs-8.0.0.2100

* 4103077 (Tracking ID: 4103073)

SYMPTOM:
Security vulnerabilities are present in the existing version of Net-SNMP.

DESCRIPTION:
The Net-SNMP component needs to be upgraded to fix security vulnerabilities.

RESOLUTION:
Upgraded the Net-SNMP component to fix security vulnerabilities for security patch IS 8.0U1_SP4.

Patch ID: VRTSvcs-8.0.0.1800

* 4084675 (Tracking ID: 4089059)

SYMPTOM:
The file permission for gcoconfig.log is not 0600.

DESCRIPTION:
The default file permission was 0644, which allowed read access to group and others, so the file permission needed to be tightened.

RESOLUTION:
The file is now created with permission 0600 so that it is readable and writable only by its owner.

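A minimal sketch of the idea: create the file with owner-only permissions at creation time instead of relying on the default umask. The path is a temporary stand-in for gcoconfig.log.

```shell
log=$(mktemp -d)/gcoconfig.log      # stand-in for the real log path
(umask 077; touch "$log")           # created rw------- for the owner only
mode=$(stat -c %a "$log")           # GNU stat; prints the octal mode
```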
Patch ID: VRTSvcs-8.0.0.1400

* 4065820 (Tracking ID: 4065819)

SYMPTOM:
Protocol version upgrade from Access Appliance 7.4.3.200 to 8.0 failed.

DESCRIPTION:
During a rolling upgrade, the IPM message 'MSG_CLUSTER_VERSION_UPDATE' is generated, and as part of it some validations are performed before bumping up the protocol. If the validation succeeds, a broadcast message to bump up the cluster protocol is sent, and a success message is immediately returned to haclus. Thus the success message is sent before the broadcast message that actually updates the protocol version is processed. This window lasts only a very short period; after the broadcast message is processed, the protocol version is properly updated in the configuration files and commands show the correct value.

RESOLUTION:
Instead of immediately returning a success message, the haclus CLI now waits until the upgrade is applied on the broadcast channel and then sends the success message.

Patch ID: VRTSvxvm-8.0.0.3300

* 4080269 (Tracking ID: 4044898)

SYMPTOM:
Rlink tags from info records were not visible with the vxrlink listtag command.

DESCRIPTION:
The second phase of making rlinks FIPS compliant covers the disk group upgrade path, where the rlink encryption tags need to be copied to the info record and made FIPS compliant. Here, vxdg upgrade internally calls vxrlink and vxencrypt to upgrade the rlink and rekey the rlink keys, respectively.

RESOLUTION:
All the rlink encryption tags are now copied to the info record. When a disk group is upgraded, the rlink is upgraded internally, and this upgrade process copies the rlink tags to the info records.

* 4105598 (Tracking ID: 4107801)

SYMPTOM:
/dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.

DESCRIPTION:
vxpath-links is responsible for creating the hardware paths under /dev/vx/.dmp.
This script is invoked from /lib/udev/vxpath_links, but the /lib/udev folder is not present from SLES15SP3 onwards.
The folder was explicitly removed, and vendors are expected to ship their scripts and libraries in vendor-specific folders.

RESOLUTION:
Code changes have been made to invoke "/etc/vx/vxpath-links" instead of "/lib/udev/vxpath-links".

* 4111442 (Tracking ID: 4066785)

SYMPTOM:
When the replicated disks are in SPLIT mode, importing its disk group failed with "Device is a hardware mirror".

DESCRIPTION:
When the replicated disks are in SPLIT mode, they are readable and writable, yet importing their disk group failed with "Device is a hardware mirror". The third-party array does not expose a disk attribute that shows when a disk is in SPLIT mode. With this enhancement, the replicated disk group can be imported with the option `-o usereplicatedev=only`.

RESOLUTION:
The code is enhanced to import the replicated disk group with option `-o usereplicatedev=only`.

* 4111560 (Tracking ID: 4098391)

SYMPTOM:
Kernel panic is observed with following stack:

#6 [] page_fault at [exception RIP: bfq_bio_bfqg+37]

#7 [] bfq_bic_update_cgroup at 
#8 [] bfq_bio_merge at 
#9 [] blk_mq_submit_bio at 
#10 [] submit_bio_noacct at 
#11 [] submit_bio at 
#12 [] submit_bh_wbc at 
#13 [] block_read_full_page at 
#14 [] do_read_cache_page at 
#15 [] read_part_sector at 
#16 [] read_lba at 
#17 [] efi_partition at ffffffffb1e59f4d
#18 [] blk_add_partitions at ffffffffb1e54377
#19 [] bdev_disk_changed at ffffffffb1d2a8fa
#20 [] __blkdev_get at ffffffffb1d2c16c
#21 [] blkdev_get at ffffffffb1d2c2b4
#22 [] __device_add_disk at ffffffffb1e5107e
#23 [] dmp_register_disk at ffffffffc0e68ae7 [vxdmp]
#24 [] dmp_reconfigure_db at ffffffffc0e8d8bd [vxdmp]
#25 [] dmpioctl at ffffffffc0e75cd5 [vxdmp]
#26 [] dmp_ioctl at ffffffffc0e9d469 [vxdmp]
#27 [] blkdev_ioctl at ffffffffb1e4ed19
#28 [] block_ioctl at ffffffffb1d2a719
#29 [] ksys_ioctl at ffffffffb1cfb262

DESCRIPTION:
VxVM causes kernel panic because of null pointer dereference in kernel code when BFQ disk io scheduler is used. This is observed on SLES15 SP3 minor kernel >= 5.3.18-150300.59.68.1 and SLES15 SP4 minor kernel >= 5.14.21-150400.24.11.1.

RESOLUTION:
Code changes have been done to fix this issue in IS-8.0 and IS-8.0.2.

* 4120194 (Tracking ID: 4120191)

SYMPTOM:
IO hang occurred after getting into DCM mode and flushing SRL to DCM.

DESCRIPTION:
After getting into DCM mode, upcoming SIOs are throttled during the SRL flush. The throttle is cleared after the SRL flush is done. But in some cases, the SRL flush can't be driven hence the IO hang.

RESOLUTION:
The code changes have been made to fix the issue.

* 4123243 (Tracking ID: 4129663)

SYMPTOM:
vxvm and aslapm rpm do not have changelog

DESCRIPTION:
A changelog in the rpm helps to identify missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to vxvm and aslapm rpm.

* 4162732 (Tracking ID: 4073653)

SYMPTOM:
After configuring RVGs in async mode on a CVR setup with shared storage, startrep for an RVG failed and vxconfigd hung on the primary master node.

DESCRIPTION:
The log owner change hangs because a DCM read does not complete.

RESOLUTION:
Acquire mbuf_lock with the try method so that vxiods are not kept busy waiting for locks.

* 4162734 (Tracking ID: 4098144)

SYMPTOM:
vxtask list shows the parent process without any sub-tasks, and it never progresses for the SRL volume.

DESCRIPTION:
vxtask remains stuck because the parent process does not exit. All child processes were seen to complete, but the parent could not exit:
(gdb) p active_jobs
$1 = 1
Active jobs are decremented as children complete. One count remained pending, and it is not known which child exited without decrementing the count. Instrumentation messages were added to capture the issue.

RESOLUTION:
Added code that creates a log file in /etc/vx/log/. This file is deleted when vxrecover exits successfully; it remains present when the vxtask parent hang is seen.

* 4162735 (Tracking ID: 4132265)

SYMPTOM:
Machine with NVMe disks panics with following stack: 
blk_update_request
blk_mq_end_request
dmp_kernel_nvme_ioctl
dmp_dev_ioctl
dmp_send_nvme_passthru_cmd_over_node
dmp_pr_do_nvme_read
dmp_pgr_read
dmpioctl
dmp_ioctl
blkdev_ioctl
__x64_sys_ioctl
do_syscall_64

DESCRIPTION:
The issue was applicable to setups with NVMe devices that do not support SCSI3-PR, because an ioctl was called without correctly checking whether SCSI3-PR was supported.

RESOLUTION:
Fixed the check to avoid calling the ioctl on devices which do not support SCSI3-PR.

* 4162738 (Tracking ID: 4128451)

SYMPTOM:
A hardware replicated disk group fails to be auto-imported after reboot.

DESCRIPTION:
Currently, standard and cloned diskgroups are supported with auto-import. Hardware replicated disk groups were not yet supported.

RESOLUTION:
Code changes have been made to support hardware replicated disk groups with autoimport.

* 4162739 (Tracking ID: 4130642)

SYMPTOM:
A node failed to rejoin the cluster after switching from master to slave, due to the failure of the replicated diskgroup import.
The below error message could be found in /var/VRTSvcs/log/CVMCluster_A.log.
CVMCluster:cvm_clus:monitor:vxclustadm nodestate return code:[101] with output: [state: out of cluster
reason: Replicated dg record is found: retry to add a node failed]

DESCRIPTION:
The flag that indicates the diskgroup was imported with usereplicatedev=only was not set the last time the diskgroup was imported.
The missing flag caused the replicated diskgroup import to fail, which in turn caused the node rejoin failure.

RESOLUTION:
The code changes have been done to flag the diskgroup after it got imported with usereplicatedev=only.

* 4162740 (Tracking ID: 4134023)

SYMPTOM:
vxconfigrestore(Diskgroup configuration restoration) for H/W Replicated diskgroup failed with below error:
# vxconfigrestore -p LINUXSRDF
VxVM vxconfigrestore INFO V-5-2-6198 Diskgroup LINUXSRDF configuration restoration started ......
VxVM vxdg ERROR V-5-1-0 Disk group LINUXSRDF: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]
Please refer to system log for details.
... ...
VxVM vxconfigrestore ERROR V-5-2-3706 Diskgroup configuration restoration for LINUXSRDF failed.

DESCRIPTION:
An H/W replicated diskgroup can be imported only with the option "-o usereplicatedev=only". vxconfigrestore did not check for an H/W replicated diskgroup, so without the proper import option the diskgroup import failed.

RESOLUTION:
The code changes have been made to perform the H/W replicated diskgroup check in vxconfigrestore.

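The added logic can be sketched as: detect whether the diskgroup is hardware-replicated, then choose the import option accordingly. Here is_hw_replicated is a stub; the real check inspects disk attributes, and the vxdg call is shown only as a comment.

```shell
is_hw_replicated() {    # stub standing in for the real disk-attribute check
    [ "$1" = "LINUXSRDF" ]
}

dg="LINUXSRDF"
import_opts=""
if is_hw_replicated "$dg"; then
    import_opts="-o usereplicatedev=only"   # required for H/W replicated DGs
fi
# In the real restore path, something like `vxdg $import_opts import $dg` follows.
```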
* 4162741 (Tracking ID: 4128351)

SYMPTOM:
A system hang is observed when switching the log owner.

DESCRIPTION:
VVR mdship SIOs might be throttled due to reaching the maximum allocation count, etc. These SIOs hold the I/O count. When a log owner change kicked in and quiesced the RVG, the VVR log owner change SIO waited for the I/O count to drop to zero before proceeding. VVR mdship requests from the log client were returned with EAGAIN because the RVG was quiesced, but the throttled mdship SIOs could only be driven by upcoming mdship requests; hence the deadlock, which caused the system hang.

RESOLUTION:
Code changes have been made to flush the mdship queue before the VVR log owner change SIO waits for the I/O drain.

* 4162742 (Tracking ID: 4122061)

SYMPTOM:
A hang was observed after a resync operation; vxconfigd was waiting for the slaves' responses.

DESCRIPTION:
The VVR logowner was in a transaction and returned VOLKMSG_EAGAIN to CVM_MSG_GET_METADATA, which is expected. Once the client received VOLKMSG_EAGAIN, it would sleep 10 jiffies and retry the kmsg. In a busy cluster, the retried kmsgs plus the new kmsgs could build up and hit the kmsg flow control before the VVR logowner transaction completed. Once the client refused any kmsgs due to the flow control, the transaction on the VVR logowner could get stuck because it required kmsg responses from all the slave nodes.

RESOLUTION:
Code changes have been made to increase the kmsg flow control and to not let the kmsg receiver fall asleep, but handle the kmsg in a restart function.

* 4162743 (Tracking ID: 4087628)

SYMPTOM:
When DCM is in replication mode with volumes mounted having large regions for DCM to sync and if slave node reboot is triggered, this might cause CVM to go into faulted state.

DESCRIPTION:
During resiliency tests, the following sequence of operations was performed:
1. On an AWS FSS-CVR setup, replication is started across the sites for 2 RVGs.
2. The logowner service groups for both RVGs are online on a slave node.
3. Another slave node, where the logowner is not online, is rebooted.
4. After the slave node comes back from the reboot, it is unable to join the CVM cluster.
5. vx commands also hang on the CVM master and the logowner slave node.

RESOLUTION:
In the RU SIO, drop the I/O count before requesting vxfs_free_region() and hold it again afterwards. Because the transaction is locked (vol_ktrans_locked = 1) right before calling vxfs_free_region(), the I/O count is not needed to prevent the RVG from being removed.

* 4162745 (Tracking ID: 4145063)

SYMPTOM:
The vxio module fails to load post VxVM package installation.

DESCRIPTION:
Following message is seen in dmesg:
[root@dl360g10-115-v23 ~]# dmesg | grep symbol
[ 2410.561682] vxio: no symbol version for storageapi_associate_blkg

RESOLUTION:
Because of incorrectly nested IF blocks in src/linux/kernel/vxvm/Makefile.target, the code for the RHEL 9 block was not executed, so certain symbols were missing from the vxio.mod.c file. This in turn caused the above-mentioned symbol warning in dmesg.
Fixed the improper nesting of the IF conditions.

* 4162747 (Tracking ID: 4152014)

SYMPTOM:
The excluded dmpnodes are visible after a system reboot when SELinux is disabled.

DESCRIPTION:
During system reboot, the disks' hardware soft links failed to be created before the DMP exclusion took place, so DMP failed to recognize the excluded dmpnodes.

RESOLUTION:
Code changes have been made to reduce the latency in creation of hardware soft links and remove tmpfs /dev/vx on an SELinux Disabled platform.

* 4162748 (Tracking ID: 4132799)

SYMPTOM:
If GLM is not loaded, start CVM fails with the following errors:
# vxclustadm -m gab startnode
VxVM vxclustadm INFO V-5-2-9687 vxclustadm: Fencing driver is in disabled mode - 
VxVM vxclustadm ERROR V-5-1-9743 errno 3

DESCRIPTION:
Only the error number, not the error message, is printed when joining CVM fails.

RESOLUTION:
The code changes have been made to fix the issue.

* 4162749 (Tracking ID: 4134790)

SYMPTOM:
A hardware replicated DG was marked with the clone flag on the slaves after a failover operation was done on the storage side.

DESCRIPTION:
udid_mismatch is marked on the H/W replicated devices after the source storage is switched with the target storage. A disk with udid_mismatch is treated as a clone device, which caused those replicated disks to be treated as cloned disks as well. With the clearclone option, the master removes this flag in the last stage of the dg import, but the slaves could not. Hence the clone flag was observed only on the slaves.

RESOLUTION:
The code changes have been made to perform an extra H/W replicated disk check.

* 4162750 (Tracking ID: 4077944)

SYMPTOM:
In VVR environment, when I/O throttling gets activated and deactivated by VVR, it may result in an application I/O hang.

DESCRIPTION:
When VVR throttles and unthrottles I/O, the driving of the throttled I/O is not done in one of the cases.

RESOLUTION:
Resolved the issue by making sure the application throttled I/Os get driven in all the cases.

* 4162751 (Tracking ID: 4132221)

SYMPTOM:
Supportability requirement for an easier path link to the dmpdr utility.

DESCRIPTION:
The current path of the DMPDR utility is long and hard for customers to remember, so a symbolic link to the utility was requested for easier access.

RESOLUTION:
Code changes have been made to create a symlink to this utility for easier access.

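The change amounts to linking the long utility path to a short, memorable name. Sketched here under a temporary root with a dummy script; the real link location and target belong to the product's install layout.

```shell
# Build a dummy copy of the long utility path, then link it to a short name.
root=$(mktemp -d)
mkdir -p "$root/usr/lib/vxvm/voladm.d/bin"
printf '#!/bin/sh\necho dmpdr\n' > "$root/usr/lib/vxvm/voladm.d/bin/dmpdr"
chmod +x "$root/usr/lib/vxvm/voladm.d/bin/dmpdr"
ln -s "$root/usr/lib/vxvm/voladm.d/bin/dmpdr" "$root/dmpdr"   # the short link
out=$("$root/dmpdr")    # invoking the link runs the real utility
```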
* 4162754 (Tracking ID: 4154121)

SYMPTOM:
When the replicated disks are in SPLIT mode, importing its disk group on target node failed with "Device is a hardware mirror".

DESCRIPTION:
When the replicated disks are in SPLIT mode, they are readable and writable, yet importing their disk group on the target node failed with "Device is a hardware mirror". The third-party array does not expose a disk attribute that shows when a disk is in SPLIT mode. With this enhancement, the replicated disk group can be imported when use_hw_replicatedev is enabled.

RESOLUTION:
The code is enhanced to import the replicated disk group on the target node when use_hw_replicatedev is enabled.

* 4162756 (Tracking ID: 4159403)

SYMPTOM:
When the replicated disks are in SPLIT mode and use_hw_replicatedev is on, disks are marked as cloned disks after the hardware replicated disk group gets imported.

DESCRIPTION:
The clearclone option needs to be added automatically when importing the hardware replicated disk group, to clear the cloned flag on the disks.

RESOLUTION:
The code is enhanced to import the replicated disk group with clearclone option.

* 4162757 (Tracking ID: 4160883)

SYMPTOM:
The clone_flag was set on SRDF-R1 disks after reboot.

DESCRIPTION:
The clean-clone state got reset in the AUTOIMPORT case, which led to the clone_flag being set on the disk in the end.

RESOLUTION:
Code change has been made to correct the behavior of setting clone_flag on a disk.

* 4163989 (Tracking ID: 4162873)

SYMPTOM:
Disk reclaim is slow.

DESCRIPTION:
The disk reclaim length should be decided by the storage's maximum reclaim length, but Volume Manager split the reclaim request into segments smaller than the maximum reclaim length, which led to a performance regression.

RESOLUTION:
Code change has been made to avoid splitting the reclaim request in volume manager level.

* 4164137 (Tracking ID: 3972344)

SYMPTOM:
After reboot of a node on a setup where multiple diskgroups / Volumes within diskgroups are present, sometimes in /var/log/messages an error 'vxrecover ERROR V-5-1-11150  Volume <volume_name> does not exist' is logged.

DESCRIPTION:
In the volume_startable function (volrecover.c), dgsetup is called to set the current default diskgroup. This does not update the current_group variable, leading to inappropriate mappings: volumes are searched in an incorrect diskgroup, which is what the error message logs.
The vxrecover command works fine if the diskgroup name associated with the volume is specified [vxrecover -g <dg_name> -s].

RESOLUTION:
The code was changed to use switch_diskgroup() instead of dgsetup. The current_group variable is updated and the current_dg is set, so vxrecover finds the volume correctly.

* 4164248 (Tracking ID: 4162349)

SYMPTOM:
When using vxstat with the -S option, the values in two columns (MIN(ms) and MAX(ms)) are not printed.

DESCRIPTION:
When using vxstat with the -S option, the values in the MIN(ms) and MAX(ms) columns are not printed, as shown below:
vxstat -g <dg_name> -i 5 -S -u m
                                         OPERATIONS          BYTES           AVG TIME(ms)    MIN(ms)    MAX(ms)
TYP NAME                                 READ     WRITE      READ     WRITE   READ  WRITE   READ  WRITE   READ  WRITE
gf01sxdb320p Mon Apr 22 14:07:55 2024
vol admvol                              23977     96830  523.707m 425.3496m   1.43   2.12
vol appvol                               7056     30556 254.3959m 146.1489m   0.85   2.11

RESOLUTION:
The code was not printing the values for the last two columns. Code changes have been done to fix this issue.

* 4164276 (Tracking ID: 4142772)

SYMPTOM:
In case SRL overflow happens frequently, the SRL reaches 99% full but the rlink is unable to get into DCM mode.

DESCRIPTION:
When starting DCM mode, the error mask NM_ERR_DCM_ACTIVE is checked to prevent duplicate triggers. This flag should have been reset after DCM mode was activated by reconnecting the rlink. Due to a race condition, the rlink reconnect may complete before DCM is activated, so the flag cannot be cleared.

RESOLUTION:
The code changes have been made to fix the issue.

* 4164312 (Tracking ID: 4133793)

SYMPTOM:
DCO experiences I/O errors while doing a vxsnap restore on VxVM volumes.

DESCRIPTION:
The dirty flag was getting set in the context of an SIO with the flag VOLSIO_AUXFLAG_NO_FWKLOG set. This led to transaction errors while running the vxsnap restore command in a loop for VxVM volumes, causing a transaction abort. As a result, VxVM tried to clean up by removing the newly added BMs, and then tried to access those BMs, which it could not because they had been deleted. This ultimately led to the DCO I/O error.

RESOLUTION:
Skip first-write klogging in the context of an I/O with the flag VOLSIO_AUXFLAG_NO_FWKLOG set.

* 4164539 (Tracking ID: 4161852)

SYMPTOM:
Post InfoScale upgrade, the command "vxdg upgrade" succeeds but throws the error "RLINK is not encrypted".

DESCRIPTION:
In the "vxdg upgrade" code path, the encryption keys need to be regenerated if encrypted rlinks are present in the VxVM configuration. However, the key-regeneration code was called even when the rlinks were not encrypted, so subsequent code threw the error "VxVM vxencrypt ERROR V-5-1-20484 Rlink is not encrypted!"

RESOLUTION:
Necessary code changes have been made to invoke encryption key regeneration only for rlinks that are encrypted.

* 4164693 (Tracking ID: 4149498)

SYMPTOM:
While upgrading the VxVM package, a number of warnings are seen regarding .ko files not being found for various modules.

DESCRIPTION:
These warnings are seen because all the unwanted .ko files have been removed.

RESOLUTION:
Code changes have been done so that these warnings are no longer shown.

* 4167050 (Tracking ID: 4134069)

SYMPTOM:
VVR replication did not use the VxFS SmartMove feature if the filesystem was not mounted on the RVG logowner node.

DESCRIPTION:
Initial synchronization and DCM replay in VVR required the filesystem to be mounted locally on the logowner node, as VVR did not have the capability to fetch the required information from a remotely mounted filesystem mount point.

RESOLUTION:
VVR is updated to fetch the required SmartMove related information from a remotely mounted filesystem mount point.

* 4167712 (Tracking ID: 4166086)

SYMPTOM:
VxVM created invalid device symbolic links under root which isn't expected.
#ls -al /
...
lrwxrwxrwx.   1 root root       16 Jun 20 00:20 sdo -> /dev/vx/.dmp/sdo
lrwxrwxrwx.   1 root root       16 Jun 20 00:27 sdp -> /dev/vx/.dmp/sdp
lrwxrwxrwx.   1 root root       16 Jun 20 00:21 sdq -> /dev/vx/.dmp/sdq
lrwxrwxrwx.   1 root root       16 Jun 20 00:23 sdr -> /dev/vx/.dmp/sdr

DESCRIPTION:
Volume Manager mistakenly created invalid device symbolic links under root.

RESOLUTION:
The code has been changed to create the device symbolic links in the right place.

* 4175213 (Tracking ID: 4153457)

SYMPTOM:
When using Dell/EMC PowerFlex ScaleIO storage, Veritas File System(VxFS) on Veritas Volume Manager(VxVM) volumes fail to mount after reboot.

DESCRIPTION:
During system boot, the ScaleIO devices are detected after VxVM has completed its auto discovery of storage devices.
Hence VxVM does not auto-detect the ScaleIO devices and fails to auto-import the diskgroup and mount the filesystem.

RESOLUTION:
Appropriate code changes are done to auto discover the ScaleIO devices.

* 4178146 (Tracking ID: 4168846)

SYMPTOM:
Support VxVM on RHEL9.4

DESCRIPTION:
VxVM encountered breakages with RHEL9.4.

RESOLUTION:
Changes have been done to support VxVM on RHEL9.4

* 4178967 (Tracking ID: 4167359)

SYMPTOM:
An EMC DeviceGroup was missing an SRDF SYMDEV. The disk group import then failed with "Disk write failure" and corrupted the disk headers.

DESCRIPTION:
SRDF will not make all disks read-writable (RW) on the remote side during an SRDF failover. When an SRDF SYMDEV is missing, the missing disk in pairs on the remote side remains in a write-disabled (WD) state. This leads to write errors, which can further cause disk header corruption.

RESOLUTION:
Code change has been made to fail disk group if any disks in this group are detected as WD.

* 4178977 (Tracking ID: 4173284)

SYMPTOM:
"dmpdr -o refresh" command if failing with error:
#usr/lib/vxvm/voladm.d/bin/dmpdr -o refresh
Global symbol "$mask" requires explicit package name (did you forget to declare "my $mask"?) at /usr/lib/vxvm/voladm.d/lib/Comm.pm line 186.
Global symbol "$mask" requires explicit package name (did you forget to declare "my $mask"?) at /usr/lib/vxvm/voladm.d/lib/Comm.pm line 190.
Compilation failed in require at /usr/lib/vxvm/voladm.d/bin/dmpdr line 14.
BEGIN failed--compilation aborted at /usr/lib/vxvm/voladm.d/bin/dmpdr line 14.

DESCRIPTION:
A variable was used in Perl without the required declaration, which caused this compile-time failure.

RESOLUTION:
Code changes have been done to fix the problem.

* 4178982 (Tracking ID: 4158316)

SYMPTOM:
DMP failed to do thin reclaim on an array that does not support WRITESAME.

DESCRIPTION:
When the reclaim length is less than the array's maximum supported length, DMP issues the WRITESAME command by default to reclaim the space. This can cause the reclaim to fail if the array does not support WRITESAME.

RESOLUTION:
Code changes have been made to choose the proper command for disk reclaim.

* 4179185 (Tracking ID: 4168665)

SYMPTOM:
use_hw_replicatedev logic unable to import CVMVolDg resource unless vxdg -c is specified after EMC SRDF devices are closed and rescan on CVM Master.

DESCRIPTION:
After EMC SRDF devices are closed and rescanned on the CVM master, the import retry logic did not reset the return status ("ret") between import attempts, so the use_hw_replicatedev logic could not import the CVMVolDg resource unless vxdg -c was specified.

RESOLUTION:
Reset "ret" before making another attempt at the dg import.

* 4179379 (Tracking ID: 4179002)

SYMPTOM:
After dynamic LUN expansion on rhel9, resize operation failed. VxFS got corrupted.

DESCRIPTION:
While performing actions like Dynamic LUN Expansion (DLE), the OS removes partitions when it detects a disk change. The partition was not added back by the OS because Dynamic Multipathing (DMP) did not "revalidate" the disk during the disk-open process on RHEL9 and upstream kernels. The loss of the partition causes device-open failures, resulting in corruption, as the entire device is used for I/O operations instead of the previously defined partition, which contains the file system data.

RESOLUTION:
Code change has been made to revalidate disk unconditionally.

* 4182224 (Tracking ID: 4176336)

SYMPTOM:
VVR replication pauses due to network disconnection (VVR heartbeat timeout).

DESCRIPTION:
VVR UDP heartbeat packets are sent on port 4145, which is also used for replicating data. While the replication is done over TCP, the heartbeats are sent over UDP, from port 4145 at the source to port 4145 at the destination. However, a port scanner such as nmap, run in parallel in a VVR environment, produces 0-length UDP packets from a random port to replication port 4145. VVR failed to handle these 0-length UDP packets, hence the issue.

RESOLUTION:
Code change has been made to handle 0 length rogue UDP packets.

* 4187376 (Tracking ID: 4185158)

SYMPTOM:
Support VxVM on RHEL9.5

DESCRIPTION:
VxVM encountered breakages with RHEL9.5.

RESOLUTION:
Code changes have been done to support VxVM  for RHEL9.5

* 4187887 (Tracking ID: 4183410)

SYMPTOM:
The vxvm-boot service will be in failed state and vxvm command line vxprint/vxdisk will give error "V-5-1-684 IPC failure: Configuration daemon is not accessible".

DESCRIPTION:
The vxvm-boot service enters a failed state, as shown in the output below:

server101:~/8.0 # systemctl status vxvm-boot
 vxvm-boot.service - VERITAS Volume Manager Boot service
   Loaded: loaded (/usr/lib/systemd/system/vxvm-boot.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2024-11-06 11:02:59 GMT; 1min 48s ago
  Process: 18239 ExecStart=/etc/vx/vxvm-boot start (code=exited, status=1/FAILURE)
 Main PID: 15280

Nov 06 11:01:29 server101 vxvm-boot[18239]: Checking network-attached filesystems
Nov 06 11:01:29 server101 vxvm-boot[18239]: UX:vxfs mount.vxfs: ERROR: V-3-20002: Cannot access /dev/vx/dsk/datadg/datavol1: No such file or directory
Nov 06 11:01:29 server101 vxvm-boot[18239]: UX:vxfs mount.vxfs: ERROR: V-3-24996: Unable to get disk layout version
Nov 06 11:01:29 server101 vxvm-boot[18239]: ..failed
Nov 06 11:01:29 server101 systemd[1]: vxvm-boot.service: Control process exited, code=exited status=1
Nov 06 11:02:59 server101 systemd[1]: vxvm-boot.service: State 'stop-sigterm' timed out. Killing.
Nov 06 11:02:59 server101 systemd[1]: vxvm-boot.service: Killing process 18565 (vxconfigd) with signal SIGKILL.
Nov 06 11:02:59 server101 systemd[1]: Failed to start VERITAS Volume Manager Boot service.
Nov 06 11:02:59 server101 systemd[1]: vxvm-boot.service: Unit entered failed state.
Nov 06 11:02:59 server101 systemd[1]: vxvm-boot.service: Failed with result 'exit-code'.

server101:~ # vxprint
VxVM vxprint ERROR V-5-1-684 IPC failure: Configuration daemon is not accessible

server101:~ # vxdisk list
VxVM vxdisk ERROR V-5-1-684 IPC failure: Configuration daemon is not accessible

RESOLUTION:
Due to a bug in the code, whenever the mount command failed, the vxconfigd process was killed as well. A code fix has been made to resolve this issue.

* 4187902 (Tracking ID: 4080897)

SYMPTOM:
A performance drop was observed on raw VxVM volumes on RHEL 8.x compared to RHEL 7.x.

DESCRIPTION:
The file_operations used for character devices changed between the RHEL 7.x and RHEL 8.x releases. In RHEL 7.x the aio_read and aio_write function pointers are implemented, whereas these changed to read_iter and write_iter respectively in the later release. In the RHEL 8.x changes, VxVM code called generic_file_write_iter(). The problem is that this function takes an inode lock; in multi-threaded write workloads, this semaphore effectively serializes I/O submission, leading to the dropped performance.

RESOLUTION:
Using the __generic_file_write_iter() function resolves the issue, and a vxvm_generic_write_sync() function is implemented to handle the syncing part of the write, similar to functions like blkdev_write_iter() and generic_file_write_iter().

* 4188953 (Tracking ID: 4178449)

SYMPTOM:
vxconfigd aborts with a segmentation fault; the vold core file shows thread stack corruption.

DESCRIPTION:
In vxconfigd multi-threaded mode, two threads were writing the translog in parallel using a static buffer. The buffer can be reallocated to a bigger size, resulting in one thread accessing it after it has been freed.

RESOLUTION:
Use thread-safe popen() and a mutex to protect the static buffer from use-after-free.
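The mutex part of the fix can be sketched in a few lines. This is a stand-in under stated assumptions, not vxconfigd internals: a lock is held across the whole grow-and-write sequence, so no writer can observe storage invalidated by another writer's reallocation. Class and field names are hypothetical.

```python
import threading

class TransLogBuffer:
    """Sketch: a mutex serializes access to a shared, growable buffer,
    preventing the use-after-free that parallel writers could hit when
    one of them reallocates ("grows") the buffer."""

    def __init__(self):
        self._lock = threading.Lock()
        self._buf = bytearray(64)
        self._used = 0

    def append(self, record: bytes):
        with self._lock:                 # held across grow + copy
            while self._used + len(record) > len(self._buf):
                # double the buffer, modelling a realloc to a bigger size
                self._buf.extend(bytes(len(self._buf)))
            end = self._used + len(record)
            self._buf[self._used:end] = record
            self._used = end
```

Without the lock, one thread could compute its write offset, lose the CPU while another thread reallocates, and then write into freed storage; with it, every grow-plus-write is atomic.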

* 4188954 (Tracking ID: 4178801)

SYMPTOM:
When vxprint is executed as a non-root user, the error string below appears on screen.

$ vxprint
VxVM vxprint ERROR V-5-1-0 Couldn't opne file /etc/vx/log/cmdlog

DESCRIPTION:
On executing the vxprint command as a non-root user, the error string seen contains a typo ("opne" instead of "open"). This could impact any automation a customer has set up for their internal use case. The error message is not needed anyway.

RESOLUTION:
Remove the error message.

* 4188970 (Tracking ID: 4164734)

SYMPTOM:
Support for TLS1.1 is not disabled.

DESCRIPTION:
In the VxVM product, support for TLS 1.0, SSLv2, and SSLv3 is already disabled. Support for TLS 1.1 was not yet disabled, and TLSv1.1 has security vulnerabilities.

RESOLUTION:
The required code change has been made to disable support for TLS 1.1.

* 4189028 (Tracking ID: 4185141)

SYMPTOM:
Support VxVM on SLES15 SP6

DESCRIPTION:
VxVM encountered breakages with SLES15 SP6

RESOLUTION:
Code changes have been done to support VxVM and ASL-APM for SLES15 SP6

* 4189075 (Tracking ID: 4132774)

SYMPTOM:
Existing VxVM package fails to load on SLES15SP5

DESCRIPTION:
Multiple changes have been made in this kernel related to the handling of SCSI passthrough requests, initialization of bio routines, and the ways of obtaining block requests. Hence the existing code is not compatible with SLES15 SP5.

RESOLUTION:
Required changes have been done to make VxVM compatible with SLES15SP5.

* 4189151 (Tracking ID: 4189376)

SYMPTOM:
/dev/vx/.dmp/<device> may point to invalid/stale device links under "/dev/disk/by-path/".

DESCRIPTION:
For DELL/EMC ScaleIO devices, there may be stale links under "/dev/vx/.dmp/<device>" to scini devices.
While creating the entries under "/dev/vx/.dmp", the PCI attribute was not checked; because of this, there can be invalid links to scini devices.

RESOLUTION:
Appropriate code changes are done to create correct links under "/dev/vx/.dmp".

Patch ID: VRTSaslapm 8.0.0.3300

* 4189390 (Tracking ID: 4188104)

SYMPTOM:
dummy incident for archival.

DESCRIPTION:
dummy incident for archival.

RESOLUTION:
Creating dummy incident for archival.

Patch ID: VRTSvxvm-8.0.0.2700

* 4154821 (Tracking ID: 4149248)

SYMPTOM:
Third-party components (OpenSSL, curl, and libxml) used by VxVM exhibit security vulnerabilities.

DESCRIPTION:
VxVM utilizes current versions of OpenSSL, curl, and libxml, which have been reported to have security vulnerabilities.

RESOLUTION:
Upgrades to newer versions of OpenSSL, curl, and libxml have been implemented to address the reported security vulnerabilities.

Patch ID: VRTSaslapm 8.0.0.2700

* 4154821 (Tracking ID: 4149248)

SYMPTOM:
Third-party components (OpenSSL, curl, and libxml) used by VxVM exhibit security vulnerabilities.

DESCRIPTION:
VxVM utilizes current versions of OpenSSL, curl, and libxml, which have been reported to have security vulnerabilities.

RESOLUTION:
Upgrades to newer versions of OpenSSL, curl, and libxml have been implemented to address the reported security vulnerabilities.

Patch ID: VRTSvxvm-8.0.0.2600

* 4121828 (Tracking ID: 4124457)

SYMPTOM:
VxVM module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
VxVM module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSaslapm 8.0.0.2600

* 4101808 (Tracking ID: 4101807)

SYMPTOM:
"vxdisk -e list" does not show "svol" for Hitachi ShadowImage (SI) svol devices.

DESCRIPTION:
VxVM with DMP is failing to detect Hitachi ShadowImage (SI) svol devices.

RESOLUTION:
Hitachi ASL modified to correctly read SCSI Byte locations and recognize ShadowImage (SI) svol device.

* 4116688 (Tracking ID: 4085145)

SYMPTOM:
This issue occurs in AWS environments; on-premises physical/VM hosts do not have this issue (as ioctl and sysfs give the same values there).

DESCRIPTION:
The UDID value for Amazon EBS devices was going beyond its limit (it is read from sysfs, as ioctl is not supported by AWS).

RESOLUTION:
Code changes were made to fetch the LSN through ioctl, as a fix for the intermittent ioctl failure is available.

* 4117385 (Tracking ID: 4117350)

SYMPTOM:
The error below is observed when trying to import the disk group:

# vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg
VxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]

Please refer to system log for details.

DESCRIPTION:
The REPLICATED flag is used to identify a hardware-replicated device, so to import a disk group on REPLICATED disks, the usereplicatedev option must be used. As that option was not provided, the issue was observed.

RESOLUTION:
REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.

* 4121828 (Tracking ID: 4124457)

SYMPTOM:
VxVM module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
VxVM module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSvxvm-8.0.0.2300

* 4108322 (Tracking ID: 4107083)

SYMPTOM:
In case of EMC BCV NR LUNs, vxconfigd taking a long time to start post reboot.

DESCRIPTION:
This issue is introduced when BCV NR LUNs go into an error state: the SCSI inquiry succeeds, but the disk retry takes time as it loops for each disk. This is a corner case that was not handled for BCV NR LUNs.

RESOLUTION:
Necessary code changes have been made: for BCV NR LUNs, when the SCSI inquiry succeeds, the disk is marked as failed so that vxconfigd starts quickly.

* 4111302 (Tracking ID: 4092495)

SYMPTOM:
VxVM installation fails on SLES15 SP4

DESCRIPTION:
This new OS has the 5.14.21 kernel. Multiple changes have been made in it regarding block devices, I/O handling, the partition table in gendisk, and so on; hence the VxVM code is not compatible with these new kernel changes.

RESOLUTION:
Required changes have been made to make VxVM compatible with SLES15 SP4.

* 4111442 (Tracking ID: 4066785)

SYMPTOM:
When the replicated disks are in SPLIT mode, importing its disk group failed with "Device is a hardware mirror".

DESCRIPTION:
When the replicated disks are in SPLIT mode, in which they are readable and writable, importing their disk group failed with "Device is a hardware mirror". The third-party array does not expose a disk attribute that shows when a device is in SPLIT mode. With this new enhancement, the replicated disk group can be imported with the option `-o usereplicatedev=only`.

RESOLUTION:
The code is enhanced to import the replicated disk group with option `-o usereplicatedev=only`.

* 4111560 (Tracking ID: 4098391)

SYMPTOM:
Kernel panic is observed with following stack:

#6 [ffffa479c21cf6f0] page_fault at ffffffffb240130e
    [exception RIP: bfq_bio_bfqg+37]
    RIP: ffffffffb1e78135  RSP: ffffa479c21cf7a0  RFLAGS: 00010002
    RAX: 000000000000001f  RBX: 0000000000000000  RCX: ffffa479c21cf860
    RDX: ffff8bd779775000  RSI: ffff8bd795b2fa00  RDI: ffff8bd795b2fa00
    RBP: ffff8bd78f136000   R8: 0000000000000000   R9: ffff8bd793a5b800
    R10: ffffa479c21cf828  R11: 0000000000001000  R12: ffff8bd7796b6e60
    R13: ffff8bd78f136000  R14: ffff8bd795b2fa00  R15: ffff8bd7946ad0bc
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
#7 [ffffa479c21cf7b0] bfq_bic_update_cgroup at ffffffffb1e78458
#8 [ffffa479c21cf7e8] bfq_bio_merge at ffffffffb1e6f47f
#9 [ffffa479c21cf840] blk_mq_submit_bio at ffffffffb1e48c09
#10 [ffffa479c21cf8c8] submit_bio_noacct at ffffffffb1e3c7e3
#11 [ffffa479c21cf958] submit_bio at ffffffffb1e3c87b
#12 [ffffa479c21cf9a8] submit_bh_wbc at ffffffffb1d2536a
#13 [ffffa479c21cf9e0] block_read_full_page at ffffffffb1d27ac1
#14 [ffffa479c21cfa90] do_read_cache_page at ffffffffb1c2f7e5
#15 [ffffa479c21cfb48] read_part_sector at ffffffffb1e546b5
#16 [ffffa479c21cfb60] read_lba at ffffffffb1e595d2
#17 [ffffa479c21cfba8] efi_partition at ffffffffb1e59f4d
#18 [ffffa479c21cfcb8] blk_add_partitions at ffffffffb1e54377
#19 [ffffa479c21cfcf8] bdev_disk_changed at ffffffffb1d2a8fa
#20 [ffffa479c21cfd30] __blkdev_get at ffffffffb1d2c16c
#21 [ffffa479c21cfda0] blkdev_get at ffffffffb1d2c2b4
#22 [ffffa479c21cfdb8] __device_add_disk at ffffffffb1e5107e
#23 [ffffa479c21cfe20] dmp_register_disk at ffffffffc0e68ae7 [vxdmp]
#24 [ffffa479c21cfe50] dmp_reconfigure_db at ffffffffc0e8d8bd [vxdmp]
#25 [ffffa479c21cfe80] dmpioctl at ffffffffc0e75cd5 [vxdmp]
#26 [ffffa479c21cfe90] dmp_ioctl at ffffffffc0e9d469 [vxdmp]
#27 [ffffa479c21cfea8] blkdev_ioctl at ffffffffb1e4ed19
#28 [ffffa479c21cfef0] block_ioctl at ffffffffb1d2a719
#29 [ffffa479c21cfef8] ksys_ioctl at ffffffffb1cfb262
#30 [ffffa479c21cff30] __x64_sys_ioctl at ffffffffb1cfb296
#31 [ffffa479c21cff38] do_syscall_64 at ffffffffb1a0538b
#32 [ffffa479c21cff50] entry_SYSCALL_64_after_hwframe at ffffffffb240008c

DESCRIPTION:
VxVM causes kernel panic because of null pointer dereference in kernel code when BFQ disk io scheduler is used. This is observed on SLES15 SP3 minor kernel >= 5.3.18-150300.59.68.1 and SLES15 SP4 minor kernel >= 5.14.21-150400.24.11.1

RESOLUTION:
As of now there is no fix available. It is recommended to use mq-deadline as io scheduler. Code changes have been done to automatically change the disk io scheduler to mq-deadline.

* 4112219 (Tracking ID: 4069134)

SYMPTOM:
"vxassist maxsize alloc:array:<enclosure_name>" command may fail with below error:
VxVM vxassist ERROR V-5-1-18606 No disks match specification for Class: array, Instance: <enclosure_name>

DESCRIPTION:
If the enclosure name is longer than 16 characters, the "vxassist maxsize alloc:array" command can fail.
This is because an enclosure name of more than 16 characters gets truncated while being copied from VxDMP to VxVM,
which further causes the above vxassist command to fail.

RESOLUTION:
Code changes are done to avoid the truncation of enclosure name while copying from VxDMP to VxVM.
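The failure mode described above is a classic fixed-width-field truncation. The sketch below is hypothetical (the 16-character limit comes from the description; the helper name is invented) and only illustrates why the clipped copy breaks an exact-match lookup of the enclosure name.

```python
# Hypothetical model of the lossy VxDMP-to-VxVM name copy before the fix:
# a fixed 16-character field silently clips longer enclosure names, so an
# exact-match lookup against the clipped copy fails.
MAX_ENCLR_NAME = 16

def copy_to_fixed_field(name: str) -> str:
    """Copy an enclosure name into a 16-character field, truncating."""
    return name[:MAX_ENCLR_NAME]
```

With a 19-character name such as "emc_clariion_encl_0", the stored copy is "emc_clariion_enc", and a request specifying the full name no longer matches any known enclosure.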

* 4113223 (Tracking ID: 4093067)

SYMPTOM:
System panicked in the following stack:

#9  [] page_fault at  [exception RIP: bdevname+26]
#10 [] get_dip_from_device  [vxdmp]
#11 [] dmp_node_to_dip at [vxdmp]
#12 [] dmp_check_nonscsi at [vxdmp]
#13 [] dmp_probe_required at [vxdmp]
#14 [] dmp_check_disabled_policy at [vxdmp]
#15 [] dmp_initiate_restore at [vxdmp]
#16 [] dmp_daemons_loop at [vxdmp]

DESCRIPTION:
After getting the block_device from the OS, DMP did not do a NULL pointer check on block_device->bd_part. This NULL pointer further caused a system panic when bdevname() was called.

RESOLUTION:
The code changes have been done to fix the problem.

* 4113225 (Tracking ID: 4068090)

SYMPTOM:
System panicked in the following stack:

#7 page_fault at ffffffffbce010fe
[exception RIP: vx_bio_associate_blkg+56]
#8 vx_dio_physio at ffffffffc0f913a3 [vxfs]
#9 vx_dio_rdwri at ffffffffc0e21a0a [vxfs]
#10 vx_dio_read at ffffffffc0f6acf6 [vxfs]
#11 vx_read_common_noinline at ffffffffc0f6c07e [vxfs]
#12 vx_read1 at ffffffffc0f6c96b [vxfs]
#13 vx_vop_read at ffffffffc0f4cce2 [vxfs]
#14 vx_naio_do_work at ffffffffc0f240bb [vxfs]
#15 vx_naio_worker at ffffffffc0f249c3 [vxfs]

DESCRIPTION:
To get a VxVM volume's block_device from its gendisk, VxVM calls bdget_disk(), which increments the device inode reference count. The count should be decremented by a bdput() call, which was missing from the code; hence an inode count leak occurred, which may cause a panic in VxFS when issuing I/O on a VxVM volume.

RESOLUTION:
The code changes have been done to fix the problem.
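The bug above is a reference-counting imbalance: every "get" must be paired with a "put". The sketch below is a Python stand-in, not kernel code; the class and method names merely mirror the bdget_disk()/bdput() pairing described in the entry.

```python
# Illustrative model of the leak: bdget_disk() takes a reference that must
# be balanced by bdput(); the missing release leaks the device inode.
class BlockDevice:
    def __init__(self):
        self.refcount = 0

    def bdget_disk(self):        # models taking a reference
        self.refcount += 1
        return self

    def bdput(self):             # models the release that was missing
        self.refcount -= 1

def issue_io(dev: BlockDevice):
    """Take a reference for the duration of the I/O, then release it."""
    bd = dev.bdget_disk()
    try:
        pass                     # I/O would be issued against bd here
    finally:
        bd.bdput()               # the fix: always balance the reference
```

Without the `bdput()` in the `finally` block, each I/O path would leave the count one higher than before, and the device inode would never be released.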

* 4113328 (Tracking ID: 4102439)

SYMPTOM:
A failure was observed when trying to run the vxencrypt rekey operation on an encrypted volume (to perform key rotation).

DESCRIPTION:
The KMS token is 64 bytes, but the code restricted the token size to 63 bytes and threw an error if the token was longer than 63 bytes.

RESOLUTION:
The issue is resolved by setting the assumed token size to the size of a KMS token, which is 64 bytes.
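The off-by-one can be shown in two lines. This is a hypothetical sketch (the function names are invented); only the 63-versus-64 limit comes from the entry above.

```python
# Sketch of the off-by-one: the validator allowed at most 63 bytes while a
# KMS token is 64 bytes, so every full-length token was rejected.
KMS_TOKEN_SIZE = 64

def token_ok_buggy(token: bytes) -> bool:
    return len(token) <= KMS_TOKEN_SIZE - 1   # rejects a real 64-byte token

def token_ok_fixed(token: bytes) -> bool:
    return len(token) <= KMS_TOKEN_SIZE       # accepts the real token size
```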

* 4113331 (Tracking ID: 4105565)

SYMPTOM:
In Cluster Volume Replication(CVR) environment, system panic with below stack when Veritas Volume Replicator(VVR) was doing recovery:
[] do_page_fault 
[] page_fault 
[exception RIP: volvvr_rvgrecover_nextstage+747] 
[] volvvr_rvgrecover_done [vxio]
[] voliod_iohandle[vxio]
[] voliod_loop at[vxio]

DESCRIPTION:
A race condition could cause VVR to fail to trigger a DCM flush sio. VVR did not perform a sanity check on this sio, hence the system panic.

RESOLUTION:
Code changes have been made to do a sanity check of the DCM flush sio.

* 4113342 (Tracking ID: 4098965)

SYMPTOM:
vxconfigd dumps core when scanning IBM XIV LUNs, with the following stack:

#0  0x00007fe93c8aba54 in __memset_sse2 () from /lib64/libc.so.6
#1  0x000000000061d4d2 in dmp_getenclr_ioctl ()
#2  0x00000000005c54c7 in dmp_getarraylist ()
#3  0x00000000005ba4f2 in update_attr_list ()
#4  0x00000000005bc35c in da_identify ()
#5  0x000000000053a8c9 in find_devices_in_system ()
#6  0x000000000053aab5 in mode_set ()
#7  0x0000000000476fb2 in ?? ()
#8  0x00000000004788d0 in main ()

DESCRIPTION:
An incorrect memory address could cause two issues if more than one disk array is connected:

1. If the incorrect memory address exceeds the range of valid virtual memory, it will trigger "Segmentation fault" and crash vxconfigd.
2. If  the incorrect memory address does not exceed the range of valid virtual memory, it will cause memory corruption issue but maybe not trigger vxconfigd crash issue.

RESOLUTION:
Code changes have been made to correct the problem.

* 4115475 (Tracking ID: 4017334)

SYMPTOM:
VXIO call stack trace generated in /var/log/messages

DESCRIPTION:
This issue occurs due to a limitation in the way InfoScale interacts with the RHEL8.2 kernel.
 Call Trace:
 kmsg_sys_rcv+0x16b/0x1c0 [vxio]
 nmcom_get_next_mblk+0x8e/0xf0 [vxio]
 nmcom_get_hdr_msg+0x108/0x200 [vxio]
 nmcom_get_next_msg+0x7d/0x100 [vxio]
 nmcom_wait_msg_tcp+0x97/0x160 [vxio]
 nmcom_server_main_tcp+0x4c2/0x11e0 [vxio]

RESOLUTION:
Changes were made in header files for function definitions when the RHEL version is >= 8.2.
This kernel warning can be safely ignored, as it has no functional impact.

Patch ID: VRTSaslapm 8.0.0.2300

* 4115481 (Tracking ID: 4098395)

SYMPTOM:
VRTSaslapm package(rpm) doesn't function correctly for SLES15SP4.

DESCRIPTION:
Due to changes in SLES15 SP4 update, there are breakages in APM(Array Policy Module) kernel modules 
present in VRTSaslapm package. Hence the currently available VRTSaslapm doesn't function with SLES15 SP4. 
The VRTSaslapm code needs to be recompiled with SLES15 SP4 kernel.

RESOLUTION:
VRTSaslapm  is recompiled with SLES15 SP4 kernel.

Patch ID: VRTSvxvm-8.0.0.1800

* 4067609 (Tracking ID: 4058464)

SYMPTOM:
vradmin resizevol fails when the file system is not mounted on the master.

DESCRIPTION:
The vradmin resizevol command resizes the data volume and the file system on the primary site, whereas on the secondary site it resizes only the data volume, as the file system is not mounted on the secondary site.

The vradmin resizevol command ships the command to the logowner at the vradmind level; vradmind on the logowner in turn ships the low-level vx commands to the master at the vradmind level, and the command finally gets executed on the master.

RESOLUTION:
Changes were introduced to ship the command to the node on which the file system is mounted. The CVM node name where the file system is mounted must be provided; vradmind then uses it to ship the command to that node.

* 4067635 (Tracking ID: 4059982)

SYMPTOM:
In a container environment, the vradmin migrate command fails multiple times because the rlink is not in the connected state.

DESCRIPTION:
In VVR, rlinks are disconnected and reconnected during the replication lifecycle. If the vradmin migrate command is executed in this window, it experiences errors. This internally causes vradmind to make configuration changes multiple times, which impacts further vradmin commands.

RESOLUTION:
The vradmin migrate command requires the rlink data to be up to date on both primary and secondary. It internally executes low-level commands like vxrvg makesecondary and vxrvg makeprimary to change the roles of primary and secondary. These commands do not depend on the rlink being in the connected state, so changes have been made to remove the rlink connection handling.

* 4070098 (Tracking ID: 4071345)

SYMPTOM:
Replication is unresponsive after failed site is up.

DESCRIPTION:
Autosync and unplanned fallback synchronization had issues with a mix of cloud and non-cloud volumes in an RVG.
After a cloud volume was found, the rest of the volumes were being ignored for synchronization.

RESOLUTION:
Fixed the condition to make it iterate over all volumes.

* 4078531 (Tracking ID: 4075860)

SYMPTOM:
On RHEL8 NBFS/Access commands like python3, sort, sudo, ssh, etc are generating core dump during execution of the command mkfs.vxfs & mkfs.ext4 in parallel

DESCRIPTION:
On RHEL8 NBFS/Access commands like python3, sort, sudo, ssh, etc are generating core dump during execution of the command mkfs.vxfs & mkfs.ext4 in parallel. This was happening due to missing fpu armor protection for FPU instruction set.

RESOLUTION:
Fix is added to use FPU protection while using FPU instruction set

* 4079345 (Tracking ID: 4069940)

SYMPTOM:
FS mount failed during Cluster configuration on 24-node physical BOM setup.

DESCRIPTION:
FS mount failed during cluster configuration on the 24-node physical BOM setup because VxVM transactions were taking more time than the VCS timeouts allowed.

RESOLUTION:
Fix is added to reduce unnecessary transaction time on large node setup.

* 4080041 (Tracking ID: 4056953)

SYMPTOM:
3PAR PE LUNs are reported in error state by 3PAR ASL

DESCRIPTION:
3PAR storage presents some special storage LUNs (3PAR PE) that need to be skipped by VxVM and not claimed.
Multiple PE LUNs from different 3PAR enclosures cause an issue for VxDMP to handle.

RESOLUTION:
Fix added to SKIP the 3PAR PE luns by 3PAR ASL to avoid disks being reported in error state.

* 4080105 (Tracking ID: 4045837)

SYMPTOM:
DCL volume subdisks do not relocate after the node fault timeout and remain in the RELOCATE state.

DESCRIPTION:
If the DCO has failed plexes and the DCO is on different disks than the data, DCO relocation needs to be triggered explicitly, because try_fss_reloc performs DCO relocation only in the context of data, which may not succeed if sufficient data disks are not available (an additional host or disks may be available to which the DCO can relocate).

RESOLUTION:
Fix is added to relocate DCL subdisks to available spare disks

* 4080122 (Tracking ID: 4044068)

SYMPTOM:
Replace Node is failing at Configuring NetBackup stage due to vxdisk init failed with error "Could not obtain requested lock".

DESCRIPTION:
Replace Node is failing at Configuring NetBackup stage due to vxdisk init failed with error "Could not obtain requested lock".

RESOLUTION:
Fix is added to retry the transaction a few times if it fails with this error.

* 4080269 (Tracking ID: 4044898)

SYMPTOM:
Rlink tags from info records could not be seen with the vxrlink listtag command.

DESCRIPTION:
Making rlinks FIPS compliant has a second phase that deals with the disk group upgrade path, where rlink encryption tags need to be copied to the info record and must be FIPS compliant. Here, vxdg upgrade internally calls vxrlink and vxencrypt to upgrade the rlink and rekey the rlink keys, respectively.

RESOLUTION:
All the rlink encryption tags are copied to the info record; when the disk group is upgraded, the rlink is internally upgraded as well, and this upgrade process copies the rlink tags to the info records.

* 4080276 (Tracking ID: 4065145)

SYMPTOM:
During addsec, encrypted volume tags could not be processed for multiple volumes and vsets.
Error we saw:

$ vradmin -g dg2 -encrypted addsec dg2_rvg1 10.210.182.74 10.210.182.75

Error: Duplicate tag name vxvm.attr.enckeytype provided in input.

DESCRIPTION:
The number of tags was not defined, and all the tags were processed at once instead of processing the maximum number of tags for a volume.

RESOLUTION:
Introduced a number-of-tags variable that depends on the cipher method (CBC/GCM), and fixed minor code issues.

* 4080277 (Tracking ID: 3966157)

SYMPTOM:
The SRL batching feature was broken and could not be enabled, as it might have caused problems.

DESCRIPTION:
Batching of updates needs to be done to gain the benefit of combining multiple updates and improving performance.

RESOLUTION:
The design has been simplified: each small update within a batch is now aligned to a 4K size, so by default the whole batch is aligned and there is no need for bookkeeping around the last update, reducing the overhead of the related calculations. By padding each update to 4K, the batch of updates is itself 4K-aligned.
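The alignment arithmetic behind this scheme can be sketched directly. This is a hypothetical illustration assuming a 4K (4096-byte) boundary, as the entry above describes; the function names are invented.

```python
# Sketch: pad every update up to the next 4K boundary, so the batch as a
# whole is 4K-aligned by construction and no last-update bookkeeping is
# needed.
ALIGN = 4096

def padded_len(update_len: int) -> int:
    """Round one update's length up to the next 4K boundary."""
    return -(-update_len // ALIGN) * ALIGN   # ceiling division, then scale

def batch_len(update_lens):
    """Total batch length; always a multiple of 4K by construction."""
    return sum(padded_len(n) for n in update_lens)
```

The trade-off is explicit: each update may carry some padding, but the batch needs no special-case handling for its final update.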

* 4080579 (Tracking ID: 4077876)

SYMPTOM:
When one cluster node is rebooted, EC log replay is triggered for the shared EC volume.
The system crashed during this EC log replay.

DESCRIPTION:
Two flags were assigned the same value, so the system panicked during a flag check.

RESOLUTION:
Changed the code flow to avoid checking the values of flags having the same value.

* 4080845 (Tracking ID: 4058166)

SYMPTOM:
While setting up VVR/CVR on large size data volumes (size > 3TB) with filesystems mounted on them, initial autosync operation takes a lot of time to complete.

DESCRIPTION:
While performing autosync on a VVR/CVR setup for a volume with a file system mounted, if the smartmove feature is enabled, the operation does a smartsync by syncing only the regions dirtied by the file system instead of the entire volume, which completes faster than the normal case. However, for large volumes (size > 3TB), the smartmove feature does not get enabled even with a file system mounted on them, and hence the autosync operation syncs the entire volume. This behaviour is due to the smaller-size DCM plexes allocated for such large volumes; autosync ends up performing a complete volume sync, taking a lot more time to complete.

RESOLUTION:
Increase the limit of DCM plex size (loglen) beyond 2MB so that smart move feature can be utilised properly.

* 4080846 (Tracking ID: 4058437)

SYMPTOM:
Replication between 8.0 and 7.4.x fails with an error due to sector size field.

DESCRIPTION:
The 7.4.x branch has the sector size set to zero, which internally indicates 512 bytes. This caused startrep and resumerep to fail with the error message below.

Message from Primary:

VxVM VVR vxrlink ERROR V-5-1-20387  sector size mismatch, Primary is having sector size 512, Secondary is having sector size 0

RESOLUTION:
A check was added to support replication between 8.0 and 7.4.x.

* 4081790 (Tracking ID: 4080373)

SYMPTOM:
SFCFSHA configuration failed on RHEL 8.4 due to 'chmod -R' error.

DESCRIPTION:
Failure messages are getting logged as all log permissions are changed to 600 during the upgrade and all log files moved to '/var/log/vx'.

RESOLUTION:
Added -f option to chmod command to suppress warning and redirect errors from mv command to /dev/null.

* 4083337 (Tracking ID: 4081890)

SYMPTOM:
On RHEL8 NBFS/Access commands like python3, sort, sudo, ssh, etc are generating core dump during execution of the command mkfs.vxfs & mkfs.ext4 in parallel.

DESCRIPTION:
On RHEL8 NBFS/Access commands like python3, sort, sudo, ssh, etc are generating core dump during execution of the command mkfs.vxfs & mkfs.ext4 in parallel. This was happening due to missing fpu armor protection for FPU instruction set.

RESOLUTION:
Fix is added to use FPU protection while using FPU instruction set

* 4085619 (Tracking ID: 4086718)

SYMPTOM:
VxVM fails to install because vxdmp module fails to load on latest minor kernel of SLES15SP2.

DESCRIPTION:
VxVM modules fail to load on latest minor kernel of SLES15SP2. Following messages can be seen logged in system logs:
vxvm-boot[32069]: ERROR: No appropriate modules found.
vxvm-boot[32069]: Error in loading module "vxdmp". See documentation.
vxvm-boot[32069]: Modules not Loaded

RESOLUTION:
Code changes have been done to fix this issue.

* 4087233 (Tracking ID: 4086856)

SYMPTOM:
For Appliance FLEX product using VRTSdocker-plugin, docker.service needs to be replaced as it is not supported on RHEL8.

DESCRIPTION:
Appliance FLEX product using VRTSdocker-plugin is switching to RHEL8 on which docker.service does not exist. vxinfoscale-docker.service must stop after all container services are stopped. podman.service shuts down after all container services are stopped, so docker.service can be replaced with podman.service.

RESOLUTION:
Added platform-specific dependencies for VRTSdocker-plugin. For RHEL8 podman.service introduced.

* 4087439 (Tracking ID: 4088934)

SYMPTOM:
"dd" command on a simple volume results in kernel panic.

DESCRIPTION:
Kernel panic is observed with following stack trace:
 #0 [ffffb741c062b978] machine_kexec at ffffffffa806fe01
 #1 [ffffb741c062b9d0] __crash_kexec at ffffffffa815959d
 #2 [ffffb741c062ba98] crash_kexec at ffffffffa815a45d
 #3 [ffffb741c062bab0] oops_end at ffffffffa8036d3f
 #4 [ffffb741c062bad0] general_protection at ffffffffa8a012c2
    [exception RIP: __blk_rq_map_sg+813]
    RIP: ffffffffa84419dd  RSP: ffffb741c062bb88  RFLAGS: 00010202
    RAX: 0c2822c2621b1294  RBX: 0000000000010000  RCX: 0000000000000000
    RDX: ffffb741c062bc40  RSI: 0000000000000000  RDI: ffff8998fc947300
    RBP: fffff92f0cbe6f80   R8: ffff8998fcbb1200   R9: fffff92f0cbe0000
    R10: ffff8999bf4c9818  R11: 000000000011e000  R12: 000000000011e000
    R13: fffff92f0cbe0000  R14: 00000000000a0000  R15: 0000000000042000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #5 [ffffb741c062bc38] scsi_init_io at ffffffffc03107a2 [scsi_mod]
 #6 [ffffb741c062bc78] sd_init_command at ffffffffc056c425 [sd_mod]
 #7 [ffffb741c062bcd8] scsi_queue_rq at ffffffffc0311f6e [scsi_mod]
 #8 [ffffb741c062bd20] blk_mq_dispatch_rq_list at ffffffffa8447cfe
 #9 [ffffb741c062bdc0] __blk_mq_do_dispatch_sched at ffffffffa844cae0
#10 [ffffb741c062be28] __blk_mq_sched_dispatch_requests at ffffffffa844d152
#11 [ffffb741c062be68] blk_mq_sched_dispatch_requests at ffffffffa844d290
#12 [ffffb741c062be78] __blk_mq_run_hw_queue at ffffffffa84466a3
#13 [ffffb741c062be98] process_one_work at ffffffffa80bcd74
#14 [ffffb741c062bed8] worker_thread at ffffffffa80bcf8d
#15 [ffffb741c062bf10] kthread at ffffffffa80c30ad
#16 [ffffb741c062bf50] ret_from_fork at ffffffffa8a001ff

RESOLUTION:
Code changes have been done to fix this issue.

* 4087791 (Tracking ID: 4087770)

SYMPTOM:
Data corruption post mirror attach operation seen after complete storage fault for DCO volumes.

DESCRIPTION:
DCO (data change object) tracks delta changes for faulted mirrors. During complete storage loss of DCO volume mirrors in, DCO object will be marked as BADLOG and becomes unusable for bitmap tracking.
Post storage reconnect (such as node rejoin in FSS environments) DCO will be re-paired for subsequent tracking. During this if VxVM finds any of the mirrors detached for data volumes, those are expected to be marked for full-resync as bitmap in DCO has no valid information. Bug in repair DCO operation logic prevented marking mirror for full-resync in cases where repair DCO operation is triggered before data volume is started. This resulted into mirror getting attached without any data being copied from good mirrors and hence reads serviced from such mirrors have stale data, resulting into file-system corruption and data loss.

RESOLUTION:
Code has been added to ensure repair DCO operation is performed only if volume object is enabled so as to ensure detached mirrors are marked for full-resync appropriately.

* 4088076 (Tracking ID: 4054685)

SYMPTOM:
RVG recovery gets hung in case of reconfiguration scenarios in CVR environments leading to vx commands hung on master node.

DESCRIPTION:
As a part of RVG recovery, DCM and data volume recovery are performed. But data volume recovery takes a long time due to wrong IOD handling on Linux platforms.

RESOLUTION:
Fixed the IOD handling mechanism to resolve the RVG recovery hang.

* 4088483 (Tracking ID: 4088484)

SYMPTOM:
DMP_APM module is not getting loaded and throwing following message in the dmesg logs:
Mod load failed for dmpnvme module: dependency conflict
VxVM vxdmp V-5-0-1015 DMP_APM: DEPENDENCY CONFLICT

DESCRIPTION:
Loading of the NVMe module failed because a dmpaa module dependency was added in the APM while the system does not have any A/A type disk; as a result, the nvme module failed to load.

RESOLUTION:
Removed A/A dependency from NVMe APM.

* 4088762 (Tracking ID: 4087099)

SYMPTOM:
DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6 and NVME disks are in an error state.

DESCRIPTION:
The minor number of NVMe disks was getting changed when scandisks was performed.
This led to incorrect major/minor information being present in the vold core database.

RESOLUTION:
Fixed the device open call to use O_RDONLY; opening with write permissions was changing the minor number.

Patch ID: VRTSaslapm 8.0.0.1800

* 4080041 (Tracking ID: 4056953)

SYMPTOM:
3PAR PE LUNs are reported in an error state by the 3PAR ASL.

DESCRIPTION:
3PAR storage presents special storage LUNs (3PAR PE) that must be skipped by VxVM and not claimed.
VxDMP cannot correctly handle multiple PE LUNs from different 3PAR enclosures.

RESOLUTION:
A fix has been added so that the 3PAR ASL skips the 3PAR PE LUNs, which prevents the disks from being reported in an error state.

* 4088762 (Tracking ID: 4087099)

SYMPTOM:
The disk group is not imported after an upgrade to InfoScale 8.0u1 on RHEL8.6, and the NVMe disks are in an error state.

DESCRIPTION:
The minor numbers of NVMe disks changed when a scandisks operation was performed. This left incorrect major/minor information in the vold core database.

RESOLUTION:
Fixed the device open call to use O_RDONLY; opening with write permissions was changing the minor number.

Patch ID: VRTSvxvm-8.0.0.1700

* 4081684 (Tracking ID: 4082799)

SYMPTOM:
A security vulnerability exists in the third-party component libcurl.

DESCRIPTION:
VxVM uses a third-party component named libcurl in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libcurl in which the security vulnerability has been addressed.

Patch ID: VRTSvxvm-8.0.0.1600

* 4057420 (Tracking ID: 4060462)

SYMPTOM:
System is unresponsive while adding new nodes.

DESCRIPTION:
After a node is removed and an attempt is made to add a node with a different node name, the system becomes
unresponsive. When a node leaves the cluster, in-memory information related to that node is not cleared due to a race condition.

RESOLUTION:
Fixed the race condition so that the in-memory information of a node that leaves the cluster is cleared.

* 4062799 (Tracking ID: 4064208)

SYMPTOM:
Node is unresponsive while it gets added to the cluster.

DESCRIPTION:
While a node joins the cluster, if the bits on the node have been upgraded, the size
of an object is interpreted incorrectly. The issue is observed when the number of objects is high, on
InfoScale 7.3.1 and later.

RESOLUTION:
Correct sizes are calculated for the data received from the master node.

* 4065841 (Tracking ID: 4065495)

SYMPTOM:
EMC PowerStore is a new array, and support for it needs to be added.

DESCRIPTION:
EMC PowerStore is a new array that the current ASL does not support, so it is not claimed. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes to support EMC PowerStore have been done.

* 4066213 (Tracking ID: 4052580)

SYMPTOM:
Multipathing not supported for NVMe devices under VxVM.

DESCRIPTION:
NVMe devices, being non-SCSI devices, are not considered for multipathing.

RESOLUTION:
Changes introduced to support multipathing for NVMe devices.

* 4068407 (Tracking ID: 4068404)

SYMPTOM:
Support is needed to claim ALUA disks on HPE 3PAR/Primera/Alletra 9000 arrays.

DESCRIPTION:
The current ASL does not support the HPE 3PAR/Primera/Alletra 9000 ALUA array type. Support for these ALUA arrays has now been added to the ASL.

RESOLUTION:
Code changes to support HPE 3PAR/Primera/Alletra 9000 ALUA array have been done.

Patch ID: VRTSaslapm 8.0.0.1600

* 4065841 (Tracking ID: 4065495)

SYMPTOM:
EMC PowerStore is a new array, and support for it needs to be added.

DESCRIPTION:
EMC PowerStore is a new array that the current ASL does not support, so it is not claimed. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes to support EMC PowerStore have been done.

* 4068407 (Tracking ID: 4068404)

SYMPTOM:
Support is needed to claim ALUA disks on HPE 3PAR/Primera/Alletra 9000 arrays.

DESCRIPTION:
The current ASL does not support the HPE 3PAR/Primera/Alletra 9000 ALUA array type. Support for these ALUA arrays has now been added to the ASL.

RESOLUTION:
Code changes to support HPE 3PAR/Primera/Alletra 9000 ALUA array have been done.

Patch ID: VRTSvxvm-8.0.0.1400

* 4057420 (Tracking ID: 4060462)

SYMPTOM:
System is unresponsive while adding new nodes.

DESCRIPTION:
After a node is removed and an attempt is made to add a node with a different node name, the system becomes
unresponsive. When a node leaves the cluster, in-memory information related to that node is not cleared due to a race condition.

RESOLUTION:
Fixed the race condition so that the in-memory information of a node that leaves the cluster is cleared.

* 4065569 (Tracking ID: 4056156)

SYMPTOM:
VxVM package fails to load on SLES15 SP3

DESCRIPTION:
Changes introduced in SLES15 SP3, including changes to the block-layer structures in the kernel, impacted VxVM block I/O functionality.

RESOLUTION:
Changes have been done to handle the impacted functionalities.

* 4066259 (Tracking ID: 4062576)

SYMPTOM:
When 'hastop -local' is used to stop the cluster, the dg deport command hangs. The following stack trace is observed in the system logs:

#0 [ffffa53683bf7b30] __schedule at ffffffffa834a38d
 #1 [ffffa53683bf7bc0] schedule at ffffffffa834a868
 #2 [ffffa53683bf7bd0] blk_mq_freeze_queue_wait at ffffffffa7e4d4e6
 #3 [ffffa53683bf7c18] blk_cleanup_queue at ffffffffa7e433b8
 #4 [ffffa53683bf7c30] vxvm_put_gendisk at ffffffffc3450c6b [vxio]   
 #5 [ffffa53683bf7c50] volsys_unset_device at ffffffffc3450e9d [vxio]
 #6 [ffffa53683bf7c60] vol_rmgroup_devices at ffffffffc3491a6b [vxio]
 #7 [ffffa53683bf7c98] voldg_delete at ffffffffc34932fc [vxio]
 #8 [ffffa53683bf7cd8] vol_delete_group at ffffffffc3494d0d [vxio]
 #9 [ffffa53683bf7d18] volconfig_ioctl at ffffffffc3555b8e [vxio]
#10 [ffffa53683bf7d90] volsioctl_real at ffffffffc355fc8a [vxio]
#11 [ffffa53683bf7e60] vols_ioctl at ffffffffc124542d [vxspec]
#12 [ffffa53683bf7e78] vols_unlocked_ioctl at ffffffffc124547d [vxspec]
#13 [ffffa53683bf7e80] do_vfs_ioctl at ffffffffa7d2deb4
#14 [ffffa53683bf7ef8] ksys_ioctl at ffffffffa7d2e4f0
#15 [ffffa53683bf7f30] __x64_sys_ioctl at ffffffffa7d2e536

DESCRIPTION:
This issue is caused by kernel-side changes in request-queue handling. The existing VxVM code set the request handling routine (make_request_fn) to vxvm_gen_strategy, and this functionality was impacted by the kernel changes.

RESOLUTION:
Code changes are added to handle the request queues using blk_mq_init_allocated_queue.

* 4066735 (Tracking ID: 4057526)

SYMPTOM:
Whenever vxnm-vxnetd is loaded, it reports "Cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory" in /var/log/messages.

DESCRIPTION:
A systemd update removed support for the /var/lock/subsys/ directory. Thus, whenever vxnm-vxnetd is loaded on systems that support systemd, it
reports "cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory".

RESOLUTION:
Added a check in vxnm-vxnetd.sh to validate whether the /var/lock/subsys/ directory is supported before using it.
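The guard described above can be sketched as follows. This is an illustrative sketch only; the function name and messages are hypothetical, and the actual fix lives inside the vxnm-vxnetd.sh shipped with the patch:

```shell
# Hypothetical sketch: only touch the legacy lock file if the
# /var/lock/subsys-style directory actually exists on this system.
make_subsys_lock() {
    lockdir="$1"
    if [ -d "$lockdir" ]; then
        touch "$lockdir/vxnm-vxnetd"   # legacy behavior, kept where supported
        echo "lock created"
    else
        echo "skipping legacy lock file: $lockdir not present"
    fi
}

d=$(mktemp -d)
make_subsys_lock "$d"           # directory exists: lock file is created
make_subsys_lock "$d/missing"   # directory absent: skipped without an error
```

With this check in place, systems whose systemd no longer provides /var/lock/subsys/ skip the touch quietly instead of logging the "No such file or directory" message.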

* 4066834 (Tracking ID: 4046007)

SYMPTOM:
In an FSS environment, if the cluster name is changed, the private disk region gets corrupted.

DESCRIPTION:
Under some conditions, when vxconfigd tries to update the TOC (table of contents) blocks of disk private region, the allocation maps cannot be initialized in the memory. This could make allocation maps incorrect and lead to corruption of the private region on the disk.

RESOLUTION:
Code changes have been done to avoid corruption of private disk region.

* 4067237 (Tracking ID: 4058894)

SYMPTOM:
After package installation and reboot, messages regarding udev rules for ignore_device are observed in /var/log/messages:
systemd-udevd[774]: /etc/udev/rules.d/40-VxVM.rules:25 Invalid value for OPTIONS key, ignoring: 'ignore_device'

DESCRIPTION:
From SLES15 SP3 onwards, ignore_device is deprecated in udev rules and is no longer available for use. Hence these messages are observed in the system logs.

RESOLUTION:
The required changes have been made to handle this defect.

Patch ID: VRTSaslapm 8.0.0.1400

* 4067239 (Tracking ID: 4057110)

SYMPTOM:
ASLAPM support is required on SLES15 SP3.

DESCRIPTION:
SLES15 SP3 is a new release, and hence the APM module
needs to be recompiled with the new kernel.

RESOLUTION:
Compiled the APM module with the new kernel.

Patch ID: VRTSvxvm-8.0.0.1300

* 4065628 (Tracking ID: 4065627)

SYMPTOM:
VxVM modules are not loaded after an OS upgrade followed by a reboot.

DESCRIPTION:
Once the stack installation is completed with configuration, after an OS upgrade the vxvm directory is not created under /lib/modules/<upgraded_kernel>/veritas/.

RESOLUTION:
VxVM code is updated with the required changes.

Patch ID: VRTSodm-8.0.0.3600

* 4189078 (Tracking ID: 4187362)

SYMPTOM:
ODM module failed to load on SLES15-SP6 kernel.

DESCRIPTION:
This issue occurs due to changes in the SLES15-SP6 kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the kernel and load as expected on SLES15-SP6 kernel.

Patch ID: VRTSodm-8.0.0.3100

* 4154894 (Tracking ID: 4144269)

SYMPTOM:
After installing, ODM fails to start.

DESCRIPTION:
Because of the VxFS version update, the ODM module needs to be repackaged due to an
internal dependency on the VxFS version.

RESOLUTION:
As part of this fix, the ODM module has been repackaged to support the updated
VxFS version.

Patch ID: VRTSodm-8.0.0.2900

* 4057432 (Tracking ID: 4056673)

SYMPTOM:
Rebooting the system results in the system booting into emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through a file lock. Corrected the vxgms dependency in the ODM service file.
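The serialization described above can be sketched with flock(1). The lock path and the wrapped command below are illustrative placeholders, not the exact fix shipped in the service scripts:

```shell
# Run a command under an exclusive file lock so that parallel callers
# (e.g. depmod invoked from several package scriptlets at once) execute
# one at a time instead of interleaving and corrupting the module
# dependency files.
LOCK=$(mktemp)    # illustrative lock path; the real fix uses its own file

run_serialized() {
    # flock(1) blocks until the lock is free, then runs the command.
    flock "$LOCK" -c "$1"
}

run_serialized "echo depmod-would-run-here"
```

The key property is that every caller funnels through the same lock file, so two concurrent package installations cannot both rewrite modules.dep at the same time.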

* 4119105 (Tracking ID: 4119104)

SYMPTOM:
ODM module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSodm-8.0.0.2600

* 4111349 (Tracking ID: 4092232)

SYMPTOM:
The ODM module fails to load on SLES15 SP4.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP4 kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP4.

Patch ID: VRTSodm-8.0.0.2500

* 4114322 (Tracking ID: 4114321)

SYMPTOM:
VRTSodm driver will not load with VRTSvxfs patch.

DESCRIPTION:
VRTSodm needs to be recompiled with the latest VRTSvxfs.

RESOLUTION:
Recompiled VRTSodm with the new VRTSvxfs.

Patch ID: VRTSodm-8.0.0.1800

* 4089136 (Tracking ID: 4089135)

SYMPTOM:
VRTSodm driver does not load with VRTSvxfs patch.

DESCRIPTION:
VRTSodm needs to be recompiled with the latest VRTSvxfs.

RESOLUTION:
Recompiled VRTSodm with the new VRTSvxfs.

Patch ID: VRTSodm-8.0.0.1200

* 4065680 (Tracking ID: 4056799)

SYMPTOM:
The ODM module fails to load on SLES15 SP3.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP3 kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP3.

Patch ID: VRTSgab-8.0.0.3200

* 4187371 (Tracking ID: 4180026)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 5(RHEL9.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 5(RHEL9.5) is now introduced.

* 4188700 (Tracking ID: 4188701)

SYMPTOM:
Veritas Infoscale Availability does not support SLES15SP6.

DESCRIPTION:
Veritas Infoscale Availability does not support SLES15SP6.

RESOLUTION:
Veritas Infoscale Availability support for SLES15SP6 is now introduced.

Patch ID: VRTSgab-8.0.0.2500

* 4124420 (Tracking ID: 4124417)

SYMPTOM:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.

DESCRIPTION:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel as SLES community maintains Azure kernel separately.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP4 for Azure kernel is now introduced.

Patch ID: VRTSgab-8.0.0.2300

* 4111469 (Tracking ID: 4090439)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).

DESCRIPTION:
Veritas Infoscale Availability does not support SUSE Linux Enterprise Server
versions released after SLES 15 SP3.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP4 is
now introduced.

* 4111618 (Tracking ID: 4106321)

SYMPTOM:
After HAD is stopped on an SFCFSHA stack running a SLES15 SP4 minor kernel (kernel version > 5.14.21-150400.24.28), a panic is observed. An assert is hit when hastop is called on the SLES15 SP4 kernel.

DESCRIPTION:
Kernel vendors set the TIF_NOTIFY_SIGNAL flag to break out of wait loops even though there is no pending signal for the thread, so signal_pending() always returns true. This is a false alarm for GAB, which assumes that a signal was delivered to the waiting thread, causing a crash.

RESOLUTION:
To avoid this false alarm from the signal_pending() API, GAB now clears the TIF_NOTIFY_SIGNAL flag for the waiting thread context.

Patch ID: VRTSgab-8.0.0.1800

* 4089723 (Tracking ID: 4089722)

SYMPTOM:
The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on the RHEL and SLES platforms.

DESCRIPTION:
VRTSgab, VRTSamf, and VRTSdbed need to be recompiled with the latest changes.

RESOLUTION:
Recompiled VRTSgab, VRTSamf, and VRTSdbed.

Patch ID: VRTSgab-8.0.0.1300

* 4067091 (Tracking ID: 4056991)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 3 (SLES 15 SP3).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 15 SP2.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP3 is
now introduced.

Patch ID: VRTScavf-8.0.0.3600

* 4162960 (Tracking ID: 4153873)

SYMPTOM:
CVM master reboot resulted in volumes being disabled on the slave node.

DESCRIPTION:
The InfoScale stack exhibits unpredictable behavior during reboots: sometimes the node hangs while coming online, the working node goes into the faulted state, or CVM does not start on the rebooted node.

RESOLUTION:
A mechanism has been added to make decisions about deporting the disk group, and the code has been integrated with the offline routine.

Patch ID: VRTScavf-8.0.0.2800

* 4118779 (Tracking ID: 4074274)

SYMPTOM:
DR test and failover activity might not succeed for hardware-replicated disk groups.

DESCRIPTION:
In case of hardware-replicated disks like EMC SRDF, failover of disk groups might not automatically succeed and a manual intervention might be needed. After failover, disks at the new primary site have the 'udid_mismatch' flag which needs to be updated manually for a successful failover.

RESOLUTION:
For DMP environments, the VxVM & DMP extended attributes need to be refreshed by using 'vxdisk scandisks' prior to import. VxVM has also provided a new vxdg import option '-o usereplicatedev=only' with DMP. This option selects only the hardware-replicated disks during LUN selection process.

Sample main.cf configuration for DMP-managed hardware-replicated disks:
CVMVolDg srdf_dg (
    CVMDiskGroup = LINUXSRDF
    CVMVolume = { scott, scripts }
    CVMActivation = sw
    CVMDeportOnOffline = 1
    ClearClone = 1
    ScanDisks = 1
    DGOptions = { "-t -o usereplicatedev=only" }
    )
All four options (CVMDeportOnOffline, ClearClone, ScanDisks, and DGOptions) are recommended for hardware-replicated disk groups.

Patch ID: VRTScavf-8.0.0.2400

* 4112609 (Tracking ID: 4079285)

SYMPTOM:
The CVMVolDg resource takes many minutes to come online with CPS fencing.

DESCRIPTION:
When fencing is configured as CP server-based rather than disk-based SCSI-3 PR, disk groups are still imported with SCSI-3 reservations. This causes SCSI-3 PR errors during import, and the import takes a long time due to retries.

RESOLUTION:
Code changes have been made to import disk groups without SCSI-3 reservations when SCSI-3 PR is disabled.

* 4112708 (Tracking ID: 4054462)

SYMPTOM:
In a hardware replication environment, a shared disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.

DESCRIPTION:
After the VCS hardware replication agent resource fails over control to the secondary site, the CVMVolDg agent does not rescan all the required device paths in case of a multi-pathing configuration. The vxdg import operation fails, because the hardware device characteristics for all the paths are not refreshed.

RESOLUTION:
This hotfix addresses the issue by providing two new resource-level attributes for the CVMVolDg agent.
- The ScanDisks attribute specifies whether to perform a selective device scan for all the disk paths that are associated with a VxVM disk group. When ScanDisks is set to 1, the agent performs a selective device scan. Before attempting to import a hardware clone or a hardware replicated device, the VxVM and the DMP attributes of a disk are refreshed. ScanDisks is set to 0 by default, which indicates that a selective device scan is not performed. However, even when ScanDisks is set to 0, if the disk group fails during the first import attempt, the agent checks the error string. If the string contains the text HARDWARE_MIRROR, the agent performs a selective device scan to increase the chances of a successful import.
- The DGOptions attribute specifies options to be used with the vxdg import command that is executed by the agent to bring the CVMVolDg resource online.
Sample resource configuration for hardware replicated shared disk groups:
CVMVolDg tc_dg (
    CVMDiskGroup = datadg
    CVMVolume = { vol01 }
    CVMActivation = sw
    CVMDeportOnOffline = 1
    ClearClone = 1
    ScanDisks = 1
    DGOptions = "-t -o usereplicatedev=on"
    )
NOTE: The new "-o usereplicatedev=on" vxdg option is provided with VxVM hot-fixes from 7.4.1.x onwards.

Patch ID: VRTSglm-8.0.0.3600

* 4189080 (Tracking ID: 4184620)

SYMPTOM:
GLM module failed to load on SLES15-SP6 kernel.

DESCRIPTION:
This issue occurs due to changes in the SLES15-SP6 kernel.

RESOLUTION:
GLM module is updated to accommodate the changes in the kernel and load as expected on SLES15-SP6 kernel.

Patch ID: VRTSglm-8.0.0.2800

* 4119113 (Tracking ID: 4119112)

SYMPTOM:
GLM module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
GLM module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSglm-8.0.0.2400

* 4087258 (Tracking ID: 4087259)

SYMPTOM:
While upgrading the CFS protocol from 90 to 135 (the latest), the system may panic with the following stack trace:

schedule()
vxg_svar_sleep_unlock() 
vxg_create_kthread()
vxg_startthread()
vxg_thread_create()
vxg_leave_local_scopes()
vxg_recv_restart_reply()
vxg_recovery_helper()
vxg_kthread_init()
kthread()

DESCRIPTION:
In GLM (Group Lock Manager), while upgrading the GLM protocol version from 90 to 135 (the latest), GLM needs to process structures for the local-scope functionality. GLM
creates child threads to do this processing. The child threads were created while holding a spin lock, which caused this issue.

RESOLUTION:
The code has been changed to create the child threads for processing local-scope structures after releasing the spin lock.

* 4111341 (Tracking ID: 4092225)

SYMPTOM:
The GLM module fails to load on SLES15 SP4.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP4 kernel.

RESOLUTION:
GLM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP4.

Patch ID: VRTSglm-8.0.0.1800

* 4089163 (Tracking ID: 4089162)

SYMPTOM:
The GLM module fails to load on SLES and RHEL.

DESCRIPTION:
The GLM module fails to load on SLES and RHEL.

RESOLUTION:
GLM module is updated to load as expected on SLES and RHEL.

Patch ID: VRTSglm-8.0.0.1200

* 4065685 (Tracking ID: 4056801)

SYMPTOM:
The GLM module fails to load on SLES15 SP3.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP3 kernel.

RESOLUTION:
GLM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP3.

Patch ID: VRTSspt-8.0.0.1700

* 4117339 (Tracking ID: 4132683)

SYMPTOM:
A tool is needed to collect InfoScale product-specific information from both standalone nodes and clustered environments.

DESCRIPTION:
There is no single tool that provides a consolidated report of all the InfoScale services/VCS information from the different nodes of a cluster or from a standalone server; multiple commands must be executed separately to gather the information.

RESOLUTION:
A new tool has been added for collecting various product statistics and service information; it works across clustered and standalone environments. Password-less SSH must be configured between the node on which the script runs and the nodes provided.

* 4124177 (Tracking ID: 4125116)

SYMPTOM:
Logs collected from the different VRTSspt tools are stored at different locations.

DESCRIPTION:
Currently, FirstLook generates logs in the current working directory, CfsGather in /opt/VRTSspt/FirstLook, vxgetcore in /var/tmp, and Collect_memstats_linux in /tmp. The log location for the VRTSspt tools is therefore consolidated to /var/log/VRTSspt.

RESOLUTION:
Logs collected by FirstLook, CfsGather, vxgetcore, and Collect_memstats_linux are now stored at /var/log/VRTSspt.

* 4129889 (Tracking ID: 4114988)

SYMPTOM:
No progress status is reported by the long-running metasave utility.

DESCRIPTION:
For large filesystems, collecting metadata through metasave utility might take a long time. Currently, it is a silent command that does not show the progress status of the collection being done during command execution.

RESOLUTION:
Code changes have been made to show progress status while metadata collection is in progress. The utility also estimates the total space required for the metadata up front; if the required space is not available at the given location, the metasave command fails with ENOSPC.
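The up-front space check can be sketched as below. The function name, target path, and required-size value are illustrative placeholders, not metasave's actual accounting:

```shell
# Illustrative pre-flight check: fail with the ENOSPC errno value (28)
# when the target filesystem lacks the estimated space for the metadata.
check_space() {
    target="$1"       # directory where the metasave image will be written
    required_kb="$2"  # estimated metadata size in KB (placeholder value)
    avail_kb=$(df -Pk "$target" | awk 'NR==2 {print $4}')
    if [ "$avail_kb" -lt "$required_kb" ]; then
        echo "metasave: insufficient space on $target" >&2
        return 28     # mirrors the ENOSPC failure described above
    fi
    echo "space ok: ${avail_kb}KB available, ${required_kb}KB required"
}

check_space /var/tmp 1024
```

Failing early like this avoids partially written metasave images that would otherwise be discovered only after a long collection run.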

* 4139975 (Tracking ID: 4149462)

SYMPTOM:
A new script, list_missing_incidents.py, is provided; it compares rpm changelogs and lists incidents missing from the new version.

DESCRIPTION:
list_missing_incidents.py compares the changelog of the old-version rpm with that of the new-version rpm and lists any incidents missing from the new-version rpm. For details of the
script, refer to README.list_missing_incidents in the VRTSspt package.

RESOLUTION:
list_missing_incidents.py compares the changelog of the old-version rpm with that of the new-version rpm and lists any incidents missing from the new-version rpm. For details of the
script, refer to README.list_missing_incidents in the VRTSspt package.

* 4146957 (Tracking ID: 4149448)

SYMPTOM:
A new script, check_incident_inchangelog.py, is provided; it checks whether an incident abstract is present in the changelog.

DESCRIPTION:
If a changelog is present in the rpm or the installed package, the script provided in VRTSspt can check whether an incident abstract is present in it. For details of the script, refer to README.check_incident_inchangelog in the VRTSspt package.

RESOLUTION:
If a changelog is present in the rpm or the installed package, the script provided in VRTSspt can check whether an incident abstract is present in it. For details of the script, refer to README.check_incident_inchangelog in the VRTSspt package.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sles15_x86_64-Patch-8.0.0.3600.tar.gz to /tmp
2. Untar infoscale-sles15_x86_64-Patch-8.0.0.3600.tar.gz to /tmp/patch
    # mkdir /tmp/patch
    # cd /tmp/patch
    # gunzip /tmp/infoscale-sles15_x86_64-Patch-8.0.0.3600.tar.gz
    # tar xf /tmp/infoscale-sles15_x86_64-Patch-8.0.0.3600.tar
3. Install the patch (note that the installation of this P-Patch causes downtime):
    # cd /tmp/patch
    # ./installVRTSinfoscale800P3600 [<host1> <host2>...]

You can also install this patch together with the 8.0 base release using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 8.0 directory and invoke the installer script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE


Applies to the following product releases

Update files

File name Description Version Platform Size