
RHEL8.4 Platform support on InfoScale 7.4.1

Patch

Abstract

RHEL8.4 Platform Support for InfoScale 7.4.1

Description

SORT ID: 16965

 

Fixes the following incidents:

4016080,4016081,4018971,4018972,4020914,4022224,4022225,4037954,3976693,4004182,4004927,4014718,4015824,4016077,
4016082,4016407,3983165,3998797,4001941,4001942,4002742,4002986,4005377,4006832,4010354,4011363,3986794,3989317,
3995201,3997065,3998169,3998394,4001379,3989413,4013643,4023762,4031342,4033162,4033163,4033172,4033173,4033216,
4033515,4035313,4036426,4037331,4037810,3984155,4016283,4016291,4016768,4017194,4017502,4019781,3984163,4010517,
4010996,4011027,4011097,4011105,3992902,3997906,4000388,4001399,4001736,4001745,4001746,4001748,4001750,4001752,
4001755,4001757,3984139,3984731,3988238,3988843,4037952,4019681,4002155,3990021,4026815,3995684,4012318,4021371,
4028124,3984343,4006950,4009762,4016488,4016625,4028780,4037951,4000746,4019680,4037950,4019003,4019679,4002154,
3990020,4037949,4016486,4016487,4019677,4002152,3990018,4022791,4029112,4037049,3999398,4002584,4003442,4010546,
4016483,4019676,4012089,4002151,3990017,4037955,4018202,4011973,4001381,3989416,4037956,4014715,4016408,3999030,
4001383

 

Patch ID:

VRTSamf-7.4.1.2900-RHEL8 for VRTSamf
VRTSaslapm-7.4.1.2900-RHEL8 for VRTSaslapm
VRTSdbac-7.4.1.2900-RHEL8 for VRTSdbac
VRTSgab-7.4.1.2900-RHEL8 for VRTSgab
VRTSglm-7.4.1.2900-RHEL8 for VRTSglm
VRTSllt-7.4.1.2900-RHEL8 for VRTSllt
VRTSodm-7.4.1.2900-RHEL8 for VRTSodm
VRTSvcs-7.4.1.2900-RHEL8 for VRTSvcs
VRTSvcsag-7.4.1.2900-RHEL8 for VRTSvcsag
VRTSvxfen-7.4.1.2900-RHEL8 for VRTSvxfen
VRTSvxfs-7.4.1.2900-RHEL8 for VRTSvxfs
VRTSvxvm-7.4.1.2900-RHEL8 for VRTSvxvm

                          * * * READ ME * * *
                      * * * InfoScale 7.4.1 * * *
                         * * * Patch 2800 * * *
                         Patch Date: 2021-05-18


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * SUMMARY OF KNOWN ISSUES
   * DETAILS OF KNOWN ISSUES
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.4.1 Patch 2800


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL8 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSdbac
VRTSgab
VRTSglm
VRTSllt
VRTSodm
VRTSvcs
VRTSvcsag
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4.1
   * InfoScale Enterprise 7.4.1
   * InfoScale Foundation 7.4.1
   * InfoScale Storage 7.4.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxfs-7.4.1.2900
* 4016080 (4008520) CFS hang in vx_searchau().
* 4016081 (4009728) Panic while trying to add a hard link.
* 4018971 (4012544) Unmounting a bind-mounted filesystem that is bound to a mount-locked VxFS filesystem fails with an error.
* 4018972 (4017303) A bind mount of a source directory on VxFS fails to unmount after the source directory is removed. The following error is displayed.
umount.vxfs: INFO: V-3-28386:  /mnt2: is not a VxFS filesystem.
* 4020914 (4020758) A filesystem mount or fsck with -y may hang during log replay.
* 4022224 (4012049) Documented the "metasave" option and added one new option to the fsck binary.
* 4022225 (4012049) Man page changes to expose "metasave" and "target" options.
* 4037954 (4037420) VxFS module failed to load on RHEL8.4
Patch ID: VRTSvxfs-7.4.1.2800
* 3976693 (4016085) The fsdb command "xxxiau" refers to the wrong device when dumping information.
* 4004182 (4004181) Read the value of VxFS compliance clock
* 4004927 (3983350) Secondary may falsely assume that the ilist extent is pushed and do the allocation, even if the actual push transaction failed on primary.
* 4014718 (4011596) man page changes for glmdump
* 4015824 (4015278) System panics during vx_uiomove_by_hand.
* 4016077 (4009328) In a cluster filesystem, an unmount hang could be observed if the smap was previously marked bad.
* 4016082 (4000465) The FSCK binary loops when it detects a break in the sequence of log IDs.
* 4016407 (4018197) VxFS module failed to load on RHEL8.3
Patch ID: VRTSvxfs-7.4.1.2600
* 3983165 (3975019) Under IO load with NFS v4 using NFS leases, the server may panic.
* 3998797 (3998162) If the file system is disabled during iau allocation, log replay may fail for this intermediate state.
* 4001941 (3998931) Mount fails for the target filesystem while doing migration from ext4 to vxfs.
* 4001942 (3992665) Panic occurs at vx_mig_linux_getxattr_int/vx_mig_linux_setxattr_int  due to "kernel NULL pointer dereference".
* 4002742 (4002740) Dalloc tunable gets enabled on CFS secondary.
* 4002986 (3994123) Running fsck on a system may show LCT count mismatch errors
* 4005377 (3982291) The ncheck command fails to allocate memory.
* 4006832 (3987533) Mount fails because of an incorrect check in the Solaris environment.
* 4010354 (3993935) The vxfs fsck command may hit a segmentation fault.
* 4011363 (4003395) The system panicked while upgrading InfoScale 6.2.1 to InfoScale 7.4.2 using a phased upgrade.
Patch ID: VRTSvxfs-7.4.1.2200
* 3986794 (3992718) On CFS, a read on a clone's overlay attribute inode results in an error if the clone is marked for removal and is the last clone being removed, during attribute inode removal in the inode inactivation process.
* 3989317 (3989303) During a reconfiguration, a hang is seen when fsck dumps core and the coredump utility gets stuck in vx_statvfs() on RHEL8 and SLES15, where the OS systemd coredump utility calls vx_statvfs(). This blocks the recovery thread on the FS.
* 3995201 (3990257) VxFS may face buffer overflow in case of doing I/O on File Change Log (FCL) file through Userspace Input Output (UIO) interface
* 3997065 (3996947) FSCK operation may behave incorrectly or hang
* 3998169 (3998168) vxresize operations result in a system freeze for 8-10 minutes, causing application hangs and VCS timeouts.
* 3998394 (3983958) Code changes have been done to return proper error code while performing open/read/write operations on a removed checkpoint.
* 4001379 (4001378) VxFS module failed to load on RHEL8.2
Patch ID: VRTSvxfs-7.4.1.1600
* 3989413 (3989412) VxFS module failed to load on RHEL8.1
Patch ID: VRTSvxvm-7.4.1.2900
* 4013643 (4010207) System panicked due to a hard lockup caused by a spinlock not being released properly during vxstat collection.
* 4023762 (4020046) DRL log plex gets detached unexpectedly.
* 4031342 (4031452) vxesd core dump in esd_write_fc()
* 4033162 (3968279) vxconfigd dumps core on an NVME disk setup.
* 4033163 (3959716) System may panic with sync replication in a VVR configuration, when the RVG is in DCM mode.
* 4033172 (3994368) A vxconfigd daemon abort causes an I/O write error.
* 4033173 (4021301) Data corruption issue observed in VxVM on RHEL8.
* 4033216 (3993050) vxdctl dumpmsg command gets stuck on large node cluster
* 4033515 (3984266) The DCM flag on the RVG volume may get deactivated after a master switch, which may cause excessive RVG recovery after subsequent node reboots.
* 4035313 (4037915) Compilation errors when building VxVM 7.4.1 for RHEL 8.4.
* 4036426 (4036423) Race condition while reading config file in docker volume plugin caused the issue in Flex Appliance.
* 4037331 (4037914) BUG: unable to handle kernel NULL pointer dereference
* 4037810 (3977101) Hitting core in write_sol_part()
Patch ID: VRTSvxvm-7.4.1.2800
* 3984155 (3976678) vxvm-recover:  cat: write error: Broken pipe error encountered in syslog.
* 4016283 (3973202) A VVR primary node may panic due to accessing already freed memory.
* 4016291 (4002066) Panic and Hang seen in reclaim
* 4016768 (3989161) A system panic occurs while handling log requests from vxloggerd.
* 4017194 (4012681) If vradmind process terminates due to some reason, it is not properly restarted by RVG agent of VCS.
* 4017502 (4020166) Vxvm Support on RHEL8 Update3
* 4019781 (4020260) Failed to activate/set tunable dmp native support for Centos 8
Patch ID: VRTSvxvm-7.4.1.2700
* 3984163 (3978216) 'Device mismatch warning' seen on boot when DMP native support is enabled with LVM snapshot of root disk present
* 4010517 (3998475) Unmapped PHYS read I/O split across stripes gives incorrect data leading to data corruption.
* 4010996 (4010040) Configuring VRTSvxvm package creates a world writable file: /etc/vx/.vxvvrstatd.lock
* 4011027 (4009107) CA chain certificate verification fails in SSL context.
* 4011097 (4010794) Veritas Dynamic Multi-Pathing (DMP) caused system panic in a cluster while there were storage activities going on.
* 4011105 (3972433) IO hang might be seen while issuing heavy IO load on volumes having cache objects.
Patch ID: VRTSvxvm-7.4.1.2200
* 3992902 (3975667) Softlock in vol_ioship_sender kernel thread
* 3997906 (3987937) VxVM command hang may happen when snapshot volume is configured.
* 4000388 (4000387) VxVM support on RHEL 8.2
* 4001399 (3995946) CVM Slave unable to join cluster - VxVM vxconfigd ERROR V-5-1-11092 cleanup_client: (Memory allocation failure) 12
* 4001736 (4000130) System panic when DMP co-exists with EMC PP on rhel8/sles12sp4.
* 4001745 (3992053) Data corruption may happen with layered volumes due to some data not re-synced while attaching a plex.
* 4001746 (3999520) vxconfigd may hang waiting for dmp_reconfig_write_lock when the DMP iostat tunable is disabled.
* 4001748 (3991580) Deadlock may happen if IO performed on both source and snapshot volumes.
* 4001750 (3976392) Memory corruption might happen in VxVM (Veritas Volume Manager) while processing Plex detach request.
* 4001752 (3969487) Data corruption observed with layered volumes when mirror of the volume is detached and attached back.
* 4001755 (3980684) Kernel panic in voldrl_hfind_an_instant while accessing agenode.
* 4001757 (3969387) VxVM(Veritas Volume Manager) caused system panic when handle received request response in FSS environment.
Patch ID: VRTSvxvm-7.4.1.1600
* 3984139 (3965962) No option to disable auto-recovery when a slave node joins the CVM cluster.
* 3984731 (3984730) VxVM logs warning messages when the VxDMP module is stopped or removed for the first time after the system is rebooted
* 3988238 (3988578) Encrypted volume creation fails on RHEL 8
* 3988843 (3989796) RHEL 8.1 support for VxVM
Patch ID: VRTSdbac-7.4.1.2900
* 4037952 (4037048) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).
Patch ID: VRTSdbac-7.4.1.2800
* 4019681 (4019674) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).
Patch ID: VRTSdbac-7.4.1.2100
* 4002155 (4002150) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).
Patch ID: VRTSdbac-7.4.1.1600
* 3990021 (3990016) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 
Update 1(RHEL8.1).
Patch ID: VRTSvcs-7.4.1.2900
* 4026815 (4026819) When IPv6 is disabled, non-root guest users cannot run HAD CLI commands.
Patch ID: VRTSvcs-7.4.1.2800
* 3995684 (3995685) Discrepancy in engine log messages of PR and DR site in GCO configuration.
* 4012318 (4012518) The gcoconfig command does not accept "." in the interface name.
Patch ID: VRTSvcsag-7.4.1.2900
* 4021371 (4021370) The AWSIP and EBSVol resources fail to come online when IMDSv2 is used for requesting instance metadata.
* 4028124 (4027915) Processes configured for HA using the ProcessOnOnly agent get killed during shutdown or reboot, even if they are still in use.
Patch ID: VRTSvcsag-7.4.1.2800
* 3984343 (3982300) A warning message related to the process priority is logged in the ProcessOnOnly agent log every minute.
* 4006950 (4006979) When the AzureDisk resource comes online on a cluster node, it goes into the UNKNOWN state on all the other nodes.
* 4009762 (4009761) A lower NFSRestart resource fails to come online within the duration specified in OnlineTimeout when the share directory for NFSv4 lock state information contains millions of small files.
* 4016488 (4007764) The NFS locks related log file is flooded with the "sync_dir:copy failed for link" error messages.
* 4016625 (4016624) When a disk group is forcibly imported with ClearClone enabled, different DGIDs are assigned to the associated disks.
Patch ID: VRTSvxfen-7.4.1.2900
* 4028780 (4029261) An entire InfoScale cluster may go down unexpectedly if one of its nodes receives a RECONFIG message during a shutdown or a restart operation.
* 4037951 (4037048) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).
Patch ID: VRTSvxfen-7.4.1.2800
* 4000746 (4000745) The VxFEN process fails to start due to late discovery of the VxFEN disk group.
* 4019680 (4019674) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).
Patch ID: VRTSamf-7.4.1.2900
* 4037950 (4037048) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).
Patch ID: VRTSamf-7.4.1.2800
* 4019003 (4018791) A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.
* 4019679 (4019674) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).
Patch ID: VRTSamf-7.4.1.2100
* 4002154 (4002150) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).
Patch ID: VRTSamf-7.4.1.1600
* 3990020 (3990016) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 
Update 1(RHEL8.1).
Patch ID: VRTSgab-7.4.1.2900
* 4037949 (4037048) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).
Patch ID: VRTSgab-7.4.1.2800
* 4016486 (4011683) The GAB module failed to start and the system log messages indicate failures with the mknod command.
* 4016487 (4007726) When a GAB message that is longer than the value specified by GAB_MAX_MSGSIZE is transferred, an error message is added to the VCS logs. However, the error message is not sufficiently descriptive.
* 4019677 (4019674) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).
Patch ID: VRTSgab-7.4.1.2100
* 4002152 (4002150) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).
Patch ID: VRTSgab-7.4.1.1600
* 3990018 (3990016) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 
Update 1(RHEL8.1).
Patch ID: VRTSllt-7.4.1.2900
* 4022791 (4022792) A cluster node panics during an FSS I/O transfer over LLT.
* 4029112 (4029253) LLT may not reuse the buffer slots on which NAK is received from the earlier RDMA writes.
* 4037049 (4037048) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).
Patch ID: VRTSllt-7.4.1.2800
* 3999398 (3989440) The dash (-) in the device name may cause the LLT link configuration to fail.
* 4002584 (3994996) Added the -H miscellaneous flag to add new functionalities in lltconfig, and a tunable to allow skb allocation with the SLEEP flag.
* 4003442 (3983418) In a rare case, after a panic or a reboot of a node, it may fail to join the CVM master due to an inconsistent LLT port state on the master.
* 4010546 (4018581) The LLT module fails to start and the system log messages indicate missing IP address.
* 4016483 (4016484) The vxexplorer utility panics the node on which it runs if the LLT version on the node is llt-rhel8_x86_64-Patch-7.4.1.2100 or later.
* 4019676 (4019674) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).
Patch ID: VRTSllt-7.4.1.2200
* 4012089 (4002150) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).
Patch ID: VRTSllt-7.4.1.2100
* 4002151 (4002150) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).
Patch ID: VRTSllt-7.4.1.1600
* 3990017 (3990016) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 
Update 1(RHEL8.1).
Patch ID: VRTSodm-7.4.1.2900
* 4037955 (4037575) ODM module failed to load on RHEL8.4
Patch ID: VRTSodm-7.4.1.2800
* 4018202 (4018200) ODM module failed to load on RHEL8.3
Patch ID: VRTSodm-7.4.1.2600
* 4011973 (4012094) VRTSodm driver will not load with 7.4.1.2600 VRTSvxfs patch.
Patch ID: VRTSodm-7.4.1.2100
* 4001381 (4001380) ODM module failed to load on RHEL8.2
Patch ID: VRTSodm-7.4.1.1600
* 3989416 (3989415) ODM module failed to load on RHEL8.1
Patch ID: VRTSglm-7.4.1.2900
* 4037956 (4037621) GLM module failed to load on RHEL8.4
Patch ID: VRTSglm-7.4.1.2800
* 4014715 (4011596) man page changes for glmdump
* 4016408 (4018213) GLM module failed to load on RHEL8.3
Patch ID: VRTSglm-7.4.1.1700
* 3999030 (3999029) GLM module failed to unload because of VCS service hold.
* 4001383 (4001382) GLM module failed to load on RHEL8.2


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxfs-7.4.1.2900

* 4016080 (Tracking ID: 4008520)

SYMPTOM:
CFS hang in vx_searchau().

DESCRIPTION:
As part of the SMAP transaction changes, the allocator changed its logic to always call mdele_tryhold when getting the emap for a particular EAU, passing nogetdele as 1, which tells mdele_tryhold not to ask for delegation when it detects a free EAU without delegation. In this case the allocator finds such an EAU in the device summary tree, but without delegation, and keeps retrying without ever asking for delegation, hence the hang lasts forever.

RESOLUTION:
Rectify the culprit EAU summary and its parent in case lost delegation EAU is found.

* 4016081 (Tracking ID: 4009728)

SYMPTOM:
System can panic while trying to create a hard link.

DESCRIPTION:
There is a buffer overflow while trying to remove an indirect attribute inode. Indirect attribute removal also happens while adding a hard link, in order to update the buffer with the updated inode entry. This buffer overflow can also cause memory corruption.

RESOLUTION:
The code is modified to check the length of the buffer to avoid the overflow.

* 4018971 (Tracking ID: 4012544)

SYMPTOM:
Unmounting of bind mounted filesystem which is bound to VxFS mount locked filesystem fails with following error.
UX:vxfs vxumount: ERROR: V-3-26388: file system /root/access_data/fs1 has been mount locked

DESCRIPTION:
A bind mount operation executed on a VxFS mount point uses all the existing mount options of the VxFS mount point for the new mount point.
When an unmount operation is executed on a bind-mounted file system, the VxFS umount command first checks whether the particular mount point is mount locked, using the mount options themselves. If the mntlock option exists and the umount command is not given a mntlockid, the unmount operation fails for the bind mount point. If a mntlockid is provided during the umount operation, the operation fails while turning off the mntlock option, as the mntlockid is not associated with the bind mount point.

RESOLUTION:
The VxFS umount command now checks whether an actual mount lock exists for the particular mount point before failing the umount operation.

* 4018972 (Tracking ID: 4017303)

SYMPTOM:
Bind mount of source directory on VxFS fails to unmount, after source directory is removed. Following error is displayed.
umount.vxfs: INFO: V-3-28386:  /mnt2: is not a VxFS filesystem.

DESCRIPTION:
The bind mount has vxfs in its mount options; hence the umount command internally calls the umount.vxfs command. umount.vxfs reports an error that the mount point is not a VxFS file system.

RESOLUTION:
The umount.vxfs command is modified to propagate the bind unmount request to the kernel through the umount2 system call in case the source directory has been removed.
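
For illustration, a minimal user-space sketch (not the VxFS source) of forwarding a bind unmount to the kernel through the umount2 system call, as the fixed umount.vxfs does:

    #include <stdio.h>
    #include <sys/mount.h>   /* umount2() */

    /* Hedged sketch: once the helper detects that the source directory
     * is gone, forward the unmount request straight to the kernel
     * instead of failing the VxFS-specific checks. */
    int bind_umount(const char *mnt)
    {
        if (umount2(mnt, 0) != 0) {
            perror("umount2");
            return -1;
        }
        return 0;
    }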

* 4020914 (Tracking ID: 4020758)

SYMPTOM:
Filesystem mount or fsck with -y may see hang during log replay

DESCRIPTION:
The fsck utility is used to perform the log replay. This log replay is performed during the mount operation or during a filesystem check with the -y option, if needed. In certain cases, if there are a lot of logs that need to be replayed, it ends up consuming the entire buffer cache. This results in an out-of-buffer scenario and a hang.

RESOLUTION:
Code is modified to make sure enough buffers are always available.

* 4022224 (Tracking ID: 4012049)

SYMPTOM:
"fsck" supports the "metasave" option but it was not documented anywhere.

DESCRIPTION:
"fsck" supports the "metasave" option while executing with the "-y" option, but it is not documented anywhere. Also, it tries to store the metasave in a particular location, and the user has no option to specify a different location. If that location does not have enough space, "fsck" fails to take the metasave and continues changing the filesystem state.

RESOLUTION:
Code changes have been done to add a new option with which the user can specify the location to store the metasave. The "metasave" and "target" options have been added to the "usage" message of the "fsck" binary.

* 4022225 (Tracking ID: 4012049)

SYMPTOM:
The fsck_vxfs(1m) man page doesn't contain the information related to the "metasave" and "target" options.

DESCRIPTION:
The fsck_vxfs(1m) man page doesn't contain the information related to the "metasave" and "target" options.

RESOLUTION:
Man page changes have been done to document the two options: 1. metasave 2. target

* 4037954 (Tracking ID: 4037420)

SYMPTOM:
VxFS module failed to load on RHEL8.4

DESCRIPTION:
RHEL8.4 is a new release and it has some kernel changes which caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL8.4.

Patch ID: VRTSvxfs-7.4.1.2800

* 3976693 (Tracking ID: 4016085)

SYMPTOM:
The fsdb command shows garbage values instead of the correct information.

DESCRIPTION:
Information for device 0 is dumped while information for another device (for example, device 1 or 2) was requested.

RESOLUTION:
Updated curpos pointer to point to correct device as needed.

* 4004182 (Tracking ID: 4004181)

SYMPTOM:
VxFS internally maintains a compliance clock; without an API, the user cannot read its value.

DESCRIPTION:
VxFS internally maintains a compliance clock; without an API, the user cannot read its value.

RESOLUTION:
Provided an API on the mount point to read the compliance clock for that filesystem.

* 4004927 (Tracking ID: 3983350)

SYMPTOM:
Inodes are allocated without pushing the ilist extent

DESCRIPTION:
Multiple messages can be sent from vx_cfs_ilist_push for inodes that are in the same block. On the receiver side, i.e. the primary node, vx_find_iposition() may return bno VX_OVERLAY and btranid 0 until someone actually does the push. All of these are serialized in vx_ilist_pushino() on VX_IRWLOCK_RANGE and VX_CFS_IGLOCK. The first one does the push and sets btranid to the last commit ID. As btranid is non-null, vx_recv_ilist_push_msg() waits for vx_tranidflush() to flush the transaction to disk. The other receiver threads do not do the push and have tranid 0, so they return success without waiting for the transactions to be flushed to disk. Now, if the file system gets disabled while flushing, we end up in an inconsistent state, because some of the inodes have returned success and marked this block as pushed in-core on the secondary.

RESOLUTION:
If the block is pushed or pulled and tranid is 0, look up the ilist extent containing the inode again. This populates the correct tranid from ilptranid, and the thread will wait for the transaction flush.

* 4014718 (Tracking ID: 4011596)

SYMPTOM:
The man page is missing details of a supported feature.

DESCRIPTION:
The new "-h" option of glmdump, which uses the hacli utility for communicating across the nodes in the cluster, needs to be documented in the man page.

RESOLUTION:
Added the details about the feature supported by glmdump to the man page.

* 4015824 (Tracking ID: 4015278)

SYMPTOM:
System panics during vx_uiomove_by_hand.

DESCRIPTION:
During uiomove, VxFS gets the pages from the OS through get_user_pages() to copy user data. Oracle uses hugetlbfs internally for performance reasons, which can allocate hugepages. Under low memory conditions, it is possible that get_user_pages() returns compound pages to VxFS. In the case of compound pages, only the head page has a valid mapping set, and all other pages are mapped as TAIL_MAPPING. During uiomove, if VxFS gets a compound page, it tries to check the writable mapping for all pages of this compound page. This can result in dereferencing an illegal address (TAIL_MAPPING), which caused the panic. VxFS does not support huge pages, but it is possible that a compound page is present on the system and VxFS gets one through get_user_pages().

RESOLUTION:
Code is modified to use the head page when tail pages from a compound page are encountered while VxFS checks the writable mapping.
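
As a rough sketch of the idea (kernel-style code, not the actual VxFS change), resolving the head page first avoids dereferencing the TAIL_MAPPING poison value of a tail page:

    #include <linux/mm.h>

    /* Sketch: pages returned by get_user_pages() may be tail pages of a
     * compound page; only the head page carries a valid ->mapping, so
     * resolve the head page before inspecting the mapping. */
    static struct address_space *safe_page_mapping(struct page *pg)
    {
        return compound_head(pg)->mapping;
    }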

* 4016077 (Tracking ID: 4009328)

SYMPTOM:
In a cluster filesystem, if smap corruption is seen and the smap is marked bad then it could cause hang while unmounting the filesystem.

DESCRIPTION:
While freeing an extent in vx_extfree1(), for logversion >= VX_LOGVERSION13, if we are freeing whole AUs we set the VX_AU_SMAPFREE flag for those AUs. This ensures that revoking the delegation for that AU is delayed until the AU has an SMAP free transaction in progress. This flag gets cleared either in post-commit/undo processing of the transaction or during error handling in vx_extfree1(). In one scenario, when we are trying to free a whole AU whose smap is marked bad, we neither return an error to vx_extfree1() nor add the subfunction to free the extent to the transaction. So the VX_AU_SMAPFREE flag is not cleared and remains set even though there is no SMAP free transaction in progress. This could lead to a hang while unmounting the cluster filesystem.

RESOLUTION:
Code changes have been done to add error handling in vx_extfree1() to clear the VX_AU_SMAPFREE flag in the case where an error is returned due to a bad smap.

* 4016082 (Tracking ID: 4000465)

SYMPTOM:
FSCK binary loops when it detects break in sequence of log ids.

DESCRIPTION:
When the FS is not cleanly unmounted, it ends up with an unflushed intent log. This intent log is flushed either during the next mount or when fsck is run on the FS. To build the list of transactions that need to be replayed, VxFS used a binary search to find the head and tail. But if there is a breakage in the intent log, the current code is susceptible to looping. To avoid this loop, VxFS now uses a sequential search instead of a binary search to find the range.

RESOLUTION:
Code is modified to use a sequential search instead of a binary search to find the replayable transaction range.
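
For illustration only (not the actual fsck source), a sequential scan tolerates a break in the log id sequence where a binary search, which assumes monotonic ids, can loop:

    /* Sketch: walk the intent-log ids sequentially and stop at the first
     * break, yielding the replayable range [0 .. *tail]. */
    static void find_replay_range(const unsigned int *logid, int nents,
                                  int *tail)
    {
        int i;

        *tail = nents - 1;
        for (i = 1; i < nents; i++) {
            if (logid[i] != logid[i - 1] + 1) { /* break in sequence */
                *tail = i - 1;
                break;
            }
        }
    }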

* 4016407 (Tracking ID: 4018197)

SYMPTOM:
VxFS module failed to load on RHEL8.3

DESCRIPTION:
RHEL8.3 is a new release and it has some kernel changes which caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL8.3.

Patch ID: VRTSvxfs-7.4.1.2600

* 3983165 (Tracking ID: 3975019)

SYMPTOM:
Under IO load with NFS v4 using NFS leases, the system may panic with the below message.
Kernel panic - not syncing: GAB: Port h halting system due to client process failure

DESCRIPTION:
NFS v4 uses a lease per file. This delegation can be taken in RD or RW mode and can be released conditionally. For CFS, we release such a delegation from a specific node while the inode is being normalized (i.e. losing ownership). This can race with another setlease operation on the same node and end up in a deadlock on ->i_lock.

RESOLUTION:
Code changes are made to disable lease.

* 3998797 (Tracking ID: 3998162)

SYMPTOM:
Log replay fails for fsck

DESCRIPTION:
It is possible for an IFIAU that the extent allocation was done but the write to the header block failed. In that case, the ieshorten extop is processed and the allocated extents are freed. Log replay did not consider this case and failed because the header does not have a valid magic. So, during log replay, if the iauino file has IESHORTEN set and the au number is equal to the number of aus in the fset, the iau header should have magic, fset and aun either all 0 or all valid values; for any other value, return an error.

RESOLUTION:
Fixed the log replay code.

* 4001941 (Tracking ID: 3998931)

SYMPTOM:
Running the fsmigadm command with the target VxFS filesystem device gives a mount error:

Mouting Target File System /dev/vx/dsk/testdg/testvol
UX:vxfs mount.vxfs: ERROR: V-3-23731: mount failed

DESCRIPTION:
When doing migration from ext4 to VxFS with SELinux enabled, vx_mig_linux_getxattr() gets called in the context of the migration processes (fsmigadm, fsmigbgcp).
But getxattr/setxattr operations invoked through the fsmigadm and fsmigbgcp commands are not served, so vx_mig_linux_getxattr() returns ENOTSUP and the mount fails.

RESOLUTION:
Allow getxattr/setxattr operations during migration in the context of fsmig commands.

* 4001942 (Tracking ID: 3992665)

SYMPTOM:
Panics with the below stack:
vx_mig_linux_getxattr_int
vx_mig_linux_getxattr
vfs_getxattr
getxattr
sys_getxattr
system_call_fastpath

DESCRIPTION:
The panic occurs when vx_mig_linux_setxattr_int() tries to access a vx_inode immediately after fsmigadm start is issued, because the vx_inode it tries to access is yet to be created and assigned to the vnode. The code to access the vx_inode in vx_mig_linux_setxattr_int() was added to check for symbolic links.

RESOLUTION:
In the _xattr call on migration vnode, instead of trying to use vx_inode to identify the file type, get vfs attributes from source dentry and then use vattr->va_type to check the file type.

* 4002742 (Tracking ID: 4002740)

SYMPTOM:
Dalloc tunable gets enabled on CFS secondary without setting it through vxtunefs.

DESCRIPTION:
Due to a race condition between the tunable setting on the primary mount and the mount system call on the secondary, dalloc might remain enabled on the secondary node.

RESOLUTION:
Code changes have been done to fix this issue.

* 4002986 (Tracking ID: 3994123)

SYMPTOM:
Running fsck on a system may show LCT count mismatch errors

DESCRIPTION:
While processing multi-block merged extents in IFIAT inodes, only the first block of the extent may be processed, leaving some references unprocessed. This leads to LCT counts not matching. Resolving the issue requires a full fsck.

RESOLUTION:
Code changes added to process merged multi-block extents in IFIAT inodes correctly.

* 4005377 (Tracking ID: 3982291)

SYMPTOM:
ncheck command fails with error "UX:vxfs ncheck: ERROR: V-3-21531: cannot malloc inode space."

DESCRIPTION:
The length passed to malloc was of type int, and due to the large value of maxinode, the malloc length was greater than INT_MAX. This caused the malloc failure.

RESOLUTION:
Changed the malloc length to type size_t instead of int to fix the integer overflow.
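
A minimal sketch of the overflow (illustrative, not the ncheck source): computing the length in int wraps past INT_MAX for large inode counts, while size_t does not:

    #include <stdlib.h>

    /* Sketch: with "int len", maxinode * recsize can exceed INT_MAX and
     * wrap negative; computing in size_t keeps the malloc() length valid. */
    void *alloc_inode_space(size_t maxinode, size_t recsize)
    {
        size_t len = maxinode * recsize;  /* was: int len = ... */
        return malloc(len);
    }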

* 4006832 (Tracking ID: 3987533)

SYMPTOM:
Mount fails when a "df" thread races with it.

DESCRIPTION:
On Solaris, there is a race between the df thread and the mount thread which can fail the mount with the below error:

UX:vxfs mount: ERROR: V-3-21264: /dev/vx/dsk/testdg/vol1 is already mounted, /busymnt is busy,
                 or the allowable number of mount points has been exceeded.

On Solaris, the df thread increases the v_count on the vnode of the mount directory. During mount, VxFS checks that v_count is 1, to avoid a nested mount. But if the df thread races with mount, v_count can become 2, which fails the mount process. By checking the VROOT flag instead, VxFS can still avoid nested mounts.

RESOLUTION:
Code is modified to check only the VROOT flag for the nested mount condition on Solaris.

* 4010354 (Tracking ID: 3993935)

SYMPTOM:
The vxfs fsck command may hit a segmentation fault with the following stack.
#0  get_dotdotlst ()
#1  find_dotino ()
#2  dir_sanity ()
#3  pass2 ()
#4  iproc_do_work ()
#5  start_thread ()
#6  sysctl ()

DESCRIPTION:
TURNON_CHUNK() and TURNOFF_CHUNK() were modifying the values of their arguments.

RESOLUTION:
Code has been modified to fix the issue.
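
For illustration only (the actual TURNON_CHUNK()/TURNOFF_CHUNK() macros are not shown in this document), a macro that evaluates and modifies its argument corrupts the caller's value; evaluating the argument once into a local fixes that:

    /* Buggy pattern: the argument is evaluated (and modified) twice. */
    #define TURNON_BAD(map, bit)  ((map)[(bit)++ / 8] |= (1u << ((bit) % 8)))

    /* Safer pattern: evaluate the argument once into a local variable. */
    #define TURNON_OK(map, bit)                      \
        do {                                         \
            unsigned int _b = (bit);                 \
            (map)[_b / 8] |= (1u << (_b % 8));       \
        } while (0)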

* 4011363 (Tracking ID: 4003395)

SYMPTOM:
The system panicked while upgrading InfoScale 6.2.1 to InfoScale 7.4.2 using a phased upgrade.

DESCRIPTION:
During the upgrade operation, CPI first stops the processes, at which point the older modules get unloaded. Then the VRTSvxfs package is uninstalled. During uninstallation the vxfs-unconfigure script is run, which leaves the module driver file in place, because its checks are conditional on whether the module is loaded.

RESOLUTION:
The module driver file is now removed unconditionally in vxfs-unconfigure, so that no remnant files remain after the package uninstallation.

Patch ID: VRTSvxfs-7.4.1.2200

* 3986794 (Tracking ID: 3992718)

SYMPTOM:
On CFS, due to a race between clone removal and the inode inactivation process, a read on a clone's overlay attribute inode results in an error if the clone is marked for removal and is the last clone being removed, during attribute inode removal in the inode inactivation process.

DESCRIPTION:
The issue is seen with a primary fset inode having the VX_IEREMOVE extop set and its clone inode having the VX_IEPTTRUNC extop set. The clone is marked for removal and is the last clone being removed. This clone inode is in the inode inactivation process and has an overlay attribute inode associated with it. During the inode inactivation process, removal of this attribute inode is attempted; but because it is an overlay attribute inode and the clone is marked for removal as the last clone being removed, vx_cfs_iread() returns ENOENT for such an inode, and we end up hitting this issue. The race between vx_clone_dispose() and vx_inactive_process() is also a reason for this issue.

We have not seen this issue in the case of LM, as we do extop processing unconditionally during the clone creation process on local mounts.
On CFS we need to handle this case.

RESOLUTION:
In the CFS case, during the clone creation operation, do extop processing of inodes having the VX_IEPTTRUNC extop set.

* 3989317 (Tracking ID: 3989303)

SYMPTOM:
During a reconfiguration, a hang is seen when fsck dumps core and the coredump utility gets stuck in vx_statvfs() on RHEL8 and SLES15, where the OS systemd coredump utility calls vx_statvfs(). This blocks the recovery thread on the FS.

DESCRIPTION:
On RHEL8 and SLES15, the OS systemd coredump utility calls vx_statvfs().
During a reconfiguration where the primary node dies and CFS recovery is in process to replay the log files of the dead nodes, vxfsckd runs fsck on a secondary node. If fsck dumps core in between due to some error, the coredump utility thread gets stuck at vx_statvfs(), waiting to be woken up by the new primary to collect fs stats. This blocks the recovery thread, and we land in a deadlock.

RESOLUTION:
To unblock the recovery thread in this case, older fs stats are sent to the coredump utility when CFS recovery is in process on the secondary and "vx_fsckd_process" is set, which indicates that fsck is in progress for this filesystem.

* 3995201 (Tracking ID: 3990257)

SYMPTOM:
VxFS may face buffer overflow in case of doing I/O on File Change Log (FCL) file through Userspace Input Output (UIO) interface

DESCRIPTION:
In the case of the Userspace Input Output (UIO) interface, VxFS was not able to handle larger I/O requests properly, resulting in a buffer overflow.

RESOLUTION:
VxFS code is modified to limit the length of I/O requests coming through the Userspace Input Output (UIO) interface.

* 3997065 (Tracking ID: 3996947)

SYMPTOM:
FSCK operation may behave incorrectly or hang

DESCRIPTION:
While checking a filesystem with the fsck utility, we may see a hang or undefined behavior if FSCK finds a specific type of corruption. This type of corruption is visible in the presence of a checkpoint. The FSCK utility fixes any corruption as per the input given (either "y" or "n"). During this, for this specific type of corruption, due to a bug it ends up unlocking a mutex which is not locked.

RESOLUTION:
Code is modified to fix the bug and make sure the mutex is locked before unlocking it.

* 3998169 (Tracking ID: 3998168)

SYMPTOM:
For multi-TB filesystems, vxresize operations result in a system freeze for 8-10 minutes, causing application hangs and VCS timeouts.

DESCRIPTION:
During a resize, the primary node gets the delegation of all the allocation units. For larger filesystems, the total time taken by the delegation operation is quite large. Also, flushing the summary maps takes a considerable amount of time. This results in a filesystem freeze of around 8-10 minutes.

RESOLUTION:
Code changes have been done to reduce the total time taken by vxresize.

* 3998394 (Tracking ID: 3983958)

SYMPTOM:
An "open" system call on a file which belongs to a removed checkpoint returns "EPERM", where it ideally should return "ENOENT".

DESCRIPTION:
An "open" system call on a file which belongs to a removed checkpoint returns "EPERM", where it ideally should return "ENOENT".

RESOLUTION:
Code changes have been done so that the proper error code is returned in those scenarios.
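
A small sketch of the behavior after the fix (illustrative application code, with a hypothetical path):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>

    /* Sketch: after the fix, opening a file in a removed checkpoint is
     * expected to fail with ENOENT rather than EPERM. */
    int open_ckpt_file(const char *path)
    {
        int fd = open(path, O_RDONLY);

        if (fd < 0 && errno == ENOENT)
            fprintf(stderr, "%s: gone (checkpoint removed)\n", path);
        return fd;
    }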

* 4001379 (Tracking ID: 4001378)

SYMPTOM:
VxFS module failed to load on RHEL8.2

DESCRIPTION:
RHEL8.2 is a new release and it has some kernel changes which caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL8.2

Patch ID: VRTSvxfs-7.4.1.1600

* 3989413 (Tracking ID: 3989412)

SYMPTOM:
VxFS module failed to load on RHEL8.1

DESCRIPTION:
RHEL8.1 is a new release and it has some kernel changes which caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL8.1.

Patch ID: VRTSvxvm-7.4.1.2900

* 4013643 (Tracking ID: 4010207)

SYMPTOM:
System panic occurred with the below stack:

native_queued_spin_lock_slowpath()
queued_spin_lock_slowpath()
_raw_spin_lock_irqsave()
volget_rwspinlock()
volkiodone()
volfpdiskiodone()
voldiskiodone_intr()
voldmp_iodone()
bio_endio()
gendmpiodone()
dmpiodone()
bio_endio()
blk_update_request()
scsi_end_request()
scsi_io_completion()
scsi_finish_command()
scsi_softirq_done()
blk_done_softirq()
__do_softirq()
call_softirq()

DESCRIPTION:
As part of the IO statistics collection, the vxstat thread acquires a spinlock and tries to copy data to the user space. During the data copy, if a page fault happens, the thread relinquishes the CPU and it is given to some other thread. If the thread which then gets scheduled on the CPU requests the same spinlock which the vxstat thread had acquired, this results in a hard lockup situation.

RESOLUTION:
Code has been changed to properly release the spinlock before copying out the data to the user space during vxstat collection.
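
A kernel-style sketch of the fix pattern (not the VxVM source): stage the statistics into a kernel buffer under the spinlock, drop the lock, and only then copy to user space, where a page fault may safely sleep:

    #include <linux/spinlock.h>
    #include <linux/string.h>
    #include <linux/uaccess.h>

    /* Sketch: no page fault can occur while the spinlock is held. */
    static int stats_copyout(spinlock_t *lock, const void *stats,
                             void __user *ubuf, size_t len, void *staging)
    {
        unsigned long flags;

        spin_lock_irqsave(lock, flags);
        memcpy(staging, stats, len);        /* cannot fault */
        spin_unlock_irqrestore(lock, flags);

        return copy_to_user(ubuf, staging, len) ? -EFAULT : 0;
    }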

* 4023762 (Tracking ID: 4020046)

SYMPTOM:
The following IO errors reported on VxVM sub-disks result in the DRL log being detached without any SCSI errors detected.

VxVM vxio V-5-0-1276 error on Subdisk [xxxx] while writing volume [yyyy][log] offset 0 length [zzzz]
VxVM vxio V-5-0-145 DRL volume yyyy[log] is detached

DESCRIPTION:
DRL plexes got detached because an atomic-write flag (BIT_ATOMIC) was set on the BIO unexpectedly. The BIT_ATOMIC flag gets set on a bio only if the VOLSIO_BASEFLAG_ATOMIC_WRITE flag is set on the SUBDISK SIO and its parent MVWRITE SIO's sio_base_flags. When generating the MVWRITE SIO, its sio_base_flags were copied from a gio structure; because the gio structure memory isn't initialized, it may contain garbage values, hence the issue.

RESOLUTION:
Code changes have been made to fix the issue.

* 4031342 (Tracking ID: 4031452)

SYMPTOM:
Add node operation is failing with error "Error found while invoking '' in the new node, and rollback done in both nodes"

DESCRIPTION:
The stack showed a valid address for the pointer ptmap2, but it still generated a core. This suggested that it might be a double-free case; the issue lies in a pointer being freed twice.

RESOLUTION:
Added handling for such cases by assigning NULL to pointers wherever they are freed.

* 4033162 (Tracking ID: 3968279)

SYMPTOM:
Vxconfigd dumps core with SEGFAULT/SIGABRT on boot for NVME setup.

DESCRIPTION:
For an NVME setup, vxconfigd dumps core while doing device discovery, as the data structure is accessed by multiple threads and can hit a race condition. For sector sizes other than 512, a partition size mismatch is seen because the comparison is done with the partition size from devintf_getpart(), which is in units of the disk's sector size. This can lead to NVME device discovery being invoked.

RESOLUTION:
Added a mutex lock while accessing the data structure so as to prevent the core dump. Made the calculations in terms of the sector size of the disk to prevent the partition size mismatch.
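
A minimal user-space sketch of the first part of the fix (illustrative, not the vxconfigd source): serialize every access to the shared discovery state with a mutex:

    #include <pthread.h>

    static pthread_mutex_t disc_lock = PTHREAD_MUTEX_INITIALIZER;
    static int ndevices;   /* stand-in for the shared discovery state */

    /* Sketch: taking the mutex on every access closes the race between
     * the discovery threads. */
    void add_discovered_device(void)
    {
        pthread_mutex_lock(&disc_lock);
        ndevices++;
        pthread_mutex_unlock(&disc_lock);
    }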

* 4033163 (Tracking ID: 3959716)

SYMPTOM:
The system may panic with sync replication in a VVR configuration, when the VVR RVG is in DCM mode, with the following panic stack:
volsync_wait [vxio]
voliod_iohandle [vxio]
volted_getpinfo [vxio]
voliod_loop [vxio]
voliod_kiohandle [vxio]
kthread

DESCRIPTION:
With sync replication, if the ACK for a data message is delayed from the secondary site, the primary site might incorrectly free the message from the waiting queue at the primary site. Due to this incorrect handling of the message, a system panic may happen.

RESOLUTION:
Required code changes are done to resolve the panic issue.

* 4033172 (Tracking ID: 3994368)

SYMPTOM:
While node 0 was shutting down, the vxconfigd daemon aborted on node 1, and an I/O write error happened on node 1.

DESCRIPTION:
Examining the vxconfigd core, we found that it entered endless sigio processing, which resulted in a stack overflow and hence the vxconfigd core dump. After that, vxconfigd restarted and ended up in a dg disable scenario.

RESOLUTION:
Appropriate code changes have been done to handle the stack overflow scenario.

* 4033173 (Tracking ID: 4021301)

SYMPTOM:
A data corruption issue happened with big-size IO processed by the Linux kernel IO split on RHEL8.

DESCRIPTION:
On RHEL8, as of Linux kernel 3.13, there are changes in the Linux kernel block layer: a new member of the bio iterator structure is used to represent the start offset of the bio or bio vectors after the IO is processed by the kernel IO split functions. Also, recent versions of VxFS can generate bios larger than the size limitation defined in the Linux kernel block layer and VxVM, so IO from VxFS could be split by the Linux kernel. For such split IOs, VxVM did not take the new bio iterator member into account while processing them, which caused data to be written to the wrong position on the volume/disk. Hence, data corruption.

RESOLUTION:
Code changes have been made to bypass the Linux kernel IO split functions, which are redundant for VxVM IO processing.

* 4033216 (Tracking ID: 3993050)

SYMPTOM:
vxdctl dumpmsg command gets stuck on large node cluster during reconfiguration

DESCRIPTION:
The vxdctl dumpmsg command gets stuck on a large node cluster during reconfiguration with the following stack. This causes the /var/adm/vx/voldctlmsg.log file to be filled with GBs of old repeated messages, consuming most of the /var space.
# pstack 210460
voldctl_get_msgdump ()
do_voldmsg ()
main ()

RESOLUTION:
Code changes have been done to dump only the required messages to the file.

* 4033515 (Tracking ID: 3984266)

SYMPTOM:
The DCM flag on the RVG (Replicated Volume Group) volume may get deactivated after a master switch in CVR (Clustered Volume Replicator), which may cause excessive RVG recovery after subsequent node reboots.

DESCRIPTION:
After a master switch, the DCM flag needs to be updated on the new CVM master node. Due to a transaction initiated in parallel with the master switch, the DCM flag was getting lost. This caused excessive RVG recovery during subsequent node reboots, as the DCM write position was not updated for a long time.

RESOLUTION:
The code is fixed to handle the race in updating the DCM flag during a master switch.

* 4035313 (Tracking ID: 4037915)

SYMPTOM:
Getting compilation errors due to RHEL's source code changes

DESCRIPTION:
While compiling against the RHEL 8.4 kernel (4.18.0-304), the build fails due to certain Red Hat source changes.

RESOLUTION:
The following changes have been made so that the code builds with VxVM 7.4.1:

__bdevname - deprecated
Solution: Keep a struct block_device and use bdevname()

blkg_tryget_closest - placed under EXPORT_SYMBOL_GPL
Solution: Locally defined the function where the compilation error was hit

sync_core - implicit declaration
Solution: The implementation of sync_core() has been moved to the header file sync_core.h, so including this header file fixes the error
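
For illustration, a kernel-style sketch of the first item (assumed usage, not the VxVM source): keep a struct block_device and call bdevname() instead of the removed __bdevname():

    #include <linux/genhd.h>   /* bdevname(), BDEVNAME_SIZE */

    /* Sketch: buf must be at least BDEVNAME_SIZE bytes. */
    static void disk_name_of(struct block_device *bdev,
                             char buf[BDEVNAME_SIZE])
    {
        bdevname(bdev, buf);
    }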

* 4036426 (Tracking ID: 4036423)

SYMPTOM:
Race condition while reading config file in docker volume plugin caused the issue in Flex Appliance.

DESCRIPTION:
If two simultaneous requests come in, say for MountVolume, both of them update the global variables, which leads to wrong parameter values in some cases.

RESOLUTION:
The fix is to read this file only once, during startup, in the init() function. If the user wants to change default values in the config file, they must restart the vxinfoscale-docker service.
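
A sketch of the fix idea in C (the plugin itself may be written in another language; names here are illustrative): parse the config exactly once at startup so concurrent requests never rewrite the globals:

    #include <pthread.h>

    static pthread_once_t cfg_once = PTHREAD_ONCE_INIT;

    static void load_config(void)
    {
        /* parse the config file into global variables here, once */
    }

    void handle_mount_request(void)
    {
        pthread_once(&cfg_once, load_config);
        /* ... use the now read-only globals safely ... */
    }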

* 4037331 (Tracking ID: 4037914)

SYMPTOM:
Crash while running VxVM cert.

DESCRIPTION:
While running the VM cert, a panic is reported.

RESOLUTION:
The fix sets up the bio and submits it to the IOD layer in our own vxvm_gen_strategy() function.

* 4037810 (Tracking ID: 3977101)

SYMPTOM:
While testing the VM cert, a core dump is produced; no functionality breaks were observed.

DESCRIPTION:
A regression was caused by read_sol_label() using the same return variable (ret) more than once. Code added to get the sector size used the same return variable, so the function reported the presence of a label even when one does not exist.

RESOLUTION:
Code repositioned in vxpart.c so that the return value is assigned only in the presence of a label.

Patch ID: VRTSvxvm-7.4.1.2800

* 3984155 (Tracking ID: 3976678)

SYMPTOM:
vxvm-recover:  cat: write error: Broken pipe error encountered in syslog multiple times.

DESCRIPTION:
Due to a bug in the vxconfigbackup script, which is started by vxvm-recover, "cat: write error: Broken pipe" is encountered in syslog and reported under vxvm-recover. In the vxconfigbackup code, multiple subshells are created in a function call, and the first subshell is for the cat command. When a particular if condition is satisfied, return is called, exiting the later subshells even when there is data to be read in the created cat subshell, which results in the broken pipe error.

RESOLUTION:
Changes are done in VxVM code to handle the broken pipe error.

* 4016283 (Tracking ID: 3973202)

SYMPTOM:
A VVR primary node may panic with below stack due to accessing the freed memory:
nmcom_throttle_send()
nmcom_sender()
kthread ()
kernel_thread()

DESCRIPTION:
After sending the data to the VVR (Veritas Volume Replicator) secondary site, the code accessed some variables for which the memory was already released, because the data ACK got processed quite early. This was a rare race condition caused by accessing freed memory.

RESOLUTION:
Code changes have been made to avoid the incorrect memory access.

* 4016291 (Tracking ID: 4002066)

SYMPTOM:
System panic with the below stack when doing reclaim:
__wake_up_common_lock+0x7c/0xc0
sbitmap_queue_wake_all+0x43/0x60
blk_mq_tag_wakeup_all+0x15/0x30
blk_mq_wake_waiters+0x3d/0x50
blk_set_queue_dying+0x22/0x40
blk_cleanup_queue+0x21/0xd0
vxvm_put_gendisk+0x3b/0x120 [vxio]
volsys_unset_device+0x1d/0x30 [vxio]
vol_reset_devices+0x12b/0x180 [vxio]
vol_reset_kernel+0x16c/0x220 [vxio]
volconfig_ioctl+0x866/0xdf0 [vxio]

DESCRIPTION:
With recent kernels, it is expected that the kernel returns the pre-allocated sense buffer. These sense buffer pointers are supposed to be unchanged across multiple uses of a request; they are pre-allocated and expected to be unchanged until the request memory is freed. DMP overwrote the original sense buffer, hence the issue.

RESOLUTION:
Code changes have been made to avoid tampering the pre-allocated sense buffer.

* 4016768 (Tracking ID: 3989161)

SYMPTOM:
The system panic occurs because of hard lockup with the following stack:

#13 [ffff9467ff603860] native_queued_spin_lock_slowpath at ffffffffb431803e
#14 [ffff9467ff603868] queued_spin_lock_slowpath at ffffffffb497a024
#15 [ffff9467ff603878] _raw_spin_lock_irqsave at ffffffffb4988757
#16 [ffff9467ff603890] vollog_logger at ffffffffc105f7fa [vxio]
#17 [ffff9467ff603918] vol_rv_update_childdone at ffffffffc11ab0b1 [vxio]
#18 [ffff9467ff6039f8] volsiodone at ffffffffc104462c [vxio]
#19 [ffff9467ff603a88] vol_subdisksio_done at ffffffffc1048eef [vxio]
#20 [ffff9467ff603ac8] volkcontext_process at ffffffffc1003152 [vxio]
#21 [ffff9467ff603b10] voldiskiodone at ffffffffc0fd741d [vxio]
#22 [ffff9467ff603c40] voldiskiodone_intr at ffffffffc0fda92b [vxio]
#23 [ffff9467ff603c80] voldmp_iodone at ffffffffc0f801d0 [vxio]
#24 [ffff9467ff603c90] bio_endio at ffffffffb448cbec
#25 [ffff9467ff603cc0] gendmpiodone at ffffffffc0e4f5ca [vxdmp]
... ...
#50 [ffff9497e99efa60] do_page_fault at ffffffffb498d975
#51 [ffff9497e99efa90] page_fault at ffffffffb4989778
#52 [ffff9497e99efb40] conv_copyout at ffffffffc10005da [vxio]
#53 [ffff9497e99efbc8] conv_copyout at ffffffffc100044e [vxio]
#54 [ffff9497e99efc50] volioctl_copyout at ffffffffc1032db3 [vxio]
#55 [ffff9497e99efc80] vol_get_logger_data at ffffffffc105e4ce [vxio]
#56 [ffff9497e99efcf8] voliot_ioctl at ffffffffc105e66b [vxio]
#57 [ffff9497e99efd78] volsioctl_real at ffffffffc10aee82 [vxio]
#58 [ffff9497e99efe50] vols_ioctl at ffffffffc0646452 [vxspec]
#59 [ffff9497e99efe70] vols_unlocked_ioctl at ffffffffc06464c1 [vxspec]
#60 [ffff9497e99efe80] do_vfs_ioctl at ffffffffb4462870
#61 [ffff9497e99eff00] sys_ioctl at ffffffffb4462b21

DESCRIPTION:
The vxio kernel module sends a signal to vxloggerd to flush the log as it is almost full. Vxloggerd calls into the vxio kernel to copy the log buffer out. As vxio copies the log data from kernel to user space while holding a spinlock, if a page fault occurs during the copy out, a hard lockup and panic occur.

RESOLUTION:
Code changes have been made to fix the problem.

* 4017194 (Tracking ID: 4012681)

SYMPTOM:
If vradmind process terminates due to some reason, it is not properly restarted by RVG agent of VCS.

DESCRIPTION:
The RVG (Replicated Volume Group) agent of VCS (Veritas Cluster Server) restarts the vradmind process if it gets killed or terminated for some reason. This was not working properly on systemd-enabled platforms like RHEL-7.
On systemd-enabled platforms, after the vradmind process died, the vras-vradmind service used to stay in the active/running state; due to this, even after the RVG agent issued a command to start the vras-vradmind service, the vradmind process was not getting started.

RESOLUTION:
The code is modified to fix the parameters for vras-vradmind service, so that the service status will change to failed/faulted if vradmind process gets killed. 
The service can be manually started later or RVG agent of VCS can start the service, which will start the vradmind process as well.

* 4017502 (Tracking ID: 4020166)

SYMPTOM:
Build issue because of "struct request":

error: struct request has no member named next_rq

Linux has deprecated the member next_rq.

DESCRIPTION:
The issue was observed due to changes in OS structure

RESOLUTION:
code changes are done in required files

* 4019781 (Tracking ID: 4020260)

SYMPTOM:
While enabling the DMP native support tunable dmp_native_support on CentOS 8, the below error was observed:

[root@dl360g9-4-vm2 ~]# vxdmpadm settune dmp_native_support=on
VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume groups

VxVM vxdmpadm ERROR V-5-1-15686 The following vgs could not be migrated as error in bootloader configuration file 

 cl
[root@dl360g9-4-vm2 ~]#

DESCRIPTION:
The issue was observed due to missing code check-ins for CentOS 8 in the required files.

RESOLUTION:
Changes are done in required files for dmp native support in CentOS 8

Patch ID: VRTSvxvm-7.4.1.2700

* 3984163 (Tracking ID: 3978216)

SYMPTOM:
'Device mismatch warning' seen on boot when DMP native support is enabled with LVM snapshot of root disk present

DESCRIPTION:
When the DMP (Dynamic Multipathing) Native Support feature is enabled on a system that has an LVM snapshot of the root disk present, "Device mismatch" warning messages are seen in the boot.log file on every reboot. The messages appear because LVM tries to access the LV using the information present in the lvm.cache file, which is stale. Because of accessing this stale file, the warning messages are seen on reboot.

RESOLUTION:
Fix is to remove the LVM cache file during system shutdown as part of VxVM shutdown.

* 4010517 (Tracking ID: 3998475)

SYMPTOM:
Data corruption is observed and service groups went into partial state.

DESCRIPTION:
In VxVM, fsck log replay initiated read of 64 blocks, that was getting split across 2 stripes of the stripe-mirror volume. 
So, we had 2 read I/Os of 48 blocks (first split I/O) and 16 blocks (second split I/O).
Since the volume was in RWBK mode, this read I/O was stabilized. Upon completion of the read I/O at subvolume level, this I/O was unstabilized and the contents 
of the stable I/O (stablekio) were copied to the original I/O (origkio). It was observed that the data was always correct till the subvolume level but at the 
top level plex and volume level, it was incorrect (printed checksum in vxtrace output for this).

The reason for this was during unstabilization, we do volkio_to_kio_copy() which copies the contents from stable kio to orig kio (since it is a read).
As the orig kio was an unmapped PHYS I/O, in Solaris 11.4, the contents will be copied out using bp_copyout() from volkiomem_kunmap(). The volkiomem_seek() and 
volkiomem_next_segment() allocates pagesize (8K) kernel buffer (zero'ed out) where the contents will be copied to.
When the first split I/O completed unstabilization before the second split I/O, this issue was not seen. However, if the second split I/O completed before the first split I/O, the issue was seen.

Here, in the last iteration of volkio_to_kio_copy(), the data copied was less than the allocated region size: we allocate an 8K region whereas the data copied from stablekio was less than 8K. Later, during kunmap(), we did a bp_copyout() of the allocated size, i.e. 8K. This caused copyout of extra regions that were zeroed out. Hence the data corruption.

RESOLUTION:
Now we do a bp_copyout() of the right length i.e. of the copied size instead of the allocated region size.

* 4010996 (Tracking ID: 4010040)

SYMPTOM:
Configuring VRTSvxvm package creates a world writable file: "/etc/vx/.vxvvrstatd.lock".

DESCRIPTION:
VVR statistics daemon (vxvvrstad) creates this file on startup. The umask for this daemon was not set correctly resulting in creation of the world writable file.

RESOLUTION:
The VVR daemon is updated to set the umask properly.

* 4011027 (Tracking ID: 4009107)

SYMPTOM:
CA chain certificate verification fails in VVR when the number of intermediate certificates is greater than the depth, so we get an error in SSL initialization.

DESCRIPTION:
CA chain certificate verification fails in VVR when the number of intermediate certificates is greater than the depth. The SSL_CTX_set_verify_depth() API decides the depth of certificates (in the /etc/vx/vvr/cacert file) to be verified, which was limited to a count of 1 in the code. Thus the intermediate CA certificate present first in /etc/vx/vvr/cacert (the depth-1 CA/issuer certificate for the server certificate) could be obtained and verified during connection, but the root CA certificate (the depth-2, higher CA certificate) could not be verified while connecting, hence the error.

RESOLUTION:
Removed the call of SSL_CTX_set_verify_depth() API so as to handle the depth automatically.

* 4011097 (Tracking ID: 4010794)

SYMPTOM:
Veritas Dynamic Multi-Pathing (DMP) caused system panic in a cluster with below stack while there were storage activities going on.
dmp_start_cvm_local_failover+0x118()
dmp_start_failback+0x398()
dmp_restore_node+0x2e4()
dmp_revive_paths+0x74()
gen_update_status+0x55c()
dmp_update_status+0x14()
gendmpopen+0x4a0()

DESCRIPTION:
It could happen that a dmpnode's current primary path became invalid when disks were attached/detached in a cluster. DMP accessed the current primary path without doing a sanity check; hence the system panicked due to an invalid pointer.

RESOLUTION:
Code changes have been made to avoid accessing an invalid pointer.
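
A simplified sketch of the sanity check (hypothetical types and fields, not the DMP source):

    #include <stddef.h>

    struct dmp_path { int valid; };                 /* illustrative */
    struct dmp_node { struct dmp_path *primary; };  /* illustrative */

    /* Sketch: validate the cached primary path before dereferencing it,
     * since a disk attach/detach may have invalidated it. */
    struct dmp_path *get_primary_path(struct dmp_node *node)
    {
        struct dmp_path *p = node->primary;

        return (p != NULL && p->valid) ? p : NULL;
    }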

* 4011105 (Tracking ID: 3972433)

SYMPTOM:
IO hang might be seen while issuing heavy IO load on volumes having cache objects.

DESCRIPTION:
While issuing heavy IO on volumes having cache objects, the IO on the cache volumes may stall due to the locking (region lock) involved for overlapping IO requests on the same cache object. When the appropriate locks were granted to the IOs, all the IOs were processed serially through a single VxVM IO daemon thread. This serial processing caused slowness, resulting in an IO-hang-like situation and application timeouts.

RESOLUTION:
The code changes are done to properly perform multi-processing of the cache volume IOs.

Patch ID: VRTSvxvm-7.4.1.2200

* 3992902 (Tracking ID: 3975667)

SYMPTOM:
NMI watchdog: BUG: soft lockup

DESCRIPTION:
When flow control on the ioshipping channel is set, there is a window in the code where the vol_ioship_sender thread can go into a tight loop. This causes the soft lockup.

RESOLUTION:
The CPU is relinquished so that other processes can be scheduled. The vol_ioship_sender() thread restarts after a short delay.

* 3997906 (Tracking ID: 3987937)

SYMPTOM:
VxVM commands hang when a heavy IO load is performed on a VxVM volume that has a snapshot; the IO memory pool being full is also observed.

DESCRIPTION:
It is a deadlock situation that occurs with heavy IOs on a volume with snapshots. A multistep SIO A acquired the ilock while its child MV write SIO was waiting for the memory pool, which was full; another multistep SIO B had acquired memory and was waiting for the ilock held by multistep SIO A.

RESOLUTION:
Code changes have been made to fix the issue.

* 4000388 (Tracking ID: 4000387)

SYMPTOM:
The existing VxVM module fails to load on RHEL 8.2.

DESCRIPTION:
RHEL 8.2 is a new release with a few KABI changes that break the VxVM compilation.

RESOLUTION:
Compiled the VxVM code against the 8.2 kernel and made changes to make it compatible.

* 4001399 (Tracking ID: 3995946)

SYMPTOM:
The CVM slave is unable to join the cluster, with the following errors:
VxVM vxconfigd ERROR V-5-1-11092 cleanup_client: (Memory allocation failure) 12
VxVM vxconfigd ERROR V-5-1-11467 kernel_fail_join(): Reconfiguration interrupted: Reason is retry to add a node failed (13, 0)

DESCRIPTION:
The vol_vvr_tcp_keepalive and vol_vvr_tcp_timeout tunables were introduced in 7.4.1 U1 for Linux only; they are not supported on other platforms such as Solaris and AIX. Due to a bug in the code, these two tunables were exposed on those platforms, and CVM could not get the tunable information from the master node. Hence the issue.

RESOLUTION:
Code changes have been made to hide vol_vvr_tcp_keepalive and vol_vvr_tcp_timeout on other platforms such as Solaris and AIX.

* 4001736 (Tracking ID: 4000130)

SYMPTOM:
The system panics when DMP co-exists with EMC PP on RHEL8/SLES12SP4, with the following stacks:

#6 [] do_page_fault 
#7 [] page_fault 
[exception RIP: dmp_kernel_scsi_ioctl+888]
#8 [] dmp_kernel_scsi_ioctl at [vxdmp]
#9 [] dmp_dev_ioctl at [vxdmp]
#10 [] do_passthru_ioctl at [vxdmp]
#11 [] dmp_tur_temp_pgr at [vxdmp]
#12 [] dmp_pgr_set_temp_key at [vxdmp]
#13 [] dmpioctl at [vxdmp]
#14 [] dmp_ioctl at [vxdmp]
#15 [] blkdev_ioctl 
#16 [] block_ioctl 
#17 [] do_vfs_ioctl
#18 [] ksys_ioctl 

Or

 #8 [ffff9c3404c9fb40] page_fault 
 #9 [ffff9c3404c9fbf0] dmp_kernel_scsi_ioctl at [vxdmp]
#10 [ffff9c3404c9fc30] dmp_scsi_ioctl at [vxdmp]
#11 [ffff9c3404c9fcb8] dmp_send_scsireq at [vxdmp]
#12 [ffff9c3404c9fcd0] dmp_do_scsi_gen at [vxdmp]
#13 [ffff9c3404c9fcf0] dmp_pr_send_cmd at [vxdmp]
#14 [ffff9c3404c9fd80] dmp_pr_do_read at [vxdmp]
#15 [ffff9c3404c9fdf0] dmp_pgr_read at [vxdmp]
#16 [ffff9c3404c9fe20] dmpioctl at [vxdmp]
#17 [ffff9c3404c9fe30] dmp_ioctl at [vxdmp]

DESCRIPTION:
From kernel version 4.10.17 onwards, there is no guarantee from the block layer or other drivers that the cmd pointer at least points to __cmd when a SCSI request is initialized. DMP directly accessed the cmd pointer after getting the SCSI request from the underlying layer, without a sanity check; hence the issue.

RESOLUTION:
Code changes have been made to perform a sanity check when initializing a SCSI request.

* 4001745 (Tracking ID: 3992053)

SYMPTOM:
Data corruption may happen with layered volumes because some data is not re-synced while attaching a plex, leaving
inconsistent data across the plexes after the plex attach.

DESCRIPTION:
When a plex is detached in a layered volume, the regions which are dirty/modified are tracked in DCO (Data change object) map.
When the plex is attached back, the data corresponding to these dirty regions is re-synced to the plex being attached.
There was a defect in the code due to which some particular regions were NOT re-synced when a plex was attached.
This issue happens only when the offset of the sub-volume is NOT aligned with the region size of the DCO (Data Change Object) volume.

RESOLUTION:
The code defect is fixed to correctly copy the data for dirty regions when the sub-volume offset is NOT aligned with the DCO region size.

* 4001746 (Tracking ID: 3999520)

SYMPTOM:
VxVM commands may hang with the following stack when a user tries to start or stop the DMP IO statistics collection while
the DMP iostat tunable (dmp_iostats_state) was disabled earlier.

schedule()
rwsem_down_failed_common()
rwsem_down_write_failed()
call_rwsem_down_write_failed()
dmp_reconfig_write_lock()
dmp_update_reclaim_attr()
gendmpioctl()
dmpioctl()

DESCRIPTION:
When the DMP iostat tunable (dmp_iostats_state) is disabled and user tries to 
start (vxdmpadm iostat start) or stop (vxdmpadm iostat stop) the DMP iostat collection, then 
a thread which collects the IO statistics was exiting without releasing a lock. Due to this,
further VxVM commands were getting hung while waiting for the lock.

RESOLUTION:
The code is changed to correctly release the lock when the tunable 'dmp_iostats_state' is disabled.

* 4001748 (Tracking ID: 3991580)

SYMPTOM:
IO and VxVM command hang may happen if IO performed on both source and snapshot volumes.

DESCRIPTION:
It's a deadlock situation occurring with heavy IOs on both source volume and snapshot volume. 
SIO (a), USER_WRITE, on snap volume, held ILOCK (a), waiting for memory(full).
SIO (b),  PUSHED_WRITE, on snap volume, waiting for ILOCK (a).
SIO (c),  parent of SIO (b), USER_WRITE, on the source volume, held ILOCK (c) and memory, waiting for SIO (b) done.

RESOLUTION:
A separate memory pool is now used for IO writes on the snapshot volume to resolve the issue.

* 4001750 (Tracking ID: 3976392)

SYMPTOM:
Memory corruption might happen in VxVM (Veritas Volume Manager) while processing Plex detach request.

DESCRIPTION:
During the processing of a plex detach request, the VxVM volume is operated on serially. During serialization, it might happen that the current thread has queued the I/O and is still accessing it. In the meantime, the same I/O is picked up by one of the VxVM threads for processing; the processing of the I/O completes, and the I/O is then deleted. The current thread is still accessing memory that has already been deleted, which might lead to memory corruption.

RESOLUTION:
The fix is to not use the same I/O in the current thread once the I/O is queued as part of serialization, and to complete the processing before queuing the I/O.

* 4001752 (Tracking ID: 3969487)

SYMPTOM:
Data corruption is observed with layered volumes after resynchronisation when a mirror of the volume is detached and attached back.

DESCRIPTION:
In the case of a layered volume, if an IO fails at the underlying subvolume layer, then before the mirror detach, the top volume in the layered volume has to be serialized (IOs run in a serial fashion). When the volume is serialized, IOs on the volume are directly tracked in the detach map of the DCO (Data Change Object). During this period, if new IOs occur on the volume, they are not tracked as part of the detach map inside the DCO, because detach map tracking has not yet been enabled by the failed IOs. The new IOs that are not tracked in the detach map are missed when the plex resynchronisation happens later, which leads to corruption.

RESOLUTION:
The fix is to delay the unserialization of the volume until the failed IOs actually detach the plex and enable detach map tracking. This ensures that new IOs are tracked as part of the detach map of the DCO.

* 4001755 (Tracking ID: 3980684)

SYMPTOM:
Kernel panic in voldrl_hfind_an_instant while accessing an agenode, with the following stack:
[exception RIP: voldrl_hfind_an_instant+49]
#11 voldrl_find_mark_agenodes
#12 voldrl_log_internal_30
#13 voldrl_log_30
#14 volmv_log_drlfmr
#15 vol_mv_write_start
#16 volkcontext_process
#17 volkiostart
#18 vol_linux_kio_start
#19 vxiostrategy
...

DESCRIPTION:
Agenode corruption is hit when the per-file sequential hint is used. The agenode's linked list is corrupted because a pointer was not set to NULL
when reusing the agenode.

RESOLUTION:
Changes are done in VxVM code to avoid Agenode list corruption.

* 4001757 (Tracking ID: 3969387)

SYMPTOM:
In an FSS (Flexible Storage Sharing) environment, the system might panic with the following stack:
vol_get_ioscb [vxio]
vol_ecplex_rhandle_resp [vxio]
vol_ioship_rrecv [vxio]
gab_lrrecv [gab]
vx_ioship_llt_rrecv [llt]
vx_ioship_process_frag_packets [llt]
vx_ioship_process_data [llt]
vx_ioship_recv_data [llt]

DESCRIPTION:
In a certain scenario, it may happen that a request gets purged and the response arrives after that. The system might then panic due to accessing the freed resource.

RESOLUTION:
Code changes have been made to fix the issue.

Patch ID: VRTSvxvm-7.4.1.1600

* 3984139 (Tracking ID: 3965962)

SYMPTOM:
No option to disable auto-recovery when a slave node joins the CVM cluster.

DESCRIPTION:
In a CVM environment, when the slave node joins the CVM cluster, it is possible that the plexes may not be in sync. In such a scenario auto-recovery is triggered for the plexes.  If a node is stopped using the hastop -all command when the auto-recovery is in progress, the vxrecover operation may hang. An option to disable auto-recovery is not available.

RESOLUTION:
The VxVM module is updated to allow administrators to disable auto-recovery when a slave node joins a CVM cluster.
A new tunable, auto_recover, is introduced. By default, the tunable is set to 'on' to trigger the auto-recovery. Set its value to 'off' to disable auto-recovery. Use the vxtune command to set the tunable.
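
The tunable can be displayed and set with the vxtune command. A minimal sketch, assuming the usual 'vxtune <tunable> [<value>]' syntax (run the command without a value to display the current setting):
    # vxtune auto_recover
    # vxtune auto_recover off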

* 3984731 (Tracking ID: 3984730)

SYMPTOM:
VxVM logs warning messages when the VxDMP module is stopped or removed for the first time after the system is rebooted.

DESCRIPTION:
VxVM logs these warnings because the QUEUE_FLAG_REGISTERED and QUEUE_FLAG_INIT_DONE queue flags are not cleared while registering the dmpnode.
The following stack is reported after stopping/removing VxDMP for first time after every reboot:
kernel: WARNING: CPU: 28 PID: 33910 at block/blk-core.c:619 blk_cleanup_queue+0x1a3/0x1b0
kernel: CPU: 28 PID: 33910 Comm: modprobe Kdump: loaded Tainted: P OE ------------ 3.10.0-957.21.3.el7.x86_64 #1
kernel: Hardware name: HPE ProLiant DL380 Gen10/ProLiant DL380 Gen10, BIOS U30 10/02/2018
kernel: Call Trace:
kernel: [<ffffffff9dd63107>] dump_stack+0x19/0x1b
kernel: [<ffffffff9d697768>] __warn+0xd8/0x100
kernel: [<ffffffff9d6978ad>] warn_slowpath_null+0x1d/0x20
kernel: [<ffffffff9d944b03>] blk_cleanup_queue+0x1a3/0x1b0
kernel: [<ffffffffc0cd1f3f>] dmp_unregister_disk+0x9f/0xd0 [vxdmp]
kernel: [<ffffffffc0cd7a08>] dmp_remove_mp_node+0x188/0x1e0 [vxdmp]
kernel: [<ffffffffc0cd7b45>] dmp_destroy_global_db+0xe5/0x2c0 [vxdmp]
kernel: [<ffffffffc0cde6cd>] dmp_unload+0x1d/0x30 [vxdmp]
kernel: [<ffffffffc0d0743a>] cleanup_module+0x5a/0xd0 [vxdmp]
kernel: [<ffffffff9d71692e>] SyS_delete_module+0x19e/0x310
kernel: [<ffffffff9dd75ddb>] system_call_fastpath+0x22/0x27
kernel: --[ end trace fd834bc7817252be ]--

RESOLUTION:
The queue flags are modified to handle this situation and not to log such warning messages.

* 3988238 (Tracking ID: 3988578)

SYMPTOM:
Encrypted volume creation fails on RHEL 8

DESCRIPTION:
On the RHEL 8 platform, python3 gets installed by default. However, the Python script that is used to create encrypted volumes and to communicate with the Key Management Service (KMS) is not compatible with python3. Additionally, an 'unsupported protocol' error is reported for the SSL protocol SSLv23 that is used in the PyKMIP library to communicate with the KMS.

RESOLUTION:
The python script is made compatible with python2 and python3. A new option ssl_version is made available in the /etc/vx/enc-kms-kmip.conf file to represent the SSL version to be used by the KMIP client. The 'unsupported protocol' error is addressed by using the protocol version PROTOCOL_TLSv1.
The following is an example of the sample configuration file:
[client]
host = kms-enterprise.example.com
port = 5696
keyfile = /etc/vx/client-key.pem
certfile = /etc/vx/client-crt.pem
cacerts = /etc/vx/cacert.pem
ssl_version = PROTOCOL_TLSv1

* 3988843 (Tracking ID: 3989796)

SYMPTOM:
The existing package failed to load on a RHEL 8.1 setup.

DESCRIPTION:
RHEL 8.1 is a new release, and hence the VxVM module is compiled with this new kernel along with a few other changes.

RESOLUTION:
Changes have been made to make VxVM compatible with RHEL 8.1.

Patch ID: VRTSdbac-7.4.1.2900

* 4037952 (Tracking ID: 4037048)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 3.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
4(RHEL8.4) is now introduced.

Patch ID: VRTSdbac-7.4.1.2800

* 4019681 (Tracking ID: 4019674)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 3(RHEL8.3) is now introduced.

Patch ID: VRTSdbac-7.4.1.2100

* 4002155 (Tracking ID: 4002150)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.1.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 2(RHEL8.2) is now introduced.

Patch ID: VRTSdbac-7.4.1.1600

* 3990021 (Tracking ID: 3990016)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 
Update 1(RHEL8.1).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux 
versions later than RHEL8.0.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 
1(RHEL8.1) is now introduced.

Patch ID: VRTSvcs-7.4.1.2900

* 4026815 (Tracking ID: 4026819)

SYMPTOM:
Non-root users of GuestGroup in a secure cluster cannot execute VCS commands like "hagrp -state".

DESCRIPTION:
When a non-root guest user runs a HAD CLI command, the command fails to execute and the following error is logged: "VCS ERROR V-16-1-10600 Cannot connect to VCS engine". This issue occurs when IPv6 is disabled.

RESOLUTION:
This hotfix updates the VCS module to run HAD CLI commands successfully even when IPv6 is disabled.

Patch ID: VRTSvcs-7.4.1.2800

* 3995684 (Tracking ID: 3995685)

SYMPTOM:
A discrepancy was observed between the VCS engine log messages at the primary site and those at the DR site in a GCO configuration.

DESCRIPTION:
If a resource that was online at the primary site is taken offline outside VCS control, the VCS engine logs the messages related to the unexpected change in the state of the resource (successful clean entry point execution, and so on). The messages clearly indicate that the resource is faulted. However, the VCS engine does not log any debugging error messages regarding the fault at the primary site, but instead logs them at the DR site. Consequently, there is a discrepancy between the engine log messages at the primary site and those at the DR site.

RESOLUTION:
The VCS engine module is updated to log the appropriate debugging error messages at the primary site when a resource goes into the Faulted state.

FILE / VERSION:
had.exe / 7.4.10004.0
hacf.exe / 7.4.10004.0
haconf.exe  / 7.4.10004.0

* 4012318 (Tracking ID: 4012518)

SYMPTOM:
The gcoconfig command does not accept "." in the interface name.

DESCRIPTION:
The naming guidelines for network interfaces allow the "." character to be included as part of the name string. However, if this character is included, the gcoconfig command returns an error stating that the NIC name is invalid.

RESOLUTION:
This hotfix updates the gcoconfig command code to allow the inclusion of the "." character when providing interface names.

Patch ID: VRTSvcsag-7.4.1.2900

* 4021371 (Tracking ID: 4021370)

SYMPTOM:
The AWSIP and EBSVol resources fail to come online when IMDSv2 is used for requesting instance metadata.

DESCRIPTION:
By default, the AWSIP and EBSVol agents use IMDSv1 for requesting instance metadata. If the AWS cloud environment is configured to use IMDSv2, the AWSIP and EBSVol resources fail to come online and go into the UNKNOWN state.

RESOLUTION:
This hotfix updates the AWSIP and EBSVol agents to access the instance metadata based on the instance configuration for IMDS.

* 4028124 (Tracking ID: 4027915)

SYMPTOM:
Processes configured for HA using the ProcessOnOnly agent get killed during shutdown or reboot, even if they are still in use.

DESCRIPTION:
Processes that are started by the ProcessOnOnly agent do not have any dependencies on vcs.service. Such processes can therefore get killed during shutdown or reboot, even if they are being used by other VCS processes. Consequently, issues occur while bringing down Infoscale services during shutdown or reboot.

RESOLUTION:
This hotfix addresses the issue by enhancing the ProcessOnOnly agent such that the configured processes have their own systemd service files. The service file is used to set dependencies, so that the corresponding process is not killed unexpectedly during shutdown or reboot.

Patch ID: VRTSvcsag-7.4.1.2800

* 3984343 (Tracking ID: 3982300)

SYMPTOM:
A warning message related to the process priority is logged in the ProcessOnOnly agent log every minute.

DESCRIPTION:
This issue occurs because the value of the Priority attribute of processes monitored by the ProcessOnOnly agent did not match the actual process priority value. As part of the Monitor function, if the priority of a process is found to be different from the value that is configured for the Priority attribute, warning messages are logged in the following scenarios:
1. The process is started outside VCS control with a different priority.
2. The priority of the process is changed after it is started by VCS.

RESOLUTION:
The ProcessOnOnly agent is updated to set the current value of the priority of a process to the Priority attribute if these values are found to be different.

* 4006950 (Tracking ID: 4006979)

SYMPTOM:
When the AzureDisk resource comes online on a cluster node, it goes into the UNKNOWN state on all the other nodes.

DESCRIPTION:
When an AzureDisk resource is online on one node, the status of that resource appears as UNKNOWN, instead of OFFLINE, on the other nodes in the cluster. Also, if the resource is brought online on a different node, its status on the remaining nodes appears as UNKNOWN. However, if the resource is not online on any node, its status correctly appears as OFFLINE on all the nodes.
This issue occurs when the VM name on the Azure portal does not match the local hostname of the cluster node. The monitor operation of the agent compares these two values to identify whether the VM to which the AzureDisk resource is attached is part of a cluster or not. If the values do not match, the agent incorrectly concludes that the resource is attached to a VM outside the cluster. Therefore, it displays the status of the resource as UNKNOWN.

RESOLUTION:
The AzureDisk agent is modified to compare the VM name with the appropriate attribute of the agent so that the status of an AzureDisk resource is reported correctly.

* 4009762 (Tracking ID: 4009761)

SYMPTOM:
A lower NFSRestart resource fails to come online within the duration specified in OnlineTimeout when the share directory for NFSv4 lock state information contains millions of small files.

DESCRIPTION:
As part of the Online operation, the NFSRestart agent copies the NFSv4 state data of clients from the shared storage to the local path. However, if the source location contains millions of files, some of which may be stale, their movement may not be completed before the operation times out.

RESOLUTION:
A new action entry point named "cleanup" is provided, which removes stale files. The usage of the entry point is as follows:
$ hares -action <resname> cleanup -actionargs <days> -sys <sys>
  <days>: number of days, deleting files that are <days> old
Example:
$ hares -action NFSRestart_L cleanup -actionargs 30 -sys <sys>
The cleanup action ensures that files older than the number of days specified in the -actionargs option are removed; the minimum expected duration is 30 days. Thus, only the relevant files to be moved remain, and the Online operation is completed in time.

* 4016488 (Tracking ID: 4007764)

SYMPTOM:
The NFS locks related log file is flooded with the "sync_dir:copy failed for link" error messages.

DESCRIPTION:
The smsyncd daemon used by the NFSRestart agent copies the symbolic links and the NFS locks from the /var/statmon/sm directory to a specific directory. These files and links are used to track the clients who have set a lock on the NFS mount points. If this directory already has a symbolic link with the same name that the smsyncd daemon is trying to copy, the /bin/cp command fails and logs an error message.

RESOLUTION:
The smsyncd daemon is enhanced to copy the symbolic links even if a link with the same name is present.

* 4016625 (Tracking ID: 4016624)

SYMPTOM:
When a disk group is forcibly imported with ClearClone enabled, different DGIDs are assigned to the associated disks.

DESCRIPTION:
When the ForceImport option is used, a disk group gets imported with the available disks, regardless of whether all the required disks are available or not. In such a scenario, if the ClearClone attribute is enabled, the available disks are successfully imported, but their DGIDs are updated to new values. Thus, the disks within the same disk group end up with different DGIDs, which may cause issues with the functioning of the storage configuration.

RESOLUTION:
The DiskGroup agent is updated to allow the ForceImport and the ClearClone attributes to be set to the following values as per the configuration requirements. ForceImport can be set to 0 or 1. ClearClone can be set to 0, 1, or 2. ClearClone is disabled when set to 0 and enabled when set to 1 or 2. ForceImport is disabled when set to 0 and is ignored when ClearClone is set to 1. To enable both, ClearClone and ForceImport, set ClearClone to 2 and ForceImport to 1.
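
For example, to enable both ClearClone and ForceImport on a DiskGroup resource (a hedged sketch; <dg_res> is a placeholder for the DiskGroup resource name in your configuration):
    # haconf -makerw
    # hares -modify <dg_res> ClearClone 2
    # hares -modify <dg_res> ForceImport 1
    # haconf -dump -makero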

Patch ID: VRTSvxfen-7.4.1.2900

* 4028780 (Tracking ID: 4029261)

SYMPTOM:
An entire InfoScale cluster may go down unexpectedly if one of its nodes receives a RECONFIG message during a shutdown or a restart operation.

DESCRIPTION:
If a cluster node receives a RECONFIG message while a shutdown or a restart operation is in progress, it may participate in the fencing race. The node may also win the race and then proceed to shut down. If this situation occurs, the fencing module panics the nodes that lost the race, which may cause the entire cluster to go down.

RESOLUTION:
This hotfix updates the fencing module so that it stops a cluster node from joining a race, if it receives a RECONFIG message while a shutdown or a restart operation is in progress.

* 4037951 (Tracking ID: 4037048)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 3.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
4(RHEL8.4) is now introduced.

Patch ID: VRTSvxfen-7.4.1.2800

* 4000746 (Tracking ID: 4000745)

SYMPTOM:
The VxFEN process fails to start due to late discovery of the VxFEN disk group.

DESCRIPTION:
When I/O fencing starts, the VxFEN startup script creates the /etc/vxfentab file on each node. During disk-based fencing, the VxVM module may take a longer time to discover the VxFEN disk group. Because of this delay, the 'generate disk list' operation times out. Therefore, the VxFEN process fails to start and reports the following error: 'ERROR: VxFEN cannot generate vxfentab because vxfendg does not exist'

RESOLUTION:
A new tunable, getdisks_timeout, is introduced to specify the timeout value for the VxFEN disk group discovery. The maximum and the default value for this tunable is 600 seconds. You can set the value of this tunable by adding a getdisks_timeout=<time_in_sec> entry in the /etc/vxfenmode file.
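
For example, to allow up to 300 seconds for the VxFEN disk group discovery, add the following entry to the /etc/vxfenmode file (the value shown is illustrative):
    getdisks_timeout=300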

* 4019680 (Tracking ID: 4019674)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 3(RHEL8.3) is now introduced.

Patch ID: VRTSamf-7.4.1.2900

* 4037950 (Tracking ID: 4037048)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 3.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
4(RHEL8.4) is now introduced.

Patch ID: VRTSamf-7.4.1.2800

* 4019003 (Tracking ID: 4018791)

SYMPTOM:
A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.

DESCRIPTION:
A cluster node panics and generates a core dump, which indicates an issue with the AMF module. The AMF module function that locates an executable binary or a script using its absolute path fails to handle NULL values.

RESOLUTION:
The AMF module is updated to handle NULL values when locating an executable binary or a script using its absolute path.

* 4019679 (Tracking ID: 4019674)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 3(RHEL8.3) is now introduced.

Patch ID: VRTSamf-7.4.1.2100

* 4002154 (Tracking ID: 4002150)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.1.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 2(RHEL8.2) is now introduced.

Patch ID: VRTSamf-7.4.1.1600

* 3990020 (Tracking ID: 3990016)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 
Update 1(RHEL8.1).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux 
versions later than RHEL8.0.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 
1(RHEL8.1) is now introduced.

Patch ID: VRTSgab-7.4.1.2900

* 4037949 (Tracking ID: 4037048)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 3.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
4(RHEL8.4) is now introduced.

Patch ID: VRTSgab-7.4.1.2800

* 4016486 (Tracking ID: 4011683)

SYMPTOM:
The GAB module failed to start and the system log messages indicate failures with the mknod command.

DESCRIPTION:
The mknod command fails to start the GAB module because its format is invalid. If the names of multiple drivers in an environment contain the value "gab" as a substring, all their major device numbers get passed on to the mknod command. Instead, the command must contain the major device number for the GAB driver only.

RESOLUTION:
This hotfix addresses the issue so that the GAB module starts successfully even when other driver names in the environment contain "gab" as a substring.

* 4016487 (Tracking ID: 4007726)

SYMPTOM:
When a GAB message that is longer than the value specified by GAB_MAX_MSGSIZE is transferred, an error message is added to the VCS logs. However, the error message is not sufficiently descriptive.

DESCRIPTION:
The current error message does not mention the type of the GAB message that was transferred and the port that was used to transfer the message. Thus, the error message is not useful for troubleshooting.

RESOLUTION:
This hotfix addresses the issue by enhancing the error message that is logged. It now mentions whether the message type was DIRECTED or BROADCAST, as well as the port number that was used to transfer the GAB message.

* 4019677 (Tracking ID: 4019674)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 3(RHEL8.3) is now introduced.

Patch ID: VRTSgab-7.4.1.2100

* 4002152 (Tracking ID: 4002150)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.1.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 2(RHEL8.2) is now introduced.

Patch ID: VRTSgab-7.4.1.1600

* 3990018 (Tracking ID: 3990016)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 
Update 1(RHEL8.1).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux 
versions later than RHEL8.0.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 
1(RHEL8.1) is now introduced.

Patch ID: VRTSllt-7.4.1.2900

* 4022791 (Tracking ID: 4022792)

SYMPTOM:
A cluster node panics during an FSS I/O transfer over LLT.

DESCRIPTION:
In a Flexible Storage Sharing (FSS) setup, LLT uses sockets to transfer data between nodes. If a remote node is rebooted while the FSS I/O is running on the local node, the socket that was closed as part of the reboot process may still be used. If a NULL socket is thus accidentally used by the socket selection algorithm, it results in a node panic.

RESOLUTION:
This hotfix updates the LLT module to avoid the selection of such closed sockets.

* 4029112 (Tracking ID: 4029253)

SYMPTOM:
LLT may not reuse the buffer slots on which NAK is received from the earlier RDMA writes.

DESCRIPTION:
On receiving the buffer advertisement after an RDMA write, LLT also waits for the hardware/OS ACK for that RDMA write. Only after the ACK is received does LLT set the state of the buffers to free (usable). If the connection between the cluster nodes breaks after LLT receives the buffer advertisement but before it receives the ACK, the local node generates a NAK. LLT does not acknowledge this NAK, and so that specific buffer slot remains unusable. Over time, the number of buffer slots in the unusable state increases, which sets the flow control for the LLT client. This condition leads to an FSS I/O hang.

RESOLUTION:
This hotfix updates the LLT module to mark a buffer slot as free (usable) even when a NAK is received from the previous RDMA write.

* 4037049 (Tracking ID: 4037048)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 3.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
4(RHEL8.4) is now introduced.

Patch ID: VRTSllt-7.4.1.2800

* 3999398 (Tracking ID: 3989440)

SYMPTOM:
The dash (-) in the device name may cause the LLT link configuration to fail.

DESCRIPTION:
While configuring LLT links, if the LLT module finds a dash in the device name, it assumes that the device name is in the 'eth-<mac-address>' format and considers the string after the dash to be the MAC address. However, if the user specifies an interface name that includes a dash, the string after the dash is not intended to be a MAC address. In such a case, the LLT link configuration fails.

RESOLUTION:
The LLT module is updated to check for the string 'eth-' before validating the device name with the 'eth-<mac-address>' format. If the string 'eth-' is not found, LLT assumes the name to be an interface name.
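
For example, an /etc/llttab entry that uses an interface name containing a dash is now configured correctly (a hedged sketch; 'net-0' is a hypothetical interface name):
    link net-0 net-0 - ether - -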

* 4002584 (Tracking ID: 3994996)

SYMPTOM:
The -H miscellaneous flag is added to support new functionalities in lltconfig, including a tunable that allows skb allocation with the SLEEP flag.

DESCRIPTION:
The -H miscellaneous flag is added as a container for new functionalities in lltconfig, because very few single-letter options are left to assign to each new functionality.

RESOLUTION:
Inside the -H flag:
1. Add a tunable to allow skb allocation with the SLEEP flag, in case memory is scarce.
2. Add an skb_alloc failure count to the lltstat output.

* 4003442 (Tracking ID: 3983418)

SYMPTOM:
In a rare case, after a panic or a reboot of a node, it may fail to join the CVM master due to an inconsistent LLT port state on the master.

DESCRIPTION:
When a node tries to join the cluster after a reboot or a panic, in a rare case, on one of the remaining nodes the port state of CVM or any other port may be in an inconsistent state with respect to LLT.

RESOLUTION:
This hotfix updates the LLT module to fix the issue by not accepting a particular type of a packet when not connected to the remote node and also adds more states to log into the LLT circular buffer.

* 4010546 (Tracking ID: 4018581)

SYMPTOM:
The LLT module fails to start and the system log messages indicate missing IP address.

DESCRIPTION:
When only the low priority LLT links are configured over UDP, UDPBurst mode must be disabled. UDPBurst mode must only be enabled when the high priority LLT links are configured over UDP. If the UDPBurst mode gets enabled while configuring the low priority links, the LLT module fails to start and logs the following error: "V-14-2-15795 missing ip address / V-14-2-15800 UDPburst:Failed to get link info".

RESOLUTION:
This hotfix updates the LLT module to not enable the UDPBurst mode when only the low priority LLT links are configured over UDP.
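
For reference, a minimal sketch of a low-priority LLT-over-UDP link entry in /etc/llttab, the configuration in which UDPBurst must remain disabled (the link tag, UDP port, and IP address are hypothetical placeholders):
    link-lowpri link2 udp - udp 50001 - 192.168.10.1 -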

* 4016483 (Tracking ID: 4016484)

SYMPTOM:
The vxexplorer utility panics the node on which it runs if the LLT version on the node is llt-rhel8_x86_64-Patch-7.4.1.2100 or later.

DESCRIPTION:
The vxexplorer utility panics the node on which it runs if the LLT version on the node is llt-rhel8_x86_64-Patch-7.4.1.2100 or later.

RESOLUTION:
This hotfix addresses the issue so that the vxexplorer utility does not panic nodes that run on the RHEL 8 platform.

* 4019676 (Tracking ID: 4019674)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 3(RHEL8.3) is now introduced.

Patch ID: VRTSllt-7.4.1.2200

* 4012089 (Tracking ID: 4002150)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.1.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 2(RHEL8.2) is now introduced.

Patch ID: VRTSllt-7.4.1.2100

* 4002151 (Tracking ID: 4002150)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.1.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 2(RHEL8.2) is now introduced.

Patch ID: VRTSllt-7.4.1.1600

* 3990017 (Tracking ID: 3990016)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 
Update 1(RHEL8.1).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux 
versions later than RHEL8.0.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 
1(RHEL8.1) is now introduced.

Patch ID: VRTSodm-7.4.1.2900

* 4037955 (Tracking ID: 4037575)

SYMPTOM:
ODM module failed to load on RHEL8.4

DESCRIPTION:
RHEL8.4 is a new release, and it has some kernel changes that caused the ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on RHEL8.4.

Patch ID: VRTSodm-7.4.1.2800

* 4018202 (Tracking ID: 4018200)

SYMPTOM:
ODM module failed to load on RHEL8.3

DESCRIPTION:
RHEL8.3 is a new release, and it has some kernel changes that caused the ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on RHEL8.3.

Patch ID: VRTSodm-7.4.1.2600

* 4011973 (Tracking ID: 4012094)

SYMPTOM:
The VRTSodm driver will not load with the 7.4.1.2600 VRTSvxfs patch.

DESCRIPTION:
VRTSodm needs to be recompiled due to recent changes in the VRTSvxfs header files, because of which some symbols are not resolved.

RESOLUTION:
Recompiled VRTSodm with the new changes in the VRTSvxfs header files.

Patch ID: VRTSodm-7.4.1.2100

* 4001381 (Tracking ID: 4001380)

SYMPTOM:
ODM module failed to load on RHEL8.2

DESCRIPTION:
RHEL8.2 is a new release, and it has some kernel changes that caused the ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on RHEL8.2

Patch ID: VRTSodm-7.4.1.1600

* 3989416 (Tracking ID: 3989415)

SYMPTOM:
ODM module failed to load on RHEL8.1

DESCRIPTION:
RHEL8.1 is a new release, and it has some kernel changes that caused the ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on RHEL8.1

Patch ID: VRTSglm-7.4.1.2900

* 4037956 (Tracking ID: 4037621)

SYMPTOM:
GLM module failed to load on RHEL8.4

DESCRIPTION:
RHEL8.4 is a new release, and it has some kernel changes that caused the GLM module to fail to load on it.

RESOLUTION:
Added code to support GLM on RHEL8.4.

Patch ID: VRTSglm-7.4.1.2800

* 4014715 (Tracking ID: 4011596)

SYMPTOM:
The man page is missing details about a supported feature.

DESCRIPTION:
The new "-h" option of glmdump, which uses the hacli utility for communicating across the nodes in the cluster, needs to be documented in the man page.

RESOLUTION:
Added the details about the feature supported by glmdump to the man page.

* 4016408 (Tracking ID: 4018213)

SYMPTOM:
GLM module failed to load on RHEL8.3

DESCRIPTION:
RHEL8.3 is a new release, and it has some kernel changes that caused the GLM module to fail to load on it.

RESOLUTION:
Added code to support GLM on RHEL8.3.

Patch ID: VRTSglm-7.4.1.1700

* 3999030 (Tracking ID: 3999029)

SYMPTOM:
The GLM module failed to unload because of a VCS service hold.

DESCRIPTION:
The GLM module failed to unload during systemd shutdown because the glm service was racing with the vcs service. VCS takes a hold on GLM, which resulted in the module failing to unload.

RESOLUTION:
The code is modified to add a vcs service dependency in glm.service, so that the GLM module is unloaded only after VCS releases its hold during systemd shutdown.
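
A hedged sketch of the kind of systemd ordering this implies (illustrative only; the unit file shipped with the package is authoritative). An ordering directive such as the following in the [Unit] section of glm.service would make systemd start GLM before VCS and, because shutdown ordering is the reverse of startup ordering, stop GLM only after VCS has released its hold:
    [Unit]
    Before=vcs.service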

* 4001383 (Tracking ID: 4001382)

SYMPTOM:
GLM module failed to load on RHEL8.2

DESCRIPTION:
RHEL8.2 is a new release, and it has some kernel changes that caused the GLM module to fail to load on it.

RESOLUTION:
Added code to support GLM on RHEL8.2


SUMMARY OF KNOWN ISSUES
-----------------------
4038257 (4038254)  [RHEL 8.4] AMF Registration Issue with Mount and CFSMount Agents 


DETAILS OF KNOWN ISSUES
-----------------------
* INCIDENT NO:4038257     TRACKING ID:4038254

SYMPTOM: The system panics when the Mount and the CFSMount agents register themselves with AMF. 

WORKAROUND: To prevent the system panic, disable IMF for the Mount and the CFSMount agents.
Run the following command before you upgrade the operating system:
# haimfconfig -disable -agent Mount CFSMount
This command disables IMF for the specified agents by changing the Mode value to 0 for each agent and for all the associated resources whose Mode values were 
overridden.
- If VCS is running, the command prompts you to confirm whether you want to make the configuration changes persistent. If you choose No, the command exits. If 
you choose Yes, it disables IMF and saves the update to the configuration by using the haconf -dump -makero command.
- If VCS is not running, the Mode value for the agents is modified in the VCS configuration file. Before it makes any changes to the configuration files, the command
prompts you for confirmation. If you choose No, the command exits. If you choose Yes, the VCS configuration file is updated.

INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel8_x86_64-Patch-7.4.1.2800.tar.gz to /tmp
2. Untar infoscale-rhel8_x86_64-Patch-7.4.1.2800.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel8_x86_64-Patch-7.4.1.2800.tar.gz
    # tar xf /tmp/infoscale-rhel8_x86_64-Patch-7.4.1.2800.tar
3. Install the hotfix. (Note that the installation of this P-Patch will cause downtime.)
    # cd /tmp/hf
    # ./installVRTSinfoscale741P2800 [<host1> <host2>...]

You can also install this patch together with the 7.4.1 base release by using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.4.1 directory and invoke the installer script
   with the -patch_path option, where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE
