
Security Patch IS-8.0U2SP1 for SLES15


Summary

Security Patch IS-8.0U2SP1 for SLES15

Description

This patch provides a security update on the IS-8.0 Update 2 patch for the SLES15 platform.

The latest cumulative patch on IS-8.0 for SLES15 is 8.0 Update 2 (patch version: InfoScale 8.0.0.2900).

 

SORT ID: 20693

 

Fixes the following incidents:

4154821,4121828,4108322,4111302,4111442,4111560,4112219,4113223,4113225,4113328,4113331,4113342,4115475,4067609,
4067635,4070098,4078531,4079345,4080041,4080105,4080122,4080269,4080276,4080277,4080579,4080845,4080846,4081790,
4083337,4085619,4087233,4087439,4087791,4088076,4088483,4088762,4081684,4057420,4062799,4065841,4066213,4068407,
4065569,4066259,4066735,4066834,4067237,4065628,4154894,4057432,4119105,4111349,4114322,4089136,4065680,4154855,
4092518,4097466,4107367,4111457,4112417,4114621,4118795,4119023,4119107,4123143,4084880,4088079,4111350,4111910,
4112919,4095889,4068960,4071108,4072228,4078335,4078520,4079142,4079173,4082260,4082865,4083335,4085623,4085839,
4086085,4088341,4081150,4083948,4055808,4056684,4062606,4065565,4065651,4065679 

 

Patch IDs:

VRTSaslapm-8.0.0.2700-SLES15 for VRTSaslapm
VRTSodm-8.0.0.3100-SLES15 for VRTSodm
VRTSvxfs-8.0.0.3100-SLES15 for VRTSvxfs
VRTSvxvm-8.0.0.2700-SLES15 for VRTSvxvm 

                          * * * READ ME * * *
                       * * * InfoScale 8.0 * * *
                         * * * Patch 3100 * * *
                         Patch Date: 2024-02-19


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 8.0 Patch 3100


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES15 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSaslapm
VRTSodm
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Enterprise 8.0


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxvm-8.0.0.2700
* 4154821 (4149248) Security vulnerabilities have been discovered in third-party components (OpenSSL, Curl, and libxml) employed by VxVM.
Patch ID: VRTSvxvm-8.0.0.2600
* 4121828 (4124457) VxVM Support for SLES15 Azure SP4 kernel
Patch ID: VRTSvxvm-8.0.0.2300
* 4108322 (4107083) In case of EMC BCV NR LUNs, vxconfigd taking a long time to start post reboot.
* 4111302 (4092495) VxVM Support on SLES15SP4
* 4111442 (4066785) create new option usereplicatedev=only to import the replicated LUN only.
* 4111560 (4098391) Continuous system crash is observed during VxVM installation.
* 4112219 (4069134) "vxassist maxsize alloc:array:<enclosure_name>" command may fail.
* 4113223 (4093067) System panic occurs because of NULL pointer in block device structure.
* 4113225 (4068090) System panic occurs because of block device inode ref count leaks.
* 4113328 (4102439) Volume Manager Encryption EKM Key Rotation (vxencrypt rekey) Operation Fails on IS 7.4.2/rhel7
* 4113331 (4105565) In CVR environment, system panic due to NULL pointer when VVR was doing recovery.
* 4113342 (4098965) Crash at memset function due to invalid memory access.
* 4115475 (4017334) vxio stack trace warning message kmsg_mblk_to_msg can be seen in systemlog
Patch ID: VRTSvxvm-8.0.0.1800
* 4067609 (4058464) vradmin resizevol fails when FS is not mounted on master.
* 4067635 (4059982) vradmind need not check for rlink connect during migrate.
* 4070098 (4071345) Unplanned fallback synchronisation is unresponsive
* 4078531 (4075860) Tutil putil rebalance flag is not getting cleared during +4 or more node addition
* 4079345 (4069940) FS mount failed during Cluster configuration on 24-node physical HP BOM2 setup.
* 4080041 (4056953) 3PAR PE LUNs are reported in error state by 3PAR ASL.
* 4080105 (4045837) Sub disks are in relocate state after exceed fault slave node panic.
* 4080122 (4044068) After disc replacement, Replace Node operation failed at Configure Netbackup stage.
* 4080269 (4044898) Copy rlink tags from reprec to info rec, through vxdg upgrade path.
* 4080276 (4065145) multivolume and vset not able to overwrite encryption tags on secondary.
* 4080277 (3966157) SRL batching feature is broken
* 4080579 (4077876) System is crashed when EC log replay is in progress after node reboot.
* 4080845 (4058166) Increase DCM log size based on volume size without exceeding region size limit of 4mb.
* 4080846 (4058437) Replication between 8.0 and 7.4.x fails due to sector size field.
* 4081790 (4080373) SFCFSHA configuration failed on RHEL 8.4.
* 4083337 (4081890) On RHEL8 NBFS/Access commands like python3, sort, sudo, ssh, etc are generating core dump during execution of the command mkfs.vxfs & mkfs.ext4.
* 4085619 (4086718) VxVM modules fail to load with latest minor kernel of SLES15SP2
* 4087233 (4086856) For Appliance FLEX product using VRTSdocker-plugin, need to add platform-specific dependencies service ( docker.service and podman.service ) change.
* 4087439 (4088934) Kernel Panic while running LM/CFS CONFORMANCE - variant (SLES15SP3)
* 4087791 (4087770) NBFS: Data corruption due to skipped full-resync of detached mirrors of volume after DCO repair operation
* 4088076 (4054685) In case of CVR environment, RVG recovery gets hung in linux platforms.
* 4088483 (4088484) Failed to load DMP_APM NVME modules
* 4088762 (4087099) DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6.
Patch ID: VRTSvxvm-8.0.0.1700
* 4081684 (4082799) A security vulnerability exists in the third-party component libcurl.
Patch ID: VRTSvxvm-8.0.0.1600
* 4057420 (4060462) Nidmap information is not cleared after a node leaves, resulting in add node failure subsequently.
* 4062799 (4064208) Node failed to join the existing cluster after bits are upgraded to a newer version.
* 4065841 (4065495) Add support for DELL EMC PowerStore.
* 4066213 (4052580) Supporting multipathing for NVMe devices under VxVM.
* 4068407 (4068404) ASL request for HPE 3PAR/Primera/Alletra 9000 ALUA support.
Patch ID: VRTSvxvm-8.0.0.1400
* 4057420 (4060462) Nidmap information is not cleared after a node leaves, resulting in add node failure subsequently.
* 4065569 (4056156) VxVM Support for SLES15 Sp3
* 4066259 (4062576) hastop -local never finishes on Rhel8.4 and RHEL8.5 servers with latest minor kernels due to hang in vxdg deport command.
* 4066735 (4057526) Adding check for init while accessing /var/lock/subsys/ path in vxnm-vxnetd.sh script.
* 4066834 (4046007) The private disk region gets corrupted if the cluster name is changed in FSS environment.
* 4067237 (4058894) Messages in /var/log/messages regarding "ignore_device".
Patch ID: VRTSvxvm-8.0.0.1300
* 4065628 (4065627) VxVM modules failed to load after an OS or kernel upgrade.
Patch ID: VRTSodm-8.0.0.3100
* 4154894 (4144269) After installing VRTSvxfs ODM fails to start.
Patch ID: VRTSodm-8.0.0.2900
* 4057432 (4056673) Rebooting the system results into emergency mode due to corruption of module dependency files. Incorrect vxgms dependency in odm service file.
* 4119105 (4119104) ODM support for SLES15-SP4 azure kernel.
Patch ID: VRTSodm-8.0.0.2600
* 4111349 (4092232) ODM support for SLES15 SP4.
Patch ID: VRTSodm-8.0.0.2500
* 4114322 (4114321) VRTSodm driver will not load with VRTSvxfs 8.0.0.2500 patch.
Patch ID: VRTSodm-8.0.0.1800
* 4089136 (4089135) VRTSodm driver does not load with VRTSvxfs patch.
Patch ID: VRTSodm-8.0.0.1200
* 4065680 (4056799) ODM support for SLES15 SP3.
Patch ID: VRTSvxfs-8.0.0.3100
* 4154855 (4141665) Security vulnerabilities exist in the Zlib third-party components used by VxFS.
Patch ID: VRTSvxfs-8.0.0.2900
* 4092518 (4096267) Veritas File Replication jobs might fail when a large number of jobs run in parallel.
* 4097466 (4114176) After failover, job sync fails with error "Device or resource busy".
* 4107367 (4108955) VFR job hangs on source if thread creation fails on target.
* 4111457 (4117827) For EO compliance, there is a requirement to support 3 types of log file permissions (600, 640 and 644, with 600 being the default); a new eo_perm tunable is added to the vxtunefs command to manage the log file permissions.
* 4112417 (4094326) mdb invocation displays message "failed to add vx_sl_node_level walker: walk name already in use"
* 4114621 (4113060) On newer linux kernels, executing a binary on a vxfs mountpoint resulted in an EINVAL error.
* 4118795 (4100021) Running setfacl followed by getfacl resulting in "No such device or address" error.
* 4119023 (4116329) While checking FS sanity with the help of "fsck -o full -n" command, we tried to correct the FS flag value (WORM/Softworm), but failed because -n (read-only) option was given.
* 4119107 (4119106) VxFS support for SLES15-SP4 azure kernel.
* 4123143 (4123144) fsck binary generating coredump
Patch ID: VRTSvxfs-8.0.0.2600
* 4084880 (4084542) Enhance fsadm defrag report to display if FS is badly fragmented.
* 4088079 (4087036) The fsck binary has been updated to fix a failure while running with the "-o metasave" option on a shared volume.
* 4111350 (4098085) VxFS support for SLES15 SP4.
* 4111910 (4090127) CFS hang in vx_searchau().
Patch ID: VRTSvxfs-8.0.0.2500
* 4112919 (4110764) Security vulnerability observed in Zlib, a third-party component used by VxFS.
Patch ID: VRTSvxfs-8.0.0.2100
* 4095889 (4095888) Security vulnerabilities exist in the Sqlite third-party components used by VxFS.
Patch ID: VRTSvxfs-8.0.0.1800
* 4068960 (4073203) Veritas file replication might generate a core while replicating the files to target.
* 4071108 (3988752) Use ldi_strategy() routine instead of bdev_strategy() for IO's in solaris.
* 4072228 (4037035) VxFS should have the ability to control the number of inactive processing threads.
* 4078335 (4076412) Addressing Executive Order (EO) 14028, initial requirements intended to improve the Federal Government's investigative and remediation capabilities related to cybersecurity incidents.
* 4078520 (4058444) Loop mounts using files on VxFS fail on Linux systems.
* 4079142 (4077766) VxFS kernel module might leak memory during readahead of directory blocks.
* 4079173 (4070217) Command fsck might fail with 'cluster reservation failed for volume' message for a disabled cluster-mounted filesystem.
* 4082260 (4070814) Security Vulnerability observed in Zlib a third party component VxFS uses.
* 4082865 (4079622) Existing migration read/write iter operation handling is not fully functional as vxfs uses normal read/write file operation only.
* 4083335 (4076098) Fix migration issues seen with falcon-sensor.
* 4085623 (4085624) While running fsck, fsck might dump core.
* 4085839 (4085838) Command fsck may generate core due to processing of zero size attribute inode.
* 4086085 (4086084) VxFS mount operation causes system panic.
* 4088341 (4065575) Write operation might be unresponsive on a local mounted VxFS filesystem in a no-space condition
Patch ID: VRTSvxfs-8.0.0.1700
* 4081150 (4079869) Security Vulnerability in VxFS third party components
* 4083948 (4070814) Security Vulnerability in VxFS third party component Zlib
Patch ID: VRTSvxfs-8.0.0.1400
* 4055808 (4062971) Enable partition directory on WORM file system
* 4056684 (4056682) New features information on a filesystem with fsadm(file system administration utility) from a device is not displayed.
* 4062606 (4062605) Minimum retention time cannot be set if the maximum retention time is not set.
* 4065565 (4065669) Creating non-WORM checkpoints fails when the tunables - minimum retention time and maximum retention time are set.
* 4065651 (4065666) Enable partition directory on WORM file system having WORM enabled on files with retention period not expired.
Patch ID: VRTSvxfs-8.0.0.1300
* 4065679 (4056797) VxFS support for SLES15 SP3.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxvm-8.0.0.2700

* 4154821 (Tracking ID: 4149248)

SYMPTOM:
Third-party components (OpenSSL, curl, and libxml) used by VxVM exhibit security vulnerabilities.

DESCRIPTION:
VxVM utilizes current versions of OpenSSL, curl, and libxml, which have been reported to have security vulnerabilities.

RESOLUTION:
Upgrades to newer versions of OpenSSL, curl, and libxml have been implemented to address the reported security vulnerabilities.

Patch ID: VRTSvxvm-8.0.0.2600

* 4121828 (Tracking ID: 4124457)

SYMPTOM:
VxVM module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
VxVM module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSvxvm-8.0.0.2300

* 4108322 (Tracking ID: 4107083)

SYMPTOM:
In case of EMC BCV NR LUNs, vxconfigd taking a long time to start post reboot.

DESCRIPTION:
This issue occurs because BCV NR LUNs go into an error state: the SCSI inquiry succeeds, but the disk retry loop runs for each disk and takes time. This is a corner case that was not handled for BCV NR LUNs.

RESOLUTION:
Necessary code changes are done in the case of BCV NR LUNs: when the SCSI inquiry succeeds, the disk is marked as failed so that vxconfigd starts quickly.

* 4111302 (Tracking ID: 4092495)

SYMPTOM:
VxVM installation fails on SLES15 SP4

DESCRIPTION:
SLES15 SP4 ships the 5.14.21 kernel, which includes multiple changes to block devices, IO handling, the partition table in gendisk, and so on; the VxVM code was not compatible with these kernel changes.

RESOLUTION:
Required changes have been made to make VxVM compatible with SLES15 SP4.

* 4111442 (Tracking ID: 4066785)

SYMPTOM:
When the replicated disks are in SPLIT mode, importing its disk group failed with "Device is a hardware mirror".

DESCRIPTION:
When the replicated disks are in SPLIT mode (readable and writable), importing their disk group failed with "Device is a hardware mirror". The third-party array does not expose a disk attribute indicating SPLIT mode. With this enhancement, the replicated disk group can be imported with the option `-o usereplicatedev=only`.

RESOLUTION:
The code is enhanced to import the replicated disk group with option `-o usereplicatedev=only`.
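As an illustration, the enhanced import described above could be invoked as follows. The disk group name "repdg" is a placeholder, and the exact command syntax may vary by release; see the vxdg(1M) manual page for your version.

```shell
# Hypothetical invocation: import only the replicated (SPLIT-mode) LUNs
# of the disk group. "repdg" is a placeholder disk group name.
vxdg -o usereplicatedev=only import repdg
```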

* 4111560 (Tracking ID: 4098391)

SYMPTOM:
Kernel panic is observed with following stack:

#6 [ffffa479c21cf6f0] page_fault at ffffffffb240130e
    [exception RIP: bfq_bio_bfqg+37]
    RIP: ffffffffb1e78135  RSP: ffffa479c21cf7a0  RFLAGS: 00010002
    RAX: 000000000000001f  RBX: 0000000000000000  RCX: ffffa479c21cf860
    RDX: ffff8bd779775000  RSI: ffff8bd795b2fa00  RDI: ffff8bd795b2fa00
    RBP: ffff8bd78f136000   R8: 0000000000000000   R9: ffff8bd793a5b800
    R10: ffffa479c21cf828  R11: 0000000000001000  R12: ffff8bd7796b6e60
    R13: ffff8bd78f136000  R14: ffff8bd795b2fa00  R15: ffff8bd7946ad0bc
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
#7 [ffffa479c21cf7b0] bfq_bic_update_cgroup at ffffffffb1e78458
#8 [ffffa479c21cf7e8] bfq_bio_merge at ffffffffb1e6f47f
#9 [ffffa479c21cf840] blk_mq_submit_bio at ffffffffb1e48c09
#10 [ffffa479c21cf8c8] submit_bio_noacct at ffffffffb1e3c7e3
#11 [ffffa479c21cf958] submit_bio at ffffffffb1e3c87b
#12 [ffffa479c21cf9a8] submit_bh_wbc at ffffffffb1d2536a
#13 [ffffa479c21cf9e0] block_read_full_page at ffffffffb1d27ac1
#14 [ffffa479c21cfa90] do_read_cache_page at ffffffffb1c2f7e5
#15 [ffffa479c21cfb48] read_part_sector at ffffffffb1e546b5
#16 [ffffa479c21cfb60] read_lba at ffffffffb1e595d2
#17 [ffffa479c21cfba8] efi_partition at ffffffffb1e59f4d
#18 [ffffa479c21cfcb8] blk_add_partitions at ffffffffb1e54377
#19 [ffffa479c21cfcf8] bdev_disk_changed at ffffffffb1d2a8fa
#20 [ffffa479c21cfd30] __blkdev_get at ffffffffb1d2c16c
#21 [ffffa479c21cfda0] blkdev_get at ffffffffb1d2c2b4
#22 [ffffa479c21cfdb8] __device_add_disk at ffffffffb1e5107e
#23 [ffffa479c21cfe20] dmp_register_disk at ffffffffc0e68ae7 [vxdmp]
#24 [ffffa479c21cfe50] dmp_reconfigure_db at ffffffffc0e8d8bd [vxdmp]
#25 [ffffa479c21cfe80] dmpioctl at ffffffffc0e75cd5 [vxdmp]
#26 [ffffa479c21cfe90] dmp_ioctl at ffffffffc0e9d469 [vxdmp]
#27 [ffffa479c21cfea8] blkdev_ioctl at ffffffffb1e4ed19
#28 [ffffa479c21cfef0] block_ioctl at ffffffffb1d2a719
#29 [ffffa479c21cfef8] ksys_ioctl at ffffffffb1cfb262
#30 [ffffa479c21cff30] __x64_sys_ioctl at ffffffffb1cfb296
#31 [ffffa479c21cff38] do_syscall_64 at ffffffffb1a0538b
#32 [ffffa479c21cff50] entry_SYSCALL_64_after_hwframe at ffffffffb240008c

DESCRIPTION:
VxVM causes a kernel panic because of a NULL pointer dereference in kernel code when the BFQ disk IO scheduler is used. This is observed on SLES15 SP3 minor kernels >= 5.3.18-150300.59.68.1 and SLES15 SP4 minor kernels >= 5.14.21-150400.24.11.1.

RESOLUTION:
No kernel fix is available as of now; it is recommended to use mq-deadline as the IO scheduler. Code changes have been done to automatically change the disk IO scheduler to mq-deadline.
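The patch switches the scheduler automatically, but the recommendation can also be applied by hand through the standard Linux sysfs interface. The device name "sda" below is a placeholder for the affected disk.

```shell
# Inspect the current IO scheduler for a disk; the active one is shown
# in brackets, e.g. "[bfq] mq-deadline none". "sda" is a placeholder.
cat /sys/block/sda/queue/scheduler

# Switch the disk to the recommended mq-deadline scheduler.
echo mq-deadline > /sys/block/sda/queue/scheduler
```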

* 4112219 (Tracking ID: 4069134)

SYMPTOM:
"vxassist maxsize alloc:array:<enclosure_name>" command may fail with below error:
VxVM vxassist ERROR V-5-1-18606 No disks match specification for Class: array, Instance: <enclosure_name>

DESCRIPTION:
If the enclosure name is longer than 16 characters, the "vxassist maxsize alloc:array" command can fail. This is because an enclosure name of more than 16 characters gets truncated while being copied from VxDMP to VxVM, which in turn causes the vxassist command above to fail.

RESOLUTION:
Code changes are done to avoid the truncation of enclosure name while copying from VxDMP to VxVM.
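A minimal sketch of the failure mode described above, using a made-up enclosure name: copying a name longer than 16 characters into a 16-character field truncates it, so a later exact-match lookup against the full name fails. The name and message are illustrative, not the actual VxVM internals.

```shell
# Illustrative only: "emc_clariion0_1234" is a made-up 18-character
# enclosure name; the real limit of 16 characters comes from the patch note.
name="emc_clariion0_1234"
copied=$(printf '%.16s' "$name")   # truncated during the VxDMP-to-VxVM copy
if [ "$copied" != "$name" ]; then
  echo "no disks match specification for instance: $name"
fi
```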

* 4113223 (Tracking ID: 4093067)

SYMPTOM:
System panicked in the following stack:

#9  [] page_fault at  [exception RIP: bdevname+26]
#10 [] get_dip_from_device  [vxdmp]
#11 [] dmp_node_to_dip at [vxdmp]
#12 [] dmp_check_nonscsi at [vxdmp]
#13 [] dmp_probe_required at [vxdmp]
#14 [] dmp_check_disabled_policy at [vxdmp]
#15 [] dmp_initiate_restore at [vxdmp]
#16 [] dmp_daemons_loop at [vxdmp]

DESCRIPTION:
After getting the block_device from the OS, DMP did not check block_device->bd_part for NULL. This NULL pointer caused a system panic when bdevname() was called.

RESOLUTION:
The code changes have been done to fix the problem.

* 4113225 (Tracking ID: 4068090)

SYMPTOM:
System panicked in the following stack:

#7 page_fault at ffffffffbce010fe
[exception RIP: vx_bio_associate_blkg+56]
#8 vx_dio_physio at ffffffffc0f913a3 [vxfs]
#9 vx_dio_rdwri at ffffffffc0e21a0a [vxfs]
#10 vx_dio_read at ffffffffc0f6acf6 [vxfs]
#11 vx_read_common_noinline at ffffffffc0f6c07e [vxfs]
#12 vx_read1 at ffffffffc0f6c96b [vxfs]
#13 vx_vop_read at ffffffffc0f4cce2 [vxfs]
#14 vx_naio_do_work at ffffffffc0f240bb [vxfs]
#15 vx_naio_worker at ffffffffc0f249c3 [vxfs]

DESCRIPTION:
To get a VxVM volume's block_device from gendisk, VxVM calls bdget_disk(), which increases the device inode reference count. The matching bdput() call that decreases the count was missing in the VxVM code, so the inode reference count leaked, which could cause a panic in VxFS when issuing IO on a VxVM volume.

RESOLUTION:
The code changes have been done to fix the problem.

* 4113328 (Tracking ID: 4102439)

SYMPTOM:
A failure was observed when running the vxencrypt rekey operation on an encrypted volume (to perform key rotation).

DESCRIPTION:
The KMS token is 64 bytes in size, but the code restricted the token size to 63 bytes and threw an error if the token was larger than 63 bytes.

RESOLUTION:
The issue is resolved by setting the expected token size to the size of a KMS token, which is 64 bytes.
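A sketch of the fix, under the assumption that validation simply compares the token length against a maximum. The function name and messages are hypothetical; only the 63-versus-64-byte limit comes from the patch note.

```shell
# Illustrative only: validate a KMS token's length. The buggy code used a
# 63-byte limit; the fix raises it to the actual KMS token size, 64 bytes.
max_token_len=64
check_token() {
  if [ ${#1} -gt $max_token_len ]; then
    echo "error: token too long"
    return 1
  fi
  echo "token accepted"
}
# A full 64-byte token now passes validation.
check_token "$(printf 'A%.0s' $(seq 1 64))"
```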

* 4113331 (Tracking ID: 4105565)

SYMPTOM:
In Cluster Volume Replication(CVR) environment, system panic with below stack when Veritas Volume Replicator(VVR) was doing recovery:
[] do_page_fault 
[] page_fault 
[exception RIP: volvvr_rvgrecover_nextstage+747] 
[] volvvr_rvgrecover_done [vxio]
[] voliod_iohandle[vxio]
[] voliod_loop at[vxio]

DESCRIPTION:
A race condition could cause VVR to fail to trigger a DCM flush SIO. VVR did not sanity-check this SIO, which triggered the system panic.

RESOLUTION:
Code changes have been made to do a sanity check of the DCM flush sio.

* 4113342 (Tracking ID: 4098965)

SYMPTOM:
Vxconfigd dumping Core when scanning IBM XIV Luns with following stack.

#0  0x00007fe93c8aba54 in __memset_sse2 () from /lib64/libc.so.6
#1  0x000000000061d4d2 in dmp_getenclr_ioctl ()
#2  0x00000000005c54c7 in dmp_getarraylist ()
#3  0x00000000005ba4f2 in update_attr_list ()
#4  0x00000000005bc35c in da_identify ()
#5  0x000000000053a8c9 in find_devices_in_system ()
#6  0x000000000053aab5 in mode_set ()
#7  0x0000000000476fb2 in ?? ()
#8  0x00000000004788d0 in main ()

DESCRIPTION:
When more than one disk array is connected, an incorrect memory address could be used, causing two possible issues:

1. If the incorrect memory address exceeds the range of valid virtual memory, it triggers a segmentation fault and crashes vxconfigd.
2. If the incorrect memory address does not exceed the range of valid virtual memory, it causes memory corruption, which may not trigger a vxconfigd crash.

RESOLUTION:
Code changes have been made to correct the problem.

* 4115475 (Tracking ID: 4017334)

SYMPTOM:
VXIO call stack trace generated in /var/log/messages

DESCRIPTION:
This issue occurs due to a limitation in the way InfoScale interacts with the RHEL8.2 kernel.
 Call Trace:
 kmsg_sys_rcv+0x16b/0x1c0 [vxio]
 nmcom_get_next_mblk+0x8e/0xf0 [vxio]
 nmcom_get_hdr_msg+0x108/0x200 [vxio]
 nmcom_get_next_msg+0x7d/0x100 [vxio]
 nmcom_wait_msg_tcp+0x97/0x160 [vxio]
 nmcom_server_main_tcp+0x4c2/0x11e0 [vxio]

RESOLUTION:
Changes were made in header files for function definitions when the RHEL version is >= 8.2.
This kernel warning can be safely ignored, as it has no impact on functionality.

Patch ID: VRTSvxvm-8.0.0.1800

* 4067609 (Tracking ID: 4058464)

SYMPTOM:
vradmin resizevol fails when the FS is not mounted on the master.

DESCRIPTION:
The vradmin resizevol command resizes the data volume and the FS on the primary site, whereas on the secondary site it resizes only the data volume, as the FS is not mounted there.

vradmin resizevol ships the command to the logowner at the vradmind level; vradmind on the logowner in turn ships the low-level vx commands to the master, where they are finally executed.

RESOLUTION:
Changes were introduced to ship the command to the node on which the FS is mounted. The CVM node name on which the FS is mounted must be provided, which vradmind then uses to ship the command to that node.

* 4067635 (Tracking ID: 4059982)

SYMPTOM:
In a container environment, the vradmin migrate command fails multiple times because the rlink is not in the connected state.

DESCRIPTION:
In VVR, rlinks are disconnected and reconnected during the replication lifecycle. If the vradmin migrate command is executed in this window, it encounters errors. It internally causes vradmind to make configuration changes multiple times, which impacts subsequent vradmin commands.

RESOLUTION:
vradmin migrate requires rlink data to be up to date on both primary and secondary. It internally executes low-level commands such as vxrvg makesecondary and vxrvg makeprimary to swap the primary and secondary roles. These commands do not depend on the rlink being in the connected state, so the rlink connection handling was removed.

* 4070098 (Tracking ID: 4071345)

SYMPTOM:
Replication is unresponsive after failed site is up.

DESCRIPTION:
Autosync and unplanned fallback synchronisation had issues when the RVG contained a mix of cloud and non-cloud volumes: once a cloud volume was found, the remaining volumes were skipped for synchronisation.

RESOLUTION:
Fixed the condition to iterate over all volumes.

* 4078531 (Tracking ID: 4075860)

SYMPTOM:
On RHEL8 NBFS/Access commands like python3, sort, sudo, ssh, etc are generating core dump during execution of the command mkfs.vxfs & mkfs.ext4 in parallel

DESCRIPTION:
On RHEL8, NBFS/Access commands like python3, sort, sudo, ssh, etc. generate core dumps during parallel execution of the mkfs.vxfs and mkfs.ext4 commands. This was happening due to missing FPU protection around use of the FPU instruction set.

RESOLUTION:
A fix is added to use FPU protection while using the FPU instruction set.

* 4079345 (Tracking ID: 4069940)

SYMPTOM:
FS mount failed during Cluster configuration on 24-node physical BOM setup.

DESCRIPTION:
FS mount failed during cluster configuration on a 24-node physical BOM setup because VxVM transactions were taking more time than the VCS timeouts.

RESOLUTION:
A fix is added to reduce unnecessary transaction time on large node setups.

* 4080041 (Tracking ID: 4056953)

SYMPTOM:
3PAR PE LUNs are reported in error state by 3PAR ASL

DESCRIPTION:
3PAR storage presents some special storage LUNs (3PAR PE) that need to be skipped by VxVM and not claimed.
Handling multiple PE LUNs from different 3PAR enclosures causes an issue for VxDMP.

RESOLUTION:
A fix is added so that the 3PAR ASL skips the 3PAR PE LUNs, avoiding disks being reported in the error state.

* 4080105 (Tracking ID: 4045837)

SYMPTOM:
DCL volume subdisks do not relocate after the node fault timeout and remain in the RELOCATE state.

DESCRIPTION:
If the DCO has failed plexes and the DCO is on different disks than the data, DCO relocation needs to be triggered explicitly: try_fss_reloc only performs DCO relocation in the context of the data, which may not succeed if sufficient data disks are not available (additional hosts/disks may be available where the DCO can relocate).

RESOLUTION:
A fix is added to relocate DCL subdisks to available spare disks.

* 4080122 (Tracking ID: 4044068)

SYMPTOM:
Replace Node is failing at Configuring NetBackup stage due to vxdisk init failed with error "Could not obtain requested lock".

DESCRIPTION:
The vxdisk init command can transiently fail with the error "Could not obtain requested lock", which causes the Replace Node operation to fail at the Configuring NetBackup stage.

RESOLUTION:
A fix is added to retry the transaction a few times if it fails with this error.

* 4080269 (Tracking ID: 4044898)

SYMPTOM:
Rlink tags from info records were not visible with the vxrlink listtag command.

DESCRIPTION:
Making rlinks FIPS compliant has a second phase dealing with the disk group upgrade path, where the rlink encryption tags need to be copied to the info record and made FIPS compliant.

Here, vxdg upgrade internally calls vxrlink and vxencrypt to upgrade the rlink and rekey the rlink keys, respectively.

RESOLUTION:
All the encryption tags for the rlink are copied to the info record. When the DG is upgraded, the rlink is upgraded internally, and this upgrade process copies the rlink tags to the info records.

* 4080276 (Tracking ID: 4065145)

SYMPTOM:
During addsec, encrypted volume tags could not be processed for multiple volumes and vsets.
The error seen:

$ vradmin -g dg2 -encrypted addsec dg2_rvg1 10.210.182.74 10.210.182.75

Error: Duplicate tag name vxvm.attr.enckeytype provided in input.

DESCRIPTION:
The number of tags was not defined, and all tags were processed at once instead of processing the maximum number of tags per volume.

RESOLUTION:
Introduced a number-of-tags variable that depends on the cipher method (CBC/GCM), and fixed minor code issues.

* 4080277 (Tracking ID: 3966157)

SYMPTOM:
The SRL batching feature was broken and could not be enabled, as it might have caused problems.

DESCRIPTION:
Updates need to be batched to get the benefit of combining multiple updates and improving performance.

RESOLUTION:
The design has been simplified: each small update within a batch is now aligned to a 4K size, so by default the whole batch is aligned and there is no need for bookkeeping around the last update, reducing the overhead of the various calculations.

By padding each individual update to 4K, the batch of updates is itself 4K aligned.

* 4080579 (Tracking ID: 4077876)

SYMPTOM:
When one cluster node is rebooted, EC log replay is triggered for shared EC volume.
It is seen that system is crashed during this EC log replay.

DESCRIPTION:
Two flags were assigned the same value, so the system panicked during a flag check.

RESOLUTION:
Changed the code flow to avoid checking values of flags having same value.

* 4080845 (Tracking ID: 4058166)

SYMPTOM:
While setting up VVR/CVR on large size data volumes (size > 3TB) with filesystems mounted on them, initial autosync operation takes a lot of time to complete.

DESCRIPTION:
While performing autosync on a VVR/CVR setup for a volume with a filesystem mounted, if the smartmove feature is enabled the operation does a smartsync, syncing only the regions dirtied by the filesystem instead of the entire volume, which completes faster than a normal sync. However, for large volumes (size > 3TB), smartmove does not get enabled even with a filesystem mounted, so autosync syncs the entire volume.

This behaviour is due to the smaller DCM plexes allocated for such large volumes; autosync ends up performing a complete volume sync, taking much longer to complete.

RESOLUTION:
Increase the limit of DCM plex size (loglen) beyond 2MB so that smart move feature can be utilised properly.

* 4080846 (Tracking ID: 4058437)

SYMPTOM:
Replication between 8.0 and 7.4.x fails with an error due to sector size field.

DESCRIPTION:
The 7.4.x branch has sectorsize set to zero, which internally indicates 512 bytes. This caused startrep and resumerep to fail with the error message below.

Message from Primary:

VxVM VVR vxrlink ERROR V-5-1-20387  sector size mismatch, Primary is having sector size 512, Secondary is having sector size 0

RESOLUTION:
A check added to support replication between 8.0 and 7.4.x
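A sketch of the compatibility check, assuming (as the error message above suggests) that a sector size of 0 reported by a 7.4.x peer should be treated as 512 bytes. The function name and values are illustrative, not the actual VVR internals.

```shell
# Illustrative only: normalize the sector size reported by the peer so
# that 7.4.x's 0 (meaning 512 bytes) matches 8.0's explicit 512.
normalize_sector_size() {
  if [ "$1" -eq 0 ]; then echo 512; else echo "$1"; fi
}

primary=512
secondary=$(normalize_sector_size 0)   # a 7.4.x secondary reports 0
if [ "$primary" -eq "$secondary" ]; then
  echo "sector sizes compatible"
fi
```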

* 4081790 (Tracking ID: 4080373)

SYMPTOM:
SFCFSHA configuration failed on RHEL 8.4 due to 'chmod -R' error.

DESCRIPTION:
Failure messages are logged because all log permissions are changed to 600 during the upgrade and all log files are moved to '/var/log/vx'.

RESOLUTION:
Added -f option to chmod command to suppress warning and redirect errors from mv command to /dev/null.

* 4083337 (Tracking ID: 4081890)

SYMPTOM:
On RHEL8 NBFS/Access commands like python3, sort, sudo, ssh, etc are generating core dump during execution of the command mkfs.vxfs & mkfs.ext4 in parallel.

DESCRIPTION:
On RHEL8, NBFS/Access commands like python3, sort, sudo, ssh, etc. generate core dumps during parallel execution of the mkfs.vxfs and mkfs.ext4 commands. This was happening due to missing FPU protection around use of the FPU instruction set.

RESOLUTION:
A fix is added to use FPU protection while using the FPU instruction set.

* 4085619 (Tracking ID: 4086718)

SYMPTOM:
VxVM fails to install because vxdmp module fails to load on latest minor kernel of SLES15SP2.

DESCRIPTION:
VxVM modules fail to load on latest minor kernel of SLES15SP2. Following messages can be seen logged in system logs:
vxvm-boot[32069]: ERROR: No appropriate modules found.
vxvm-boot[32069]: Error in loading module "vxdmp". See documentation.
vxvm-boot[32069]: Modules not Loaded

RESOLUTION:
Code changes have been done to fix this issue.

* 4087233 (Tracking ID: 4086856)

SYMPTOM:
For Appliance FLEX product using VRTSdocker-plugin, docker.service needs to be replaced as it is not supported on RHEL8.

DESCRIPTION:
Appliance FLEX product using VRTSdocker-plugin is switching to RHEL8 on which docker.service does not exist. vxinfoscale-docker.service must stop after all container services are stopped. podman.service shuts down after all container services are stopped, so docker.service can be replaced with podman.service.

RESOLUTION:
Added platform-specific dependencies for VRTSdocker-plugin. For RHEL8, podman.service is introduced.

* 4087439 (Tracking ID: 4088934)

SYMPTOM:
"dd" command on a simple volume results in kernel panic.

DESCRIPTION:
Kernel panic is observed with following stack trace:
 #0 [ffffb741c062b978] machine_kexec at ffffffffa806fe01
 #1 [ffffb741c062b9d0] __crash_kexec at ffffffffa815959d
 #2 [ffffb741c062ba98] crash_kexec at ffffffffa815a45d
 #3 [ffffb741c062bab0] oops_end at ffffffffa8036d3f
 #4 [ffffb741c062bad0] general_protection at ffffffffa8a012c2
    [exception RIP: __blk_rq_map_sg+813]
    RIP: ffffffffa84419dd  RSP: ffffb741c062bb88  RFLAGS: 00010202
    RAX: 0c2822c2621b1294  RBX: 0000000000010000  RCX: 0000000000000000
    RDX: ffffb741c062bc40  RSI: 0000000000000000  RDI: ffff8998fc947300
    RBP: fffff92f0cbe6f80   R8: ffff8998fcbb1200   R9: fffff92f0cbe0000
    R10: ffff8999bf4c9818  R11: 000000000011e000  R12: 000000000011e000
    R13: fffff92f0cbe0000  R14: 00000000000a0000  R15: 0000000000042000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #5 [ffffb741c062bc38] scsi_init_io at ffffffffc03107a2 [scsi_mod]
 #6 [ffffb741c062bc78] sd_init_command at ffffffffc056c425 [sd_mod]
 #7 [ffffb741c062bcd8] scsi_queue_rq at ffffffffc0311f6e [scsi_mod]
 #8 [ffffb741c062bd20] blk_mq_dispatch_rq_list at ffffffffa8447cfe
 #9 [ffffb741c062bdc0] __blk_mq_do_dispatch_sched at ffffffffa844cae0
#10 [ffffb741c062be28] __blk_mq_sched_dispatch_requests at ffffffffa844d152
#11 [ffffb741c062be68] blk_mq_sched_dispatch_requests at ffffffffa844d290
#12 [ffffb741c062be78] __blk_mq_run_hw_queue at ffffffffa84466a3
#13 [ffffb741c062be98] process_one_work at ffffffffa80bcd74
#14 [ffffb741c062bed8] worker_thread at ffffffffa80bcf8d
#15 [ffffb741c062bf10] kthread at ffffffffa80c30ad
#16 [ffffb741c062bf50] ret_from_fork at ffffffffa8a001ff

RESOLUTION:
Code changes have been done to fix this issue.

* 4087791 (Tracking ID: 4087770)

SYMPTOM:
Data corruption post mirror attach operation seen after complete storage fault for DCO volumes.

DESCRIPTION:
DCO (data change object) tracks delta changes for faulted mirrors. During complete storage loss of DCO volume mirrors, the DCO object is marked as BADLOG and becomes unusable for bitmap tracking.
After storage reconnect (such as node rejoin in FSS environments), the DCO is repaired for subsequent tracking. During this, if VxVM finds any mirrors of the data volumes detached, those are expected to be marked for full resync, as the bitmap in the DCO has no valid information. A bug in the repair-DCO operation logic prevented marking a mirror for full resync when the repair-DCO operation is triggered before the data volume is started. This resulted in the mirror getting attached without any data being copied from the good mirrors; reads serviced from such mirrors return stale data, resulting in file system corruption and data loss.

RESOLUTION:
Code has been added to ensure the repair-DCO operation is performed only if the volume object is enabled, so that detached mirrors are marked for full resync appropriately.

* 4088076 (Tracking ID: 4054685)

SYMPTOM:
RVG recovery gets hung in case of reconfiguration scenarios in CVR environments leading to vx commands hung on master node.

DESCRIPTION:
As a part of RVG recovery, DCM and data volume recovery are performed. Data volume recovery takes a long time due to incorrect IOD handling on Linux platforms.

RESOLUTION:
Fixed the IOD handling mechanism to resolve the RVG recovery hang.

* 4088483 (Tracking ID: 4088484)

SYMPTOM:
DMP_APM module is not getting loaded and throwing following message in the dmesg logs:
Mod load failed for dmpnvme module: dependency conflict
VxVM vxdmp V-5-0-1015 DMP_APM: DEPENDENCY CONFLICT

DESCRIPTION:
NVMe module loading failed because a dmpaa module dependency was added in the APM; on systems without any A/A type disks, the NVMe module consequently failed to load.

RESOLUTION:
Removed A/A dependency from NVMe APM.

* 4088762 (Tracking ID: 4087099)

SYMPTOM:
DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6 and NVME disks are in an error state.

DESCRIPTION:
The minor numbers of NVMe disks were changed when scandisks was performed.
This led to incorrect major/minor information being present in the vold core database.

RESOLUTION:
Fixed the device open by passing O_RDONLY; opening with write permissions changed the minor number.

Patch ID: VRTSvxvm-8.0.0.1700

* 4081684 (Tracking ID: 4082799)

SYMPTOM:
A security vulnerability exists in the third-party component libcurl.

DESCRIPTION:
VxVM uses a third-party component named libcurl in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libcurl in which the security vulnerability has been addressed.

Patch ID: VRTSvxvm-8.0.0.1600

* 4057420 (Tracking ID: 4060462)

SYMPTOM:
System is unresponsive while adding new nodes.

DESCRIPTION:
After a node is removed and a node with a different node name is added, the system turns
unresponsive. When a node leaves a cluster, in-memory information related to the node is not cleared due to a race condition.

RESOLUTION:
Fixed race condition to clear in-memory information of the node that leaves the cluster.

* 4062799 (Tracking ID: 4064208)

SYMPTOM:
Node is unresponsive while it gets added to the cluster.

DESCRIPTION:
While a node joins the cluster, if the bits on the node are upgraded, the size
of the object is interpreted incorrectly. The issue is observed when the number of objects is high, on
InfoScale 7.3.1 and above.

RESOLUTION:
Correct sizes are calculated for the data received from the master node.

* 4065841 (Tracking ID: 4065495)

SYMPTOM:
Support needs to be added for the new EMC PowerStore array.

DESCRIPTION:
EMC PowerStore is a new array that the current ASL does not support, so it is not claimed by the current ASL. Support for this array has now been added.

RESOLUTION:
Code changes to support EMC PowerStore have been done.

* 4066213 (Tracking ID: 4052580)

SYMPTOM:
Multipathing not supported for NVMe devices under VxVM.

DESCRIPTION:
NVMe devices, being non-SCSI devices, were not considered for multipathing.

RESOLUTION:
Changes introduced to support multipathing for NVMe devices.

* 4068407 (Tracking ID: 4068404)

SYMPTOM:
Support needs to be added to claim ALUA disks on HPE 3PAR/Primera/Alletra 9000 arrays.

DESCRIPTION:
The current ASL does not support the HPE 3PAR/Primera/Alletra 9000 ALUA array type. Support for this ALUA array type has now been added.

RESOLUTION:
Code changes to support HPE 3PAR/Primera/Alletra 9000 ALUA array have been done.

Patch ID: VRTSvxvm-8.0.0.1400

* 4057420 (Tracking ID: 4060462)

SYMPTOM:
System is unresponsive while adding new nodes.

DESCRIPTION:
After a node is removed and a node with a different node name is added, the system turns
unresponsive. When a node leaves a cluster, in-memory information related to the node is not cleared due to a race condition.

RESOLUTION:
Fixed race condition to clear in-memory information of the node that leaves the cluster.

* 4065569 (Tracking ID: 4056156)

SYMPTOM:
VxVM package fails to load on SLES15 SP3

DESCRIPTION:
Changes introduced in SLES15 SP3 impacted VxVM block IO functionality. This included changes in block layer structures in kernel.

RESOLUTION:
Changes have been done to handle the impacted functionalities.

* 4066259 (Tracking ID: 4062576)

SYMPTOM:
When hastop -local is used to stop the cluster, the dg deport command hangs. The below stack trace is observed in system logs:

#0 [ffffa53683bf7b30] __schedule at ffffffffa834a38d
 #1 [ffffa53683bf7bc0] schedule at ffffffffa834a868
 #2 [ffffa53683bf7bd0] blk_mq_freeze_queue_wait at ffffffffa7e4d4e6
 #3 [ffffa53683bf7c18] blk_cleanup_queue at ffffffffa7e433b8
 #4 [ffffa53683bf7c30] vxvm_put_gendisk at ffffffffc3450c6b [vxio]   
 #5 [ffffa53683bf7c50] volsys_unset_device at ffffffffc3450e9d [vxio]
 #6 [ffffa53683bf7c60] vol_rmgroup_devices at ffffffffc3491a6b [vxio]
 #7 [ffffa53683bf7c98] voldg_delete at ffffffffc34932fc [vxio]
 #8 [ffffa53683bf7cd8] vol_delete_group at ffffffffc3494d0d [vxio]
 #9 [ffffa53683bf7d18] volconfig_ioctl at ffffffffc3555b8e [vxio]
#10 [ffffa53683bf7d90] volsioctl_real at ffffffffc355fc8a [vxio]
#11 [ffffa53683bf7e60] vols_ioctl at ffffffffc124542d [vxspec]
#12 [ffffa53683bf7e78] vols_unlocked_ioctl at ffffffffc124547d [vxspec]
#13 [ffffa53683bf7e80] do_vfs_ioctl at ffffffffa7d2deb4
#14 [ffffa53683bf7ef8] ksys_ioctl at ffffffffa7d2e4f0
#15 [ffffa53683bf7f30] __x64_sys_ioctl at ffffffffa7d2e536

DESCRIPTION:
This issue is seen due to kernel-side changes in request queue handling. Existing VxVM code sets the request handling routine (make_request_fn) to vxvm_gen_strategy; this functionality is impacted.

RESOLUTION:
Code changes are added to handle the request queues using blk_mq_init_allocated_queue.

* 4066735 (Tracking ID: 4057526)

SYMPTOM:
Whenever vxnm-vxnetd is loaded, it reports "Cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory" in /var/log/messages.

DESCRIPTION:
New systemd update removed the support for "/var/lock/subsys/" directory. Thus, whenever vxnm-vxnetd is loaded on the systems supporting systemd, it 
reports "cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory"

RESOLUTION:
Added a check to validate if the /var/lock/subsys/ directory is supported in vxnm-vxnetd.sh

* 4066834 (Tracking ID: 4046007)

SYMPTOM:
In FSS environment if the cluster name is changed then the private disk region gets corrupted.

DESCRIPTION:
Under some conditions, when vxconfigd tries to update the TOC (table of contents) blocks of disk private region, the allocation maps cannot be initialized in the memory. This could make allocation maps incorrect and lead to corruption of the private region on the disk.

RESOLUTION:
Code changes have been done to avoid corruption of private disk region.

* 4067237 (Tracking ID: 4058894)

SYMPTOM:
After package installation and reboot, messages regarding udev rules for ignore_device are observed in /var/log/messages:
systemd-udevd[774]: /etc/udev/rules.d/40-VxVM.rules:25 Invalid value for OPTIONS key, ignoring: 'ignore_device'

DESCRIPTION:
From SLES15 SP3 onwards, ignore_device is deprecated from udev rules and is no longer available for use. Hence these messages are observed in system logs.

RESOLUTION:
Required changes have been done to handle this defect.

Patch ID: VRTSvxvm-8.0.0.1300

* 4065628 (Tracking ID: 4065627)

SYMPTOM:
VxVM modules are not loaded after an OS upgrade followed by a reboot.

DESCRIPTION:
Once the stack installation is completed with configuration, after an OS upgrade the vxvm directory is not created under /lib/modules/<upgraded_kernel>/veritas/.

RESOLUTION:
VxVM code is updated with the required changes.

Patch ID: VRTSodm-8.0.0.3100

* 4154894 (Tracking ID: 4144269)

SYMPTOM:
After installation, ODM fails to start.

DESCRIPTION:
Because of the VxFS version update, the ODM module needs to be repackaged due to an
internal dependency on the VxFS version.

RESOLUTION:
As part of this fix, the ODM module has been repackaged to support the updated
VxFS version.

Patch ID: VRTSodm-8.0.0.2900

* 4057432 (Tracking ID: 4056673)

SYMPTOM:
Rebooting the system results into emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through a file lock. Corrected the vxgms dependency in the odm service file.

* 4119105 (Tracking ID: 4119104)

SYMPTOM:
ODM module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSodm-8.0.0.2600

* 4111349 (Tracking ID: 4092232)

SYMPTOM:
The ODM module fails to load on SLES15 SP4.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP4 kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP4.

Patch ID: VRTSodm-8.0.0.2500

* 4114322 (Tracking ID: 4114321)

SYMPTOM:
VRTSodm driver will not load with VRTSvxfs patch.

DESCRIPTION:
Need recompilation of VRTSodm with latest VRTSvxfs.

RESOLUTION:
Recompiled VRTSodm with the new VRTSvxfs.

Patch ID: VRTSodm-8.0.0.1800

* 4089136 (Tracking ID: 4089135)

SYMPTOM:
VRTSodm driver does not load with VRTSvxfs patch.

DESCRIPTION:
Need recompilation of VRTSodm with latest VRTSvxfs.

RESOLUTION:
Recompiled VRTSodm with the new VRTSvxfs.

Patch ID: VRTSodm-8.0.0.1200

* 4065680 (Tracking ID: 4056799)

SYMPTOM:
The ODM module fails to load on SLES15 SP3.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP3 kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP3.

Patch ID: VRTSvxfs-8.0.0.3100

* 4154855 (Tracking ID: 4141665)

SYMPTOM:
Security vulnerabilities exist in the Zlib third-party components used by VxFS.

DESCRIPTION:
VxFS uses Zlib third-party components with some security vulnerabilities.

RESOLUTION:
VxFS is updated to use a newer version of Zlib third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.0.2900

* 4092518 (Tracking ID: 4096267)

SYMPTOM:
Veritas File Replication jobs might fail when a large number of jobs run in parallel.

DESCRIPTION:
File Replication jobs might fail when a large number of jobs are configured and running in parallel with Veritas File Replication.
With a large number of jobs there is a chance of referring to a job that has already been freed, due to which the replication service dumps core and the
job might fail.

RESOLUTION:
Updated the code to take a hold on the job while checking for invalid job configuration.

* 4097466 (Tracking ID: 4114176)

SYMPTOM:
After failover, job sync fails with error "Device or resource busy".

DESCRIPTION:
If a job is in the failed state on the target because of a job failure on the source side, repld did not update its state when restarted in recovery mode, so the job state remained 'running' even after successful replication on the target. With this state on the target, if the job is promoted, the replication process did not create a new ckpt for the first sync after failover, which corrupted the state file on the new source. Because of this incorrect/corrupt state file, job sync from the new source failed with the error "Device or resource busy".

RESOLUTION:
Code is modified to correct the state on the target when the job is started in recovery mode.

* 4107367 (Tracking ID: 4108955)

SYMPTOM:
VFR job hangs on source if thread creation fails on target.

DESCRIPTION:
On the target, if thread creation for pass completion fails because of high memory usage, the repld daemon does not send the failure reply to the source. This can cause the vxfsreplicate process to remain in a waiting state indefinitely for the pass-completion reply from the target, leading to a job hang on the source that needs manual intervention to kill the job.

RESOLUTION:
Code is modified to retry thread creation on the target; if it fails after 5 retries, the target replies to the source with an appropriate error.

* 4111457 (Tracking ID: 4117827)

SYMPTOM:
Without a tunable change, the log file permissions are always 600 (EO compliant).

DESCRIPTION:
Tunable values and behavior:

Value                   Behavior
0 (default)          600 permissions, update existing file permissions on upgrade
1                    640 permissions, update existing file permissions on upgrade
2                    644 permissions, update existing file permissions on upgrade
3                    Inherit umask, update existing file permissions on upgrade
10                   600 permissions, don't touch existing file permissions on upgrade
11                   640 permissions, don't touch existing file permissions on upgrade
12                   644 permissions, don't touch existing file permissions on upgrade
13                   Inherit umask, don't touch existing file permissions on upgrade
--------------------------------------------------------------------------------------

A new tunable is added as part of the vxtunefs command; it is a per-node global tunable (not per file system).
For Executive Order compliance, CPI provides a workflow to update the tunable during installation/upgrade/configuration,
which takes care of updating all nodes.

RESOLUTION:
New tunable is added to vxtunefs command.
How to set tunable:
/opt/VRTS/bin/vxtunefs -D eo_perm=1

* 4112417 (Tracking ID: 4094326)

SYMPTOM:
mdb invocation displays message "failed to add vx_sl_node_level walker: walk name already in use"

DESCRIPTION:
In vx_sl_kmcache_init(), a kmcache is initialized for each of the 8 levels separately. A macro is used to pass the cache name as an argument to kmem_cache_create():

#define VX_SL_KMCACHE_NAME(level)       "vx_sl_node_"#level
#define VX_SL_KMCACHE_CREATE(level)                                     \
                kmem_cache_create(VX_SL_KMCACHE_NAME(level),            \
                                  VX_KMEM_SIZE(VX_SL_KMCACHE_SIZE(level)),\
                                  0, NULL, NULL, NULL, NULL, NULL, 0);


While using this macro, the variable "level" is passed as an argument, which the preprocessor stringifies as "vx_sl_node_level" for all 8 levels in the `for` loop. This causes the caches for all 8 levels to be allocated with the same name.

RESOLUTION:
Pass a separate literal value (the level value) to VX_SL_KMCACHE_NAME, as is done in vx_wb_sl_kmcache_init().

* 4114621 (Tracking ID: 4113060)

SYMPTOM:
On SLES15SP4 & RHEL9, executing a binary on a vxfs mountpoint resulted in an EINVAL error. The dmesg showed error "kernel read not supported for file"

DESCRIPTION:
This was due to changes in the recent kernel that required modifications in the way we initialize the file operations vector for vxfs.

RESOLUTION:
Added code to correctly update the file operations vector to fix this issue.

* 4118795 (Tracking ID: 4100021)

SYMPTOM:
Running setfacl followed by getfacl resulting in "No such device or address" error.

DESCRIPTION:
When the setfacl command is run on directories that have the VX_ATTR_INDIRECT type of acl attribute, it does not remove the existing acl attribute before adding the new one, which ideally should not happen. This results in getfacl failing with the "No such device or address" error.

RESOLUTION:
Code changes have been made to remove the VX_ATTR_INDIRECT type acl in the setfacl code.

* 4119023 (Tracking ID: 4116329)

SYMPTOM:
fsck -o full -n command will fail with error:
"ERROR: V-3-28446:  bc_write failure devid = 0, bno = 8, len = 1024"

DESCRIPTION:
Previously, while correcting the file system WORM/SoftWORM flags, fsck did not check whether the user wanted to correct the pflags or only to validate whether the flag is missing. Also, fsck was not capable of handling the SOFTWORM flag.

RESOLUTION:
Code is added so that fsck does not try to fix the problem when run with the -n option. The SOFTWORM scenario is also handled.

* 4119107 (Tracking ID: 4119106)

SYMPTOM:
VxFS module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

* 4123143 (Tracking ID: 4123144)

SYMPTOM:
The fsck binary generates a core dump.

DESCRIPTION:
In internal testing, the fsck binary generated a core dump due to the assert below while repairing a corrupted file system using the following command:
./fsck -o full -y /dev/vx/rdsk/testdg/vol1

ASSERT(fset >= VX_FSET_STRUCT_INDEX)

RESOLUTION:
Added code to set default (primary) fileset by scanning the fset header list.

Patch ID: VRTSvxfs-8.0.0.2600

* 4084880 (Tracking ID: 4084542)

SYMPTOM:
The fsadm defrag report does not indicate whether the FS is badly fragmented.

DESCRIPTION:
The fsadm defrag report is enhanced to display whether the FS is badly fragmented.

RESOLUTION:
Added method to identify if FS needs defragmentation.

* 4088079 (Tracking ID: 4087036)

SYMPTOM:
FSCK utility exits with an error while running it with the "-o metasave" option on a shared volume.

DESCRIPTION:
FSCK utility exits with an error while running it with the "-o metasave" option on a shared volume. Besides this, while running this utility with "-n" and either "-o metasave" or "-o dumplog", it silently ignores the latter option(s).

RESOLUTION:
Code changes have been done to resolve the above-mentioned failure and also warning messages have been added to inform users regarding mutually exclusive behavior of "-n" and either of "metasave" and "dumplog" options instead of silently ignoring them.

* 4111350 (Tracking ID: 4098085)

SYMPTOM:
The VxFS module fails to load on SLES15 SP4.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP4 kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP4.

* 4111910 (Tracking ID: 4090127)

SYMPTOM:
CFS hang in vx_searchau().

DESCRIPTION:
As part of the SMAP transaction changes, the allocator changed its logic to always call mdele_tryhold when getting the emap for a particular EAU, passing
nogetdele as 1, which tells mdele_tryhold not to ask for delegation when it detects a free EAU without delegation. In this case, the
allocator finds such an EAU in the device summary tree but without delegation, and keeps retrying without asking for delegation; hence the hang persists forever.

RESOLUTION:
In case a FREE EAU is found without delegation, delegate it back to Primary.

Patch ID: VRTSvxfs-8.0.0.2500

* 4112919 (Tracking ID: 4110764)

SYMPTOM:
A security vulnerability was observed in Zlib, a third-party component that VxFS uses.

DESCRIPTION:
In internal security scans, vulnerabilities in Zlib were found.

RESOLUTION:
The third-party component Zlib is upgraded to address these vulnerabilities.

Patch ID: VRTSvxfs-8.0.0.2100

* 4095889 (Tracking ID: 4095888)

SYMPTOM:
Security vulnerabilities exist in the Sqlite third-party components used by VxFS.

DESCRIPTION:
VxFS uses the Sqlite third-party components in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use a newer version of these third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.0.1800

* 4068960 (Tracking ID: 4073203)

SYMPTOM:
Veritas File Replication might generate a core while replicating files to the target when rename and unlink operations are performed on a file with FCL (file change log) mode on.

DESCRIPTION:
The vxfsreplicate process of Veritas File Replicator might get a segmentation fault with file change log mode on when rename and unlink operations are performed on a file.

RESOLUTION:
Addressed the issue so that files are replicated correctly in scenarios involving rename and unlink operations with FCL mode on.

* 4071108 (Tracking ID: 3988752)

SYMPTOM:
The ldi_strategy() routine should be used instead of bdev_strategy() for I/Os on Solaris.

DESCRIPTION:
bdev_strategy() is deprecated in Solaris and caused performance issues when used for I/Os. Solaris recommends using the LDI framework for all I/Os.

RESOLUTION:
Code is modified to use the LDI framework for all I/Os on Solaris.

* 4072228 (Tracking ID: 4037035)

SYMPTOM:
VxFS should have the ability to control the number of inactive processing threads.

DESCRIPTION:
VxFS may spawn a large number of worker threads that become inactive over time. As a result, heavy lock contention occurs during the removal of inactive threads on high-end servers.

RESOLUTION:
To avoid the contention, a new tunable, vx_ninact_proc_threads, is added. You can use vx_ninact_proc_threads to adjust the number of inactive processing threads based on your server configuration and workload.

* 4078335 (Tracking ID: 4076412)

SYMPTOM:
Addresses the initial requirements of Executive Order (EO) 14028, which is intended to improve the Federal Government's investigative and remediation capabilities related to cybersecurity incidents. The Executive Order helps improve the nation's cybersecurity and enhances any organization's cybersecurity and software supply chain integrity.

DESCRIPTION:
Some of the initial EO requirements enable logging that is compliant with the Executive Order. This comprises command logging, logging unauthorized access in the file system, and logging WORM events on the file system. It also includes changes to display IP addresses for Veritas File Replication at the control plane, based on a tunable.

RESOLUTION:
The initial requirements of the EO are addressed in this release.

As per the Executive Order, some of the requirements are tunable based. For example, IP logging is applied wherever applicable (for VFR, at the control plane only, not for every data transfer) and is tunable based. Logging of certain kernel events, such as WORM events (logged to syslog), is also tunable based.

A new tunable, eo_logging_enable, is introduced. There is a protocol change because of the introduction of the tunable, which also affects the update patch; a middle protocol version between the existing protocol version and the new protocol version (introduced because of EO) may be needed.

For VFR, the IP addresses of the source and destination are logged as part of the EO. The IP addresses are included in the log while starting/resuming a job in VFR.
Log location: /var/VRTSvxfs/replication/log/mount_point-job_name.log

There are two ways to fetch the IP addresses of the source and target. The first is to use the IP addresses stored in the link structure of a session; these are obtained by resolving the source and target hostnames, may contain both IPv4 and IPv6 addresses for a node, and give no indication of which IP the actual connection used. The second is to get the socket descriptor from an active connection of the session and fetch the source and target IPs associated with it; this yields the IP addresses actually used for the connection. The change fetches the IP addresses from the socket descriptor after establishing connections.

More details on EO logging for the initial VxFS release:
https://confluence.community.veritas.com/pages/viewpage.action?spaceKey=VES&title=EO+VxFS+Scrum+Page

* 4078520 (Tracking ID: 4058444)

SYMPTOM:
Loop mounts using files on VxFS fail on Linux systems running kernel version 4.1 or higher.

DESCRIPTION:
Starting with the 4.1 version of the Linux kernel, the loop.ko driver uses a new API for read and write requests to the backing file, which was not previously implemented in VxFS. This causes virtual disk reads during mount to fail when the -o loop option is used, causing the mount to fail as well. The same functionality worked on older kernels (such as the version found in RHEL7).

RESOLUTION:
Implemented a new API for all regular files on VxFS, allowing usage of the loop device driver against files on VxFS as well as any other kernel drivers using the same functionality.

* 4079142 (Tracking ID: 4077766)

SYMPTOM:
VxFS kernel module might leak memory during readahead of directory blocks.

DESCRIPTION:
VxFS kernel module might leak memory during readahead of directory blocks due to missing free operation of readahead-related structures.

RESOLUTION:
Code in readahead of directory blocks is modified to free up readahead-related structures.

* 4079173 (Tracking ID: 4070217)

SYMPTOM:
Command fsck might fail with 'cluster reservation failed for volume' message for a disabled cluster-mounted filesystem.

DESCRIPTION:
On a disabled cluster-mounted filesystem, release of cluster reservation might fail during unmount operation resulting in a  failure of command fsck with 'cluster reservation failed for volume' message.

RESOLUTION:
Code is modified to release cluster reservation in unmount operation properly even for cluster-mounted filesystem.

* 4082260 (Tracking ID: 4070814)

SYMPTOM:
A security vulnerability was observed in Zlib, a third-party component that VxFS uses.

DESCRIPTION:
In internal security scans, vulnerabilities in Zlib were found.

RESOLUTION:
The third-party component Zlib is upgraded to address these vulnerabilities.

* 4082865 (Tracking ID: 4079622)

SYMPTOM:
Migration used the normal read/write file operations instead of the read/write iter functions; VxFS requires the read/write iter functions from Linux kernel 5.14 onwards.

DESCRIPTION:
Starting with the 5.14 version of the Linux kernel, VxFS must use the read/write iter file operations for migration.

RESOLUTION:
Developed a common read/write function that is called for both the normal and the iter read/write file operations.

* 4083335 (Tracking ID: 4076098)

SYMPTOM:
FS migration from ext4 to VxFS on Linux machines with falcon-sensor enabled may fail.

DESCRIPTION:
The falcon-sensor driver installed on the test machines taps system calls such as close and makes additional
VFS calls such as read. Due to this, the VxFS driver received a read file operation call from the fsmigbgcp
process context. The read operation is allowed only on special files from the fsmigbgcp process context; since
the file in question was not a special file, the VxFS debug code asserted.

RESOLUTION:
As a fix, we are now allowing the read on non special files from fsmigbgcp process context.

[Note: Other related issues were fixed in this incident, but they are unlikely to be hit in customer
environments as they are negative test scenarios (such as trying to overwrite the migration special file, deflist),
so they are not covered here.]

* 4085623 (Tracking ID: 4085624)

SYMPTOM:
While running fsck with -o full and -y on a corrupted FS, fsck may dump core.

DESCRIPTION:
Fsck builds various in-core maps based on on-disk structural files, one such map is dotdotmap (which stores 
info about parent directory). For regular fset (like 999), the dotdotmap is initialized only for primary ilist
(inode list for regular inodes). It is skipped for attribute ilist (inode list for attribute inodes). This is because
attribute inodes do not have parent directories as is the case for regular inodes.

While attempting to resolve inconsistencies in FS metadata, fsck tries to clean up dotdotmap for attribute ilist. 
In the absence of a check, dotdotmap is re-initialized for attribute ilist causing segmentation fault.

RESOLUTION:
In the codepath where fsck attempts to reinitialize the dotdotmap, a check is added to skip the reinitialization
of the dotdotmap for the attribute ilist.

* 4085839 (Tracking ID: 4085838)

SYMPTOM:
Command fsck may generate core due to processing of zero size attribute inode.

DESCRIPTION:
Command fsck fails due to allocating memory and dereferencing it for a zero-size attribute inode.

RESOLUTION:
Command fsck is modified to skip processing of zero-size attribute inodes.

* 4086085 (Tracking ID: 4086084)

SYMPTOM:
VxFS mount operation causes system panic when -o context is used.

DESCRIPTION:
VxFS mount operation supports context option to override existing extended attributes, or to specify a different, default context for file systems that do not support extended attributes. System panic observed when -o context is used.

RESOLUTION:
Required code changes are added to avoid panic.

* 4088341 (Tracking ID: 4065575)

SYMPTOM:
A write operation might become unresponsive on a locally mounted VxFS file system in a no-space condition.

DESCRIPTION:
A write operation might become unresponsive on a locally mounted VxFS file system in a no-space condition due to a race between two writer threads taking the read-write lock on the file to perform a delayed allocation operation on it.

RESOLUTION:
The code is modified so that the thread that already holds the read-write lock completes the delayed allocation operation, while the other thread skips over that file.
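The resolution above amounts to a try-lock-or-skip pattern. Below is a minimal, generic Python sketch of that idea (illustrative only, not VxFS source code; all names are hypothetical): a thread that cannot acquire a file's lock skips the file instead of blocking on it.

```python
# Generic illustration (not VxFS code) of the "try-lock or skip" pattern:
# a worker that cannot acquire a file's read-write lock skips that file
# rather than blocking, which avoids the hang described above.
import threading

file_locks = {"a": threading.Lock(), "b": threading.Lock()}

def flush_files(skipped):
    for name, lock in file_locks.items():
        if lock.acquire(blocking=False):   # try-lock, never block
            try:
                pass  # ... the lock holder performs the delayed allocation ...
            finally:
                lock.release()
        else:
            skipped.append(name)           # another writer owns it; skip

# Simulate another writer currently holding file "a" while we flush.
file_locks["a"].acquire()
skipped = []
flush_files(skipped)
file_locks["a"].release()
print(skipped)  # only "a" is skipped
```

The key design point mirrors the fix: only the thread already holding the lock completes the operation, and everyone else moves on instead of waiting.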

Patch ID: VRTSvxfs-8.0.0.1700

* 4081150 (Tracking ID: 4079869)

SYMPTOM:
Security vulnerabilities were found in VxFS during security scans.

DESCRIPTION:
Internal security scans found vulnerabilities in VxFS third-party components. Attackers can exploit these
vulnerabilities to attack the system.

RESOLUTION:
The third-party components are upgraded to resolve these vulnerabilities.

* 4083948 (Tracking ID: 4070814)

SYMPTOM:
Security vulnerabilities were found in VxFS during security scans.

DESCRIPTION:
Internal security scans found vulnerabilities in the VxFS third-party component Zlib.

RESOLUTION:
The third-party component Zlib is upgraded to resolve these vulnerabilities.

Patch ID: VRTSvxfs-8.0.0.1400

* 4055808 (Tracking ID: 4062971)

SYMPTOM:
All operations on the file system, such as ls and create, are blocked.

DESCRIPTION:
A WORM file system does not allow directory renames. When partition directories are enabled, new leaf directories are created and files are moved under them based on a hash. Because the WORM file system blocked this rename operation, the partition directory split could not complete, blocking all operations on the file system.

RESOLUTION:
Directory renames are allowed in the context of a partition directory split or merge.
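The hash-based placement mentioned above can be pictured with a tiny, generic sketch (hypothetical; VxFS's actual hash and directory layout differ): file names are distributed across leaf directories by hashing the name.

```python
# Hypothetical sketch of hash-based placement into partition leaf
# directories, as described above; the real VxFS hash differs.
import hashlib

NUM_LEAVES = 4  # assumed number of leaf directories, for illustration

def leaf_for(name: str) -> str:
    """Pick a leaf directory for a file name by hashing the name."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return f"leaf_{h % NUM_LEAVES}"

# Each file maps deterministically to one leaf directory.
placement = {n: leaf_for(n) for n in ("alpha", "beta", "gamma")}
```

During a split, files are moved (renamed) into their leaf directory; this is the rename that the WORM restriction was blocking.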

* 4056684 (Tracking ID: 4056682)

SYMPTOM:
New-feature information is not displayed when fsadm (the file system administration utility) is run on a device.

DESCRIPTION:
Information about new features such as WORM (Write Once Read Many) and auditlog is correctly reported by the fsadm utility for a mounted file system, but is not displayed when fsadm is run on the underlying device.

RESOLUTION:
The fsadm utility is updated to display the new-feature information correctly.

* 4062606 (Tracking ID: 4062605)

SYMPTOM:
Minimum retention time cannot be set if the maximum retention time is not set.

DESCRIPTION:
The minimum retention time tunable cannot be set unless the maximum retention time tunable is set. This restriction
was implemented to ensure that the minimum time is lower than the maximum time.

RESOLUTION:
The minimum and maximum retention times can now be set independently of each other; the minimum retention time can be set without the maximum retention time being set.
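The corrected behavior can be illustrated with a small sketch (a hypothetical helper, not the actual VxFS tunable code): each retention time is validated on its own, and the ordering check applies only when both values are set.

```python
# Hypothetical illustration of the corrected validation logic; the real
# tunables live inside VxFS, not in this helper.
def validate_retention(min_rt=None, max_rt=None):
    """Validate retention times (in seconds); None means 'not set'."""
    for name, value in (("minimum", min_rt), ("maximum", max_rt)):
        if value is not None and value < 0:
            raise ValueError(f"{name} retention must be non-negative")
    # The ordering is checked only when BOTH tunables are set.
    if min_rt is not None and max_rt is not None and min_rt > max_rt:
        raise ValueError("minimum retention exceeds maximum retention")

# Setting only the minimum is now accepted.
validate_retention(min_rt=3600)
```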

* 4065565 (Tracking ID: 4065669)

SYMPTOM:
Creating non-WORM checkpoints fails when the minimum retention time and maximum retention time tunables are set.

DESCRIPTION:
Creation of non-WORM checkpoints fails because all WORM-related validations are also applied to non-WORM checkpoints.

RESOLUTION:
The WORM-related validations are restricted to WORM filesets only, allowing non-WORM checkpoints to be created.

* 4065651 (Tracking ID: 4065666)

SYMPTOM:
All operations, such as ls and create, are blocked on a file system directory that contains WORM-enabled files whose retention period has not expired.

DESCRIPTION:
On a WORM file system, files whose retention period has not expired cannot be renamed. When partition directories are enabled, new leaf directories are created and files are moved under them based on a hash. Because the WORM file system blocked this rename operation, the partition directory split could not complete, blocking all operations on the file system.

RESOLUTION:
Renaming of files is allowed, even if their retention period has not expired, in the context of a partition directory split or merge.

Patch ID: VRTSvxfs-8.0.0.1300

* 4065679 (Tracking ID: 4056797)

SYMPTOM:
The VxFS module fails to load on SLES15 SP3.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP3 kernel.

RESOLUTION:
The VxFS module is updated to accommodate the kernel changes and load as expected on SLES15 SP3.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that the installation of this P-Patch causes downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch archive infoscale-sles15_x86_64-Patch-8.0.0.3100.tar.gz to /tmp.
2. Untar infoscale-sles15_x86_64-Patch-8.0.0.3100.tar.gz to /tmp/hf:
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-sles15_x86_64-Patch-8.0.0.3100.tar.gz
    # tar xf /tmp/infoscale-sles15_x86_64-Patch-8.0.0.3100.tar
3. Install the hotfix (note that the installation of this P-Patch causes downtime):
    # cd /tmp/hf
    # ./installVRTSinfoscale800P3100 [<host1> <host2>...]

You can also install this patch together with the 8.0 base release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 8.0 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
Fixed Vulnerabilities: 
CVE-2022-37434 (BDSA-2022-2183), CVE-2023-38545 (BDSA-2023-2697), CVE-2023-28319 (BDSA-2023-1234),
CVE-2023-38039 (BDSA-2023-2428), CVE-2023-0464 (BDSA-2023-0610), CVE-2023-28484 (BDSA-2023-0813),
CVE-2023-29469 (BDSA-2023-0811), CVE-2023-2650 (BDSA-2023-1337), CVE-2023-28321 (BDSA-2023-1237),
CVE-2023-28320 (BDSA-2023-1233), BDSA-2022-0284, CVE-2024-0727 (BDSA-2024-0202),
CVE-2023-5678 (BDSA-2023-3046), CVE-2023-0466, CVE-2023-3817 (BDSA-2023-1972), CVE-2023-0465,
BDSA-2023-1866, CVE-2023-38546 (BDSA-2023-2699), CVE-2023-28322 (BDSA-2023-1238).


OTHERS
------
NONE


Applies to the following product versions

This update requires

InfoScale 8.0 Update 2 Cumulative Patch on SLES15 Platform


Update files

File name    Description    Version    Platform    Size