7.4.1 Update 4 SLES15 patch release

Patch

Abstract

7.4.1 Update 4 SLES15 patch release

Description

SORT ID: 16303


Fixes the following incidents:

3976693,3983165,4004182,4004927,4014718,4015824,4016077,4016082,3991386,4020130,3991264,3984155,4016283,4016291,
4016768,4017194,3991538,3992902,3984343,4006950,4009762,4016488,4016625,3995684,4012318,4000746,4019003,3992092,
4016487,3992091,3999398,4002584,4003442,3992045,4014715,3991390,4020803,3991388


Patch ID:

VRTSamf-7.4.1.2800-SLES15 for VRTSamf
  VRTSaslapm-7.4.1.2800-SLES15 for VRTSaslapm
  VRTSgab-7.4.1.2800-SLES15 for VRTSgab
  VRTSglm-7.4.1.2800-SLES15 for VRTSglm
  VRTSllt-7.4.1.2800-SLES15 for VRTSllt
  VRTSodm-7.4.1.2800-SLES15 for VRTSodm
  VRTSsfmh-7.4.0.1500-0 for VRTSsfmh
  VRTSvcs-7.4.1.2800-SLES15 for VRTSvcs
  VRTSvcsag-7.4.1.2800-SLES15 for VRTSvcsag
  VRTSvlic-4.01.74.005-SLES for VRTSvlic
  VRTSvxfen-7.4.1.2800-SLES15 for VRTSvxfen
  VRTSvxfs-7.4.1.2800-SLES15 for VRTSvxfs
  VRTSvxvm-7.4.1.2800-SLES15 for VRTSvxvm

                          * * * READ ME * * *
                      * * * InfoScale 7.4.1 * * *
                         * * * Patch 2700 * * *
                         Patch Date: 2020-12-08


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.4.1 Patch 2700


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES15 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSgab
VRTSglm
VRTSllt
VRTSodm
VRTSsfmh
VRTSvcs
VRTSvcsag
VRTSvlic
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4.1
   * InfoScale Enterprise 7.4.1
   * InfoScale Foundation 7.4.1
   * InfoScale Storage 7.4.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxfs-7.4.1.2800
* 3976693 (4016085) The fsdb command "xxxiau" refers to the wrong device when dumping information
* 3983165 (3975019) Under I/O load with NFS v4 using NFS leases, the server may panic
* 4004182 (4004181) Read the value of the VxFS compliance clock
* 4004927 (3983350) The secondary may falsely assume that the ilist extent is pushed and do the allocation, even if the actual push transaction failed on the primary.
* 4014718 (4011596) Man page changes for glmdump
* 4015824 (4015278) System panics during vx_uiomove_by_hand.
* 4016077 (4009328) In a cluster filesystem, an unmount hang could be observed if the smap was previously marked bad.
* 4016082 (4000465) The fsck binary loops when it detects a break in the sequence of log IDs.
Patch ID: VRTSvxfs-7.4.1.1700
* 3991386 (3991385) VxFS module failed to load on SLES15 SP1
Patch ID: VRTSsfmh-vom-HF0741500
* 4020130 (4020129) VIOM Agent for InfoScale 7.4.1 Update 4
Patch ID: VRTSvlic-4.01.74.005
* 3991264 (3991265) Java version upgrade support (SDSCPE-600) and removal of the /opt/Veritas dependency (STESC-5159)
Patch ID: VRTSvxvm-7.4.1.2800
* 3984155 (3976678) vxvm-recover: "cat: write error: Broken pipe" error encountered in syslog.
* 4016283 (3973202) A VVR primary node may panic due to accessing already freed memory.
* 4016291 (4002066) Panic and hang seen during reclaim
* 4016768 (3989161) A system panic occurs while handling log requests from vxloggerd.
* 4017194 (4012681) If vradmind process terminates due to some reason, it is not properly restarted by RVG agent of VCS.
Patch ID: VRTSvxvm-7.4.1.1800
* 3991538 (3989949) SLES15 SP1 support for VxVM
* 3992902 (3975667) Soft lockup in the vol_ioship_sender kernel thread
Patch ID: VRTSvcsag-7.4.1.2800
* 3984343 (3982300) A warning message related to the process priority is logged in the ProcessOnOnly agent log every minute.
* 4006950 (4006979) When the AzureDisk resource comes online on a cluster node, it goes into the UNKNOWN state on all the other nodes.
* 4009762 (4009761) A lower NFSRestart resource fails to come online within the duration specified in OnlineTimeout when the share directory for NFSv4 lock state information contains millions of small files.
* 4016488 (4007764) The NFS locks related log file is flooded with the "sync_dir:copy failed for link" error messages.
* 4016625 (4016624) When a disk group is forcibly imported with ClearClone enabled, different DGIDs are assigned to the associated disks.
Patch ID: VRTSvcs-7.4.1.2800
* 3995684 (3995685) Discrepancy between the engine log messages at the PR and DR sites in a GCO configuration.
* 4012318 (4012518) The gcoconfig command does not accept "." in the interface name.
Patch ID: VRTSvxfen-7.4.1.2800
* 4000746 (4000745) The VxFEN process fails to start due to late discovery of the VxFEN disk group.
Patch ID: VRTSamf-7.4.1.2800
* 4019003 (4018791) A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.
Patch ID: VRTSamf-7.4.1.1800
* 3992092 (3992044) Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 1 (SLES 15 SP1).
Patch ID: VRTSgab-7.4.1.2800
* 4016487 (4007726) When a GAB message that is longer than the value specified by GAB_MAX_MSGSIZE is transferred, an error message is added to the VCS logs. However, the error message is not sufficiently descriptive.
Patch ID: VRTSgab-7.4.1.1800
* 3992091 (3992044) Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 1 (SLES 15 SP1).
Patch ID: VRTSllt-7.4.1.2800
* 3999398 (3989440) The dash (-) in the device name may cause the LLT link configuration to fail.
* 4002584 (3994996) Added the -H miscellaneous flag to support new functionality in lltconfig, and a tunable to allow skb allocation with the SLEEP flag.
* 4003442 (3983418) In a rare case, after a panic or a reboot of a node, it may fail to join the CVM master due to an inconsistent LLT port state on the master.
Patch ID: VRTSllt-7.4.1.1800
* 3992045 (3992044) Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 1 (SLES 15 SP1).
Patch ID: VRTSglm-7.4.1.2800
* 4014715 (4011596) Multiple issues were observed during glmdump using hacli for communication
Patch ID: VRTSglm-7.4.1.1600
* 3991390 (3991389) GLM module failed to load on SLES15 SP1
Patch ID: VRTSodm-7.4.1.2800
* 4020803 (4020800) The VRTSodm-7.4.1 module is unable to load on SLES15 SP1.
Patch ID: VRTSodm-7.4.1.1700
* 3991388 (3991387) ODM module failed to load on SLES15 SP1


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxfs-7.4.1.2800

* 3976693 (Tracking ID: 4016085)

SYMPTOM:
The fsdb command "xxxiau" shows garbage values instead of the correct information.

DESCRIPTION:
The command dumps the information of device 0 even when the information of another device (for example, device 1 or 2) is requested.

RESOLUTION:
Updated the curpos pointer to point to the correct device as needed.

* 3983165 (Tracking ID: 3975019)

SYMPTOM:
Under I/O load with NFS v4 using NFS leases, the system may panic with the below message.
Kernel panic - not syncing: GAB: Port h halting system due to client process failure

DESCRIPTION:
NFS v4 uses a lease per file. This delegation can be taken in RD or RW mode and can be released conditionally. For CFS, we release such a delegation from a specific node while the inode is being normalized (that is, losing ownership). This can race with another setlease operation on the same node and end up in a deadlock on ->i_lock.

RESOLUTION:
Code changes are made to disable the lease.

* 4004182 (Tracking ID: 4004181)

SYMPTOM:
VxFS internally maintains a compliance clock; without an API, the user cannot read its value.

DESCRIPTION:
VxFS internally maintains a compliance clock, but there was no API through which a user could read its value.

RESOLUTION:
Provide an API on the mount point to read the compliance clock for that filesystem.

* 4004927 (Tracking ID: 3983350)

SYMPTOM:
Inodes are allocated without pushing the ilist extent.

DESCRIPTION:
Multiple messages can be sent from vx_cfs_ilist_push for inodes that are in the same block. On the receiver side, that is, the primary node, vx_find_iposition() may return bno VX_OVERLAY and btranid 0 until someone actually does the push. All of these get serialized in vx_ilist_pushino() on VX_IRWLOCK_RANGE and VX_CFS_IGLOCK. The first one does the push and sets btranid to the last commit ID. As btranid is non-null, vx_recv_ilist_push_msg() waits for vx_tranidflush() to flush the transaction to disk. The other receiver threads do not do the push and have tranid 0, so they return success without waiting for the transactions to be flushed to disk. Now, if the filesystem gets disabled while flushing, we end up in an inconsistent state, because some of the inodes have actually returned success and marked this block as pushed in-core on the secondary.

RESOLUTION:
If the block is pushed or pulled and tranid is 0, look up the ilist extent containing the inode again. This populates the correct tranid from ilptranid, and the thread waits for the transaction flush.

* 4014718 (Tracking ID: 4011596)

SYMPTOM:
The man page is missing details about a supported feature.

DESCRIPTION:
The glmdump man page needs to document the new "-h" option, which uses the hacli utility for communication across the nodes in the cluster.

RESOLUTION:
Added the details about the feature supported by glmdump to the man page.
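
A hypothetical invocation using the new option might look like the following; the exact syntax and any additional arguments are documented in the glmdump man page:

    # glmdump -h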

* 4015824 (Tracking ID: 4015278)

SYMPTOM:
System panics during vx_uiomove_by_hand.

DESCRIPTION:
During uiomove, VxFS gets the pages from the OS through get_user_pages() to copy user data. Oracle uses hugetlbfs internally for performance reasons, which can allocate hugepages. Under low-memory conditions, it is possible that get_user_pages() returns compound pages. In the case of compound pages, only the head page has a valid mapping set, and all other pages are marked as TAIL_MAPPING. During uiomove, if VxFS gets a compound page, it tries to check the writable mapping for all pages from this compound page. This can result in dereferencing an illegal address (TAIL_MAPPING), which was causing the panic. VxFS does not support hugepages, but it is possible that a compound page is present on the system and VxFS gets one through get_user_pages().

RESOLUTION:
The code is modified to use the head page when tail pages of a compound page are encountered while VxFS checks for a writable mapping.

* 4016077 (Tracking ID: 4009328)

SYMPTOM:
In a cluster filesystem, if smap corruption is seen and the smap is marked bad, a hang could occur while unmounting the filesystem.

DESCRIPTION:
While freeing an extent in vx_extfree1() for logversion >= VX_LOGVERSION13, if we are freeing whole AUs, we set the VX_AU_SMAPFREE flag for those AUs. This ensures that the revoke of delegation for that AU is delayed until the AU has an SMAP free transaction in progress. This flag gets cleared either in the post-commit/undo processing of the transaction or during error handling in vx_extfree1(). In one scenario, when we are trying to free a whole AU whose smap is marked bad, we neither return any error to vx_extfree1() nor add the subfunction to free the extent to the transaction. So, the VX_AU_SMAPFREE flag is not cleared and remains set even though there is no SMAP free transaction in progress. This could lead to a hang while unmounting the cluster filesystem.

RESOLUTION:
Code changes have been made to add error handling in vx_extfree1() to clear the VX_AU_SMAPFREE flag in the case where an error is returned due to a bad smap.

* 4016082 (Tracking ID: 4000465)

SYMPTOM:
The fsck binary loops when it detects a break in the sequence of log IDs.

DESCRIPTION:
When a filesystem is not cleanly unmounted, it ends up with an unflushed intent log. This intent log is flushed either during the next mount or when fsck is run on the filesystem. To build the transaction list that needs to be replayed, VxFS currently uses a binary search to find the head and tail. But if there is a break in the intent log, the current code is susceptible to looping. To avoid this loop, VxFS now uses a sequential search to find the range instead of a binary search.

RESOLUTION:
The code is modified to use a sequential search instead of a binary search to find the replayable transaction range.

Patch ID: VRTSvxfs-7.4.1.1700

* 3991386 (Tracking ID: 3991385)

SYMPTOM:
VxFS module failed to load on SLES15 SP1

DESCRIPTION:
SLES15 SP1 is a new release, and it has some kernel changes that caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on SLES15 SP1.

Patch ID: VRTSsfmh-vom-HF0741500

* 4020130 (Tracking ID: 4020129)

SYMPTOM:
N/A

DESCRIPTION:
VIOM Agent for InfoScale 7.4.1 Update 4

RESOLUTION:
N/A

Patch ID: VRTSvlic-4.01.74.005

* 3991264 (Tracking ID: 3991265)

SYMPTOM:
Security vulnerabilities in the old version of Java.

DESCRIPTION:
The latest JRE version, 1.8.0_271, is bundled in the VRTSvlic package.

RESOLUTION:
A VRTSvlic patch providing the Java upgrade is included in the InfoScale 7.4.1 U4 patch release.
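
After the patch is applied, the installed package version (4.01.74.005, per the Patch ID list above) can be verified with a standard query, for example:

    # rpm -q VRTSvlic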

Patch ID: VRTSvxvm-7.4.1.2800

* 3984155 (Tracking ID: 3976678)

SYMPTOM:
vxvm-recover: "cat: write error: Broken pipe" error encountered in syslog multiple times.

DESCRIPTION:
Due to a bug in the vxconfigbackup script, which is started by vxvm-recover, "cat: write error: Broken pipe" is encountered in syslog
and is reported under vxvm-recover. In the vxconfigbackup code, multiple subshells are created in a function call, and the first subshell is for the cat command. When a particular if condition is satisfied, return is called, exiting the later subshells even when there is data to be read in the created cat subshell, which results in the broken pipe error.

RESOLUTION:
Changes have been made in the VxVM code to handle the broken pipe error.

* 4016283 (Tracking ID: 3973202)

SYMPTOM:
A VVR primary node may panic with the below stack due to accessing already-freed memory:
nmcom_throttle_send()
nmcom_sender()
kthread ()
kernel_thread()

DESCRIPTION:
After sending the data to VVR (Veritas Volume Replicator) secondary site, the code was accessing some variables for which the memory was already released due to the data ACK getting processed quite early. This was a rare race condition which may happen due to accessing the freed memory.

RESOLUTION:
Code changes have been made to avoid the incorrect memory access.

* 4016291 (Tracking ID: 4002066)

SYMPTOM:
System panics with the below stack during reclaim:
__wake_up_common_lock+0x7c/0xc0
sbitmap_queue_wake_all+0x43/0x60
blk_mq_tag_wakeup_all+0x15/0x30
blk_mq_wake_waiters+0x3d/0x50
blk_set_queue_dying+0x22/0x40
blk_cleanup_queue+0x21/0xd0
vxvm_put_gendisk+0x3b/0x120 [vxio]
volsys_unset_device+0x1d/0x30 [vxio]
vol_reset_devices+0x12b/0x180 [vxio]
vol_reset_kernel+0x16c/0x220 [vxio]
volconfig_ioctl+0x866/0xdf0 [vxio]

DESCRIPTION:
With recent kernels, the kernel is expected to return the pre-allocated sense buffer. These sense buffer pointers are supposed to remain unchanged across multiple uses of a request; they are pre-allocated and expected to be unchanged until the request memory is freed. DMP overwrote the original sense buffer, hence the issue.

RESOLUTION:
Code changes have been made to avoid tampering the pre-allocated sense buffer.

* 4016768 (Tracking ID: 3989161)

SYMPTOM:
A system panic occurs because of a hard lockup, with the following stack:

#13 [ffff9467ff603860] native_queued_spin_lock_slowpath at ffffffffb431803e
#14 [ffff9467ff603868] queued_spin_lock_slowpath at ffffffffb497a024
#15 [ffff9467ff603878] _raw_spin_lock_irqsave at ffffffffb4988757
#16 [ffff9467ff603890] vollog_logger at ffffffffc105f7fa [vxio]
#17 [ffff9467ff603918] vol_rv_update_childdone at ffffffffc11ab0b1 [vxio]
#18 [ffff9467ff6039f8] volsiodone at ffffffffc104462c [vxio]
#19 [ffff9467ff603a88] vol_subdisksio_done at ffffffffc1048eef [vxio]
#20 [ffff9467ff603ac8] volkcontext_process at ffffffffc1003152 [vxio]
#21 [ffff9467ff603b10] voldiskiodone at ffffffffc0fd741d [vxio]
#22 [ffff9467ff603c40] voldiskiodone_intr at ffffffffc0fda92b [vxio]
#23 [ffff9467ff603c80] voldmp_iodone at ffffffffc0f801d0 [vxio]
#24 [ffff9467ff603c90] bio_endio at ffffffffb448cbec
#25 [ffff9467ff603cc0] gendmpiodone at ffffffffc0e4f5ca [vxdmp]
... ...
#50 [ffff9497e99efa60] do_page_fault at ffffffffb498d975
#51 [ffff9497e99efa90] page_fault at ffffffffb4989778
#52 [ffff9497e99efb40] conv_copyout at ffffffffc10005da [vxio]
#53 [ffff9497e99efbc8] conv_copyout at ffffffffc100044e [vxio]
#54 [ffff9497e99efc50] volioctl_copyout at ffffffffc1032db3 [vxio]
#55 [ffff9497e99efc80] vol_get_logger_data at ffffffffc105e4ce [vxio]
#56 [ffff9497e99efcf8] voliot_ioctl at ffffffffc105e66b [vxio]
#57 [ffff9497e99efd78] volsioctl_real at ffffffffc10aee82 [vxio]
#58 [ffff9497e99efe50] vols_ioctl at ffffffffc0646452 [vxspec]
#59 [ffff9497e99efe70] vols_unlocked_ioctl at ffffffffc06464c1 [vxspec]
#60 [ffff9497e99efe80] do_vfs_ioctl at ffffffffb4462870
#61 [ffff9497e99eff00] sys_ioctl at ffffffffb4462b21

DESCRIPTION:
The vxio kernel module sends a signal to vxloggerd to flush the log as it is almost full. Vxloggerd calls into the vxio kernel to copy the log buffer out. As vxio copies the log data from kernel space to user space while holding a spinlock, if a page fault occurs during the copy-out, a hard lockup and panic occur.

RESOLUTION:
Code changes have been made to fix the problem.

* 4017194 (Tracking ID: 4012681)

SYMPTOM:
If vradmind process terminates due to some reason, it is not properly restarted by RVG agent of VCS.

DESCRIPTION:
The RVG (Replicated Volume Group) agent of VCS (Veritas Cluster Server) restarts the vradmind process if it gets killed or terminated
for some reason; this was not working properly on systemd-enabled platforms like RHEL-7.
On systemd-enabled platforms, after the vradmind process dies, the vras-vradmind service used to stay in the active/running state. Due to this, even
after the RVG agent issued a command to start the vras-vradmind service, the vradmind process was not getting started.

RESOLUTION:
The code is modified to fix the parameters for the vras-vradmind service, so that the service status changes to failed/faulted if the vradmind process gets killed.
The service can be started manually later, or the RVG agent of VCS can start the service, which starts the vradmind process as well.
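
For example, on a systemd-enabled node, the service state can be inspected and the service restarted manually if required; this is a sketch that assumes the systemd unit is named vras-vradmind.service, matching the service name above:

    # systemctl status vras-vradmind.service
    # systemctl start vras-vradmind.service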

Patch ID: VRTSvxvm-7.4.1.1800

* 3991538 (Tracking ID: 3989949)

SYMPTOM:
The existing package failed to load on a SLES15 SP1 server.

DESCRIPTION:
SLES15 SP1 is a new release, and hence the VxVM module is compiled with this new kernel, along with a few other MQ changes.

RESOLUTION:
Changes have been made to keep the MQ code under a single flag.

* 3992902 (Tracking ID: 3975667)

SYMPTOM:
NMI watchdog: BUG: soft lockup

DESCRIPTION:
When flow control on the ioshipping channel is set, there is a window in the code where the vol_ioship_sender kernel thread can go into a tight loop.
This causes the soft lockup.

RESOLUTION:
Relinquish the CPU to schedule other processes. The vol_ioship_sender() thread will restart after some delay.

Patch ID: VRTSvcsag-7.4.1.2800

* 3984343 (Tracking ID: 3982300)

SYMPTOM:
A warning message related to the process priority is logged in the ProcessOnOnly agent log every minute.

DESCRIPTION:
This issue occurs because the value of the Priority attribute of processes monitored by the ProcessOnOnly agent did not match the actual process priority value. As part of the Monitor function, if the priority of a process is found to be different from the value that is configured for the Priority attribute, warning messages are logged in the following scenarios:
1. The process is started outside VCS control with a different priority.
2. The priority of the process is changed after it is started by VCS.

RESOLUTION:
The ProcessOnOnly agent is updated to set the current value of the priority of a process to the Priority attribute if these values are found to be different.

* 4006950 (Tracking ID: 4006979)

SYMPTOM:
When the AzureDisk resource comes online on a cluster node, it goes into the UNKNOWN state on all the other nodes.

DESCRIPTION:
When an AzureDisk resource is online on one node, the status of that resource appears as UNKNOWN, instead of OFFLINE, on the other nodes in the cluster. Also, if the resource is brought online on a different node, its status on the remaining nodes appears as UNKNOWN. However, if the resource is not online on any node, its status correctly appears as OFFLINE on all the nodes.
This issue occurs when the VM name on the Azure portal does not match the local hostname of the cluster node. The monitor operation of the agent compares these two values to identify whether the VM to which the AzureDisk resource is attached is part of a cluster or not. If the values do not match, the agent incorrectly concludes that the resource is attached to a VM outside the cluster. Therefore, it displays the status of the resource as UNKNOWN.

RESOLUTION:
The AzureDisk agent is modified to compare the VM name with the appropriate attribute of the agent so that the status of an AzureDisk resource is reported correctly.

* 4009762 (Tracking ID: 4009761)

SYMPTOM:
A lower NFSRestart resource fails to come online within the duration specified in OnlineTimeout when the share directory for NFSv4 lock state information contains millions of small files.

DESCRIPTION:
As part of the Online operation, the NFSRestart agent copies the NFSv4 state data of clients from the shared storage to the local path. However, if the source location contains millions of files, some of which may be stale, their movement may not be completed before the operation times out.

RESOLUTION:
A new action entry point named "cleanup" is provided, which removes stale files. The usage of the entry point is as follows:
$ hares -action <resname> cleanup -actionargs <days> -sys <sys>
  <days>: number of days, deleting files that are <days> old
Example:
$ hares -action NFSRestart_L cleanup -actionargs 30 -sys <sys>
The cleanup action ensures that files older than the number of days specified in the -actionargs option are removed; the minimum expected duration is 30 days. Thus, only the relevant files to be moved remain, and the Online operation is completed in time.

* 4016488 (Tracking ID: 4007764)

SYMPTOM:
The NFS locks related log file is flooded with the "sync_dir:copy failed for link" error messages.

DESCRIPTION:
The smsyncd daemon used by the NFSRestart agent copies the symbolic links and the NFS locks from the /var/statmon/sm directory to a specific directory. These files and links are used to track the clients that have set a lock on the NFS mount points. If this directory already has a symbolic link with the same name that the smsyncd daemon is trying to copy, the /bin/cp command fails and logs an error message.

RESOLUTION:
The smsyncd daemon is enhanced to copy the symbolic links even if the link with same name is present.

* 4016625 (Tracking ID: 4016624)

SYMPTOM:
When a disk group is forcibly imported with ClearClone enabled, different DGIDs are assigned to the associated disks.

DESCRIPTION:
When the ForceImport option is used, a disk group gets imported with the available disks, regardless of whether all the required disks are available or not. In such a scenario, if the ClearClone attribute is enabled, the available disks are successfully imported, but their DGIDs are updated to new values. Thus, the disks within the same disk group end up with different DGIDs, which may cause issues with the functioning of the storage configuration.

RESOLUTION:
The DiskGroup agent is updated to allow the ForceImport and the ClearClone attributes to be set to the following values as per the configuration requirements. ForceImport can be set to 0 or 1. ClearClone can be set to 0, 1, or 2. ClearClone is disabled when set to 0 and enabled when set to 1 or 2. ForceImport is disabled when set to 0 and is ignored when ClearClone is set to 1. To enable both, ClearClone and ForceImport, set ClearClone to 2 and ForceImport to 1.
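
For example, to enable both ClearClone and ForceImport on a DiskGroup resource, commands along the following lines could be used (the resource name is a placeholder; the attribute names and values are as described above):

    # haconf -makerw
    # hares -modify <resname> ClearClone 2
    # hares -modify <resname> ForceImport 1
    # haconf -dump -makero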

Patch ID: VRTSvcs-7.4.1.2800

* 3995684 (Tracking ID: 3995685)

SYMPTOM:
A discrepancy was observed between the VCS engine log messages at the primary site and those at the DR site in a GCO configuration.

DESCRIPTION:
If a resource that was online at the primary site is taken offline outside VCS control, the VCS engine logs messages related to the unexpected change in the state of the resource (successful Clean entry point execution, and so on). The messages clearly indicate that the resource is faulted. However, the VCS engine does not log any debugging error messages regarding the fault at the primary site, but instead logs them at the DR site. Consequently, there is a discrepancy between the engine log messages at the primary site and those at the DR site.

RESOLUTION:
The VCS engine module is updated to log the appropriate debugging error messages at the primary site when a resource goes into the Faulted state.

FILE / VERSION:
had.exe / 7.4.10004.0
hacf.exe / 7.4.10004.0
haconf.exe  / 7.4.10004.0

* 4012318 (Tracking ID: 4012518)

SYMPTOM:
The gcoconfig command does not accept "." in the interface name.

DESCRIPTION:
The naming guidelines for network interfaces allow the "." character to be included as part of the name string. However, if this character is included, the gcoconfig command returns an error stating that the NIC name is invalid.

RESOLUTION:
This hotfix updates the gcoconfig command code to allow the inclusion of the "." character when providing interface names.
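
As an illustration, after this fix the command can be run as usual and given a VLAN-tagged interface name such as eth0.100 without the NIC name being rejected (the path below is the usual VCS binary location, and the interface name is hypothetical):

    # /opt/VRTSvcs/bin/gcoconfig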

Patch ID: VRTSvxfen-7.4.1.2800

* 4000746 (Tracking ID: 4000745)

SYMPTOM:
The VxFEN process fails to start due to late discovery of the VxFEN disk group.

DESCRIPTION:
When I/O fencing starts, the VxFEN startup script creates the /etc/vxfentab file on each node. During disk-based fencing, the VxVM module may take a longer time to discover the VxFEN disk group. Because of this delay, the 'generate disk list' operation times out. Therefore, the VxFEN process fails to start and reports the following error: 'ERROR: VxFEN cannot generate vxfentab because vxfendg does not exist'

RESOLUTION:
A new tunable, getdisks_timeout, is introduced to specify the timeout value for the VxFEN disk group discovery. The maximum and default value for this tunable is 600 seconds. You can set the value of this tunable by adding a getdisks_timeout=<time_in_sec> entry in the /etc/vxfenmode file.
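
For example, to allow up to 300 seconds for the disk group discovery, an entry such as the following could be added to /etc/vxfenmode (the value shown is illustrative; values above the 600-second maximum are not valid):

    getdisks_timeout=300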

Patch ID: VRTSamf-7.4.1.2800

* 4019003 (Tracking ID: 4018791)

SYMPTOM:
A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.

DESCRIPTION:
A cluster node panics and generates a core dump, which indicates an issue with the AMF module. The AMF module function that locates an executable binary or a script using its absolute path fails to handle NULL values.

RESOLUTION:
The AMF module is updated to handle NULL values when locating an executable binary or a script using its absolute path.

Patch ID: VRTSamf-7.4.1.1800

* 3992092 (Tracking ID: 3992044)

SYMPTOM:
Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 1 (SLES 15 SP1).

DESCRIPTION:
Veritas InfoScale Availability did not support SUSE Linux Enterprise Server 15
Service Pack 1 (SLES 15 SP1), which was released after the initial SLES 15 release.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP1 is
now introduced.

Patch ID: VRTSgab-7.4.1.2800

* 4016487 (Tracking ID: 4007726)

SYMPTOM:
When a GAB message that is longer than the value specified by GAB_MAX_MSGSIZE is transferred, an error message is added to the VCS logs. However, the error message is not sufficiently descriptive.

DESCRIPTION:
The current error message does not mention the type of the GAB message that was transferred and the port that was used to transfer the message. Thus, the error message is not useful for troubleshooting.

RESOLUTION:
This hotfix addresses the issue by enhancing the error message that is logged. It now mentions whether the message type was DIRECTED or BROADCAST, and also the port number that was used to transfer the GAB message.

Patch ID: VRTSgab-7.4.1.1800

* 3992091 (Tracking ID: 3992044)

SYMPTOM:
Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 1 (SLES 15 SP1).

DESCRIPTION:
Veritas InfoScale Availability did not support SUSE Linux Enterprise Server 15
Service Pack 1 (SLES 15 SP1), which was released after the initial SLES 15 release.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP1 is
now introduced.

Patch ID: VRTSllt-7.4.1.2800

* 3999398 (Tracking ID: 3989440)

SYMPTOM:
The dash (-) in the device name may cause the LLT link configuration to fail.

DESCRIPTION:
While configuring LLT links, if the LLT module finds a dash in the device name, it assumes that the device name is in the 'eth-<mac-address>' format and considers the string after the dash to be the MAC address. However, if the user specifies an interface name that includes a dash, the string after the dash is not intended to be a MAC address. In such a case, the LLT link configuration fails.

RESOLUTION:
The LLT module is updated to check for the string 'eth-' before validating the device name with the 'eth-<mac-address>' format. If the string 'eth-' is not found, LLT assumes the name to be an interface name.
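
As an illustration, both of the following /etc/llttab link directives are now handled correctly (the device names are hypothetical; see the llttab manual page for the exact link directive syntax):

    link link1 eth-00:11:22:33:44:55 - ether - -
    link link2 ens1f0-storage - ether - -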

* 4002584 (Tracking ID: 3994996)

SYMPTOM:
The -H miscellaneous flag is added to support new functionality in lltconfig, including a tunable that allows skb allocation with the SLEEP flag.

DESCRIPTION:
The -H miscellaneous flag is added as an umbrella under which new lltconfig functionality can be grouped, because very few letters are left to assign one to each new functionality.

RESOLUTION:
Under the -H flag:
1. Added a tunable to allow skb allocation with the SLEEP flag, in case memory is scarce.
2. Added an skb_alloc failure count to the lltstat output.

* 4003442 (Tracking ID: 3983418)

SYMPTOM:
In a rare case, after a panic or a reboot of a node, it may fail to join the CVM master due to an inconsistent LLT port state on the master.

DESCRIPTION:
When a node tries to join the cluster after a reboot or a panic, in a rare case, on one of the remaining nodes the port state of CVM or any other port may be in an inconsistent state with respect to LLT.

RESOLUTION:
This hotfix updates the LLT module to fix the issue by not accepting a particular type of packet when not connected to the remote node; it also adds more states to be logged into the LLT circular buffer.

Patch ID: VRTSllt-7.4.1.1800

* 3992045 (Tracking ID: 3992044)

SYMPTOM:
Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 1 (SLES 15 SP1).

DESCRIPTION:
Veritas InfoScale Availability did not support SUSE Linux Enterprise Server 15
Service Pack 1 (SLES 15 SP1), which was released after the initial SLES 15 release.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP1 is
now introduced.

Patch ID: VRTSglm-7.4.1.2800

* 4014715 (Tracking ID: 4011596)

SYMPTOM:
The glmdump command throws an error saying "No such file or directory present".

DESCRIPTION:
A bug was observed during parallel communication between all the nodes: some required temp files were not present on the other nodes.

RESOLUTION:
Fixed to maintain consistency during parallel node communication; hacp is used for transferring the temp files.

Patch ID: VRTSglm-7.4.1.1600

* 3991390 (Tracking ID: 3991389)

SYMPTOM:
GLM module failed to load on SLES15 SP1

DESCRIPTION:
SLES15 SP1 is a new release, and it has some kernel changes that caused the GLM module to fail to load on it.

RESOLUTION:
Added code to support GLM on SLES15 SP1.

Patch ID: VRTSodm-7.4.1.2800

* 4020803 (Tracking ID: 4020800)

SYMPTOM:
The VRTSodm-7.4.1 module is unable to load on SLES15 SP1.

DESCRIPTION:
VRTSodm needs to be recompiled due to recent changes in the VRTSvxfs
header files, because of which some symbols are not being resolved.

RESOLUTION:
Recompiled VRTSodm with the new changes in the VRTSvxfs header files.

Patch ID: VRTSodm-7.4.1.1700

* 3991388 (Tracking ID: 3991387)

SYMPTOM:
ODM module failed to load on SLES15 SP1

DESCRIPTION:
SLES15 SP1 is a new release, and it has some kernel changes that caused the ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on SLES15 SP1.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sles15_x86_64-Patch-7.4.1.2700.tar.gz to /tmp
2. Untar infoscale-sles15_x86_64-Patch-7.4.1.2700.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-sles15_x86_64-Patch-7.4.1.2700.tar.gz
    # tar xf /tmp/infoscale-sles15_x86_64-Patch-7.4.1.2700.tar
3. Install the hotfix. (Note that the installation of this P-Patch will cause downtime.)
    # cd /tmp/hf
    # ./installVRTSinfoscale741P2700 [<host1> <host2>...]
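
After installation, the updated package versions can be checked against the "Patch ID" list at the top of this document, for example:

    # rpm -qa | grep VRTS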

You can also install this patch together with the 7.4.1 base release using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.4.1 directory and invoke the installer script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE

