
8.0.2 U2 Component Patch for Solaris

Patch

Abstract

Component Patch IS-8.0.2U2 for Solaris platform

Description

This patch provides a VxVM patch on top of the IS-8.0.2 Update 2 patch for the Solaris platform. This patch should be installed on IS-8.0.2 GA plus the latest UNIX cumulative patch released for IS-8.0.2.
 

In this case, the latest UNIX cumulative patch on IS-8.0.2 is IS-8.0.2 Update2 (Unix) on the Solaris11_sparc platform (patch version: InfoScale 8.0.2.1800).

 

SORT ID:  21584

 

Patch IDs:

VRTSvxvm-8.0.2.1800 for VRTSvxvm

 

                          * * * READ ME * * *
                * * * Veritas Volume Manager 8.0.2 * * *
                         * * * Patch 1800 * * *
                         Patch Date: 2024-10-10


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH
   * KNOWN ISSUES


PATCH NAME
----------
Veritas Volume Manager 8.0.2 Patch 1800


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Solaris 11 SPARC


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Enterprise 8.0.2
   * InfoScale Foundation 8.0.2
   * InfoScale Storage 8.0.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 8.0.2.1800
* 4177399 (4154817) Upgrade of InfoScale failing on Solaris 11.4 with error "Failed to turn off dmp_native_support".
* 4177400 (4173284) dmpdr command failing.
* 4177791 (4167359) EMC DeviceGroup missing SRDF SYMDEV leads to DG corruption.
* 4177793 (4168665) use_hw_replicatedev logic unable to import CVMVolDg resource unless vxdg -c is specified after EMC SRDF devices are closed and rescan on CVM Master.
* 4137995 (4117350)  Import operation on disk group created on Hitachi ShadowImage (SI) disks is failing.
* 4153377 (4152445)  Replication failed to start due to vxnetd threads not running 
* 4153566 (4090410)  VVR secondary node panics during replication. 
* 4153570 (4134305)  Collecting ilock stats for admin SIO causes buffer overrun. 
* 4153597 (4146424)  CVM Failed to join after power off and Power on from ILO 
* 4153874 (4010288)  [Cosmote][NBFS]ECV:DR:Replace Node on Primary failed with error"Rebuild data for faulted node failed" 
* 4154104 (4142772)  Error mask NM_ERR_DCM_ACTIVE on rlink may not be cleared resulting in the rlink being unable to get into DCM again. 
* 4154107 (3995831)  System hung: A large number of SIOs got queued in FMR. 
* 4155091 (4118510)  Volume manager tunable to control log file permissions 
* 4155719 (4154921)  system is stuck in zio_wait() in FC-IOV environment after reboot the primary control domain when dmp_native_support is on. 
* 4157012 (4145715)  Secondary SRL log error while reading data from log 
* 4157643 (4159198)  vxfmrmap coredump. 
* 4158517 (4159199)  AIX 7.3 TL2 - Memory fault(coredump) while running "./scripts/admin/vxtune/vxdefault.tc" 
* 4158920 (4159680)  set_proc_oom_score: not found while /usr/lib/vxvm/bin/vxconfigbackupd gets executed 
* 4161646 (4149528)  Cluster wide hang after faulting nodes one by one 
* 4162053 (4132221)  Supportability requirement for easier path link to dmpdr utility 
* 4162055 (4116024)  machine panic due to access illegal address. 
* 4162058 (4046560)  vxconfigd aborts on Solaris if device's hardware path is too long. 
* 4162917 (4139166)  Enable VVR Bunker feature for shared diskgroups. 
* 4162966 (4146885)  Restarting syncrvg after termination will start sync from start 
* 4164114 (4162873)  disk reclaim is slow. 
* 4164250 (4154121)  add a new tunable use_hw_replicatedev to enable Volume Manager to import the hardware replicated disk group. 
* 4164252 (4159403)  add clearclone option automatically when import the hardware replicated disk group. 
* 4164254 (4160883)  clone_flag was set on srdf-r1 disks after reboot. 
* 4166881 (4164734)  Disable support for TLS1.1 
* 4166882 (4161852)  Post InfoScale upgrade, command "vxdg upgrade" succeeds but generates apparent error "RLINK is not encrypted"
* 4172377 (4172033)  Data corruption due to stale DRL agenodes 
* 4173722 (4158303)  System panic at dmpsvc_da_analyze_error+417 

Patch ID: 8.0.2.1500
* 4119267 (4113582) In VVR environments, reboot on VVR primary nodes results in RVG going into passthru mode.
* 4123065 (4113138) 'vradmin repstatus' invoked on the secondary site shows stale information
* 4123069 (4116609) VVR Secondary logowner change is not reflected with virtual hostnames.
* 4123080 (4111789) VVR does not utilize the network provisioned for it.
* 4124291 (4111254) vradmind dumps core while associating a rlink to rvg because of NULL pointer reference.
* 4124794 (4114952) With virtual hostnames, pause replication operation fails.
* 4124796 (4108913) Vradmind dumps core because of memory corruption.
* 4124889 (4090828) Enhancement to track plex att/recovery data synced in past to debug corruption
* 4125392 (4114193) 'vradmin repstatus' incorrectly shows replication status as inconsistent.
* 4125811 (4090772) vxconfigd/vx commands hung if fdisk opened secondary volume and secondary logowner panic'd
* 4128127 (4132265) Machine attached with NVMe devices may panic.
* 4128835 (4127555) Unable to configure replication using diskgroup id.
* 4129664 (4129663) Generate and add changelog in vxvm and aslapm rpm
* 4129765 (4111978) Replication failed to start due to vxnetd threads not running on secondary site.
* 4129766 (4128380) With virtual hostnames, 'vradmin resync' command may fail if invoked from DR site.
* 4130858 (4128351) System hung observed when switching log owner.
* 4130861 (4122061) Observing hung after resync operation, vxconfigd was waiting for slaves' response
* 4130947 (4124725) With virtual hostnames, 'vradmin delpri' command may hang.
* 4133930 (4100646) Recoveries of dcl objects not happening due to ATT, RELOCATE flags are set on DCL subdisks
* 4133946 (3972344) vxrecover returns an error - 'ERROR V-5-1-11150'  Volume <vol_name> not found'
* 4135127 (4134023) vxconfigrestore(Diskgroup configuration restoration) for H/W Replicated diskgroup failed.
* 4135388 (4131202) In VVR environment, changeip command may fail.
* 4136419 (4089696) In FSS environment, with DCO log attached to VVR SRL volume, reboot of the cluster may result into panic on the CVM master node.
* 4136428 (4131449) In CVR environment, the restriction of four RVGs per diskgroup has been removed.
* 4136429 (4077944) In VVR environment, application I/O operation may get hung.
* 4136859 (4117568) vradmind dumps core due to invalid memory access.
* 4136866 (4090476) SRL is not draining to secondary.
* 4136868 (4120068) A standard disk was added to a cloned diskgroup successfully which is not expected.
* 4136870 (4117957) During a phased reboot of a two node Veritas Access cluster, mounts would hang.
* 4137508 (4066310) Added BLK-MQ feature for DMP driver
* 4137615 (4087628) CVM goes into faulted state when slave node of primary is rebooted .
* 4137753 (4128271) In CVR environment, a node is not able to join the CVM cluster if RVG recovery is taking place.
* 4137757 (4136458) In CVR environment, the DCM resync may hang with 0% sync remaining.
* 4137986 (4133793) vxsnap restore failed with DCO IO errors during the operation when run in loop for multiple VxVM volumes.
* 4138051 (4090943) VVR Primary RLink cannot connect as secondary reports SRL log is full.
* 4138069 (4139703) Panic due to wrong use of OS API (HUNZA issue)
* 4138075 (4129873) In CVR environment, if CVM slave node is acting as logowner, then I/Os may hang when data volume is grown.
* 4138224 (4129489) With VxVM installed in AWS cloud environment, disk devices may intermittently disappear from 'vxdisk list' output.
* 4138236 (4134069) VVR replication was not using VxFS SmartMove feature if filesystem was not mounted on RVG Logowner node.
* 4138237 (4113240) In CVR environment, with hostname binding configured, the Rlink on VVR secondary may have incorrect VVR primary IP.
* 4138251 (4132799) No detailed error messages while joining CVM fail.
* 4138348 (4121564) Memory leak for volcred_t could be observed in vxio.
* 4138537 (4098144) vxtask list shows the parent process without any sub-tasks which never progresses for SRL volume
* 4138538 (4085404) Huge perf drop after Veritas Volume Replicator (VVR) entered Data Change Map (DCM) mode, when a large size of Storage Replicator Log (SRL) is configured.
* 4140598 (4141590) Some incidents do not appear in changelog because their cross-references are not properly processed
* 4143580 (4142054) primary master got panicked with ted assert during the run.
* 4146550 (4108235) System wide hang due to memory leak in VVR vxio kernel module
* 4150459 (4150160) Panic due to less memory allocation than required
* 4153597 (4146424) CVM Failed to join after power off and Power on from ILO
* 4158517 (4159199) AIX 7.3 TL2 - Memory fault(coredump) while running "./scripts/admin/vxtune/vxdefault.tc"
* 4158920 (4159680) set_proc_oom_score: not found while /usr/lib/vxvm/bin/vxconfigbackupd gets executed
* 4159564 (4160533) SOL 11.4 Sparc - LDOM is unbootable after enabling Dmp_native_support.
* 4161646 (4149528) Cluster wide hang after faulting nodes one by one

Patch ID: VRTSaslapm 8.0.2.1500
* 4125322 (4119950) Security vulnerabilities exists in third party components [curl and libxml].
* 4133009 (4133010) Generate and add changelog in aslapm rpm
* 4137995 (4117350) Import operation on disk group created on Hitachi ShadowImage (SI) disks is failing .


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: 8.0.2.1800

* 4177399 (Tracking ID: 4154817)

SYMPTOM:
While upgrading InfoScale on Solaris 11.4, the following error is seen while performing prestop tasks:

The following errors were discovered on the systems:
CPI ERROR V-9-40-4024 Failed to turn off dmp_native_support tunable on pnhgsls703-ldm19. Refer to Dynamic Multi-Pathing Administrator's guide to determine the
action.
VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more zpools

VxVM vxdmpadm ERROR V-5-1-15686 The following zpool(s) could not be migrated as they are not healthy -

DESCRIPTION:
The 'zpool status' command output has changed in Solaris 11.4, due to which the vxnative script is unable to parse it correctly.

RESOLUTION:
Code changes have been made to fix the problem.

* 4177400 (Tracking ID: 4173284)

SYMPTOM:
"dmpdr -o refresh" command if failing with error:
#usr/lib/vxvm/voladm.d/bin/dmpdr -o refresh
Global symbol "$mask" requires explicit package name (did you forget to declare "my $mask"?) at /usr/lib/vxvm/voladm.d/lib/Comm.pm line 186.
Global symbol "$mask" requires explicit package name (did you forget to declare "my $mask"?) at /usr/lib/vxvm/voladm.d/lib/Comm.pm line 190.
Compilation failed in require at /usr/lib/vxvm/voladm.d/bin/dmpdr line 14.
BEGIN failed--compilation aborted at /usr/lib/vxvm/voladm.d/bin/dmpdr line 14.

DESCRIPTION:
A local variable was used in the Perl code instead of a global one, because of which this issue occurs.

RESOLUTION:
Code changes have been made to fix the problem.

* 4177791 (Tracking ID: 4167359)

SYMPTOM:
An EMC DeviceGroup is missing an SRDF SYMDEV. After a disk group import, the import failed with "Disk write failure" and corrupted disk headers.

DESCRIPTION:
SRDF will not make all disks read-writable (RW) on the remote side during an SRDF failover. When an SRDF SYMDEV is missing, the missing disk in pairs on the remote side remains in a write-disabled (WD) state. This leads to write errors, which can further cause disk header corruption.

RESOLUTION:
A code change has been made to fail the disk group import if any disks in the group are detected as WD.

* 4177793 (Tracking ID: 4168665)

SYMPTOM:
use_hw_replicatedev logic unable to import CVMVolDg resource unless vxdg -c is specified after EMC SRDF devices are closed and rescan on CVM Master.

DESCRIPTION:
The use_hw_replicatedev logic was unable to import the CVMVolDg resource unless vxdg -c was specified after EMC SRDF devices were closed and rescanned on the CVM Master, because the return status of an earlier import attempt was not reset before the import was retried.

RESOLUTION:
Reset "ret" before making another attempt of dg import.

 * INCIDENT NO:4137995     TRACKING ID:4117350

SYMPTOM: The below error is observed when trying to import the disk group:

# vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg
VxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]

Please refer to system log for details. 

DESCRIPTION: The REPLICATED flag is used to identify a hardware replicated device, so to import a disk group on REPLICATED disks, the usereplicatedev option must be used. As that option was not provided, the issue was observed.

RESOLUTION: The REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.
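
For reference, the workaround suggested by the error message above (prior to this fix) is to pass the replicated-device options explicitly on import; this is a sketch based only on the message text, reusing the example disk group name:

# vxdg -c -o useclonedev=on -o usereplicatedev=only -o updateid import SIdg

With the fix, SI disks no longer carry the REPLICATED flag, so the original import command succeeds without the extra option.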

 * INCIDENT NO:4153377     TRACKING ID:4152445

SYMPTOM: Replication failed to start due to vxnetd threads not running on secondary site. 

DESCRIPTION: Vxnetd was waiting to start the "nmcomudpsrv" and "nmcomlistenserver" threads. Due to a race condition on a resource shared between those two threads, vxnetd was stuck
in a dead loop until the maximum retry count was reached.

RESOLUTION: Code changes have been made to add lock protection to avoid the race condition. 

 * INCIDENT NO:4153566     TRACKING ID:4090410

SYMPTOM: The VVR secondary node panics during replication with the following stack:
PID: 19769  TASK: ffff8fd2f619b180  CPU: 31  COMMAND: "vxiod"
 #0 [ffff8fcef196bbf0] machine_kexec at ffffffffbb2662f4
 #1 [ffff8fcef196bc50] __crash_kexec at ffffffffbb322a32
 #2 [ffff8fcef196bd20] panic at ffffffffbb9802cc
 #3 [ffff8fcef196bda0] volrv_seclog_bulk_cleanup_verification at ffffffffc09f099a [vxio]
 #4 [ffff8fcef196be18] volrv_seclog_write1_done at ffffffffc09f0a41 [vxio]
 #5 [ffff8fcef196be48] voliod_iohandle at ffffffffc0827688 [vxio]
 #6 [ffff8fcef196be88] voliod_loop at ffffffffc082787c [vxio]
 #7 [ffff8fcef196bec8] kthread at ffffffffbb2c5e61 

DESCRIPTION: This panic on secondary node is explicitly triggered when unexpected data is detected during data verification process. This is due to incorrect data sent by 
primary for a specific network failure scenario. 

RESOLUTION: The source has been changed to fix this problem on primary. 

 * INCIDENT NO:4153570     TRACKING ID:4134305

SYMPTOM: Illegal memory access is detected when an admin SIO is trying to lock a volume. 

DESCRIPTION: While locking a volume, an admin SIO is converted to an incompatible SIO, on which collecting ilock stats causes memory overrun. 

RESOLUTION: The code changes have been made to fix the problem. 

 * INCIDENT NO:4153597     TRACKING ID:4146424

SYMPTOM: CVM node join operation may hang with vxconfigd on master node stuck in following code path.
 
ioctl ()
 kernel_ioctl ()
 kernel_get_cvminfo_all ()
 send_slaves ()
 master_send_dg_diskids ()
 dg_balance_copies ()
 client_abort_records ()
 client_abort ()
 dg_trans_abort ()
 dg_check_kernel ()
 vold_check_signal ()
 request_loop ()
 main () 

DESCRIPTION: During vxconfigd level communication between master and slave nodes, if GAB returns EAGAIN,
vxconfigd code does a poll on the GAB fd. In normal circumstances, the GAB will return the poll call
with appropriate return value. If however, the poll timeout occurs (poll returning 0), it was 
erroneously treated as success and the caller assumes that message was sent, when in fact it
had failed. This resulted in the hang in the message exchange between master and slave
vxconfigd. 

RESOLUTION: The fix is to retry the send operation on the GAB fd after some delay if the poll times out in the context
of an EAGAIN or ENOMEM error. The fix is applicable to both master- and slave-side functions.

 * INCIDENT NO:4153874     TRACKING ID:4010288

SYMPTOM: On the setup, the Replace Node operation fails because the DCM log plex does not get recovered.

DESCRIPTION: This happens because the DCM log plex kstate becomes ENABLED with state RECOVER and the stale flag set on it. Plex attach expects the plex kstate to not be ENABLED to allow the attach operation, which therefore fails in this case. Due to a race, the kstate of the DCM log plex is getting set to ENABLED.

RESOLUTION: Changes have been made to detect such a problematic DCM plex state and correct it, after which the normal plex attach transactions are triggered.

 * INCIDENT NO:4154104     TRACKING ID:4142772

SYMPTOM: In case SRL overflow happens frequently, the SRL reaches 99% full but the rlink is unable to get into DCM mode.

DESCRIPTION: When starting DCM mode, the error mask NM_ERR_DCM_ACTIVE is checked to prevent duplicate triggers. This flag should have been reset after DCM mode was activated by reconnecting the rlink. Because of a race condition, the rlink reconnect may complete before DCM is activated, so the flag cannot be cleared.

RESOLUTION: The code changes have been made to fix the issue. 

 * INCIDENT NO:4154107     TRACKING ID:3995831

SYMPTOM: System hung: A large number of SIOs got queued in FMR. 

DESCRIPTION: When the I/O load is high, there may not be enough chunks available. In that case, DRL flushsio needs to drive the fwait queue, which may free up some chunks. Due to a race condition and a bug inside DRL, DRL may queue the flushsio and fail to trigger it again; DRL then ends up in a permanent hang, unable to flush the dirty regions. The queued SIOs fail to be driven further, hence the system hang.

RESOLUTION: Code changes have been made to drive SIOs which got queued in FMR. 

 * INCIDENT NO:4155091     TRACKING ID:4118510

SYMPTOM: Volume manager tunable to control log file permissions 

DESCRIPTION: With the US Presidential Executive Order 14028 compliance changes, all product log file permissions were changed to 600. The tunable "log_file_permissions" has been introduced to control the log file permissions, allowing 600 (default), 640 or 644. The tunable can be changed at install time, or at any time followed by a reboot.

RESOLUTION: Added the log_file_permissions tunable. 
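
As an illustration only, assuming the tunable is exposed through the standard vxtune interface (this document does not state the exact mechanism for changing it):

# vxtune log_file_permissions 640     (assumed vxtune usage; follow with a reboot)

600 remains the default; 640 and 644 are the other supported values.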

 * INCIDENT NO:4155719     TRACKING ID:4154921

SYMPTOM: system is stuck in zio_wait() in FC-IOV environment after reboot the primary control domain when dmp_native_support is on. 

DESCRIPTION: For various reasons, DMP might disable its subpaths. In a particular scenario, DMP might fail to reset the I/O QUIESCE flag on its subpaths, which caused I/Os to get queued in the DMP defer queue. If the upper layer, such as ZFS, kept waiting for the I/Os to complete, this bug could hang the whole system.

RESOLUTION: Code changes have been made to reset the I/O quiesce flag properly after a DMP path is disabled.

 * INCIDENT NO:4157012     TRACKING ID:4145715

SYMPTOM: Replication disconnect 

DESCRIPTION: There was an issue with dummy update handling on the secondary side when temp logging is enabled.
It was observed that the update next to a dummy update is not found on the secondary site. The dummy update
was getting written with incorrect metadata about the size of the VVR update.

RESOLUTION: Fixed the dummy update size metadata that gets written on disk.

 * INCIDENT NO:4157643     TRACKING ID:4159198

SYMPTOM: The vxfmrmap utility generated a coredump on Solaris due to a missing id in pfmt.

DESCRIPTION: The coredump was seen due to a missing id in the pfmt() call.

RESOLUTION: Added the id in the pfmt() statement.

 * INCIDENT NO:4158517     TRACKING ID:4159199

SYMPTOM: coredump was being generated while running the TC "./scripts/admin/vxtune/vxdefault.tc" on AIX 7.3 TL2
gettimeofday(??, ??) at 0xd02a7dfc
get_exttime(), line 532 in "vm_utils.c"
cbr_cmdlog(argc = 2, argv = 0x2ff224e0, a_client_id = 0), line 275 in "cbr_cmdlog.c"
main(argc = 2, argv = 0x2ff224e0), line 296 in "vxtune.c" 

DESCRIPTION: Passing a NULL parameter to the gettimeofday function was causing the coredump.

RESOLUTION: Code changes have been made to pass a timeval parameter instead of NULL to the gettimeofday function.

 * INCIDENT NO:4158920     TRACKING ID:4159680

SYMPTOM: 0 Fri Apr  5 20:32:30 IST 2024 + read bd_dg bd_dgid
         0 Fri Apr  5 20:32:30 IST 2024 +          0 Fri Apr  5 20:32:30 IST 2024 first_time=1
+ clean_tempdir
         0 Fri Apr  5 20:32:30 IST 2024 + whence -v set_proc_oom_score
         0 Fri Apr  5 20:32:30 IST 2024 set_proc_oom_score not found
         0 Fri Apr  5 20:32:30 IST 2024 +          0 Fri Apr  5 20:32:30 IST 2024 1> /dev/null
+ set_proc_oom_score 17695012
         0 Fri Apr  5 20:32:30 IST 2024 /usr/lib/vxvm/bin/vxconfigbackupd[295]: set_proc_oom_score:  not found
         0 Fri Apr  5 20:32:30 IST 2024 + vxnotify 

DESCRIPTION: type set_proc_oom_score &>/dev/null && set_proc_oom_score $$

Here the stdout and stderr streams are not getting redirected to /dev/null, because "&>" is not POSIX-compatible.
">out 2>&1" is a POSIX-compliant way to redirect both standard output and standard error to out. It also works in pre-POSIX Bourne shells.

RESOLUTION: The code changes have been made to fix the problem.
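
A minimal sketch of the change implied by the description (the exact edit made inside vxconfigbackupd is not shown in this document):

# "&>" is a bash-ism; other POSIX shells do not redirect stderr here:
type set_proc_oom_score &>/dev/null && set_proc_oom_score $$

# POSIX-compliant form redirecting both stdout and stderr:
type set_proc_oom_score >/dev/null 2>&1 && set_proc_oom_score $$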

 * INCIDENT NO:4161646     TRACKING ID:4149528

SYMPTOM: Vxconfigd and vx commands hang. The vxconfigd stack is seen as follows.

        volsync_wait
        volsiowait
        voldco_read_dco_toc
        voldco_await_shared_tocflush
        volcvm_ktrans_fmr_cleanup
        vol_ktrans_commit
        volconfig_ioctl
        volsioctl_real
        vols_ioctl
        vols_unlocked_ioctl
        do_vfs_ioctl
        ksys_ioctl
        __x64_sys_ioctl
        do_syscall_64
        entry_SYSCALL_64_after_hwframe 

DESCRIPTION: There is a hang in the CVM reconfig and DCO-TOC protocol. This results in vxconfigd and VxVM commands hanging.
In case of overlapping reconfigs, it is possible that the rebuild seqno on the master and slave end up having different values.
At this point, if some DCO-TOC protocol is also in progress, the protocol gets hung due to the difference in the rebuild
seqno (messages are dropped).

One can find messages similar to following in the /etc/vx/log/logger.txt on master node. We can see the mismatch in 
the rebuild seqno in the two messages.  Look at the strings -  "rbld_seq: 1" "fsio-rbld_seqno: 0". The seqno received
from slave is 1 and the one present on master is 0.

    Jan 16 11:57:56:329170 1705386476329170 38ee  FMR dco_toc_req: mv: masterfsvol1-1  rcvd req withold_seq: 0  rbld_seq: 1
    Jan 16 11:57:56:329171 1705386476329171 38ee  FMR dco_toc_req: mv: masterfsvol1-1  pend rbld, retry rbld_seq: 1  fsio-rbld_seqno: 0  old: 0  cur: 3  new: 3 
flag: 0xc10d  st 

RESOLUTION: Instead of using the rebuild seqno to determine whether the DCO-TOC protocol is running in the same reconfig, the
reconfig seqno is used as the rebuild seqno. Since the reconfig seqno is the same on all nodes in the cluster, the DCO-TOC
protocol will find a consistent rebuild seqno during CVM reconfig and will not result in some node dropping
the DCO-TOC protocol messages.
A CVM protocol version check has been added while using the reconfig seqno as the rebuild seqno, so the new functionality
comes into effect only if the CVM protocol version is >= 300.

 * INCIDENT NO:4162053     TRACKING ID:4132221

SYMPTOM: Supportability requirement for easier path link to dmpdr utility 

DESCRIPTION: The current path of the DMPDR utility is long and hard for customers to remember, so a symbolic link to this utility was requested for easier access.

RESOLUTION: Code changes have been made to create a symlink to this utility for easier access.
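
The patch creates the link itself; purely to illustrate the idea (the link name and location chosen by the patch are not stated in this document, so the path below is hypothetical):

# ln -s /usr/lib/vxvm/voladm.d/bin/dmpdr /usr/sbin/dmpdr   (hypothetical link location)
# /usr/sbin/dmpdr -o refresh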

 * INCIDENT NO:4162055     TRACKING ID:4116024

SYMPTOM: kernel panicked at gab_ifreemsg with following stack:
gab_ifreemsg
gab_freemsg
kmsg_gab_send
vol_kmsg_sendmsg
vol_kmsg_sender 

DESCRIPTION: In a CVR environment with an RVG of more than 600 data volumes, the vxvvrstatd daemon is enabled through the vxvm-recover service. vxvvrstatd calls ioctl(VOL_RV_APPSTATS); the latter generates a kmsg longer than 64k, which triggers a kernel panic because GAB/LLT do not support any message longer than 64k.

RESOLUTION: Code changes have been made to limit the maximum number of data volumes for which ioctl(VOL_RV_APPSTATS) can request VVR statistics.

 * INCIDENT NO:4162058     TRACKING ID:4046560

SYMPTOM: vxconfigd aborts on Solaris if a device's hardware path is more than 128 characters long.

DESCRIPTION: When vxconfigd starts, it claims the devices present on the node and updates the VxVM device
database. During this process, devices which are excluded from VxVM get excluded from the VxVM device database.
To check whether a device is to be excluded, the device's full hardware path is considered. If the hardware path length is
more than 128 characters, vxconfigd gets aborted. This issue occurred because the code was unable to handle hardware
path strings beyond 128 characters.

RESOLUTION: The required code changes have been made to handle long hardware path strings.

 * INCIDENT NO:4162917     TRACKING ID:4139166

SYMPTOM: Enable VVR Bunker feature for shared diskgroups. 

DESCRIPTION: VVR Bunker feature was not supported for shared diskgroup configurations. 

RESOLUTION: Enable VVR Bunker feature for shared diskgroups. 

 * INCIDENT NO:4162966     TRACKING ID:4146885

SYMPTOM: Restarting syncrvg after termination will start sync from start 

DESCRIPTION: vradmin syncrvg would terminate after 2 minutes of inactivity, for example due to a network error. If run again, it would restart from scratch.

RESOLUTION: The vradmin syncrvg operation now continues from where it was terminated.

 * INCIDENT NO:4164114     TRACKING ID:4162873

SYMPTOM: disk reclaim is slow. 

DESCRIPTION: The disk reclaim length should be decided by the storage's maximum reclaim length, but Volume Manager split the reclaim request into segments smaller than the maximum reclaim length, which led to a performance regression.

RESOLUTION: A code change has been made to avoid splitting the reclaim request at the Volume Manager level.

 * INCIDENT NO:4164250     TRACKING ID:4154121

SYMPTOM: When the replicated disks are in SPLIT mode, importing its disk group on target node failed with "Device is a hardware mirror". 

DESCRIPTION: When the replicated disks are in SPLIT mode (readable and writable), importing their disk group on the target node failed with "Device is a hardware mirror". The third-party array does not expose a disk attribute showing when a disk is in SPLIT mode. With this new enhancement, the replicated disk group can be imported when use_hw_replicatedev is enabled.

RESOLUTION: The code is enhanced to import the replicated disk group on the target node when use_hw_replicatedev is enabled.
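
As a sketch only: this document does not state how use_hw_replicatedev is enabled, so the first command assumes the tunable is settable through the vxtune interface like other VxVM tunables, and the disk group name is a placeholder.

# vxtune use_hw_replicatedev on      (assumed interface, not confirmed here)
# vxdg import <dgname>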

 * INCIDENT NO:4164252     TRACKING ID:4159403

SYMPTOM: When the replicated disks are in SPLIT mode and use_hw_replicatedev is on, disks are marked as cloned disks after the hardware replicated disk group gets imported. 

DESCRIPTION: The clearclone option should be added automatically when importing the hardware replicated disk group, to clear the cloned flag on the disks.

RESOLUTION: The code is enhanced to import the replicated disk group with the clearclone option.

 * INCIDENT NO:4164254     TRACKING ID:4160883

SYMPTOM: clone_flag was set on srdf-r1 disks after reboot. 

DESCRIPTION: The clean clone state got reset in the AUTOIMPORT case, which ultimately caused the clone_flag to be incorrectly set on the disk.

RESOLUTION: A code change has been made to correct the behavior of setting the clone_flag on a disk.

 * INCIDENT NO:4166881     TRACKING ID:4164734

SYMPTOM: Support for TLS1.1 is not disabled. 

DESCRIPTION: In the VxVM product, support for TLS 1.0, SSLv2 and SSLv3 has already been disabled. Support for TLS 1.1 was not disabled, and TLS 1.1 has known security vulnerabilities.

RESOLUTION: The required code change has been made to disable support for TLS 1.1.

 * INCIDENT NO:4166882     TRACKING ID:4161852

SYMPTOM: Post InfoScale upgrade, the command "vxdg upgrade" succeeds but throws the apparent error "RLINK is not encrypted".

DESCRIPTION: In "vxdg upgrade" codepath we need to regenerate the encryption keys if encrypted Rlinks are present in VxVM configuration. But key regeneration code was getting called even if Rlinks are not encrypted. And so further code was throwing error that "VxVM vxencrypt ERROR V-5-1-20484 Rlink is not encrypted!" 

RESOLUTION: The necessary code changes have been made to invoke encryption key regeneration only for RLinks that are encrypted.

 * INCIDENT NO:4172377     TRACKING ID:4172033

SYMPTOM: Data corruption after recovery of volume 

DESCRIPTION: When a disabled/detached volume was started after the storage came back, stale agenodes were left
in memory, which caused detach tracking to not happen for
subsequent I/Os on the same region as a stale agenode.

RESOLUTION: Stale agenodes are now cleaned up at the appropriate stage.

 * INCIDENT NO:4173722     TRACKING ID:4158303

SYMPTOM: NULL pointer dereference at dmp_error_analysis_callback with below stack.

#5 [] __bad_area_nosemaphore 
#6 [] do_page_fault 
#7 [] page_fault 
    [exception RIP: dmpsvc_da_analyze_error+417]
#8 [] dmp_error_analysis_callback at [vxdmp]
#9 [] dmp_daemons_loop at [vxdmp] 

DESCRIPTION: The BLK-MQ code that processes request-based I/O failed to deal with the bio. The bio was a dummy bio, possibly added just for compatibility, and this resulted in a system panic.

RESOLUTION: A code change has been made to check whether the I/O is request-based and, if so, handle it differently.

Patch ID: 8.0.2.1500

* 4119267 (Tracking ID: 4113582)

SYMPTOM:
In VVR environments, reboot on VVR primary nodes results in RVG going into passthru mode.

DESCRIPTION:
Reboot of primary nodes resulted in missing write completions of updates on the primary SRL volume. After the node came up, last update received by VVR secondary was incorrectly compared with the missing updates.

RESOLUTION:
Fixed the check to correctly compare the last received update by VVR secondary.

* 4123065 (Tracking ID: 4113138)

SYMPTOM:
In CVR environments configured with virtual hostname, after node reboots on VVR Primary and Secondary, 'vradmin repstatus' invoked on the secondary site shows stale information with following  warning message:
VxVM VVR vradmin INFO V-5-52-1205 Primary is unreachable or RDS has configuration error. Displayed status information is from Secondary and can be out-of-date.

DESCRIPTION:
This issue occurs when there is an explicit RVG logowner set on the CVM master, due to which the old connection of vradmind with its remote peer disconnects and a new connection is not formed.

RESOLUTION:
Fixed the issue with the vradmind connection with its remote peer.

* 4123069 (Tracking ID: 4116609)

SYMPTOM:
In CVR environments where replication is configured using virtual hostnames, vradmind on VVR primary loses connection with its remote peer after a planned RVG logowner change on the VVR secondary site.

DESCRIPTION:
vradmind on VVR primary was unable to detect a RVG logowner change on the VVR secondary site.

RESOLUTION:
Enabled primary vradmind to detect RVG logowner change on the VVR secondary site.

* 4123080 (Tracking ID: 4111789)

SYMPTOM:
In VVR/CVR environments, VVR would use any IP/NIC/network to replicate the data and may not utilize the high performance NIC/network configured for VVR.

DESCRIPTION:
The default value of tunable was set to 'any_ip'.

RESOLUTION:
The default value of tunable is set to 'replication_ip'.

* 4124291 (Tracking ID: 4111254)

SYMPTOM:
vradmind dumps core with the following stack:

#3  0x00007f3e6e0ab3f6 in __assert_fail () from /root/cores/lib64/libc.so.6
#4  0x000000000045922c in RDS::getHandle ()
#5  0x000000000056ec04 in StatsSession::addHost ()
#6  0x000000000045d9ef in RDS::addRVG ()
#7  0x000000000046ef3d in RDS::createDummyRVG ()
#8  0x000000000044aed7 in PriRunningState::update ()
#9  0x00000000004b3410 in RVG::update ()
#10 0x000000000045cb94 in RDS::update ()
#11 0x000000000042f480 in DBMgr::update ()
#12 0x000000000040a755 in main ()

DESCRIPTION:
vradmind was trying to access a NULL pointer (Remote Host Name) in a rlink object, as the Remote Host attribute of the rlink hasn't been set.

RESOLUTION:
The issue has been fixed by making code changes.

* 4124794 (Tracking ID: 4114952)

SYMPTOM:
With VVR configured with a virtual hostname, after node reboots on DR site, 'vradmin pauserep' command failed with following error:
VxVM VVR vradmin ERROR V-5-52-421 vradmind server on host <host> not responding or hostname cannot be resolved.

DESCRIPTION:
The virtual host mapped to multiple IP addresses, and vradmind was using incorrectly mapped IP address.

RESOLUTION:
Fixed by using the correct mapping of IP address from the virtual host.

* 4124796 (Tracking ID: 4108913)

SYMPTOM:
Vradmind dumps core with the following stacks:
#3  0x00007f2c171be3f6 in __assert_fail () from /root/coredump/lib64/libc.so.6
#4  0x00000000005d7a90 in VList::concat () at VList.C:1017
#5  0x000000000059ae86 in OpMsg::List2Msg () at Msg.C:1280
#6  0x0000000000441bf6 in OpMsg::VList2Msg () at ../../include/Msg.h:389
#7  0x000000000043ec33 in DBMgr::processStatsOpMsg () at DBMgr.C:2764
#8  0x00000000004093e9 in process_message () at srvmd.C:418
#9  0x000000000040a66d in main () at srvmd.C:733

#0  0x00007f4d23470a9f in raise () from /root/core.Jan18/lib64/libc.so.6
#1  0x00007f4d23443e05 in abort () from /root/core.Jan18/lib64/libc.so.6
#2  0x00007f4d234b3037 in __libc_message () from /root/core.Jan18/lib64/libc.so.6
#3  0x00007f4d234ba19c in malloc_printerr () from /root/core.Jan18/lib64/libc.so.6
#4  0x00007f4d234bba9c in _int_free () from /root/core.Jan18/lib64/libc.so.6
#5  0x00000000005d5a0a in ValueElem::_delete_val () at Value.C:491
#6  0x00000000005d5990 in ValueElem::~ValueElem () at Value.C:480
#7  0x00000000005d7244 in VElem::~VElem () at VList.C:480
#8  0x00000000005d8ad9 in VList::~VList () at VList.C:1167
#9  0x000000000040a71a in main () at srvmd.C:743

#0  0x000000000040b826 in DList::head () at ../include/DList.h:82
#1  0x00000000005884c1 in IpmHandle::send () at Ipm.C:1318
#2  0x000000000056e101 in StatsSession::sendUCastStatsMsgToPrimary () at StatsSession.C:1157
#3  0x000000000056dea1 in StatsSession::sendStats () at StatsSession.C:1117
#4  0x000000000046f610 in RDS::collectStats () at RDS.C:6011
#5  0x000000000043f2ef in DBMgr::collectStats () at DBMgr.C:2799
#6  0x00007f98ed9131cf in start_thread () from /root/core.Jan26/lib64/libpthread.so.0
#7  0x00007f98eca4cdd3 in clone () from /root/core.Jan26/lib64/libc.so.6

DESCRIPTION:
There is a race condition in vradmind that may cause memory corruption and unpredictable result. Vradmind periodically forks a child thread to collect VVR statistic data and send them to the remote site. While the main thread may also be sending data using the same handler object, thus member variables in the handler object are accessed in parallel from multiple threads and may become corrupted.

RESOLUTION:
The code changes have been made to fix the issue.

* 4124889 (Tracking ID: 4090828)

SYMPTOM:
Dumped fmrmap data for better debuggability for corruption issues

DESCRIPTION:
The vxplex att / vxvol recover CLIs will internally fetch fmrmaps from the kernel using an existing ioctl before starting the attach operation, get the data in binary
format, and dump it to a file stored with a specific naming format like volname_taskid_date.

RESOLUTION:
The changes now dump the fmrmap data into a binary file.

* 4125392 (Tracking ID: 4114193)

SYMPTOM:
'vradmin repstatus' command showed replication data status incorrectly as 'inconsistent'.

DESCRIPTION:
vradmind was relying on replication data status from both primary as well as DR site.

RESOLUTION:
Fixed replication data status to rely on the primary data status.

* 4125811 (Tracking ID: 4090772)

SYMPTOM:
vxconfigd/vx commands hang on secondary site in a CVR environment.

DESCRIPTION:
Due to a window with unmatched SRL positions, any application (e.g. fdisk) trying
to open the secondary RVG volume will acquire a lock and wait for the SRL positions to match.
During this time, any VxVM transaction kicked in will also have to wait for the same lock.
Further, the logowner node panicked, which triggered the logowner change protocol, which hung
because the earlier transaction was stuck. As the logowner change protocol could not complete,
in the absence of a valid logowner the SRL positions could not match, causing a deadlock. That led
to the vxconfigd and vx command hang.

RESOLUTION:
Added changes to allow read operation on volume even if SRL positions are
unmatched. We are still blocking write IOs and just allowing open() call for read-only
operations, and hence there will not be any data consistency or integrity issues.

* 4128127 (Tracking ID: 4132265)

SYMPTOM:
Machine with NVMe disks panics with following stack: 
blk_update_request
blk_mq_end_request
dmp_kernel_nvme_ioctl
dmp_dev_ioctl
dmp_send_nvme_passthru_cmd_over_node
dmp_pr_do_nvme_read
dmp_pgr_read
dmpioctl
dmp_ioctl
blkdev_ioctl
__x64_sys_ioctl
do_syscall_64

DESCRIPTION:
Issue was applicable to setups with NVMe devices which do not support SCSI3-PR as an ioctl was called without checking correctly if SCSI3-PR was supported.

RESOLUTION:
Fixed the check to avoid calling the ioctl on devices which do not support SCSI3-PR.

* 4128835 (Tracking ID: 4127555)

SYMPTOM:
While adding secondary site using the 'vradmin addsec' command, the command fails with following error if diskgroup id is used in place of diskgroup name:
VxVM vxmake ERROR V-5-1-627 Error in field remote_dg=<dgid>: name is too long

DESCRIPTION:
Diskgroup names can be 32 characters long, whereas diskgroup ids can be 64 characters long. This was not handled by the vradmin commands.

RESOLUTION:
Fixed the vradmin commands to handle the case where longer diskgroup ids are used in place of diskgroup names.
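
For illustration, an addsec invocation of the kind described, passing the diskgroup id to -g; the id, RVG name and hostnames below are placeholders rather than values from this document:

# vradmin -g 1234567890.12.examplehost addsec <rvg_name> <primary_host> <secondary_host>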

* 4129664 (Tracking ID: 4129663)

SYMPTOM:
The vxvm and aslapm rpms do not have a changelog.

DESCRIPTION:
A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to vxvm and aslapm rpm.
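
Once the packages carry a changelog, it can be viewed with a standard rpm query (shown with the VRTSvxvm package name used elsewhere in this document):

# rpm -q --changelog VRTSvxvm | more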

* 4129765 (Tracking ID: 4111978)

SYMPTOM:
Replication failed to start due to vxnetd threads not running on secondary site.

DESCRIPTION:
Vxnetd was waiting to start the "nmcomudpsrv" and "nmcomlistenserver" threads. Due to a race condition on a resource shared between those two threads, vxnetd was stuck in a dead loop until the maximum retry count was reached.

RESOLUTION:
Code changes have been made to add lock protection to avoid the race condition.

* 4129766 (Tracking ID: 4128380)

SYMPTOM:
If VVR is configured using virtual hostname and 'vradmin resync' command is invoked from a DR site node, it fails with following error:
VxVM VVR vradmin ERROR V-5-52-405 Primary vradmind server disconnected.

DESCRIPTION:
In case the virtual hostname maps to multiple IPs, the vradmind service on the DR site was not able to reach the VVR logowner node on the primary site because an incorrectly mapped IP address was used.

RESOLUTION:
Fixed vradmind to use correct mapped IP address of the primary vradmind.

* 4130858 (Tracking ID: 4128351)

SYMPTOM:
System hung observed when switching log owner.

DESCRIPTION:
VVR mdship SIOs might be throttled, for example due to reaching the maximum allocation count. These SIOs hold the I/O count. When a log owner change kicked in and quiesced the RVG, the VVR log owner change SIO waited for the iocount to drop to zero before proceeding further. VVR mdship requests from the log client were returned with EAGAIN as the RVG was quiesced. The throttled mdship SIOs needed to be driven by the upcoming mdship requests; hence the deadlock, which caused the system hang.

RESOLUTION:
Code changes have been made to flush the mdship queue before VVR log owner change SIO waiting for IO drain.

* 4130861 (Tracking ID: 4122061)

SYMPTOM:
A hang is observed after a resync operation; vxconfigd was waiting for the slaves' response.

DESCRIPTION:
The VVR logowner was in a transaction and returned VOLKMSG_EAGAIN to CVM_MSG_GET_METADATA, which is expected. Once the client received VOLKMSG_EAGAIN, it would sleep
10 jiffies and retry the kmsg. In a busy cluster, it might happen that the retried kmsgs plus the new kmsgs built up and hit the kmsg flowcontrol before the VVR logowner transaction completed. Once the client refused any kmsgs due to the flowcontrol, the transaction on the VVR logowner could get stuck because it required a kmsg response from all the slave nodes.

RESOLUTION:
Code changes have been made to increase the kmsg flowcontrol and to not let the kmsg receiver fall asleep, but instead handle the kmsg in a restart function.

* 4130947 (Tracking ID: 4124725)

SYMPTOM:
With VVR configured using virtual hostnames, 'vradmin delpri' command could hang after doing the RVG cleanup.

DESCRIPTION:
'vradmin delsec' command used prior to 'vradmin delpri' command had left the cleanup in an incomplete state resulting in next cleanup command to hang.

RESOLUTION:
Fixed to make sure that 'vradmin delsec' command executes its workflow correctly.

* 4133930 (Tracking ID: 4100646)

SYMPTOM:
Recoveries of dcl objects not happening due to ATT, RELOCATE flags are set on DCL subdisks

DESCRIPTION:
Due to multiple reasons, a stale tutil may remain stamped on DCL subdisks, which may prevent subsequent vxrecover instances
from recovering the DCL plex.

RESOLUTION:
The issue is resolved by the vxattachd daemon detecting these stale tutils, clearing them, and triggering recoveries after a 10-minute interval.

* 4133946 (Tracking ID: 3972344)

SYMPTOM:
After reboot of a node on a setup where multiple diskgroups / Volumes within diskgroups are present, sometimes in /var/log/messages an error 'vxrecover ERROR V-5-1-11150  Volume <volume_name> does not exist' is logged.

DESCRIPTION:
In volume_startable function (volrecover.c), dgsetup is called to set the current default diskgroup. This does not update the current_group variable leading to inappropriate mappings. Volumes are searched in an incorrect diskgroup which is logged in the error message.
The vxrecover command works fine if the diskgroup name associated with volume is specified. [vxrecover -g <dg_name> -s]

RESOLUTION:
Changed the code to use switch_diskgroup() instead of dgsetup. Current_group is updated and the current_dg is set. Thus vxrecover finds the Volume correctly.

* 4135127 (Tracking ID: 4134023)

SYMPTOM:
vxconfigrestore(Diskgroup configuration restoration) for H/W Replicated diskgroup failed with below error:
# vxconfigrestore -p LINUXSRDF
VxVM vxconfigrestore INFO V-5-2-6198 Diskgroup LINUXSRDF configuration restoration started ......
VxVM vxdg ERROR V-5-1-0 Disk group LINUXSRDF: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]
Please refer to system log for details.
... ...
VxVM vxconfigrestore ERROR V-5-2-3706 Diskgroup configuration restoration for LINUXSRDF failed.

DESCRIPTION:
H/W Replicated diskgroup can be imported only with option "-o usereplicatedev=only". vxconfigrestore didn't do H/W Replicated diskgroup check, without giving the proper import option diskgroup import failed.

RESOLUTION:
The code changes have been made to do H/W Replicated diskgroup check in vxconfigrestore .

* 4135388 (Tracking ID: 4131202)

SYMPTOM:
In VVR environment, 'vradmin changeip' would fail with following error message:
VxVM VVR vradmin ERROR V-5-52-479 Host <host> not reachable.

DESCRIPTION:
Existing heartbeat to new secondary host is assumed, whereas it starts after the changeip operation.

RESOLUTION:
Heartbeat assumption is fixed.

* 4136419 (Tracking ID: 4089696)

SYMPTOM:
In FSS environment, with DCO log attached to VVR SRL volume, the reboot of the cluster may result into panic on the CVM master node as follows: 

voldco_get_mapid
voldco_get_detach_mapid
voldco_get_detmap_offset
voldco_recover_detach_map
volmv_recover_dcovol
vol_mv_fmr_precommit
vol_mv_precommit
vol_ktrans_precommit_parallel
volobj_ktrans_sio_start
voliod_iohandle
voliod_loop

DESCRIPTION:
If DCO is configured with SRL volume, and both SRL volume plexes and DCO plexes get I/O error, this panic occurs in the recovery path.

RESOLUTION:
Recovery path is fixed to manage this condition.

* 4136428 (Tracking ID: 4131449)

SYMPTOM:
In CVR environments, there was a restriction to configure up to four RVGs per diskgroup as more RVGs resulted in degradation of I/O performance in case 
of VxVM transactions.

DESCRIPTION:
In CVR environments, VxVM transactions on an RVG also impacted I/O operations on other RVGs in the same diskgroup resulting in I/O performance 
degradation in case of higher number of RVGs configured in a diskgroup.

RESOLUTION:
VxVM transaction impact has been isolated to each RVG resulting in the ability to scale beyond four RVGs in a diskgroup.

* 4136429 (Tracking ID: 4077944)

SYMPTOM:
In VVR environment, when I/O throttling gets activated and deactivated by VVR, it may result in an application I/O hang.

DESCRIPTION:
In case VVR throttles and unthrottles I/O, the driving of throttled I/O is not done in one of the cases.

RESOLUTION:
Resolved the issue by making sure the application throttled I/Os get driven in all the cases.

* 4136859 (Tracking ID: 4117568)

SYMPTOM:
Vradmind dumps core with the following stack:

#1  std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string (this=0x7ffdc380d810,
    __str=<error reading variable: Cannot access memory at address 0x3736656436303563>)
#2  0x000000000040e02b in ClientMgr::closeStatsSession
#3  0x000000000040d0d7 in ClientMgr::client_ipm_close
#4  0x000000000058328e in IpmHandle::~IpmHandle
#5  0x000000000057c509 in IpmHandle::events
#6  0x0000000000409f5d in main

DESCRIPTION:
After terminating vrstat, the StatSession in vradmind was closed and the corresponding Client object was deleted. When closing the IPM object of vrstat, vradmind tried to access the removed Client, hence the core dump.

RESOLUTION:
Code changes have been made to fix the issue.

* 4136866 (Tracking ID: 4090476)

SYMPTOM:
The Storage Replicator Log (SRL) is not draining to the secondary. The rlink status shows that the outstanding writes never got reduced over several hours.

VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL

DESCRIPTION:
In a poor network environment, VVR appears not to be syncing. Another reconfigure happened before the VVR state became clean, and the VVR atomic window got set to a large size. VVR couldn't complete all the atomic updates before the next reconfigure, and ended up repeatedly sending atomic updates from the VVR pending position. Hence VVR appears to be stuck.

RESOLUTION:
Code changes have been made to update VVR pending position accordingly.

* 4136868 (Tracking ID: 4120068)

SYMPTOM:
A standard disk was added to a cloned diskgroup successfully which is not expected.

DESCRIPTION:
When adding a disk to a disk group, a pre-check is made to avoid ending up with a mixed diskgroup. In a cluster, the local node might fail to use the
latest record to do the pre-check, which caused a mixed diskgroup in the cluster and further caused node join failure.

RESOLUTION:
Code changes have been made to use the latest record to do the mixed diskgroup pre-check.

* 4136870 (Tracking ID: 4117957)

SYMPTOM:
During a phased reboot of a two node Veritas Access cluster, mounts would hang. Transaction aborted waiting for io drain.
VxVM vxio V-5-3-1576 commit: Timedout waiting for Cache XXXX to quiesce, iocount XX msg 0

DESCRIPTION:
The transaction on the cache object failed because there were I/Os waiting on the cache object. Those queued I/Os could not
proceed due to the missing VOLOBJ_CACHE_RECOVERED flag on the cache object. A transaction might kick in while the old cache was doing
recovery; the new cache object might therefore fail to inherit VOLOBJ_CACHE_RECOVERED, which further caused the I/O hang.

RESOLUTION:
Code changes have been made to fail the new cache creation if the old cache is doing recovery.

* 4137508 (Tracking ID: 4066310)

SYMPTOM:
New feature for performance improvement

DESCRIPTION:
The Linux block subsystem has two types of block drivers: 1) block multiqueue (blk-mq) drivers and 2) bio-based block drivers. DMP had been a bio-based driver since day one; this feature adds block multiqueue support to DMP.

RESOLUTION:
Added the BLK-MQ feature for the DMP driver.

* 4137615 (Tracking ID: 4087628)

SYMPTOM:
When DCM is in replication mode with mounted volumes that have large regions for DCM to sync, and a slave node reboot is triggered, CVM might go into a faulted state.

DESCRIPTION:
During resiliency tests, the following sequence of operations was performed:
1. On an AWS FSS-CVR setup, replication is started across the sites for 2 RVGs.
2. The logowner service groups for both the RVGs are online on a slave node.
3. Another slave node, where the logowner is not online, is rebooted.
4. After the slave node comes back from the reboot, it is unable to join the CVM cluster.
5. vx commands are also hung/stuck on the CVM master and the logowner slave node.

RESOLUTION:
In the RU SIO, drop the I/O count before requesting vxfs_free_region() and hold it again afterwards. Because the transaction has been locked (vol_ktrans_locked = 1) right
before calling vxfs_free_region(), the iocount is not needed to keep the RVG from being removed.

* 4137753 (Tracking ID: 4128271)

SYMPTOM:
In CVR environment, a node is not able to join the CVM cluster if RVG recovery is taking place.

DESCRIPTION:
If there has been an SRL overflow, then RVG recovery takes more time as it was loaded with more work than required because the recovery related metadata was not updated.

RESOLUTION:
Updated the metadata correctly to reduce the RVG recovery time.

* 4137757 (Tracking ID: 4136458)

SYMPTOM:
In CVR environment, if CVM slave node is acting as logowner, the DCM resync issues after snapshot restore may hang showing 0% sync is remaining.

DESCRIPTION:
The DCM resync completion is not correctly communicated to CVM master resulting into hang.

RESOLUTION:
The DCM resync operation is enhanced to correctly communicate resync completion to CVM master.

* 4137986 (Tracking ID: 4133793)

SYMPTOM:
DCO experience IO Errors while doing a vxsnap restore on vxvm volumes.

DESCRIPTION:
Dirty flag was getting set in context of an SIO with flag VOLSIO_AUXFLAG_NO_FWKLOG being set. This led to transaction errors while doing a vxsnap restore command in loop for vxvm volumes causing transaction abort. As a result, VxVM tries to cleanup by removing newly added BMs. Now, VxVM tries to access the deleted BMs. however it is not able to since they were deleted previously. This ultimately leads to DCO IO error.

RESOLUTION:
Skip first write klogging in the context of an IO with flag VOLSIO_AUXFLAG_NO_FWKLOG being set.

* 4138051 (Tracking ID: 4090943)

SYMPTOM:
On Primary, RLink is continuously getting connected/disconnected with below message seen in secondary syslog:
  VxVM VVR vxio V-5-3-0 Disconnecting replica <rlink_name> since log is full on secondary.

DESCRIPTION:
When RVG logowner node panic, RVG recovery happens in 3 phases.
At the end of 2nd phase of recovery in-memory and on-disk SRL positions remains incorrect
and during this time if there is logowner change then Rlink won't get connected.

RESOLUTION:
Handled in-memory and on-disk SRL positions correctly.

* 4138069 (Tracking ID: 4139703)

SYMPTOM:
System gets panicked on RHEL9.2 AWS environment while registering the pgr key.

DESCRIPTION:
On RHEL 9.2, a panic is observed while reading PGR keys on an AWS VM.

Reproduction steps: run "/etc/vx/diag.d/vxdmppr read /dev/vx/dmp/ip-10-20-2-49_nvme4_0" on an AWS NVMe RHEL 9.2 setup
(build ga8_0_2_all_maint, kernel 5.14.0-284.11.1.el9_2.x86_64).

Failure signature:
PID: 8250     TASK: ffffa0e882ca1c80  CPU: 1    COMMAND: "vxdmppr"
 #0 [ffffbf3c4039f8e0] machine_kexec at ffffffffb626c237
 #1 [ffffbf3c4039f938] __crash_kexec at ffffffffb63c3c9a
 #2 [ffffbf3c4039f9f8] crash_kexec at ffffffffb63c4e58
 #3 [ffffbf3c4039fa00] oops_end at ffffffffb62291db
 #4 [ffffbf3c4039fa20] do_trap at ffffffffb622596e
 #5 [ffffbf3c4039fa70] do_error_trap at ffffffffb6225a25
 #6 [ffffbf3c4039fab0] exc_invalid_op at ffffffffb6d256be
 #7 [ffffbf3c4039fad0] asm_exc_invalid_op at ffffffffb6e00af6
    [exception RIP: kfree+1074]
    RIP: ffffffffb6578e32  RSP: ffffbf3c4039fb88  RFLAGS: 00010246
    RAX: ffffa0e7984e9c00  RBX: ffffa0e7984e9c00  RCX: ffffa0e7984e9c60
    RDX: 000000001bc22001  RSI: ffffffffb6729dfd  RDI: ffffa0e7984e9c00
    RBP: ffffa0e880042800   R8: ffffa0e8b572b678   R9: ffffa0e8b572b678
    R10: 0000000000005aca  R11: 00000000000000e0  R12: fffff20e00613a40
    R13: fffff20e00613a40  R14: ffffffffb6729dfd  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #8 [ffffbf3c4039fbc0] blk_update_request at ffffffffb6729dfd
 #9 [ffffbf3c4039fc18] blk_mq_end_request at ffffffffb672a11a
#10 [ffffbf3c4039fc30] dmp_kernel_nvme_ioctl at ffffffffc09f2647 [vxdmp]
#11 [ffffbf3c4039fd00] dmp_dev_ioctl at ffffffffc09a3b93 [vxdmp]
#12 [ffffbf3c4039fd10] dmp_send_nvme_passthru_cmd_over_node at ffffffffc09f1497 [vxdmp]
#13 [ffffbf3c4039fd60] dmp_pr_do_nvme_read.constprop.0 at ffffffffc09b78e1 [vxdmp]
#14 [ffffbf3c4039fe00] dmp_pr_read at ffffffffc09e40be [vxdmp]
#15 [ffffbf3c4039fe78] dmpioctl at ffffffffc09b09c3 [vxdmp]
#16 [ffffbf3c4039fe88] dmp_ioctl at ffffffffc09d7a1c [vxdmp]
#17 [ffffbf3c4039fea0] blkdev_ioctl at ffffffffb6732b81
#18 [ffffbf3c4039fef0] __x64_sys_ioctl at ffffffffb65df1ba
#19 [ffffbf3c4039ff20] do_syscall_64 at ffffffffb6d2515c
#20 [ffffbf3c4039ff50] entry_SYSCALL_64_after_hwframe at ffffffffb6e0009b
    RIP: 00007fef03c3ec6b  RSP: 00007ffd1acad8a8  RFLAGS: 00000202
    RAX: ffffffffffffffda  RBX: 00000000444d5061  RCX: 00007fef03c3ec6b
    RDX: 00007ffd1acad990  RSI: 00000000444d5061  RDI: 0000000000000003
    RBP: 0000000000000003   R8: 0000000001cbba20   R9: 0000000000000000
    R10: 00007fef03c11d78  R11: 0000000000000202  R12: 00007ffd1acad990
    R13: 0000000000000000  R14: 0000000000000000  R15: 0000000000000002
    ORIG_RAX: 0000000000000010  CS: 0033  SS: 002b

RESOLUTION:
Code changes have been made to fix the panic while reading PGR keys on NVMe devices in this environment.

* 4138075 (Tracking ID: 4129873)

SYMPTOM:
In CVR environment, the application I/O may hang if CVM slave node is acting as RVG logowner and a data volume grow operation is triggered followed by a logclient node leaving the cluster.

DESCRIPTION:
When logowner is not CVM master, and data volume grow operation is taking place, the CVM master controls the region locking for IO operations. In case, 
a logclient node leaves the cluster, the I/O operations initiated by it are not cleaned up correctly due to lack of co-ordination between CVM master and RVG logowner node.

RESOLUTION:
Co-ordination between CVM master and RVG logowner node is fixed to manage the I/O cleanup correctly.

* 4138224 (Tracking ID: 4129489)

SYMPTOM:
With VxVM installed in AWS cloud environment, disk devices may intermittently disappear from 'vxdisk list' output.

DESCRIPTION:
There was an issue with disk discovery at OS and DDL layer.

RESOLUTION:
Integration issue with disk discovery was resolved.

* 4138236 (Tracking ID: 4134069)

SYMPTOM:
VVR replication was not using VxFS SmartMove feature if filesystem was not mounted on RVG Logowner node.

DESCRIPTION:
Initial synchronization and DCM replay of VVR required the filesystem to be mounted locally on the logowner node as VVR did not have capability to 
fetch the required information from a remotely mounted filesystem mount point.

RESOLUTION:
VVR is updated to fetch the required SmartMove related information from a remotely mounted filesystem mount point.

* 4138237 (Tracking ID: 4113240)

SYMPTOM:
In CVR environment, with hostname binding configured, the Rlink on VVR secondary may have incorrect VVR primary IP.

DESCRIPTION:
The VVR Secondary Rlink picks up a wrong IP randomly as the replication is configured using virtual host which maps to multiple IPs.

RESOLUTION:
Correct the VVR Primary IP on the VVR Secondary Rlink.

* 4138251 (Tracking ID: 4132799)

SYMPTOM:
If GLM is not loaded, start CVM fails with the following errors:
# vxclustadm -m gab startnode
VxVM vxclustadm INFO V-5-2-9687 vxclustadm: Fencing driver is in disabled mode - 
VxVM vxclustadm ERROR V-5-1-9743 errno 3

DESCRIPTION:
The error number, but not the error message, is printed when joining CVM fails.

RESOLUTION:
The code changes have been made to fix the issue.

* 4138348 (Tracking ID: 4121564)

SYMPTOM:
Memory leak for volcred_t could be observed in vxio.

DESCRIPTION:
Memory leak could occur if some private region IOs hang on a disk and there are duplicate entries for the disk in vxio.

RESOLUTION:
Code has been changed to avoid memory leak.

* 4138537 (Tracking ID: 4098144)

SYMPTOM:
vxtask list shows the parent task without any sub-tasks, and the parent never progresses for the SRL volume.

DESCRIPTION:
The vxtask remains stuck because the parent process does not exit. It was observed that all child tasks had completed, but the parent was still unable to exit:
(gdb) p active_jobs
$1 = 1
The active_jobs count is decremented as child tasks complete. In this case one count remained pending, and it could not be determined which child exited without decrementing the count. Instrumentation messages have been added to capture the issue.

RESOLUTION:
Code has been added to create a log file in /etc/vx/log/. This file is deleted when vxrecover exits successfully and remains present when the vxtask parent hang issue is seen.

* 4138538 (Tracking ID: 4085404)

SYMPTOM:
A huge performance drop is seen after Veritas Volume Replicator (VVR) enters Data Change Map (DCM) mode when a large Storage Replicator Log (SRL) is configured.

DESCRIPTION:
The active map flush caused RVG serialization. Once the RVG gets serialized, all I/Os are queued in the restart queue until the active map flush is finished. The overly frequent active map flush caused the large I/O drop while flushing the SRL to the DCM.

RESOLUTION:
The code is modified to adjust the frequency of active map flush and balance the application IO and SRL flush.
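
The resolution above describes throttling the active map flush. As a heavily simplified, hypothetical C sketch (not the actual VVR code; the structure, tunables and function name below are invented), such a throttle could gate the flush on both a dirty-region threshold and a minimum interval:

    #include <stdbool.h>
    #include <time.h>

    /* Assumed tunables for this sketch only. */
    #define FLUSH_MIN_INTERVAL_SEC  5
    #define FLUSH_DIRTY_THRESHOLD   4096

    struct flush_state {
        time_t   last_flush;
        unsigned dirty_regions;
    };

    /* Flush only when enough regions are dirty AND enough time has passed,
     * so application I/O is not repeatedly serialized by the flush. */
    static bool should_flush_active_map(struct flush_state *st, time_t now)
    {
        if (st->dirty_regions < FLUSH_DIRTY_THRESHOLD)
            return false;
        if (now - st->last_flush < FLUSH_MIN_INTERVAL_SEC)
            return false;
        st->last_flush = now;
        st->dirty_regions = 0;
        return true;
    }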

* 4140598 (Tracking ID: 4141590)

SYMPTOM:
Some incidents do not appear in the changelog because their cross-references are not processed properly.

DESCRIPTION:
Not every cross-reference is a parent-child relationship. In such cases, 'top' will not be present and the changelog script ends execution.

RESOLUTION:
All cross-references are now traversed; the parent-child relationship is resolved only if it is present, and then the top is found.

* 4143580 (Tracking ID: 4142054)

SYMPTOM:
System panicked in the following stack:

[ 9543.195915] Call Trace:
[ 9543.195938]  dump_stack+0x41/0x60
[ 9543.195954]  panic+0xe7/0x2ac
[ 9543.195974]  vol_rv_inactive+0x59/0x790 [vxio]
[ 9543.196578]  vol_rvdcm_flush_done+0x159/0x300 [vxio]
[ 9543.196955]  voliod_iohandle+0x294/0xa40 [vxio]
[ 9543.197327]  ? volted_getpinfo+0x15/0xe0 [vxio]
[ 9543.197694]  voliod_loop+0x4b6/0x950 [vxio]
[ 9543.198003]  ? voliod_kiohandle+0x70/0x70 [vxio]
[ 9543.198364]  kthread+0x10a/0x120
[ 9543.198385]  ? set_kthread_struct+0x40/0x40
[ 9543.198389]  ret_from_fork+0x1f/0x40

DESCRIPTION:
- From the SIO stack, we can see that this is a case of the done routine being called twice.
- Looking at vol_rvdcm_flush_start(), when a child SIO is created, it is added directly to the global SIO queue.
- This allows a child SIO to start while vol_rvdcm_flush_start() is still generating the other child SIOs.
- As a result, the first child SIO to complete can see the children count drop to zero and call done.
- The next child SIO can then also, independently, see the children count as zero and call done again.

RESOLUTION:
The code changes have been done to fix the problem.
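
The race described above is a parent/child completion-count problem. The following hypothetical C sketch (not the actual vxio code; parent_t, child_done() and queue_child() are invented names) shows a common way to ensure the done routine fires exactly once: keep an extra "creation bias" on the pending count until every child has been queued.

    #include <stdatomic.h>

    typedef struct parent {
        atomic_int pending;                  /* outstanding children + bias */
        void (*done)(struct parent *);
    } parent_t;

    /* Called once by each child when it completes, and once by the parent
     * to drop the creation bias; the last caller triggers done(). */
    static void child_done(parent_t *p)
    {
        if (atomic_fetch_sub(&p->pending, 1) == 1)
            p->done(p);
    }

    static void flush_start(parent_t *p, int nchildren)
    {
        atomic_store(&p->pending, 1);        /* creation bias */
        for (int i = 0; i < nchildren; i++) {
            atomic_fetch_add(&p->pending, 1);
            /* queue_child(p, i);  -- each queued child calls child_done(p)
             * when it finishes; children may start and complete here. */
        }
        child_done(p);                       /* drop the creation bias */
    }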

* 4146550 (Tracking ID: 4108235)

SYMPTOM:
System-wide hang causing all application and configuration I/Os to hang.

DESCRIPTION:
Memory pools are used in the vxio driver to manage kernel memory for different purposes. One of these pools, the 'NMCOM pool' used on the VVR secondary, was leaking memory. The leak was not detectable from the pool statistics because the metadata referring to the pool itself was being freed.

RESOLUTION:
The bug causing the memory leak has been fixed. There was a race condition in the VxVM transaction code path on the secondary side of VVR where memory was not freed when certain conditions were hit.

* 4150459 (Tracking ID: 4150160)

SYMPTOM:
System panics in the DMP code path.

DESCRIPTION:
The CMDS-fsmigadm test hits "Oops: 0003 [#1] PREEMPT SMP PTI".

Reproduction steps: run the cmds-fsmigadm test.

Build details:

# rpm -qi VRTSvxvm
Name        : VRTSvxvm
Version     : 8.0.3.0000
Release     : 0716_RHEL9
Architecture: x86_64
Install Date: Wed 10 Jan 2024 11:46:24 AM IST
Group       : Applications/System
Size        : 414813743
License     : Veritas Proprietary
Signature   : RSA/SHA256, Thu 04 Jan 2024 04:24:23 PM IST, Key ID 4e84af75cc633953
Source RPM  : VRTSvxvm-8.0.3.0000-0716_RHEL9.src.rpm
Build Date  : Thu 04 Jan 2024 06:35:01 AM IST
Build Host  : vmrsvrhel9bld.rsv.ven.veritas.com
Packager    : enterprise_technical_support@veritas.com
Vendor      : Veritas Technologies LLC
URL         : www.veritas.com/support
Summary     : Veritas Volume Manager

RESOLUTION:
The buggy code has been removed and the issue has been fixed.

* 4153597 (Tracking ID: 4146424)

SYMPTOM:
CVM node join operation may hang with vxconfigd on the master node stuck in the following code path.
 
ioctl ()
 kernel_ioctl ()
 kernel_get_cvminfo_all ()
 send_slaves ()
 master_send_dg_diskids ()
 dg_balance_copies ()
 client_abort_records ()
 client_abort ()
 dg_trans_abort ()
 dg_check_kernel ()
 vold_check_signal ()
 request_loop ()
 main ()

DESCRIPTION:
During vxconfigd-level communication between the master and slave nodes, if GAB returns EAGAIN, the vxconfigd code polls on the GAB fd. Under normal circumstances, GAB completes the poll call with an appropriate return value. If, however, the poll times out (poll returns 0), the timeout was erroneously treated as success and the caller assumed the message had been sent when in fact it had failed. This resulted in a hang in the message exchange between the master and slave vxconfigd.

RESOLUTION:
The fix is to retry the send operation on the GAB fd after a short delay if the poll times out in the context of an EAGAIN or ENOMEM error. The fix applies to both the master-side and slave-side functions.
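
As a rough illustration of the retry logic described above (a sketch only, not the actual vxconfigd code; gab_send() and the timeout values are invented), a send loop that treats a poll timeout as "retry later" rather than "message sent" might look like:

    #include <poll.h>
    #include <errno.h>
    #include <unistd.h>

    /* Hypothetical wrapper; the real GAB interface differs. */
    extern int gab_send(int fd, const void *msg, size_t len);

    static int send_with_retry(int fd, const void *msg, size_t len)
    {
        for (;;) {
            if (gab_send(fd, msg, len) == 0)
                return 0;                        /* message sent */
            if (errno != EAGAIN && errno != ENOMEM)
                return -1;                       /* real failure */

            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            int prc = poll(&pfd, 1, 1000);       /* 1-second timeout */
            if (prc == 0) {
                /* Timeout: do NOT treat this as success; back off and retry. */
                usleep(100000);
                continue;
            }
            if (prc < 0 && errno != EINTR)
                return -1;
            /* fd writable (or interrupted): retry the send. */
        }
    }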

* 4158517 (Tracking ID: 4159199)

SYMPTOM:
A coredump was generated while running the TC "./scripts/admin/vxtune/vxdefault.tc" on AIX 7.3 TL2:
gettimeofday(??, ??) at 0xd02a7dfc
get_exttime(), line 532 in "vm_utils.c"
cbr_cmdlog(argc = 2, argv = 0x2ff224e0, a_client_id = 0), line 275 in "cbr_cmdlog.c"
main(argc = 2, argv = 0x2ff224e0), line 296 in "vxtune.c"

DESCRIPTION:
Passing a NULL parameter to the gettimeofday() function was causing the coredump.

RESOLUTION:
Code changes have been made to pass a struct timeval parameter instead of NULL to the gettimeofday() function.
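
A minimal C illustration of the fix described above (a generic sketch, not the actual vxtune code): pass a valid struct timeval to gettimeofday() instead of NULL.

    #include <sys/time.h>
    #include <stdio.h>

    int main(void)
    {
        struct timeval tv;

        /* Passing NULL for the first argument can crash on some platforms
         * (as seen on AIX 7.3 TL2); always supply a struct timeval. */
        if (gettimeofday(&tv, NULL) == 0)
            printf("%ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
        return 0;
    }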

* 4158920 (Tracking ID: 4159680)

SYMPTOM:
0 Fri Apr  5 20:32:30 IST 2024 + read bd_dg bd_dgid
         0 Fri Apr  5 20:32:30 IST 2024 +          0 Fri Apr  5 20:32:30 IST 2024 first_time=1
+ clean_tempdir
         0 Fri Apr  5 20:32:30 IST 2024 + whence -v set_proc_oom_score
         0 Fri Apr  5 20:32:30 IST 2024 set_proc_oom_score not found
         0 Fri Apr  5 20:32:30 IST 2024 +          0 Fri Apr  5 20:32:30 IST 2024 1> /dev/null
+ set_proc_oom_score 17695012
         0 Fri Apr  5 20:32:30 IST 2024 /usr/lib/vxvm/bin/vxconfigbackupd[295]: set_proc_oom_score:  not found
         0 Fri Apr  5 20:32:30 IST 2024 + vxnotify

DESCRIPTION:
type set_proc_oom_score &>/dev/null && set_proc_oom_score $$

Here the stdout and stderr streams are not redirected to /dev/null, because the "&>" operator is not POSIX-compliant.
">out 2>&1" is the POSIX-compliant way to redirect both standard output and standard error to out, and it also works in pre-POSIX Bourne shells.

RESOLUTION:
The code changes have been done to fix the problem.

* 4159564 (Tracking ID: 4160533)

SYMPTOM:
The system is unbootable after enabling DMP native support on Solaris 11.4 SPARC, with the below error:
NOTICE: zfs_parse_bootfs: error 48
Cannot mount root on rpool/487 fstype zfs
panic[cpu0]/thread=20012000: vfs_mountroot: cannot mount root
Warning - stack not written to the dumpbuf
genunix:vfs_mountroot+494 ()
genunix:main+224 ()

DESCRIPTION:
The ZFS import failure was a consequence of the rvalp parameter returning a non-zero value; the Veritas code should not modify rvalp in such cases. This behavior was not a big problem on older SRUs, but Oracle started to use a non-zero rvalp as a better error indication.

RESOLUTION:
Code changes have been made to avoid touching rvalp in case of an IOCTL failure.
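
The following hypothetical C fragment (not the actual VxDMP code; xx_ioctl() and do_ioctl_work() are invented names) sketches the pattern described above: rvalp is written only on success and left untouched when the operation fails.

    extern int do_ioctl_work(int cmd, void *arg);   /* hypothetical helper */

    static int xx_ioctl(int cmd, void *arg, int *rvalp)
    {
        int err = do_ioctl_work(cmd, arg);

        if (err != 0)
            return err;         /* failure: do not touch *rvalp */

        *rvalp = 0;             /* success: report the return value */
        return 0;
    }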

* 4161646 (Tracking ID: 4149528)

SYMPTOM:
----------
Vxconfigd and vx commands hang. The vxconfigd stack is seen as follows.

        volsync_wait
        volsiowait
        voldco_read_dco_toc
        voldco_await_shared_tocflush
        volcvm_ktrans_fmr_cleanup
        vol_ktrans_commit
        volconfig_ioctl
        volsioctl_real
        vols_ioctl
        vols_unlocked_ioctl
        do_vfs_ioctl
        ksys_ioctl
        __x64_sys_ioctl
        do_syscall_64
        entry_SYSCALL_64_after_hwframe

DESCRIPTION:
------------
There is a hang in the CVM reconfig and DCO-TOC protocol, which causes vxconfigd and VxVM commands to hang.
In the case of overlapping reconfigs, it is possible that the rebuild seqno on the master and the slave ends up with different values.
If a DCO-TOC protocol exchange is also in progress at that point, the protocol hangs because of the difference in the rebuild seqno (the messages are dropped).

Messages similar to the following can be found in /etc/vx/log/logger.txt on the master node. The mismatch in the rebuild seqno is visible in the two messages; compare the strings "rbld_seq: 1" and "fsio-rbld_seqno: 0". The seqno received from the slave is 1, while the one present on the master is 0.

	Jan 16 11:57:56:329170 1705386476329170 38ee  FMR dco_toc_req: mv: masterfsvol1-1  rcvd req withold_seq: 0  rbld_seq: 1
	Jan 16 11:57:56:329171 1705386476329171 38ee  FMR dco_toc_req: mv: masterfsvol1-1  pend rbld, retry rbld_seq: 1  fsio-rbld_seqno: 0  old: 0  cur: 3  new: 3 
flag: 0xc10d  st

RESOLUTION:
----------
Instead of using the rebuild seqno to determine whether the DCO-TOC protocol is running in the same reconfig, the reconfig seqno is now used as the rebuild seqno. Since the reconfig seqno is the same on all nodes in the cluster, the DCO-TOC protocol finds a consistent rebuild seqno during CVM reconfig, and no node drops the DCO-TOC protocol messages.
A CVM protocol version check has been added around the use of the reconfig seqno as the rebuild seqno, so the new behavior takes effect only if the CVM protocol version is >= 300.
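
The following hypothetical C fragment (invented names; not the actual vxio code) sketches the acceptance check implied by the resolution: when the CVM protocol version allows it, an incoming DCO-TOC message is matched against the cluster-wide reconfig seqno instead of the locally tracked rebuild seqno.

    /* Assumed threshold taken from the text above. */
    #define CVM_PROTO_RECONFIG_SEQNO  300

    struct toc_msg { unsigned int rbld_seq; };

    static int toc_msg_acceptable(const struct toc_msg *msg,
                                  unsigned int cvm_proto_ver,
                                  unsigned int reconfig_seqno,
                                  unsigned int local_rbld_seqno)
    {
        /* New behavior: compare against the cluster-wide reconfig seqno. */
        if (cvm_proto_ver >= CVM_PROTO_RECONFIG_SEQNO)
            return msg->rbld_seq == reconfig_seqno;

        /* Old behavior: compare against the local rebuild seqno. */
        return msg->rbld_seq == local_rbld_seqno;
    }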

Patch ID: VRTSaslapm 8.0.2.1500

* 4125322 (Tracking ID: 4119950)

SYMPTOM:
Vulnerabilities have been reported in the third-party components [curl and libxml] that are used by VxVM.

DESCRIPTION:
The third-party components [curl and libxml], in the versions currently used by VxVM, have been reported with security vulnerabilities that need to be addressed.

RESOLUTION:
[curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

* 4133009 (Tracking ID: 4133010)

SYMPTOM:
The aslapm rpm does not have a changelog.

DESCRIPTION:
A changelog in the rpm helps to identify missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to aslapm rpm.

* 4137995 (Tracking ID: 4117350)

SYMPTOM:
The below error is observed when trying to import the disk group:

# vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg
VxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]

Please refer to system log for details.

DESCRIPTION:
The REPLICATED flag is used to identify a hardware-replicated device, so to import a disk group on REPLICATED disks the usereplicatedev option must be used. Since that option was not provided, the issue was observed.

RESOLUTION:
The REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch vm-sol11_sparc-Patch-8.0.2.1800.tar.gz to /tmp
2. Untar vm-sol11_sparc-Patch-8.0.2.1800.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/vm-sol11_sparc-Patch-8.0.2.1800.tar.gz
    # tar xf /tmp/vm-sol11_sparc-Patch-8.0.2.1800.tar
3. Install the hotfix. (Please note that the installation of this P-Patch will cause downtime.)
    # pwd /tmp/hf
    # ./installVRTSvxvm802P1800 [<host1> <host2>...]

You can also install this patch together with 8.0.2 base release using Install Bundles
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 8.0.2 directory and invoke the installer script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
o Before applying the patch:
  (a) Stop applications that access VxVM volumes.
  (b) Stop I/Os to all the VxVM volumes.
  (c) Unmount any filesystems residing on VxVM volumes.
  (d) In case of multiple boot environments, boot using the BE (Boot Environment) you wish to install the patch on.

For Solaris 11, refer to the man pages for specific instructions on using the 'pkg' command to install the patch provided.

Any other special or non-generic installation instructions should be described below as special instructions.

The following example installs the updated VRTSvxvm patch on a standalone machine:

        Example# pkg install --accept -g /patch_location/VRTSvxvm.p5p VRTSvxvm

After 'pkg install', follow any mandatory configuration steps mentioned in the SPECIAL INSTRUCTIONS section below.


REMOVING THE PATCH
------------------
For Solaris 11.1 or later, if DMP native support is enabled, DMP controls the ZFS root pool. Turn off native support before removing the patch.

***  If DMP native support is enabled:
                 a. Disable DMP native support by running the following command:

                        # vxdmpadm settune dmp_native_support=off

                 b. Reboot the system:

                        # reboot

NOTE: If you do not disable native support prior to removing the VxVM patch, the system cannot be restarted after you remove DMP.
Please ensure you have access to the base 8.0.2 Veritas software prior to removing the updated VRTSvxvm package.

NOTE: Uninstalling the patch will remove the entire package.

The following example removes a patch from a standalone system:

The VRTSvxvm package cannot be removed unless you also remove the VRTSaslapm package. Therefore, the pkg uninstall command will fail as follows:

# pkg uninstall VRTSvxvm
Creating Plan (Solver setup): -
pkg uninstall: Unable to remove 'VRTSvxvm@8.0.2.1800' due to the following packages that depend on it:
  VRTSaslapm@8.0.2.0


You will also need to uninstall the VRTSaslapm package.

# pkg uninstall VRTSvxvm VRTSaslapm

NOTE: You will need access to the base software of the VRTSvxvm package (original source media) to reinstall the uninstalled packages.


KNOWN ISSUES
------------
* Tracking ID: 4181021

SYMPTOM: The issue is observed when the tunable "use_hw_replicatedev" is off and no options are specified in the main.cf file to import SRDF disk groups.

WORKAROUND: Set ClearClone=1 on the CVMVolDg resource in the main.cf file or at the disk group level.



SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE


This update requires

IS-8.0.2 Update2 (Unix) on Solaris11_sparc platform

