Storage Foundation 7.4.2 Configuration and Upgrade Guide - Linux
- Section I. Introduction and configuration of Storage Foundation
- Section II. Upgrade of Storage Foundation
- Planning to upgrade Storage Foundation
- About the upgrade
- Supported upgrade paths
- Preparing to upgrade SF
- Using Install Bundles to simultaneously install or upgrade full releases (base, maintenance, rolling patch), and individual patches
- Upgrading Storage Foundation
- Performing an automated SF upgrade using response files
- Performing post-upgrade tasks
- Optional configuration steps
- Re-joining the backup boot disk group into the current disk group
- Reverting to the backup boot disk group after an unsuccessful upgrade
- Recovering VVR if automatic upgrade fails
- Resetting DAS disk names to include host name in FSS environments
- Upgrading disk layout versions
- Upgrading VxVM disk group versions
- Updating variables
- Setting the default disk group
- Verifying the Storage Foundation upgrade
- Section III. Post configuration tasks
- Section IV. Configuration and Upgrade reference
- Appendix A. Installation scripts
- Appendix B. Configuring the secure shell or the remote shell for communications
- About configuring secure shell or remote shell communication modes before installing products
- Manually configuring passwordless ssh
- Setting up ssh and rsh connection using the installer -comsetup command
- Setting up ssh and rsh connection using the pwdutil.pl utility
- Restarting the ssh session
- Enabling rsh for Linux
About using the postcheck option
You can use the installer's postcheck option to diagnose installation-related problems and to aid in troubleshooting.
Note:
This command option requires downtime for the node.
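A minimal invocation of the postcheck option looks like the following sketch; sys1 and sys2 are placeholder node names, and the installer script is assumed to be run from the installation media directory.

```sh
# Run the installer post-installation checks against the cluster nodes.
# sys1 and sys2 are placeholder host names; substitute your own nodes.
./installer -postcheck sys1 sys2
```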
When you use the postcheck option, it can help you troubleshoot the following VCS-related issues; example commands for confirming some of these conditions manually follow the list:
The heartbeat link does not exist.
The heartbeat link cannot communicate.
The heartbeat link is a part of a bonded or aggregated NIC.
A duplicated cluster ID exists (if LLT is not running at the check time).
The VRTSllt pkg version is not consistent on the nodes.
The llt-linkinstall value is incorrect.
The /etc/llthosts and /etc/llttab configuration is incorrect.
The /etc/gabtab file is incorrect.
The GAB linkinstall value is incorrect.
The VRTSgab pkg version is not consistent on the nodes.
The main.cf file or the types.cf file is invalid.
The /etc/VRTSvcs/conf/sysname file is not consistent with the hostname.
The cluster UUID does not exist.
The uuidconfig.pl file is missing.
The VRTSvcs pkg version is not consistent on the nodes.
The /etc/vxfenmode file is missing or incorrect.
The /etc/vxfendg file is invalid.
The vxfen link-install value is incorrect.
The VRTSvxfen pkg version is not consistent.
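The following sketch shows how a few of these LLT, GAB, and packaging conditions can be confirmed by hand; it assumes the default locations of the LLT and GAB utilities and is illustrative rather than exhaustive.

```sh
# Compare VRTS package versions across nodes (run on each node).
rpm -q VRTSllt VRTSgab VRTSvcs VRTSvxfen

# Review the LLT and GAB configuration files for consistency.
cat /etc/llthosts /etc/llttab /etc/gabtab

# Check heartbeat link status and GAB port membership (requires LLT and GAB to be running).
lltstat -nvv
gabconfig -a
```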
The postcheck option can help you troubleshoot the following SFHA or SFCFSHA issues; an example of checking the CVM and fencing state manually follows the list:
Volume Manager cannot start because the /etc/vx/reconfig.d/state.d/install-db file has not been removed.
Volume Manager cannot start because the volboot file is not loaded.
Volume Manager cannot start because no license exists.
Cluster Volume Manager cannot start because the CVM configuration is incorrect in the main.cf file. For example, the AutoStartList value is missing on the nodes.
Cluster Volume Manager cannot come online because the node ID in the /etc/llthosts file is not consistent.
Cluster Volume Manager cannot come online because Vxfen is not started.
Cluster Volume Manager cannot start because gab is not configured.
Cluster Volume Manager cannot come online because of a CVM protocol mismatch.
Cluster Volume Manager group name has changed from "cvm", which causes CVM to go offline.
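As referenced above, the CVM and fencing state can be checked manually; the commands below are a sketch and assume that VxVM, I/O fencing, and GAB are installed in their default locations.

```sh
# Show whether the node is an enabled CVM cluster member (master or slave).
vxdctl -c mode

# Display the current I/O fencing mode and membership.
vxfenadm -d

# Confirm that GAB is configured and which ports have membership.
gabconfig -a
```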
You can use the installer's postcheck option to perform the following checks; example commands for verifying each group of checks manually appear after the corresponding list:
General checks for all products:
All the required RPMs are installed.
The versions of the required RPMs are correct.
There are no verification issues for the required RPMs.
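For example, these package checks correspond to standard rpm queries; VRTSvxvm and VRTSvxfs below are example package names.

```sh
# Confirm that the required RPMs are installed and report their versions.
rpm -q VRTSvxvm VRTSvxfs

# Verify the installed files of an RPM (no output means no verification issues).
rpm -V VRTSvxvm
```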
Checks for Volume Manager (VM):
Lists the daemons which are not running (vxattachd, vxconfigbackupd, vxesd, vxrelocd ...).
Lists the disks which are not in 'online' or 'online shared' state (vxdisk list).
Lists the diskgroups which are not in 'enabled' state (vxdg list).
Lists the volumes which are not in 'enabled' state (vxprint -g <dgname>).
Lists the volumes which are in 'Unstartable' state (vxinfo -g <dgname>).
Lists the volumes which are not configured in /etc/fstab.
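These Volume Manager checks map to the commands cited above; mydg below is a placeholder disk group name.

```sh
# Confirm the listed VxVM daemons are running.
ps -ef | grep -E 'vxattachd|vxconfigbackupd|vxesd|vxrelocd' | grep -v grep

# Disks should be in the 'online' or 'online shared' state.
vxdisk list

# Disk groups should be in the 'enabled' state.
vxdg list

# Volumes in the disk group should be enabled and startable.
vxprint -g mydg
vxinfo -g mydg
```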
Checks for File System (FS):
Lists the VxFS kernel modules which are not loaded (vxfs/fdd/vxportal).
Whether all VxFS file systems present in the /etc/fstab file are mounted.
Whether all VxFS file systems present in /etc/fstab are at disk layout version 12 or higher.
Whether all mounted VxFS file systems are at disk layout version 12 or higher.
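These file system checks can be approximated manually as sketched below; /mnt1 is a placeholder VxFS mount point, and the vxupgrade path assumes the default /opt/VRTS/bin location.

```sh
# Confirm the VxFS kernel modules are loaded.
lsmod | grep -E 'vxfs|fdd|vxportal'

# List the VxFS entries in /etc/fstab and the VxFS file systems that are mounted.
grep vxfs /etc/fstab
mount -t vxfs

# Report the disk layout version of a mounted VxFS file system (placeholder mount point).
/opt/VRTS/bin/vxupgrade /mnt1
```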
Checks for Cluster File System:
Whether FS and ODM are running at the latest protocol level.
Whether all mounted CFS file systems are managed by VCS.
Whether cvm service group is online.
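A sketch of confirming the cluster file system state with standard VCS queries; it assumes the VCS binaries (hagrp, hares) are in the PATH and that the CVM service group keeps its default name of cvm.

```sh
# Confirm the cvm service group is online on the cluster nodes.
hagrp -state cvm

# List the CFSMount resources so you can confirm each CFS mount is managed by VCS.
hares -list Type=CFSMount

# Show the currently mounted VxFS/CFS file systems.
mount -t vxfs
```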