Veritas InfoScale™ 7.3.1 Release Notes - Linux
- Introduction
- Changes introduced in 7.3.1
- Changes related to installation and upgrades
- Changes related to the Cluster Server engine
- Changes related to Cluster Server agents
- Changes related to InfoScale in cloud environments
- Changes related to Veritas Volume Manager
- Changes related to Veritas File System
- Changes related to replication
- Changes related to Dynamic Multipathing
- System requirements
- Fixed Issues
- Known Issues
- Issues related to installation and upgrade
- Issues related to Veritas InfoScale Storage in Amazon Web Services cloud environments
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- Veritas Volume Manager known issues
- Virtualization known issues
- Veritas File System known issues
- Replication known issues
- Cluster Server known issues
- Operational issues for VCS
- Issues related to the VCS engine
- Issues related to the bundled agents
- Issues related to the VCS database agents
- Issues related to the agent framework
- Cluster Server agents for Volume Replicator known issues
- Issues related to Intelligent Monitoring Framework (IMF)
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- VCS Cluster Configuration wizard issues
- LLT known issues
- I/O fencing known issues
- Storage Foundation and High Availability known issues
- Storage Foundation Cluster File System High Availability known issues
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- Storage Foundation for Databases (SFDB) tools known issues
- Storage Foundation for Sybase ASE CE known issues
- Application isolation feature known issues
- Cloud deployment known issues
- Software Limitations
- Virtualization software limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- Veritas Volume Manager software limitations
- Veritas File System software limitations
- SmartIO software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Limitations related to VCS engine
- Veritas cluster configuration wizard limitations
- Limitations related to the VCS database agents
- Cluster Manager (Java console) limitations
- Limitations related to LLT
- Limitations related to I/O fencing
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Storage Foundation for Databases (SFDB) tools software limitations
- Storage Foundation for Sybase ASE CE software limitations
When a client node goes down for reasons such as a node panic, I/O fencing does not come up on that client node after the node restarts (3341322)
This issue happens when one of the following conditions is true:
- Any of the CP servers configured for HTTPS communication goes down.
- The CP server service group in any of the CP servers configured for HTTPS communication goes down.
- Any of the VIPs in any of the CP servers configured for HTTPS communication goes down.
When you restart the client node, fencing configuration starts on the node. The fencing daemon, vxfend, invokes some of the fencing scripts on the node. Each of these scripts has a timeout value of 120 seconds. If any of these scripts fails, fencing configuration fails on that node.
Some of these scripts use cpsadm commands to communicate with the CP servers. When the node comes up, each cpsadm command tries to connect to the CP server over its VIPs and waits up to 60 seconds before timing out. If a single script runs multiple such cpsadm commands against an unreachable CP server, their combined wait exceeds the script's 120-second limit and the script times out. As a result, I/O fencing does not come up on the client node.
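The arithmetic behind the failure can be illustrated with a short sketch (Python, purely illustrative; the 120-second script timeout and 60-second cpsadm connection timeout come from the description above, and the helper name is hypothetical):

```python
# Illustrative sketch of the timeout arithmetic described above.
# The constants come from the issue description; the helper is hypothetical.

SCRIPT_TIMEOUT = 120   # seconds vxfend allows each fencing script to run
CPSADM_TIMEOUT = 60    # seconds each cpsadm call waits for an unreachable CP server VIP

def script_times_out(cpsadm_calls: int) -> bool:
    """Return True if the worst-case cpsadm wait alone uses up the script's budget."""
    # The script also needs time for its remaining work, so consuming the whole
    # budget with cpsadm waits guarantees a timeout.
    return cpsadm_calls * CPSADM_TIMEOUT >= SCRIPT_TIMEOUT

for calls in (1, 2, 3):
    status = "times out" if script_times_out(calls) else "fits in the budget"
    print(f"{calls} cpsadm call(s) against an unreachable CP server: script {status}")
```

With one unreachable CP server, a single cpsadm call can cost up to 60 seconds; two or more such calls in the same script consume the entire 120-second budget, so the script is killed and fencing configuration fails on the node.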
Note that this issue does not occur with IPM-based communication between CP server and client clusters.
Workaround: Fix the CP server, that is, bring the affected CP server, its service group, or its VIP back online.
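Because the failure conditions involve an unreachable CP server, service group, or VIP, one way to confirm that the CP servers are healthy again before bringing fencing up on the client node is to ping each of them with cpsadm. A minimal sketch, assuming the cpsadm ping_cps action is available on the client node (run as root) and using placeholder CP server host names:

```python
# Hedged sketch: check that each CP server responds before reattempting fencing
# configuration on the client node. Host names are placeholders; cpsadm is
# assumed to be on the PATH (typically installed under /opt/VRTScps/bin).
import subprocess

CP_SERVERS = ["cps1.example.com", "cps2.example.com", "cps3.example.com"]  # placeholders

def cp_server_reachable(server: str) -> bool:
    """Return True if cpsadm can reach the CP server within a 60-second window."""
    try:
        result = subprocess.run(
            ["cpsadm", "-s", server, "-a", "ping_cps"],
            capture_output=True,
            timeout=60,
        )
        return result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False

unreachable = [s for s in CP_SERVERS if not cp_server_reachable(s)]
if unreachable:
    print("Fix these CP servers (or their service groups/VIPs) first:", unreachable)
else:
    print("All CP servers respond; retry fencing configuration on the client node.")
```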