InfoScale™ 9.0 Cluster Server Bundled Agents Reference Guide - Solaris
- Introducing bundled agents
- Storage agents
- About the storage agents
- DiskGroup agent
- DiskGroupSnap agent
- Dependencies for DiskGroupSnap agent
- Agent functions for DiskGroupSnap agent
- State definitions for DiskGroupSnap agent
- Attributes for DiskGroupSnap agent
- Notes for DiskGroupSnap agent
- Resource type definition for DiskGroupSnap agent
- Sample configurations for DiskGroupSnap agent
- Debug log levels for DiskGroupSnap agent
- Disk agent
- Volume agent
- VolumeSet agent
- Dependencies for VolumeSet agent
- Agent functions for VolumeSet agent
- State definitions for VolumeSet agent
- Attributes for VolumeSet agent
- Resource type definition for VolumeSet agent
- Sample configurations for VolumeSet agent
- Agent notes for VolumeSet agent
- Inaccessible volumes prevent the VolumeSet agent from coming online
- Debug log levels for VolumeSet agent
- Mount agent
- IMF awareness
- Dependencies for Mount agent
- Agent functions for Mount agent
- State definitions for Mount agent
- Attributes for Mount agent
- Resource type definition for Mount agent
- Notes for Mount agent
- High availability fire drill
- VxFS file system lock
- IMF usage notes
- IPv6 usage notes
- Support for loopback file system
- Enabling Level two monitoring for the Mount agent
- ZFS file system and pool creation example
- Support for VxFS direct mount inside non-global zones
- Sample configurations for Mount agent
- Debug log levels for Mount agent
- Zpool agent
- VMwareDisks agent
- SFCache agent
- Network agents
- About the network agents
- IP agent
- NIC agent
- About the IPMultiNICB and MultiNICB agents
- IPMultiNICB agent
- Dependencies for IPMultiNICB agent
- Requirements for IPMultiNICB
- Agent functions for IPMultiNICB agent
- State definitions for IPMultiNICB agent
- Attributes for IPMultiNICB agent
- Resource type definition for IPMultiNICB agent
- Manually migrating a logical IP address for IPMultiNICB agent
- Sample configurations for IPMultiNICB agent
- Debug log levels for IPMultiNICB agent
- MultiNICB agent
- Base and Multi-pathing modes for MultiNICB agent
- Oracle trunking for MultiNICB agent
- The haping utility for MultiNICB agent
- Dependencies for MultiNICB agent
- Agent functions for MultiNICB agent
- State definitions for MultiNICB agent
- Attributes for MultiNICB agent
- Optional attributes for Base and Mpathd modes for MultiNICB agent
- Optional attributes for Base mode for MultiNICB agent
- Optional attributes for Multi-pathing mode for MultiNICB agent
- Resource type definition for MultiNICB agent
- Solaris operating modes: Base and Multi-Pathing for MultiNICB agent
- Base mode for MultiNICB agent
- Failover and failback for MultiNICB agent
- Multi-Pathing mode for MultiNICB agent
- Configuring MultiNICB and IPMultiNICB agents on Solaris 11
- Trigger script for MultiNICB agent
- Sample configurations for MultiNICB agent
- Debug log levels for MultiNICB agent
- DNS agent
- Dependencies for DNS agent
- Agent functions for DNS agent
- State definitions for DNS agent
- Attributes for DNS agent
- Resource type definition for DNS agent
- Agent notes for DNS agent
- About using the VCS DNS agent on UNIX with a secure Windows DNS server
- High availability fire drill for DNS agent
- Monitor scenarios for DNS agent
- Sample Web server configuration for DNS agent
- Secure DNS update for BIND 9 for DNS agent
- Setting up secure updates using TSIG keys for BIND 9 for DNS agent
- Sample configurations for DNS agent
- Debug log levels for DNS agent
- File share agents
- About the file service agents
- NFS agent
- NFSRestart agent
- Share agent
- About the Samba agents
- NetBios agent
- Service and application agents
- About the services and applications agents
- AlternateIO agent
- Apache HTTP server agent
- Application agent
- IMF awareness
- High availability fire drill for Application agent
- Dependencies for Application agent
- Agent functions
- State definitions for Application agent
- Attributes for Application agent
- Resource type definition for Application agent
- Notes for Application agent
- Sample configurations for Application agent
- Debug log levels for Application agent
- CoordPoint agent
- LDom agent
- Configuring primary and logical domain dependencies and failure policy
- IMF awareness
- Dependencies
- Agent functions
- State definitions
- Attributes
- Resource type definition
- LDom agent notes
- About the auto-boot? variable
- Notes for the DomainFailurePolicy attribute
- Using VCS to migrate a logical domain
- Configuring the LDom agent for DR in a Global Cluster environment
- Using the LDom agent with IMF
- Sample configuration 1
- Sample configuration 2
- Configuration to support user-initiated LDom migration
- Configuration for VCS-initiated migration
- Sample configuration (Dynamic virtual machine service group failover)
- Debug log levels
- Process agent
- IMF awareness
- High availability fire drill for Process agent
- Dependencies for Process agent
- Agent functions for Process agent
- State definitions for Process agent
- Attributes for Process agent
- Resource type definition for Process agent
- Usage notes for Process agent
- Sample configurations for Process agent
- Debug log levels for Process agent
- ProcessOnOnly agent
- Project agent
- RestServer agent
- Zone agent
- Infrastructure and support agents
- Testing agents
- Replication agents
Notes for the DomainFailurePolicy attribute
When the DomainFailurePolicy attribute is set, the LDom agent configures each key of the attribute as a master domain of the logical domain, and the corresponding value of that key as the failure policy of the master domain.
The LDom agent uses the following command to set the master for the logical domain:
# ldm set-domain master=master-domain guestldom
The LDom agent uses the following command to set the failure policy for the master domain:
# ldm set-domain failure-policy=failure-policy master-domain
Because the DomainFailurePolicy attribute is available at the resource level, you can, knowingly or unknowingly, set the failure policy of the master domain to different values for different LDom resources. However, at any given point, the LDom agent can set only one failure policy for the master domain. In a cluster with multiple LDom resources, different values for the failure policy of the master domain can create a conflict. To avoid such conflicts, the LDom agent applies an internal priority when it sets the failure policy of the master domain.
The internal priority is as follows:
panic: highest
reset: high
stop: low
ignore: lowest
If the failure policy that is currently set on the system for the master domain has a lower priority than the value specified in the DomainFailurePolicy attribute of the LDom resource, the LDom agent changes the failure policy of the master domain to the attribute value.
If the failure policy that is currently set on the system has a higher priority than the value specified in the attribute, the LDom agent does not change the failure policy of the master domain. The agent logs a message to indicate the conflict the first time it occurs.
If the failure policy of the master domain is set to ignore, the LDom agent does not add the master domain to the masters list of the logical domain. If the master domain is already part of the masters list, the LDom agent removes it from the list.
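The priority rules above can be sketched as a small decision function. This is an illustrative model only; the helper name should_update_policy and the numeric priority map are assumptions for the sketch, not the LDom agent's actual code:

```python
# Internal priority of failure policies, as described above
# (illustrative model of the LDom agent's conflict resolution;
# the numeric values and function name are assumptions).
PRIORITY = {"panic": 3, "reset": 2, "stop": 1, "ignore": 0}

def should_update_policy(current, requested):
    """Return True if the agent would change the master domain's
    failure policy from `current` to `requested`.

    The agent only raises the policy to a higher-priority value;
    lowering it must be done manually with the ldm command.
    """
    return PRIORITY[requested] > PRIORITY[current]

# Example 1: current is "ignore", resource requests "stop" -> changed
print(should_update_policy("ignore", "stop"))   # True

# Example 2: current is "panic", resource requests "stop" -> retained
print(should_update_policy("panic", "stop"))    # False
```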
Note:
Arctera does not recommend setting the failure policy of any master domain to panic.
Example 1
Suppose the failure policy of the master domain (primary) is set to ignore on the system, and the DomainFailurePolicy attribute for the LDom resource is changed to { primary = "stop" }. To confirm that the failure policy of the primary domain is set to ignore on the system, enter the following command:
# ldm list-bindings primary | grep failure-policy
In this example, because stop has a higher internal priority than ignore, the LDom agent changes the failure policy of the primary domain to stop. The agent uses the following command to make the change:
# ldm set-domain failure-policy=stop primary
Example 2
Suppose the failure policy of the master domain (primary) is set to panic on the system, and the DomainFailurePolicy attribute for the LDom resource is changed to { primary = "stop" }. To confirm that the failure policy of the primary domain is set to panic on the system, enter the following command:
# ldm list-bindings primary | grep failure-policy
In this example, because stop has a lower internal priority than panic, the failure policy of the primary domain remains panic.
If the failure policy of a master domain needs to be set to a value with a lower priority than the value currently set on the system, you must run the ldm command manually. For example, to change the failure policy of the primary domain from reset or panic to stop, enter:
# ldm set-domain failure-policy=stop primary
Example 3
If the failure policy of a master domain is specified as ignore in the DomainFailurePolicy attribute, the LDom agent excludes that master domain from the masters list of the logical domain.
For example, if the masters list of a logical domain contains primary and secondary, and the DomainFailurePolicy attribute of the LDom resource for that logical domain is changed to { primary = "ignore", secondary = "stop" }, then the primary domain is removed from the masters list.
Before you change the DomainFailurePolicy attribute, you can enter the following command to check whether the masters list of a logical domain contains primary and secondary:
# ldm list-bindings guestldom | grep master
The following output shows that the logical domain contains both primary and secondary:
master=primary, secondary
After you change the DomainFailurePolicy attribute, enter the following command to verify that the primary domain is removed from the masters list of the logical domain:
# ldm list-bindings guestldom | grep master
The following output shows that the primary domain is removed from the masters list:
master= secondary
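For scripting, the master= line of the ldm list-bindings output can be parsed with a short helper. This is an illustrative sketch; the function name parse_masters and the parsing approach are assumptions, not part of the product:

```python
def parse_masters(bindings_line):
    """Parse a 'master=...' line from `ldm list-bindings <domain>`
    and return the list of master domains.

    Hypothetical helper for illustration; tolerates spaces after
    commas or after the '=' sign, as seen in the sample output.
    """
    _, _, value = bindings_line.partition("master=")
    return [name.strip() for name in value.split(",") if name.strip()]

# Before the change: both primary and secondary are masters
print(parse_masters("master=primary, secondary"))  # ['primary', 'secondary']

# After the change: primary has been removed
print(parse_masters("master= secondary"))          # ['secondary']
```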
For use case scenarios and for VCS configuration where the DomainFailurePolicy attribute must be set, refer to the InfoScale Virtualization Guide.