InfoScale™ 9.0 Cluster Server Generic Application Agent Configuration Guide - AIX, Linux, Solaris

Product(s): InfoScale & Storage Foundation (9.0)
Platform: AIX, Linux, Solaris

Before configuring monitoring for generic applications

Ensure that you complete the following tasks before configuring monitoring for generic applications:

  • Install the Cluster Server on the physical machine, virtual machine, logical domain, or LPAR on which you want to configure the application for monitoring.

  • If you plan to launch the wizard from VOM, ensure that the cluster is configured and running.

  • Assign the following privileges to the logged-on user on the system where you want to configure application monitoring:

    • When the wizard is launched through the vSphere Client, assign the Configure Application Monitoring (Admin) privilege.

    • When the wizard is launched through VOM, the group of the logged-on user must be assigned the Admin role on the cluster or on the Availability perspective.

      The permission on the cluster may be explicitly assigned or inherited from a parent Organization.

  • Install the application and the associated components that you want to monitor on the physical machine, virtual machine, logical domain, or LPAR.

  • If you have configured a firewall, ensure that your firewall settings allow access to the ports used by the Cluster Server installer, wizards, and services.

    Verify that the following ports are not blocked by a firewall:

    Physical environment, logical domain, and LPAR: 5634, 14161, 14162, 14163, and 14164

    VMware environment: 443, 5634, 14152, and 14153

    Note:

    In the physical environment, logical domain, or LPAR, ensure that at least one of the ports 14161, 14162, 14163, and 14164 is kept open.
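As a quick pre-check, you can attempt a TCP connection to each port; a minimal sketch in Python (the host name node1 is a placeholder, and the test succeeds only when a service is already listening on the port):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports used in the physical environment, logical domain, or LPAR
for port in (5634, 14161, 14162, 14163, 14164):
    state = "open" if port_open("node1", port) else "blocked or closed"
    print(f"port {port}: {state}")
```

A port can also appear closed simply because no service is listening yet, so run this check after the Cluster Server services are started.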

  • You must not select bonded interfaces for cluster communication. A bonded interface is a logical NIC formed by grouping several physical NICs together. All NICs in a bond share the same MAC address, which may cause the following issues:

    • SSO configuration may fail.

    • The wizard may fail to discover the specified network adapters.

    • The wizard may fail to discover or validate the specified system name.

  • If you want to configure the storage dependencies of the application through the wizard, the LVM volumes or VxVM volumes used by the application should not be mounted on more than one mount point path.

  • The host name of the system must be resolvable through the DNS server or locally, using entries in the /etc/hosts file.
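A quick way to confirm that a host name resolves (through DNS or /etc/hosts) is a lookup; a minimal sketch in Python:

```python
import socket

def resolvable(name):
    """Return True if the host name resolves through DNS or /etc/hosts."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

# Check the local host name; prints True when it resolves
print(resolvable(socket.gethostname()))
```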

  • To review the information about the functions, attributes, and resource type definition of the VCS Application agent, refer to the Cluster Server Bundled Agents Reference Guide.

    You can download the latest documents from SORT: https://sort.veritas.com/documents

  • If your application uses storage mount points, you must ensure that those mount points are already mounted on the physical machine, virtual machine, logical domain, or LPAR from which you are configuring the application for monitoring. All the required disks must be attached and all the storage components must be available. You must launch the Arctera High Availability Configuration Wizard from the physical machine, virtual machine, logical domain, or LPAR on which the application is running. The wizard discovers the disks that are attached and the storage that is currently available.
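To confirm the mount points before launching the wizard, you can compare them against the system's mount table; a minimal sketch in Python. This reads /proc/mounts and is therefore Linux-specific (on AIX or Solaris, parse the output of the mount command instead), and the paths in required are hypothetical examples:

```python
def mounted_paths():
    """Return the set of currently mounted mount point paths (Linux)."""
    with open("/proc/mounts") as f:
        return {line.split()[1] for line in f}

required = ["/app/data", "/app/logs"]  # hypothetical application mount points
missing = [m for m in required if m not in mounted_paths()]
if missing:
    print("Not mounted:", ", ".join(missing))
else:
    print("All required mount points are mounted")
```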

Additional prerequisite for VOM
  • The wizard option is available in VOM only after the cluster is configured and running. Configure the cluster through the CPI installer or manually before you launch the wizard.

Additional prerequisites for VMware virtual environment
  • Install and enable VMware Tools on the virtual machine where you want to monitor applications with VCS. Install a version that is compatible with the VMware ESX/ESXi server.

  • Install the VMware vSphere Client. You can configure application monitoring from the Arctera High Availability tab in the vSphere Client.

    You can also configure application monitoring directly from a browser window using the following URL:

    https://VMNameorIP:5634/vcs/admin/application_health.html

    VMNameorIP is the host name or IP address of the virtual machine on which you want to configure application monitoring.

  • Install Arctera High Availability Console on a Windows system in your data center and register the Arctera High Availability plug-in with the vCenter server.

  • You must not restore a snapshot on a virtual machine where the application is currently online, if the snapshot was taken while the application was offline on that virtual machine. Doing so may cause an unwanted failover. The reverse also applies: do not restore a snapshot that was taken while the application was online on a virtual machine where the application is currently offline. This may lead to a misconfiguration where the application is online on multiple systems simultaneously.

  • Verify that the disks used by the application that you want to monitor are attached to non-shared controllers so that they can be detached from the system and attached to another system.

  • While creating a VCS cluster in a virtual environment, you must configure a cluster communication link over the public network in addition to the private adapters, and assign the link that uses the public adapter as a low-priority link. Without it, if the private network adapters fail, the systems cannot connect to each other; each system then considers the other to have faulted and tries to gain access to the disks, which leads to an application fault.
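On systems that use LLT for cluster communication, the low-priority public link is typically added with a link-lowpri directive in /etc/llttab; an illustrative sketch (the node name, cluster ID, and the device names eth0, eth1, and eth2 are placeholders):

```
set-node node1
set-cluster 1042
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -
link-lowpri eth0 eth0 - ether - -
```

Here eth1 and eth2 are the private links and eth0 is the public adapter; LLT sends only heartbeat traffic on the low-priority link unless the high-priority links fail.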

  • You must not attach multiple types of SCSI controllers to the virtual machines because storage dependencies of the application cannot be determined and configured.

  • The term 'shared storage' refers to the removable disks attached to the virtual machine. It does not refer to disks attached to the shared controllers of the virtual machine.

  • Verify that the boot sequence of the virtual machine is such that the boot disk (OS hard disk) is placed before the removable disks. If the sequence places the removable disks before the boot disk, the virtual machine may not reboot after an application failover. The reboot may halt with an OS not found error. This issue occurs because during the application failover, the removable disks are detached from the current virtual machine and are attached to the failover target system.

  • By default, the controller ID and port must remain the same on all cluster nodes. If you do not want the resource to have the same controller ID and port, you should localize the attribute for all cluster nodes. Localization allows all cluster nodes to have different controller IDs and port numbers. For more information about localizing an attribute, refer to the Cluster Server Administrator's Guide.