Veritas InfoScale™ Operations Manager 7.3.1 Add-ons User's Guide
- Section I. VCS Utilities Add-on 7.3.1
- Section II. Distribution Manager Add-on 7.3.1
- Section III. Fabric Insight Add-on 7.3.1
- Section IV. Patch Installer Add-on 7.3.1
- Introduction to Patch Installer Add-on
- Using Patch Installer Add-on
- Prerequisites for deploying SFHA hot fixes
- Deploying SFHA hot fixes
- Requirements for scripts to customize SFHA hot fix deployment
- Adding pre-installation and post-installation scripts to SFHA hot fixes
- Removing or replacing custom scripts for SFHA hot fixes
- Viewing information about SFHA hot fix deployment requests
- Section V. Storage Insight Add-on 7.3.1
- Performing the deep discovery of enclosures
- About Storage Insight Add-on
- About the discovery host
- About the network-attached storage discovery
- Adding HITACHI storage enclosures for deep discovery
- Adding EMC Symmetrix storage enclosures for deep discovery
- Adding IBM XIV storage enclosures for deep discovery
- Adding NetApp storage enclosures for deep discovery
- Adding EMC CLARiiON storage enclosures for deep discovery
- Adding HP EVA storage enclosures for deep discovery
- Adding IBM System Storage DS enclosures for deep discovery
- Adding EMC Celerra storage enclosures for deep discovery
- Adding EMC VNX storage enclosures for deep discovery
- Adding EMC VPLEX storage enclosures for deep discovery
- Adding 3PAR storage enclosures for deep discovery
- Adding IBM SVC storage enclosures for deep discovery
- Editing the deep discovery configuration for an enclosure
- Removing deep discovery for a storage enclosure
- Refreshing the enclosures that are configured for deep array discovery
- Monitoring the usage of thin pools
- Monitoring storage array metering data
- Managing LUN classifications
- Appendix A. Enclosure configuration prerequisites
- HITACHI enclosure configuration prerequisites
- EMC Symmetrix storage array configuration prerequisites
- Physical connection requirements for EMC Symmetrix enclosure
- Device setup requirements for EMC Symmetrix arrays
- Veritas InfoScale Operations Manager setup requirements for in-band EMC Symmetrix storage arrays
- Veritas InfoScale Operations Manager setup requirements to discover EMC Symmetrix storage arrays through remote SYMAPI servers
- IBM XIV enclosure configuration prerequisites
- NetApp storage enclosure configuration prerequisites
- EMC CLARiiON storage enclosures configuration prerequisites
- Hewlett-Packard Enterprise Virtual Array (HP EVA) configuration prerequisites
- IBM System Storage DS enclosure configuration prerequisites
- IBM SVC enclosure configuration prerequisites
- EMC Celerra enclosure configuration prerequisites
- EMC VNX storage enclosure configuration prerequisites
- EMC VPLEX storage enclosure configuration prerequisites
- 3PAR storage enclosure configuration prerequisites
- Appendix B. Commands used by Management Server for deep discovery of enclosures
- HITACHI storage enclosure commands
- EMC Symmetrix storage enclosure commands
- IBM XIV storage enclosures commands
- NetApp storage enclosure commands
- EMC CLARiiON storage enclosure commands
- HP EVA storage enclosure commands
- IBM System Storage DS enclosure commands
- EMC Celerra storage enclosure commands
- EMC VNX (Block) storage enclosure commands
- EMC VNX (File) storage enclosure commands
- EMC VPLEX storage enclosure commands
- 3PAR storage enclosure commands
- IBM SVC storage enclosure commands
- Section VI. Storage Insight SDK Add-on 7.3.1
- Overview of Storage Insight SDK Add-on 7.3.1
- Managing Veritas InfoScale Operations Manager Storage Insight plug-ins
- About creating Storage Insight plug-in
- About installing Storage Insight SDK Add-on
- About discovery script
- About the enclosure discovery command output
- --list encls command to discover all enclosures
- --list pdevs --encl enclosure Id command to discover all physical disks in an enclosure
- --list ldevs --encl enclosure Id command to discover all logical disks for an enclosure
- --list adapters --encl enclosure Id command to discover all adapters of an enclosure
- --list ports --encl enclosure Id command to discover all ports for an enclosure
- --list capacities --encl enclosure Id command to discover aggregate physical, RAID group, and logical capacities for an enclosure
- --list ldevpdevmap --encl enclosure Id command to discover logical disk-physical disk mapping for an enclosure
- --list ldevhostmap --encl enclosure Id command to discover logical device-host mapping for an enclosure
- --list meta-ldevs --encl enclosure Id command to discover the mapping of meta logical disks with segment logical disks for an enclosure
- --list raidgroups --encl enclosure Id command to discover RAID groups for an enclosure
- --list thinpools --encl enclosure Id command to discover all thin pools for an enclosure
- --list rgpdevmap --encl enclosure Id command to discover RAID group-physical disk mapping for an enclosure
- --list rgldevmap --encl enclosure Id command to discover RAID group-logical device mapping for an enclosure
- --list tpsrcldevmap --encl enclosure Id command to discover thin pool-source logical device mapping for an enclosure
- --list tpldevmap --encl enclosure Id command to discover thin pool logical device mapping for an enclosure
- --list replications --encl enclosure Id command to discover the replications for an enclosure
- About additional scripts
- About device identifiers
- Storage Insight Plug-in sample
- Creating a Storage Insight plug-in
- Editing a Storage Insight plug-in
- Testing a Storage Insight plug-in
- Section VII. Storage Provisioning and Enclosure Migration Add-on 7.3.1
- Provisioning storage
- About storage provisioning
- About creating a storage template
- Creating a storage template using VxFS file systems
- Creating a storage template using NTFS file systems
- Creating a storage template using volumes
- Updating a storage template
- Provisioning storage
- Uploading storage templates
- Downloading storage templates
- Deleting storage templates
- Locking storage templates
- Unlocking storage templates
- Migrating volumes
- Section VIII. Veritas HA Plug-in for VMware vSphere Web Client
- Introduction to Veritas HA Plug-in for vSphere Web Client
- Installation and uninstallation of Veritas HA Plug-in for vSphere Web Client
- Configurations for Veritas HA Plug-in for vSphere Web Client
- Registering the HA Plug-in with VMware vCenter Server
- Unregistering the HA Plug-in from VMware vCenter Server
- Deploying the HA Plug-in if the Management Server is configured in a high availability environment
- Adding managed hosts to the Management Server
- Migrating virtual machines to Veritas InfoScale Operations Manager
- Section IX. Application Migration Add-on
- Introduction to Application Migration Add-on
- Creating and managing an application migration plan
- Supported versions and platforms
- User privileges
- Prerequisites for creating an application migration plan
- Prerequisites for migration to AWS
- VVR Replication: Environment variables used in application migration
- Creating an application migration plan
- Understanding user-defined tasks
- Understanding application migration operations
- Understanding the cleanup operation
- Understanding the tasks executed in each operation
- Validations performed before migration plan execution
- Executing the application migration plan
- Editing an application migration plan
- Deleting application migration plan(s)
- Exporting application migration plan(s)
- Importing application migration plan(s)
- Viewing historical runs
- Viewing properties of an application migration plan
- Application migration logs
Understanding the Rehearse operation
In this operation, you can bring the application online on the target cluster and test it before you perform the actual migration.
If the migration plan uses a mirror disk group to move the data, the sync status of all volumes is checked first, after which the cluster configuration of the selected service group is discovered on the source and translated to the target. The mirror disk group is then detached from the source cluster nodes, and endian changes are performed on all volumes of the mirror disk group on the target cluster nodes.
After the endian changes are done, the service groups are brought online on the target cluster. After you verify that the application runs correctly on the target, the service groups are taken offline and the target cluster configuration is removed.
Before the cluster configuration is removed, a backup of the configuration is taken on the first node of the target cluster in the /etc/VRTSvcs/conf/config directory. The backup file name has the following format:
main.cf_plan_name.date.time
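For example, you can list and inspect these backups on the first node of the target cluster with standard shell commands. The file name below is a placeholder that follows the documented format; substitute the values generated for your plan.

    # List the cluster configuration backups created by the Rehearse operation
    ls -l /etc/VRTSvcs/conf/config/main.cf_*
    # Compare a backup against the current configuration (file name is illustrative)
    diff /etc/VRTSvcs/conf/config/main.cf_<plan_name>.<date>.<time> /etc/VRTSvcs/conf/config/main.cf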
The volumes in the mirror disk groups are then reattached to the corresponding volumes in the source disk group.
The Rehearse operation aborts if any of the volumes in the source and mirror disk groups are not completely synchronized.
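If you want to confirm the synchronization state yourself before you run the operation, you can do a manual spot check with standard Volume Manager commands from a source cluster node; the disk group name below is a placeholder, and the add-on performs its own validation regardless.

    # Any running resynchronization task indicates that volumes are still syncing
    vxtask list
    # Review the plex and volume states for the mirror disk group
    vxprint -g <diskgroup> -ht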
If the migration plan uses VVR replication, all RVGs are first checked to ensure that they are 100% synchronized. If the application is writing to the volumes, data synchronization to the secondary site might still be in progress, and the Rehearse operation does not proceed. In such a scenario, reduce or stop the application writes so that the data remains in sync.
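To check the replication state manually from a source host, you can use the standard VVR administration commands shown below; the disk group, RVG, and RLINK names are placeholders.

    # Report the replication status of the RVG; Data status should be up-to-date
    vradmin -g <diskgroup> repstatus <rvg_name>
    # Show the status of an individual RLINK
    vxrlink -g <diskgroup> status <rlink_name>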
Prerequisite steps for creating space-optimized snapshots of the volumes, such as preparing the volumes and creating the cache volume and cache object, are performed on the target cluster nodes. The IP, DiskGroup/CVMVolDg, and RVGLogowner resources are then removed from the target cluster for the disk groups that are being migrated as part of the plan. This ensures that no duplicate resources appear for an entity during cluster translation.
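These prerequisite steps correspond to standard Volume Manager operations for space-optimized instant snapshots. The following is a minimal sketch of the manual equivalents, assuming placeholder disk group, volume, cache volume, and cache object names; the add-on runs the equivalent steps for you.

    # Prepare the data volume for instant snapshot operations
    vxsnap -g <diskgroup> prepare <datavol>
    # Create a volume to back the shared cache
    vxassist -g <diskgroup> make <cachevol> 1g init=active
    # Create the cache object on that volume and start it
    vxmake -g <diskgroup> cache <cacheobject> cachevolname=<cachevol>
    vxcache -g <diskgroup> start <cacheobject>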
The cluster configuration of the selected service group and its dependencies is then discovered on the source and translated to the target. The file systems on all mounted volumes of the disk groups that are part of the replication are frozen on the source for a moment, and space-optimized snapshots of these volumes are taken on the target with the help of VVR In-Band Control messaging (vxibc). After the snapshots are taken, endian changes are performed on the snapshots, and the service groups are brought online on the target. When the service groups are brought online, the snapshot volumes are mounted on the target cluster. Applications started on the target can write to these snapshot volumes until the cache volume fills up. After you verify that the application runs correctly on the target, the service groups are taken offline and the target cluster configuration is removed.
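Before the service groups are taken offline again, you can confirm the state of the application on the target cluster with the standard VCS commands below; the service group and system names are placeholders.

    # Summarize cluster, system, and service group states on the target cluster
    hastatus -sum
    # Check the state of the migrated service group on a specific target node
    hagrp -state <service_group> -sys <system>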
Before the cluster configuration is removed, it is backed up on the first node of the target cluster in the /etc/VRTSvcs/conf/config directory. The backup file name has the following format:
main.cf_plan_name.date.time
The disk groups are then imported on the target cluster node and the snapshots are destroyed. The IP, DiskGroup/CVMVolDg, and RVGLogowner resources are re-created on the target, as required, and then brought online.
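If you want to verify the result on the target cluster, you can use the standard commands below; the disk group, service group, resource, and system names are placeholders.

    # Confirm that the disk group is imported on the target node
    vxdg list
    # Verify that the re-created resources and the service group are online
    hares -state <resource_name> -sys <system>
    hagrp -state <service_group> -sys <system>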