NetBackup IT Analytics System Administrator Guide
- Introduction
- Preparing for updates
- Backing up and restoring data
- Best practices for disaster recovery
- Oracle database backups
- File system backups
- Oracle database: Cold backup
- Oracle database: Export backups
- Scheduling the Oracle database export
- Oracle database: On demand backup
- Restoring the NetBackup IT Analytics system
- Import the Oracle database
- Manual steps for database import / export using Data Pump
- Monitoring NetBackup IT Analytics
- Accessing NetBackup IT Analytics reports with the REST API
- Defining NetBackup estimated tape capacity
- Automating host group management
- About automating host group management
- Task overview: managing host groups in bulk
- Preparing to use PL/SQL utilities
- General utilities
- Categorize host operating systems by platform and version
- Identifying a host group ID
- Move or copy clients
- Organize clients by attribute
- Move host group
- Delete host group
- Move hosts and remove host groups
- Organize clients into groups by backup server
- Merge duplicate backup clients
- Merge duplicate hosts
- Bulk load utilities
- Veritas NetBackup utilities
- Automate NetBackup utilities
- Organize clients into groups by management server
- Set up an inactive clients group
- Set up a host group for clients in inactive policies
- Set up clients by policy
- Set up clients by policy type
- IBM Tivoli Storage Manager utilities
- Set up clients by policy domain
- Set up clients by IBM Tivoli Storage Manager instance
- Scheduling utilities to run automatically
- Host matching identification for single-domain multi-customer environments
- Attribute management
- Attribute bulk load utilities
- Attribute naming rules
- Rename attributes before upgrading
- Load host attributes and values
- Load attributes and values and assign to hosts
- Load array attributes and values and assign to arrays
- Overview of application attributes and values
- Load application database attributes and values
- Load MS Exchange organization attributes and values
- Load LUN attributes and values
- Load switch attributes and values
- Load port attributes and values
- Load Subscription attributes and values
- Importing generic backup data
- Backup job overrides
- Managing host data collection
- System configuration in the Portal
- System configuration in the Portal
- System configuration: functions
- Navigation overview
- System configuration parameter descriptions: Additional info
- Anomaly detection
- Data collection: Capacity chargeback
- Database administration: database
- Host discovery: EMC Avamar
- Host discovery: Host
- Events captured for audit
- Custom parameters
- Adding/editing a custom parameter
- Portal customizations
- Configuring global default inventory object selection
- Restricting user IDs to single sessions
- Customizing date format in the report scope selector
- Customizing the maximum number of lines for exported reports
- Customizing the total label display in tabular reports
- Customizing the host management page size
- Customizing the path and directory for File Analytics database
- Configuring badge expiration
- Configuring the maximum cache size in memory
- Configuring the cache time for reports
- Performance profile schedule customization
- LDAP and SSO authentication for Portal access
- Change Oracle database user passwords
- Integrate with CyberArk
- Tuning NetBackup IT Analytics
- Working with log files
- About debugging NetBackup IT Analytics
- Turn on debugging
- Database logging
- Portal and data collector log files - reduce logging
- Database SCON logging - reduce logging
- Refreshing the database SCON log
- Logging user activity in audit.log
- Logging only what a user deletes
- Logging all user activity
- Data collector log files
- Data collector log file organization
- Data collector log file naming conventions
- General data collector log files
- Find the event / meta collector ID
- Portal log files
- Database log files
- Installation / Upgrade log files
- Defining report metrics
- SNMP trap alerting
- SSL certificate configuration
- SSL certificate configuration
- SSL implementation overview
- Obtain an SSL certificate
- Update the web server configuration to enable SSL
- Configure virtual hosts for portal and / or data collection SSL
- Enable / Disable SSL for a Data Collector
- Enable / Disable SSL for emailed reports
- Test and troubleshoot SSL configurations
- Create a self-signed SSL certificate
- Configure the Data Collector to trust the certificate
- Keystore file locations on the Data Collector server
- Import a certificate into the Data Collector Java keystore
- Keystore on the portal server
- Add a virtual interface to a Linux server
- Add a virtual / secondary IP address on Windows
- Portal properties: Format and portal customizations
- Introduction
- Configuring global default inventory object selection
- Restricting user IDs to single sessions
- Customizing date format in the report scope selector
- Customizing the maximum number of lines for exported reports
- Customizing the total label display in tabular reports
- Customizing the host management page size
- Customizing the path and directory for File Analytics database
- Configuring badge expiration
- Configuring the maximum cache size in memory
- Configuring the cache time for reports
- Configuring LDAP to use active directory (AD) for user group privileges
- Data retention periods for SDK database objects
- Data retention periods for SDK database objects
- Data aggregation
- Find the domain ID and database table names
- Retention period update for SDK user-defined objects example
- SDK user-defined database objects
- Capacity: default retention for basic database tables
- Capacity: default retention for EMC Symmetrix enhanced performance
- Capacity: Default retention for EMC XtremIO
- Capacity: Default retention for Dell EMC Elastic Cloud Storage (ECS)
- Capacity: Default retention for Windows file server
- Capacity: Default retention for Pure Storage FlashArray
- Cloud: Default retention for Amazon Web Services (AWS)
- Cloud: Default retention for Microsoft Azure
- Cloud: Default retention for OpenStack Ceilometer
- Configure multi-tenancy data purging retention periods
- Troubleshooting
- Appendix A. Kerberos based proxy user's authentication in Oracle
- Appendix B. Configure TLS-enabled Oracle database on NetBackup IT Analytics Portal and data receiver
- About Transport Layer Security (TLS)
- TLS in Oracle environment
- Configure TLS in Oracle with NetBackup IT Analytics on Linux in split architecture
- Configure TLS in Oracle with NetBackup IT Analytics on Linux in non-split architecture
- Configure TLS in Oracle with NetBackup IT Analytics on Windows in split architecture
- Configure TLS in Oracle with NetBackup IT Analytics on Windows in non-split architecture
- Configure TLS in user environment
- Appendix C. NetBackup IT Analytics for NetBackup on Kubernetes and appliances
Merge duplicate hosts
To merge duplicate hosts, you must have a local CSV copy of the Duplicate Host Analysis report. This report serves as the input to the duplicate host merge script.
Follow these recommendations before you merge duplicate hosts:
- Carefully set the report scope and generate the Duplicate Host Analysis report, since its CSV export serves as the input for the host merge script.
- Use a copy of the original CSV export as the input for the merge script. The original CSV can serve as a reference in the future.
- Since the host merge process is irreversible, it must be executed by an administrator with comprehensive knowledge of backup solutions.
- Back up the database before performing the host merge, since the process is irreversible.
Since merging duplicate hosts is an advanced process, make sure you have followed all of the recommendations above.
Merging duplicate hosts is performed in the following order:
- Generate the Duplicate Host Analysis report on the NetBackup IT Analytics Portal and export it in CSV format.
- Edit the CSV copy of the Duplicate Host Analysis report for the host merge script.
- Run the host merge script using the CSV as input.
- Access the Duplicate Host Analysis report from the Reports tab > System Administration reports.
- Click the Duplicate Host Analysis report name. Use the descriptions in the table below to set the required report scope and generate the report.
| Field name | Description |
| --- | --- |
| Host Group | Allows you to select the host groups, backup servers, or hosts for the report scope. Your selection narrows the scope and helps find duplicates more efficiently by targeting specific host groups or host names. |
| Find Duplicates Using: Host Name / Display Name | Choose between Host Name (the default) and Display Name. Both searches are case-sensitive. For Host Name, the system compares the internal host names to find duplicates; this is the default criterion in the legacy host merge option. For Display Name, the system uses the display or external names of the hosts to find duplicates. |
| Host Type for the Duplicate Host: Clients Only / All | Clients Only finds duplicates only for hosts that are identified as Clients (hosts backed up by any backup system). All detects duplicates for all types of hosts. |
| Surviving Host Selection Criteria: Highest Job Count / Most Recently Updated | Specifies the criteria used to select the surviving host among the duplicates when performing a host merge. Highest Job Count selects the host with the most associated jobs as the surviving host; this is the default criterion of the legacy host merge option, as a higher job count suggests that the host has more data associated with it. Most Recently Updated selects the most recently updated host as the surviving host; use this option when the duplicate hosts found are no longer actively collecting new data, as it helps retain the most current host. |
| Cascade into sub-groups | The scope selector default is to cascade to all child sub-groups when generating the report. If you prefer to report only on the host group you selected, uncheck Cascade into sub-groups. |
| Filter by Common Attributes | Select this checkbox to apply the selected attribute values with "AND" logic, so the report displays only the results at the intersection of the selected criteria. If this checkbox is not selected, the report applies the attributes with "OR" logic. For example, if you select the attribute values Campbell, Engineering, and Cost Center 1 and select Filter by Common Attributes, the report displays only the results that contain all three attribute values. If you do not select Filter by Common Attributes, the report displays all results with the attributes Campbell, Engineering, or Cost Center 1. |
| Apply Attributes to Backup Servers | Select this checkbox to apply the attributes only to the backup servers instead of the hosts. |
- After generating the report, export it in CSV format to your system.
- Create a copy of the CSV report and prepare the copy for the host merge script as described in the next step.
The Duplicate Host Analysis report displays one row for each suspected duplicate pair. If a host has multiple potential duplicates, the report displays one row per pair. For example, if host A has three potential duplicates, the report displays three rows, one for each duplicate.
Update the values of the following columns in the CSV copy as suggested below:
- Surviving Host: The default value of this column is Main, which indicates that the duplicates are merged into the Main host. To change the surviving host, change the value to Duplicate, so that the hosts are merged into the Duplicate host instead. Main and Duplicate are the only acceptable values in this column.
- Is Duplicate Host's Merge supported: This column supports only Yes and No as values. Delete all rows containing the value No from the report CSV that you plan to use as input for the host merge process.
Make no modifications other than the above to the report CSV that you plan to use for the host merge process. Your report CSV is now ready to serve as input for the host merge script, as illustrated in the example below.
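For illustration only, an edited CSV copy might look like the following excerpt. The host names and the column subset shown here are hypothetical; your export contains the full set of columns produced by the Duplicate Host Analysis report, and only the edits described above should be made:

```
Main Host,Duplicate Host,Surviving Host,Is Duplicate Host's Merge supported
hostA.example.com,hosta,Main,Yes
dbserver01,dbserver01.example.com,Duplicate,Yes
```

In this sketch, the first pair merges the duplicate into the Main host (the default), while the second pair keeps the Duplicate host as the survivor. Any row whose Is Duplicate Host's Merge supported value was No has already been deleted.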
The host merge script can perform a pre-assessment, during which it evaluates the CSV for errors and suggests corrections before proceeding further. Ensure that the pre-assessment succeeds before you proceed to merge the hosts; any error in the report CSV causes the script to abort the process. When you run the script, you must provide the report CSV path along with the file name, the log file path, and the log file name.
Caution:
As the host merge process is irreversible, you must back up your database and follow all the recommendations above before you proceed. One illustrative way to take such a backup is sketched below.
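This guide's Oracle database backup sections describe the supported backup procedures. Purely as a minimal sketch of one approach, an Oracle Data Pump export of the portal schema might look like the following; the connect string, directory object, and file names are assumptions for illustration:

```
# Hypothetical Data Pump export of the portal schema before a host merge.
# The database alias (scdb), directory object, and file names are assumptions;
# follow the "Oracle database: Export backups" section for the supported method.
expdp system@scdb schemas=portal directory=DATA_PUMP_DIR \
      dumpfile=portal_premerge.dmp logfile=portal_premerge.log
```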
To merge duplicate hosts:
- Run the host_merge.sql script from the ../database/tools directory. You can run the script from SQL*Plus or SQL Developer as the portal user or an equivalent user that has access to all the schema tables.
- Enter the following details when requested by the script:
  - Enter 1 or 2: Enter 1 to run the pre-assessment when you run the script for the first time. Enter 2 once the pre-assessment is successful.
  - Enter Duplicate Host Analysis CSV file name with full path: Enter the report CSV file path, including the file name.
  - Enter log file path: Enter the location of the log file (without the file name).
  - Enter log file name: Enter the name of the log file.
After the pre-assessment is successful, repeat this step with option 2 to complete the host merge. A sample session is sketched below.
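As a minimal sketch, a SQL*Plus session on a Linux Portal might look like the following. The connect string, install path (/opt/aptare), and file paths are assumptions for illustration; substitute the values for your environment:

```
$ sqlplus portal@//localhost:1521/scdb
SQL> @/opt/aptare/database/tools/host_merge.sql
Enter 1 or 2: 1
Enter Duplicate Host Analysis CSV file name with full path: /tmp/duplicate_host_analysis_copy.csv
Enter log file path: /tmp
Enter log file name: host_merge.log
SQL> -- Review the log; after a clean pre-assessment, rerun with option 2 to merge:
SQL> @/opt/aptare/database/tools/host_merge.sql
Enter 1 or 2: 2
```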