Veritas Flex Appliance Getting Started and Administration Guide
- Product overview
- Release notes
- Flex Appliance 2.1 new features, enhancements, and changes
- Flex Appliance 2.1.1 new features, enhancements, and changes
- Flex Appliance 2.1.2 new features, enhancements, and changes
- Flex Appliance 2.1.3 new features, enhancements, and changes
- Flex Appliance 2.1.4 new features, enhancements, and changes
- Supported upgrade paths to this release
- Operational notes
- Flex Appliance 2.1 release content
- Flex Appliance 2.1.1 release content
- Flex Appliance 2.1.2 release content
- Flex Appliance 2.1.3 release content
- Flex Appliance 2.1.4 release content
- Getting started
- Initial configuration guidelines and checklist
- Performing the initial configuration
- Adding a node
- Accessing and using the Flex Appliance Shell
- Accessing and using the Flex Appliance Console
- Managing the appliance from the Appliance Management Console
- Setting the date and time for appliance nodes
- Common tasks in Flex Appliance
- Managing network settings
- Managing users
- Overview of the Flex Appliance default users
- Changing the password policy
- Managing Flex Appliance Console users and tenants
- Adding a tenant
- Editing a tenant
- Removing a tenant
- Adding a local user to the Flex Appliance Console
- Connecting a remote user domain to the Flex Appliance Console
- Importing a remote user or user group to the Flex Appliance Console
- Editing a remote user domain in the Flex Appliance Console
- Changing a local user password in the Flex Appliance Console
- Expiring local user passwords in the Flex Appliance Console
- Removing users from the Flex Appliance Console
- Managing user authentication with smart cards or digital certificates
- Changing the hostadmin user password in the Flex Appliance Shell
- Changing the sysadmin user password in the Veritas Remote Management Interface
- Using Flex Appliance
- Managing the repository
- Creating application instances
- Managing application instances from Flex Appliance and NetBackup
- Managing application instances from Flex Appliance
- Upgrading application instances
- Updating an application instance to a newer revision
- About Flex Appliance upgrades and updates
- Appliance security
- Monitoring the appliance
- Reconfiguring the appliance
- Troubleshooting guidelines
Warnings and considerations for instance rollbacks
If you need to roll back an instance upgrade, review the following information before you begin.
Instances with MSDP storage or Cloud Catalyst storage do not support rollback. If you experience an upgrade failure that you cannot resolve, contact Veritas Technical Support for assistance.
Attempt a rollback of other instances only as a last resort, if the upgrade caused serious problems that you cannot otherwise resolve.
A rollback restores the instance to a pre-upgrade checkpoint and reverses all operations that were performed after the upgrade, including any backups that were created. For this reason, while the instance upgrade is in a pending state, keep backup operations to a minimum and run them for testing purposes only. Do not perform production operations until you commit or roll back the upgrade.
You cannot resize the instance storage until you commit or roll back the upgrade.
If you upgrade and roll back an application instance that has a large amount of configured storage, the rollback can take a long time to complete. For example, an instance with 1 petabyte of storage can take a little over an hour to roll back.
If a rollback is performed, there is a risk of data loss and data leakage for all operations that were performed after the upgrade. The longer the system runs before a rollback, the greater the chance of data loss and leakage. The loss is not limited to the backup data for jobs that ran before the rollback; future backups can be affected as well.
The following inconsistencies can occur if you decide to roll back:
Incremental or transaction log-based database backups:
If transaction logs were truncated after the upgrade and before the rollback, the database may not be protected.
To resolve this issue, perform a full database backup after the rollback.
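The failure mode above can be sketched as a small simulation (plain Python, not database- or NetBackup-specific code; the log names are illustrative). The database truncates its transaction logs as soon as a log backup succeeds, but the rollback discards that log backup, so the truncated logs survive nowhere:

```python
# Simulation of a broken transaction-log backup chain after a rollback.
db_logs = ["log1", "log2"]   # logs accumulated in the database
backed_up_logs = []          # logs held by the backup server

# A log backup taken after the upgrade: logs are copied, then truncated.
backed_up_logs.extend(db_logs)
db_logs.clear()

# Rollback: the server discards the post-upgrade log backup,
# but the database has already truncated those logs.
backed_up_logs.clear()

print(db_logs, backed_up_logs)  # [] [] -> log1/log2 exist nowhere
```

Because the logs now exist in neither place, point-in-time recovery through that interval is impossible, which is why a full database backup is required after the rollback.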
Incremental Windows file system backups:
If the archive bit is used for incremental backup, it is reset upon completion of an incremental backup. If a rollback occurs, the incremental backup is lost, and subsequent incremental backups do not detect that these files changed. The files are not backed up again until a full backup is performed.
To resolve this issue, perform a full backup after the rollback. Any files that were modified in the lost incremental backup and then deleted before the next full backup are lost.
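The archive-bit problem can be sketched as a small simulation (plain Python, not NetBackup code; the file names are illustrative). The key point is that the archive bit lives on the client file system, which is not rolled back along with the server:

```python
# Simulation of why a rollback breaks archive-bit-based incrementals.
def incremental_backup(files):
    """Back up files whose archive bit is set, then clear the bit."""
    backed_up = [name for name, f in files.items() if f["archive_bit"]]
    for name in backed_up:
        files[name]["archive_bit"] = False
    return backed_up

files = {"a.txt": {"archive_bit": True}, "b.txt": {"archive_bit": True}}

incremental_backup(files)                  # backs up a.txt and b.txt
# --- server upgrade happens here; checkpoint taken ---
files["a.txt"]["archive_bit"] = True       # a.txt modified after the upgrade
post_upgrade = incremental_backup(files)   # backs up a.txt; bit cleared
# --- rollback: the post-upgrade backup is discarded on the server,
# but the client's archive bits are untouched ---
after_rollback = incremental_backup(files)
print(after_rollback)  # [] -> a.txt's change is unprotected until a full backup
```

The final incremental finds nothing to back up even though the server no longer holds the post-upgrade copy of a.txt, which is why only a full backup closes the gap.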
Backup expiration catalog and storage inconsistency:
If backup images expire and cleanup begins after the upgrade and before the rollback, backup data may be removed from storage units external to the instance. For example, this behavior can happen with an MSDP media server, cloud storage, OST storage, or tape storage. When a rollback of the primary server catalog occurs, the catalog indicates that there is a valid backup even though the data was removed from storage. This inconsistency results in backup data that cannot be restored, duplicated, or replicated. It may also affect the scheduling of subsequent backups (delaying backups or running incremental backups instead of full backups).
Orphaned backups on storage:
If backup images are created on external storage after the upgrade and before the rollback of the primary server, the backup images exist on storage but not in the NetBackup catalog. This discrepancy results in situations where the backups are never removed from storage (data leakage).
To resolve this issue, import the images from storage or use the consistency check tools.
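The two catalog/storage mismatches described above can be sketched as a small simulation (plain Python, not NetBackup code; the image IDs are illustrative). The rollback restores the catalog to its pre-upgrade checkpoint, but external storage keeps whatever state it reached afterward:

```python
# Simulation of catalog/storage divergence after a primary server rollback.
checkpoint_catalog = {"img-001"}       # catalog state at upgrade time
catalog = set(checkpoint_catalog)
storage = {"img-001"}                  # external storage (not rolled back)

# After the upgrade: img-001 expires and is cleaned up from storage,
# and a new backup img-002 is written to storage.
catalog.discard("img-001"); storage.discard("img-001")
catalog.add("img-002");     storage.add("img-002")

# Rollback restores the catalog checkpoint only.
catalog = set(checkpoint_catalog)

unrestorable = catalog - storage   # cataloged, but the data is gone
leaked = storage - catalog         # on storage, but unknown to the catalog
print(unrestorable)  # {'img-001'} -> restore/duplicate/replicate will fail
print(leaked)        # {'img-002'} -> never expires: data leakage
```

Importing the orphaned images or running a consistency check resolves only the second set; the first set is backup data that no longer exists on storage.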
Backup considerations if the instance is a media server:
Backups taken between the upgrade and the rollback are not restorable, even though NetBackup lists them in the catalog.
Unfinished storage lifecycle policy (SLP) jobs fail, causing inconsistencies between the NetBackup primary server and the storage.
If any backups were deleted after the upgrade and before the rollback, those backups reappear on storage as leaked data.