Veritas NetBackup™ for OpenStack Administrator's Guide
Last Published: 2021-06-07
Product(s): NetBackup (9.1)
- Introduction
- Deploying NetBackup for OpenStack
- Requirements
- NetBackup for OpenStack network considerations
- Existing endpoints in OpenStack
- OpenStack endpoints required by NetBackup for OpenStack
- Recommendation: Provide access to all OpenStack Endpoint types
- Backup target access required by NetBackup for OpenStack
- Example of a typical NetBackup for OpenStack network integration
- Other examples of NetBackup for OpenStack network integrations
- Preparing the installation
- Spinning up the NetBackup for OpenStack VM
- Installing NetBackup for OpenStack Components
- Installing on RHOSP
- 1] Prepare for deployment
- 2] Upload NetBackup for OpenStack puppet module
- 3] Update overcloud roles data file to include NetBackup for OpenStack services
- 4] Prepare NetBackup for OpenStack container images
- 5] Provide environment details in nbos_env.yaml
- 6] Deploy overcloud with NetBackup OpenStack environment
- 7] Verify deployment
- 8] Additional Steps on NetBackup for OpenStack Appliance
- 9] Troubleshooting for overcloud deployment failures
- Configuring NetBackup for OpenStack
- Post Installation Health-Check
- Uninstalling from RHOSP
- Clean NetBackup for OpenStack Datamover API service
- Clean NetBackup for OpenStack Datamover Service
- Clean NetBackup for OpenStack haproxy resources
- Clean NetBackup for OpenStack Keystone resources
- Clean NetBackup for OpenStack database resources
- Revert overcloud deploy command
- Revert back to original RHOSP Horizon container
- Destroy the NetBackup for OpenStack VM Cluster
- Install workloadmgr CLI client
- Configuring NetBackup OpenStack Appliance
- Configuring NetBackup Master Server
- NetBackup for OpenStack policies
- Performing backups and restores of OpenStack
- About snapshots
- List of snapshots
- Creating a snapshot
- Snapshot overview
- Delete snapshots
- Snapshot Cancel
- About restores
- List of Restores
- Restores overview
- Delete a Restore
- Cancel a Restore
- One-Click Restore
- Selective Restore
- Inplace Restore
- Required restore.json for CLI
- About file search
- Navigating to the file search tab in Horizon
- Configuring and starting a file search in Horizon
- Start the File Search and retrieve the results in Horizon
- Doing a CLI File Search
- About snapshot mount
- Create a File Recovery Manager Instance
- Mounting a snapshot
- Accessing the File Recovery Manager
- Identifying mounted snapshots
- Unmounting a snapshot
- About schedulers
- Disable a schedule
- Enable a schedule
- Modify a schedule
- About email notifications
- Requirements to activate email Notifications
- Activate/Deactivate the email Notifications
- Performing Backup Administration tasks
- NBOS Backup Admin Area
- Policy Attributes
- Policy Quotas
- Managing Trusts
- Policy import and migration
- Disaster Recovery
- Example runbook for disaster recovery using NFS
- Scenario
- Prerequisites for the disaster recovery process
- Disaster recovery of a single policy
- Copy the policy directories to the configured NFS Volume
- Make the Mount-Paths available
- Reassign the policy
- Add admin-user to required domains and projects
- Discover orphaned policies from NFS-Storage of Target Cloud
- List available projects on Target Cloud in the Target Domain
- List available users on the Target Cloud in the Target Project that have the right backup trustee role
- Reassign the policy to the target project
- Verify that the policy is available at the desired target_project
- Restore the policy
- Clean up
- Disaster recovery of a complete cloud
- Reconfigure the Target NetBackup for OpenStack installation
- Make the Mount-Paths available
- Reassign the policy
- Add admin-user to required domains and projects
- Discover orphaned policies from NFS-Storage of Target Cloud
- List available projects on Target Cloud in the Target Domain
- List available users on the Target Cloud in the Target Project that have the right backup trustee role
- Reassign the policy to the target project
- Verify that the policy is available at the desired target_project
- Restore the policy
- Reconfigure the Target NetBackup for OpenStack installation back to the original one
- Clean up
- Troubleshooting
- Index
On the NetBackup for OpenStack Cluster
wlm-workloads
This service runs and is active on every NetBackup for OpenStack node.
[root@Upstream ~]# systemctl status wlm-workloads
● wlm-workloads.service - workloadmanager workloads service
   Loaded: loaded (/etc/systemd/system/wlm-workloads.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-06-10 13:42:42 UTC; 1 weeks 4 days ago
 Main PID: 12779 (workloadmgr-wor)
    Tasks: 17
   CGroup: /system.slice/wlm-workloads.service
           ├─12779 /home/stack/myansible/bin/python /home/stack/myansible/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
           ├─12982 /home/stack/myansible/bin/python /home/stack/myansible/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
           ├─12983 /home/stack/myansible/bin/python /home/stack/myansible/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
           ├─12984 /home/stack/myansible/bin/python /home/stack/myansible/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
           [...]
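If you only need a quick confirmation that the worker processes are up, counting them is usually enough. The following is a minimal sketch that simply matches the workloadmgr-workloads command line shown above; a result of 0 means no worker process is running and the wlm-workloads service should be investigated.
[root@Upstream ~]# pgrep -f -c workloadmgr-workloads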
wlm-api
This service runs and is active on every NetBackup for OpenStack node.
[root@Upstream ~]# systemctl status wlm-api
● wlm-api.service - Cluster Controlled wlm-api
   Loaded: loaded (/etc/systemd/system/wlm-api.service; disabled; vendor preset: disabled)
  Drop-In: /run/systemd/system/wlm-api.service.d
           └─50-pacemaker.conf
   Active: active (running) since Thu 2020-04-16 22:30:11 UTC; 2 months 5 days ago
 Main PID: 11815 (workloadmgr-api)
    Tasks: 1
   CGroup: /system.slice/wlm-api.service
           └─11815 /home/stack/myansible/bin/python /home/stack/myansible/bin/workloadmgr-api --config-file=/etc/workloadmgr/workloadmgr.conf
wlm-scheduler
This service runs and is active on every NetBackup for OpenStack node.
[root@Upstream ~]# systemctl status wlm-scheduler
● wlm-scheduler.service - Cluster Controlled wlm-scheduler
   Loaded: loaded (/etc/systemd/system/wlm-scheduler.service; disabled; vendor preset: disabled)
  Drop-In: /run/systemd/system/wlm-scheduler.service.d
           └─50-pacemaker.conf
   Active: active (running) since Thu 2020-04-02 13:49:22 UTC; 2 months 20 days ago
 Main PID: 29439 (workloadmgr-sch)
    Tasks: 1
   CGroup: /system.slice/wlm-scheduler.service
           └─29439 /home/stack/myansible/bin/python /home/stack/myansible/bin/workloadmgr-scheduler --config-file=/etc/workloadmgr/workloadmgr.conf
wlm-cron
This service is controlled by pacemaker and runs only on the master node.
[root@Upstream ~]# systemctl status wlm-cron
● wlm-cron.service - Cluster Controlled wlm-cron
   Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
  Drop-In: /run/systemd/system/wlm-cron.service.d
           └─50-pacemaker.conf
   Active: active (running) since Wed 2021-01-27 19:59:26 UTC; 6 days ago
 Main PID: 23071 (workloadmgr-cro)
   CGroup: /system.slice/wlm-cron.service
           ├─23071 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
           └─23248 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf

Feb 03 19:28:43 nbosvm1-ansible-ussuri-ubuntu18-vagrant workloadmgr-cron[23071]: ● wlm-cron.service - Cluster Controlled wlm-cron
Feb 03 19:28:43 nbosvm1-ansible-ussuri-ubuntu18-vagrant workloadmgr-cron[23071]:    Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
Feb 03 19:28:43 nbosvm1-ansible-ussuri-ubuntu18-vagrant workloadmgr-cron[23071]:   Drop-In: /run/systemd/system/wlm-cron.service.d
Feb 03 19:28:43 nbosvm1-ansible-ussuri-ubuntu18-vagrant workloadmgr-cron[23071]:            └─50-pacemaker.conf
Feb 03 19:28:43 nbosvm1-ansible-ussuri-ubuntu18-vagrant workloadmgr-cron[23071]:    Active: active (running) since Wed 2021-01-27 19:59:26 UTC; 6 days ago
Feb 03 19:28:43 nbosvm1-ansible-ussuri-ubuntu18-vagrant workloadmgr-cron[23071]:  Main PID: 23071 (workloadmgr-cro)
Feb 03 19:28:43 nbosvm1-ansible-ussuri-ubuntu18-vagrant workloadmgr-cron[23071]:    CGroup: /system.slice/wlm-cron.service
Feb 03 19:28:43 nbosvm1-ansible-ussuri-ubuntu18-vagrant workloadmgr-cron[23071]:            ├─23071 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
Feb 03 19:28:43 nbosvm1-ansible-ussuri-ubuntu18-vagrant workloadmgr-cron[23071]:            ├─23248 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
Feb 03 19:28:43 nbosvm1-ansible-ussuri-ubuntu18-vagrant workloadmgr-cron[23071]:            └─27145 /usr/bin/systemctl status wlm-cron
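To spot-check all four services in one pass rather than reading each status individually, a short loop over the service names above can be used. This is a minimal sketch; on nodes other than the master, wlm-cron is expected to report inactive because pacemaker starts it on the master node only.
[root@Upstream ~]# for svc in wlm-workloads wlm-api wlm-scheduler wlm-cron; do echo -n "$svc: "; systemctl is-active $svc; done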
Pacemaker Cluster Status
The pacemaker cluster controls and watches the VIP on the NetBackup for OpenStack Cluster. It also controls on which node the wlm-api and wlm-scheduler services run.
[root@Upstream ~]# pcs status
Cluster name: NetBackup for OpenStack

WARNINGS:
Corosync and pacemaker node names do not match (IPs used in setup?)

Stack: corosync
Current DC: nbosvm1-ansible-ussuri-ubuntu18-vagrant (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
Last updated: Wed Feb 3 19:20:02 2021
Last change: Wed Jan 27 20:00:12 2021 by root via crm_resource on nbosvm1-ansible-ussuri-ubuntu18-vagrant

1 node configured
6 resource instances configured

Online: [ nbosvm1-ansible-ussuri-ubuntu18-vagrant ]

Full list of resources:

 virtual_ip          (ocf::heartbeat:IPaddr2):  Started nbosvm1-ansible-ussuri-ubuntu18-vagrant
 virtual_ip_public   (ocf::heartbeat:IPaddr2):  Started nbosvm1-ansible-ussuri-ubuntu18-vagrant
 virtual_ip_admin    (ocf::heartbeat:IPaddr2):  Started nbosvm1-ansible-ussuri-ubuntu18-vagrant
 virtual_ip_internal (ocf::heartbeat:IPaddr2):  Started nbosvm1-ansible-ussuri-ubuntu18-vagrant
 wlm-cron            (systemd:wlm-cron):        Started nbosvm1-ansible-ussuri-ubuntu18-vagrant
 Clone Set: lb_nginx-clone [lb_nginx]
     Started: [ nbosvm1-ansible-ussuri-ubuntu18-vagrant ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
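For a scripted health-check, the same information can be reduced to two commands: one confirming that the cluster daemons are active and one confirming that the virtual IPs and the wlm-cron resource are started. This is a minimal sketch based on the pcs output shown above.
[root@Upstream ~]# for d in corosync pacemaker pcsd; do echo -n "$d: "; systemctl is-active $d; done
[root@Upstream ~]# pcs status resources | grep -E 'virtual_ip|wlm-cron'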
Mount availability
The NetBackup for OpenStack Cluster needs access to the Backup Target and must have the correct mount available at all times.
[root@Upstream ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               3.8G     0  3.8G   0% /dev
tmpfs                  3.8G   38M  3.8G   1% /dev/shm
tmpfs                  3.8G  427M  3.4G  12% /run
tmpfs                  3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/vda1               40G  8.8G   32G  22% /
tmpfs                  773M     0  773M   0% /run/user/996
tmpfs                  773M     0  773M   0% /run/user/0
10.10.2.20:/upstream  1008G  704G  254G  74% /var/NetBackupOpenStack-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=
10.10.2.20:/upstream2  483G   22G  462G   5% /var/NetBackupOpenStack-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0y
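To verify that the Backup Target is not only visible in df but also actually mounted and writable, list the mounts and perform a small write test. This is a minimal sketch; the mount path is taken from the example above and differs per installation, and nbos_healthcheck is just a placeholder name for a temporary test file.
[root@Upstream ~]# mount | grep /var/NetBackupOpenStack-mounts
[root@Upstream ~]# touch /var/NetBackupOpenStack-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/nbos_healthcheck && rm /var/NetBackupOpenStack-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/nbos_healthcheck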