NetBackup Flex Scale network connectivity prerequisites

Article: 100053432
Last Published: 2022-08-03
Product(s): Appliances

Description

There are several network configuration checks to complete before you configure a NetBackup Flex Scale cluster.
This pre-installation checklist walks through them.

Prerequisites

  1. The management network must be configured on all the nodes that form the cluster. Syntax to configure the management interface:
    nbfs-2.1 > set network interface ip=<IPAddress> netmask=<Netmask> gateway=<Gateway>
  2. Have the configuration YAML file ready before proceeding further.

Assumptions

  1. Most of the following pre-check commands require root privileges, so you may have to elevate to root. Note that when you elevate for the first time, you must change the root password. When you do so, set the same password on all the nodes: if the root password is not the same on all the nodes that form the cluster, the cluster configuration will fail.
  2. Most of the following pre-check recommendations assume that ping is allowed on the site. If ping is not allowed, fall back to 'arping' for any of the following commands that refer to ping. Its syntax is:

# arping -I <interface> <destination>
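The ping-or-arping choice can be wrapped in a small helper so the rest of the checklist reads the same either way. This is a hypothetical convenience sketch (the function name and flags are our own, not part of the appliance); note that arping only works for destinations on the same Layer 2 segment:

```shell
# reachable: try ping first; if ping fails or is blocked, fall back to arping.
# The caller supplies the interface and the destination.
reachable() {
    iface="$1"
    dest="$2"
    ping -c 2 -W 2 -I "$iface" "$dest" >/dev/null 2>&1 \
        || arping -c 2 -I "$iface" "$dest" >/dev/null 2>&1
}

# Usage: reachable eth1 1.2.3.4 && echo "1.2.3.4 is reachable"
```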

Pre-installation checklist

1) Management network connectivity

Check management network connectivity between nodes.

     management_interface_ip:
      - 1.2.3.4
      - 1.2.3.5
      - 1.2.3.6
      - 1.2.3.7

     For the above configuration, check the connectivity from all nodes:

     # ping -I eth1 1.2.3.4
     # ping -I eth1 1.2.3.5
     # ping -I eth1 1.2.3.6
     # ping -I eth1 1.2.3.7

     Also check the routing table with the 'ip route show' command to make sure the routes are set properly.
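The per-IP checks above can be run in a loop so a single command reports every node. A minimal sketch, assuming eth1 is the management interface and the IPs come from the management_interface_ip list in the YAML:

```shell
# check_mgmt_ips: ping every supplied IP through the given interface and
# report each result; returns non-zero if any node is unreachable.
check_mgmt_ips() {
    iface="$1"
    shift
    fails=0
    for ip in "$@"; do
        if ping -c 2 -W 2 -I "$iface" "$ip" >/dev/null 2>&1; then
            echo "OK   $ip"
        else
            echo "FAIL $ip"
            fails=$((fails + 1))
        fi
    done
    return "$fails"
}

# Run this from every node in the cluster:
# check_mgmt_ips eth1 1.2.3.4 1.2.3.5 1.2.3.6 1.2.3.7
```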

2) Private network connectivity  

Check that the private network between the nodes is accessible. Log in to any node and execute the following curl command.

# curl -X PUT 'http://localhost:7080/hostagent/v1/api/execute?sync=true' -d '{ "name" : "node_discovery", "hosts" : "localhost", "roles" : ["node_discovery"] }'

This should discover all the nodes on the private network that will form the cluster. If not all nodes are listed, there is a problem with private network connectivity: check that the private network is configured properly and that the 172/169 IP ranges are allowed on that network.

A sample output with four nodes on the private network is listed below for reference. Note that in this example the private network interface is eth6.

                              [
                                {
                                    "domain": "local",
                                    "hostname": "VTAS9031105",
                                    "ip": "172.1.2.3",
                                    "ipv6": "fd00::2"
                                },
                                {
                                    "domain": "local",
                                    "hostname": "VTAS9031104",
                                    "ip": "172.1.2.4",
                                    "ipv6": "fd00::3"
                                },
                                {
                                    "domain": "local",
                                    "hostname": "VTAS9031103",
                                    "ip": "172.1.2.5",
                                    "ipv6": "fd00::4"
                                },
                                {
                                    "domain": "local",
                                    "hostname": "VTAS9031102",
                                    "ip": "172.1.2.6",
                                    "ipv6": "fd00::1"
                                }
                            ]
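A quick way to verify that the expected number of nodes came back is to count the "hostname" entries in the saved curl output. This is a hypothetical sketch (the helper name and file path are our own); it assumes the output is pretty-printed with one "hostname" key per line, as in the sample above:

```shell
# count_discovered: count node entries in the saved node_discovery output.
# $1 is a file containing the JSON array returned by the curl command above.
# Counts lines matching "hostname", so it assumes one entry per line.
count_discovered() {
    grep -c '"hostname"' "$1"
}

# Example, assuming the curl output was saved to /tmp/nodes.json:
# [ "$(count_discovered /tmp/nodes.json)" -eq 4 ] && echo "all 4 nodes discovered"
```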

Another useful command to validate node discovery is 'avahi-browse':

site1-01:~ # avahi-browse -d local _discovery._sub._http._tcp -rt
+   eth4 IPv6 VTAS9031103                                   Web Site             local
+   eth4 IPv6 VTAS9031104                                   Web Site             local
+   eth4 IPv6 VTAS9031102                                   Web Site             local
+   eth4 IPv6 VTAS9031105                                   Web Site             local
+   eth4 IPv4 VTAS9031103                                   Web Site             local
+   eth4 IPv4 VTAS9031104                                   Web Site             local
+   eth4 IPv4 VTAS9031102                                   Web Site             local
+   eth4 IPv4 VTAS9031105                                   Web Site             local
=   eth4 IPv6 VTAS9031102                                   Web Site             local
   hostname = [VTAS9031102.local]
   address = [fd00::1]
   port = [12345]
   txt = []
=   eth4 IPv4 VTAS9031102                                   Web Site             local
   hostname = [VTAS9031102.local]
   address = [172.1.2.3]
   port = [12345]
   txt = []
=   eth4 IPv6 VTAS9031104                                   Web Site             local
   hostname = [VTAS9031104.local]
   address = [fd00::3]
   port = [12345]
   txt = []
=   eth4 IPv4 VTAS9031104                                   Web Site             local
   hostname = [VTAS9031104.local]
   address = [172.1.2.4]
   port = [12345]
   txt = []
=   eth4 IPv6 VTAS9031103                                   Web Site             local
   hostname = [VTAS9031103.local]
   address = [fd00::4]
   port = [12345]
   txt = []
=   eth4 IPv4 VTAS9031103                                   Web Site             local
   hostname = [VTAS9031103.local]
   address = [172.1.2.5]
   port = [12345]
   txt = []
=   eth4 IPv6 VTAS9031105                                   Web Site             local
   hostname = [VTAS9031105.local]
   address = [fd00::2]
   port = [12345]
   txt = []
=   eth4 IPv4 VTAS9031105                                   Web Site             local
   hostname = [VTAS9031105.local]
   address = [172.1.2.6]
   port = [12345]
   txt = []
site1-01:~ #

3) DNS reachability from node

Check that all nodes can reach the DNS server to resolve names. DNS may need to be configured manually; edit the /etc/resolv.conf file and configure DNS. Revert any configuration changes before starting the cluster configuration.

# ping -I eth1 z.z.z.z (check DNS server reachability via eth1)

# ping -I eth1 master.server.com (check master server reachability via eth1)

# dig -b x.x.x.x +short master.server.com (check DNS resolution with eth1 as the source address)

Here:

eth1 IP: x.x.x.x
NetBackup master server hostname: master.server.com
DNS server: z.z.z.z
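To confirm which DNS servers a node is actually using, the nameserver entries in /etc/resolv.conf can be listed and compared with the z.z.z.z value expected in the configuration YAML. A minimal sketch (the helper name is our own):

```shell
# list_dns: print the nameserver addresses from a resolv.conf-style file
# (defaults to /etc/resolv.conf).
list_dns() {
    awk '/^nameserver/ { print $2 }' "${1:-/etc/resolv.conf}"
}

# Example: list_dns | grep -qx 'z.z.z.z' && echo "expected DNS server configured"
```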

4) Firewall and network ports requirements

-  If a firewall is configured, ensure that the firewall settings allow access to the services and ports used by NetBackup Flex Scale. Enable both inbound and outbound communication for these ports and services.

 

Protocol/Port   Direction   Interface             Purpose
TCP/22          OUT         Management IP         Sending product logs
UDP/53          OUT         Management, Data IP   DNS resolution
UDP/123         OUT         Management IP         NTP synchronization
TCP/UDP/389     OUT         Data IP               AD/LDAP access
TCP/443         IN, OUT     Management, Data IP   NetBackup Web UI, IA, Ushare, etc.
TCP/1556        IN, OUT     Data IP               NetBackup PBX
TCP/UDP/3269    OUT         Data IP               AD/LDAP SSL/TLS access
TCP/8443        IN          Management IP         Cluster configuration Web UI
TCP/8443        IN          Data IP               vCenter, vSphere, etc.
TCP/10102       IN, OUT     Data IP               NBU Media/Storage spad
TCP/10082       IN, OUT     Data IP               NBU Media/Storage spoold
TCP/13724       IN, OUT     Data IP               NBU vnetd
TCP/14161       IN          Management IP         Appliance Web UI

Optional (monitoring):
TCP/25          OUT         Management IP         Email notifications
TCP/162         OUT         Management IP         SNMP monitoring
TCP/443         OUT         Management IP         Call Home monitoring

Optional (catalog replication):
TCP/8199        IN, OUT     Data IP               Catalog replication
TCP/8989        IN, OUT     Data IP               Catalog replication
TCP/UDP/4145    IN, OUT     Data IP               Catalog replication
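Reachability of the TCP ports in the table can be spot-checked from a node with 'nc'. A hypothetical sketch, assuming nc (netcat) is available; a "closed" result may also mean the port is filtered by a firewall in between:

```shell
# check_port: report whether a TCP port on a host accepts connections
# (3-second timeout). Prints "open" or "closed" with the host:port tested.
check_port() {
    host="$1"
    port="$2"
    if nc -z -w 3 "$host" "$port" >/dev/null 2>&1; then
        echo "open   $host:$port"
    else
        echo "closed $host:$port"
    fi
}

# Example: verify the NetBackup PBX port on the master server:
# check_port master.server.com 1556
```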

5) NetBackup firewall port requirements

If a firewall is configured, then several NetBackup-specific TCP ports must be open through the firewall so that NetBackup hosts can communicate with each other.

Refer to Tech Article https://www.veritas.com/support/en_US/article.100002391 for further details.

If all of the above are configured correctly, proceed with the cluster configuration using the configuration YAML file.
