Veritas Access Appliance 8.3 Administrator's Guide
- Section I. Introducing Access Appliance
- Section II. Configuring Access Appliance
- Managing users
- Managing licenses
- Configuring the network
- Configuring authentication services
- Configuring user authentication using digital certificates or smart cards
- Section III. Managing Access Appliance storage
- Configuring storage
- Managing disks
- Access Appliance as an iSCSI target
- Section IV. Managing Access Appliance file access services
- Configuring the NFS server
- Setting up Kerberos authentication for NFS clients
- Using Access Appliance as a CIFS server
- About configuring CIFS for Active Directory (AD) domain mode
- About setting trusted domains
- About managing home directories
- About CIFS clustering modes
- About migrating CIFS shares and home directories
- Using Access Appliance as an Object Store server
- Configuring the S3 server using GUI
- Section V. Managing Access Appliance security
- Managing security
- Setting up FIPS mode
- Configuring STIG
- Setting the banner
- Setting the password policy
- Immutability in Access Appliance
- Deploying certificates on Access Appliance
- Single Sign-On (SSO)
- Configuring multifactor authentication
- Section VI. Monitoring and troubleshooting
- Monitoring the appliance
- Configuring event notifications and audit logs
- About alert management
- Appliance log files
- Section VII. Provisioning and managing Access Appliance file systems
- Creating and maintaining file systems
- Considerations for creating a file system
- About managing application I/O workloads using maximum IOPS settings
- Modifying a file system
- Managing a file system
- Section VIII. Provisioning and managing Access Appliance shares
- Creating shares for applications
- Creating and maintaining NFS shares
- About the NFS shares
- Creating and maintaining CIFS shares
- About the CIFS shares
- About managing CIFS shares for Enterprise Vault
- Integrating Access Appliance with Data Insight
- Section IX. Managing Access Appliance storage services
- Configuring continuous replication
- How Access Appliance continuous replication works
- Configuring a continuous replication job using the GUI
- Continuous replication failover and failback
- Using snapshots
- Using instant rollbacks
- Section X. Reference
Considerations for configuring a LACP bond
LACP (Link Aggregation Control Protocol) is a standardized protocol used to establish and maintain link aggregation between systems. What sets the LACP bonding mode apart from the other bonding modes supported by Linux is that the systems participating in LACP continuously exchange data units (LACPDUs) with each other to determine which links within the bond should actively be used to transmit and receive network traffic.
While the IEEE standard defines terminology specific to LACP, some vendors have chosen to use different terminology. In this section, LAG (link-aggregation group) and MLAG are used when describing link-aggregation on the switch side and bond is used when describing link-aggregation on the appliance side.
The following is a non-comprehensive list of equivalent vendor-proprietary terms:
- LAG: port-channel, trunk, multi-link trunking (MLT)
- MLAG: virtual port-channel (vPC), multi-chassis link-aggregation group (MCLAG), split multi-link trunking (SMLT)
Interfaces in an LACP bond are placed into either an active or a suspended state. In the active state, an interface collects and processes incoming network traffic and transmits outgoing network traffic. In the suspended state, an interface neither collects incoming network traffic nor transmits outgoing network traffic until the condition that caused the suspension is resolved.
For all bonded appliance interfaces to be considered active within an LACP bond, you must ensure the following:
The switch ports to which the bonded interfaces are connected must all be members of the same LAG. If the switch ports are not members of the same LAG, the appliance elects an interface, or a group of interfaces that are members of the same LAG, to be active while placing all other bonded interfaces in a suspended state.
The LAG must be configured to use LACP. If the LAG is not configured to use LACP, the appliance elects a single interface to be active while all other interfaces are placed into a suspended state.
A LAG cannot span multiple switches without one of the following:
- Configuration of MLAG between the switches
- Deployment of EVPN LAG multihoming in the environment
- Stacking of the switches using a vendor-proprietary technology
Note:
Check the documentation for the make and model of your switch to ensure that LACP functions properly with the switches in a stacked configuration.
Stacked switches, MLAG, and EVPN LAG multihoming allow multiple switches to function as a single logical switch from the perspective of LACP. When stacking, MLAG, or EVPN LAG multihoming is deployed and the LAG is properly configured, the LACP bond on the appliance places all the bonded interfaces into an active state. If the appliance's bond spans multiple switches without MLAG, EVPN LAG multihoming, or stacking configured on those switches, LACP on the appliance detects this and elects an interface, or a group of interfaces that are members of the same LAG and connected to the same switch, to be active while placing all other bonded interfaces into a suspended state. Even if the LAGs on the switches are configured to use the same LAG ID, LACP on the appliance identifies that the LACPDUs were sent from multiple different switches.
Although Access Appliance is configured as a cluster, Red Hat Linux does not have a mechanism to perform MLAG, so each appliance node must be treated as a unique end-system. For each bond created on the Access Appliance cluster, two LAGs must be created on the switch, and each LAG must be assigned a different LAG ID.
For example, when you create a two-interface bond using the Access CLI, a bond interface is created on each node. The two switch ports that connect to node_A should be assigned to a LAG with a LAG ID of X, and the two switch ports that connect to node_B should be assigned to a LAG with a LAG ID of Y. If all four switch ports are instead assigned the LAG ID of X, the switch or set of switches elects the set of switch ports connected to one of the appliance nodes and places it in an active state while the other ports are placed into a suspended state. This leads to network connectivity issues because the suspended interfaces are still physically up on the appliance, which allows VCS to assign an IP address to them. When that node then attempts to send traffic out of the bond, the application sends the traffic down the network stack, and the bonding driver drops it.
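The election in the example above can be sketched as a small Python model. This is an illustrative sketch only, not appliance or switch code, and the port and node names are hypothetical: a switch groups the member ports of a LAG by the partner system ID learned from LACPDUs, and because each appliance node advertises its own system ID, only one node's ports can remain active when all four ports share a single LAG ID.

```python
from collections import defaultdict

def lag_election(ports):
    """Simplified model of LAG member election on the switch side.

    ports maps a switch port name to the partner system ID seen in the
    LACPDUs arriving on that port. Only ports that share one partner
    system ID can aggregate; all other member ports are suspended.
    """
    by_partner = defaultdict(list)
    for port, partner_id in ports.items():
        by_partner[partner_id].append(port)
    # The switch keeps one end-system's ports active (modelled here as
    # the largest group) and suspends every other member port.
    winner = max(by_partner.values(), key=len)
    active = sorted(winner)
    suspended = sorted(p for p in ports if p not in winner)
    return active, suspended

# Misconfiguration from the example: all four ports share LAG ID X even
# though they connect to two different end-systems (node_A and node_B).
ports_lag_x = {"port1": "node_A", "port2": "node_A",
               "port3": "node_B", "port4": "node_B"}
active, suspended = lag_election(ports_lag_x)
# One node's pair of ports stays active; the other node's pair is
# suspended, which causes the dropped traffic described above.
```

With the recommended configuration (LAG ID X for node_A's ports, LAG ID Y for node_B's ports), each LAG sees a single partner system ID, the election is trivial, and all four interfaces remain active.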
While LACP does not perform load balancing, it does have a mechanism for selecting an outgoing physical interface. This mechanism calculates a hash based on selected attributes in the headers of the outgoing network traffic. LACP associates a hash with an interface, and all traffic matching that hash egresses the selected interface. It is important to note that you cannot control the outgoing interface; you can only influence the selection process. This mechanism is statically configured on each system and is not negotiated between the systems participating in LACP. In Linux, this mechanism is called the xmit_hash_policy.
The Access Appliance supports the following configurations of the xmit_hash_policy:
layer2: A hash is calculated based on the source and destination MAC addresses in the Ethernet header. This xmit_hash_policy provides decent results if communication occurs between the appliance and multiple systems on the same subnet.
Imbalances occur if traffic is not distributed among multiple systems on the same subnet or if the traffic must be routed through a router or gateway.
layer2+3: A hash is calculated based on the source and destination MAC and IP addresses in the Ethernet and IP headers. This xmit_hash_policy provides decent results if communication occurs between the appliance and multiple systems.
Imbalances occur if most of the traffic is transmitted between the appliance and a single system.
layer3+4: A hash is calculated based on the source and destination IP and port in the IP and TCP/UDP headers. A unique hash is produced for each socket established on the appliance.
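The difference between the policies can be illustrated with a short Python sketch. This is a simplified model, not the Linux bonding driver's actual arithmetic, and the addresses and ports are hypothetical; the point is only that the same header fields always produce the same hash, so a given flow is pinned to one interface.

```python
def slave_index(policy, n_slaves, src_mac, dst_mac,
                src_ip=0, dst_ip=0, src_port=0, dst_port=0):
    """Simplified model of xmit_hash_policy: XOR the header fields the
    policy selects, then map the hash onto one bonded interface."""
    if policy == "layer2":
        h = src_mac ^ dst_mac
    elif policy == "layer2+3":
        h = src_mac ^ dst_mac ^ src_ip ^ dst_ip
    elif policy == "layer3+4":
        h = src_ip ^ dst_ip ^ src_port ^ dst_port
    else:
        raise ValueError("unknown policy: " + policy)
    return h % n_slaves

# Two TCP connections from the same client to the appliance: identical
# MACs and IPs, but different source ports (values are hypothetical).
flow = dict(src_mac=0xAA, dst_mac=0xBB,
            src_ip=0x0A000001, dst_ip=0x0A000002)
a = slave_index("layer3+4", 2, **flow, src_port=50001, dst_port=2049)
b = slave_index("layer3+4", 2, **flow, src_port=50002, dst_port=2049)
# layer2 and layer2+3 map both connections to the same interface because
# the fields they hash are identical, while layer3+4 can spread the two
# sockets across both interfaces because the source ports differ.
```

In this model, a single large transfer between the appliance and one client uses one interface under every policy, which is why the expected traffic mix matters when choosing the xmit_hash_policy.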
During the planning stage, it is important to consider which xmit_hash_policy is most appropriate based on the expected usage of the bond. It is also important to include the network administrators in this planning as the xmit_hash_policy only controls the outgoing traffic on the appliance and has no effect on how the switch chooses an egress interface.