Veritas InfoScale™ 8.0.2 Solutions in Cloud Environments
- Overview and preparation
- Overview of InfoScale solutions in cloud environments
- InfoScale agents for monitoring resources in cloud environments
- InfoScale FSS feature for storage sharing in cloud environments
- InfoScale non-FSS feature for storage sharing in cloud environments
- About SmartIO in AWS environments
- Preparing for InfoScale installations in cloud environments
- Installing the AWS CLI package
- VPC security groups example
- Configurations for Amazon Web Services - Linux
- Configurations for Amazon Web Services - Windows
- Replication configurations in AWS - Windows
- HA and DR configurations in AWS - Windows
- EBS Multi-Attach feature support with InfoScale Enterprise in AWS cloud
- InfoScale service group configuration wizards support for EBS Multi-Attach
- Failover within a subnet of an AWS AZ using virtual private IP - Windows
- Failover across AWS subnets using overlay IP - Windows
- Public access to InfoScale cluster nodes in AWS using Elastic IP - Windows
- DR from on-premises to AWS and across AWS regions or VPCs - Windows
- DR from on-premises to AWS - Windows
- Configurations for Microsoft Azure - Linux
- Configurations for Microsoft Azure - Windows
- Replication configurations in Azure - Windows
- HA and DR configurations in Azure - Windows
- Shared disk support in Azure cloud and InfoScale service group configuration using wizards
- Failover within an Azure subnet using private IP - Windows
- Failover across Azure subnets using overlay IP - Windows
- Public access to cluster nodes in Azure using public IP - Windows
- DR from on-premises to Azure and across Azure regions or VNets - Windows
- Configurations for Google Cloud Platform - Linux
- Configurations for Google Cloud Platform - Windows
- Replication to and across cloud environments
- Migrating files to the cloud using Cloud Connectors
- About cloud connectors
- About InfoScale support for cloud connectors
- How InfoScale migrates data using cloud connectors
- Limitations for file-level tiering
- About operations with Amazon Glacier
- Migrating data from on-premises to cloud storage
- Reclaiming object storage space
- Removing a cloud volume
- Examining in-cloud storage usage
- Sample policy file
- Replication support with cloud tiering
- Configuration for Load Balancer for AWS and Azure - Linux
- Troubleshooting issues in cloud deployments
InfoScale non-FSS feature for storage sharing in cloud environments
The nodes in the cluster may be located within the same zone or across zones (Availability Zones in the case of AWS, and user-defined sites in the case of Azure). Storage devices that are under VxVM control are prefixed with the private IP address of the node. You can override this default behavior with the vxdctl set hostprefix command. For details, see the Storage Foundation Cluster File System High Availability Administrator's Guide - Linux.
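For example, to replace the IP-based prefix with a custom string, you might run the following on a node; the prefix value node1pfx is illustrative, and you can review the resulting setting in the vxdctl list output:
# Set a custom host prefix for device naming on this node (example value)
vxdctl set hostprefix=node1pfx
# Review the volboot settings to confirm the prefix
vxdctl list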
Veritas recommends that you configure LLT over UDP for both FSS and non-FSS clusters in the cloud. In a non-FSS environment, LLT is used only for messaging, so performance tuning is not required for the LLT configuration.
Cloud-based networks are relatively slow and have higher latency than physical networks. To achieve better LLT performance in such high-latency cloud networks, set the following tunable values before you start the LLT services (a sample /etc/llttab fragment follows the list):
set-flow window:10
set-flow highwater:10000
set-flow lowwater:8000
set-flow rporthighwater:10000
set-flow rportlowwater:8000
set-flow ackval:5
set-flow linkburst:32
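These tunables are set-flow directives; a common way to apply them is to add them to the /etc/llttab file so that they take effect when LLT starts. A minimal sketch, reusing the node, cluster, and link values from the example later in this section:
# /etc/llttab fragment with the flow-control tunables in place (illustrative)
set-node Node1
set-cluster 1
set-flow window:10
set-flow highwater:10000
set-flow lowwater:8000
set-flow rporthighwater:10000
set-flow rportlowwater:8000
set-flow ackval:5
set-flow linkburst:32
link link1 udp - udp 50000 - 192.1.3.1 -
link link2 udp - udp 50001 - 192.1.4.1 -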
Disable the LLT adaptive window in Azure and in GCP by setting the following value in the /etc/sysconfig/llt file:
LLT_ENABLE_AWINDOW=0
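A quick shell sketch to apply and verify the setting (LLT must be restarted for the change to take effect; the file path is as shown above):
# Add the setting if it is not already present
grep -q '^LLT_ENABLE_AWINDOW=' /etc/sysconfig/llt || \
    echo 'LLT_ENABLE_AWINDOW=0' >> /etc/sysconfig/llt
# Confirm the value
grep LLT_ENABLE_AWINDOW /etc/sysconfig/llt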
The following /etc/llttab file represents the configuration for Node1, where the links cross IP routers. Notice that IP addresses are specified for each link on each peer node. Availability zones do not support the Address Resolution Protocol (ARP). Therefore, you must disable the LLT heartbeats over ARP and broadcast by using the set-arp and set-bcasthb directives in the /etc/llttab file.
set-node Node1
set-cluster 1
link link1 udp - udp 50000 - 192.1.3.1 -
link link2 udp - udp 50001 - 192.1.4.1 -
#format: set-addr node-id link tag-name address
#example:
set-addr 0 link1 192.1.1.1
set-addr 0 link2 192.1.2.1
set-addr 2 link1 192.1.5.2
set-addr 2 link2 192.1.6.2
set-bcasthb 0
set-arp 0
The corresponding /etc/llttab file for Node0 begins similarly:
set-node Node0
set-cluster 1
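After the /etc/llttab files are in place on all nodes and LLT is running, you can confirm that the peer links are up. A minimal sanity check using the standard LLT utilities:
# Show verbose node and link status; all configured links should report UP
lltstat -nvv
# Display the configured address mappings (the set-addr entries)
lltconfig -a list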