Veritas InfoScale™ 8.0.2 Solutions in Cloud Environments
- Overview and preparation
- Overview of InfoScale solutions in cloud environments
- InfoScale agents for monitoring resources in cloud environments
- InfoScale FSS feature for storage sharing in cloud environments
- InfoScale non-FSS feature for storage sharing in cloud environments
- About SmartIO in AWS environments
- Preparing for InfoScale installations in cloud environments
- Installing the AWS CLI package
- VPC security groups example
- Configurations for Amazon Web Services - Linux
- Configurations for Amazon Web Services - Windows
- Replication configurations in AWS - Windows
- HA and DR configurations in AWS - Windows
- EBS Multi-Attach feature support with InfoScale Enterprise in AWS cloud
- InfoScale service group configuration wizards support for EBS Multi-Attach
- Failover within a subnet of an AWS AZ using virtual private IP - Windows
- Failover across AWS subnets using overlay IP - Windows
- Public access to InfoScale cluster nodes in AWS using Elastic IP - Windows
- DR from on-premises to AWS and across AWS regions or VPCs - Windows
- DR from on-premises to AWS - Windows
- Configurations for Microsoft Azure - Linux
- Configurations for Microsoft Azure - Windows
- Replication configurations in Azure - Windows
- HA and DR configurations in Azure - Windows
- Shared disk support in Azure cloud and InfoScale service group configuration using wizards
- Failover within an Azure subnet using private IP - Windows
- Failover across Azure subnets using overlay IP - Windows
- Public access to cluster nodes in Azure using public IP - Windows
- DR from on-premises to Azure and across Azure regions or VNets - Windows
- Configurations for Google Cloud Platform - Linux
- Configurations for Google Cloud Platform - Windows
- Replication to and across cloud environments
- Migrating files to the cloud using Cloud Connectors
- About cloud connectors
- About InfoScale support for cloud connectors
- How InfoScale migrates data using cloud connectors
- Limitations for file-level tiering
- About operations with Amazon Glacier
- Migrating data from on-premises to cloud storage
- Reclaiming object storage space
- Removing a cloud volume
- Examining in-cloud storage usage
- Sample policy file
- Replication support with cloud tiering
- Configuration for Load Balancer for AWS and Azure - Linux
- Troubleshooting issues in cloud deployments
DR from on-premises to AWS and across AWS regions or VPCs - Linux
InfoScale Enterprise lets you use the global cluster option (GCO) for DR configurations. You can use a DR configuration to fail over applications across different regions or VPCs in AWS. The cluster nodes can be in the same AZ or in different AZs.
The following information is required:
VPN tunnel information between regions or VPCs
The IP address to be used for cross-cluster communication:
Virtual private IP for the cluster nodes that exist in the same subnet
Overlay IP for the cluster nodes that exist in different subnets
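An overlay IP works by adding a route, in the route tables used by the cluster subnets, that points the overlay address at the network interface of the active node. The following AWS CLI sketch illustrates the idea; the route table and network interface IDs are placeholders, and in practice the InfoScale AWSIP agent performs these route updates automatically during failover:

```shell
# Add a route that directs the overlay IP (a /32 address outside the
# VPC CIDR) to the elastic network interface of the active node.
# rtb-... and eni-... are placeholder IDs for illustration only.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 172.32.1.2/32 \
    --network-interface-id eni-0123456789abcdef0

# On failover, repoint the same route at the new active node's interface.
aws ec2 replace-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 172.32.1.2/32 \
    --network-interface-id eni-0fedcba9876543210
```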
You can also use GCO to configure applications for DR from an on-premises site to AWS.
Note:
If you use an Amazon VPN tunnel in a global cluster configuration between an on-premises site and AWS, the cluster nodes in the cloud must be in the same subnet.
The following graphic depicts a sample DR configuration across AWS regions:
The sample configuration includes the following elements:
VPN tunnel between Region A and Region B
The primary site has the following elements:
A virtual private cloud, VPC 1, is configured in Region A of the AWS cloud.
An application is configured for HA using an InfoScale cluster that comprises two nodes, Node 1 and Node 2, which are EC2 instances.
Node 1 exists in Subnet 1 and Node 2 exists in Subnet 2.
The overlay IP allows the private IP of a node to fail over from one subnet to another in an AZ during failover or failback.
The secondary site has the following elements:
A virtual private cloud, VPC 2, is configured in Region B of the AWS cloud.
The same application is configured for HA on Node 3 and Node 4, which exist in Subnet 3 and Subnet 4 respectively.
The overlay IP allows the private IP of a node to fail over from one subnet to another in an AZ.
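Because the Icmp heartbeat in the sample configurations below pings the remote cluster address over the VPN tunnel, it is worth confirming ICMP reachability between the sites before bringing the global cluster online. A quick check from a primary-site node, using the cluster addresses from the sample configuration:

```shell
# From a node in Region A, verify that the secondary cluster address
# is reachable through the VPN tunnel. ICMP must be allowed by the
# security groups and network ACLs on both sides.
ping -c 3 172.35.1.2
```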
The following snippet is a service group configuration from a sample VCS configuration file (main.cf) at the primary site (Region A):
include "types.cf"

cluster sitever (
    ClusterAddress = "172.32.1.2"
    SecureClus = 1
    )

remotecluster sitecal (
    ClusterAddress = "172.35.1.2"
    ConnectTimeout = 3000
    SocketTimeout = 3000
    )

heartbeat Icmp (
    ClusterList = { sitecal }
    Arguments @sitecal = { "172.35.1.2" }
    )

system ip-172-31-21-156 (
    )

system ip-172-31-61-106 (
    )

group ClusterService (
    SystemList = { ip-172-31-21-156 = 0, ip-172-31-61-106 = 1 }
    AutoStartList = { ip-172-31-21-156, ip-172-31-61-106 }
    OnlineRetryLimit = 3
    OnlineRetryInterval = 120
    )

    AWSIP Aws_Ipres (
        OverlayIP = "172.32.1.2/32"
        Device = eth0
        AWSBinDir = "/usr/local/bin"
        )

    Application wac (
        StartProgram = "/opt/VRTSvcs/bin/wacstart -secure"
        StopProgram = "/opt/VRTSvcs/bin/wacstop"
        MonitorProcesses = { "/opt/VRTSvcs/bin/wac -secure" }
        RestartLimit = 3
        )

    IP Ipres (
        Device = eth0
        Address = "172.32.1.2"
        NetMask = "255.255.255.0"
        )

    NIC gconic (
        Device = eth0
        )

    Aws_Ipres requires Ipres
    Ipres requires gconic
    wac requires Ipres

The following snippet is a service group configuration from a sample VCS configuration file (main.cf) at the secondary site (Region B):
include "types.cf"

cluster sitecal (
    ClusterAddress = "172.35.1.2"
    SecureClus = 1
    )

remotecluster sitever (
    ClusterAddress = "172.32.1.2"
    ConnectTimeout = 3000
    SocketTimeout = 3000
    )

heartbeat Icmp (
    ClusterList = { sitever }
    Arguments @sitever = { "172.32.1.2" }
    )

system ip-172-34-20-109 (
    )

system ip-172-34-30-231 (
    )

group ClusterService (
    SystemList = { ip-172-34-20-109 = 0, ip-172-34-30-231 = 1 }
    AutoStartList = { ip-172-34-20-109, ip-172-34-30-231 }
    OnlineRetryLimit = 3
    OnlineRetryInterval = 120
    )

    AWSIP Aws_Ipres (
        OverlayIP = "172.35.1.2/32"
        Device = eth0
        AWSBinDir = "/usr/local/bin"
        )

    Application wac (
        StartProgram = "/opt/VRTSvcs/bin/wacstart -secure"
        StopProgram = "/opt/VRTSvcs/bin/wacstop"
        MonitorProcesses = { "/opt/VRTSvcs/bin/wac -secure" }
        RestartLimit = 3
        )

    IP Ipres (
        Device = eth0
        Address = "172.35.1.2"
        NetMask = "255.255.255.0"
        )

    NIC gconic (
        Device = eth0
        )

    Aws_Ipres requires Ipres
    Ipres requires gconic
    wac requires Ipres
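After both sites are configured and running, the global cluster link can be verified with standard VCS commands. The commands below are illustrative and assume the sample configurations shown above:

```shell
# List the remote clusters known to this site; the peer cluster
# (sitecal from the primary, sitever from the secondary) should appear.
haclus -list

# Show the state of the ClusterService group, which hosts the wac
# process and the IP resources used for cross-cluster communication.
hagrp -state ClusterService

# Summarized view of local system and service group states.
hastatus -sum
```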