InfoScale™ 9.0 Solutions in Cloud Environments
- Overview and preparation
- Overview of InfoScale solutions in cloud environments
 - InfoScale agents for monitoring resources in cloud environments
 - InfoScale FSS feature for storage sharing in cloud environments
 - InfoScale non-FSS feature for storage sharing in cloud environments
 - About SmartIO in AWS environments
 - Preparing for InfoScale installations in cloud environments
 - Installing the AWS CLI package
 - VPC security groups example
 
 - Configurations for Amazon Web Services - Linux
 - Configurations for Amazon Web Services - Windows
 - Replication configurations in AWS - Windows
 - HA and DR configurations in AWS - Windows
 - EBS Multi-Attach feature support with InfoScale Enterprise in AWS cloud
 - InfoScale service group configuration wizards support for EBS Multi-Attach
 - Failover within a subnet of an AWS AZ using virtual private IP - Windows
 - Failover across AWS subnets using overlay IP - Windows
 - Public access to InfoScale cluster nodes in AWS using Elastic IP - Windows
 - DR from on-premises to AWS and across AWS regions or VPCs - Windows
 - DR from on-premises to AWS - Windows
 
 
 - Configurations for Microsoft Azure - Linux
 - Configurations for Microsoft Azure - Windows
 - Replication configurations in Azure - Windows
 - HA and DR configurations in Azure - Windows
 - Shared disk support in Azure cloud and InfoScale service group configuration using wizards
 - Failover within an Azure subnet using private IP - Windows
 - Failover across Azure subnets using overlay IP - Windows
 - Public access to cluster nodes in Azure using public IP - Windows
 - DR from on-premises to Azure and across Azure regions or VNets - Windows
 
 
 - Configurations for Google Cloud Platform - Linux
 - Configurations for Google Cloud Platform - Windows
 - Replication to and across cloud environments
 - Migrating files to the cloud using Cloud Connectors
 - About cloud connectors
 - About InfoScale support for cloud connectors
 - How InfoScale migrates data using cloud connectors
 - Limitations for file-level tiering
 - About operations with Amazon Glacier
 - Migrating data from on-premises to cloud storage
 - Reclaiming object storage space
 - Removing a cloud volume
 - Examining in-cloud storage usage
 - Sample policy file
 - Replication support with cloud tiering
 
 - Configuration for Load Balancer for AWS and Azure - Linux
 - Troubleshooting issues in cloud deployments
 
Migrating data from on-premises to cloud storage
Make sure that the data you want to migrate consists of regular files; empty directories and symbolic links are not migrated. You can run a quick pre-check as shown below.
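One simple way to spot symbolic links and empty directories before you start (the mount point /data1 is a placeholder for your data volume's mount path):
# find /data1 -type l
# find /data1 -type d -empty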
To migrate data from on-premises to cloud storage
 - Create the policy file policy.xml.
See the Administering SmartTier chapter in the Storage Foundation Administrator's Guide - Linux.
For a sample policy file:
See Sample policy file.
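The rules in the policy depend on your tiering goals. The following is a minimal illustrative sketch, not the authoritative format; it assumes the placement classes LOCAL and CLOUD that are assigned later in this procedure, the default DTD location, and a single rule that relocates files not accessed for 30 days:
<?xml version="1.0"?>
<!DOCTYPE PLACEMENT_POLICY SYSTEM "/opt/VRTSvxfs/etc/placement_policy.dtd">
<PLACEMENT_POLICY Version="5.0" Name="cloud_tier_policy">
  <RULE Flags="data" Name="relocate_old_files">
    <!-- Apply this rule to all files -->
    <SELECT Flags="Data">
      <PATTERN> * </PATTERN>
    </SELECT>
    <!-- Create new files on the local tier -->
    <CREATE>
      <ON>
        <DESTINATION>
          <CLASS> LOCAL </CLASS>
        </DESTINATION>
      </ON>
    </CREATE>
    <!-- Move files not accessed for 30 days to the cloud tier -->
    <RELOCATE>
      <FROM>
        <SOURCE>
          <CLASS> LOCAL </CLASS>
        </SOURCE>
      </FROM>
      <TO>
        <DESTINATION>
          <CLASS> CLOUD </CLASS>
        </DESTINATION>
      </TO>
      <WHEN>
        <ACCAGE Units="days">
          <MIN Flags="gt"> 30 </MIN>
        </ACCAGE>
      </WHEN>
    </RELOCATE>
  </RULE>
</PLACEMENT_POLICY>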
 - Create a volume set with the existing volumes.
Note:
Unmount the file system before creating the volume set. After creating the volume set, mount it at the same mount point.
# umount mount_path_of_data_volume
# vxvset -g dg_name make vset_name local_data_volume
# mount -t vxfs /dev/vx/dsk/dg_name/vset_name \
mount_path_of_data_volume
 - Create buckets/containers in the cloud storage namespace. See the related cloud vendor documentation for instructions; an example with the AWS CLI follows.
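For instance, with AWS S3 (the bucket name and region below are placeholders):
$ aws s3api create-bucket --bucket infoscale-tier-01 --region us-east-1
For regions other than us-east-1, also pass --create-bucket-configuration LocationConstraint=region_name.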
 - Create the cloud volume.
For block-level data migration:
# vxassist -g dg_name make cloudvol_name size vxcloud=on
For file-level data migration:
# vxassist -g dg_name make cloudvol_name size fscloud=on
 - Configure the cloud target. Use the command that corresponds to your connector.
S3:
# vxcloud -g diskgroup_name \
addcloud cloudvol_name host=host_address \
bucket=bucket_name access_key=access_key \
type=S3 secret_key=secret_key \
[https=true|false] [sig_version=v4|v2]
where secret_key and access_key are the credentials to access the vendor cloud services.
By default, https is set to true and sig_version is set to v4.
Glacier:
# vxcloud -g diskgroup_name \
addcloud cloudvol_name host=host_address \
bucket=vault_name access_key=access_key \
secret_key=secret_key type=GLACIER
where secret_key and access_key are the credentials to access the vendor cloud services.
Note:
Only file-level tiering is supported with Amazon Glacier.
BLOB:
# vxcloud -g diskgroup_name \
addcloud cloudvol_name \
host=host_address bucket=bucket_name \
access_key=access_key type=BLOB \
endpoint=account_name [https=true|false]
where access_key is the credential to access the vendor cloud services and endpoint is the storage account name of the user.
By default, https is set to true.
Google Cloud:
# vxcloud -g diskgroup_name \
addcloud cloudvol_name host=host_address \
bucket=bucket_name type=GOOGLE \
google_config=config.json_file_path [https=true|false]
where config.json is a file that contains the private_key, project_id, and client_email values for the Google service account. Download this file in the JSON format from the Service Accounts tab of the GCP Console.
By default, https is set to true.
Note:
Arctera recommends that you associate each cloud volume with a separate bucket.
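For illustration, a complete S3 invocation might look like the following; the disk group, host, bucket, and credentials are hypothetical placeholders:
# vxcloud -g clouddg \
addcloud cloudvol host=s3.amazonaws.com \
bucket=infoscale-tier-01 access_key=AKIAEXAMPLEKEY \
type=S3 secret_key=wJalrEXAMPLESECRET \
https=true sig_version=v4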
 - Add the cloud volume to the volume set:
# vxvset -g dg_name addvol vset_name cloudvol_name
 - Verify that the cloud volume is appropriately tagged.
# fsvoladm queryflags dataonly mount_path_of_data_volume cloudvol_name
 - Assign placement classes to the local and cloud volumes.
# vxassist -g dg_name settag local_datavol_name \
vxfs.placement_class.LOCAL
# vxassist -g dg_name settag cloudvol_name \
vxfs.placement_class.CLOUD
 - Assign the policy to the file system.
# fsppadm assign mount_path_of_data_volume policy.xml
 - View an analysis report of the data transfer.
# fsppadm analyze mount_path_of_data_volume
 - Enforce the policy to move the data between the local volume and the cloud volume. 
Note:
You can create a cron job to schedule the migration of old data onto the cloud volume; an example entry follows the command below.
# fsppadm enforce mount_path_of_data_volume
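An illustrative crontab entry that runs the enforcement nightly at 1:00 A.M. (the mount point /data1 and the fsppadm path are assumptions; verify the command path on your system):
0 1 * * * /opt/VRTS/bin/fsppadm enforce /data1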
 - Verify the location of files on the local and cloud volumes:
# fsmap -a list_file
# fsmap -a /data1/*
Volume    Extent Type  File Offset  Extent Size  File
localvol  Data         0            1048576      /data1/reports-2016-03
cloudvol  Data         0            1048576      /data1/reports-2016-04
 - Check the free space and the used space across the volumes in the volume set using the fsvoladm command:
# fsvoladm list mount_path_of_data_volume
# fsvoladm list /data1
devid  size         used    avail        name
0      2097152      356360  1740792      localvol
1      10737418240  40      10737418200  cloudvol