Veritas InfoScale™ 7.4.1 Solutions in Cloud Environments

Last Published:
Product(s): InfoScale & Storage Foundation (7.4.1)
Platform: Linux, Windows
  1. Overview and preparation
    1. Overview of InfoScale solutions in cloud environments
    2. InfoScale agents for monitoring resources in cloud environments
    3. InfoScale feature for storage sharing in cloud environments
    4. About SmartIO in AWS environments
    5. Preparing for InfoScale installations in cloud environments
    6. Installing the AWS CLI package
  2. Configurations for Amazon Web Services - Linux
    1. Replication configurations in AWS - Linux
      1. Replication from on-premises to AWS - Linux
      2. Replication across AZs within an AWS region - Linux
      3. Replication across AWS regions - Linux
      4. Replication across multiple AWS AZs and regions (campus cluster) - Linux
    2. HA and DR configurations in AWS - Linux
      1. Failover within a subnet of an AWS AZ using virtual private IP - Linux
      2. Failover across AWS subnets using overlay IP - Linux
      3. Public access to InfoScale cluster nodes in AWS using elastic IP - Linux
      4. DR from on-premises to AWS and across AWS regions or VPCs - Linux
  3. Configurations for Amazon Web Services - Windows
    1. Replication configurations in AWS - Windows
      1. Replication from on-premises to AWS - Windows
      2. Replication across AZs in an AWS region - Windows
      3. Replication across AWS regions - Windows
    2. HA and DR configurations in AWS - Windows
      1. Failover within a subnet of an AWS AZ using virtual private IP - Windows
      2. Failover across AWS subnets using overlay IP - Windows
      3. Public access to InfoScale cluster nodes in AWS using Elastic IP - Windows
      4. DR from on-premises to AWS and across AWS regions or VPCs - Windows
      5. DR from on-premises to AWS - Windows
  4. Configurations for Microsoft Azure - Linux
    1. Replication configurations in Azure - Linux
      1. Replication from on-premises to Azure - Linux
      2. Replication within an Azure region - Linux
      3. Replication across Azure regions - Linux
      4. Replication across multiple Azure sites and regions (campus cluster) - Linux
      5. About identifying a temporary resource disk - Linux
    2. HA and DR configurations in Azure - Linux
      1. Failover within an Azure subnet using private IP - Linux
      2. Failover across Azure subnets using overlay IP - Linux
      3. Public access to cluster nodes in Azure using public IP - Linux
      4. DR from on-premises to Azure and across Azure regions or VNets - Linux
  5. Configurations for Microsoft Azure - Windows
    1. Replication configurations in Azure - Windows
      1. Replication from on-premises to Azure - Windows
      2. Replication within an Azure region - Windows
      3. Replication across Azure regions - Windows
    2. HA and DR configurations in Azure - Windows
      1. Failover within an Azure subnet using private IP - Windows
      2. Failover across Azure subnets using overlay IP - Windows
      3. Public access to cluster nodes in Azure using public IP - Windows
      4. DR from on-premises to Azure and across Azure regions or VNets - Windows
  6. Configurations for Google Cloud Platform - Linux
    1. Replication configurations in GCP - Linux
      1. Replication across GCP regions - Linux
      2. Replication across multiple GCP zones and regions (campus cluster) - Linux
    2. HA and DR configurations in GCP - Linux
      1. Failover within a subnet of a GCP zone using virtual private IP - Linux
      2. Failover across GCP subnets using overlay IP - Linux
      3. DR across GCP regions or VPC networks - Linux
      4. Shared storage within a GCP zone or across GCP zones - Linux
  7. Configurations for Google Cloud Platform - Windows
    1. Replication configurations in GCP - Windows
      1. Replication from on-premises to GCP - Windows
      2. Replication across zones in a GCP region - Windows
      3. Replication across GCP regions - Windows
    2. HA and DR configurations in GCP - Windows
      1. Failover within a subnet of a GCP zone using virtual private IP - Windows
      2. Failover across GCP subnets using overlay IP - Windows
      3. DR across GCP regions or VPC networks - Windows
  8. Replication to and across cloud environments
    1. Data replication in supported cloud environments
    2. Supported replication scenarios
    3. Setting up replication across AWS and Azure environments
  9. Migrating files to the cloud using Cloud Connectors
    1. About cloud connectors
    2. About InfoScale support for cloud connectors
    3. How InfoScale migrates data using cloud connectors
    4. Limitations for file-level tiering
    5. About operations with Amazon Glacier
    6. Migrating data from on-premises to cloud storage
    7. Reclaiming object storage space
    8. Removing a cloud volume
    9. Examining in-cloud storage usage
    10. Sample policy file
    11. Replication support with cloud tiering
  10. Troubleshooting issues in cloud deployments
    1. In an Azure environment, exporting a disk for Flexible Storage Sharing (FSS) may fail with "Disk not supported for FSS operation" error

In an Azure environment, exporting a disk for Flexible Storage Sharing (FSS) may fail with "Disk not supported for FSS operation" error

For Flexible Storage Sharing, you must first export all the non-shared disks for network sharing. If your deployment uses JBOD-type disks, you may notice the following while exporting the non-shared disks:

  • The disk export operation fails with the "Disk not supported for FSS operation" error

    # vxdisk export DiskName

    VxVM vxdisk ERROR V-5-1-531 Device DiskName: export failed: Disk not supported for FSS operations

  • The checkfss disk command fails with the "Disk not valid for FSS operation" error

    # vxddladm checkfss DiskName

    VxVM vxddladm INFO V-5-1-18714 DiskName is not a valid disk for FSS operation

This issue occurs if a JBOD definition is not added to the disks.

Workaround:

Before exporting a disk for network sharing, add a JBOD definition on the disk.

Note:

You can add a JBOD definition to a disk only if the disk's SCSI inquiry supports a unique serial number. You cannot export a disk for network sharing if its SCSI inquiry does not return a unique serial number.

To add a JBOD definition to a disk, perform the following steps:

  1. Run a query on the standard disk pages (0, 0x80, and 0x83) to find the available unique serial number.

    For page 0:

    # /etc/vx/diag.d/vxscsiinq -d /dev/vx/rdmp/DiskName

    For page 0x80:

    # /etc/vx/diag.d/vxscsiinq -d -e 1 -p 0x80 /dev/vx/rdmp/DiskName

    For page 0x83:

    # /etc/vx/diag.d/vxscsiinq -d -e 1 -p 0x83 /dev/vx/rdmp/DiskName

    The following sample output of the command for page 0x83 contains the unique serial number.

    ------- Identifier Descriptor 1 -------
    ID type             : 0x1 (T10 vendor ID based)
    Protocol Identifier : 0x0
    Code set            : 0x1
    PIV                 : 0x0
    Association         : 0x0
    Length              : 0x18   
    Data                : 4d5346542020202069c0ae2f82ab294b834866ff...
            /dev/vx/rdmp/DiskName: Raw data size 32
    Bytes:  0 -  7  0x00  0x83  0x00  0x1c  0x01  0x01  0x00  0x18 ........
    Bytes:  8 - 15  0x4d  0x53  0x46  0x54  0x20  0x20  0x20  0x20 MSFT
    Bytes: 16 - 23  0x69  0xc0  0xae  0x2f  0x82  0xab  0x29  0x4b i../..)K
    Bytes: 24 - 31  0x83  0x48  0x66  0xff  0x1d  0xd3  0xf5  0xcb .Hf.....
    
  2. Note the following values from the command output in step 1:

    opcode= 0x12 (18) (the standard INQUIRY opcode)

    pagecode= page number that contains the unique serial number (for example, 083)

    offset= byte offset where the serial number starts (for example, in the output above the offset is 8)

    length= length of the unique serial number, as given in the Length field (24, or 0x18)

  3. Add the JBOD definition.

    # vxddladm addjbod vid=vendorid serialnum=opcode/pagecode/offset/length

    For example:

    # vxddladm addjbod vid=MSFT serialnum=18/083/8/0x18

  4. Scan the disks.

    # vxdisk scandisks

  5. Verify that the JBOD definition has been added successfully.

    # vxddladm checkfss DiskName

    The command output displays a confirmation message.
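Once the page, offset, and length have been noted from step 1, steps 2 through 5 can be strung together as a small shell sketch. This is a minimal illustration under stated assumptions, not part of the product documentation: the variable values below are taken from the sample page 0x83 output above (MSFT vendor, page 083, offset 8, length 0x18), and DiskName is a placeholder for your disk's access name.

```shell
#!/bin/sh
# Values noted from the vxscsiinq output in step 1. These are the sample
# values from the page 0x83 output above; substitute the values for your
# own disk's inquiry pages.
VID=MSFT        # vendor ID (bytes 8-15 of the page 0x83 raw data)
OPCODE=18       # standard SCSI INQUIRY opcode (0x12 hex = 18 decimal)
PAGECODE=083    # page that contains the unique serial number
OFFSET=8        # byte offset where the serial number starts
LENGTH=0x18     # length of the serial number, from the Length field

# Build the serialnum argument in the opcode/pagecode/offset/length form
# expected by vxddladm addjbod.
SERIALNUM="${OPCODE}/${PAGECODE}/${OFFSET}/${LENGTH}"
echo "serialnum argument: ${SERIALNUM}"

# Run the VxVM commands only where the tools are actually installed.
if command -v vxddladm >/dev/null 2>&1; then
    vxddladm addjbod vid="${VID}" serialnum="${SERIALNUM}"    # step 3
    vxdisk scandisks                                          # step 4
    vxddladm checkfss DiskName                                # step 5
fi
```

If checkfss confirms that the disk is valid for FSS operations, the disk can then be exported for network sharing with vxdisk export.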