Veritas NetBackup™ Cloud Administrator's Guide
- About NetBackup cloud storage
- About the cloud storage
- About the Amazon S3 cloud storage API type
- Protecting data in Amazon Glacier for long-term retention
- Protecting data using Amazon's cloud tiering
- About the EMC Atmos cloud storage API type
- About the Microsoft Azure cloud storage API type
- About the OpenStack Swift cloud storage API type
- Configuring cloud storage in NetBackup
- Scalable Storage properties
- Cloud Storage properties
- About the NetBackup CloudStore Service Container
- About the NetBackup media servers for cloud storage
- Configuring a storage server for cloud storage
- NetBackup cloud storage server properties
- Configuring a storage unit for cloud storage
- Changing cloud storage disk pool properties
- Monitoring and Reporting
- Operational notes
- About unified logging
- About legacy logging
- Troubleshooting cloud storage configuration issues
- Troubleshooting cloud storage operational issues
Data restore from the Google Nearline storage class may fail
Data restore from the Google Nearline storage class may fail if the READ_BUFFER_SIZE in NetBackup is set to a value that is greater than the allotted read throughput. Google allots read throughput based on the total amount of data that you have stored in the Google Nearline storage class.
The default READ_BUFFER_SIZE is 100 MB.
The NetBackup bptm logs show the following error after the data restore from Google Nearline fails:
HTTP status: 429, Retry type: RETRY_EXHAUSTED
Google provides 4 MB/s of read throughput per TB of data that you store in the Google Nearline storage class, per location. Change the READ_BUFFER_SIZE value in NetBackup to match the read throughput that Google allots.
For example, if you have stored 5 TB of data in the Google Nearline storage class, change the READ_BUFFER_SIZE value to 20 MB to match the allotted read throughput of 20 MB/s (5 TB x 4 MB/s per TB).
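The sizing rule above can be sketched as a short calculation. This is an illustrative helper only, not a NetBackup utility; the function name and the 4 MB/s-per-TB constant reflect the allotment described in this section.

```python
# Illustrative sketch of the READ_BUFFER_SIZE sizing rule described above.
# Google Nearline allots approximately 4 MB/s of read throughput per TB
# stored per location; READ_BUFFER_SIZE (in MB) should not exceed that.
THROUGHPUT_PER_TB_MB = 4  # MB/s of read throughput per TB stored

def recommended_read_buffer_mb(stored_tb: float) -> int:
    """Return a READ_BUFFER_SIZE (in MB) that matches the allotted throughput."""
    return int(stored_tb * THROUGHPUT_PER_TB_MB)

print(recommended_read_buffer_mb(5))   # 5 TB stored -> 20
print(recommended_read_buffer_mb(25))  # 25 TB stored -> 100 (the default value)
```

With less than 25 TB stored, the allotted throughput is below the 100 MB default, which is why the default READ_BUFFER_SIZE can exhaust retries on smaller Nearline data sets.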
For more information, refer to the Google guidelines: