Storage Foundation 7.3 Administrator's Guide - Linux
Recovery of erasure coded volumes
An erasure coded volume may need recovery for multiple reasons:
- The underlying storage failed transiently, leaving the data on it stale.
- The underlying storage was evacuated or migrated (relocation or hot-relocation).
- The system failed while the volume was undergoing read/write operations. Recovery is not yet supported for this scenario.
Recovery of a volume rebuilds the data on the revived storage. The FlashSnap feature can rebuild the data efficiently by avoiding the rebuild of data that is already consistent: FlashSnap tracks the changes that are made to a volume from the point in time at which a disk is detached. To use the FlashSnap feature, prepare the volume for such recovery, preferably when you create the erasure coded volume or before any disk failure, by adding the latest supported version of the data change object (DCO).
To prepare the volume for optimal recovery, use the vxsnap command:
# vxsnap -g disk_group prepare vol_name ndcomir=n
It is recommended to configure as many DCO mirrors (specified with the ndcomir attribute) as the fault tolerance that is required for the volume.
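For example, for a hypothetical erasure coded volume named ecvol in a disk group named ecdg that must tolerate two disk failures, the DCO could be added with two mirrors. The disk group name, volume name, and mirror count here are illustrative only:
# vxsnap -g ecdg prepare ecvol ndcomir=2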
If an erasure coded volume has a DCO, any read or write failure on a subdisk enables tracking of subsequent write operations on the volume. This tracking is then used to optimally recover the subdisk when the storage subsystem is revived and recovered.
An erasure coded volume can be recovered automatically or manually.
Automatic recovery: If the storage fault is transient (for example, when a node that contributes storage in the FSS cluster fails and then revives), the data on the revived storage is stale and needs to be rebuilt or recovered. The vxattachd daemon automatically detects the storage revival. If an erasure coded volume is present on the detected storage, recovery is automatically initiated to synchronize the stale disks.

Manual recovery: If the vxattachd daemon is not running when the storage is revived, the erasure coded volume is not recovered automatically, even if the daemon is restarted later. In such cases, manually recover the erasure coded volume by using the vxrecover command:
# vxrecover -g disk_group
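If only a specific erasure coded volume needs to be recovered, the volume name can be passed to vxrecover; the -b option runs the recovery in the background. This is a sketch only, and the volume name ecvol is a hypothetical example:
# vxrecover -b -g disk_group ecvol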
Recovery of an erasure coded volume creates a task that can be controlled by using the vxtask command. Use the vxtask list command to locate the related recovery task, and then monitor or control it as needed.
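For example, the following sequence lists the active tasks and then watches the progress of the recovery task; the task ID 167 is purely illustrative and would be taken from the vxtask list output:
# vxtask list
# vxtask monitor 167
The task can also be paused, resumed, or aborted with the vxtask pause, vxtask resume, and vxtask abort operations.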
See the vxtask(1M) manual page.