Veritas™ Volume Manager Administrator's Guide
- Understanding Veritas Volume Manager
- About Veritas Volume Manager
- VxVM and the operating system
- How VxVM handles storage management
- Volume layouts in VxVM
- Online relayout
- Volume resynchronization
- Dirty region logging
- Volume snapshots
- FastResync
- Hot-relocation
- Volume sets
- Provisioning new usable storage
- Administering disks
- About disk management
- Disk devices
- Discovering and configuring newly added disk devices
- Partial device discovery
- Discovering disks and dynamically adding disk arrays
- Third-party driver coexistence
- How to administer the Device Discovery Layer
- Listing all the devices including iSCSI
- Listing all the Host Bus Adapters including iSCSI
- Listing the ports configured on a Host Bus Adapter
- Listing the targets configured from a Host Bus Adapter or a port
- Listing the devices configured from a Host Bus Adapter and target
- Getting or setting the iSCSI operational parameters
- Listing all supported disk arrays
- Excluding support for a disk array library
- Re-including support for an excluded disk array library
- Listing excluded disk arrays
- Listing supported disks in the DISKS category
- Displaying details about a supported array library
- Adding unsupported disk arrays to the DISKS category
- Removing disks from the DISKS category
- Foreign devices
- Disks under VxVM control
- Changing the disk-naming scheme
- About the Array Volume Identifier (AVID) attribute
- Discovering the association between enclosure-based disk names and OS-based disk names
- About disk installation and formatting
- Displaying or changing default disk layout attributes
- Adding a disk to VxVM
- RAM disk support in VxVM
- Veritas Volume Manager co-existence with Oracle Automatic Storage Management (ASM) disks
- Rootability
- Displaying disk information
- Controlling Powerfail Timeout
- Removing disks
- Removing a disk from VxVM control
- Removing and replacing disks
- Enabling a disk
- Taking a disk offline
- Renaming a disk
- Reserving disks
- Administering Dynamic Multi-Pathing
- How DMP works
- Disabling multi-pathing and making devices invisible to VxVM
- Enabling multi-pathing and making devices visible to VxVM
- About enabling and disabling I/O for controllers and storage processors
- About displaying DMP database information
- Displaying the paths to a disk
- Setting customized names for DMP nodes
- Administering DMP using vxdmpadm
- Retrieving information about a DMP node
- Displaying consolidated information about the DMP nodes
- Displaying the members of a LUN group
- Displaying paths controlled by a DMP node, controller, enclosure, or array port
- Displaying information about controllers
- Displaying information about enclosures
- Displaying information about array ports
- Displaying extended device attributes
- Suppressing or including devices for VxVM or DMP control
- Gathering and displaying I/O statistics
- Setting the attributes of the paths to an enclosure
- Displaying the redundancy level of a device or enclosure
- Specifying the minimum number of active paths
- Displaying the I/O policy
- Specifying the I/O policy
- Disabling I/O for paths, controllers or array ports
- Enabling I/O for paths, controllers or array ports
- Renaming an enclosure
- Configuring the response to I/O failures
- Configuring the I/O throttling mechanism
- Configuring Subpaths Failover Groups (SFG)
- Configuring Low Impact Path Probing
- Displaying recovery option values
- Configuring DMP path restoration policies
- Stopping the DMP path restoration thread
- Displaying the status of the DMP path restoration thread
- Displaying information about the DMP error-handling thread
- Configuring array policy modules
- Online dynamic reconfiguration
- About online dynamic reconfiguration
- Reconfiguring a LUN online that is under DMP control
- Removing LUNs dynamically from an existing target ID
- Adding new LUNs dynamically to a new target ID
- About detecting target ID reuse if the operating system device tree is not cleaned up
- Scanning an operating system device tree after adding or removing LUNs
- Cleaning up the operating system device tree after removing LUNs
- Upgrading the array controller firmware online
- Replacing a host bus adapter
- Creating and administering disk groups
- About disk groups
- Displaying disk group information
- Creating a disk group
- Adding a disk to a disk group
- Removing a disk from a disk group
- Moving disks between disk groups
- Deporting a disk group
- Importing a disk group
- Handling of minor number conflicts
- Moving disk groups between systems
- Handling cloned disks with duplicated identifiers
- Renaming a disk group
- Handling conflicting configuration copies
- Reorganizing the contents of disk groups
- Disabling a disk group
- Destroying a disk group
- Upgrading the disk group version
- About the configuration daemon in VxVM
- Backing up and restoring disk group configuration data
- Using vxnotify to monitor configuration changes
- Working with existing ISP disk groups
- Creating and administering subdisks and plexes
- About subdisks
- Creating subdisks
- Displaying subdisk information
- Moving subdisks
- Splitting subdisks
- Joining subdisks
- Associating subdisks with plexes
- Associating log subdisks
- Dissociating subdisks from plexes
- Removing subdisks
- Changing subdisk attributes
- About plexes
- Creating plexes
- Creating a striped plex
- Displaying plex information
- Attaching and associating plexes
- Taking plexes offline
- Detaching plexes
- Reattaching plexes
- Moving plexes
- Copying volumes to plexes
- Dissociating and removing plexes
- Changing plex attributes
- Creating volumes
- About volume creation
- Types of volume layouts
- Creating a volume
- Using vxassist
- Discovering the maximum size of a volume
- Disk group alignment constraints on volumes
- Creating a volume on any disk
- Creating a volume on specific disks
- Creating a mirrored volume
- Creating a volume with a version 0 DCO volume
- Creating a volume with a version 20 DCO volume
- Creating a volume with dirty region logging enabled
- Creating a striped volume
- Mirroring across targets, controllers or enclosures
- Mirroring across media types (SSD and HDD)
- Creating a RAID-5 volume
- Creating tagged volumes
- Creating a volume using vxmake
- Initializing and starting a volume
- Accessing a volume
- Using rules and persistent attributes to make volume allocation more efficient
- Administering volumes
- About volume administration
- Displaying volume information
- Monitoring and controlling tasks
- About SF Thin Reclamation feature
- Reclamation of storage on thin reclamation arrays
- Monitoring Thin Reclamation using the vxtask command
- Using SmartMove with Thin Provisioning
- Admin operations on an unmounted VxFS thin volume
- Stopping a volume
- Starting a volume
- Resizing a volume
- Adding a mirror to a volume
- Removing a mirror
- Adding logs and maps to volumes
- Preparing a volume for DRL and instant snapshots
- Specifying storage for version 20 DCO plexes
- Using a DCO and DCO volume with a RAID-5 volume
- Determining the DCO version number
- Determining if DRL is enabled on a volume
- Determining if DRL logging is active on a volume
- Disabling and re-enabling DRL
- Removing support for DRL and instant snapshots from a volume
- Adding traditional DRL logging to a mirrored volume
- Upgrading existing volumes to use version 20 DCOs
- Setting tags on volumes
- Changing the read policy for mirrored volumes
- Removing a volume
- Moving volumes from a VM disk
- Enabling FastResync on a volume
- Performing online relayout
- Converting between layered and non-layered volumes
- Adding a RAID-5 log
- Creating and administering volume sets
- Configuring off-host processing
- Administering hot-relocation
- About hot-relocation
- How hot-relocation works
- Configuring a system for hot-relocation
- Displaying spare disk information
- Marking a disk as a hot-relocation spare
- Removing a disk from use as a hot-relocation spare
- Excluding a disk from hot-relocation use
- Making a disk available for hot-relocation use
- Configuring hot-relocation to use only spare disks
- Moving relocated subdisks
- Modifying the behavior of hot-relocation
- Administering cluster functionality (CVM)
- Overview of clustering
- Multiple host failover configurations
- About the cluster functionality of VxVM
- CVM initialization and configuration
- Dirty region logging in cluster environments
- Administering VxVM in cluster environments
- Requesting node status and discovering the master node
- Changing the CVM master manually
- Determining if a LUN is in a shareable disk group
- Listing shared disk groups
- Creating a shared disk group
- Importing disk groups as shared
- Handling cloned disks in a shared disk group
- Converting a disk group from shared to private
- Moving objects between shared disk groups
- Splitting shared disk groups
- Joining shared disk groups
- Changing the activation mode on a shared disk group
- Setting the disk detach policy on a shared disk group
- Setting the disk group failure policy on a shared disk group
- Creating volumes with exclusive open access by a node
- Setting exclusive open access to a volume by a node
- Displaying the cluster protocol version
- Displaying the supported cluster protocol version range
- Recovering volumes in shared disk groups
- Obtaining cluster performance statistics
- Administering CVM from the slave node
- Administering sites and remote mirrors
- About sites and remote mirrors
- Making an existing disk group site consistent
- Configuring a new disk group as a Remote Mirror configuration
- Fire drill - testing the configuration
- Changing the site name
- Administering the Remote Mirror configuration
- Examples of storage allocation by specifying sites
- Displaying site information
- Failure and recovery scenarios
- Performance monitoring and tuning
- Appendix A. Using Veritas Volume Manager commands
- Appendix B. Configuring Veritas Volume Manager
- Glossary
Upgrading existing volumes to use version 20 DCOs
You can upgrade a volume created before VxVM 4.0 to take advantage of new features such as instant snapshots and DRL logs that are configured within the DCO volume. You must upgrade the version of the disk groups, remove snapshots and version 0 DCOs that are associated with volumes in the disk groups, and configure the volumes with version 20 DCOs.
The plexes of the DCO volume require persistent storage space on disk to be available. To make room for the DCO plexes, you may need to add extra disks to the disk group, or reconfigure volumes to free up space in the disk group. You can also add disk space by using the disk group move feature to bring in spare disks from a different disk group.
The vxsnap prepare command automatically enables FastResync on the volume and on any snapshots that are generated from it.
If the volume is a RAID-5 volume, it is converted to a layered volume that can be used with snapshots and FastResync.
To upgrade an existing disk group and the volumes that it contains
- Upgrade the disk group that contains the volume to the latest version before performing the remainder of the procedure described in this section. To check the version of a disk group, use the following command:
# vxdg list diskgroup
To upgrade a disk group to the latest version, use the following command:
# vxdg upgrade diskgroup
- To discover which volumes in the disk group have version 0 DCOs associated with them, use the following command:
# vxprint [-g diskgroup] -F "%name" -e "v_hasdcolog"
Because the disk group has just been upgraded, any DCOs that this command reports are version 0 DCOs.
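For example, on a hypothetical disk group named mydg containing two such volumes, vol1 and vol2 (all names here are illustrative only), the command and its output might look like this:

# vxprint -g mydg -F "%name" -e "v_hasdcolog"
vol1
vol2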
To upgrade each volume within the disk group, repeat the following steps as required.
- If the volume to be upgraded has a traditional DRL plex or subdisk (that is, the DRL logs are not held in a version 20 DCO volume), use the following command to remove it:
# vxassist [-g diskgroup] remove log volume [nlog=n]
To specify the number, n, of logs to be removed, use the optional attribute nlog=n. By default, the vxassist command removes one log.
- For a volume that has one or more associated snapshot volumes, use the following command to reattach and resynchronize each snapshot:
# vxassist [-g diskgroup] snapback snapvol
If FastResync was enabled on the volume before the snapshot was taken, the data in the snapshot plexes is quickly resynchronized from the original volume. If FastResync was not enabled, a full resynchronization is performed.
- To turn off FastResync for the volume, use the following command:
# vxvol [-g diskgroup] set fastresync=off volume
- To dissociate a version 0 DCO object, DCO volume and snap objects from the volume, use the following command:
# vxassist [-g diskgroup] remove log volume logtype=dco
- To upgrade the volume, use the following command:
# vxsnap [-g diskgroup] prepare volume [ndcomirs=number] \
    [regionsize=size] [drl=on|sequential|off] \
    [storage_attribute ...]
The ndcomirs attribute specifies the number of DCO plexes that are created in the DCO volume. You should configure as many DCO plexes as there are data and snapshot plexes in the volume. The DCO plexes are used to set up a DCO volume for any snapshot volume that you subsequently create from the snapshot plexes. For example, specify ndcomirs=5 for a volume with 3 data plexes and 2 snapshot plexes.
The regionsize attribute specifies the size of the tracked regions in the volume. A write to a region is tracked by setting a bit in the change map. The default value is 64k (64KB). A smaller value requires more disk space for the change maps, but the finer granularity provides faster resynchronization.
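The space-versus-granularity trade-off can be seen by counting regions: each region is tracked by one bit in the change map, so halving the region size doubles the number of bits required. The following sketch illustrates this scaling. It is a generic illustration only, not part of VxVM; the helper name and the assumption that map size is proportional to the region count are ours, and the real on-disk DCO map layout is internal to Veritas Volume Manager.

```python
import math

def change_map_regions(volume_size_kb, region_size_kb=64):
    """Number of tracked regions (one map bit each) for a given region size.

    Hypothetical helper for illustration; not a VxVM interface.
    """
    return math.ceil(volume_size_kb / region_size_kb)

# A 1 GB (1048576 KB) volume with the default 64 KB region size:
print(change_map_regions(1048576))        # 16384 regions
# Quartering the region size quadruples the number of map bits,
# so resynchronization is finer-grained but the map needs more space:
print(change_map_regions(1048576, 16))    # 65536 regions
```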
To enable DRL logging on the volume, specify drl=on (this is the default setting). If you need sequential DRL, specify drl=sequential. If you do not need DRL, specify drl=off.
To define the disks that can or cannot be used for the plexes of the DCO volume, you can also specify vxassist-style storage attributes.
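Putting the steps together, upgrading a hypothetical two-plex mirrored volume, vol1, in a disk group named mydg (all names in this sequence are illustrative only) might look like this. The example assumes vol1 has a traditional DRL log and no associated snapshot volumes; if it did have snapshots, you would first reattach them with vxassist snapback as described above.

# vxdg upgrade mydg
# vxassist -g mydg remove log vol1
# vxvol -g mydg set fastresync=off vol1
# vxassist -g mydg remove log vol1 logtype=dco
# vxsnap -g mydg prepare vol1 ndcomirs=2 regionsize=64k drl=on

Here ndcomirs=2 matches the two data plexes of the mirrored volume, and regionsize=64k simply restates the default.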