Veritas™ Volume Manager Administrator's Guide
- Understanding Veritas Volume Manager
- About Veritas Volume Manager
- VxVM and the operating system
- How VxVM handles storage management
- Volume layouts in VxVM
- Online relayout
- Volume resynchronization
- Dirty region logging
- Volume snapshots
- FastResync
- Hot-relocation
- Volume sets
- Provisioning new usable storage
- Administering disks
- About disk management
- Disk devices
- Discovering and configuring newly added disk devices
- Partial device discovery
- Discovering disks and dynamically adding disk arrays
- Third-party driver coexistence
- How to administer the Device Discovery Layer
- Listing all the devices including iSCSI
- Listing all the Host Bus Adapters including iSCSI
- Listing the ports configured on a Host Bus Adapter
- Listing the targets configured from a Host Bus Adapter or a port
- Listing the devices configured from a Host Bus Adapter and target
- Getting or setting the iSCSI operational parameters
- Listing all supported disk arrays
- Excluding support for a disk array library
- Re-including support for an excluded disk array library
- Listing excluded disk arrays
- Listing supported disks in the DISKS category
- Displaying details about a supported array library
- Adding unsupported disk arrays to the DISKS category
- Removing disks from the DISKS category
- Foreign devices
- Disks under VxVM control
- Changing the disk-naming scheme
- About the Array Volume Identifier (AVID) attribute
- Discovering the association between enclosure-based disk names and OS-based disk names
- About disk installation and formatting
- Displaying or changing default disk layout attributes
- Adding a disk to VxVM
- RAM disk support in VxVM
- Veritas Volume Manager co-existence with Oracle Automatic Storage Management (ASM) disks
- Rootability
- Displaying disk information
- Controlling Powerfail Timeout
- Removing disks
- Removing a disk from VxVM control
- Removing and replacing disks
- Enabling a disk
- Taking a disk offline
- Renaming a disk
- Reserving disks
- Administering Dynamic Multi-Pathing
- How DMP works
- Disabling multi-pathing and making devices invisible to VxVM
- Enabling multi-pathing and making devices visible to VxVM
- About enabling and disabling I/O for controllers and storage processors
- About displaying DMP database information
- Displaying the paths to a disk
- Setting customized names for DMP nodes
- Administering DMP using vxdmpadm
- Retrieving information about a DMP node
- Displaying consolidated information about the DMP nodes
- Displaying the members of a LUN group
- Displaying paths controlled by a DMP node, controller, enclosure, or array port
- Displaying information about controllers
- Displaying information about enclosures
- Displaying information about array ports
- Displaying extended device attributes
- Suppressing or including devices for VxVM or DMP control
- Gathering and displaying I/O statistics
- Setting the attributes of the paths to an enclosure
- Displaying the redundancy level of a device or enclosure
- Specifying the minimum number of active paths
- Displaying the I/O policy
- Specifying the I/O policy
- Disabling I/O for paths, controllers or array ports
- Enabling I/O for paths, controllers or array ports
- Renaming an enclosure
- Configuring the response to I/O failures
- Configuring the I/O throttling mechanism
- Configuring Subpaths Failover Groups (SFG)
- Configuring Low Impact Path Probing
- Displaying recovery option values
- Configuring DMP path restoration policies
- Stopping the DMP path restoration thread
- Displaying the status of the DMP path restoration thread
- Displaying information about the DMP error-handling thread
- Configuring array policy modules
- Online dynamic reconfiguration
- About online dynamic reconfiguration
- Reconfiguring a LUN online that is under DMP control
- Removing LUNs dynamically from an existing target ID
- Adding new LUNs dynamically to a new target ID
- About detecting target ID reuse if the operating system device tree is not cleaned up
- Scanning an operating system device tree after adding or removing LUNs
- Cleaning up the operating system device tree after removing LUNs
- Upgrading the array controller firmware online
- Replacing a host bus adapter
- Creating and administering disk groups
- About disk groups
- Displaying disk group information
- Creating a disk group
- Adding a disk to a disk group
- Removing a disk from a disk group
- Moving disks between disk groups
- Deporting a disk group
- Importing a disk group
- Handling of minor number conflicts
- Moving disk groups between systems
- Handling cloned disks with duplicated identifiers
- Renaming a disk group
- Handling conflicting configuration copies
- Reorganizing the contents of disk groups
- Disabling a disk group
- Destroying a disk group
- Upgrading the disk group version
- About the configuration daemon in VxVM
- Backing up and restoring disk group configuration data
- Using vxnotify to monitor configuration changes
- Working with existing ISP disk groups
- Creating and administering subdisks and plexes
- About subdisks
- Creating subdisks
- Displaying subdisk information
- Moving subdisks
- Splitting subdisks
- Joining subdisks
- Associating subdisks with plexes
- Associating log subdisks
- Dissociating subdisks from plexes
- Removing subdisks
- Changing subdisk attributes
- About plexes
- Creating plexes
- Creating a striped plex
- Displaying plex information
- Attaching and associating plexes
- Taking plexes offline
- Detaching plexes
- Reattaching plexes
- Moving plexes
- Copying volumes to plexes
- Dissociating and removing plexes
- Changing plex attributes
- Creating volumes
- About volume creation
- Types of volume layouts
- Creating a volume
- Using vxassist
- Discovering the maximum size of a volume
- Disk group alignment constraints on volumes
- Creating a volume on any disk
- Creating a volume on specific disks
- Creating a mirrored volume
- Creating a volume with a version 0 DCO volume
- Creating a volume with a version 20 DCO volume
- Creating a volume with dirty region logging enabled
- Creating a striped volume
- Mirroring across targets, controllers or enclosures
- Mirroring across media types (SSD and HDD)
- Creating a RAID-5 volume
- Creating tagged volumes
- Creating a volume using vxmake
- Initializing and starting a volume
- Accessing a volume
- Using rules and persistent attributes to make volume allocation more efficient
- Administering volumes
- About volume administration
- Displaying volume information
- Monitoring and controlling tasks
- About SF Thin Reclamation feature
- Reclamation of storage on thin reclamation arrays
- Monitoring Thin Reclamation using the vxtask command
- Using SmartMove with Thin Provisioning
- Admin operations on an unmounted VxFS thin volume
- Stopping a volume
- Starting a volume
- Resizing a volume
- Adding a mirror to a volume
- Removing a mirror
- Adding logs and maps to volumes
- Preparing a volume for DRL and instant snapshots
- Specifying storage for version 20 DCO plexes
- Using a DCO and DCO volume with a RAID-5 volume
- Determining the DCO version number
- Determining if DRL is enabled on a volume
- Determining if DRL logging is active on a volume
- Disabling and re-enabling DRL
- Removing support for DRL and instant snapshots from a volume
- Adding traditional DRL logging to a mirrored volume
- Upgrading existing volumes to use version 20 DCOs
- Setting tags on volumes
- Changing the read policy for mirrored volumes
- Removing a volume
- Moving volumes from a VM disk
- Enabling FastResync on a volume
- Performing online relayout
- Converting between layered and non-layered volumes
- Adding a RAID-5 log
- Creating and administering volume sets
- Configuring off-host processing
- Administering hot-relocation
- About hot-relocation
- How hot-relocation works
- Configuring a system for hot-relocation
- Displaying spare disk information
- Marking a disk as a hot-relocation spare
- Removing a disk from use as a hot-relocation spare
- Excluding a disk from hot-relocation use
- Making a disk available for hot-relocation use
- Configuring hot-relocation to use only spare disks
- Moving relocated subdisks
- Modifying the behavior of hot-relocation
- Administering cluster functionality (CVM)
- Overview of clustering
- Multiple host failover configurations
- About the cluster functionality of VxVM
- CVM initialization and configuration
- Dirty region logging in cluster environments
- Administering VxVM in cluster environments
- Requesting node status and discovering the master node
- Changing the CVM master manually
- Determining if a LUN is in a shareable disk group
- Listing shared disk groups
- Creating a shared disk group
- Importing disk groups as shared
- Handling cloned disks in a shared disk group
- Converting a disk group from shared to private
- Moving objects between shared disk groups
- Splitting shared disk groups
- Joining shared disk groups
- Changing the activation mode on a shared disk group
- Setting the disk detach policy on a shared disk group
- Setting the disk group failure policy on a shared disk group
- Creating volumes with exclusive open access by a node
- Setting exclusive open access to a volume by a node
- Displaying the cluster protocol version
- Displaying the supported cluster protocol version range
- Recovering volumes in shared disk groups
- Obtaining cluster performance statistics
- Administering CVM from the slave node
- Administering sites and remote mirrors
- About sites and remote mirrors
- Making an existing disk group site consistent
- Configuring a new disk group as a Remote Mirror configuration
- Fire drill - testing the configuration
- Changing the site name
- Administering the Remote Mirror configuration
- Examples of storage allocation by specifying sites
- Displaying site information
- Failure and recovery scenarios
- Performance monitoring and tuning
- Appendix A. Using Veritas Volume Manager commands
- Appendix B. Configuring Veritas Volume Manager
- Glossary
Using persistent attributes
You can define volume allocation attributes so they can be reused in subsequent operations. These attributes are called persistent attributes, and they are stored in a set of hidden volume tags. The persist attribute determines whether an attribute persists, and how the current command might use or modify preexisting persisted attributes. You can specify persistence rules in defaults files, in rules, or on the command line. For more information, see the vxassist manual page.
To illustrate how persistent attributes work, this section uses the following vxsf_rules file. It contains a rule, rule1, which defines the mediatype attribute. This rule also uses the persist attribute to make the mediatype attribute persistent.
# cat /etc/default/vxsf_rules
volume rule rule1 { mediatype:ssd persist=extended }

The following command confirms that LUNs ibm_ds8x000_0266 and ibm_ds8x000_0268 are solid-state disk (SSD) devices.
# vxdisk listtag
DEVICE           NAME         VALUE
ibm_ds8x000_0266 vxmediatype  ssd
ibm_ds8x000_0268 vxmediatype  ssd
The following command creates a volume, vol1, in the disk group dg3. Because rule1 is specified on the command line, its attributes are applied to vol1, and persist=extended causes them to be stored with the volume.
# vxassist -g dg3 make vol1 100m rule=rule1
The following command shows that the volume vol1 is created on the SSD device ibm_ds8x000_0266, as specified in rule1.
# vxprint -g dg3
TY NAME                ASSOC            KSTATE  LENGTH  PLOFFS STATE  TUTIL0 PUTIL0
dg dg3                 dg3              -       -       -      -      -      -
dm ibm_ds8x000_0266    ibm_ds8x000_0266 -       2027264 -      -      -      -
dm ibm_ds8x000_0267    ibm_ds8x000_0267 -       2027264 -      -      -      -
dm ibm_ds8x000_0268    ibm_ds8x000_0268 -       2027264 -      -      -      -
v  vol1                fsgen            ENABLED 204800  -      ACTIVE -      -
pl vol1-01             vol1             ENABLED 204800  -      ACTIVE -      -
sd ibm_ds8x000_0266-01 vol1-01          ENABLED 204800  0      -      -      -
The following command displays the attributes that are defined in rule1.
# vxassist -g dg3 help showattrs rule=rule1
alloc=mediatype:ssd
persist=extended
If no persistent attributes were defined, the following command could grow vol1 onto hard disk drive (HDD) devices. However, because mediatype:ssd was made persistent when vol1 was created, the command honors that original intent and grows the volume on SSD devices.
# vxassist -g dg3 growby vol1 1g
The following vxprint command confirms that the volume was grown on SSD devices.
# vxprint -g dg3
TY NAME                ASSOC            KSTATE  LENGTH  PLOFFS  STATE  TUTIL0 PUTIL0
dg dg3                 dg3              -       -       -       -      -      -
dm ibm_ds8x000_0266    ibm_ds8x000_0266 -       2027264 -       -      -      -
dm ibm_ds8x000_0267    ibm_ds8x000_0267 -       2027264 -       -      -      -
dm ibm_ds8x000_0268    ibm_ds8x000_0268 -       2027264 -       -      -      -
v  vol1                fsgen            ENABLED 2301952 -       ACTIVE -      -
pl vol1-01             vol1             ENABLED 2301952 -       ACTIVE -      -
sd ibm_ds8x000_0266-01 vol1-01          ENABLED 2027264 0       -      -      -
sd ibm_ds8x000_0268-01 vol1-01          ENABLED 274688  2027264 -      -      -