Veritas™ Volume Manager Administrator's Guide
- Understanding Veritas Volume Manager
- About Veritas Volume Manager
- VxVM and the operating system
- How VxVM handles storage management
- Volume layouts in VxVM
- Online relayout
- Volume resynchronization
- Dirty region logging
- Volume snapshots
- FastResync
- Hot-relocation
- Volume sets
- Provisioning new usable storage
- Administering disks
- About disk management
- Disk devices
- Discovering and configuring newly added disk devices
- Partial device discovery
- Discovering disks and dynamically adding disk arrays
- Third-party driver coexistence
- How to administer the Device Discovery Layer
- Listing all the devices including iSCSI
- Listing all the Host Bus Adapters including iSCSI
- Listing the ports configured on a Host Bus Adapter
- Listing the targets configured from a Host Bus Adapter or a port
- Listing the devices configured from a Host Bus Adapter and target
- Getting or setting the iSCSI operational parameters
- Listing all supported disk arrays
- Excluding support for a disk array library
- Re-including support for an excluded disk array library
- Listing excluded disk arrays
- Listing supported disks in the DISKS category
- Displaying details about a supported array library
- Adding unsupported disk arrays to the DISKS category
- Removing disks from the DISKS category
- Foreign devices
- Disks under VxVM control
- Changing the disk-naming scheme
- About the Array Volume Identifier (AVID) attribute
- Discovering the association between enclosure-based disk names and OS-based disk names
- About disk installation and formatting
- Displaying or changing default disk layout attributes
- Adding a disk to VxVM
- RAM disk support in VxVM
- Veritas Volume Manager co-existence with Oracle Automatic Storage Management (ASM) disks
- Rootability
- Displaying disk information
- Controlling Powerfail Timeout
- Removing disks
- Removing a disk from VxVM control
- Removing and replacing disks
- Enabling a disk
- Taking a disk offline
- Renaming a disk
- Reserving disks
- Administering Dynamic Multi-Pathing
- How DMP works
- Disabling multi-pathing and making devices invisible to VxVM
- Enabling multi-pathing and making devices visible to VxVM
- About enabling and disabling I/O for controllers and storage processors
- About displaying DMP database information
- Displaying the paths to a disk
- Setting customized names for DMP nodes
- Administering DMP using vxdmpadm
- Retrieving information about a DMP node
- Displaying consolidated information about the DMP nodes
- Displaying the members of a LUN group
- Displaying paths controlled by a DMP node, controller, enclosure, or array port
- Displaying information about controllers
- Displaying information about enclosures
- Displaying information about array ports
- Displaying extended device attributes
- Suppressing or including devices for VxVM or DMP control
- Gathering and displaying I/O statistics
- Setting the attributes of the paths to an enclosure
- Displaying the redundancy level of a device or enclosure
- Specifying the minimum number of active paths
- Displaying the I/O policy
- Specifying the I/O policy
- Disabling I/O for paths, controllers or array ports
- Enabling I/O for paths, controllers or array ports
- Renaming an enclosure
- Configuring the response to I/O failures
- Configuring the I/O throttling mechanism
- Configuring Subpaths Failover Groups (SFG)
- Configuring Low Impact Path Probing
- Displaying recovery option values
- Configuring DMP path restoration policies
- Stopping the DMP path restoration thread
- Displaying the status of the DMP path restoration thread
- Displaying information about the DMP error-handling thread
- Configuring array policy modules
- Online dynamic reconfiguration
- About online dynamic reconfiguration
- Reconfiguring a LUN online that is under DMP control
- Removing LUNs dynamically from an existing target ID
- Adding new LUNs dynamically to a new target ID
- About detecting target ID reuse if the operating system device tree is not cleaned up
- Scanning an operating system device tree after adding or removing LUNs
- Cleaning up the operating system device tree after removing LUNs
- Upgrading the array controller firmware online
- Replacing a host bus adapter
- Creating and administering disk groups
- About disk groups
- Displaying disk group information
- Creating a disk group
- Adding a disk to a disk group
- Removing a disk from a disk group
- Moving disks between disk groups
- Deporting a disk group
- Importing a disk group
- Handling of minor number conflicts
- Moving disk groups between systems
- Handling cloned disks with duplicated identifiers
- Renaming a disk group
- Handling conflicting configuration copies
- Reorganizing the contents of disk groups
- Disabling a disk group
- Destroying a disk group
- Upgrading the disk group version
- About the configuration daemon in VxVM
- Backing up and restoring disk group configuration data
- Using vxnotify to monitor configuration changes
- Working with existing ISP disk groups
- Creating and administering subdisks and plexes
- About subdisks
- Creating subdisks
- Displaying subdisk information
- Moving subdisks
- Splitting subdisks
- Joining subdisks
- Associating subdisks with plexes
- Associating log subdisks
- Dissociating subdisks from plexes
- Removing subdisks
- Changing subdisk attributes
- About plexes
- Creating plexes
- Creating a striped plex
- Displaying plex information
- Attaching and associating plexes
- Taking plexes offline
- Detaching plexes
- Reattaching plexes
- Moving plexes
- Copying volumes to plexes
- Dissociating and removing plexes
- Changing plex attributes
- Creating volumes
- About volume creation
- Types of volume layouts
- Creating a volume
- Using vxassist
- Discovering the maximum size of a volume
- Disk group alignment constraints on volumes
- Creating a volume on any disk
- Creating a volume on specific disks
- Creating a mirrored volume
- Creating a volume with a version 0 DCO volume
- Creating a volume with a version 20 DCO volume
- Creating a volume with dirty region logging enabled
- Creating a striped volume
- Mirroring across targets, controllers or enclosures
- Mirroring across media types (SSD and HDD)
- Creating a RAID-5 volume
- Creating tagged volumes
- Creating a volume using vxmake
- Initializing and starting a volume
- Accessing a volume
- Using rules and persistent attributes to make volume allocation more efficient
- Administering volumes
- About volume administration
- Displaying volume information
- Monitoring and controlling tasks
- About SF Thin Reclamation feature
- Reclamation of storage on thin reclamation arrays
- Monitoring Thin Reclamation using the vxtask command
- Using SmartMove with Thin Provisioning
- Admin operations on an unmounted VxFS thin volume
- Stopping a volume
- Starting a volume
- Resizing a volume
- Adding a mirror to a volume
- Removing a mirror
- Adding logs and maps to volumes
- Preparing a volume for DRL and instant snapshots
- Specifying storage for version 20 DCO plexes
- Using a DCO and DCO volume with a RAID-5 volume
- Determining the DCO version number
- Determining if DRL is enabled on a volume
- Determining if DRL logging is active on a volume
- Disabling and re-enabling DRL
- Removing support for DRL and instant snapshots from a volume
- Adding traditional DRL logging to a mirrored volume
- Upgrading existing volumes to use version 20 DCOs
- Setting tags on volumes
- Changing the read policy for mirrored volumes
- Removing a volume
- Moving volumes from a VM disk
- Enabling FastResync on a volume
- Performing online relayout
- Converting between layered and non-layered volumes
- Adding a RAID-5 log
- Creating and administering volume sets
- Configuring off-host processing
- Administering hot-relocation
- About hot-relocation
- How hot-relocation works
- Configuring a system for hot-relocation
- Displaying spare disk information
- Marking a disk as a hot-relocation spare
- Removing a disk from use as a hot-relocation spare
- Excluding a disk from hot-relocation use
- Making a disk available for hot-relocation use
- Configuring hot-relocation to use only spare disks
- Moving relocated subdisks
- Modifying the behavior of hot-relocation
- Administering cluster functionality (CVM)
- Overview of clustering
- Multiple host failover configurations
- About the cluster functionality of VxVM
- CVM initialization and configuration
- Dirty region logging in cluster environments
- Administering VxVM in cluster environments
- Requesting node status and discovering the master node
- Changing the CVM master manually
- Determining if a LUN is in a shareable disk group
- Listing shared disk groups
- Creating a shared disk group
- Importing disk groups as shared
- Handling cloned disks in a shared disk group
- Converting a disk group from shared to private
- Moving objects between shared disk groups
- Splitting shared disk groups
- Joining shared disk groups
- Changing the activation mode on a shared disk group
- Setting the disk detach policy on a shared disk group
- Setting the disk group failure policy on a shared disk group
- Creating volumes with exclusive open access by a node
- Setting exclusive open access to a volume by a node
- Displaying the cluster protocol version
- Displaying the supported cluster protocol version range
- Recovering volumes in shared disk groups
- Obtaining cluster performance statistics
- Administering CVM from the slave node
- Administering sites and remote mirrors
- About sites and remote mirrors
- Making an existing disk group site consistent
- Configuring a new disk group as a Remote Mirror configuration
- Fire drill - testing the configuration
- Changing the site name
- Administering the Remote Mirror configuration
- Examples of storage allocation by specifying sites
- Displaying site information
- Failure and recovery scenarios
- Performance monitoring and tuning
- Appendix A. Using Veritas Volume Manager commands
- Appendix B. Configuring Veritas Volume Manager
- Glossary
Overview of cluster volume management
Over the past several years, parallel applications using shared data access have become increasingly popular. Examples of commercially available applications include Oracle Real Application Clusters™ (RAC), Sybase Adaptive Server®, and Informatica Enterprise Cluster Edition. In addition, the semantics of Network File System (NFS), File Transfer Protocol (FTP), and Network News Transfer Protocol (NNTP) allow these workloads to be served by shared data access clusters. Finally, numerous organizations have developed internal applications that take advantage of shared data access clusters.
The cluster functionality of VxVM (CVM) works together with the cluster monitor daemon that is provided by VCS or by the host operating system. The cluster monitor informs VxVM of changes in cluster membership. Each node starts up independently and has its own cluster monitor plus its own copies of the operating system and VxVM/CVM. When a node joins a cluster, it gains access to shared disk groups and volumes. When a node leaves a cluster, it loses access to these shared objects. A node joins a cluster when you issue the appropriate command on that node.
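For example, on a system that uses VCS as the cluster monitor, a node typically joins or leaves the cluster at the VxVM level with the vxclustadm command; the exact options depend on the release and on the cluster monitor in use, so treat the following as an illustrative sketch:
# vxclustadm -m vcs -t gab startnode
# vxclustadm stopnode
In a VCS configuration, these commands are normally run by the CVM agents during cluster startup and shutdown rather than invoked by hand.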
Warning:
The CVM functionality of VxVM is supported only when used in conjunction with a cluster monitor that has been configured correctly to work with VxVM.
Figure: Example of a 4-node CVM cluster shows a simple cluster arrangement consisting of four nodes with similar or identical hardware characteristics (CPUs, RAM and host adapters), and configured with identical software (including the operating system).
To the cluster monitor, all nodes are the same. VxVM objects configured within shared disk groups can potentially be accessed by all nodes that join the cluster. However, the CVM functionality of VxVM requires that one node act as the master node; all other nodes in the cluster are slave nodes. Any node is capable of being the master node, which is responsible for coordinating certain VxVM activities.
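To see whether the node on which you run the command is currently the master or a slave, you can use the vxdctl utility. The output shown here is illustrative; the node name is a placeholder:
# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: node0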
In this example, node 0 is configured as the CVM master node and nodes 1, 2 and 3 are configured as CVM slave nodes. The nodes are fully connected by a private network and they are also separately connected to shared external storage (either disk arrays or JBODs: just a bunch of disks) via SCSI or Fibre Channel in a Storage Area Network (SAN).
In this example, each node has two independent paths to the disks, which are configured in one or more cluster-shareable disk groups. Multiple paths provide resilience against failure of one of the paths, but this is not a requirement for cluster configuration. Disks may also be connected by single paths.
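If DMP is managing the paths, you can confirm from any node that both paths to a disk are available. The DMP node name below is only an example and varies by platform and naming scheme:
# vxdmpadm getsubpaths dmpnodename=c2t66d0s2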
The private network allows the nodes to share information about system resources and about each other's state. Using the private network, any node can recognize which other nodes are currently active, which are joining or leaving the cluster, and which have failed. The private network requires at least two communication channels to provide redundancy against one of the channels failing. If only one channel were used, its failure would be indistinguishable from node failure - a condition known as network partitioning.
You can run commands that configure or reconfigure VxVM objects on any node in the cluster. These tasks include setting up shared disk groups, creating and reconfiguring volumes, and performing snapshot operations.
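As a sketch of such a task, a shared disk group and a volume within it might be created as follows; the disk group, disk access name, and volume names are placeholders, and depending on the release some of these operations may need to be run on the master node:
# vxdg -s init sharedg sharedg01=c1t2d0
# vxassist -g sharedg make vol01 10g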
The first node to join a cluster performs the function of master node. If the master node leaves a cluster, one of the slave nodes is chosen to be the new master.
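The current role of each node can be listed with vxclustadm, and on recent releases the master role can also be moved manually (see "Changing the CVM master manually"). The node names and output format below are illustrative:
# vxclustadm nidmap
Name     CVM Nid    CM Nid    State
node0    0          0         Joined: Master
node1    1          1         Joined: Slave
# vxclustadm setmaster node1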