Veritas™ Volume Manager Administrator's Guide
- Understanding Veritas Volume Manager
- About Veritas Volume Manager
- VxVM and the operating system
- How VxVM handles storage management
- Volume layouts in VxVM
- Online relayout
- Volume resynchronization
- Dirty region logging
- Volume snapshots
- FastResync
- Hot-relocation
- Volume sets
- Provisioning new usable storage
- Administering disks
- About disk management
- Disk devices
- Discovering and configuring newly added disk devices
- Partial device discovery
- Discovering disks and dynamically adding disk arrays
- Third-party driver coexistence
- How to administer the Device Discovery Layer
- Listing all the devices including iSCSI
- Listing all the Host Bus Adapters including iSCSI
- Listing the ports configured on a Host Bus Adapter
- Listing the targets configured from a Host Bus Adapter or a port
- Listing the devices configured from a Host Bus Adapter and target
- Getting or setting the iSCSI operational parameters
- Listing all supported disk arrays
- Excluding support for a disk array library
- Re-including support for an excluded disk array library
- Listing excluded disk arrays
- Listing supported disks in the DISKS category
- Displaying details about a supported array library
- Adding unsupported disk arrays to the DISKS category
- Removing disks from the DISKS category
- Foreign devices
- Disks under VxVM control
- Changing the disk-naming scheme
- About the Array Volume Identifier (AVID) attribute
- Discovering the association between enclosure-based disk names and OS-based disk names
- About disk installation and formatting
- Displaying or changing default disk layout attributes
- Adding a disk to VxVM
- RAM disk support in VxVM
- Veritas Volume Manager co-existence with Oracle Automatic Storage Management (ASM) disks
- Rootability
- Displaying disk information
- Controlling Powerfail Timeout
- Removing disks
- Removing a disk from VxVM control
- Removing and replacing disks
- Enabling a disk
- Taking a disk offline
- Renaming a disk
- Reserving disks
- Administering Dynamic Multi-Pathing
- How DMP works
- Disabling multi-pathing and making devices invisible to VxVM
- Enabling multi-pathing and making devices visible to VxVM
- About enabling and disabling I/O for controllers and storage processors
- About displaying DMP database information
- Displaying the paths to a disk
- Setting customized names for DMP nodes
- Administering DMP using vxdmpadm
- Retrieving information about a DMP node
- Displaying consolidated information about the DMP nodes
- Displaying the members of a LUN group
- Displaying paths controlled by a DMP node, controller, enclosure, or array port
- Displaying information about controllers
- Displaying information about enclosures
- Displaying information about array ports
- Displaying extended device attributes
- Suppressing or including devices for VxVM or DMP control
- Gathering and displaying I/O statistics
- Setting the attributes of the paths to an enclosure
- Displaying the redundancy level of a device or enclosure
- Specifying the minimum number of active paths
- Displaying the I/O policy
- Specifying the I/O policy
- Disabling I/O for paths, controllers or array ports
- Enabling I/O for paths, controllers or array ports
- Renaming an enclosure
- Configuring the response to I/O failures
- Configuring the I/O throttling mechanism
- Configuring Subpaths Failover Groups (SFG)
- Configuring Low Impact Path Probing
- Displaying recovery option values
- Configuring DMP path restoration policies
- Stopping the DMP path restoration thread
- Displaying the status of the DMP path restoration thread
- Displaying information about the DMP error-handling thread
- Configuring array policy modules
- Online dynamic reconfiguration
- About online dynamic reconfiguration
- Reconfiguring a LUN online that is under DMP control
- Removing LUNs dynamically from an existing target ID
- Adding new LUNs dynamically to a new target ID
- About detecting target ID reuse if the operating system device tree is not cleaned up
- Scanning an operating system device tree after adding or removing LUNs
- Cleaning up the operating system device tree after removing LUNs
- Upgrading the array controller firmware online
- Replacing a host bus adapter
- Creating and administering disk groups
- About disk groups
- Displaying disk group information
- Creating a disk group
- Adding a disk to a disk group
- Removing a disk from a disk group
- Moving disks between disk groups
- Deporting a disk group
- Importing a disk group
- Handling of minor number conflicts
- Moving disk groups between systems
- Handling cloned disks with duplicated identifiers
- Renaming a disk group
- Handling conflicting configuration copies
- Reorganizing the contents of disk groups
- Disabling a disk group
- Destroying a disk group
- Upgrading the disk group version
- About the configuration daemon in VxVM
- Backing up and restoring disk group configuration data
- Using vxnotify to monitor configuration changes
- Working with existing ISP disk groups
- Creating and administering subdisks and plexes
- About subdisks
- Creating subdisks
- Displaying subdisk information
- Moving subdisks
- Splitting subdisks
- Joining subdisks
- Associating subdisks with plexes
- Associating log subdisks
- Dissociating subdisks from plexes
- Removing subdisks
- Changing subdisk attributes
- About plexes
- Creating plexes
- Creating a striped plex
- Displaying plex information
- Attaching and associating plexes
- Taking plexes offline
- Detaching plexes
- Reattaching plexes
- Moving plexes
- Copying volumes to plexes
- Dissociating and removing plexes
- Changing plex attributes
- Creating volumes
- About volume creation
- Types of volume layouts
- Creating a volume
- Using vxassist
- Discovering the maximum size of a volume
- Disk group alignment constraints on volumes
- Creating a volume on any disk
- Creating a volume on specific disks
- Creating a mirrored volume
- Creating a volume with a version 0 DCO volume
- Creating a volume with a version 20 DCO volume
- Creating a volume with dirty region logging enabled
- Creating a striped volume
- Mirroring across targets, controllers or enclosures
- Mirroring across media types (SSD and HDD)
- Creating a RAID-5 volume
- Creating tagged volumes
- Creating a volume using vxmake
- Initializing and starting a volume
- Accessing a volume
- Using rules and persistent attributes to make volume allocation more efficient
- Administering volumes
- About volume administration
- Displaying volume information
- Monitoring and controlling tasks
- About SF Thin Reclamation feature
- Reclamation of storage on thin reclamation arrays
- Monitoring Thin Reclamation using the vxtask command
- Using SmartMove with Thin Provisioning
- Admin operations on an unmounted VxFS thin volume
- Stopping a volume
- Starting a volume
- Resizing a volume
- Adding a mirror to a volume
- Removing a mirror
- Adding logs and maps to volumes
- Preparing a volume for DRL and instant snapshots
- Specifying storage for version 20 DCO plexes
- Using a DCO and DCO volume with a RAID-5 volume
- Determining the DCO version number
- Determining if DRL is enabled on a volume
- Determining if DRL logging is active on a volume
- Disabling and re-enabling DRL
- Removing support for DRL and instant snapshots from a volume
- Adding traditional DRL logging to a mirrored volume
- Upgrading existing volumes to use version 20 DCOs
- Setting tags on volumes
- Changing the read policy for mirrored volumes
- Removing a volume
- Moving volumes from a VM disk
- Enabling FastResync on a volume
- Performing online relayout
- Converting between layered and non-layered volumes
- Adding a RAID-5 log
- Creating and administering volume sets
- Configuring off-host processing
- Administering hot-relocation
- About hot-relocation
- How hot-relocation works
- Configuring a system for hot-relocation
- Displaying spare disk information
- Marking a disk as a hot-relocation spare
- Removing a disk from use as a hot-relocation spare
- Excluding a disk from hot-relocation use
- Making a disk available for hot-relocation use
- Configuring hot-relocation to use only spare disks
- Moving relocated subdisks
- Modifying the behavior of hot-relocation
- Administering cluster functionality (CVM)
- Overview of clustering
- Multiple host failover configurations
- About the cluster functionality of VxVM
- CVM initialization and configuration
- Dirty region logging in cluster environments
- Administering VxVM in cluster environments
- Requesting node status and discovering the master node
- Changing the CVM master manually
- Determining if a LUN is in a shareable disk group
- Listing shared disk groups
- Creating a shared disk group
- Importing disk groups as shared
- Handling cloned disks in a shared disk group
- Converting a disk group from shared to private
- Moving objects between shared disk groups
- Splitting shared disk groups
- Joining shared disk groups
- Changing the activation mode on a shared disk group
- Setting the disk detach policy on a shared disk group
- Setting the disk group failure policy on a shared disk group
- Creating volumes with exclusive open access by a node
- Setting exclusive open access to a volume by a node
- Displaying the cluster protocol version
- Displaying the supported cluster protocol version range
- Recovering volumes in shared disk groups
- Obtaining cluster performance statistics
- Administering CVM from the slave node
- Administering sites and remote mirrors
- About sites and remote mirrors
- Making an existing disk group site consistent
- Configuring a new disk group as a Remote Mirror configuration
- Fire drill - testing the configuration
- Changing the site name
- Administering the Remote Mirror configuration
- Examples of storage allocation by specifying sites
- Displaying site information
- Failure and recovery scenarios
- Performance monitoring and tuning
- Appendix A. Using Veritas Volume Manager commands
- Appendix B. Configuring Veritas Volume Manager
- Glossary
Upgrading the array controller firmware online
Storage array subsystems need controller code (firmware) upgrades to apply fixes, patches, or new features. You can perform these upgrades online, while the file system is mounted and I/O is being served to the storage.
Legacy storage subsystems contain two controllers for redundancy, and an online upgrade is performed one controller at a time. DMP fails over all I/O to the second controller while the first controller undergoes an Online Controller Upgrade. After the first controller has completely staged the new code, it reboots, resets, and comes online running the new version. The second controller then goes through the same process while I/O fails over to the first controller.
Note:
Throughout this process, application I/O is not affected.
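You can observe the failover from the host while each controller is being upgraded. For example, assuming the array's paths are presented through controllers named c2 and c3 (hypothetical names; the controller names on your system will differ), list the paths on each controller before and during the upgrade:
# vxdmpadm getsubpaths ctlr=c2
# vxdmpadm getsubpaths ctlr=c3
Paths through the controller that is currently being upgraded appear as disabled while DMP serves I/O through the other controller.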
Array vendors have different names for this process. For example, EMC calls it a nondisruptive upgrade (NDU) for CLARiiON arrays.
A/A type arrays require no special handling during this online upgrade process. For A/P, A/PF, and ALUA type arrays, DMP performs array-specific handling through vendor-specific array policy modules (APMs) during an online controller code upgrade.
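Before you begin the upgrade on an A/P, A/PF, or ALUA array, you can list the array policy modules that are configured on the system, together with the array types they handle:
# vxdmpadm listapm all
See "Configuring array policy modules" for details.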
When a controller resets and reboots during a code upgrade, DMP detects this state through the SCSI Status. DMP immediately fails over all I/O to the next controller.
If the array does not fully support NDU, all paths to the controllers may be unavailable for I/O for a short period of time. Before beginning the upgrade, set the dmp_lun_retry_timeout tunable to a period greater than the time that you expect the controllers to be unavailable for I/O. DMP retries the I/Os until the end of the dmp_lun_retry_timeout period, or until the I/O succeeds, whichever happens first. Therefore, you can perform the firmware upgrade without interrupting the application I/Os.
For example, if you expect the paths to be unavailable for I/O for 300 seconds, use the following command:
# vxdmpadm settune dmp_lun_retry_timeout=300
DMP retries the I/Os for 300 seconds, or until the I/O succeeds.
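To confirm the value that is currently in effect, display the tunable:
# vxdmpadm gettune dmp_lun_retry_timeout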
To verify which arrays support Online Controller Upgrade or NDU, see the hardware compatibility list (HCL) at the following URL: