Storage Foundation 7.3 Administrator's Guide - AIX
Calculating I/O temperature and access temperature
An important application of VxFS SmartTier is automating the relocation of inactive files to lower cost storage. If a file has not been accessed for the period of time specified in the <ACCAGE> element, a scan of the file system should schedule the file for relocation to a lower tier of storage. However, time since last access is inadequate as the only criterion for activity-based relocation.
Time since last access is inadequate as the only criterion for activity-based relocation for the following reasons:
- Access age is a binary measure. The time since last access of a file is computed by subtracting the POSIX atime in the file's metadata from the time at which the fsppadm enforce command is issued. If a file is opened the day before the fsppadm enforce command is issued, its time since last access is one day, even though it may have been inactive for the month preceding. If the intent of a policy rule is to relocate inactive files to lower tier volumes, the rule performs badly against files that happen to be accessed, however casually, within the interval defined by the value of the <ACCAGE> parameter.
- Access age is a poor indicator of resumption of significant activity. Using ACCAGE, the time since last access, as a criterion for relocating inactive files to lower tier volumes may fail to schedule some relocations that should be performed, but at least it errs on the side of performing fewer relocations than necessary. Using ACCAGE as a criterion for relocating previously inactive files that have become active is worse, because it is likely to schedule relocation activity that is not warranted. If a policy rule's intent is to cause files that have experienced I/O activity in the recent past to be relocated to higher performing, perhaps more failure tolerant storage, ACCAGE is too coarse a filter. For example, in a rule specifying that files on tier2 volumes that have been accessed within the last three days should be relocated to tier1 volumes, no distinction is made between a file that was browsed by a single user and a file that was actually used intensively by applications.
SmartTier implements the concept of I/O temperature and access temperature to overcome these deficiencies. A file's I/O temperature is equal to the number of bytes transferred to or from it over a specified period of time divided by the size of the file. For example, if a file occupies one megabyte of storage at the time of an fsppadm enforce operation and the data in the file has been completely read or written 15 times within the last three days, VxFS calculates its 3-day average I/O temperature to be 5 (15 MB of I/O ÷ 1 MB file size ÷ 3 days).
Similarly, a file's average access temperature is the number of read or write requests made to it over a specified number of 24-hour periods divided by the number of periods. Unlike I/O temperature, access temperature is unrelated to file size. A large file to which 20 I/O requests are made over a 2-day period has the same average access temperature as a small file accessed 20 times over a 2-day period.
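Both definitions reduce to simple arithmetic. The following Python sketch is illustrative only; the helper names and megabyte-based units are assumptions for the example, not VxFS internals. It reproduces the two calculations above:

# Illustrative helpers that mirror the definitions above; the names and
# MB-based units are assumptions, not VxFS internals.

def io_temperature(bytes_transferred_mb, file_size_mb, period_days):
    # Bytes moved to or from the file over the period, divided by the
    # file's size, divided by the number of days in the period.
    return bytes_transferred_mb / file_size_mb / period_days

def access_temperature(request_count, period_days):
    # Read/write requests over the period, divided by the number of
    # 24-hour periods; file size plays no part.
    return request_count / period_days

print(io_temperature(15, 1, 3))   # 1 MB file read/written 15 times in 3 days -> 5.0
print(access_temperature(20, 2))  # 20 requests over 2 days -> 10.0, regardless of size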
If a file system's active placement policy includes any <IOTEMP> or <ACCESSTEMP> clauses, VxFS begins policy enforcement by using information in the file system's FCL file to calculate average I/O activity against all files in the file system during the longest <PERIOD> specified in the policy. Shorter specified periods are ignored. VxFS uses these calculations to qualify files for I/O temperature-based relocation and deletion.
See "About the Veritas File System File Change Log file."
Note:
If the FCL is turned off, I/O temperature-based relocation is not accurate, and the fsppadm enforce command displays a warning when you invoke it.
As its name implies, the File Change Log (FCL) records information about changes made to files in a VxFS file system. In addition to recording creations, deletions, and extensions, the FCL periodically captures the cumulative amount of I/O activity (number of bytes read and written) on a file-by-file basis. File I/O activity is recorded in the FCL each time a file is opened or closed, as well as at timed intervals to capture information about files that remain open for long periods.
If a file system's active file placement policy contains <IOTEMP> clauses, execution of the fsppadm enforce command begins with a scan of the FCL to extract I/O activity information over the period of interest for the policy. The period of interest is the interval between the time at which the fsppadm enforce command was issued and that time minus the largest interval value specified in any <PERIOD> element in the active policy.
For files with I/O activity during the largest interval, VxFS computes an approximation of the amount of read, write, and total data transfer (the sum of the two) activity by subtracting the I/O levels in the oldest FCL record that pertains to the file from those in the newest. It then computes each file's I/O temperature by dividing its I/O activity by its size at Tscan, the time at which the scan occurs. Dividing by file size is an implicit acknowledgement that relocating larger files consumes more I/O resources than relocating smaller ones. Under this algorithm, larger files must have more activity against them to reach a given I/O temperature, and thereby justify the resource cost of relocation.
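The following Python sketch outlines this approximation under stated assumptions: the FclRecord layout and its field names are invented for illustration, because the real FCL record format is internal to VxFS.

# Hypothetical FCL-style records carrying cumulative per-file I/O
# counters; the layout and field names are assumptions, not the actual
# FCL record format.
from dataclasses import dataclass

@dataclass
class FclRecord:
    timestamp: float       # seconds since the epoch
    cum_read_bytes: int    # cumulative bytes read from the file
    cum_write_bytes: int   # cumulative bytes written to the file

def io_temperature_from_fcl(records, size_at_tscan, period_days):
    # Approximate I/O activity as the newest counters minus the oldest
    # within the period of interest, then divide by the file's size at
    # Tscan (and by the period, per the earlier per-day definition).
    oldest, newest = records[0], records[-1]  # records sorted by timestamp
    read_bytes = newest.cum_read_bytes - oldest.cum_read_bytes
    write_bytes = newest.cum_write_bytes - oldest.cum_write_bytes
    total_bytes = read_bytes + write_bytes    # "nrwbytes": reads plus writes
    return total_bytes / size_at_tscan / period_days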
While this computation is an approximation in several ways, it represents an easy-to-compute and, more importantly, unbiased estimate of relative recent I/O activity upon which reasonable relocation decisions can be based.
File relocation and deletion decisions can be based on read, write, or total I/O activity.
The following XML snippet illustrates the use of IOTEMP in a policy rule to specify relocation of low activity files from tier1 volumes to tier2 volumes:
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier1</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrwbytes}">
<MAX Flags="lt">3</MAX>
<PERIOD Units="days">4</PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>

This snippet specifies that files to which the rule applies should be relocated from tier1 volumes to tier2 volumes if their I/O temperatures fall below 3 over a period of 4 days. The Type="nrwbytes" XML attribute specifies that total data transfer activity, which is the sum of bytes read and bytes written, should be used in the computation. For example, a 50 megabyte file that experienced less than 150 megabytes of data transfer over the 4-day period immediately preceding the fsppadm enforce scan would be a candidate for relocation. VxFS considers files that experience no activity over the period of interest to have an I/O temperature of zero. VxFS relocates qualifying files in the order in which it encounters the files in its scan of the file system directory tree.
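As a quick check of the example, the arithmetic below applies the per-day averaging definition given earlier in this section; it is a sketch, not VxFS code:

# 50 MB file with under 150 MB transferred over the rule's 4-day period.
size_mb, transferred_mb, period_days = 50, 150, 4
temperature = transferred_mb / size_mb / period_days   # 0.75
print(temperature < 3)   # True: below the MAX threshold of 3, so a candidate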
Using I/O temperature or access temperature rather than a binary indication of activity, such as the POSIX atime or mtime, minimizes the chance that files which were accessed only occasionally during the period of interest escape relocation. A large file that has had only a few bytes transferred to or from it would have a low I/O temperature, and would therefore be a candidate for relocation to tier2 volumes, even if the activity was very recent.
However, the greater value of I/O temperature or access temperature as a file relocation criterion lies in upward relocation: detecting increasing levels of I/O activity against files that had previously been relocated to lower tiers in a storage hierarchy due to inactivity or low temperatures, and relocating them to higher tiers in the storage hierarchy.
The following XML snippet illustrates relocating files from tier2 volumes to tier1 volumes when the activity level against them increases:
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier2</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrbytes">
<MAX Flags="gt">5</MAX>
<PERIOD Units="days">2</PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>

The <RELOCATE> statement specifies that files on tier2 volumes whose I/O temperature, as calculated using the number of bytes read, is above 5 over a 2-day period are to be relocated to tier1 volumes. Bytes written to the file during the period of interest are not part of this calculation.
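Taken together, the two snippets form a demote/promote pair: a sustained-low-activity test moves files down a tier, and a short-term-high-activity test moves them back up. The following sketch restates the two tests with the thresholds and periods from the snippets; the function names are assumptions, not VxFS code:

def should_demote(total_bytes_mb, size_mb, period_days=4):
    # tier1 -> tier2: total (read + write) I/O temperature below 3 over 4 days.
    return total_bytes_mb / size_mb / period_days < 3

def should_promote(read_bytes_mb, size_mb, period_days=2):
    # tier2 -> tier1: read-only ("nrbytes") I/O temperature above 5 over 2 days.
    return read_bytes_mb / size_mb / period_days > 5

Because promotion demands a higher temperature over a shorter period than demotion, a file must show genuinely intense recent activity before it moves back up, which limits needless movement between tiers.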
Using I/O temperature rather than a binary indicator of activity as a criterion for file relocation gives administrators a granular level of control over automated file relocation that can be used to attune policies to application requirements. For example, specifying a large value in the <PERIOD> element of an upward relocation statement prevents files from being relocated unless I/O activity against them is sustained. Alternatively, specifying a high temperature and a short period tends to relocate files based on short-term intensity of I/O activity against them.
I/O temperature and access temperature calculations use an sqlite3 database to build a temporary table indexed by inode. This temporary table is used to filter files based on I/O temperature and access temperature. The temporary table is stored in the database file .__fsppadm_fcliotemp.db, which resides in the lost+found directory of the mount point.