Storage Foundation Cluster File System High Availability 8.0.2 Administrator's Guide - Linux
Last Published: 2024-04-02
Product(s): InfoScale & Storage Foundation (8.0.2)
Platform: Linux
- Section I. Introducing Storage Foundation Cluster File System High Availability
- Overview of Storage Foundation Cluster File System High Availability
- About Storage Foundation Cluster File System High Availability
- About Dynamic Multi-Pathing (DMP)
- About Veritas Volume Manager
- About Veritas File System
- About Storage Foundation Cluster File System (SFCFS)
- About Veritas InfoScale Operations Manager
- About Veritas Replicator
- Use cases for Storage Foundation Cluster File System High Availability
- How Dynamic Multi-Pathing works
- How Veritas Volume Manager works
- How Veritas Volume Manager works with the operating system
- How Veritas Volume Manager handles storage management
- Volume layouts in Veritas Volume Manager
- Online relayout
- Volume resynchronization
- Hot-relocation
- Dirty region logging
- Volume snapshots
- Support for atomic writes
- FastResync
- Volume sets
- How VxVM handles hardware clones or snapshots
- Volume encryption
- How Veritas File System works
- How Storage Foundation Cluster File System High Availability works
- How Storage Foundation Cluster File System High Availability works
- When to use Storage Foundation Cluster File System High Availability
- About Storage Foundation Cluster File System High Availability architecture
- About Veritas File System features supported in cluster file systems
- About Cluster Server architecture
- About the Storage Foundation Cluster File System High Availability namespace
- About asymmetric mounts
- About primary and secondary cluster nodes
- Determining or moving primaryship
- About synchronizing time on Cluster File Systems
- About file system tunables
- About setting the number of parallel fsck threads
- Storage Checkpoints
- About Storage Foundation Cluster File System High Availability backup strategies
- About parallel I/O
- About the I/O error handling policy for Cluster Volume Manager
- About recovering from I/O failures
- About single network link and reliability
- Split-brain and jeopardy handling
- About I/O fencing
- About I/O fencing for SFCFSHA in virtual machines that do not support SCSI-3 PR
- About preventing data corruption with I/O fencing
- About I/O fencing components
- About I/O fencing configuration files
- How I/O fencing works in different event scenarios
- About server-based I/O fencing
- About secure communication between the SFCFSHA cluster and CP server
- Storage Foundation Cluster File System High Availability and Veritas Volume Manager cluster functionality agents
- Veritas Volume Manager cluster functionality
- How Cluster Volume Manager works
- About the cluster functionality of VxVM
- Overview of clustering
- Cluster Volume Manager (CVM) tolerance to storage connectivity failures
- Availability of shared disk group configuration copies
- About redirection of application I/Os with CVM I/O shipping
- Storage disconnectivity and CVM disk detach policies
- About the types of storage connectivity failures
- About disk detach policies
- How CVM handles local storage disconnectivity with the global detach policy
- How CVM handles local storage disconnectivity with the local detach policy
- Guidelines for choosing detach policies
- How CVM detach policies interact with I/O shipping
- CVM storage disconnectivity scenarios that are policy independent
- Availability of cluster nodes and shared disk groups
- CVM initialization and configuration
- Dirty region logging in cluster environments
- Multiple host failover configurations
- About Flexible Storage Sharing
- Application isolation in CVM environments with disk group sub-clustering
- Section II. Provisioning storage
- Provisioning new storage
- Advanced allocation methods for configuring storage
- Customizing allocation behavior
- Setting default values for vxassist
- Using rules to make volume allocation more efficient
- Understanding persistent attributes
- Customizing disk classes for allocation
- Specifying allocation constraints for vxassist operations with the use clause and the require clause
- Management of the use and require type of persistent attributes
- Creating volumes of a specific layout
- Creating a volume on specific disks
- Creating volumes on specific media types
- Creating encrypted volumes
- Changing the encryption password
- Viewing encrypted volumes
- Automating startup for encrypted volumes
- Configuring a Key Management Server
- Specifying ordered allocation of storage to volumes
- Site-based allocation
- Changing the read policy for mirrored volumes
- Creating and mounting VxFS file systems
- Creating a VxFS file system
- Converting a file system to VxFS
- Mounting a VxFS file system
- log mount option
- delaylog mount option
- tmplog mount option
- logiosize mount option
- nodatainlog mount option
- blkclear mount option
- mincache mount option
- convosync mount option
- ioerror mount option
- largefiles and nolargefiles mount options
- cio mount option
- mntlock mount option
- ckptautomnt mount option
- Combining mount command options
- Unmounting a file system
- Resizing a file system
- Displaying information on mounted file systems
- Identifying file system types
- Monitoring free space
- Extent attributes
- Section III. Administering multi-pathing with DMP
- Administering Dynamic Multi-Pathing
- Discovering and configuring newly added disk devices
- Partial device discovery
- About discovering disks and dynamically adding disk arrays
- About third-party driver coexistence
- How to administer the Device Discovery Layer
- Listing all the devices including iSCSI
- Listing all the Host Bus Adapters including iSCSI
- Listing the ports configured on a Host Bus Adapter
- Listing the targets configured from a Host Bus Adapter or a port
- Listing the devices configured from a Host Bus Adapter and target
- Getting or setting the iSCSI operational parameters
- Listing all supported disk arrays
- Displaying details about an Array Support Library
- Excluding support for a disk array library
- Re-including support for an excluded disk array library
- Listing excluded disk arrays
- Listing disks claimed in the DISKS category
- Adding unsupported disk arrays to the DISKS category
- Removing disks from the DISKS category
- Foreign devices
- Making devices invisible to VxVM
- Making devices visible to VxVM
- About enabling and disabling I/O for controllers and storage processors
- About displaying DMP database information
- Displaying the paths to a disk
- Administering DMP using the vxdmpadm utility
- Retrieving information about a DMP node
- Displaying consolidated information about the DMP nodes
- Displaying the members of a LUN group
- Displaying paths controlled by a DMP node, controller, enclosure, or array port
- Displaying information about controllers
- Displaying information about enclosures
- Displaying information about array ports
- Displaying information about devices controlled by third-party drivers
- Displaying extended device attributes
- Suppressing or including devices from VxVM control
- Gathering and displaying I/O statistics
- Setting the attributes of the paths to an enclosure
- Displaying the redundancy level of a device or enclosure
- Specifying the minimum number of active paths
- Displaying the I/O policy
- Specifying the I/O policy
- Disabling I/O for paths, controllers, array ports, or DMP nodes
- Enabling I/O for paths, controllers, array ports, or DMP nodes
- Renaming an enclosure
- Configuring the response to I/O failures
- Configuring the I/O throttling mechanism
- Configuring Low Impact Path Probing (LIPP)
- Configuring Subpaths Failover Groups (SFG)
- Displaying recovery option values
- Configuring DMP path restoration policies
- Stopping the DMP path restoration thread
- Displaying the status of the DMP path restoration thread
- Configuring Array Policy Modules
- Configuring latency threshold tunable for metro/geo array
- Dynamic Reconfiguration of devices
- About online dynamic reconfiguration
- Reconfiguring a LUN online that is under DMP control using the Dynamic Reconfiguration tool
- Manually reconfiguring a LUN online that is under DMP control
- Overview of manually reconfiguring a LUN
- Manually removing LUNs dynamically from an existing target ID
- Manually adding new LUNs dynamically to a new target ID
- About detecting target ID reuse if the operating system device tree is not cleaned up
- Scanning an operating system device tree after adding or removing LUNs
- Manually cleaning up the operating system device tree after removing LUNs
- Changing the characteristics of a LUN from the array side
- Upgrading the array controller firmware online
- Reformatting NVMe devices manually
- Managing devices
- Displaying disk information
- Changing the disk device naming scheme
- About disk installation and formatting
- Adding and removing disks
- Renaming a disk
- Event monitoring
- Section IV. Administering Storage Foundation Cluster File System High Availability
- Administering Storage Foundation Cluster File System High Availability and its components
- About Storage Foundation Cluster File System High Availability administration
- Administering CFS
- Adding CFS file systems to a VCS configuration
- Uses of cfsmount to mount and cfsumount to unmount CFS file system
- Removing CFS file systems from VCS configuration
- Resizing CFS file systems
- Verifying the status of CFS file system nodes and their mount points
- Verifying the state of the CFS port
- CFS agents and AMF support
- CFS agent log files
- CFS commands
- About the mount, fsclustadm, and fsadm commands
- Synchronizing system clocks on all nodes
- Growing a CFS file system
- About the /etc/fstab file
- When the CFS primary node fails
- About Storage Checkpoints on SFCFSHA
- About Snapshots on SFCFSHA
- Administering VCS
- Administering CVM
- Listing all the CVM shared disks
- Viewing all available disks in a cluster
- Establishing CVM cluster membership manually
- Methods to control CVM master selection
- About setting cluster node preferences for master failover
- Cluster node preference for master failover
- Considerations for setting CVM node preferences
- Setting the cluster node preference using the CVMCluster agent
- Setting the cluster node preference value for master failover using the vxclustadm command
- Example of setting the cluster node preference value for master failover
- About changing the CVM master manually
- Enabling the application isolation feature in CVM environments
- Disabling the application isolation feature in a CVM cluster
- Changing the disk group master manually
- Setting the sub-cluster node preference value for master failover
- Importing a shared disk group manually
- Deporting a shared disk group manually
- Mapping remote storage to a node in the cluster
- Removing remote storage mappings from a node in the cluster
- Starting shared volumes manually
- Evaluating the state of CVM ports
- Verifying if CVM is running in an SFCFSHA cluster
- Verifying CVM membership state
- Verifying the state of CVM shared disk groups
- Verifying the activation mode
- CVM log files
- Requesting node status and discovering the master node
- Determining if a LUN is in a shareable disk group
- Listing shared disk groups
- Creating a shared disk group
- Importing disk groups as shared
- Converting a disk group from shared to private
- Moving objects between shared disk groups
- Splitting shared disk groups
- Joining shared disk groups
- Changing the activation mode on a shared disk group
- Enabling I/O shipping for shared disk groups
- Setting the detach policy for shared disk groups
- Volume-level I/O shipping
- Enabling or disabling volume-level I/O shipping
- Controlling the CVM tolerance to storage disconnectivity
- Handling cloned disks in a shared disk group
- Creating volumes with exclusive open access by a node
- Setting exclusive open access to a volume by a node
- Displaying the cluster protocol version
- Displaying the supported cluster protocol version range
- Recovering volumes in shared disk groups
- Obtaining cluster performance statistics
- Administering CVM from the slave node
- Administering Flexible Storage Sharing
- About Flexible Storage Sharing disk support
- About the volume layout for Flexible Storage Sharing disk groups
- Setting the host prefix
- Exporting a disk for Flexible Storage Sharing
- Setting the Flexible Storage Sharing attribute on a disk group
- Using the host disk class and allocating storage
- Administering mirrored volumes using vxassist
- Displaying exported disks and network shared disk groups
- Tuning LLT for memory and performance in FSS environments
- Administering ODM
- About administering I/O fencing
- About the vxfentsthdw utility
- General guidelines for using the vxfentsthdw utility
- About the vxfentsthdw command options
- Testing the coordinator disk group using the -c option of vxfentsthdw
- Performing non-destructive testing on the disks using the -r option
- Testing the shared disks using the vxfentsthdw -m option
- Testing the shared disks listed in a file using the vxfentsthdw -f option
- Testing all the disks in a disk group using the vxfentsthdw -g option
- Testing a disk with existing keys
- About the vxfenadm utility
- About the vxfenclearpre utility
- About the vxfenswap utility
- About administering the coordination point server
- CP server operations (cpsadm)
- Adding and removing SFCFSHA cluster entries from the CP server database
- Adding and removing a SFCFSHA cluster node from the CP server database
- Adding or removing CP server users
- Listing the CP server users
- Listing the nodes in all the SFCFSHA clusters
- Listing the membership of nodes in the SFCFSHA cluster
- Preempting a node
- Registering and unregistering a node
- Enable and disable access for a user to a SFCFSHA cluster
- Starting and stopping CP server outside VCS control
- Checking the connectivity of CP servers
- Adding and removing virtual IP addresses and ports for CP servers at run-time
- Taking a CP server database snapshot
- Replacing coordination points for server-based fencing in an online cluster
- Refreshing registration keys on the coordination points for server-based fencing
- Deployment and migration scenarios for CP server
- Migrating from non-secure to secure setup for CP server and SFCFSHA cluster communication
- About migrating between disk-based and server-based fencing configurations
- Migrating from disk-based to server-based fencing in an online cluster
- Migrating from server-based to disk-based fencing in an online cluster
- Migrating between fencing configurations using response files
- Sample response file to migrate from disk-based to server-based fencing
- Sample response file to migrate from server-based fencing to disk-based fencing
- Sample response file to migrate from single CP server-based fencing to server-based fencing
- Response file variables to migrate between fencing configurations
- Enabling or disabling the preferred fencing policy
- About I/O fencing log files
- Administering SFCFSHA global clusters
- Enabling S3 server
- Configuring OpenStack
- Using Clustered NFS
- Understanding how Clustered NFS works
- Sample use cases
- cfsshare manual page
- Configure and unconfigure Clustered NFS
- Administering Clustered NFS
- Displaying the NFS shared CFS file systems
- Sharing a CFS file system previously added to VCS
- Unsharing the previous shared CFS file system
- Adding an NFS shared CFS file system to VCS
- Deleting the NFS shared CFS file system from VCS
- Adding a virtual IP address to VCS
- Deleting a virtual IP address from VCS
- Adding an IPv6 virtual IP address to VCS in a pure IPv6 configuration
- Deleting an IPv6 virtual IP address from VCS in a pure IPv6 configuration
- Adding a virtual IP address to VCS in a dual-stack configuration
- Deleting a virtual IP address from VCS in a dual-stack configuration
- Changing the share options associated with an NFS share
- Sharing a file system checkpoint
- Samples for configuring a Clustered NFS
- Sample main.cf file
- How to mount an NFS-exported file system on the NFS clients
- Debugging Clustered NFS
- Using Common Internet File System
- Deploying Oracle with Clustered NFS
- Administering sites and remote mirrors
- About sites and remote mirrors
- Making an existing disk group site consistent
- Configuring a new disk group as a Remote Mirror configuration
- Fire drill - testing the configuration
- Changing the site name
- Administering the Remote Mirror configuration
- Examples of storage allocation by specifying sites
- Displaying site information
- Failure and recovery scenarios
- Recovering from a loss of site connectivity
- Recovering from host failure
- Recovering from storage failure
- Recovering from site failure
- Recovering from disruption of connectivity to storage at the remote sites from hosts on all sites
- Recovering from disruption to connectivity to storage at all sites from the hosts at a site
- Automatic site reattachment
- Administering iSCSI with SFCFSHA
- Administering datastores with SFCFSHA
- Section V. Optimizing I/O performance
- Veritas File System I/O
- Veritas Volume Manager I/O
- Veritas Volume Manager throttling of administrative I/O
- Managing application I/O workloads using maximum IOPS settings
- About application volume groups
- Creating application volume groups
- Viewing the list of application volume groups
- Setting the maximum IOPS threshold on application volume groups
- Viewing the IOPS statistics for application volume groups
- Removing the maximum IOPS setting from application volume groups
- Adding volumes to an application volume group
- Removing volumes from an application volume group
- Removing an application volume group
- Section VI. Veritas Extension for Oracle Disk Manager
- Using Veritas Extension for Oracle Disk Manager
- About Oracle Disk Manager
- About Oracle Disk Manager and Storage Foundation Cluster File System High Availability
- About Oracle Disk Manager and Oracle Managed Files
- Setting up Veritas Extension for Oracle Disk Manager
- Configuring Veritas Extension for Oracle Disk Manager
- Preparing existing database storage for Oracle Disk Manager
- Verifying that Oracle Disk Manager is configured
- Disabling the Oracle Disk Manager feature
- Using Cached ODM
- Section VII. Using Point-in-time copies
- Understanding point-in-time copy methods
- About point-in-time copies
- When to use point-in-time copies
- About Storage Foundation point-in-time copy technologies
- Volume-level snapshots
- Storage Checkpoints
- About FileSnaps
- About snapshot file systems
- Administering volume snapshots
- About volume snapshots
- Traditional third-mirror break-off snapshots
- Full-sized instant snapshots
- Creating instant snapshots
- Adding an instant snap DCO and DCO volume
- Creating and managing space-optimized instant snapshots
- Creating and managing full-sized instant snapshots
- Creating and managing third-mirror break-off snapshots
- Creating and managing linked break-off snapshot volumes
- Creating multiple instant snapshots
- Creating instant snapshots of volume sets
- Adding snapshot mirrors to a volume
- Removing a snapshot mirror
- Removing a linked break-off snapshot volume
- Adding a snapshot to a cascaded snapshot hierarchy
- Refreshing an instant space-optimized snapshot
- Reattaching an instant full-sized or plex break-off snapshot
- Reattaching a linked break-off snapshot volume
- Restoring a volume from an instant space-optimized snapshot
- Dissociating an instant snapshot
- Removing an instant snapshot
- Splitting an instant snapshot hierarchy
- Displaying instant snapshot information
- Controlling instant snapshot synchronization
- Listing the snapshots created on a cache
- Tuning the autogrow attributes of a cache
- Monitoring and displaying cache usage
- Growing and shrinking a cache
- Removing a cache
- Linked break-off snapshots
- Cascaded snapshots
- Creating multiple snapshots
- Restoring the original volume from a snapshot
- Adding a version 0 DCO and DCO volume
- Administering Storage Checkpoints
- About Storage Checkpoints
- Storage Checkpoint administration
- Storage Checkpoint space management considerations
- Restoring from a Storage Checkpoint
- Storage Checkpoint quotas
- Administering FileSnaps
- Administering snapshot file systems
- Section VIII. Optimizing storage with Storage Foundation Cluster File System High Availability
- Understanding storage optimization solutions in Storage Foundation Cluster File System High Availability
- About thin provisioning
- About thin optimization solutions in Storage Foundation Cluster File System High Availability
- About SmartMove
- About the Thin Reclamation feature
- About reclaiming space on Solid State Devices (SSDs) with the TRIM operation
- Determining when to reclaim space on a thin reclamation LUN
- How automatic reclamation works
- Migrating data from thick storage to thin storage
- Maintaining Thin Storage with Thin Reclamation
- Reclamation of storage on thin reclamation arrays
- Identifying thin and thin reclamation LUNs
- Displaying VxFS file system usage on thin reclamation LUNs
- Reclaiming space on a file system
- Reclaiming space on a disk, disk group, or enclosure
- About the reclamation log file
- Monitoring Thin Reclamation using the vxtask command
- Configuring automatic reclamation
- Veritas InfoScale 4k sector device support solution
- Section IX. Maximizing storage utilization
- Understanding storage tiering with SmartTier
- Creating and administering volume sets
- Multi-volume file systems
- About multi-volume file systems
- About volume types
- Features implemented using multi-volume file system (MVFS) support
- Creating multi-volume file systems
- Converting a single volume file system to a multi-volume file system
- Adding a volume to and removing a volume from a multi-volume file system
- Volume encapsulation
- Reporting file extents
- Load balancing
- Converting a multi-volume file system to a single volume file system
- Administering SmartTier
- About SmartTier
- Supported SmartTier document type definitions
- Placement classes
- Administering placement policies
- File placement policy grammar
- File placement policy rules
- Calculating I/O temperature and access temperature
- Multiple criteria in file placement policy rule statements
- Multiple file selection criteria in SELECT statement clauses
- Multiple placement classes in <ON> clauses of CREATE statements and in <TO> clauses of RELOCATE statements
- Multiple placement classes in <FROM> clauses of RELOCATE and DELETE statements
- Multiple conditions in <WHEN> clauses of RELOCATE and DELETE statements
- File placement policy rule and statement ordering
- File placement policies and extending files
- Using SmartTier with solid state disks
- Sub-file relocation
- Administering hot-relocation
- About hot-relocation
- How hot-relocation works
- Configuring a system for hot-relocation
- Displaying spare disk information
- Marking a disk as a hot-relocation spare
- Removing a disk from use as a hot-relocation spare
- Excluding a disk from hot-relocation use
- Making a disk available for hot-relocation use
- Configuring hot-relocation to use only spare disks
- Moving relocated subdisks
- Modifying the behavior of hot-relocation
- Deduplicating data
- Compressing files
- About compressing files
- Compressing files with the vxcompress command
- Interaction of compressed files and other commands
- Interaction of compressed files and other features
- Interaction of compressed files and applications
- Use cases for compressing files
- Section X. Administering and protecting storage
- Managing volumes and disk groups
- Rules for determining the default disk group
- Moving volumes or disks
- Monitoring and controlling tasks
- Using vxnotify to monitor configuration changes
- Performing online relayout
- Adding a mirror to a volume
- Configuring SmartMove
- Removing a mirror
- Setting tags on volumes
- Managing disk groups
- Disk group versions
- Displaying disk group information
- Creating a disk group
- Removing a disk from a disk group
- Deporting a disk group
- Importing a disk group
- Handling of minor number conflicts
- Moving disk groups between systems
- Importing a disk group containing hardware cloned disks
- Setting up configuration database copies (metadata) for a disk group
- Renaming a disk group
- Handling conflicting configuration copies
- Disabling a disk group
- Destroying a disk group
- Backing up and restoring disk group configuration data
- Working with existing ISP disk groups
- Managing plexes and subdisks
- Erasure coding in Veritas InfoScale storage environments
- Using Distributed Parity
- Allocating logs on different disks
- Limitations of erasure coded volumes
- Erasure coding deployment scenarios
- I/O operations on erasure coded volumes
- Recovery of erasure coded volumes
- Relocation of faulted storage containing erasure coded volumes
- Initializing an erasure coded volume
- Resizing an erasure coded volume
- Customized failure domain
- Decommissioning storage
- Rootability
- Root Disk Encapsulation (RDE) is not supported
- Encapsulating a disk
- Device name format changes in RHEL 7 environments after encapsulation
- Restrictions on using rootability with Linux
- Sample supported root disk layouts for encapsulation
- Example 1: supported root disk layouts for encapsulation
- Example 2: supported root disk layouts for encapsulation
- Example 3: supported root disk layouts for encapsulation
- Example 4: supported root disk layouts for encapsulation
- Sample unsupported root disk layouts for encapsulation
- Example 1: unsupported root disk layouts for encapsulation
- Example 2: unsupported root disk layouts for encapsulation
- Example 3: unsupported root disk layouts for encapsulation
- Example 4: unsupported root disk layouts for encapsulation
- Booting root volumes
- Boot-time volume restrictions
- Creating redundancy for the root disk
- Creating an archived back-up root disk for disaster recovery
- Encapsulating and mirroring the root disk
- Upgrading the kernel on a root encapsulated system
- Administering an encapsulated boot disk
- Unencapsulating the root disk
- Quotas
- About Veritas File System quota limits
- About quota files on Veritas File System
- About Veritas File System quota commands
- About quota checking with Veritas File System
- Using Veritas File System quotas
- Turning on Veritas File System quotas
- Turning on Veritas File System quotas at mount time
- Editing Veritas File System quotas
- Modifying Veritas File System quota time limits
- Viewing Veritas File System disk quotas and usage
- Displaying blocks owned by users or groups
- Turning off Veritas File System quotas
- Support for 64-bit Quotas
- File Change Log
- Support for protection against ransomware
- About support for protection against ransomware
- Write Once, Read Many (WORM) storage
- Secure clock
- Audit logging
- Non-modifiable storage checkpoints
- Post-upgrade tasks to enable the use of non-modifiable checkpoints
- Creating non-modifiable checkpoints
- Setting retention periods for non-modifiable checkpoints
- Making an existing checkpoint non-modifiable
- Deletion of non-modifiable checkpoints
- Compatibility of WORM flag with relevant checkpoint operations
- Restrictions and limitations on the promote operation for checkpoints
- Restrictions and limitations on the mount operations
- Soft WORM storage
- Secure file system
- Secure File System for Oracle Single Instance
- Section XI. Reference
- Appendix A. Reverse path name lookup
- Appendix B. Tunable parameters
- About tuning Storage Foundation Cluster File System High Availability
- Tuning the VxFS file system
- DMP tunable parameters
- Methods to change Dynamic Multi-Pathing tunable parameters
- Tunable parameters for VxVM
- Methods to change Veritas Volume Manager tunable parameters
- About LLT tunable parameters
- About GAB tunable parameters
- About VXFEN tunable parameters
- About AMF tunable parameters
- Appendix C. Command reference
- Appendix D. Creating a starter database
- Appendix E. Executive Order logging
Storage Checkpoints
Storage Foundation Cluster File System High Availability (SFCFSHA) provides the Storage Checkpoint feature, which quickly creates a persistent image of a file system at an exact point in time.
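As a minimal illustrative sketch, the sequence below creates, lists, mounts, and removes a Storage Checkpoint with the fsckptadm command. The disk group (dg01), volume (vol01), mount points (/mnt1, /mnt1_ckpt), and checkpoint name (thu_8pm) are hypothetical placeholders; substitute the names from your own configuration.

```shell
# Create a Storage Checkpoint of the VxFS file system mounted at /mnt1
# (the checkpoint name here, thu_8pm, is an arbitrary example).
fsckptadm create thu_8pm /mnt1

# List the Storage Checkpoints that exist on the file system.
fsckptadm list /mnt1

# Mount the checkpoint read-only through its pseudo device
# (volume_device:checkpoint_name) so its contents can be browsed.
mkdir -p /mnt1_ckpt
mount -t vxfs -o ckpt=thu_8pm,ro \
    /dev/vx/dsk/dg01/vol01:thu_8pm /mnt1_ckpt

# Unmount and remove the checkpoint when it is no longer needed.
umount /mnt1_ckpt
fsckptadm remove thu_8pm /mnt1
```

Because a Storage Checkpoint shares unchanged blocks with the primary file system, creation is near-instantaneous and consumes space only as blocks diverge.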