Storage Foundation Cluster File System High Availability 8.0.2 Administrator's Guide - Linux
- Section I. Introducing Storage Foundation Cluster File System High Availability
- Overview of Storage Foundation Cluster File System High Availability
- About Storage Foundation Cluster File System High Availability
- About Dynamic Multi-Pathing (DMP)
- About Veritas Volume Manager
- About Veritas File System
- About Storage Foundation Cluster File System (SFCFS)
- About Veritas InfoScale Operations Manager
- About Veritas Replicator
- Use cases for Storage Foundation Cluster File System High Availability
- How Dynamic Multi-Pathing works
- How Veritas Volume Manager works
- How Veritas Volume Manager works with the operating system
- How Veritas Volume Manager handles storage management
- Volume layouts in Veritas Volume Manager
- Online relayout
- Volume resynchronization
- Hot-relocation
- Dirty region logging
- Volume snapshots
- Support for atomic writes
- FastResync
- Volume sets
- How VxVM handles hardware clones or snapshots
- Volume encryption
- How Veritas File System works
- How Storage Foundation Cluster File System High Availability works
- How Storage Foundation Cluster File System High Availability works
- When to use Storage Foundation Cluster File System High Availability
- About Storage Foundation Cluster File System High Availability architecture
- About Veritas File System features supported in cluster file systems
- About Cluster Server architecture
- About the Storage Foundation Cluster File System High Availability namespace
- About asymmetric mounts
- About primary and secondary cluster nodes
- Determining or moving primaryship
- About synchronizing time on Cluster File Systems
- About file system tunables
- About setting the number of parallel fsck threads
- Storage Checkpoints
- About Storage Foundation Cluster File System High Availability backup strategies
- About parallel I/O
- About the I/O error handling policy for Cluster Volume Manager
- About recovering from I/O failures
- About single network link and reliability
- Split-brain and jeopardy handling
- About I/O fencing
- About I/O fencing for SFCFSHA in virtual machines that do not support SCSI-3 PR
- About preventing data corruption with I/O fencing
- About I/O fencing components
- About I/O fencing configuration files
- How I/O fencing works in different event scenarios
- About server-based I/O fencing
- About secure communication between the SFCFSHA cluster and CP server
- Storage Foundation Cluster File System High Availability and Veritas Volume Manager cluster functionality agents
- Veritas Volume Manager cluster functionality
- How Cluster Volume Manager works
- About the cluster functionality of VxVM
- Overview of clustering
- Cluster Volume Manager (CVM) tolerance to storage connectivity failures
- Availability of shared disk group configuration copies
- About redirection of application I/Os with CVM I/O shipping
- Storage disconnectivity and CVM disk detach policies
- About the types of storage connectivity failures
- About disk detach policies
- How CVM handles local storage disconnectivity with the global detach policy
- How CVM handles local storage disconnectivity with the local detach policy
- Guidelines for choosing detach policies
- How CVM detach policies interact with I/O shipping
- CVM storage disconnectivity scenarios that are policy independent
- Availability of cluster nodes and shared disk groups
- CVM initialization and configuration
- Dirty region logging in cluster environments
- Multiple host failover configurations
- About Flexible Storage Sharing
- Application isolation in CVM environments with disk group sub-clustering
- Section II. Provisioning storage
- Provisioning new storage
- Advanced allocation methods for configuring storage
- Customizing allocation behavior
- Setting default values for vxassist
- Using rules to make volume allocation more efficient
- Understanding persistent attributes
- Customizing disk classes for allocation
- Specifying allocation constraints for vxassist operations with the use clause and the require clause
- Management of the use and require type of persistent attributes
- Creating volumes of a specific layout
- Creating a volume on specific disks
- Creating volumes on specific media types
- Creating encrypted volumes
- Changing the encryption password
- Viewing encrypted volumes
- Automating startup for encrypted volumes
- Configuring a Key Management Server
- Specifying ordered allocation of storage to volumes
- Site-based allocation
- Changing the read policy for mirrored volumes
- Creating and mounting VxFS file systems
- Creating a VxFS file system
- Converting a file system to VxFS
- Mounting a VxFS file system
- log mount option
- delaylog mount option
- tmplog mount option
- logiosize mount option
- nodatainlog mount option
- blkclear mount option
- mincache mount option
- convosync mount option
- ioerror mount option
- largefiles and nolargefiles mount options
- cio mount option
- mntlock mount option
- ckptautomnt mount option
- Combining mount command options
- Unmounting a file system
- Resizing a file system
- Displaying information on mounted file systems
- Identifying file system types
- Monitoring free space
- Extent attributes
- Section III. Administering multi-pathing with DMP
- Administering Dynamic Multi-Pathing
- Discovering and configuring newly added disk devices
- Partial device discovery
- About discovering disks and dynamically adding disk arrays
- About third-party driver coexistence
- How to administer the Device Discovery Layer
- Listing all the devices including iSCSI
- Listing all the Host Bus Adapters including iSCSI
- Listing the ports configured on a Host Bus Adapter
- Listing the targets configured from a Host Bus Adapter or a port
- Listing the devices configured from a Host Bus Adapter and target
- Getting or setting the iSCSI operational parameters
- Listing all supported disk arrays
- Displaying details about an Array Support Library
- Excluding support for a disk array library
- Re-including support for an excluded disk array library
- Listing excluded disk arrays
- Listing disks claimed in the DISKS category
- Adding unsupported disk arrays to the DISKS category
- Removing disks from the DISKS category
- Foreign devices
- Making devices invisible to VxVM
- Making devices visible to VxVM
- About enabling and disabling I/O for controllers and storage processors
- About displaying DMP database information
- Displaying the paths to a disk
- Administering DMP using the vxdmpadm utility
- Retrieving information about a DMP node
- Displaying consolidated information about the DMP nodes
- Displaying the members of a LUN group
- Displaying paths controlled by a DMP node, controller, enclosure, or array port
- Displaying information about controllers
- Displaying information about enclosures
- Displaying information about array ports
- Displaying information about devices controlled by third-party drivers
- Displaying extended device attributes
- Suppressing or including devices from VxVM control
- Gathering and displaying I/O statistics
- Setting the attributes of the paths to an enclosure
- Displaying the redundancy level of a device or enclosure
- Specifying the minimum number of active paths
- Displaying the I/O policy
- Specifying the I/O policy
- Disabling I/O for paths, controllers, array ports, or DMP nodes
- Enabling I/O for paths, controllers, array ports, or DMP nodes
- Renaming an enclosure
- Configuring the response to I/O failures
- Configuring the I/O throttling mechanism
- Configuring Low Impact Path Probing (LIPP)
- Configuring Subpaths Failover Groups (SFG)
- Displaying recovery option values
- Configuring DMP path restoration policies
- Stopping the DMP path restoration thread
- Displaying the status of the DMP path restoration thread
- Configuring Array Policy Modules
- Configuring latency threshold tunable for metro/geo array
- Dynamic Reconfiguration of devices
- About online dynamic reconfiguration
- Reconfiguring a LUN online that is under DMP control using the Dynamic Reconfiguration tool
- Manually reconfiguring a LUN online that is under DMP control
- Overview of manually reconfiguring a LUN
- Manually removing LUNs dynamically from an existing target ID
- Manually adding new LUNs dynamically to a new target ID
- About detecting target ID reuse if the operating system device tree is not cleaned up
- Scanning an operating system device tree after adding or removing LUNs
- Manually cleaning up the operating system device tree after removing LUNs
- Changing the characteristics of a LUN from the array side
- Upgrading the array controller firmware online
- Reformatting NVMe devices manually
- Managing devices
- Displaying disk information
- Changing the disk device naming scheme
- About disk installation and formatting
- Adding and removing disks
- Renaming a disk
- Event monitoring
- Section IV. Administering Storage Foundation Cluster File System High Availability
- Administering Storage Foundation Cluster File System High Availability and its components
- About Storage Foundation Cluster File System High Availability administration
- Administering CFS
- Adding CFS file systems to a VCS configuration
- Uses of cfsmount to mount and cfsumount to unmount CFS file system
- Removing CFS file systems from VCS configuration
- Resizing CFS file systems
- Verifying the status of CFS file system nodes and their mount points
- Verifying the state of the CFS port
- CFS agents and AMF support
- CFS agent log files
- CFS commands
- About the mount, fsclustadm, and fsadm commands
- Synchronizing system clocks on all nodes
- Growing a CFS file system
- About the /etc/fstab file
- When the CFS primary node fails
- About Storage Checkpoints on SFCFSHA
- About Snapshots on SFCFSHA
- Administering VCS
- Administering CVM
- Listing all the CVM shared disks
- Viewing all available disks in a cluster
- Establishing CVM cluster membership manually
- Methods to control CVM master selection
- About setting cluster node preferences for master failover
- Cluster node preference for master failover
- Considerations for setting CVM node preferences
- Setting the cluster node preference using the CVMCluster agent
- Setting the cluster node preference value for master failover using the vxclustadm command
- Example of setting the cluster node preference value for master failover
- About changing the CVM master manually
- Enabling the application isolation feature in CVM environments
- Disabling the application isolation feature in a CVM cluster
- Changing the disk group master manually
- Setting the sub-cluster node preference value for master failover
- Importing a shared disk group manually
- Deporting a shared disk group manually
- Mapping remote storage to a node in the cluster
- Removing remote storage mappings from a node in the cluster
- Starting shared volumes manually
- Evaluating the state of CVM ports
- Verifying if CVM is running in an SFCFSHA cluster
- Verifying CVM membership state
- Verifying the state of CVM shared disk groups
- Verifying the activation mode
- CVM log files
- Requesting node status and discovering the master node
- Determining if a LUN is in a shareable disk group
- Listing shared disk groups
- Creating a shared disk group
- Importing disk groups as shared
- Converting a disk group from shared to private
- Moving objects between shared disk groups
- Splitting shared disk groups
- Joining shared disk groups
- Changing the activation mode on a shared disk group
- Enabling I/O shipping for shared disk groups
- Setting the detach policy for shared disk groups
- Volume-level I/O shipping
- Enabling or disabling volume-level I/O shipping
- Controlling the CVM tolerance to storage disconnectivity
- Handling cloned disks in a shared disk group
- Creating volumes with exclusive open access by a node
- Setting exclusive open access to a volume by a node
- Displaying the cluster protocol version
- Displaying the supported cluster protocol version range
- Recovering volumes in shared disk groups
- Obtaining cluster performance statistics
- Administering CVM from the slave node
- Administering Flexible Storage Sharing
- About Flexible Storage Sharing disk support
- About the volume layout for Flexible Storage Sharing disk groups
- Setting the host prefix
- Exporting a disk for Flexible Storage Sharing
- Setting the Flexible Storage Sharing attribute on a disk group
- Using the host disk class and allocating storage
- Administering mirrored volumes using vxassist
- Displaying exported disks and network shared disk groups
- Tuning LLT for memory and performance in FSS environments
- Administering ODM
- About administering I/O fencing
- About the vxfentsthdw utility
- General guidelines for using the vxfentsthdw utility
- About the vxfentsthdw command options
- Testing the coordinator disk group using the -c option of vxfentsthdw
- Performing non-destructive testing on the disks using the -r option
- Testing the shared disks using the vxfentsthdw -m option
- Testing the shared disks listed in a file using the vxfentsthdw -f option
- Testing all the disks in a disk group using the vxfentsthdw -g option
- Testing a disk with existing keys
- About the vxfenadm utility
- About the vxfenclearpre utility
- About the vxfenswap utility
- About administering the coordination point server
- CP server operations (cpsadm)
- Adding and removing SFCFSHA cluster entries from the CP server database
- Adding and removing a SFCFSHA cluster node from the CP server database
- Adding or removing CP server users
- Listing the CP server users
- Listing the nodes in all the SFCFSHA clusters
- Listing the membership of nodes in the SFCFSHA cluster
- Preempting a node
- Registering and unregistering a node
- Enable and disable access for a user to a SFCFSHA cluster
- Starting and stopping CP server outside VCS control
- Checking the connectivity of CP servers
- Adding and removing virtual IP addresses and ports for CP servers at run-time
- Taking a CP server database snapshot
- Replacing coordination points for server-based fencing in an online cluster
- Refreshing registration keys on the coordination points for server-based fencing
- Deployment and migration scenarios for CP server
- Migrating from non-secure to secure setup for CP server and SFCFSHA cluster communication
- About migrating between disk-based and server-based fencing configurations
- Migrating from disk-based to server-based fencing in an online cluster
- Migrating from server-based to disk-based fencing in an online cluster
- Migrating between fencing configurations using response files
- Sample response file to migrate from disk-based to server-based fencing
- Sample response file to migrate from server-based fencing to disk-based fencing
- Sample response file to migrate from single CP server-based fencing to server-based fencing
- Response file variables to migrate between fencing configurations
- Enabling or disabling the preferred fencing policy
- About I/O fencing log files
- Administering SFCFSHA global clusters
- Enabling S3 server
- Configuring OpenStack
- Using Clustered NFS
- Understanding how Clustered NFS works
- Sample use cases
- cfsshare manual page
- Configure and unconfigure Clustered NFS
- Administering Clustered NFS
- Displaying the NFS shared CFS file systems
- Sharing a CFS file system previously added to VCS
- Unsharing the previous shared CFS file system
- Adding an NFS shared CFS file system to VCS
- Deleting the NFS shared CFS file system from VCS
- Adding a virtual IP address to VCS
- Deleting a virtual IP address from VCS
- Adding an IPv6 virtual IP address to VCS in a pure IPv6 configuration
- Deleting an IPv6 virtual IP address from VCS in a pure IPv6 configuration
- Adding a virtual IP address to VCS in a dual-stack configuration
- Deleting a virtual IP address from VCS in a dual-stack configuration
- Changing the share options associated with an NFS share
- Sharing a file system checkpoint
- Samples for configuring a Clustered NFS
- Sample main.cf file
- How to mount an NFS-exported file system on the NFS clients
- Debugging Clustered NFS
- Using Common Internet File System
- Deploying Oracle with Clustered NFS
- Administering sites and remote mirrors
- About sites and remote mirrors
- Making an existing disk group site consistent
- Configuring a new disk group as a Remote Mirror configuration
- Fire drill - testing the configuration
- Changing the site name
- Administering the Remote Mirror configuration
- Examples of storage allocation by specifying sites
- Displaying site information
- Failure and recovery scenarios
- Recovering from a loss of site connectivity
- Recovering from host failure
- Recovering from storage failure
- Recovering from site failure
- Recovering from disruption of connectivity to storage at the remote sites from hosts on all sites
- Recovering from disruption to connectivity to storage at all sites from the hosts at a site
- Automatic site reattachment
- Administering iSCSI with SFCFSHA
- Administering datastores with SFCFSHA
- Section V. Optimizing I/O performance
- Veritas File System I/O
- Veritas Volume Manager I/O
- Veritas Volume Manager throttling of administrative I/O
- Managing application I/O workloads using maximum IOPS settings
- About application volume groups
- Creating application volume groups
- Viewing the list of application volume groups
- Setting the maximum IOPS threshold on application volume groups
- Viewing the IOPS statistics for application volume groups
- Removing the maximum IOPS setting from application volume groups
- Adding volumes to an application volume group
- Removing volumes from an application volume group
- Removing an application volume group
- Section VI. Veritas Extension for Oracle Disk Manager
- Using Veritas Extension for Oracle Disk Manager
- About Oracle Disk Manager
- About Oracle Disk Manager and Storage Foundation Cluster File System High Availability
- About Oracle Disk Manager and Oracle Managed Files
- Setting up Veritas Extension for Oracle Disk Manager
- Configuring Veritas Extension for Oracle Disk Manager
- Preparing existing database storage for Oracle Disk Manager
- Verifying that Oracle Disk Manager is configured
- Disabling the Oracle Disk Manager feature
- Using Cached ODM
- Section VII. Using Point-in-time copies
- Understanding point-in-time copy methods
- About point-in-time copies
- When to use point-in-time copies
- About Storage Foundation point-in-time copy technologies
- Volume-level snapshots
- Storage Checkpoints
- About FileSnaps
- About snapshot file systems
- Administering volume snapshots
- About volume snapshots
- Traditional third-mirror break-off snapshots
- Full-sized instant snapshots
- Creating instant snapshots
- Adding an instant snap DCO and DCO volume
- Creating and managing space-optimized instant snapshots
- Creating and managing full-sized instant snapshots
- Creating and managing third-mirror break-off snapshots
- Creating and managing linked break-off snapshot volumes
- Creating multiple instant snapshots
- Creating instant snapshots of volume sets
- Adding snapshot mirrors to a volume
- Removing a snapshot mirror
- Removing a linked break-off snapshot volume
- Adding a snapshot to a cascaded snapshot hierarchy
- Refreshing an instant space-optimized snapshot
- Reattaching an instant full-sized or plex break-off snapshot
- Reattaching a linked break-off snapshot volume
- Restoring a volume from an instant space-optimized snapshot
- Dissociating an instant snapshot
- Removing an instant snapshot
- Splitting an instant snapshot hierarchy
- Displaying instant snapshot information
- Controlling instant snapshot synchronization
- Listing the snapshots created on a cache
- Tuning the autogrow attributes of a cache
- Monitoring and displaying cache usage
- Growing and shrinking a cache
- Removing a cache
- Linked break-off snapshots
- Cascaded snapshots
- Creating multiple snapshots
- Restoring the original volume from a snapshot
- Adding a version 0 DCO and DCO volume
- Administering Storage Checkpoints
- About Storage Checkpoints
- Storage Checkpoint administration
- Storage Checkpoint space management considerations
- Restoring from a Storage Checkpoint
- Storage Checkpoint quotas
- Administering FileSnaps
- Administering snapshot file systems
- Section VIII. Optimizing storage with Storage Foundation Cluster File System High Availability
- Understanding storage optimization solutions in Storage Foundation Cluster File System High Availability
- About thin provisioning
- About thin optimization solutions in Storage Foundation Cluster File System High Availability
- About SmartMove
- About the Thin Reclamation feature
- About reclaiming space on Solid State Devices (SSDs) with the TRIM operation
- Determining when to reclaim space on a thin reclamation LUN
- How automatic reclamation works
- Migrating data from thick storage to thin storage
- Maintaining Thin Storage with Thin Reclamation
- Reclamation of storage on thin reclamation arrays
- Identifying thin and thin reclamation LUNs
- Displaying VxFS file system usage on thin reclamation LUNs
- Reclaiming space on a file system
- Reclaiming space on a disk, disk group, or enclosure
- About the reclamation log file
- Monitoring Thin Reclamation using the vxtask command
- Configuring automatic reclamation
- Veritas InfoScale 4k sector device support solution
- Section IX. Maximizing storage utilization
- Understanding storage tiering with SmartTier
- Creating and administering volume sets
- Multi-volume file systems
- About multi-volume file systems
- About volume types
- Features implemented using multi-volume file system (MVFS) support
- Creating multi-volume file systems
- Converting a single volume file system to a multi-volume file system
- Adding a volume to and removing a volume from a multi-volume file system
- Volume encapsulation
- Reporting file extents
- Load balancing
- Converting a multi-volume file system to a single volume file system
- Administering SmartTier
- About SmartTier
- Supported SmartTier document type definitions
- Placement classes
- Administering placement policies
- File placement policy grammar
- File placement policy rules
- Calculating I/O temperature and access temperature
- Multiple criteria in file placement policy rule statements
- Multiple file selection criteria in SELECT statement clauses
- Multiple placement classes in <ON> clauses of CREATE statements and in <TO> clauses of RELOCATE statements
- Multiple placement classes in <FROM> clauses of RELOCATE and DELETE statements
- Multiple conditions in <WHEN> clauses of RELOCATE and DELETE statements
- File placement policy rule and statement ordering
- File placement policies and extending files
- Using SmartTier with solid state disks
- Sub-file relocation
- Administering hot-relocation
- About hot-relocation
- How hot-relocation works
- Configuring a system for hot-relocation
- Displaying spare disk information
- Marking a disk as a hot-relocation spare
- Removing a disk from use as a hot-relocation spare
- Excluding a disk from hot-relocation use
- Making a disk available for hot-relocation use
- Configuring hot-relocation to use only spare disks
- Moving relocated subdisks
- Modifying the behavior of hot-relocation
- Deduplicating data
- Compressing files
- About compressing files
- Compressing files with the vxcompress command
- Interaction of compressed files and other commands
- Interaction of compressed files and other features
- Interaction of compressed files and applications
- Use cases for compressing files
- Section X. Administering and protecting storage
- Managing volumes and disk groups
- Rules for determining the default disk group
- Moving volumes or disks
- Monitoring and controlling tasks
- Using vxnotify to monitor configuration changes
- Performing online relayout
- Adding a mirror to a volume
- Configuring SmartMove
- Removing a mirror
- Setting tags on volumes
- Managing disk groups
- Disk group versions
- Displaying disk group information
- Creating a disk group
- Removing a disk from a disk group
- Deporting a disk group
- Importing a disk group
- Handling of minor number conflicts
- Moving disk groups between systems
- Importing a disk group containing hardware cloned disks
- Setting up configuration database copies (metadata) for a disk group
- Renaming a disk group
- Handling conflicting configuration copies
- Disabling a disk group
- Destroying a disk group
- Backing up and restoring disk group configuration data
- Working with existing ISP disk groups
- Managing plexes and subdisks
- Erasure coding in Veritas InfoScale storage environments
- Using Distributed Parity
- Allocating logs on different disks
- Limitations of erasure coded volumes
- Erasure coding deployment scenarios
- I/O operations on erasure coded volumes
- Recovery of erasure coded volumes
- Relocation of faulted storage containing erasure coded volumes
- Initializing an erasure coded volume
- Resizing an erasure coded volume
- Customized failure domain
- Decommissioning storage
- Rootability
- Root Disk Encapsulation (RDE) is not supported
- Encapsulating a disk
- Device name format changes in RHEL 7 environments after encapsulation
- Rootability
- Restrictions on using rootability with Linux
- Sample supported root disk layouts for encapsulation
- Example 1: supported root disk layouts for encapsulation
- Example 2: supported root disk layouts for encapsulation
- Example 3: supported root disk layouts for encapsulation
- Example 4: supported root disk layouts for encapsulation
- Sample unsupported root disk layouts for encapsulation
- Example 1: unsupported root disk layouts for encapsulation
- Example 2: unsupported root disk layouts for encapsulation
- Example 3: unsupported root disk layouts for encapsulation
- Example 4: unsupported root disk layouts for encapsulation
- Booting root volumes
- Boot-time volume restrictions
- Creating redundancy for the root disk
- Creating an archived back-up root disk for disaster recovery
- Encapsulating and mirroring the root disk
- Upgrading the kernel on a root encapsulated system
- Administering an encapsulated boot disk
- Unencapsulating the root disk
- Quotas
- About Veritas File System quota limits
- About quota files on Veritas File System
- About Veritas File System quota commands
- About quota checking with Veritas File System
- Using Veritas File System quotas
- Turning on Veritas File System quotas
- Turning on Veritas File System quotas at mount time
- Editing Veritas File System quotas
- Modifying Veritas File System quota time limits
- Viewing Veritas File System disk quotas and usage
- Displaying blocks owned by users or groups
- Turning off Veritas File System quotas
- Support for 64-bit Quotas
- File Change Log
- Support for protection against ransomware
- About support for protection against ransomware
- Write Once, Read Many (WORM) storage
- Secure clock
- Audit logging
- Non-modifiable storage checkpoints
- Post-upgrade tasks to enable the use of non-modifiable checkpoints
- Creating non-modifiable checkpoints
- Setting retention periods for non-modifiable checkpoints
- Making an existing checkpoint non-modifiable
- Deletion of non-modifiable checkpoints
- Compatibility of WORM flag with relevant checkpoint operations
- Restrictions and limitations on the promote operation for checkpoints
- Restrictions and limitations on the mount operations
- Soft WORM storage
- Secure file system
- Secure File System for Oracle Single Instance
- Section XI. Reference
- Appendix A. Reverse path name lookup
- Appendix B. Tunable parameters
- About tuning Storage Foundation Cluster File System High Availability
- Tuning the VxFS file system
- DMP tunable parameters
- Methods to change Dynamic Multi-Pathing tunable parameters
- Tunable parameters for VxVM
- Methods to change Veritas Volume Manager tunable parameters
- About LLT tunable parameters
- About GAB tunable parameters
- About VXFEN tunable parameters
- About AMF tunable parameters
- Appendix C. Command reference
- Appendix D. Creating a starter database
- Appendix E. Executive Order logging
Veritas File System features
Table: Veritas File System features lists the Veritas File System (VxFS) features. The description of each feature also notes whether the feature is supported in SFCFSHA.
Table: Veritas File System features
Feature | Description |
|---|---|
Access Control Lists | An Access Control List (ACL) stores a series of entries that identify specific users or groups and their access privileges for a directory or file. A file may have its own ACL or may share an ACL with other files. ACLs have the advantage of specifying detailed access permissions for multiple users and groups. On Linux, ACLs are supported on cluster file systems. This feature is supported in SFCFSHA. |
Cache advisories | Cache advisories are set with the mount command on individual file systems, but are not propagated to other nodes of a cluster. Cache advisories are not limited to the mount command; they can also be set on a per-file basis by using the VX_SETCACHE ioctl. |
Commands that depend on file access times | File access times may appear different across nodes because the atime file attribute is not closely synchronized in a cluster file system, so utilities that depend on checking access times may not function reliably. |
Cross-platform data sharing | Cross-platform data sharing (CDS) allows data to be serially shared among heterogeneous systems where each system has direct access to the physical devices that hold the data. This feature can be used only in conjunction with Veritas Volume Manager (VxVM). This feature is supported in SFCFSHA. See the Veritas InfoScale Solutions Guide. |
Data deduplication | You can perform post-process periodic deduplication in a file system to eliminate duplicate data without any continuous cost. You can verify whether data is duplicated on demand, and then efficiently and securely eliminate the duplicates. This feature is available with both Veritas InfoScale Storage and Veritas InfoScale Enterprise licenses. This feature is supported in SFCFSHA. |
Defragmentation | You can perform defragmentation to remove unused space from directories, make all small files contiguous, and consolidate free blocks for file system use. This feature is supported in SFCFSHA. |
Enhanced data integrity modes |
VxFS has the following mount command options to enable the enhanced data integrity modes:
This feature is supported in SFCFSHA. |
Enhanced performance mode | The default VxFS logging mode, mount -o delaylog, increases performance by delaying the logging of some structural changes. However, delaylog does not provide the same level of data integrity as the enhanced data integrity modes because recent changes may be lost during a system failure. This option provides at least the same level of data accuracy that traditional UNIX file systems provide for system failures, along with fast file system recovery. See the example following this table. |
Enhanced security | RHEL provides user-level security functionalities and features for file systems. These security functionalities are available if you enable SELinux at the OS level. |
Extent attributes | VxFS allocates disk space to files in groups of one or more adjacent blocks called extents. VxFS defines an application interface that allows programs to control various aspects of the extent allocation for a given file. The extent allocation policies associated with a file are referred to as extent attributes. This feature is supported in SFCFSHA. See the example following this table. |
Extent-based allocation | An extent is a contiguous area of storage in a computer file system, reserved for a file. When starting to write to a file, a whole extent is allocated. When writing to the file again, the data continues where the previous write left off. This reduces or eliminates file fragmentation. An extent is presented as an address-length pair, which identifies the starting block address and the length of the extent (in file system or logical blocks). Since VxFS is an extent-based file system, addressing is done through extents (which can consist of multiple blocks) rather than in single-block segments. Extents can therefore enhance file system throughput. This feature is supported in SFCFSHA. See About extents. |
Extended mount options | The VxFS file system provides several enhancements to the mount command, including the enhanced data integrity modes, the enhanced performance mode, the temporary file system mode, improved synchronous writes, and support for large file sizes, each of which is described in this table. This feature is supported in SFCFSHA. |
Fast file system recovery | Most file systems rely on full structural verification by the fsck utility as the only means to recover from a system failure. For large disk configurations, this involves a time-consuming process of checking the entire structure, verifying that the file system is intact, and correcting any inconsistencies. VxFS provides fast recovery with the VxFS intent log and VxFS intent log resizing features. This feature is supported in SFCFSHA. |
File Change Log | The VxFS File Change Log (FCL) tracks changes to files and directories in a file system. The File Change Log can be used by applications such as backup products, web crawlers, search and indexing engines, and replication software that typically scan an entire file system searching for modifications since a previous scan. FCL functionality is available with all four Veritas InfoScale licenses: Veritas InfoScale™ Storage, Veritas InfoScale™ Availability, Veritas InfoScale™ Foundation, and Veritas InfoScale™ Enterprise. This feature is supported in SFCFSHA. See the example following this table. |
File compression | Compressing files reduces the space used by files, while retaining the accessibility of the files and being transparent to applications. Compressed files look and behave almost exactly like uncompressed files: the compressed files have the same name, and can be read and written as with uncompressed files. Reads cause data to be uncompressed in memory only; the on-disk copy of the file remains compressed. In contrast, after a write, the new data is uncompressed on disk. This feature is supported in SFCFSHA. See the example following this table. |
File replication | You can perform cost-effective periodic replication of data over IP networks, giving organizations an extremely flexible, storage-independent data availability solution for disaster recovery and off-host processing. This feature is supported in SFCFSHA. See the Veritas InfoScale Replication Administrator's Guide. |
File system snapshots | VxFS provides online data backup using the snapshot feature. An image of a mounted file system instantly becomes an exact read-only copy of the file system at a specific point in time. The original file system is called the snapped file system, while the copy is called the snapshot. When changes are made to the snapped file system, the old data is copied to the snapshot. When the snapshot is read, data that has not changed is read from the snapped file system, while changed data is read from the snapshot. Backups can be made by copying selected files from the snapshot file system (using find and cpio), by backing up the entire file system (using fscat), or by initiating a full or incremental backup (using vxdump). This feature is supported in SFCFSHA. See the example following this table. |
FileSnaps | A FileSnap is a space-optimized copy of a file in the same name space, stored in the same file system. VxFS supports FileSnaps on file systems with disk layout Version 8 or later. This feature is supported in SFCFSHA. See About FileSnaps. See the example following this table. |
Freezing and thawing file systems | Freezing a file system is a necessary step for obtaining a stable and consistent image of the file system at the volume level. Consistent volume-level file system images can be obtained and used with a file system snapshot tool. This feature is supported in SFCFSHA. Synchronizing operations, which require freezing and thawing file systems, are done on a cluster-wide basis. |
Improved synchronous writes | VxFS provides superior performance for synchronous write applications. The mount -o datainlog option greatly improves the performance of small synchronous writes. The mount -o convosync=dsync option improves the performance of applications that require synchronous data writes but not synchronous inode time updates. Warning: The use of the -o convosync=dsync option violates POSIX semantics. See the example following this table. |
Locking | For the F_GETLK command, if there is a process holding a conflicting lock, the l_pid field returns the process ID of the process holding the conflicting lock. The node ID to node name translation can be done by examining the /etc/llthosts file or by using the fsclustadm command. This feature is supported in SFCFSHA, except for mandatory locking and deadlock detection supported by traditional fcntl locks. |
maxlink support | Added support for more than 64K sub-directories, controlled by the maxlink file system option. To enable the maxlink option at mkfs time: # mkfs -t vxfs -o maxlink /dev/vx/rdsk/testdg/vol1 To disable the maxlink option at mkfs time: # mkfs -t vxfs -o nomaxlink /dev/vx/rdsk/testdg/vol1 To enable the maxlink option on a mounted file system: # fsadm -t vxfs -o maxlink /mnt1 To disable the maxlink option on a mounted file system: # fsadm -t vxfs -o nomaxlink /mnt1 See the mkfs_vxfs(1M) and fsadm_vxfs(1M) manual pages. |
Memory mapping | You can use the mmap() function to establish shared memory mapping. This feature is supported in SFCFSHA. |
Multi-volume file systems | The multi-volume file system (MVFS) feature allows several volumes to be represented by a single logical object. All I/O to and from an underlying logical volume is directed by way of volume sets. You can create a single VxFS file system on this multi-volume set. This feature can be used only in conjunction with VxVM. MVFS functionality is available with all four Veritas InfoScale licenses: Veritas InfoScale™ Storage, Veritas InfoScale™ Availability, Veritas InfoScale™ Foundation, and Veritas InfoScale™ Enterprise. |
Nested Mounts | You can use a directory on a cluster mounted or local mounted file system as a mount point for a local file system or another cluster file system. This feature is supported in SFCFSHA. |
NFS mounts | You can export NFS file systems from the cluster. You can NFS-export CFS file systems in a distributed and highly available way. This feature is supported in SFCFSHA. |
Partitioned directories | Normally, when a large number of parallel threads perform accesses and updates on a directory, as commonly exists in a file system, the threads suffer exponentially longer wait times. The partitioned directories feature improves the directory performance of file systems: when a directory crosses the tunable threshold, this feature takes an exclusive lock on the directory inode and redistributes the entries into various hash directories. These hash directories are not visible in the name-space view of the user or operating system. For every new create, delete, or lookup thread, this feature performs a lookup for the respective hashed directory (depending on the target name) and performs the operation in that directory. This leaves the parent directory inode and its other hash directories unobstructed for access, which vastly improves file system performance. This feature operates only on disk layout Version 8 or later file systems. This feature is supported in SFCFSHA. |
Quotas | VxFS supports quotas, which allocate per-user and per-group quotas and limit the use of two principal resources: files and data blocks. You can assign quotas for each of these resources. Each quota consists of two limits for each resource: a hard limit and a soft limit. The hard limit represents an absolute limit on data blocks or files. A user can never exceed the hard limit under any circumstances. The soft limit is lower than the hard limit and can be exceeded for a limited amount of time. This allows users to exceed limits temporarily as long as they fall under those limits before the allotted time expires. This feature is supported in SFCFSHA. See the example following this table. |
Reverse path name lookup | The reverse path name lookup feature obtains the full path name of a file or directory from the inode number of that file or directory. The reverse path name lookup feature can be useful for a variety of applications, such as for clients of the VxFS File Change Log feature, in backup and restore utilities, and for replication products. Typically, these applications store information by inode numbers because a path name for a file or directory can be very long, thus the need for an easy method of obtaining a path name. This feature is supported in SFCFSHA. |
SmartIO | The SmartIO feature of Storage Foundation and High Availability Solutions (SFHA Solutions) enables data efficiency on SSDs or other supported devices through I/O caching. Using SmartIO to improve efficiency, you can optimize the cost per IOPS. SmartIO uses advanced, customizable heuristics to determine what data to cache and how that data gets removed from the cache. The heuristics take advantage of SFHA Solutions' knowledge of the characteristics of the workload. SmartIO uses a cache area on the target device or devices. The cache area is the storage space that SmartIO uses to store the cached data and the metadata about the cached data. The type of the cache area determines whether it supports VxFS caching or VxVM caching. This feature is supported in SFCFSHA. See the Veritas InfoScale SmartIO for Solid State Drives Solutions Guide. |
SmartTier | The SmartTier option is built on a multi-volume file system. Using SmartTier, you can map more than one volume to a single file system. You can then configure policies that automatically relocate files from one volume to another, or relocate files by running file relocation commands. Having multiple volumes lets you determine where files are located, which can improve performance for applications that access specific types of files. SmartTier functionality is available with both Veritas InfoScale Storage and Veritas InfoScale Enterprise licenses. Note: In the previous VxFS 5.x releases, SmartTier was known as Dynamic Storage Tiering. This feature is supported in SFCFSHA. See About SmartTier. |
Storage Checkpoints | To increase availability, recoverability, and performance, VxFS offers on-disk and online backup and restore capabilities that facilitate frequent and efficient backup strategies. Backup and restore applications can leverage a Storage Checkpoint, a disk- and I/O-efficient copying technology for creating periodic frozen images of a file system. Storage Checkpoints present a view of a file system at a point in time, and subsequently identify and maintain copies of the original file system blocks. Instead of using a disk-based mirroring method, Storage Checkpoints save disk space and significantly reduce I/O overhead by using the free space pool available to a file system. Storage Checkpoint functionality is available with both Veritas InfoScale Storage and Veritas InfoScale Enterprise licenses. This feature is supported in SFCFSHA. See the example following this table. |
Support for large files and large file systems | VxFS supports files larger than two gigabytes and large file systems up to 256 terabytes. Warning: Some applications and utilities might not work on large files. |
Swap files | Swap files are not supported on cluster-mounted file systems. |
Temporary file system mode | On most UNIX systems, temporary file system directories, such as /tmp, hold files that do not need to be retained when the system reboots. VxFS provides the tmplog mount option, which delays the intent logging of most operations on such file systems to improve performance. See tmplog mount option. See the example following this table. |
Thin Reclamation | The Thin Reclamation feature allows you to release free data blocks of a VxFS file system to the free storage pool of a Thin Storage LUN. This feature is only supported on file systems created on a VxVM volume. See the example following this table. |
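The following examples illustrate a few of the features in the preceding table. They are hedged sketches rather than complete procedures: disk group, volume, file, and mount point names such as mydg, vol1, and /mnt1 are placeholders for your own configuration, and you should confirm options against the referenced manual pages for your release.

For the Access Control Lists row, POSIX ACLs on a Linux cluster file system are managed with the standard getfacl and setfacl utilities; the user name and paths below are examples only.

```sh
# Grant user "alice" read and write access to a file on a cluster-mounted
# VxFS file system, then display the resulting ACL.
setfacl -m u:alice:rw- /mnt1/reports/data.db
getfacl /mnt1/reports/data.db

# Set a default ACL on the directory so that newly created files inherit the entry.
setfacl -d -m u:alice:rwx /mnt1/reports
```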
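For the Defragmentation row, a minimal sketch with the fsadm utility; option letters can vary by release, so check the fsadm_vxfs(1M) manual page.

```sh
# Report directory (-D) and extent (-E) fragmentation for the mounted file system.
fsadm -t vxfs -D -E /mnt1

# Reorganize directories (-d) and extents (-e) to reduce fragmentation.
fsadm -t vxfs -d -e /mnt1
```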
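For the Enhanced data integrity modes row, the log and blkclear mount options described in this guide are selected at mount time; the device and mount point are placeholders.

```sh
# Mount with full logging of structural changes before the system call returns.
mount -t vxfs -o log /dev/vx/dsk/mydg/vol1 /mnt1

# Mount with blkclear so that uninitialized storage never appears in files.
mount -t vxfs -o blkclear /dev/vx/dsk/mydg/vol1 /mnt1
```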
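For the Enhanced performance mode row, delaylog is the default logging mode, but it can also be requested explicitly at mount time.

```sh
# Mount explicitly in the default delayed-logging mode.
mount -t vxfs -o delaylog /dev/vx/dsk/mydg/vol1 /mnt1
```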
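For the Extent attributes row, a sketch that assumes the setext and getext commands to set and display a file's fixed extent size and space reservation; the exact option syntax may differ by release, so check the setext(1) and getext(1) manual pages.

```sh
# Reserve 4096 blocks for the file and request a fixed extent size of 256 blocks,
# then display the file's extent attributes.
setext -e 256 -r 4096 /mnt1/db/datafile01
getext /mnt1/db/datafile01
```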
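For the File Change Log row, a sketch that assumes the fcladm administration command; subcommands vary by release, so check the fcladm(1M) manual page.

```sh
# Activate the File Change Log on the mounted file system.
fcladm on /mnt1

# Deactivate the File Change Log when it is no longer needed.
fcladm off /mnt1
```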
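For the File compression row, a sketch that assumes the vxcompress command; the uncompress option shown here should be verified against the vxcompress(1) manual page.

```sh
# Compress a file in place; applications continue to read and write it transparently.
vxcompress /mnt1/logs/archive.log

# Uncompress the file when the space saving is no longer needed.
vxcompress -u /mnt1/logs/archive.log
```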
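For the File system snapshots row, a sketch that mounts a snapshot with the snapof and snapsize options and then backs up selected files from the stable image; device names, the snapshot size, and mount points are placeholders.

```sh
# Mount a snapshot of the file system at /mnt1 onto /snapmnt, using the volume
# /dev/vx/dsk/mydg/snapvol to hold the copied-on-write blocks.
mount -t vxfs -o snapof=/mnt1,snapsize=262144 /dev/vx/dsk/mydg/snapvol /snapmnt

# Back up selected files from the read-only snapshot image.
cd /snapmnt && find . -name '*.db' | cpio -oc > /backup/db-files.cpio
```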
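For the FileSnaps row, a sketch that assumes the vxfilesnap utility, which creates a space-optimized copy of a file within the same file system; confirm the syntax in the vxfilesnap(1) manual page.

```sh
# Create a space-optimized, writable copy of a file in the same file system
# (requires disk layout Version 8 or later).
vxfilesnap /mnt1/db/datafile01 /mnt1/db/datafile01.snap
```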
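For the Improved synchronous writes row, the datainlog and convosync=dsync options from the table are set at mount time; note the POSIX warning above.

```sh
# Log small synchronous writes in the intent log to speed them up.
mount -t vxfs -o datainlog /dev/vx/dsk/mydg/vol1 /mnt1

# Convert O_SYNC writes to data-synchronous writes (violates POSIX semantics).
mount -t vxfs -o convosync=dsync /dev/vx/dsk/mydg/vol1 /mnt1
```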
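For the Quotas row, a sketch that assumes the VxFS quota commands vxquotaon, vxedquota, and vxrepquota; command names and arguments should be confirmed against the quotas chapter and manual pages for your release.

```sh
# Turn on quotas for a mounted VxFS file system.
vxquotaon /mnt1

# Edit the hard and soft limits for a user (opens the quota file in an editor).
vxedquota alice

# Report quota usage for all users of the file system.
vxrepquota /mnt1
```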
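For the Storage Checkpoints row, a sketch that assumes the fsckptadm utility and the ckpt mount option; see the fsckptadm(1M) manual page and the Storage Checkpoints chapter for exact syntax. Checkpoint, device, and mount point names are placeholders.

```sh
# Create a Storage Checkpoint of the file system mounted at /mnt1 and list it.
fsckptadm create thu_8pm /mnt1
fsckptadm list /mnt1

# Mount the checkpoint read-only at its own mount point.
mkdir -p /mnt1_thu_8pm
mount -t vxfs -o ckpt=thu_8pm,ro /dev/vx/dsk/mydg/vol1:thu_8pm /mnt1_thu_8pm
```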
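For the Temporary file system mode row, the tmplog option is set at mount time on scratch file systems whose contents do not need to survive a crash.

```sh
# Mount a scratch file system with delayed intent logging for extra performance.
mount -t vxfs -o tmplog /dev/vx/dsk/mydg/tmpvol /scratch
```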
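For the Thin Reclamation row, a sketch that assumes the fsadm -R reclamation option; see the Thin Reclamation chapter of this guide for supported arrays and details.

```sh
# Return free blocks of the VxFS file system to the underlying thin LUN.
fsadm -t vxfs -R /mnt1
```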