Storage Foundation Cluster File System High Availability 7.2 Administrator's Guide - Solaris
Manually replacing a host bus adapter on an M5000 server
This section contains the procedure to replace an online host bus adapter (HBA) when DMP is managing multi-pathing in a Cluster File System (CFS) cluster. The HBA World Wide Port Name (WWPN) changes when the HBA is replaced. The prerequisites for replacing an online host bus adapter are as follows; a quick way to verify the cluster state is sketched after the list:
A CFS or RAC cluster with a single node, or with two or more nodes.
I/O running on the CFS file system.
An M5000 server with at least two HBAs in separate PCIe slots, running the recommended Solaris patch level for HBA replacement.
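For example, before you begin you can confirm that the cluster is running and that the CFS file systems are mounted. This is a minimal, illustrative check; the output depends on your configuration:
# cfscluster status
# cfsmntadm display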
To replace an online host bus adapter on an M5000 server
- Identify the HBAs on the M5000 server. For example, to identify Emulex HBAs, enter the following command:
# /usr/platform/sun4u/sbin/prtdiag -v | grep emlx
00 PCIe 0 2, fc20, 10df 119, 0, 0 okay 4, 4
SUNW,emlxs-pci10df,fc20 LPe 11002-S
/pci@0,600000/pci@0/pci@9/SUNW,emlxs@0
00 PCIe 0 2, fc20, 10df 119, 0, 1 okay 4, 4
SUNW,emlxs-pci10df,fc20 LPe 11002-S
/pci@0,600000/pci@0/pci@9/SUNW,emlxs@0,1
00 PCIe 3 2, fc20, 10df 2, 0, 0 okay 4, 4
SUNW,emlxs-pci10df,fc20 LPe 11002-S
/pci@3,700000/SUNW,emlxs@0
00 PCIe 3 2, fc20, 10df 2, 0, 1 okay 4, 4
SUNW,emlxs-pci10df,fc20 LPe 11002-S
/pci@3,700000/SUNW,emlxs@0,1
- Identify the HBA that you want to replace and its WWPN(s), using the cfgadm command.
To identify the HBA, enter the following:
# cfgadm -al | grep -i fibre
iou#0-pci#1    fibre/hp    connected    configured    ok
iou#0-pci#4    fibre/hp    connected    configured    ok
To list all HBAs, enter the following:
# luxadm -e port
/devices/pci@0,600000/pci@0/pci@9/SUNW,emlxs@0/fp@0,0:devctl      NOT CONNECTED
/devices/pci@0,600000/pci@0/pci@9/SUNW,emlxs@0,1/fp@0,0:devctl    CONNECTED
/devices/pci@3,700000/SUNW,emlxs@0/fp@0,0:devctl                  NOT CONNECTED
/devices/pci@3,700000/SUNW,emlxs@0,1/fp@0,0:devctl                CONNECTED
To select the HBA, dump its port map, and get the WWPN, enter the following:
# luxadm -e dump_map /devices/pci@0,600000/pci@0/pci@9/SUNW,emlxs@0,1/fp@0,0:devctl
0 304700 0 203600a0b847900c 200600a0b847900c 0x0 (Disk device)
1 30a800 0 20220002ac00065f 2ff70002ac00065f 0x0 (Disk device)
2 30a900 0 21220002ac00065f 2ff70002ac00065f 0x0 (Disk device)
3 560500 0 10000000c97c3c2f 20000000c97c3c2f 0x1f (Unknown Type)
4 560700 0 10000000c97c9557 20000000c97c9557 0x1f (Unknown Type)
5 560b00 0 10000000c97c34b5 20000000c97c34b5 0x1f (Unknown Type)
6 560900 0 10000000c973149f 20000000c973149f 0x1f (Unknown Type,Host Bus Adapter)
Alternatively, you can run the Solaris fcinfo hba-port command to get the WWPN(s) for the HBA ports.
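For example, a sketch of the fcinfo output (abbreviated and illustrative only; your WWNs, device names, and HBA model will differ):
# fcinfo hba-port
HBA Port WWN: 10000000c97c9557
        OS Device Name: /dev/cfg/c3
        Manufacturer: Emulex
        Model: LPe11002-S
        ...
        State: online
        Node WWN: 20000000c97c9557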
- Ensure you have a compatible spare HBA for hot-swap.
- Stop the I/O operations on the HBA port(s) and disable the DMP subpath(s) for the HBA that you want to replace.
# vxdmpadm disable ctlr=ctlr#
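To find the controller name to supply as ctlr#, you can list the controllers known to DMP and match the one that corresponds to the HBA being replaced. This is a sketch; the controller name c3 in the second command is an example only:
# vxdmpadm listctlr all
# vxdmpadm disable ctlr=c3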
- Dynamically unconfigure the HBA in the PCIe slot using the cfgadm command.
# cfgadm -c unconfigure iou#0-pci#1
Check the console messages to determine whether the cfgadm command succeeded. If the cfgadm command is unsuccessful, troubleshoot using the server hardware documentation, check the Solaris 11 patch level recommended for dynamic reconfiguration operations, and contact Sun support for further assistance.
console messages
Oct 24 16:21:44 m5000sb0 pcihp: NOTICE: pcihp (pxb_plx2):
card is removed from the slot iou#0-pci#1
- Verify that the HBA card that you unconfigured in the previous step is no longer in the configuration. Enter the following command:
# cfgadm -al | grep -i fibre
iou#0-pci#4    fibre/hp    connected    configured    ok
- Mark the fiber cable(s).
- Remove the fiber cable(s) and the HBA that you must replace.
For more information, see the HBA replacement procedures in SPARC Enterprise M4000/M5000/M8000/M9000 Servers Dynamic Reconfiguration (DR) User's Guide.
- Replace the HBA with a new compatible HBA of similar type in the same slot. The reinserted card shows up as follows:
console messages
iou#0-pci#1    unknown    disconnected    unconfigured    unknown
- Bring the replaced HBA back into the configuration. Enter the following:
# cfgadm -c configure iou#0-pci#1
console messages
Oct 24 16:21:57 m5000sb0 pcihp: NOTICE: pcihp (pxb_plx2):
card is inserted in the slot iou#0-pci#1 (pci dev 0)
- Verify that the reinserted HBA is in the configuration. Enter the following:
# cfgadm -al | grep -i fibre
iou#0-pci#1    fibre/hp    connected    configured    ok   <====
iou#0-pci#4    fibre/hp    connected    configured    ok
- Modify fabric zoning to include the replaced HBA WWPN(s).
- Enable LUN security on storage for the new WWPN(s).
- Perform an operating system device scan to re-discover the LUNs. Enter the following:
# cfgadm -c configure c3
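The controller instance c3 above is an example. If you are not sure which instance to rescan, you can first list the Fibre Channel attachment points; the instance number in this sketch is illustrative:
# cfgadm -al | grep fc-fabric
c3    fc-fabric    connected    configured    unknown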
- Clean up the device tree for old LUNs. Enter the following:
# devfsadm -Cv
Note:
Sometimes replacing an HBA creates new devices. Perform cleanup operations for the LUN only when new devices are created.
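As an optional, illustrative check after the cleanup, you can list the disks non-interactively to confirm that the LUNs are still visible to the operating system and that stale entries are gone:
# echo | format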
- If SFCFSHA does not show a ghost path for the removed HBA path, enable the path using the vxdmpadm command. This command performs the device scan for the subpaths of that particular HBA. Enter the following:
# vxdmpadm enable ctlr=ctlr#
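To confirm that DMP now reports the subpaths through the replaced HBA as enabled, you can list them for that controller; the controller name c3 is an example only, and the paths should appear in the ENABLED state:
# vxdmpadm getsubpaths ctlr=c3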
- Verify that I/O operations are scheduled on that path. If I/O operations are running correctly on all paths, the dynamic HBA replacement operation is complete.
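One way to check that I/O is again flowing across all paths is to use the DMP I/O statistics; this is a minimal sketch, and the sampling interval and count shown are arbitrary:
# vxdmpadm iostat start
# vxdmpadm iostat show all interval=5 count=2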