Veritas InfoScale™ 7.3.1 Installation Guide - AIX
- Section I. Introduction to Veritas InfoScale
- Introducing Veritas InfoScale
- Licensing Veritas InfoScale
- Section II. Planning and preparation
- System requirements
- Hardware requirements
- Preparing to install
- Setting up the private network
- Setting up shared storage
- Planning the installation setup for SF Oracle RAC systems
- Planning your network configuration
- Planning the storage
- Planning the storage for Oracle RAC
- System requirements
- Section III. Installation of Veritas InfoScale
- Installing Veritas InfoScale using the installer
- Installing Veritas InfoScale using response files
- Installing Veritas InfoScale using operating system-specific methods
- Installing Veritas InfoScale using NIM and the installer
- Completing the post installation tasks
- Section IV. Uninstallation of Veritas InfoScale
- Uninstalling Veritas InfoScale using the installer
- Uninstalling Veritas InfoScale using response files
- Section V. Installation reference
- Appendix A. Installation scripts
- Appendix B. Tunable files for installation
- Appendix C. Troubleshooting installation issues
Planning the storage
Veritas InfoScale provides the following options for shared storage:
- Shared storage over CVM: CVM provides native naming (OSN) as well as enclosure-based naming (EBN). Use enclosure-based naming for easier administration of storage. Enclosure-based naming guarantees that the same name is given to a shared LUN on all the nodes, irrespective of the operating system name for the LUN.
- For SF Oracle RAC: Local storage. With Flexible Storage Sharing (FSS), local storage can be used as shared storage. The local storage can be in the form of Direct Attached Storage (DAS) or internal disk drives.
- For SF Oracle RAC: Oracle ASM over CVM.
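The naming scheme in use can be checked and changed with the `vxddladm` utility. A minimal sketch, assuming VxVM is installed and that the change should be made on every node in the cluster:

```shell
# Display the naming scheme currently in use (OSN or EBN).
vxddladm get namingscheme

# Switch to enclosure-based naming; persistence=yes keeps the
# device names stable across reboots. Run on each cluster node.
vxddladm set namingscheme=ebn persistence=yes
```
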
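To make a locally attached disk usable as FSS shared storage, the disk is exported for network sharing and then placed in a shared disk group. A sketch, where the disk name `disk1` and the disk group name `fssdg` are illustrative:

```shell
# Export the locally attached (DAS or internal) disk so that the
# other cluster nodes can access it over the network.
vxdisk export disk1

# Create a shared (CVM) disk group containing the exported disk;
# -s marks the disk group as shared across the cluster.
vxdg -s init fssdg disk1
```
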
The following recommendations help ensure better performance and availability of storage:
- Use multiple storage arrays, if possible, to protect against array failures. The minimum recommended configuration is two HBAs for each host and two switches.
- Design the storage layout with both performance and high availability requirements in mind, using technologies such as striping and mirroring.
- Use an appropriate stripe width and depth to optimize I/O performance.
- Use SCSI-3 persistent reservations (PR) compliant storage.
- Provide multiple access paths to disks through HBA and switch combinations, so that DMP can provide high availability against storage link failures and can balance the I/O load.
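A striped and mirrored volume layout can be created with `vxassist`. A sketch, where the disk group `datadg`, the volume name `datavol`, and the column count and stripe unit are illustrative values to tune for your workload:

```shell
# Create a 10 GB volume striped across 4 columns with a 64 KB
# stripe unit, mirrored for availability (layout=stripe-mirror).
vxassist -g datadg make datavol 10g layout=stripe-mirror ncol=4 stripeunit=64k
```
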
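Whether the storage supports SCSI-3 persistent reservations can be verified with the `vxfentsthdw` utility shipped with the fencing component. A sketch, where the path of the disk list file is illustrative:

```shell
# Test the disks listed in /tmp/disklist for SCSI-3 PR compliance.
# WARNING: by default vxfentsthdw overwrites data on the tested
# disks; add -r for a non-destructive, read-only test.
/opt/VRTSvcs/vxfen/bin/vxfentsthdw -f /tmp/disklist
```
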
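Path redundancy under DMP can be confirmed with `vxdmpadm`. A sketch, where the enclosure name `emc0` is illustrative:

```shell
# List all enclosures that DMP has discovered.
vxdmpadm listenclosure all

# Show every path to the LUNs in a given enclosure; each LUN
# should report at least two enabled paths so that DMP can
# survive a link failure and balance load across paths.
vxdmpadm getsubpaths enclosure=emc0
```
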