Veritas InfoScale™ 7.4 Solutions Guide - Linux
- Section I. Introducing Veritas InfoScale
- Section II. Solutions for Veritas InfoScale products
- Solutions for Veritas InfoScale products
- Use cases for Veritas InfoScale products
- Feature support across Veritas InfoScale 7.4 products
- Using SmartMove and Thin Provisioning with Sybase databases
- Running multiple parallel applications within a single cluster using the application isolation feature
- Scaling FSS storage capacity with dedicated storage nodes using application isolation feature
- Finding Veritas InfoScale product use cases information
- Section III. Stack-level migration to IPv6 or dual stack
- Section IV. Improving database performance
- Overview of database accelerators
- Improving database performance with Veritas Concurrent I/O
- Improving database performance with atomic write I/O
- About the atomic write I/O
- Requirements for atomic write I/O
- Restrictions on atomic write I/O functionality
- How the atomic write I/O feature of Storage Foundation helps MySQL databases
- VxVM and VxFS exported IOCTLs
- Configuring atomic write I/O support for MySQL on VxVM raw volumes
- Configuring atomic write I/O support for MySQL on VxFS file systems
- Dynamically growing the atomic write capable file system
- Disabling atomic write I/O support
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- Backing up and recovering
- Storage Foundation and High Availability Solutions backup and recovery methods
- Preserving multiple point-in-time copies
- Online database backups
- Backing up on an off-host cluster file system
- Database recovery using Storage Checkpoints
- Backing up and recovering in a NetBackup environment
- Off-host processing
- Creating and refreshing test environments
- Creating point-in-time copies of files
- Section VI. Maximizing storage utilization
- Optimizing storage tiering with SmartTier
- About SmartTier
- About VxFS multi-volume file systems
- About VxVM volume sets
- About volume tags
- SmartTier use cases for Sybase
- Setting up a filesystem for storage tiering with SmartTier
- Relocating old archive logs to tier two storage using SmartTier
- Relocating inactive tablespaces or segments to tier two storage
- Relocating active indexes to premium storage
- Relocating all indexes to premium storage
- Optimizing storage with Flexible Storage Sharing
- Section VII. Migrating data
- Understanding data migration
- Offline migration from LVM to VxVM
- Offline conversion of native file system to VxFS
- Online migration of a native file system to the VxFS file system
- About online migration of a native file system to the VxFS file system
- Administrative interface for online migration of a native file system to the VxFS file system
- Migrating a native file system to the VxFS file system
- Backing out an online migration of a native file system to the VxFS file system
- VxFS features not available during online migration
- Migrating storage arrays
- Migrating data between platforms
- Overview of the Cross-Platform Data Sharing (CDS) feature
- CDS disk format and disk groups
- Setting up your system to use Cross-platform Data Sharing (CDS)
- Maintaining your system
- Disk tasks
- Disk group tasks
- Changing the alignment of a disk group during disk encapsulation
- Changing the alignment of a non-CDS disk group
- Splitting a CDS disk group
- Moving objects between CDS disk groups and non-CDS disk groups
- Moving objects between CDS disk groups
- Joining disk groups
- Changing the default CDS setting for disk group creation
- Creating non-CDS disk groups
- Upgrading an older version non-CDS disk group
- Replacing a disk in a CDS disk group
- Setting the maximum number of devices for CDS disk groups
- Changing the DRL map and log size
- Creating a volume with a DRL log
- Setting the DRL map length
- Displaying information
- Determining the setting of the CDS attribute on a disk group
- Displaying the maximum number of devices in a CDS disk group
- Displaying map length and map alignment of traditional DRL logs
- Displaying the disk group alignment
- Displaying the log map length and alignment
- Displaying offset and length information in units of 512 bytes
- Default activation mode of shared disk groups
- Additional considerations when importing CDS disk groups
- File system considerations
- Considerations about data in the file system
- File system migration
- Specifying the migration target
- Using the fscdsadm command
- Checking that the metadata limits are not exceeded
- Maintaining the list of target operating systems
- Enforcing the established CDS limits on a file system
- Ignoring the established CDS limits on a file system
- Validating the operating system targets for a file system
- Displaying the CDS status of a file system
- Migrating a file system one time
- Migrating a file system on an ongoing basis
- When to convert a file system
- Converting the byte order of a file system
- Alignment value and block size
- Migrating a snapshot volume
- Migrating from Oracle ASM to Veritas File System
- Section VIII. Just in time availability solution for vSphere
- Section IX. Veritas InfoScale 4K sector device support solution
- Section X. Reference
About the migration
Veritas InfoScale supports real-time migration of standalone and Oracle RAC databases hosted on Oracle ASM disks to VxFS file systems mounted on VxVM disks.
The migration requires a source system where the database is hosted on Oracle ASM disks and a target that serves as a standby during the migration. That target contains VxVM disks on which the Veritas File System is mounted. The target disks can be on the same host as the source database or on a different host.
The migration is performed by the asm2vxfs.pl script. The script creates the target database on the designated VxFS mount point and automates most of the necessary configuration tasks, such as preparing the source and target databases for migration, configuring the listener on the target, and making other configuration changes. In a RAC environment, you can migrate multiple database instances at a time.
Applications can continue to access the database while the migration is in progress. Once the source and target databases are synchronized, a second script, switchover.pl, switches the role of the source database to standby and that of the target database to primary. All applications connected to the source database must be stopped manually before the transition begins and started again manually after the roles of the source and target databases are switched. This is the only downtime incurred during the migration process.
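The switchover step therefore follows a stop-switch-start pattern. The following is a minimal sketch of that sequence; the script name switchover.pl comes from this section, but the configuration file name migration.conf, the assumption that the script takes the configuration file as an argument, and the application stop and start commands are illustrative placeholders that will differ in your environment.

```sh
# Stop every application that connects to the source database
# (replace with your own application control commands).
systemctl stop myapp.service        # hypothetical application service

# Switch roles: the source database becomes the standby and the
# target database becomes the primary. Assumes switchover.pl
# accepts the configuration file as its argument.
perl switchover.pl migration.conf

# Restart the applications so that they connect to the new primary.
systemctl start myapp.service       # hypothetical application service
```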
The total migration time depends on the following factors:
- The amount of redo information (load) being generated on the source system
- The amount of system resources available for the new target database
- The size of the source database
You can run the migration script on the command line using a configuration file.
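For illustration, a command-line invocation might look like the sketch below. The script name asm2vxfs.pl is taken from this section; the configuration file name migration.conf and the assumption that the script accepts the file as a command-line argument are not confirmed by this guide and may differ from the actual script interface.

```sh
# Start the migration using a prepared configuration file
# (see the parameter table and sample file below).
# Assumes asm2vxfs.pl takes the configuration file as an argument.
perl asm2vxfs.pl migration.conf
```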
Figure: Migration with the target storage on the same host as the source illustrates the migration process with the target storage on the same host as the source.
Figure: Migration with the target storage on a host different from the source illustrates the migration process with the target storage on a different host.
Table: Configuration file options lists the configuration file options that are used with the migration script.
Table: Configuration file options
| Configuration file parameter | Description |
|---|---|
| ORACLE_BASE | Path of the Oracle base directory on the target system. |
| PRIMARY | Name of the source database. |
| PRIMARY_INSTANCE | Any instance name of the source database. |
| STANDBY | Name of the target database. |
| STANDBY_INSTANCES | Instances of the target database. Note: Multiple database instances are specified as a comma-separated list. |
| PRIMARYHOST | Host name of the instance specified in the PRIMARY_INSTANCE parameter. |
| STANDBY_HOSTS | Host names of all the target instances. Note: Multiple host names are specified as a comma-separated list. |
| DATA_MNT | VxFS mount point. |
| RECOVERY_MNT | Destination path of the recovery files (on top of VxFS). |
| SYSTEM_PASSWORD | Password of the system user of the source database. |
| SYS_PASSWORD | Password of the SYS user of the source database. |
Set the parameter values in the following format:
PARAMETER=value
A sample configuration file is as follows:
ORACLE_BASE=/oracle_base
PRIMARY=sourcedb
PRIMARY_INSTANCE=sourcedb1
STANDBY=std
STANDBY_INSTANCES=std,std2
PRIMARYHOST=example.com
STANDBY_HOSTS=example2.com
DATA_MNT=/data_mntpt
RECOVERY_MNT=/data_mntpt/recover_dest
SYSTEM_PASSWORD=system123
SYS_PASSWORD=sys123