InfoScale™ 9.0 Solutions Guide - AIX
- Section I. Introducing InfoScale
- Section II. Solutions for InfoScale products
- Section III. Stack-level migration to IPv6 or dual stack
- Section IV. Improving database performance
- Overview of database accelerators
- Improving database performance with Quick I/O
- About Quick I/O
- Tasks for setting up Quick I/O in a database environment
- Creating DB2 database containers as Quick I/O files using qiomkfile
- Creating Sybase files as Quick I/O files using qiomkfile
- Preallocating space for Quick I/O files using the setext command
- Accessing regular VxFS files as Quick I/O files
- Extending a Quick I/O file
- Disabling Quick I/O
- Improving database performance with Cached Quick I/O
- Improving database performance with Concurrent I/O
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- Backing up and recovering
- Storage Foundation and High Availability solutions backup and recovery methods
- Preserving multiple point-in-time copies
- Online database backups
- Backing up on an off-host cluster file system
- Database recovery using Storage Checkpoints
- Backing up and recovering in a NetBackup environment
- Off-host processing
- Creating and refreshing test environments
- Creating point-in-time copies of files
- Section VI. Maximizing storage utilization
- Optimizing storage tiering with SmartTier
- About SmartTier
- About VxFS multi-volume file systems
- About VxVM volume sets
- About volume tags
- SmartTier use cases for Sybase
- Setting up a filesystem for storage tiering with SmartTier
- Relocating old archive logs to tier two storage using SmartTier
- Relocating inactive tablespaces or segments to tier two storage
- Relocating active indexes to premium storage
- Relocating all indexes to premium storage
- Optimizing storage with Flexible Storage Sharing
- Section VII. Migrating data
- Understanding data migration
- Offline migration of native volumes and file systems to VxVM and VxFS
- About converting LVM, JFS and JFS2 configurations
- Initializing unused LVM physical volumes as VxVM disks
- Converting LVM volume groups to VxVM disk groups
- Volume group conversion limitations
- Conversion process summary
- Conversion of JFS and JFS2 file systems to VxFS
- Conversion steps explained
- Identify LVM disks and volume groups for conversion
- Analyze an LVM volume group to see if conversion is possible
- Take action to make conversion possible if analysis fails
- Back up your LVM configuration and user data
- Plan for new VxVM logical volume names
- Stop application access to volumes in the volume group to be converted
- Conversion and reboot
- Convert a volume group
- Take action if conversion fails
- Implement changes for new VxVM logical volume names
- Restart applications on the new VxVM volumes
- Tailor your VxVM configuration
- Restoring the LVM volume group configuration
- Examples of using vxconvert
- About test cases
- Converting LVM, JFS and JFS2 to VxVM and VxFS
- Online migration of native LVM volumes to VxVM volumes
- About online migration from Logical Volume Manager (LVM) volumes to VxVM volumes
- Online migration from LVM volumes in standalone environment to VxVM volumes
- Administrative interface for online migration from LVM in standalone environment to VxVM
- Preparing for online migration from LVM in standalone environment to VxVM
- Migrating from LVM in standalone environment to VxVM
- Reconfiguring the application to use VxVM volume device path
- Backing out online migration of LVM in standalone environment to VxVM
- Do's and Don'ts for online migration from LVM in standalone environment to VxVM
- Scenarios not supported for migration from LVM in standalone environment to VxVM
- Online migration from LVM volumes in VCS HA environment to VxVM volumes
- About online migration from LVM in VCS HA environment to VxVM
- Administrative interface for online migration from LVM in VCS HA environment to VxVM
- Preparing for online migration from LVM in VCS HA environment to VxVM
- Migrating from LVM in VCS HA environment to VxVM
- Migrating configurations with multiple volume groups
- Backing out online migration of LVM in VCS HA environment to VxVM
- Do's and Don'ts for online migration from LVM in VCS HA environment to VxVM
- Scenarios not supported for migration from LVM in VCS HA environment to VxVM
- Online migration of a native file system to the VxFS file system
- About online migration of a native file system to the VxFS file system
- Administrative interface for online migration of a native file system to the VxFS file system
- Migrating a native file system to the VxFS file system
- Migrating a source file system to the VxFS file system over NFS v3
- Backing out an online migration of a native file system to the VxFS file system
- VxFS features not available during online migration
- Migrating storage arrays
- Migrating data between platforms
- Overview of the Cross-Platform Data Sharing (CDS) feature
- CDS disk format and disk groups
- Setting up your system to use Cross-platform Data Sharing (CDS)
- Maintaining your system
- Disk tasks
- Disk group tasks
- Changing the alignment of a disk group during disk encapsulation
- Changing the alignment of a non-CDS disk group
- Splitting a CDS disk group
- Moving objects between CDS disk groups and non-CDS disk groups
- Moving objects between CDS disk groups
- Joining disk groups
- Changing the default CDS setting for disk group creation
- Creating non-CDS disk groups
- Upgrading an older version non-CDS disk group
- Replacing a disk in a CDS disk group
- Setting the maximum number of devices for CDS disk groups
- Changing the DRL map and log size
- Creating a volume with a DRL log
- Setting the DRL map length
- Displaying information
- Determining the setting of the CDS attribute on a disk group
- Displaying the maximum number of devices in a CDS disk group
- Displaying map length and map alignment of traditional DRL logs
- Displaying the disk group alignment
- Displaying the log map length and alignment
- Displaying offset and length information in units of 512 bytes
- Default activation mode of shared disk groups
- Additional considerations when importing CDS disk groups
- File system considerations
- Considerations about data in the file system
- File system migration
- Specifying the migration target
- Using the fscdsadm command
- Checking that the metadata limits are not exceeded
- Maintaining the list of target operating systems
- Enforcing the established CDS limits on a file system
- Ignoring the established CDS limits on a file system
- Validating the operating system targets for a file system
- Displaying the CDS status of a file system
- Migrating a file system one time
- Migrating a file system on an ongoing basis
- When to convert a file system
- Converting the byte order of a file system
- Alignment value and block size
- Migrating a snapshot volume
- Section VIII. InfoScale 4K sector device support solution
Creating DB2 database containers as Quick I/O files using qiomkfile
The best way to preallocate space for tablespace containers and to make them accessible through the Quick I/O interface is to use the qiomkfile command. You can use qiomkfile to create Quick I/O files for either temporary or permanent tablespaces.
For DB2, you can create Database Managed Space (DMS) containers with the type 'DEVICE' using Quick I/O.
Prerequisites

Usage notes
Warning:
Exercise caution when using absolute path names. Extra steps may be required during database backup and restore procedures to preserve symbolic links. If you restore files to directories different from the original paths, you must change the symbolic links that use absolute path names to point to the new path names before the database is restarted.
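The distinction matters because qiomkfile creates the visible file name as a relative symbolic link. The behavior can be illustrated on any POSIX system with a sketch (the temporary paths here are hypothetical, and the ::cdev:vxfs: suffix is only interpreted by a VxFS file system): a relative link survives a restore to a different directory, while an absolute link keeps pointing at the original path.

```shell
# Sketch: relative vs. absolute symbolic links across a simulated restore.
# All paths are hypothetical; ::cdev:vxfs: is only meaningful on VxFS.
old=$(mktemp -d)
new=$(mktemp -d)
touch "$old/.dbfile"
ln -s ".dbfile::cdev:vxfs:" "$old/relfile"        # relative link, as qiomkfile creates
ln -s "$old/.dbfile::cdev:vxfs:" "$old/absfile"   # absolute link
cp -R "$old/." "$new/"                            # simulate restoring to a new directory
readlink "$new/relfile"   # resolves relative to $new, so it is still correct
readlink "$new/absfile"   # still points into $old and must be repointed by hand
```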
To create a DB2 container as a Quick I/O file using qiomkfile
- Create a Quick I/O-capable file using the qiomkfile command:
# /opt/VRTS/bin/qiomkfile -s file_size /mount_point/filename
For example, the following command creates a 100MB Quick I/O-capable file named dbfile on the VxFS file system /db01 using a relative path name:
# /opt/VRTS/bin/qiomkfile -s 100m /db01/dbfile
# ls -al
-rw-r--r--   1 db2inst1  db2iadm1  104857600  Oct  2 13:42  .dbfile
lrwxrwxrwx   1 db2inst1  db2iadm1         19  Oct  2 13:42  dbfile -> \
.dbfile::cdev:vxfs:
- Create tablespace containers using this file with the following SQL statements:
$ db2 connect to database
$ db2 create tablespace tbsname managed by database using \
( DEVICE /mount_point/filename size )
$ db2 terminate
In the example from step 1, qiomkfile creates a regular file named /db01/.dbfile, which has the real space allocated. Then, qiomkfile creates a symbolic link named /db01/dbfile. This symbolic link is a relative link to the Quick I/O interface for /db01/.dbfile, that is, to the .dbfile::cdev:vxfs: file. The symbolic link allows .dbfile to be accessed by any database or application through its Quick I/O interface.
We can then add the file to the DB2 database PROD:
$ db2 connect to PROD
$ db2 create tablespace NEWTBS managed by database using \
( DEVICE '/db01/dbfile' 100m )
$ db2 terminate
Creating Sybase files as Quick I/O files using qiomkfile
To create a Sybase database file as a Quick I/O file using qiomkfile
- Create a database file using the qiomkfile command:
# /opt/VRTS/bin/qiomkfile -s file_size /mount_point/filename
For example, the following command creates a 100MB database file named dbfile on the VxFS file system /db01 using a relative path name:
# /opt/VRTS/bin/qiomkfile -s 100m /db01/dbfile
$ ls -al
-rw-r--r--   1 sybase  sybase  104857600  Oct  2 13:42  .dbfile
lrwxrwxrwx   1 sybase  sybase         19  Oct  2 13:42  dbfile -> \
.dbfile::cdev:vxfs:
- Add a device to the Sybase dataserver device pool for the Quick I/O file using the disk init command:
$ isql -Usa -Psa_password -Sdataserver_name
> disk init
> name="device_name",
> physname="/mount_point/filename",
> vdevno="device_number",
> size=51200
> go
> alter database production on new_device=file_size
> go
The size is specified in 2K units. The Sybase Adaptive Server Enterprise Reference Manual contains more information on the disk init command.
In the example from step 1, qiomkfile creates a regular file named /db01/.dbfile, which has the real space allocated. Then, qiomkfile creates a symbolic link named /db01/dbfile. The symbolic link is a relative link to the Quick I/O interface for /db01/.dbfile, that is, to the .dbfile::cdev:vxfs: file. The symbolic link allows .dbfile to be accessed by any database or application through its Quick I/O interface.
The device size is a multiple of 2K pages. In the example, 51200 2K pages equals 104857600 bytes. The size given to the qiomkfile command must match this value.
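The 2K-unit arithmetic above can be checked with ordinary shell arithmetic; the variable names here are illustrative only:

```shell
# Sketch: convert a device size in MB to the 2K-page count expected by
# disk init, and back to the byte count that qiomkfile -s allocates.
mb=100
pages=$(( mb * 1024 / 2 ))   # 512 two-kilobyte pages per megabyte
bytes=$(( pages * 2048 ))    # total bytes qiomkfile must allocate
echo "disk init size=$pages, qiomkfile -s ${mb}m = $bytes bytes"
```

For the 100MB example this yields size=51200 and 104857600 bytes, matching the listing shown earlier.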
For example, the following commands add a 100MB Quick I/O file named dbfile to the list of devices used by database production, using the disk init command:
$ isql -Usa -Psa_password -Sdataserver_name
> disk init
> name="new_device",
> physname="/db01/dbfile",
> vdevno="device_number",
> size=51200
> go
> alter database production on new_device=100
> go
See the Sybase Adaptive Server Enterprise Reference Manual.
- Use the file to create a new segment or add to an existing segment.
To add a new segment:
$ isql -Usa -Psa_password -Sdataserver_name
> sp_addsegment new_segment, db_name, device_name
> go
To extend a segment:
$ isql -Usa -Psa_password -Sdataserver_name
> sp_extendsegment segment_name, db_name, device_name
> go
For example, to create a new segment named segment2 for device dbfile on database production:
$ isql -Usa -Psa_password -Sdataserver_name
> sp_addsegment segment2, production, dbfile
> go
For example, to extend a segment named segment1 for device dbfile on database production:
$ isql -Usa -Psa_password -Sdataserver_name
> sp_extendsegment segment1, production, dbfile
> go
See the Sybase Adaptive Server Enterprise Reference Manual.