InfoScale™ 9.0 Solutions Guide - AIX
- Section I. Introducing InfoScale
- Section II. Solutions for InfoScale products
- Section III. Stack-level migration to IPv6 or dual stack
- Section IV. Improving database performance
- Overview of database accelerators
- Improving database performance with Quick I/O
- About Quick I/O
- Tasks for setting up Quick I/O in a database environment
- Creating DB2 database containers as Quick I/O files using qiomkfile
- Creating Sybase files as Quick I/O files using qiomkfile
- Preallocating space for Quick I/O files using the setext command
- Accessing regular VxFS files as Quick I/O files
- Extending a Quick I/O file
- Disabling Quick I/O
- Improving database performance with Cached Quick I/O
- Improving database performance with Concurrent I/O
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- Backing up and recovering
- Storage Foundation and High Availability solutions backup and recovery methods
- Preserving multiple point-in-time copies
- Online database backups
- Backing up on an off-host cluster file system
- Database recovery using Storage Checkpoints
- Backing up and recovering in a NetBackup environment
- Off-host processing
- Creating and refreshing test environments
- Creating point-in-time copies of files
- Section VI. Maximizing storage utilization
- Optimizing storage tiering with SmartTier
- About SmartTier
- About VxFS multi-volume file systems
- About VxVM volume sets
- About volume tags
- SmartTier use cases for Sybase
- Setting up a file system for storage tiering with SmartTier
- Relocating old archive logs to tier two storage using SmartTier
- Relocating inactive tablespaces or segments to tier two storage
- Relocating active indexes to premium storage
- Relocating all indexes to premium storage
- Optimizing storage with Flexible Storage Sharing
- Section VII. Migrating data
- Understanding data migration
- Offline migration of native volumes and file systems to VxVM and VxFS
- About converting LVM, JFS and JFS2 configurations
- Initializing unused LVM physical volumes as VxVM disks
- Converting LVM volume groups to VxVM disk groups
- Volume group conversion limitations
- Conversion process summary
- Conversion of JFS and JFS2 file systems to VxFS
- Conversion steps explained
- Identify LVM disks and volume groups for conversion
- Analyze an LVM volume group to see if conversion is possible
- Take action to make conversion possible if analysis fails
- Back up your LVM configuration and user data
- Plan for new VxVM logical volume names
- Stop application access to volumes in the volume group to be converted
- Conversion and reboot
- Convert a volume group
- Take action if conversion fails
- Implement changes for new VxVM logical volume names
- Restart applications on the new VxVM volumes
- Tailor your VxVM configuration
- Restoring the LVM volume group configuration
- Examples of using vxconvert
- About test cases
- Converting LVM, JFS and JFS2 to VxVM and VxFS
- Online migration of native LVM volumes to VxVM volumes
- About online migration from Logical Volume Manager (LVM) volumes to VxVM volumes
- Online migration from LVM volumes in standalone environment to VxVM volumes
- Administrative interface for online migration from LVM in standalone environment to VxVM
- Preparing for online migration from LVM in standalone environment to VxVM
- Migrating from LVM in standalone environment to VxVM
- Reconfiguring the application to use VxVM volume device path
- Backing out online migration of LVM in standalone environment to VxVM
- Do's and Don'ts for online migration from LVM in standalone environment to VxVM
- Scenarios not supported for migration from LVM in standalone environment to VxVM
- Online migration from LVM volumes in VCS HA environment to VxVM volumes
- About online migration from LVM in VCS HA environment to VxVM
- Administrative interface for online migration from LVM in VCS HA environment to VxVM
- Preparing for online migration from LVM in VCS HA environment to VxVM
- Migrating from LVM in VCS HA environment to VxVM
- Migrating configurations with multiple volume groups
- Backing out online migration of LVM in VCS HA environment to VxVM
- Do's and Don'ts for online migration from LVM in VCS HA environment to VxVM
- Scenarios not supported for migration from LVM VCS HA environment to VxVM
- Online migration of a native file system to the VxFS file system
- About online migration of a native file system to the VxFS file system
- Administrative interface for online migration of a native file system to the VxFS file system
- Migrating a native file system to the VxFS file system
- Migrating a source file system to the VxFS file system over NFS v3
- Backing out an online migration of a native file system to the VxFS file system
- VxFS features not available during online migration
- Migrating storage arrays
- Migrating data between platforms
- Overview of the Cross-Platform Data Sharing (CDS) feature
- CDS disk format and disk groups
- Setting up your system to use Cross-platform Data Sharing (CDS)
- Maintaining your system
- Disk tasks
- Disk group tasks
- Changing the alignment of a disk group during disk encapsulation
- Changing the alignment of a non-CDS disk group
- Splitting a CDS disk group
- Moving objects between CDS disk groups and non-CDS disk groups
- Moving objects between CDS disk groups
- Joining disk groups
- Changing the default CDS setting for disk group creation
- Creating non-CDS disk groups
- Upgrading an older version non-CDS disk group
- Replacing a disk in a CDS disk group
- Setting the maximum number of devices for CDS disk groups
- Changing the DRL map and log size
- Creating a volume with a DRL log
- Setting the DRL map length
- Displaying information
- Determining the setting of the CDS attribute on a disk group
- Displaying the maximum number of devices in a CDS disk group
- Displaying map length and map alignment of traditional DRL logs
- Displaying the disk group alignment
- Displaying the log map length and alignment
- Displaying offset and length information in units of 512 bytes
- Default activation mode of shared disk groups
- Additional considerations when importing CDS disk groups
- File system considerations
- Considerations about data in the file system
- File system migration
- Specifying the migration target
- Using the fscdsadm command
- Checking that the metadata limits are not exceeded
- Maintaining the list of target operating systems
- Enforcing the established CDS limits on a file system
- Ignoring the established CDS limits on a file system
- Validating the operating system targets for a file system
- Displaying the CDS status of a file system
- Migrating a file system one time
- Migrating a file system on an ongoing basis
- When to convert a file system
- Converting the byte order of a file system
- Alignment value and block size
- Migrating a snapshot volume
- Section VIII. InfoScale 4K sector device support solution
Preparing for the replica database
To prepare a snapshot for a replica database on the primary host
- If you have not already done so, prepare the host to use the snapshot volume that contains the copy of the database tables. Set up any new database logs and configuration files that are required to initialize the database. On the master node, verify that the volume has an instant snap data change object (DCO) and DCO volume, and that FastResync is enabled on the volume:
# vxprint -g database_dg -F%instant database_vol
# vxprint -g database_dg -F%fastresync database_vol
If both commands return ON, proceed to step 3. Otherwise, continue with step 2.
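The decision above can be folded into a small shell test. This is a sketch only: the INSTANT and FASTRESYNC values below are hypothetical stand-ins for the output of the two vxprint commands, which require a live VxVM configuration.

```shell
# Hypothetical values; on a real system capture them with command
# substitution, for example:
#   INSTANT=`vxprint -g database_dg -F%instant database_vol`
INSTANT=ON
FASTRESYNC=ON

if [ "$INSTANT" = "ON" ] && [ "$FASTRESYNC" = "ON" ]; then
    STATUS="prepared"        # proceed to the snapshot step
else
    STATUS="needs-prepare"   # run vxsnap prepare first
fi
echo "$STATUS"
```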
- Use the following command to prepare a volume for instant snapshots:
# vxsnap -g database_dg prepare database_vol [regionsize=size] \
    [ndcomirs=number] [alloc=storage_attributes]
- Use the following command to make a full-sized snapshot, snapvol, of the tablespace volume by breaking off plexes from the original volume:
# vxsnap -g database_dg make \
    source=volume/newvol=snapvol/nmirror=N
The nmirror attribute specifies the number of mirrors, N, in the snapshot volume.
If the volume does not have any available plexes, or its layout does not support plex break-off, prepare an empty volume for the snapshot.
- Use the vxprint command on the original volume to find the required size for the snapshot volume.
# LEN=`vxprint [-g diskgroup] -F%len volume`
Note:
The commands shown in this and subsequent steps assume that you are using a Bourne-type shell such as sh, ksh, or bash. You may need to modify them for other shells such as csh or tcsh. These steps are valid only for an instant snap DCO.
- Use the vxprint command on the original volume to discover the name of its DCO:
# DCONAME=`vxprint [-g diskgroup] -F%dco_name volume`
- Use the vxprint command on the DCO to discover its region size (in blocks):
# RSZ=`vxprint [-g diskgroup] -F%regionsz $DCONAME`
- Use the vxassist command to create a volume, snapvol, of the required size and redundancy. You can use storage attributes to specify which disks should be used for the volume. The init=active attribute makes the volume available immediately.
# vxassist [-g diskgroup] make snapvol $LEN \
    [layout=mirror nmirror=number] init=active \
    [storage_attributes]
- Prepare the snapshot volume for instant snapshot operations as shown here:
# vxsnap [-g diskgroup] prepare snapvol [ndcomirs=number] \
    regionsz=$RSZ [storage_attributes]
It is recommended that you specify the same number of DCO mirrors (ndcomirs) as the number of mirrors in the volume (nmirror).
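The empty-volume preparation steps above can be strung together as one script. The sketch below stubs vxprint, vxassist, and vxsnap with shell functions that echo canned values, so the flow can be traced without a VxVM installation; the disk group, volume names, and returned lengths are hypothetical examples, not output from a real system.

```shell
# Stub the VxVM commands so this is a dry run; remove these functions
# on a real system to invoke the actual binaries.
vxprint() {
    case "$*" in
        *%len*)      echo 4194304 ;;          # volume length in sectors
        *%dco_name*) echo database_vol_dco ;; # name of the volume's DCO
        *%regionsz*) echo 128 ;;              # DCO region size in blocks
    esac
}
vxassist() { echo "vxassist $*"; }
vxsnap()   { echo "vxsnap $*"; }

# Capture the length, DCO name, and region size of the original volume.
LEN=`vxprint -g database_dg -F%len database_vol`
DCONAME=`vxprint -g database_dg -F%dco_name database_vol`
RSZ=`vxprint -g database_dg -F%regionsz $DCONAME`

# Create the empty snapshot volume and prepare it for instant snapshots.
vxassist -g database_dg make snapvol $LEN layout=mirror nmirror=2 init=active
vxsnap -g database_dg prepare snapvol ndcomirs=2 regionsz=$RSZ
```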
- To create the snapshot, use the following command:
# vxsnap -g database_dg make source=volume/snapvol=snapvol
If a database spans more than one volume, specify all the volumes and their snapshot volumes as separate tuples on the same line, for example:
# vxsnap -g database_dg make \
    source=vol1/snapvol=svol1/nmirror=2 \
    source=vol2/snapvol=svol2/nmirror=2 \
    source=vol3/snapvol=svol3/nmirror=2
If you want to save disk space, you can use the following command to create a space-optimized snapshot instead:
# vxsnap -g database_dg make \
    source=volume/newvol=snapvol/cache=cacheobject
The argument cacheobject is the name of a pre-existing cache that you have created in the disk group for use with space-optimized snapshots. To create the cache object, follow step 10 through step 13.
If several space-optimized snapshots are to be created at the same time, these can all specify the same cache object as shown in this example:
# vxsnap -g database_dg make \
    source=vol1/newvol=svol1/cache=dbaseco \
    source=vol2/newvol=svol2/cache=dbaseco \
    source=vol3/newvol=svol3/cache=dbaseco
- Decide on the following characteristics that you want to allocate to the cache volume that underlies the cache object:
  - The size of the cache volume should be sufficient to record changes to the parent volumes during the interval between snapshot refreshes. A suggested value is 10% of the total size of the parent volumes for a refresh interval of 24 hours.
  - If redundancy is a desired characteristic of the cache volume, it should be mirrored. This increases the space that is required for the cache volume in proportion to the number of mirrors that it has.
  - If the cache volume is mirrored, space is required on at least as many disks as it has mirrors. These disks should not be shared with the disks used for the parent volumes. The disks should also be chosen to avoid impacting I/O performance for critical volumes, or hindering disk group split and join operations.
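As a worked example of the 10% sizing guideline, the fragment below totals the lengths of three hypothetical parent volumes (in 512-byte blocks, as vxprint -F%len reports them) and takes a tenth of the sum as the cache volume size.

```shell
# Hypothetical parent volume lengths in 512-byte blocks; on a real
# system capture each with: vxprint -g database_dg -F%len vol1
VOL1_LEN=20971520    # 10 GB
VOL2_LEN=10485760    # 5 GB
VOL3_LEN=10485760    # 5 GB

TOTAL=`expr $VOL1_LEN + $VOL2_LEN + $VOL3_LEN`
CACHE_LEN=`expr $TOTAL / 10`   # 10% for a 24-hour refresh interval
echo "$CACHE_LEN"              # suggested cache volume size in blocks
```

Here the suggested size comes out to 4194304 blocks, that is, 2 GB for 20 GB of parent volumes.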
- Having decided on its characteristics, use the vxassist command to create the volume that is to be used for the cache volume. The following example creates a mirrored cache volume, cachevol, with size 1GB in the disk group, mydg, on the disks disk16 and disk17:
# vxassist -g mydg make cachevol 1g layout=mirror \
    init=active disk16 disk17
The attribute init=active is specified to make the cache volume immediately available for use.
- Use the vxmake cache command to create a cache object on top of the cache volume that you created in the previous step:
# vxmake [-g diskgroup] cache cache_object \
    cachevolname=volume [regionsize=size] [autogrow=on] \
    [highwatermark=hwmk] [autogrowby=agbvalue] \
    [maxautogrow=maxagbvalue]
If you specify the region size, it must be a power of 2, and be greater than or equal to 16KB (16k). If not specified, the region size of the cache is set to 64KB.
Note:
All space-optimized snapshots that share the cache must have a region size that is equal to or an integer multiple of the region size set on the cache. Snapshot creation also fails if the original volume's region size is smaller than the cache's region size.
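The region size constraints (a power of 2, at least 16KB, and an integer multiple of the cache's region size) can be checked with shell arithmetic before creating the snapshot. The values below are hypothetical examples.

```shell
CACHE_RSZ_KB=32   # region size set on the cache object
SNAP_RSZ_KB=64    # candidate region size for the space-optimized snapshot

OK=yes
# A power of two has no bits in common with itself minus one.
[ $((SNAP_RSZ_KB & (SNAP_RSZ_KB - 1))) -eq 0 ] || OK=no
# Must be at least 16KB.
[ "$SNAP_RSZ_KB" -ge 16 ] || OK=no
# Must be an integer multiple of the cache's region size.
[ $((SNAP_RSZ_KB % CACHE_RSZ_KB)) -eq 0 ] || OK=no
echo "$OK"
```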
To prevent the cache from growing automatically as space is required, specify autogrow=off. By default, the ability to automatically grow the cache is turned on.
In the following example, the cache object, cobjmydg, is created over the cache volume, cachevol, the region size of the cache is set to 32KB, and the autogrow feature is enabled:
# vxmake -g mydg cache cobjmydg cachevolname=cachevol \
    regionsize=32k autogrow=on
- Having created the cache object, use the following command to enable it:
# vxcache [-g diskgroup] start cache_object
For example, to start the cache object cobjmydg:
# vxcache -g mydg start cobjmydg
Note:
This step sets up the snapshot volumes and starts tracking changes to the original volumes.