InfoScale™ 9.0 Solutions Guide - AIX
Last Published:
2025-08-01
Product(s):
InfoScale & Storage Foundation (9.0)
Platform: AIX
Setting up a filesystem for storage tiering with SmartTier
In the use case examples, the following circumstances apply:
The database containers are in the file system /DBdata
The database archived logs are in the file system /DBarch
To create the required file systems for SmartTier:
- List the disks:
# vxdisk list
DEVICE      TYPE          DISK        GROUP   STATUS
fas30700_0  auto:cdsdisk  fas30700_0  ---     online thin
fas30700_1  auto:cdsdisk  fas30700_1  ---     online thin
fas30700_2  auto:cdsdisk  fas30700_2  ---     online thin
fas30700_3  auto:cdsdisk  fas30700_3  ---     online thin
fas30700_4  auto:cdsdisk  fas30700_4  ---     online thin
fas30700_5  auto:cdsdisk  fas30700_5  ---     online thin
fas30700_6  auto:cdsdisk  fas30700_6  ---     online thin
fas30700_7  auto:cdsdisk  fas30700_7  ---     online thin
fas30700_8  auto:cdsdisk  fas30700_8  ---     online thin
Assume there are 3 LUNs on each tier.
- Create the disk group.
# vxdg init DBdg fas30700_0 fas30700_1 fas30700_2 \
fas30700_3 fas30700_4 fas30700_5 fas30700_6 fas30700_7 \
fas30700_8
- Create the volumes datavol and archvol.
# vxassist -g DBdg make datavol 200G alloc=fas30700_3,\
fas30700_4,fas30700_5
# vxassist -g DBdg make archvol 50G alloc=fas30700_3,\
fas30700_4,fas30700_5
Tag datavol and archvol as tier-1.
# vxassist -g DBdg settag datavol vxfs.placement_class.tier1
# vxassist -g DBdg settag archvol vxfs.placement_class.tier1
- Create the Tier-0 volumes.
# vxassist -g DBdg make tier0_vol1 50G alloc=fas30700_0,\
fas30700_1,fas30700_2
# vxassist -g DBdg make tier0_vol2 50G alloc=fas30700_0,\
fas30700_1,fas30700_2
# vxassist -g DBdg settag tier0_vol1 vxfs.placement_class.tier0
# vxassist -g DBdg settag tier0_vol2 vxfs.placement_class.tier0
- Create the Tier-2 volumes.
# vxassist -g DBdg make tier2_vol1 50G alloc=fas30700_6,\
fas30700_7,fas30700_8
# vxassist -g DBdg make tier2_vol2 50G alloc=fas30700_6,\
fas30700_7,fas30700_8
# vxassist -g DBdg settag tier2_vol1 vxfs.placement_class.tier2
# vxassist -g DBdg settag tier2_vol2 vxfs.placement_class.tier2
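At this point all six volumes carry placement-class tags. As a quick sanity check, the tags can be listed with vxassist listtag. The loop below is a sketch: it assumes the disk group and volume names from the steps above, and on a host without VxVM it only prints the commands it would run.

```shell
# Sketch: confirm the vxfs.placement_class tags on the tiered volumes.
# Volume and disk group names come from the steps above; on hosts
# without VxVM's vxassist, the commands are printed, not executed.
for vol in datavol archvol tier0_vol1 tier0_vol2 tier2_vol1 tier2_vol2; do
  if command -v vxassist >/dev/null 2>&1; then
    vxassist -g DBdg listtag "$vol"
  else
    echo "would run: vxassist -g DBdg listtag $vol"
  fi
done
```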
- Convert datavol and archvol to a volume set.
# vxvset -g DBdg make datavol_mvfs datavol
# vxvset -g DBdg make archvol_mvfs archvol
- Add the Tier-0 and Tier-2 volumes to datavol_mvfs.
# vxvset -g DBdg addvol datavol_mvfs tier0_vol1
# vxvset -g DBdg addvol datavol_mvfs tier2_vol1
- Add the Tier-2 volume to archvol_mvfs.
# vxvset -g DBdg addvol archvol_mvfs tier2_vol2
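After the addvol steps, each volume set's membership can be verified with vxvset list. A minimal sketch, assuming the volume set names above; on a host without VxVM the commands are only printed:

```shell
# Sketch: list the members of each volume set created above.
# Requires VxVM's vxvset; on hosts without it, the commands are
# printed instead of executed.
for vset in datavol_mvfs archvol_mvfs; do
  if command -v vxvset >/dev/null 2>&1; then
    vxvset -g DBdg list "$vset"
  else
    echo "would run: vxvset -g DBdg list $vset"
  fi
done
```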
- Make the VxFS file systems on datavol_mvfs and archvol_mvfs.
# mkfs -V vxfs /dev/vx/rdsk/DBdg/datavol_mvfs
# mkfs -V vxfs /dev/vx/rdsk/DBdg/archvol_mvfs
- Mount the DBdata file system
# mount -V vxfs /dev/vx/dsk/DBdg/datavol_mvfs /DBdata
- Mount the DBarch file system
# mount -V vxfs /dev/vx/dsk/DBdg/archvol_mvfs /DBarch
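Before migrating data, it is worth confirming that both mount points resolve. The check below is a sketch: df reports the containing file system, so it only confirms each path exists and is reachable, not that it is a VxFS mount.

```shell
# Sketch: check that both SmartTier mount points resolve.
# Note: df succeeds for any existing path, so this confirms
# reachability, not the file system type.
for fs in /DBdata /DBarch; do
  if df "$fs" >/dev/null 2>&1; then
    echo "$fs: reachable"
  else
    echo "$fs: not mounted"
  fi
done
```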
- Migrate the database into the newly created, SmartTier-ready file systems. You can migrate the database either by restoring it from a backup or by copying the appropriate files into the respective file systems.
See the database documentation for more information.
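With the tiers and file systems in place, relocation between tiers is driven by a VxFS placement policy assigned and enforced with the fsppadm command. The policy below is only an illustrative sketch: the policy name, rule name, pattern, and 30-day access-age threshold are hypothetical, and the XML should be validated against the placement_policy.dtd shipped with your VxFS release before use. The fsppadm calls are shown commented out because they require a mounted VxFS file system.

```shell
# Write an illustrative SmartTier placement policy for /DBdata.
# The policy/rule names, the "*" pattern, and the 30-day threshold
# are hypothetical examples; validate against the placement_policy.dtd
# shipped with your VxFS release before assigning.
cat > /tmp/DBdata_policy.xml <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE PLACEMENT_POLICY SYSTEM "/opt/VRTSvxfs/etc/placement_policy.dtd">
<PLACEMENT_POLICY Version="5.0" Name="DBdata_policy">
  <RULE Flags="data" Name="cold_data_to_tier2">
    <SELECT Flags="Data">
      <PATTERN> * </PATTERN>
    </SELECT>
    <CREATE>
      <ON><DESTINATION><CLASS> tier1 </CLASS></DESTINATION></ON>
    </CREATE>
    <RELOCATE>
      <TO><DESTINATION><CLASS> tier2 </CLASS></DESTINATION></TO>
      <WHEN><ACCAGE Units="days"><MIN Flags="gt"> 30 </MIN></ACCAGE></WHEN>
    </RELOCATE>
  </RULE>
</PLACEMENT_POLICY>
EOF
# Assign and enforce the policy (requires a mounted VxFS file system):
# fsppadm assign /DBdata /tmp/DBdata_policy.xml
# fsppadm enforce /DBdata
```

Later sections in this guide show the same mechanism used to relocate archive logs, tablespaces, and indexes between tiers.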