InfoScale™ 9.0 Solutions Guide - AIX
Last Published: 2025-08-01
Product(s): InfoScale & Storage Foundation (9.0)
Platform: AIX
- Section I. Introducing InfoScale
- Section II. Solutions for InfoScale products
- Section III. Stack-level migration to IPv6 or dual stack
- Section IV. Improving database performance
- Overview of database accelerators
- Improving database performance with Quick I/O
- About Quick I/O
- Tasks for setting up Quick I/O in a database environment
- Creating DB2 database containers as Quick I/O files using qiomkfile
- Creating Sybase files as Quick I/O files using qiomkfile
- Preallocating space for Quick I/O files using the setext command
- Accessing regular VxFS files as Quick I/O files
- Extending a Quick I/O file
- Disabling Quick I/O
- Improving database performance with Cached Quick I/O
- Improving database performance with Concurrent I/O
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- Backing up and recovering
- Storage Foundation and High Availability solutions backup and recovery methods
- Preserving multiple point-in-time copies
- Online database backups
- Backing up on an off-host cluster file system
- Database recovery using Storage Checkpoints
- Backing up and recovering in a NetBackup environment
- Off-host processing
- Creating and refreshing test environments
- Creating point-in-time copies of files
- Section VI. Maximizing storage utilization
- Optimizing storage tiering with SmartTier
- About SmartTier
- About VxFS multi-volume file systems
- About VxVM volume sets
- About volume tags
- SmartTier use cases for Sybase
- Setting up a filesystem for storage tiering with SmartTier
- Relocating old archive logs to tier two storage using SmartTier
- Relocating inactive tablespaces or segments to tier two storage
- Relocating active indexes to premium storage
- Relocating all indexes to premium storage
- Optimizing storage with Flexible Storage Sharing
- Optimizing storage tiering with SmartTier
- Section VII. Migrating data
- Understanding data migration
- Offline migration of native volumes and file systems to VxVM and VxFS
- About converting LVM, JFS and JFS2 configurations
- Initializing unused LVM physical volumes as VxVM disks
- Converting LVM volume groups to VxVM disk groups
- Volume group conversion limitations
- Conversion process summary
- Conversion of JFS and JFS2 file systems to VxFS
- Conversion steps explained
- Identify LVM disks and volume groups for conversion
- Analyze an LVM volume group to see if conversion is possible
- Take action to make conversion possible if analysis fails
- Back up your LVM configuration and user data
- Plan for new VxVM logical volume names
- Stop application access to volumes in the volume group to be converted
- Conversion and reboot
- Convert a volume group
- Take action if conversion fails
- Implement changes for new VxVM logical volume names
- Restart applications on the new VxVM volumes
- Tailor your VxVM configuration
- Restoring the LVM volume group configuration
- Examples of using vxconvert
- About test cases
- Converting LVM, JFS and JFS2 to VxVM and VxFS
- Online migration of native LVM volumes to VxVM volumes
- About online migration from Logical Volume Manager (LVM) volumes to VxVM volumes
- Online migration from LVM volumes in standalone environment to VxVM volumes
- Administrative interface for online migration from LVM in standalone environment to VxVM
- Preparing for online migration from LVM in standalone environment to VxVM
- Migrating from LVM in standalone environment to VxVM
- Reconfiguring the application to use VxVM volume device path
- Backing out online migration of LVM in standalone environment to VxVM
- Do's and Don'ts for online migration from LVM in standalone environment to VxVM
- Scenarios not supported for migration from LVM in standalone environment to VxVM
- Online migration from LVM volumes in VCS HA environment to VxVM volumes
- About online migration from LVM in VCS HA environment to VxVM
- Administrative interface for online migration from LVM in VCS HA environment to VxVM
- Preparing for online migration from LVM in VCS HA environment to VxVM
- Migrating from LVM in VCS HA environment to VxVM
- Migrating configurations with multiple volume groups
- Backing out online migration of LVM in VCS HA environment to VxVM
- Do's and Don'ts for online migration from LVM in VCS HA environment to VxVM
- Scenarios not supported for migration from LVM VCS HA environment to VxVM
- Online migration of a native file system to the VxFS file system
- About online migration of a native file system to the VxFS file system
- Administrative interface for online migration of a native file system to the VxFS file system
- Migrating a native file system to the VxFS file system
- Migrating a source file system to the VxFS file system over NFS v3
- Backing out an online migration of a native file system to the VxFS file system
- VxFS features not available during online migration
- Migrating storage arrays
- Migrating data between platforms
- Overview of the Cross-Platform Data Sharing (CDS) feature
- CDS disk format and disk groups
- Setting up your system to use Cross-platform Data Sharing (CDS)
- Maintaining your system
- Disk tasks
- Disk group tasks
- Changing the alignment of a disk group during disk encapsulation
- Changing the alignment of a non-CDS disk group
- Splitting a CDS disk group
- Moving objects between CDS disk groups and non-CDS disk groups
- Moving objects between CDS disk groups
- Joining disk groups
- Changing the default CDS setting for disk group creation
- Creating non-CDS disk groups
- Upgrading an older version non-CDS disk group
- Replacing a disk in a CDS disk group
- Setting the maximum number of devices for CDS disk groups
- Changing the DRL map and log size
- Creating a volume with a DRL log
- Setting the DRL map length
- Displaying information
- Determining the setting of the CDS attribute on a disk group
- Displaying the maximum number of devices in a CDS disk group
- Displaying map length and map alignment of traditional DRL logs
- Displaying the disk group alignment
- Displaying the log map length and alignment
- Displaying offset and length information in units of 512 bytes
- Default activation mode of shared disk groups
- Additional considerations when importing CDS disk groups
- File system considerations
- Considerations about data in the file system
- File system migration
- Specifying the migration target
- Using the fscdsadm command
- Checking that the metadata limits are not exceeded
- Maintaining the list of target operating systems
- Enforcing the established CDS limits on a file system
- Ignoring the established CDS limits on a file system
- Validating the operating system targets for a file system
- Displaying the CDS status of a file system
- Migrating a file system one time
- Migrating a file system on an ongoing basis
- When to convert a file system
- Converting the byte order of a file system
- Alignment value and block size
- Migrating a snapshot volume
- Section VIII. InfoScale 4K sector device support solution
Migrating a snapshot volume
This example demonstrates how to migrate a snapshot volume containing a VxFS file system from a big-endian system, such as Solaris SPARC or HP-UX, to a little-endian Linux system.
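The byte-order conversion performed later in this procedure is needed only when the source and target hosts differ in endianness. Before you begin, you can confirm a host's byte order with a one-liner such as the following (a sketch, assuming python3 is installed on the host; AIX on POWER reports big, Linux on x86 reports little):

```shell
# Print this host's byte order ("big" or "little").
# Sketch only: assumes python3 is available; platform tools such as
# lscpu on Linux (its "Byte Order" field) report the same information.
python3 -c 'import sys; print(sys.byteorder)'
```

Run this on both the source and the target host; if the two results differ, the fscdsconv conversion step below is required.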
To migrate a snapshot volume
- Create the instant snapshot volume, snapvol, from an existing plex in the volume, vol, in the CDS disk group, datadg:
# vxsnap -g datadg make source=vol/newvol=snapvol/nmirror=1
- Quiesce any applications that are accessing the volume. For example, suspend updates to the volume that contains the database tables. The database may have a hot backup mode that allows you to do this by temporarily suspending writes to its tables.
- Refresh the plexes of the snapshot volume using the following command:
# vxsnap -g datadg refresh snapvol source=yes syncing=yes
- The applications can now be unquiesced.
If you temporarily suspended updates to the volume by a database in step 2, release all the tables from hot backup mode.
- Use the vxsnap syncwait command to wait for the synchronization to complete:
# vxsnap -g datadg syncwait snapvol
- Check the integrity of the file system, and then mount it on a suitable mount point:
# fsck -F vxfs /dev/vx/rdsk/datadg/snapvol
# mount -F vxfs /dev/vx/dsk/datadg/snapvol /mnt
- Confirm whether the file system can be converted to the target operating system:
# fscdstask validate Linux /mnt
- Unmount the snapshot:
# umount /mnt
- Convert the file system to the opposite byte order:
# fscdsconv -e -f recoveryfile -t target_specifiers special
For example:
# fscdsconv -e -f /tmp/fs_recov/recov.file -t Linux \
/dev/vx/dsk/datadg/snapvol
This step is required only if the source and target systems have opposite byte orders.
- Split the snapshot volume into a new disk group, migdg, and deport that disk group:
# vxdg split datadg migdg snapvol
# vxdg deport migdg
- Import the disk group, migdg, on the Linux system:
# vxdg import migdg
It may be necessary to reboot the Linux system so that it can detect the disks.
- Use the following command to recover and restart the snapshot volume:
# vxrecover -g migdg -m snapvol
- Check the integrity of the file system, and then mount it on a suitable mount point:
# fsck -t vxfs /dev/vx/dsk/migdg/snapvol
# mount -t vxfs /dev/vx/dsk/migdg/snapvol /mnt
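The steps performed on the target Linux system (import, recover, check, mount) can be collected into one small function. This is a sketch, not part of the product: the disk group, volume, and mount point names match the example above, and the optional runner argument (for example, echo) lets you preview the command sequence on a host without VxVM installed.

```shell
# Sketch: run the target-side import, recovery, file system check, and
# mount in sequence, stopping at the first failure. The names (migdg,
# snapvol, /mnt) match the example above; pass "echo" to dry-run.
import_and_mount_snapvol() {
    run=$1                       # "" to execute, "echo" to preview
    $run vxdg import migdg &&
    $run vxrecover -g migdg -m snapvol &&
    $run fsck -t vxfs /dev/vx/dsk/migdg/snapvol &&
    $run mount -t vxfs /dev/vx/dsk/migdg/snapvol /mnt
}
```

For example, `import_and_mount_snapvol echo` prints the four commands in order without executing them; `import_and_mount_snapvol ""` runs them for real.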