InfoScale™ 9.0 Storage and Availability Management for DB2 Databases - AIX, Linux
- Section I. Storage Foundation High Availability (SFHA) management solutions for DB2 databases
- Overview of Storage Foundation for Databases
- Introducing Storage Foundation High Availability (SFHA) Solutions for DB2
- About the File System component
- About the Volume Manager component
- About Dynamic Multi-Pathing (DMP)
- About Cluster Server
- About Cluster Server agents
- About InfoScale Operations Manager
- Feature support for DB2 across InfoScale products
- Use cases for InfoScale products
- Section II. Deploying DB2 with InfoScale products
- Deployment options for DB2 in a Storage Foundation environment
- DB2 deployment options in an InfoScale environment
- DB2 on a single system with Storage Foundation
- DB2 on a single system with off-host in a Storage Foundation environment
- DB2 in a highly available cluster with Storage Foundation High Availability
- DB2 in a parallel cluster with SF Cluster File System HA
- Deploying DB2 and Storage Foundation in a virtualization environment
- Deploying DB2 with Storage Foundation SmartMove and Thin Provisioning
- Deploying DB2 with Storage Foundation
- Deploying DB2 in an off-host configuration with Storage Foundation
- Deploying DB2 with High Availability
- Section III. Configuring Storage Foundation for Database (SFDB) tools
- Configuring and managing the Storage Foundation for Databases repository database
- About the Storage Foundation for Databases (SFDB) repository
- Requirements for Storage Foundation for Databases (SFDB) tools
- Storage Foundation for Databases (SFDB) tools availability
- Configuring the Storage Foundation for Databases (SFDB) tools repository
- Updating the Storage Foundation for Databases (SFDB) repository after adding a node
- Updating the Storage Foundation for Databases (SFDB) repository after removing a node
- Removing the Storage Foundation for Databases (SFDB) repository
- Configuring authentication for Storage Foundation for Databases (SFDB) tools
- Section IV. Improving DB2 database performance
- About database accelerators
- Improving database performance with Quick I/O
- About Quick I/O
- How Quick I/O improves database performance
- Tasks for setting up Quick I/O in a database environment
- Preallocating space for Quick I/O files using the setext command
- Accessing regular VxFS files as Quick I/O files
- Converting DB2 containers to Quick I/O files
- About sparse files
- Displaying Quick I/O status and file attributes
- Extending a Quick I/O file
- Monitoring tablespace free space with DB2 and extending tablespace containers
- Recreating Quick I/O files after restoring a database
- Disabling Quick I/O
- Improving DB2 database performance with VxFS Concurrent I/O
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- About point-in-time copies
- When to use point-in-time copies
- About Storage Foundation point-in-time copy technologies
- Point-in-time copy solutions supported by SFDB tools
- About snapshot modes supported by Storage Foundation for Databases (SFDB) tools
- Volume-level snapshots
- Storage Checkpoints
- Considerations for DB2 point-in-time copies
- Administering third-mirror break-off snapshots
- Administering Storage Checkpoints
- About Storage Checkpoints
- Database Storage Checkpoints for recovery
- Creating a Database Storage Checkpoint
- Deleting a Database Storage Checkpoint
- Mounting a Database Storage Checkpoint
- Unmounting a Database Storage Checkpoint
- Creating a database clone using a Database Storage Checkpoint
- Restoring database from a Database Storage Checkpoint
- Gathering data for offline-mode Database Storage Checkpoints
- Backing up and restoring with NetBackup in an SFHA environment
- Section VI. Optimizing storage costs for DB2
- Section VII. Storage Foundation for Databases administrative reference
- Storage Foundation for Databases command reference
- Tuning for Storage Foundation for Databases
- Troubleshooting SFDB tools
Enabling Concurrent I/O for DB2
Because you do not need to extend name spaces and present the files as devices, you can enable Concurrent I/O on regular files.
For DB2, you can enable Concurrent I/O on an entire file system or on specific SMS containers. If you enable it only for specific SMS containers, the rest of the file system continues to use regular buffered I/O.
Before enabling Concurrent I/O, review the prerequisites and usage notes for your environment.
For DB2, /mount_point is the directory in which you can put data containers of the SMS tablespaces using the Concurrent I/O feature.
Note:
This applies both to creating a new tablespace that uses Concurrent I/O and to enabling an existing tablespace to use Concurrent I/O.
For example, for DB2, to mount a file system named /datavol on a mount point named /db2data:
On AIX:
# /usr/sbin/mount -V vxfs -o cio /dev/vx/dsk/db2dg/datavol /db2data
On Linux:
# /usr/sbin/mount -t vxfs -o cio /dev/vx/dsk/db2dg/datavol /db2data
To enable Concurrent I/O on a new SMS container using the namefs -o cio option
- Using the mount command, mount the directory in which you want to put data containers of the SMS tablespaces that use the Concurrent I/O feature.
# /usr/sbin/mount -Vt namefs -o cio /path_name /new_mount_point
where:
/path_name is the directory in which the files that will be using Concurrent I/O reside
/new_mount_point is the new target directory that will use the Concurrent I/O feature
The following is an example of mounting a directory (where the new SMS containers are located) to use Concurrent I/O.
To mount an SMS container named /container1 on a mount point named /mysms:
# /usr/sbin/mount -Vt namefs -o cio /datavol/mysms/container1 /mysms
To enable Concurrent I/O on an existing SMS container using the namefs -o cio option
- Stop the DB2 instance using the db2stop command.
- Rename the directory that will have Concurrent I/O turned on, using the mv command.
# mv /mydb/mysmsdir /mydb/mysmsdir2
- Remount /mydb/mysmsdir2 on /mydb/mysmsdir using the mount command with the -o cio option.
# mount -Vt namefs -o cio /mydb/mysmsdir2 /mydb/mysmsdir
- Start the DB2 instance using the db2start command.
This example shows the full sequence for remounting a directory for an existing SMS container to use Concurrent I/O:
# db2stop
# mv /mydb/mysmsdir /mydb/mysmsdir2
# mount -Vt namefs -o cio /mydb/mysmsdir2 /mydb/mysmsdir
# db2start
To enable Concurrent I/O on a DB2 tablespace when creating the tablespace
- Use the db2 -v "create regular tablespace..." command with the no file system caching option.
- Set all other parameters according to your system requirements.
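Put together, the creation step might look like the following sketch. The tablespace name (mytbs) and container path (/mysms) are hypothetical placeholders; set them, and all other parameters, according to your system requirements.

```shell
# Hypothetical example: create an SMS tablespace with Concurrent I/O
# enabled at creation time via the no file system caching clause.
# mytbs and /mysms are placeholder names, not values from this guide.
db2 -v "create regular tablespace mytbs \
    managed by system using ('/mysms') \
    no file system caching"
```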
To enable Concurrent I/O on an existing DB2 tablespace
- Use the DB2 no file system caching option as follows:
# db2 -v "alter tablespace tablespace_name no file system caching"
where tablespace_name is the name of the tablespace for which you are enabling Concurrent I/O.
To verify that Concurrent I/O has been set for a particular DB2 tablespace
- Use the DB2 get snapshot option to check for Concurrent I/O.
# db2 -v "get snapshot for tablespaces on dbname"
where dbname is the database name.
- Find the tablespace you want to check and look for the File system caching attribute. If you see File system caching = No, then Concurrent I/O is enabled.
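As a quick check, you can filter the snapshot output for the caching attribute rather than scanning it manually; the database name mydb below is a placeholder.

```shell
# Hypothetical database name (mydb); list the caching attribute of
# each tablespace. A value of "No" means Concurrent I/O is enabled.
db2 -v "get snapshot for tablespaces on mydb" | grep "File system caching"
```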