InfoScale™ 9.0 Storage and Availability Management for DB2 Databases - AIX, Linux
- Section I. Storage Foundation High Availability (SFHA) management solutions for DB2 databases
- Overview of Storage Foundation for Databases
- Introducing Storage Foundation High Availability (SFHA) Solutions for DB2
- About the File System component
- About the Volume Manager component
- About Dynamic Multi-Pathing (DMP)
- About Cluster Server
- About Cluster Server agents
- About InfoScale Operations Manager
- Feature support for DB2 across InfoScale products
- Use cases for InfoScale products
- Section II. Deploying DB2 with InfoScale products
- Deployment options for DB2 in a Storage Foundation environment
- DB2 deployment options in an InfoScale environment
- DB2 on a single system with Storage Foundation
- DB2 on a single system with off-host in a Storage Foundation environment
- DB2 in a highly available cluster with Storage Foundation High Availability
- DB2 in a parallel cluster with SF Cluster File System HA
- Deploying DB2 and Storage Foundation in a virtualization environment
- Deploying DB2 with Storage Foundation SmartMove and Thin Provisioning
- Deploying DB2 with Storage Foundation
- Deploying DB2 in an off-host configuration with Storage Foundation
- Deploying DB2 with High Availability
- Section III. Configuring Storage Foundation for Database (SFDB) tools
- Configuring and managing the Storage Foundation for Databases repository database
- About the Storage Foundation for Databases (SFDB) repository
- Requirements for Storage Foundation for Databases (SFDB) tools
- Storage Foundation for Databases (SFDB) tools availability
- Configuring the Storage Foundation for Databases (SFDB) tools repository
- Updating the Storage Foundation for Databases (SFDB) repository after adding a node
- Updating the Storage Foundation for Databases (SFDB) repository after removing a node
- Removing the Storage Foundation for Databases (SFDB) repository
- Configuring authentication for Storage Foundation for Databases (SFDB) tools
- Section IV. Improving DB2 database performance
- About database accelerators
- Improving database performance with Quick I/O
- About Quick I/O
- How Quick I/O improves database performance
- Tasks for setting up Quick I/O in a database environment
- Preallocating space for Quick I/O files using the setext command
- Accessing regular VxFS files as Quick I/O files
- Converting DB2 containers to Quick I/O files
- About sparse files
- Displaying Quick I/O status and file attributes
- Extending a Quick I/O file
- Monitoring tablespace free space with DB2 and extending tablespace containers
- Recreating Quick I/O files after restoring a database
- Disabling Quick I/O
- Improving DB2 database performance with VxFS Concurrent I/O
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- About point-in-time copies
- When to use point-in-time copies
- About Storage Foundation point-in-time copy technologies
- Point-in-time copy solutions supported by SFDB tools
- About snapshot modes supported by Storage Foundation for Databases (SFDB) tools
- Volume-level snapshots
- Storage Checkpoints
- Considerations for DB2 point-in-time copies
- Administering third-mirror break-off snapshots
- Administering Storage Checkpoints
- About Storage Checkpoints
- Database Storage Checkpoints for recovery
- Creating a Database Storage Checkpoint
- Deleting a Database Storage Checkpoint
- Mounting a Database Storage Checkpoint
- Unmounting a Database Storage Checkpoint
- Creating a database clone using a Database Storage Checkpoint
- Restoring database from a Database Storage Checkpoint
- Gathering data for offline-mode Database Storage Checkpoints
- Backing up and restoring with NetBackup in an SFHA environment
- Section VI. Optimizing storage costs for DB2
- Section VII. Storage Foundation for Databases administrative reference
- Storage Foundation for Databases command reference
- Tuning for Storage Foundation for Databases
- Troubleshooting SFDB tools
PREFETCHSIZE and EXTENTSIZE
Prefetching improves database performance in DSS-type environments, or in environments where the data set is too large to be held in database memory. The extent size is important in environments where DB2 tablespaces and containers reside on RAID devices. In general, the EXTENTSIZE should be equal to, or a multiple of, the RAID stripe size.
Setting DB2_PARALLEL_IO gives the tablespace PREFETCHSIZE special meaning: PREFETCHSIZE is divided by EXTENTSIZE to arrive at the degree of I/O parallelism. Without this registry variable set, the degree of I/O parallelism is normally derived from the number of containers. Because a RAID-based tablespace often has only one container, it is important to set PREFETCHSIZE as a multiple of EXTENTSIZE, to provide a sufficient number of I/O servers (at least one per physical disk), and to assign the tablespace to a bufferpool large enough to accommodate the prefetch requests.
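The rule above can be sketched as a small calculation. This is an illustrative helper, not a DB2 API; the function name and values are assumptions chosen to mirror the single-container RAID case described in the text:

```python
def io_parallelism(prefetch_pages, extent_pages, containers, parallel_io=True):
    # With DB2_PARALLEL_IO set, DB2 derives prefetch I/O parallelism from
    # PREFETCHSIZE / EXTENTSIZE; without it, parallelism is normally
    # derived from the number of containers in the tablespace.
    if parallel_io:
        return prefetch_pages // extent_pages
    return containers

# A RAID tablespace with a single container: without DB2_PARALLEL_IO the
# parallelism collapses to 1; with it, prefetch can span the whole stripe.
print(io_parallelism(80, 8, 1, parallel_io=False))  # -> 1
print(io_parallelism(80, 8, 1, parallel_io=True))   # -> 10
```

This is why, with one container on RAID, PREFETCHSIZE must be a multiple of EXTENTSIZE: the ratio, not the container count, determines how many parallel prefetch requests are issued.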
In the general case, EXTENTSIZE is calculated from the physical attributes of the volume. PREFETCHSIZE should be at least EXTENTSIZE multiplied by the number of containers to obtain good I/O parallelism. When dealing with RAID devices, however, a tablespace may have only a single container, so the number of devices or columns in the volume is substituted for the number of containers.
When using DMS device containers, such as Quick I/O files, the operating system does not perform any prefetching or caching.
When you need greater control over when and where memory is allocated to caching and prefetching of DB2 tablespace data, use Cached Quick I/O.
If you prefer to assign more system memory permanently to DB2 bufferpools, set the PREFETCHSIZE and DB2_PARALLEL_IO settings for the tablespaces.
For example, suppose we have a VxVM RAID-0 volume striped across 10 physical disks with a stripe column size of 64k. We have created a VxFS file system on this volume and are about to create a tablespace of DMS containers:
$ qiomkfile -s 1G /db2_stripe/cont001
$ db2 connect to PROD
$ db2 create tablespace DATA1 managed by database \
    using (device '/db2_stripe/cont001' 128000) \
    pagesize 8k extentsize 8 prefetchsize 80
$ db2 terminate
Alternatively, to use a regular file container with Concurrent I/O instead of a Quick I/O device container:
$ db2 create tablespace DATA1 managed by database \
    using (file '/db2_stripe/cont001' 128000) \
    pagesize 8k extentsize 8 prefetchsize 80 \
    no file system caching
In this example, each read of an extent spans one physical drive (the column width is 64k and the extent is 8 pages of 8k each). When prefetching, we take a full-stripe read at a time (there are 10 disks in the stripe, so 10 extents is 80 pages). Observe that PREFETCHSIZE remains a multiple of EXTENTSIZE. These settings provide a good environment for a database that generally accesses clusters of data of around 640k or less. For larger database objects, or for more aggressive prefetching, the PREFETCHSIZE can be increased in further multiples of the EXTENTSIZE.
If the database's main workload requires good sequential I/O performance, as a DSS workload does, then the Cached Quick I/O and PREFETCHSIZE settings become even more important.
There are some cases where setting PREFETCHSIZE to large values, or prefetching at all, may degrade performance. In OLTP environments where data access is very random, you may need to turn off prefetching on a tablespace, or minimize its effect by setting PREFETCHSIZE equal to EXTENTSIZE.
In these environments it is still very important to ensure that access to indexes is fast, and preferably that all heavily accessed indexes are cached by Cached Quick I/O or in bufferpool memory.