Veritas InfoScale™ 7.4 Solutions Guide - Linux
Preparing a space-optimized snapshot for a database backup
If a snapshot volume is to be used on the same host, and will not be moved to another host, you can use space-optimized instant snapshots rather than full-sized instant snapshots. Depending on the application, space-optimized snapshots typically require 10% of the disk space that is required for full-sized instant snapshots.
To prepare a space-optimized snapshot for a backup of an online database
- Decide on the following characteristics of the cache volume that underlies the cache object:
The size of the cache volume should be sufficient to record changes to the parent volumes during the interval between snapshot refreshes. A suggested value is 10% of the total size of the parent volumes for a refresh interval of 24 hours. (A worked sizing example follows this list.)
If redundancy is a desired characteristic of the cache volume, it should be mirrored. This increases the space that is required for the cache volume in proportion to the number of mirrors that it has.
If the cache volume is mirrored, space is required on at least as many disks as it has mirrors. These disks should not be shared with the disks used for the parent volumes. The disks should also be chosen to avoid impacting I/O performance for critical volumes, or hindering disk group split and join operations.
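As a worked example of the sizing guideline above (the volume names and sizes here are illustrative assumptions, not values from your configuration): if the parent volumes database_vol1 and database_vol2 together hold 200 GB, a 24-hour refresh interval suggests a cache volume of roughly 20 GB, and mirroring it across two disks doubles the physical space consumed to about 40 GB. Before deciding, you can check the current sizes of the parent volumes with the vxprint command:
# vxprint -g database_dg -v
By default, vxprint reports lengths in sectors (units of 512 bytes).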
- Having decided on its characteristics, use the vxassist command to create the volume that is to be used for the cache volume. The following example creates a mirrored cache volume, cachevol, with size 1GB in the disk group, database_dg, on the disks disk16 and disk17:
# vxassist -g database_dg make cachevol 1g layout=mirror \
init=active disk16 disk17
The attribute init=active is specified to make the cache volume immediately available for use.
- Use the vxmake cache command to create a cache object on top of the cache volume that you created in the previous step:
# vxmake [-g diskgroup] cache cache_object \
cachevolname=cachevol [regionsize=size] [autogrow=on] \
[highwatermark=hwmk] [autogrowby=agbvalue] \
[maxautogrow=maxagbvalue]
If you specify the region size, it must be a power of 2, and be greater than or equal to 16KB (16k). If not specified, the region size of the cache is set to 64KB.
Note:
All space-optimized snapshots that share the cache must have a region size that is equal to or an integer multiple of the region size set on the cache. Snapshot creation also fails if the original volume's region size is smaller than the cache's region size.
If you do not want the cache to be able to grow in size as required, specify autogrow=off. By default, the ability to automatically grow the cache is turned on. (If you disable autogrow, you can still grow the cache manually; see the sketch after this step.)
In the following example, the cache object, cache_object, is created over the cache volume, cachevol, the region size of the cache is set to 32KB, and the autogrow feature is enabled:
# vxmake -g database_dg cache cache_object cachevolname=cachevol \
regionsize=32k autogrow=on
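If you disable autogrow, or if the cache later fills more quickly than expected, you can grow the cache volume manually with the vxcache command. The following is a sketch only; the 2g target size is an illustrative assumption:
# vxcache -g database_dg growcacheto cache_object 2g
Alternatively, growcacheby grows the cache by a specified amount rather than to a specified total size.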
- Having created the cache object, use the following command to enable it:
# vxcache [-g diskgroup] start cache_object
For example, start the cache object cache_object:
# vxcache -g database_dg start cache_object
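Before creating snapshots over the cache object, you can optionally confirm that it is enabled by displaying its record with vxprint (the exact fields shown vary by release):
# vxprint -g database_dg cache_object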
- Create a space-optimized snapshot that uses your cache object:
# vxsnap -g database_dg make \
source=database_vol1/newvol=snapvol1/cache=cache_object
- If several space-optimized snapshots are to be created at the same time, these can all specify the same cache object as shown in this example:
# vxsnap -g database_dg make \
source=database_vol1/newvol=snapvol1/cache=cache_object \
source=database_vol2/newvol=snapvol2/cache=cache_object \
source=database_vol3/newvol=snapvol3/cache=cache_object
Note:
This step sets up the snapshot volumes, prepares for the backup cycle, and starts tracking changes to the original volumes.
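To confirm that the snapshots were created and are tracking changes against their parent volumes, you can list them with the vxsnap print command (the output format varies by release):
# vxsnap -g database_dg print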
When you are ready to make a backup, proceed to the procedure for making a backup of an online database on the same host.
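On subsequent backup cycles, you typically refresh each space-optimized snapshot from its parent volume before taking the next backup. The following sketch reuses the volume and snapshot names from the examples above:
# vxsnap -g database_dg refresh snapvol1 source=database_vol1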