Veritas InfoScale™ 7.4 Solutions Guide - Linux
- Section I. Introducing Veritas InfoScale
- Section II. Solutions for Veritas InfoScale products
- Solutions for Veritas InfoScale products
- Use cases for Veritas InfoScale products
- Feature support across Veritas InfoScale 7.4 products
- Using SmartMove and Thin Provisioning with Sybase databases
- Running multiple parallel applications within a single cluster using the application isolation feature
- Scaling FSS storage capacity with dedicated storage nodes using application isolation feature
- Finding Veritas InfoScale product use cases information
- Section III. Stack-level migration to IPv6 or dual stack
- Section IV. Improving database performance
- Overview of database accelerators
- Improving database performance with Veritas Concurrent I/O
- Improving database performance with atomic write I/O
- About the atomic write I/O
- Requirements for atomic write I/O
- Restrictions on atomic write I/O functionality
- How the atomic write I/O feature of Storage Foundation helps MySQL databases
- VxVM and VxFS exported IOCTLs
- Configuring atomic write I/O support for MySQL on VxVM raw volumes
- Configuring atomic write I/O support for MySQL on VxFS file systems
- Dynamically growing the atomic write capable file system
- Disabling atomic write I/O support
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- Backing up and recovering
- Storage Foundation and High Availability Solutions backup and recovery methods
- Preserving multiple point-in-time copies
- Online database backups
- Backing up on an off-host cluster file system
- Database recovery using Storage Checkpoints
- Backing up and recovering in a NetBackup environment
- Off-host processing
- Creating and refreshing test environments
- Creating point-in-time copies of files
- Section VI. Maximizing storage utilization
- Optimizing storage tiering with SmartTier
- About SmartTier
- About VxFS multi-volume file systems
- About VxVM volume sets
- About volume tags
- SmartTier use cases for Sybase
- Setting up a filesystem for storage tiering with SmartTier
- Relocating old archive logs to tier two storage using SmartTier
- Relocating inactive tablespaces or segments to tier two storage
- Relocating active indexes to premium storage
- Relocating all indexes to premium storage
- Optimizing storage with Flexible Storage Sharing
- Section VII. Migrating data
- Understanding data migration
- Offline migration from LVM to VxVM
- Offline conversion of native file system to VxFS
- Online migration of a native file system to the VxFS file system
- About online migration of a native file system to the VxFS file system
- Administrative interface for online migration of a native file system to the VxFS file system
- Migrating a native file system to the VxFS file system
- Backing out an online migration of a native file system to the VxFS file system
- VxFS features not available during online migration
- Migrating storage arrays
- Migrating data between platforms
- Overview of the Cross-Platform Data Sharing (CDS) feature
- CDS disk format and disk groups
- Setting up your system to use Cross-platform Data Sharing (CDS)
- Maintaining your system
- Disk tasks
- Disk group tasks
- Changing the alignment of a disk group during disk encapsulation
- Changing the alignment of a non-CDS disk group
- Splitting a CDS disk group
- Moving objects between CDS disk groups and non-CDS disk groups
- Moving objects between CDS disk groups
- Joining disk groups
- Changing the default CDS setting for disk group creation
- Creating non-CDS disk groups
- Upgrading an older version non-CDS disk group
- Replacing a disk in a CDS disk group
- Setting the maximum number of devices for CDS disk groups
- Changing the DRL map and log size
- Creating a volume with a DRL log
- Setting the DRL map length
- Displaying information
- Determining the setting of the CDS attribute on a disk group
- Displaying the maximum number of devices in a CDS disk group
- Displaying map length and map alignment of traditional DRL logs
- Displaying the disk group alignment
- Displaying the log map length and alignment
- Displaying offset and length information in units of 512 bytes
- Default activation mode of shared disk groups
- Additional considerations when importing CDS disk groups
- File system considerations
- Considerations about data in the file system
- File system migration
- Specifying the migration target
- Using the fscdsadm command
- Checking that the metadata limits are not exceeded
- Maintaining the list of target operating systems
- Enforcing the established CDS limits on a file system
- Ignoring the established CDS limits on a file system
- Validating the operating system targets for a file system
- Displaying the CDS status of a file system
- Migrating a file system one time
- Migrating a file system on an ongoing basis
- When to convert a file system
- Converting the byte order of a file system
- Alignment value and block size
- Migrating a snapshot volume
- Migrating from Oracle ASM to Veritas File System
- Section VIII. Just in time availability solution for vSphere
- Section IX. Veritas InfoScale 4K sector device support solution
- Section X. Reference
Backing up a snapshot of a mounted file system with shared access
While you can run the commands in the following steps from any node, Veritas recommends running them from the master node.
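If you are not sure which node is currently the CVM master, you can check a node's role with the vxdctl command; for example, the following command, run on any node in the cluster, reports whether that node is the master or a slave:
# vxdctl -c mode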
To back up a snapshot of a mounted file system that has shared access
- On any node, refresh the contents of the snapshot volumes from the original volume using the following command:
# vxsnap -g database_dg refresh snapvol source=database_vol \
  [snapvol2 source=database_vol2]... syncing=yes
The syncing=yes attribute starts a synchronization of the snapshot in the background.
For example, to refresh the snapshot snapvol:
# vxsnap -g database_dg refresh snapvol source=database_vol \
  syncing=yes
You can run this command every time that you want to back up the data. The vxsnap refresh command resynchronizes only those regions that have been modified since the last refresh.
- On any node of the cluster, use the following command to wait for the contents of the snapshot to be fully synchronous with the contents of the original volume:
# vxsnap -g database_dg syncwait snapvol
For example, to wait for synchronization to finish for the snapshot snapvol:
# vxsnap -g database_dg syncwait snapvol
Note:
You cannot move a snapshot volume into a different disk group until synchronization of its contents is complete. You can use the vxsnap print command to check on the progress of synchronization (see the example after this procedure).
- On the master node, use the following command to split the snapshot volume into a separate disk group, snapvoldg, from the original disk group, database_dg:
# vxdg split database_dg snapvoldg snapvol
For example, to place the snapshot of the volume database_vol into the shared disk group splitdg:
# vxdg split database_dg splitdg snapvol
- On the master node, deport the snapshot volume's disk group using the following command:
# vxdg deport snapvoldg
For example, to deport the disk group splitdg:
# vxdg deport splitdg
- On the off-host processing (OHP) host where the backup is to be performed, use the following command to import the snapshot volume's disk group:
# vxdg import snapvoldg
For example, to import the disk group splitdg:
# vxdg import splitdg
- VxVM recovers the volumes automatically after the disk group import unless it is set not to recover automatically. Check whether the snapshot volume is still disabled and was not recovered following the split (see the example after this procedure).
If a volume is in the DISABLED state, use the following command on the OHP host to recover and restart the snapshot volume:
# vxrecover -g snapvoldg -m snapvol
For example, to start the volume snapvol:
# vxrecover -g splitdg -m snapvol
- On the OHP host, use the following commands to check and locally mount the snapshot volume:
# fsck -t vxfs /dev/vx/rdsk/snapvoldg/snapvol
# mount -t vxfs /dev/vx/dsk/snapvoldg/snapvol mount_point
For example, to check and mount the volume snapvol in the disk group splitdg locally on the mount point /bak/mnt_pnt:
# fsck -t vxfs /dev/vx/rdsk/splitdg/snapvol
# mount -t vxfs /dev/vx/dsk/splitdg/snapvol /bak/mnt_pnt
- Back up the file system at this point using a command such as bpbackup in Veritas NetBackup (see the example after this procedure). After the backup is complete, use the following command to unmount the file system:
# umount mount_point
- On the off-host processing host, use the following command to deport the snapshot volume's disk group:
# vxdg deport snapvoldg
For example, to deport splitdg:
# vxdg deport splitdg
- On the master node, re-import the snapshot volume's disk group as a shared disk group using the following command:
# vxdg -s import snapvoldg
For example, to import splitdg:
# vxdg -s import splitdg
- On the master node, use the following command to rejoin the snapshot volume's disk group with the original volume's disk group:
# vxdg join snapvoldg database_dg
For example, to join disk group splitdg with database_dg:
# vxdg join splitdg database_dg
- VxVM recovers the volumes automatically after the join unless it is set not to recover automatically. Check whether the snapshot volumes are still disabled and were not recovered following the join (see the example after this procedure).
If a volume is in the DISABLED state, use the following command on the master node to recover and restart the snapshot volume:
# vxrecover -g database_dg -m snapvol
- When the recovery is complete, use the following commands to refresh the contents of the snapshot volume from the original volume:
# vxsnap -g database_dg refresh snapvol source=database_vol \
  syncing=yes
# vxsnap -g database_dg syncwait snapvol
When synchronization is complete, the snapshot is ready to be re-used for backup.
Repeat the entire procedure each time that you need to back up the volume.
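As noted in the procedure, you can monitor synchronization progress with the vxsnap print command. For example, to display the snapshot relationships and synchronization status for the snapshot snapvol in the disk group database_dg:
# vxsnap -g database_dg print snapvol
Depending on your VxVM version, the output includes columns such as %DIRTY and %VALID that indicate how far the resynchronization has progressed.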
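To determine whether a snapshot volume was left in the DISABLED state after the split or the join, you can display its state with vxprint before running vxrecover. For example, on the OHP host after importing the disk group splitdg:
# vxprint -g splitdg snapvol
If the KSTATE or STATE column reports DISABLED, run the appropriate vxrecover command shown in the procedure.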
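The backup itself can be run with any file-level backup utility. As a minimal sketch, assuming a NetBackup policy named db_backup_policy with a schedule named user_backup (both placeholder names that must match your NetBackup configuration), the backup and unmount of the example mount point might look like this:
# bpbackup -w -p db_backup_policy -s user_backup /bak/mnt_pnt
# umount /bak/mnt_pnt
The -w option makes bpbackup wait for the backup to complete before returning, so the file system can be unmounted safely immediately afterwards.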