Storage Foundation for Sybase ASE CE 7.4.1 Configuration and Upgrade Guide - Linux
Last Published:
2019-06-18
Product(s):
InfoScale & Storage Foundation (7.4.1)
Platform: Linux
Step 3: Performing pre-upgrade tasks on the second half of the cluster
Perform the following pre-upgrade tasks on the second half of the cluster:
- Stop all applications that are not configured under VCS but depend on Sybase ASE CE or on resources controlled by VCS. Use native application commands to stop each application.
Note:
The downtime starts now.
- Stop the applications configured under VCS. Take the Sybase database group offline.
# hagrp -offline sybase_group -sys sys3
# hagrp -offline sybase_group -sys sys4
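The database group must be fully offline on both nodes before you continue. A minimal polling sketch (hypothetical helper; in a live cluster you would pass a real state query such as `hagrp -state sybase_group -sys sys3` as the command):

```shell
#!/bin/sh
# Hypothetical helper: run a state-reporting command repeatedly until
# every line of its output reads OFFLINE, or give up after N tries.
# In a live cluster the command would be, for example:
#   hagrp -state sybase_group -sys sys3
wait_offline() {
    cmd=$1
    tries=${2:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        # grep -qv succeeds if any line is NOT OFFLINE; keep waiting
        if $cmd | grep -qv 'OFFLINE'; then
            sleep 2
        else
            return 0    # all reported states are OFFLINE
        fi
        i=$((i + 1))
    done
    return 1            # timed out
}
```

For example, `wait_offline "hagrp -state sybase_group -sys sys3" && echo safe` would print `safe` only once VCS reports the group offline on sys3.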
- Unmount the CFS file systems that are not managed by VCS.
Make sure that no processes are using the mounted shared file systems. To verify that no processes are using a VxFS or CFS mount point:
# mount | grep vxfs | grep cluster
# fuser -cu /mount_point
Unmount the non-system CFS file systems:
# umount /mount_point
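The cluster-mounted VxFS file systems can also be enumerated programmatically rather than read off by eye. A small sketch, assuming the Linux `mount` output format `device on /mountpoint type vxfs (options)`:

```shell
#!/bin/sh
# Sketch: print the mount points of VxFS file systems mounted with the
# cluster option, reading `mount` output on stdin. Assumed field layout:
#   <device> on <mountpoint> type <fstype> (<options>)
list_cfs_mounts() {
    awk '$5 == "vxfs" && $6 ~ /cluster/ { print $3 }'
}

# Each reported mount point could then be checked and unmounted, e.g.:
#   mount | list_cfs_mounts | while read -r m; do
#       fuser -cu "$m" || umount "$m"
#   done
```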
- Stop VCS on each of the nodes in the second half of the cluster:
# hastop -local
- Unmount the VxFS file systems that are not managed by VCS.
Make sure that no processes are using the mounted file systems. To verify that no processes are using a VxFS mount point:
# mount | grep vxfs
# fuser -cu /mount_point
Unmount the non-system VxFS file system:
# umount /mount_point
- Verify that no VxVM volumes (other than VxVM boot volumes) remain open. Stop any open volumes that are not managed by VCS.
# vxvol -g diskgroup stopall
To verify that no volumes remain open:
# vxprint -Aht -e v_open
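The open-volume check can be scripted so an upgrade wrapper aborts when anything is still open. A hedged sketch in which the query command is passed in as arguments (on a live node it would be `vxprint -Aht -e v_open`):

```shell
#!/bin/sh
# Sketch: treat any output from the open-volume query as a failure.
# The query command is passed as arguments; on a live node it would be:
#   check_no_open_volumes vxprint -Aht -e v_open
check_no_open_volumes() {
    out=$("$@")
    if [ -n "$out" ]; then
        printf 'open volumes remain:\n%s\n' "$out" >&2
        return 1
    fi
    return 0
}
```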
- If a cache area is online, you must take the cache area offline before upgrading the VxVM RPM. On the nodes in the second subcluster, use the following command to take the cache area offline:
# sfcache offline cachename
- Stop all ports using the installer:
For release 7.0 and later:
# /opt/VRTS/install/installer -stop sys3 sys4
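If GAB is still loaded after the stop, you can confirm that no ports remain open by counting membership lines in `gabconfig -a` output. A sketch, assuming membership lines of the form `Port a gen 4a1c0001 membership 01`:

```shell
#!/bin/sh
# Sketch: count open GAB ports from `gabconfig -a` output on stdin.
# Membership lines are assumed to begin with "Port ", for example:
#   Port a gen 4a1c0001 membership 01
count_open_ports() {
    awk '/^Port / { n++ } END { print n + 0 }'
}
```

On a live node, `gabconfig -a | count_open_ports` would be expected to print 0 once the stack is fully stopped.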