InfoScale™ 9.0 Virtualization Guide - Solaris
- Section I. Overview of InfoScale solutions in Solaris virtualization environments
- Section II. Zones
- InfoScale Enterprise Solutions support for Solaris Native Zones
- About Solaris Zones
- About VCS support for zones
- Configuring VCS in zones
- Prerequisites for configuring VCS in zones
- Deciding on the zone root location
- Performing the initial internal zone configuration
- About installing applications in a zone
- Configuring the service group for the application
- Configuring a zone resource in a failover service group with the hazonesetup utility
- Configuring a zone resource in a parallel service group with the hazonesetup utility
- Configuring multiple zone resources using the same VCS user for passwordless communication
- Modifying the service group configuration
- Verifying the zone configuration
- Synchronizing the zone configuration across cluster nodes
- Performing maintenance tasks
- Troubleshooting zones
- Adding VxFS file systems to a non-global zone
- Mounting VxFS as lofs into a non-global zone
- Mounting VxFS directly into a non-global zone from global zone
- Mounting VxFS as VxFS inside a non-global zone
- Adding a direct mount to a zone's configuration
- Benefits of a VxFS mount in a non-global zone over VxFS mount from global zone into the non-global zone
- Cluster File System mounts
- Concurrent I/O access in non-global zones
- Veritas Extension for Oracle Disk Manager
- Exporting VxVM volumes to a non-global zone
- About InfoScale SFRAC component support for Oracle RAC in a zone environment
- Supported configuration
- Known issues with supporting an InfoScale SFRAC component in a zone environment
- CFS mount agent does not support mounting VxVM devices inside non-global zones
- Issue with VCS agents
- Stopping non-global zones configured with direct-mount file systems from outside VCS causes the corresponding zone resource to fault or go offline
- Error message displayed for PrivNIC resource if zone is not running
- Warning messages displayed when VCS restarts
- The installer log of non-global zone contains warning messages
- Issue with CFS mounts
- Configuring Solaris non-global zones for disaster recovery
- Software limitations of InfoScale support of non-global zones
- Administration commands are not supported in non-global zone
- VxFS file system is not supported as the root of a non-global zone
- QIO and CQIO are not supported
- Package installation in non-global zones
- Package removal with non-global zone configurations
- Root volume cannot be added to non-global zones
- Some Arctera Volume Manager operations can cause volume device names to go out of sync
- Solaris branded zone support
- Section III. Oracle VM Server for SPARC
- InfoScale Enterprise Solutions support for Oracle VM Server for SPARC
- About Oracle VM Server for SPARC
- Terminology for Oracle VM Server for SPARC
- Oracle VM Server for SPARC deployment models
- Benefits of deploying Arctera InfoScale Enterprise solutions in Oracle VM Server for SPARC
- Features
- Split InfoScale stack model
- Guest-based InfoScale stack model
- Layered InfoScale stack model
- System requirements
- Product release notes
- Product licensing
- Installing InfoScale in an Oracle VM Server for SPARC environment
- Exporting a VxVM volume to a guest domain from the control domain
- Provisioning storage for a guest domain
- Using Arctera Volume Manager snapshots for cloning logical domain boot disks
- Support of live migration for Solaris LDOMs with fencing configured in DMP mode
- Configuring Oracle VM Server for SPARC guest domains for disaster recovery
- Software limitations
- Known issues
- Cluster Server support for using CVM with multiple nodes in an Oracle VM Server for SPARC environment
- VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in an Oracle VM Server for SPARC environment
- About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of logical domain
- Cluster Server setup to fail over an Application running inside logical domain on a failure of Application
- Oracle VM Server for SPARC guest domain migration in VCS environment
- Overview of a warm migration
- Overview of a live migration
- Prerequisites before you perform domain migration
- Supported deployment models for Oracle VM Server for SPARC domain migration with VCS
- Migrating Oracle VM guest when VCS is installed in the control domain that manages the guest domain
- Migrating Oracle VM guest when VCS is installed in the control domain and single-node VCS is installed inside the guest domain to monitor applications inside the guest domain
- Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.1 and above
- Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.0
- About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
- Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- A typical setup for a Logical Domain with multiple I/O services
- Identify supported storage and network services
- Determine the number of nodes to form VCS cluster
- Install and configure VCS inside the control domain and alternate I/O domain
- Configuring storage services
- Configure storage service groups
- Configure network service groups
- Configure a service group to monitor services from multiple I/O domains
- Configure the AlternateIO resource
- Configure the service group for a Logical Domain
- Failover scenarios
- Recommendations while configuring VCS and Oracle VM Server for SPARC with multiple I/O domains
- Sample VCS configuration for AlternateIO resource configured as a fail over type
- Configuring VCS on logical domains to manage applications using services from multiple I/O domains
- SFRAC support for Oracle VM Server for SPARC environments
- About deploying SFRAC in Oracle VM Server for SPARC environments
- Sample configuration scenarios
- Preparing to deploy SFRAC in logical domain environments
- SFRAC with Oracle RAC database on I/O domains of two hosts
- SFRAC with Oracle RAC database on guest domains of two hosts
- SFRAC with Oracle RAC database on guest domains of single host
- SFRAC with Oracle RAC database on I/O domain and guest domain of single host
- Support for live migration in FSS environments
- Using SmartIO in the virtualized environment
- Section IV. Reference
Veritas Extension for Oracle Disk Manager
The Veritas Extension for Oracle Disk Manager (VxODM) is specifically designed to enhance file management and disk I/O throughput. The features of VxODM are best suited for databases that reside on an InfoScale File System (VxFS). VxODM allows users to improve database throughput for I/O-intensive workloads through special I/O optimizations.
VxODM is supported in non-global zones. To run Oracle 11g Release 2 in a non-global zone and use VxODM, the Oracle software version must be 11.2.0.3.
For more information on supported versions, refer to the InfoScale Support Matrix.
Care must be taken when installing and removing packages when working with the VRTSodm package. For more information, see "Package installation in non-global zones" and "Package removal with non-global zone configurations."
This section describes how to enable ODM file access from non-global zones with VxFS on Solaris 11, for two cases:
- If there is no existing zone
- If there is an existing zone
On Solaris 11: To enable ODM file access from non-global zones with VxFS, if there is no existing zone
- Install SF in the global zone.
See the Storage Foundation Configuration and Upgrade Guide.
- Set the publisher for installation after changing to the pkgs folder of the installer:
global# pkg set-publisher -P -g VRTSpkgs.p5p Veritas
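Optionally, you can verify that the Veritas publisher is now configured and has the highest search priority:
global# pkg publisher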
- Create a zone with the following configuration:
global# zonecfg -z myzone
zonecfg:myzone> create
create: Using system default template 'SYSdefault'
zonecfg:myzone> set zonepath=/export/home/myzone
zonecfg:myzone> set fs-allowed=default,vxfs,odm
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/etc/vx/licenses/lic
zonecfg:myzone:fs> set special=/etc/vx/licenses/lic
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> remove anet linkname=net0
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/vxportal
zonecfg:myzone:device> end
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/fdd
zonecfg:myzone:device> end
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/odm
zonecfg:myzone:device> end
zonecfg:myzone> verify
zonecfg:myzone> commit
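Before installing the zone, you can optionally review the committed configuration by exporting it:
global# zonecfg -z myzone export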
- Install the zone:
global# zoneadm -z myzone install
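To verify that the installation completed, you can list the zones and check that myzone is in the installed state:
global# zoneadm list -cv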
- Boot the zone.
global# zoneadm -z myzone boot
- Configure the zone:
global# zlogin -C myzone
- Install VRTSvxfs, VRTSodm, and VRTSvlic in the zone:
myzone# pkg install --accept VRTSvxfs VRTSodm VRTSvlic
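To confirm that all three packages were installed, you can list them inside the zone:
myzone# pkg list VRTSvxfs VRTSodm VRTSvlic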
- Enable the vxodm service inside the zone:
myzone# svcadm enable vxodm
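You can confirm that the service came online with svcs; if it has not, svcs -x vxodm can help diagnose why:
myzone# svcs vxodm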
- Execute mount -p | grep odm in the non-global zone and confirm that the output looks similar to the following:
/dev/odm - /dev/odm odm - no nodevices,smartsync,zone=myzone,sharezone=5
- From the global zone, unset the publisher:
global# pkg unset-publisher Veritas
On Solaris 11: To enable ODM file access from non-global zones with VxFS, if there is an existing zone
- Check whether SF is installed in the global zone. If it is not, install SF in the global zone.
See the Storage Foundation Configuration and Upgrade Guide.
- Set the publisher for installation after changing to the pkgs folder of the same installer with which SF was installed in the global zone:
global# pkg set-publisher -P -g VRTSpkgs.p5p Veritas
- Check whether the zone is in the running state or the installed state. If it is running, shut it down:
global# zoneadm -z myzone shutdown
- Set fs-allowed to default,vxfs,odm:
global# zonecfg -z myzone
zonecfg:myzone> set fs-allowed=default,vxfs,odm
zonecfg:myzone> verify
zonecfg:myzone> commit
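Optionally, you can confirm that the property was set as expected by querying it from the global zone:
global# zonecfg -z myzone info fs-allowed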
- Add the license directory as an fs resource in the zone configuration file:
global# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/etc/vx/licenses/lic
zonecfg:myzone:fs> set special=/etc/vx/licenses/lic
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> verify
zonecfg:myzone> commit
- Add the three devices vxportal, fdd, and odm in the zone configuration file:
global# zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/vxportal
zonecfg:myzone:device> end
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/fdd
zonecfg:myzone:device> end
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/odm
zonecfg:myzone:device> end
zonecfg:myzone> verify
zonecfg:myzone> commit
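To double-check that all three device resources were added before booting the zone, you can list them:
global# zonecfg -z myzone info device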
- Boot the zone:
global# zoneadm -z myzone boot
- Install VRTSvxfs, VRTSodm, and VRTSvlic in the zone:
myzone# pkg install --accept VRTSvxfs VRTSodm VRTSvlic
- Enable the vxodm service inside the zone:
myzone# svcadm enable vxodm
- Execute mount -p | grep odm in the non-global zone and confirm that the output looks similar to the following:
/dev/odm - /dev/odm odm - no nodevices,smartsync,zone=myzone,sharezone=5
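As an additional check, you can confirm that the vxportal and fdd device nodes added earlier (at the paths configured above) are visible inside the zone:
myzone# ls -l /dev/vxportal /dev/fdd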
- From the global zone, unset the publisher:
global# pkg unset-publisher Veritas