InfoScale™ 9.0 Virtualization Guide - Solaris
- Section I. Overview of InfoScale solutions in Solaris virtualization environments
- Section II. Zones
- InfoScale Enterprise Solutions support for Solaris Native Zones
- About Solaris Zones
- About VCS support for zones
- Configuring VCS in zones
- Prerequisites for configuring VCS in zones
- Deciding on the zone root location
- Performing the initial internal zone configuration
- About installing applications in a zone
- Configuring the service group for the application
- Configuring a zone resource in a failover service group with the hazonesetup utility
- Configuring a zone resource in a parallel service group with the hazonesetup utility
- Configuring multiple zone resources using the same VCS user for passwordless communication
- Modifying the service group configuration
- Verifying the zone configuration
- Synchronizing the zone configuration across cluster nodes
- Performing maintenance tasks
- Troubleshooting zones
- Adding VxFS file systems to a non-global zone
- Mounting VxFS as lofs into a non-global zone
- Mounting VxFS directly into a non-global zone from the global zone
- Mounting VxFS as VxFS inside a non-global zone
- Adding a direct mount to a zone's configuration
- Benefits of a VxFS mount in a non-global zone over a VxFS mount from the global zone into the non-global zone
- Cluster File System mounts
- Concurrent I/O access in non-global zones
- Veritas Extension for Oracle Disk Manager
- Exporting VxVM volumes to a non-global zone
- About InfoScale SFRAC component support for Oracle RAC in a zone environment
- Supported configuration
- Known issues with supporting an InfoScale SFRAC component in a zone environment
- CFS mount agent does not support mounting VxVM devices inside non-global zones
- Issue with VCS agents
- Stopping non-global zones configured with direct-mount file systems from outside VCS causes the corresponding zone resource to fault or go offline
- Error message displayed for PrivNIC resource if zone is not running
- Warning messages displayed when VCS restarts
- The installer log of non-global zone contains warning messages
- Issue with CFS mounts
- Configuring Solaris non-global zones for disaster recovery
- Software limitations of InfoScale support of non-global zones
- Administration commands are not supported in non-global zone
- VxFS file system is not supported as the root of a non-global zone
- QIO and CQIO are not supported
- Package installation in non-global zones
- Package removal with non-global zone configurations
- Root volume cannot be added to non-global zones
- Some Arctera Volume Manager operations can cause volume device names to go out of sync
- Solaris branded zone support
- Section III. Oracle VM Server for SPARC
- InfoScale Enterprise Solutions support for Oracle VM Server for SPARC
- About Oracle VM Server for SPARC
- Terminology for Oracle VM Server for SPARC
- Oracle VM Server for SPARC deployment models
- Benefits of deploying Arctera InfoScale Enterprise solutions in Oracle VM Server for SPARC
- Features
- Split InfoScale stack model
- Guest-based InfoScale stack model
- Layered InfoScale stack model
- System requirements
- Product release notes
- Product licensing
- Installing InfoScale in an Oracle VM Server for SPARC environment
- Exporting a VxVM volume to a guest domain from the control domain
- Provisioning storage for a guest domain
- Using Arctera Volume Manager snapshots for cloning logical domain boot disks
- Support of live migration for Solaris LDOMs with fencing configured in DMP mode
- Configuring Oracle VM Server for SPARC guest domains for disaster recovery
- Software limitations
- Known issues
- Cluster Server support for using CVM with multiple nodes in an Oracle VM Server for SPARC environment
- VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in an Oracle VM Server for SPARC environment
- About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of the logical domain
- Cluster Server setup to fail over an application running inside a logical domain on a failure of the application
- Oracle VM Server for SPARC guest domain migration in VCS environment
- Overview of a warm migration
- Overview of a live migration
- Prerequisites before you perform domain migration
- Supported deployment models for Oracle VM Server for SPARC domain migration with VCS
- Migrating Oracle VM guest when VCS is installed in the control domain that manages the guest domain
- Migrating Oracle VM guest when VCS is installed in the control domain and single-node VCS is installed inside the guest domain to monitor applications inside the guest domain
- Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.1 and above
- Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.0
- About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
- Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- A typical setup for a Logical Domain with multiple I/O services
- Identify supported storage and network services
- Determine the number of nodes to form the VCS cluster
- Install and configure VCS inside the control domain and alternate I/O domain
- Configuring storage services
- Configure storage service groups
- Configure network service groups
- Configure a service group to monitor services from multiple I/O domains
- Configure the AlternateIO resource
- Configure the service group for a Logical Domain
- Failover scenarios
- Recommendations while configuring VCS and Oracle VM Server for SPARC with multiple I/O domains
- Sample VCS configuration for the AlternateIO resource configured as a failover type
- Configuring VCS on logical domains to manage applications using services from multiple I/O domains
- SFRAC support for Oracle VM Server for SPARC environments
- About deploying SFRAC in Oracle VM Server for SPARC environments
- Sample configuration scenarios
- Preparing to deploy SFRAC in logical domain environments
- SFRAC with Oracle RAC database on I/O domains of two hosts
- SFRAC with Oracle RAC database on guest domains of two hosts
- SFRAC with Oracle RAC database on guest domains of a single host
- SFRAC with Oracle RAC database on an I/O domain and guest domain of a single host
- Support for live migration in FSS environments
- Using SmartIO in the virtualized environment
- Section IV. Reference
Configuring Solaris non-global zones for disaster recovery
Solaris Zones can be configured for disaster recovery by replicating the zone root using replication methods such as Hitachi TrueCopy, EMC SRDF, or Volume Replicator (VVR). The network configuration for the zone on the primary site may not be effective on the secondary site if the two sites are in different IP subnets. Therefore, you need to make additional configuration changes to the Zone resource.
To configure the non-global zone for disaster recovery, configure VCS in the global zones on both sites with the Global Cluster Option (GCO).
Refer to the Cluster Server Administrator's Guide for more information about global clusters, their configuration, and their use.
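For example, a minimal sketch of the GCO setup, assuming each site already runs its own VCS cluster; the cluster name and IP address shown are placeholders:

```
# Run on one node of each cluster's global zone; the gcoconfig wizard
# configures the ClusterService group and the wide-area connector (wac):
/opt/VRTSvcs/bin/gcoconfig

# From either cluster, register the remote cluster (example values):
haconf -makerw
haclus -add clus_secondary 10.20.30.40
haconf -dump -makero
```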
To set up the non-global zone for disaster recovery
1. On the primary site, create the non-global zone and configure its network parameters:
   - Create the non-global zone on the primary site using the zonecfg command.
   - If the zone is configured as an exclusive IP zone, add the network adapters to the non-global zone's configuration, and assign an IP address, netmask, and gateway to each adapter.
   - After the zone boots, set up the other network-related information, such as the host name, DNS servers, DNS domain, and DNS search path, in the appropriate files (/etc/hostname, /etc/resolv.conf).
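For example, a minimal sketch of step 1 on Solaris 11, assuming a zone named zone1 with its zone root on the replicated file system; the zone name, paths, adapter, and addresses are all placeholders:

```
# Create an exclusive IP zone configuration:
zonecfg -z zone1
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> set ip-type=exclusive
zonecfg:zone1> add anet
zonecfg:zone1:anet> set lower-link=net1
zonecfg:zone1:anet> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit

# Install and boot the zone, then assign the IP address, netmask,
# and default gateway inside the zone:
zoneadm -z zone1 install
zoneadm -z zone1 boot
zlogin zone1 ipadm create-ip net0
zlogin zone1 ipadm create-addr -T static -a 192.168.10.10/24 net0/v4
zlogin zone1 route -p add default 192.168.10.1
```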
2. On the primary site, shut down the zone.
3. Use replication-specific commands to fail over the replication from the primary site to the secondary site.
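For example, with VVR-based replication, steps 2 and 3 might look like the following sketch; the zone, disk group, and RVG names are placeholders, and on Solaris 10 you would use zoneadm -z zone1 halt instead of shutdown:

```
# On the primary site, shut down the zone gracefully:
zoneadm -z zone1 shutdown

# Transfer the VVR Primary role to the secondary site (planned migration):
vradmin -g zone_dg migrate zone_rvg
```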
4. Repeat step 1 on the secondary site.
5. Perform step 6, step 7, step 8, and step 9 on both the primary and secondary clusters.
6. Create a VCS service group with a VCS Zone resource for the non-global zone. Configure the DROpts association attribute on the Zone resource with the following keys and site-specific values for each: HostName, DNSServers, DNSSearchPath, and DNSDomain. If the non-global zone is an exclusive IP zone on this site, also configure the following keys in the DROpts association attribute on the Zone resource: Device (network adapter name), IPAddress, Netmask, and Gateway.
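For example, a minimal sketch of step 6, assuming the service group is created with the hazonesetup utility and using the standard hares syntax for association attributes; all names, addresses, and the password are placeholders for the secondary site:

```
# Create a failover service group with a Zone resource for zone1:
hazonesetup -g zone_grp -r zone_res -z zone1 -p password -s sys1,sys2

# Configure the site-specific DR options on the Zone resource:
haconf -makerw
hares -modify zone_res DROpts -add HostName zone1-sec
hares -modify zone_res DROpts -add DNSServers 192.168.20.5
hares -modify zone_res DROpts -add DNSDomain example.com
hares -modify zone_res DROpts -add DNSSearchPath example.com

# Additional keys required for an exclusive IP zone:
hares -modify zone_res DROpts -add Device net1
hares -modify zone_res DROpts -add IPAddress 192.168.20.10
hares -modify zone_res DROpts -add Netmask 255.255.255.0
hares -modify zone_res DROpts -add Gateway 192.168.20.1
haconf -dump -makero
```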
7. Add the appropriate Mount and DiskGroup resources to the service group for the file system and disk group on which the non-global zone's zone root resides. Add a resource dependency from the Zone resource to the Mount resource, and another dependency from the Mount resource to the DiskGroup resource.
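For example, a minimal sketch of step 7; the resource names, disk group, volume, and zone root path are placeholders:

```
# Make the VCS configuration writable before adding resources:
haconf -makerw

# DiskGroup resource for the disk group hosting the zone root:
hares -add zone_dg_res DiskGroup zone_grp
hares -modify zone_dg_res DiskGroup zone_dg

# Mount resource for the zone root file system:
hares -add zone_mnt_res Mount zone_grp
hares -modify zone_mnt_res BlockDevice /dev/vx/dsk/zone_dg/zone_vol
hares -modify zone_mnt_res MountPoint /zones/zone1
hares -modify zone_mnt_res FSType vxfs
hares -modify zone_mnt_res FsckOpt %-y

# Zone depends on Mount; Mount depends on DiskGroup:
hares -link zone_res zone_mnt_res
hares -link zone_mnt_res zone_dg_res

# Save and close the configuration:
haconf -dump -makero
```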
8. Add one of the following VCS replication resources to the service group for managing the replication:
   - A hardware replication agent. Examples include SRDF for EMC SRDF, HTC for Hitachi TrueCopy, and MirrorView for EMC MirrorView. Refer to the appropriate VCS replication agent guide for configuring the replication resource.
   - The VVRPrimary agent. For VVR-based replication, add the RVGPrimary resource to the service group. Refer to the following manuals for more information:
     - For information about configuring VVR-related resources, see the InfoScale Replication Administrator's Guide.
     - For information about the VVR-related agents, see the Cluster Server Bundled Agents Reference Guide.
9. Add a dependency from the DiskGroup resource to the replication resource.
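For example, a sketch of steps 8 and 9 using the EMC SRDF hardware replication agent; the resource name and SRDF device group are placeholders, and you would substitute the agent that matches your replication technology:

```
haconf -makerw

# Replication resource managed by the SRDF agent:
hares -add zone_rep_res SRDF zone_grp
hares -modify zone_rep_res GrpName zone_rdf_grp

# DiskGroup depends on the replication resource:
hares -link zone_dg_res zone_rep_res

haconf -dump -makero
```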
When the replication resource is online at a site, it ensures the following:
- The underlying replicated devices are in primary mode, so the underlying storage, and eventually the zone root, is in read-write mode.
- The remote devices are in secondary mode.
Thus, when the Zone resource comes online, it modifies the appropriate files inside the non-global zone to apply the disaster recovery-related parameters to the non-global zone.