Veritas InfoScale™ 7.3.1 Virtualization Guide - Solaris
- Section I. Overview of Veritas InfoScale Solutions used in Solaris virtualization
 - Section II. Zones and Projects
- Storage Foundation and High Availability Solutions support for Solaris Zones
- About Solaris Zones
 - About VCS support for zones
- Overview of how VCS works with zones
 - About the ContainerInfo service group attribute
 - About the ContainerOpts resource type attribute
 - About the ResContainerInfo resource type attribute
 - Zone-aware resources
 - About the Mount agent
 - About networking agents
 - About the Zone agent
 - About configuring failovers among physical and virtual servers
 
 - Configuring VCS in zones
- Prerequisites for configuring VCS in zones
 - Deciding on the zone root location
 - Performing the initial internal zone configuration
 - About installing applications in a zone
 - Configuring the service group for the application
 - Configuring a zone resource in a failover service group with the hazonesetup utility
 - Configuring a zone resource in a parallel service group with the hazonesetup utility
 - Configuring multiple zone resources using the same VCS user for passwordless communication
 - Modifying the service group configuration
 - Verifying the zone configuration
 - Synchronizing the zone configuration across cluster nodes
 - Performing maintenance tasks
 - Troubleshooting zones
 - Configuring for physical to virtual and virtual to physical failovers - a typical setup
 
 - Adding VxFS file systems to a non-global zone
 - Mounting VxFS as lofs into a non-global zone
 - Mounting VxFS directly into a non-global zone from global zone
 - Mounting VxFS as VxFS inside a non-global zone
 - Adding a direct mount to a zone's configuration
 - Benefits of a VxFS mount in a non-global zone over VxFS mount from global zone into the non-global zone
 - SFCFSHA mounts
 - Concurrent I/O access in non-global zones
 - Veritas Extension for Oracle Disk Manager
 - Exporting VxVM volumes to a non-global zone
 - About SF Oracle RAC support for Oracle RAC in a zone environment
- Supported configuration
 - Known issues with supporting SF Oracle RAC in a zone environment
- CFS mount agent does not support mounting VxVM devices inside non-global zones
 - Issue with VCS agents
 - Stopping non-global zones configured with direct-mount file systems from outside VCS causes the corresponding zone resource to fault or go offline
 - Error message displayed for PrivNIC resource if zone is not running
 - Warning messages displayed when VCS restarts
 - The installer log of non-global zone contains warning messages
 - Issue with CFS mounts
 
 
 - Configuring Solaris non-global zones for disaster recovery
 - Software limitations of Storage Foundation support of non-global zones
- Administration commands are not supported in a non-global zone
 - VxFS file system is not supported as the root of a non-global zone
 - QIO and CQIO are not supported
 - Package installation in non-global zones
 - Package removal with non-global zone configurations
 - Root volume cannot be added to non-global zones
 - Some Veritas Volume Manager operations can cause volume device names to go out of sync
 
 
 - Storage Foundation and High Availability Solutions support for Solaris Projects
 
 - Section III. Oracle VM Server for SPARC
- Storage Foundation and High Availability Solutions support for Oracle VM Server for SPARC
- About Oracle VM Server for SPARC
 - Terminology for Oracle VM Server for SPARC
 - Oracle VM Server for SPARC deployment models
 - Benefits of deploying Storage Foundation High Availability solutions in Oracle VM Server for SPARC
 - Features
 - Split Storage Foundation stack model
 - Guest-based Storage Foundation stack model
 - Layered Storage Foundation stack model
 - System requirements
 - Product release notes
 - Product licensing
 - Installing Storage Foundation in an Oracle VM Server for SPARC environment
 - Exporting a Veritas volume to a guest domain from the control domain
 - Provisioning storage for a guest domain
 - Using Veritas Volume Manager snapshots for cloning logical domain boot disks
 - Support of live migration for Solaris LDOMs with fencing configured in DMP mode
 - Configuring Oracle VM Server for SPARC guest domains for disaster recovery
 - Software limitations
 - Known issues
 
 - Cluster Server support for using CVM with multiple nodes in an Oracle VM Server for SPARC environment
 - VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in an Oracle VM Server for SPARC environment
 - About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of logical domain
 - Cluster Server setup to fail over an Application running inside logical domain on a failure of Application
 
 - Oracle VM Server for SPARC guest domain migration in VCS environment
- Overview of a warm migration
 - Overview of a live migration
 - Prerequisites before you perform domain migration
 - Supported deployment models for Oracle VM Server for SPARC domain migration with VCS
 - Migrating Oracle VM guest when VCS is installed in the control domain that manages the guest domain
 - Migrating Oracle VM guest when VCS is installed in the control domain and single-node VCS is installed inside the guest domain to monitor applications inside the guest domain
 - Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.1 and above
 - Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.0
 
 - About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
 - Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- A typical setup for a Logical Domain with multiple I/O services
 - Identify supported storage and network services
 - Determine the number of nodes to form VCS cluster
 - Install and configure VCS inside the control domain and alternate I/O domain
 - Configuring storage services
 - Configure storage service groups
 - Configure network service groups
 - Configure a service group to monitor services from multiple I/O domains
 - Configure the AlternateIO resource
 - Configure the service group for a Logical Domain
 - Failover scenarios
 - Recommendations while configuring VCS and Oracle VM Server for SPARC with multiple I/O domains
 - Sample VCS configuration for AlternateIO resource configured as a fail over type
 
 - Configuring VCS on logical domains to manage applications using services from multiple I/O domains
 
 - SF Oracle RAC support for Oracle VM Server for SPARC environments
- About deploying SF Oracle RAC in Oracle VM Server for SPARC environments
 - Sample configuration scenarios
 - Preparing to deploy SF Oracle RAC in logical domain environments
 - SF Oracle RAC with Oracle RAC database on I/O domains of two hosts
 - SF Oracle RAC with Oracle RAC database on guest domains of two hosts
 - SF Oracle RAC with Oracle RAC database on guest domains of single host
 - SF Oracle RAC with Oracle RAC database on I/O domain and guest domain of single host
 
 - Support for live migration in FSS environments
 
 - Section IV. Reference
 
Mounting VxFS directly into a non-global zone from global zone
To direct mount a VxFS file system in a non-global zone, the directory to mount must be in the non-global zone and the mount must take place from the global zone. The following procedure mounts the directory dirmnt in the non-global zone newzone with a mount path of /zonedir/newzone/root/dirmnt.
Note:
VxFS entries in the global zone /etc/vfstab file for non-global zone direct mounts are not supported, as the non-global zone may not yet be booted at the time of /etc/vfstab execution.
Once a file system has been delegated to a non-global zone through a direct mount, the mount point will be visible in the global zone through the mount command, but not through the df command.
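This visibility difference can be checked from the global zone. The following is an illustrative transcript only, using the example zone newzone and mount point dirmnt from this procedure:

global# mount | grep dirmnt
/zonedir/newzone/root/dirmnt on /dev/vx/dsk/dg/vol1 ...
global# df -k | grep dirmnt
global#

The mount command reports the delegated file system, while df returns nothing for it, because reporting of the mounted file system is delegated to the non-global zone.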
To direct mount a VxFS file system in a non-global zone
- Log in to the zone and make the mount point:
global# zlogin newzone
newzone# mkdir dirmnt
newzone# exit
- Mount the file system from the global zone:
Non-cluster file system:
global# mount -F vxfs /dev/vx/dsk/dg/vol1 \
/zonedir/newzone/root/dirmnt
Cluster file system:
global# mount -F vxfs -o cluster /dev/vx/dsk/dg/vol1 \
/zonedir/newzone/root/dirmnt
- Log in to the non-global zone and ensure that the file system is mounted:
global# zlogin newzone
newzone# df | grep dirmnt
/dirmnt (/dirmnt):142911566 blocks 17863944 files
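To undo a direct mount, unmount the file system from the global zone, the same side from which it was mounted. This is a sketch using the example paths above:

global# umount /zonedir/newzone/root/dirmnt

For a cluster file system, run the umount command on each node where the file system was mounted.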