Veritas InfoScale™ 7.4.1 Virtualization Guide - Solaris
Configuring a zone resource in a failover service group with the hazonesetup utility
The hazonesetup utility helps you configure a zone under VCS. This section covers typical scenarios based on where the zone root is located.
Two typical setups for zone configuration in a failover scenario follow:
Zone root on local storage
Zone root on shared storage
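Both procedures use the same basic hazonesetup invocation. The following synopsis summarizes only the options that appear in this section (the placeholders in angle brackets are illustrative; see the hazonesetup manual page for the complete syntax):
# hazonesetup -g <service_group> -r <zone_resource> -z <zone_name> \
[-u <vcs_user>] -p <password> -a -s <node1,node2,...>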
Consider an example of a two-node cluster (sysA and sysB). The zone local-zone is configured on both nodes.
To configure a zone under VCS control using the hazonesetup utility when the zone root is on local storage
- Boot the non-global zone on the first node, outside VCS control.
sysA# zoneadm -z local-zone boot
- To use the hazonesetup utility, ensure that an IP address is configured for the non-global zone and that the hostname of the global zone is resolvable from the non-global zone.
# zlogin local-zone
# ping sysA
- Run the hazonesetup utility with the correct arguments on the first node. This adds a failover zone service group and a zone resource to the VCS configuration.
sysA# hazonesetup -g zone_grp -r zone_res -z local-zone \
-p password -a -s sysA,sysB
Note:
If you want to use a particular user for password-less communication, use the -u option of the hazonesetup utility. If the -u option is not specified, a default user is used for password-less communication.
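For example, the following invocation is a sketch of the same command with a specific user; the user name zone_user is only illustrative:
sysA# hazonesetup -g zone_grp -r zone_res -z local-zone \
-u zone_user -p password -a -s sysA,sysB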
- Switch the zone service group to the next node in the cluster.
sysA# hagrp -switch zone_grp -to sysB
- Run the hazonesetup utility with the correct arguments on that node. The hazonesetup utility detects that the zone service group and the zone resource are already present in the VCS configuration and updates the configuration accordingly for password-less communication.
sysB# hazonesetup -g zone_grp -r zone_res -z local-zone \
-p password -a -s sysA,sysB
- Repeat step 4 and step 5 for all the remaining nodes in the cluster.
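After you configure all the nodes, you can optionally verify the setup with standard VCS commands. This is an illustrative check, not part of the documented procedure:
sysA# hagrp -state zone_grp
sysA# hares -state zone_res
sysA# hagrp -display zone_grp -attribute ContainerInfo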
To configure a zone under VCS control using the hazonesetup utility when the zone root is on shared storage
- Configure a failover service group with the required storage resources (DiskGroup, Volume, Mount, and so on) to mount the zone root on the node. Set the required dependencies between the storage resources (DiskGroup -> Volume -> Mount). Make sure that you configure all the required attributes of the storage resources so that they can be brought online on a cluster node.
sysA# hagrp -add zone_grp
sysA# hagrp -modify zone_grp SystemList sysA 0 sysB 1
sysA# hares -add zone_dg DiskGroup zone_grp
sysA# hares -add zone_vol Volume zone_grp
sysA# hares -add zone_mnt Mount zone_grp
sysA# hares -link zone_mnt zone_vol
sysA# hares -link zone_vol zone_dg
sysA# hares -modify zone_dg DiskGroup zone_dg
sysA# hares -modify zone_dg Enabled 1
sysA# hares -modify zone_vol Volume volume_name
sysA# hares -modify zone_vol DiskGroup zone_dg
sysA# hares -modify zone_vol Enabled 1
sysA# hares -modify zone_mnt MountPoint /zone_mnt
sysA# hares -modify zone_mnt BlockDevice /dev/vx/dsk/zone_dg/volume_name
sysA# hares -modify zone_mnt FSType vxfs
sysA# hares -modify zone_mnt MountOpt rw
sysA# hares -modify zone_mnt FsckOpt %-y
sysA# hares -modify zone_mnt Enabled 1
When the zone root is on a ZFS file system, use the following commands:
sysA# hagrp -add zone_grp
sysA# hagrp -modify zone_grp SystemList sysA 0 sysB 1
sysA# hares -add zone_zpool Zpool zone_grp
sysA# hares -modify zone_zpool AltRootPath /zone_root_mnt
sysA# hares -modify zone_zpool PoolName zone1_pool
sysA# hares -modify zone_zpool Enabled 1
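Whichever storage configuration you use, you can optionally confirm the service group contents and resource dependencies before you bring the group online. This is an illustrative check using standard VCS commands:
sysA# hagrp -resources zone_grp
sysA# hares -dep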
- Bring the service group online on the first node. This mounts the zone root on the first node.
sysA# hagrp -online zone_grp -sys sysA
- Boot the local zone on the first node, outside VCS control.
sysA# zoneadm -z local-zone boot
- To use the hazonesetup utility, ensure that an IP address is configured for the non-global zone and that the hostname of the global zone is resolvable from the non-global zone.
# zlogin local-zone
# ping sysA
- Run the hazonesetup utility with the correct arguments on the first node. Use the service group configured in step 1. This adds the zone resource to the VCS configuration.
sysA# hazonesetup -g zone_grp -r zone_res -z local-zone \
-p password -a -s sysA,sysB
Note:
If you want to use a particular user for password-less communication, use the -u option of the hazonesetup utility. If the -u option is not specified, a default user is used for password-less communication.
- Set the proper dependency between the Zone resource and the storage resources. The Zone resource should depend on the storage resource (Mount or Zpool -> Zone).
sysA# hares -link zone_res zone_mnt
When the zone root is on a ZFS file system, use the following command:
sysA# hares -link zone_res zone_zpool
- Switch the service group to the next node in the cluster.
sysA# hagrp -switch zone_grp -to sysB
- Run the hazonesetup utility with the correct arguments on that node. The hazonesetup utility detects that the service group and the zone resource are already present in the VCS configuration and updates the configuration accordingly for password-less communication.
sysB# hazonesetup -g zone_grp -r zone_res -z local-zone \
-p password -a -s sysA,sysB
- Repeat step 7 and step 8 for all the remaining nodes in the cluster.
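After all the nodes are configured, you can optionally confirm that the zone runs under VCS control on the node where the service group is online. This is an illustrative check using standard VCS and Solaris commands; in the two-node example the group was last switched to sysB:
sysB# hagrp -state zone_grp
sysB# hares -state zone_res
sysB# zoneadm list -cv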