Veritas InfoScale™ 7.4.1 Virtualization Guide - Solaris
- Section I. Overview of Veritas InfoScale Solutions used in Solaris virtualization
- Section II. Zones and Projects
- Storage Foundation and High Availability Solutions support for Solaris Zones
- About Solaris Zones
- About VCS support for zones
- Overview of how VCS works with zones
- About the ContainerInfo service group attribute
- About the ContainerOpts resource type attribute
- About the ResContainerInfo resource type attribute
- Zone-aware resources
- About the Mount agent
- About networking agents
- About the Zone agent
- About configuring failovers among physical and virtual servers
- Configuring VCS in zones
- Prerequisites for configuring VCS in zones
- Deciding on the zone root location
- Performing the initial internal zone configuration
- About installing applications in a zone
- Configuring the service group for the application
- Configuring a zone resource in a failover service group with the hazonesetup utility
- Configuring zone resource in a parallel service group with the hazonesetup utility
- Configuring multiple zone resources using same VCS user for password less communication
- Modifying the service group configuration
- Verifying the zone configuration
- Synchronizing the zone configuration across cluster nodes
- Performing maintenance tasks
- Troubleshooting zones
- Configuring for physical to virtual and virtual to physical failovers - a typical setup
- Adding VxFS file systems to a non-global zone
- Mounting VxFS as lofs into a non-global zone
- Mounting VxFS directly into a non-global zone from global zone
- Mounting VxFS as VxFS inside a non-global zone
- Adding a direct mount to a zone's configuration
- Benefits of a VxFS mount in a non-global zone over VxFS mount from global zone into the non-global zone
- SFCFSHA mounts
- Concurrent I/O access in non-global zones
- Veritas Extension for Oracle Disk Manager
- Exporting VxVM volumes to a non-global zone
- About SF Oracle RAC support for Oracle RAC in a zone environment
- Supported configuration
- Known issues with supporting SF Oracle RAC in a zone environment
- CFS mount agent does not support mounting VxVM devices inside non-global zones
- Issue with VCS agents
- Stopping non-global zones configured with direct-mount file systems from outside VCS causes the corresponding zone resource to fault or go offline
- Error message displayed for PrivNIC resource if zone is not running
- Warning messages displayed when VCS restarts
- The installer log of non-global zone contains warning messages
- Issue with CFS mounts
- Configuring Solaris non-global zones for disaster recovery
- Software limitations of Storage Foundation support of non-global zones
- Administration commands are not supported in non-global zone
- VxFS file system is not supported as the root of a non-global zone
- QIO and CQIO are not supported
- Package installation in non-global zones
- Package removal with non-global zone configurations
- Root volume cannot be added to non-global zones
- Some Veritas Volume Manager operations can cause volume device names to go out of sync
- Storage Foundation and High Availability Solutions support for Solaris Projects
- Section III. Oracle VM Server for SPARC
- Storage Foundation and High Availability Solutions support for Oracle VM Server for SPARC
- About Oracle VM Server for SPARC
- Terminology for Oracle VM Server for SPARC
- Oracle VM Server for SPARC deployment models
- Benefits of deploying Storage Foundation High Availability solutions in Oracle VM server for SPARC
- Features
- Split Storage Foundation stack model
- Guest-based Storage Foundation stack model
- Layered Storage Foundation stack model
- System requirements
- Product release notes
- Product licensing
- Installing Storage Foundation in a Oracle VM Server for SPARC environment
- Exporting a Veritas volume to a guest domain from the control domain
- Provisioning storage for a guest domain
- Using Veritas Volume Manager snapshots for cloning logical domain boot disks
- Support of live migration for Solaris LDOMs with fencing configured in DMP mode
- Configuring Oracle VM Server for SPARC guest domains for disaster recovery
- Software limitations
- Known issues
- Cluster Server support for using CVM with multiple nodes in a Oracle VM Server for SPARC environment
- VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in a Oracle VM Server for SPARC environment
- About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of logical domain
- Cluster Server setup to fail over an Application running inside logical domain on a failure of Application
- Oracle VM Server for SPARC guest domain migration in VCS environment
- Overview of a warm migration
- Overview of a live migration
- Prerequisites before you perform domain migration
- Supported deployment models for Oracle VM Server for SPARC domain migration with VCS
- Migrating Oracle VM guest when VCS is installed in the control domain that manages the guest domain
- Migrating Oracle VM guest when VCS is installed in the control domain and single-node VCS is installed inside the guest domain to monitor applications inside the guest domain
- Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.1 and above
- Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.0
- About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
- Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- A typical setup for a Logical Domain with multiple I/O services
- Identify supported storage and network services
- Determine the number of nodes to form VCS cluster
- Install and configure VCS inside the control domain and alternate I/O domain
- Configuring storage services
- Configure storage service groups
- Configure network service groups
- Configure a service group to monitor services from multiple I/O domains
- Configure the AlternateIO resource
- Configure the service group for a Logical Domain
- Failover scenarios
- Recommendations while configuring VCS and Oracle VM Server for SPARC with multiple I/O domains
- Sample VCS configuration for AlternateIO resource configured as a fail over type
- Configuring VCS on logical domains to manage applications using services from multiple I/O domains
- SF Oracle RAC support for Oracle VM Server for SPARC environments
- About deploying SF Oracle RAC in Oracle VM Server for SPARC environments
- Sample configuration scenarios
- Preparing to deploy SF Oracle RAC in logical domain environments
- SF Oracle RAC with Oracle RAC database on I/O domains of two hosts
- SF Oracle RAC with Oracle RAC database on guest domains of two hosts
- SF Oracle RAC with Oracle RAC database on guest domains of single host
- SF Oracle RAC with Oracle RAC database on I/O domain and guest domain of single host
- Support for live migration in FSS environments
- Section IV. Reference
Recommendations while configuring VCS and Oracle VM Server for SPARC with multiple I/O domains
Online and offline operations for service groups in the StorageSG attribute
To manually bring online or take offline service groups that are configured in the StorageSG attribute, do not use the AlternateIO resource or its service group.
Instead, operate directly on the service groups that are configured in the StorageSG attribute.
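For example, assuming a storage service group named primary1-strsg and a node named sysA (both names are hypothetical), you would operate on that group directly:
# hagrp -online primary1-strsg -sys sysA
# hagrp -offline primary1-strsg -sys sysA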
Freeze the service group for the AlternateIO resource
Freeze the AlternateIO service group before you bring online or take offline service groups that are configured in the StorageSG attribute of the AlternateIO resource. If you do not freeze the service group, the behavior of the Logical Domain is unpredictable because it depends on the AlternateIO service group.
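A minimal sketch of this workflow, assuming the AlternateIO service group is named aiosg (a hypothetical name):
# haconf -makerw
# hagrp -freeze aiosg -persistent
Bring online or take offline the service groups configured in the StorageSG attribute, and then unfreeze the group:
# hagrp -unfreeze aiosg -persistent
# haconf -dump -makero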
Configuring preonline trigger for storage service groups
You must configure a preonline trigger in the following scenario:
When the service groups configured in the StorageSG attribute of the AlternateIO resource are of failover type, and you accidentally bring a storage service group online on another physical system in the cluster.
It is possible to bring the storage service groups online on another physical system because the resources that are configured to monitor back-end storage services are present in different service groups on each physical system. Thus, VCS cannot prevent the resources from coming online on multiple systems. This may cause data corruption.
Note:
Perform this procedure for storage service groups on each node.
To configure the preonline trigger for each service group listed in the StorageSG attribute
Run the following commands:
# hagrp -modify stg-sg TriggerPath bin/AlternateIO/StorageSG
# hagrp -modify stg-sg TriggersEnabled PREONLINE
where stg-sg is the name of the storage service group.
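To verify the trigger configuration, you can display the attribute values; this is one possible check, with stg-sg still a placeholder for your storage service group name:
# hagrp -display stg-sg -attribute TriggerPath TriggersEnabled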
Set the connection timeout period for virtual disks
When a disk device is not available, I/O services from the guest domain to the virtual disks are blocked.
Veritas recommends setting a connection timeout period for each virtual disk so that applications time out after the set period instead of waiting indefinitely. Run the following command:
# ldm add-vdisk timeout=seconds disk_name \
volume_name@service_name ldom
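For example, with a 30-second timeout and hypothetical disk, volume, service, and domain names, the command might look like this:
# ldm add-vdisk timeout=30 vdisk1 vol1@primary-vds0 ldom1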
Fail over of the LDom service group when all the I/O domains are down
When the SysDownPolicy attribute is set to AutoDisableNoOffline for a service group, the service group state changes to OFFLINE|AutoDisabled when the system on which the service group is online goes down. Before you auto-enable the service group and bring it online on any other node, you must ensure that the guest domain is stopped on the system (control domain) that went down. This is particularly important when the failure-policy of the master domains is set to ignore.
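You can check the current state of the service group on each system; the group name ldom_sg is a hypothetical placeholder:
# hagrp -state ldom_sg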
Consider the following scenario:
- The DomainFailurePolicy attribute of the LDom resource is set to {primary="stop"} by default.
- The guest domain needs to be made available even when the primary domain is rebooted or shut down for maintenance.
In this case, the DomainFailurePolicy attribute would be changed to {primary=ignore, alternate1=stop} or {primary=ignore, alternate1=ignore}, so that the guest domain is not stopped even when the primary domain is rebooted or shut down.
The SysDownPolicy attribute would be set to AutoDisableNoOffline for the planned maintenance. VCS does not fail over the service group when the node goes down; instead, the group is put into the auto-disabled state, as shown in the sketch after this scenario.
The guest domain can continue to function normally with the I/O services available through the alternate I/O domain when the control domain is taken down for maintenance.
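As a hedged sketch of the planned-maintenance setting, assuming the LDom service group is named ldom_sg:
# haconf -makerw
# hagrp -modify ldom_sg SysDownPolicy AutoDisableNoOffline
# haconf -dump -makero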
When the control domain is under maintenance and the alternate I/O domain fails due to one of the following reasons:
- The DomainFailurePolicy attribute is set to {primary=ignore, alternate1=stop} and only the I/O services from the alternate I/O domain are unavailable (the I/O domain is active, but network or storage connectivity is lost).
- The DomainFailurePolicy attribute is set to {primary=ignore, alternate1=ignore} and the alternate I/O domain is down (the domain is inactive).
In this situation, the guest domain does not function normally and it is not possible to bring it down because there is no way to access it. In such scenarios, you must perform the following steps to bring the LDom service group online on any other available node.
To bring the LDom service group online
- If the primary domain can be brought up, then bring up the primary domain and stop the guest domain:
# ldm stop ldom_name
If this is not possible, power off the physical system from the console so that the guest domain stops.
- Auto-enable the service group:
# hagrp -autoenable group -sys system
- Bring the LDom service group online:
# hagrp -online group -any
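For example, with hypothetical names, if the service group ldom_sg was online on the failed node sysA and you want to bring it online elsewhere in the cluster:
# hagrp -autoenable ldom_sg -sys sysA
# hagrp -online ldom_sg -any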