Veritas InfoScale™ 7.4.1 Virtualization Guide - Solaris
Last Published: 2019-02-01
Product(s): InfoScale & Storage Foundation (7.4.1)
Platform: Solaris
Sample VCS configuration for AlternateIO resource configured as a failover type

The following sample main.cf configures the AlternateIO resource in a failover service group (aiosg) across two hosts, each with a control domain (primary1, primary2) and an alternate I/O domain (alternate1, alternate2). The logical domain service group (ldomsg) depends on aiosg, which in turn aggregates the parallel storage and network service groups on each host.
include "types.cf"

cluster altio-cluster (
    UserNames = { admin = XXXXXXXXXXX }
    Administrators = { admin }
    HacliUserLevel = COMMANDROOT
    )

system primary1 (
    )

system alternate1 (
    )

system primary2 (
    )

system alternate2 (
    )

group aiosg (
    SystemList = { primary1 = 0, primary2 = 1 }
    AutoStartList = { primary1 }
    TriggerPath = "bin/AlternateIO"
    TriggersEnabled @primary1 = { PREONLINE }
    TriggersEnabled @primary2 = { PREONLINE }
    )

AlternateIO altiores (
    StorageSG @primary1 = { primary1-strsg = 1 }
    StorageSG @primary2 = { primary2-strsg = 1 }
    NetworkSG @primary1 = { primary1-nwsg = 0 }
    NetworkSG @primary2 = { primary2-nwsg = 0 }
    )

// resource dependency tree
//
// group aiosg
// {
// AlternateIO altiores
// }
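
// Illustrative check, not part of the sample configuration: once the
// cluster is running, the state of the failover group and of the
// AlternateIO resource can be verified from any node. The group and
// resource names are the ones defined above:
//
//   # hagrp -state aiosg
//   # hares -state altiores
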
group ldomsg (
    SystemList = { primary1 = 0, primary2 = 1 }
    AutoStartList = { primary1 }
    SysDownPolicy = { AutoDisableNoOffline }
    )

LDom ldmguest (
    LDomName = ldg1
    )

requires group aiosg online local hard

// resource dependency tree
//
// group ldomsg
// {
// LDom ldmguest
// }
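
// Illustrative, not part of the sample: because ldomsg requires aiosg
// online local hard, bringing ldomsg online on primary1 implies that
// aiosg is already online there. The guest domain (ldg1, from the
// LDomName attribute) can then be confirmed from the control domain:
//
//   # hagrp -online ldomsg -sys primary1
//   # ldm list ldg1
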
group primary1-strsg (
    SystemList = { primary1 = 0, alternate1 = 1 }
    AutoStart = 0
    Parallel = 1
    TriggerPath = "bin/AlternateIO/StorageSG"
    TriggersEnabled @primary1 = { PREONLINE }
    TriggersEnabled @alternate1 = { PREONLINE }
    AutoStartList = { primary1, alternate1 }
    )

Zpool zpres1 (
    PoolName @primary1 = zfsprim
    PoolName @alternate1 = zfsmirr
    ForceOpt = 0
    )

// resource dependency tree
//
// group primary1-strsg
// {
// Zpool zpres1
// }
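
// Illustrative, not part of the sample: zpres1 imports zfsprim on
// primary1 and zfsmirr on alternate1 (both pools are assumed to exist
// already). Pool health can be checked in whichever domain has the
// pool imported, for example:
//
//   # zpool status zfsprim
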
group primary1-nwsg (
    SystemList = { primary1 = 0, alternate1 = 1 }
    Parallel = 1
    )

Phantom ph1 (
    )

NIC nicres1 (
    Device @primary1 = nxge3
    Device @alternate1 = nxge4
    )

// resource dependency tree
//
// group primary1-nwsg
// {
// Phantom ph1
// NIC nicres1
// }
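
// Illustrative, not part of the sample: the NIC resource monitors the
// physical links that back the network services for the logical domain.
// On Solaris 11 the link state can be confirmed with dladm, for example:
//
//   # dladm show-phys nxge3
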
group primary2-strsg (
    SystemList = { primary2 = 0, alternate2 = 1 }
    Parallel = 1
    TriggerPath = "bin/AlternateIO/StorageSG"
    TriggersEnabled @primary2 = { PREONLINE }
    TriggersEnabled @alternate2 = { PREONLINE }
    )

Zpool zpres2 (
    PoolName @primary2 = zfsprim
    PoolName @alternate2 = zfsmirr
    ForceOpt = 0
    )

// resource dependency tree
//
// group primary2-strsg
// {
// Zpool zpres2
// }
group primary2-nwsg (
    SystemList = { primary2 = 0, alternate2 = 1 }
    Parallel = 1
    )

Phantom ph2 (
    )

NIC nicres2 (
    Device @primary2 = nxge3
    Device @alternate2 = nxge4
    )

// resource dependency tree
//
// group primary2-nwsg
// {
// Phantom ph2
// NIC nicres2
// }
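
With this configuration in place, a planned failover of the logical domain can be rehearsed by switching the ldomsg group to the other host. Because ldomsg depends on aiosg through an online local hard dependency, it comes online only on a node where aiosg is online. The following commands are illustrative and assume the configuration above is deployed:

# hagrp -switch ldomsg -to primary2
# hagrp -state ldomsg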