Veritas InfoScale™ 7.4.1 Virtualization Guide - Solaris
Last Published: 2019-02-01
Product(s): InfoScale & Storage Foundation (7.4.1)
Platform: Solaris
Mounting VxFS as VxFS inside a non-global zone
You can create a VxFS file system inside non-global zones.
To create a VxFS file system inside a non-global zone:
- Check the zone status and halt the zone:
global# zoneadm list -cv
  ID NAME      STATUS    PATH            BRAND    IP
   0 global    running   /               solaris  shared
   1 myzone    running   /zone/myzone    solaris  shared

global# zoneadm -z myzone halt
- Add devices to the zone's configuration:
global# zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/vxportal
zonecfg:myzone:device> end
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/fdd
zonecfg:myzone:device> end
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/vx/rdsk/dg_name/vol_name
zonecfg:myzone:device> end
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/vx/dsk/dg_name/vol_name
zonecfg:myzone:device> end
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/etc/vx/licenses/lic
zonecfg:myzone:fs> set special=/etc/vx/licenses/lic
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
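As an optional check that is not part of the documented procedure, you can list the device resources now present in the zone's configuration and confirm that the /dev/vxportal, /dev/fdd, and the two /dev/vx entries appear:

global# zonecfg -z myzone info device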
- On Solaris 11, you must set fs-allowed=vxfs,odm in the zone's configuration:
global# zonecfg -z myzone
zonecfg:myzone> set fs-allowed=vxfs,odm
zonecfg:myzone> commit
zonecfg:myzone> exit
If you also want to use ufs, nfs, or zfs inside the zone, set fs-allowed=vxfs,odm,nfs,ufs,zfs.
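To confirm the property after committing (an optional check; filtering with grep is just one way to view it), display the zone's configuration and look for the fs-allowed line. The output should resemble the following:

global# zonecfg -z myzone info | grep fs-allowed
fs-allowed: vxfs,odm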
- Boot the zone:
global# zoneadm -z myzone boot
- Log in to the non-global zone and create the file system:
global# zlogin myzone
myzone# mkfs -F vxfs /dev/vx/rdsk/dg_name/vol_name
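To verify that the file system was created as VxFS before mounting it (an optional check, not part of the documented steps), you can run the standard Solaris fstyp command on the raw device; it should report vxfs:

myzone# fstyp /dev/vx/rdsk/dg_name/vol_name
vxfs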
- Create a mount point inside the non-global zone and mount the file system:
myzone# mkdir /mnt1
myzone# mount -F vxfs /dev/vx/dsk/dg_name/vol_name /mnt1
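If the mount should come up automatically when the zone boots and it is not under VCS control (when VCS manages the mount, leave this to the Mount agent instead), you can add an entry to the zone's /etc/vfstab. The following is a sketch using the same dg_name and vol_name placeholders; adjust the fsck pass and mount options for your environment:

#device to mount              device to fsck                 mount point  FS type  fsck pass  mount at boot  options
/dev/vx/dsk/dg_name/vol_name  /dev/vx/rdsk/dg_name/vol_name  /mnt1        vxfs     1          yes            -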
Mounting a VxFS file system as a cluster file system from the non-global zone is not supported.