Veritas InfoScale™ 7.4.1 Virtualization Guide - Solaris
- Section I. Overview of Veritas InfoScale Solutions used in Solaris virtualization
- Section II. Zones and Projects
- Storage Foundation and High Availability Solutions support for Solaris Zones
- About Solaris Zones
- About VCS support for zones
- Overview of how VCS works with zones
- About the ContainerInfo service group attribute
- About the ContainerOpts resource type attribute
- About the ResContainerInfo resource type attribute
- Zone-aware resources
- About the Mount agent
- About networking agents
- About the Zone agent
- About configuring failovers among physical and virtual servers
- Configuring VCS in zones
- Prerequisites for configuring VCS in zones
- Deciding on the zone root location
- Performing the initial internal zone configuration
- About installing applications in a zone
- Configuring the service group for the application
- Configuring a zone resource in a failover service group with the hazonesetup utility
- Configuring a zone resource in a parallel service group with the hazonesetup utility
- Configuring multiple zone resources using the same VCS user for password-less communication
- Modifying the service group configuration
- Verifying the zone configuration
- Synchronizing the zone configuration across cluster nodes
- Performing maintenance tasks
- Troubleshooting zones
- Configuring for physical to virtual and virtual to physical failovers - a typical setup
- Adding VxFS file systems to a non-global zone
- Mounting VxFS as lofs into a non-global zone
- Mounting VxFS directly into a non-global zone from global zone
- Mounting VxFS as VxFS inside a non-global zone
- Adding a direct mount to a zone's configuration
- Benefits of a VxFS mount in a non-global zone over VxFS mount from global zone into the non-global zone
- SFCFSHA mounts
- Concurrent I/O access in non-global zones
- Veritas Extension for Oracle Disk Manager
- Exporting VxVM volumes to a non-global zone
- About SF Oracle RAC support for Oracle RAC in a zone environment
- Supported configuration
- Known issues with supporting SF Oracle RAC in a zone environment
- CFS mount agent does not support mounting VxVM devices inside non-global zones
- Issue with VCS agents
- Stopping non-global zones configured with direct-mount file systems from outside VCS causes the corresponding zone resource to fault or go offline
- Error message displayed for PrivNIC resource if zone is not running
- Warning messages displayed when VCS restarts
- The installer log of non-global zone contains warning messages
- Issue with CFS mounts
- Configuring Solaris non-global zones for disaster recovery
- Software limitations of Storage Foundation support of non-global zones
- Administration commands are not supported in non-global zone
- VxFS file system is not supported as the root of a non-global zone
- QIO and CQIO are not supported
- Package installation in non-global zones
- Package removal with non-global zone configurations
- Root volume cannot be added to non-global zones
- Some Veritas Volume Manager operations can cause volume device names to go out of sync
- Storage Foundation and High Availability Solutions support for Solaris Projects
- Section III. Oracle VM Server for SPARC
- Storage Foundation and High Availability Solutions support for Oracle VM Server for SPARC
- About Oracle VM Server for SPARC
- Terminology for Oracle VM Server for SPARC
- Oracle VM Server for SPARC deployment models
- Benefits of deploying Storage Foundation High Availability solutions in Oracle VM Server for SPARC
- Features
- Split Storage Foundation stack model
- Guest-based Storage Foundation stack model
- Layered Storage Foundation stack model
- System requirements
- Product release notes
- Product licensing
- Installing Storage Foundation in an Oracle VM Server for SPARC environment
- Exporting a Veritas volume to a guest domain from the control domain
- Provisioning storage for a guest domain
- Using Veritas Volume Manager snapshots for cloning logical domain boot disks
- Support of live migration for Solaris LDOMs with fencing configured in DMP mode
- Configuring Oracle VM Server for SPARC guest domains for disaster recovery
- Software limitations
- Known issues
- Cluster Server support for using CVM with multiple nodes in an Oracle VM Server for SPARC environment
- VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in an Oracle VM Server for SPARC environment
- About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of logical domain
- Cluster Server setup to fail over an Application running inside logical domain on a failure of Application
- Oracle VM Server for SPARC guest domain migration in VCS environment
- Overview of a warm migration
- Overview of a live migration
- Prerequisites before you perform domain migration
- Supported deployment models for Oracle VM Server for SPARC domain migration with VCS
- Migrating Oracle VM guest when VCS is installed in the control domain that manages the guest domain
- Migrating Oracle VM guest when VCS is installed in the control domain and single-node VCS is installed inside the guest domain to monitor applications inside the guest domain
- Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.1 and above
- Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.0
- About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
- Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- A typical setup for a Logical Domain with multiple I/O services
- Identify supported storage and network services
- Determine the number of nodes to form VCS cluster
- Install and configure VCS inside the control domain and alternate I/O domain
- Configuring storage services
- Configure storage service groups
- Configure network service groups
- Configure a service group to monitor services from multiple I/O domains
- Configure the AlternateIO resource
- Configure the service group for a Logical Domain
- Failover scenarios
- Recommendations while configuring VCS and Oracle VM Server for SPARC with multiple I/O domains
- Sample VCS configuration for AlternateIO resource configured as a fail over type
- Configuring VCS on logical domains to manage applications using services from multiple I/O domains
- SF Oracle RAC support for Oracle VM Server for SPARC environments
- About deploying SF Oracle RAC in Oracle VM Server for SPARC environments
- Sample configuration scenarios
- Preparing to deploy SF Oracle RAC in logical domain environments
- SF Oracle RAC with Oracle RAC database on I/O domains of two hosts
- SF Oracle RAC with Oracle RAC database on guest domains of two hosts
- SF Oracle RAC with Oracle RAC database on guest domains of single host
- SF Oracle RAC with Oracle RAC database on I/O domain and guest domain of single host
- Support for live migration in FSS environments
- Section IV. Reference
Configuring a direct mount of a VxFS file system in a non-global zone with VCS
The following procedure describes the typical steps to configure a direct mount of a VxFS file system inside a non-global zone.
To configure a direct mount inside a non-global zone
Create a VxVM disk group and volume:
Create a VxVM disk group from a device:
global# vxdg init data_dg c0t0d1
Create a volume in the disk group:
global# vxassist -g data_dg make data_vol 5G
For more information, see the Storage Foundation Administrator's Guide.
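As an optional check (not part of the original procedure), you can confirm that the disk group and volume were created:
global# vxprint -g data_dg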
Create a zone:
Create a root directory for the zone local-zone and change its permissions to 700:
global# mkdir -p /zones/local-zone
global# chmod 700 /zones/local-zone
On Solaris 11, configure a zone local-zone:
global# zonecfg -z local-zone
local-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:local-zone> create
zonecfg:local-zone> set zonepath=/zones/local-zone
zonecfg:local-zone> set ip-type=shared
zonecfg:local-zone> add net
zonecfg:local-zone:net> set physical=eri0
zonecfg:local-zone:net> set address=192.168.5.59
zonecfg:local-zone:net> end
zonecfg:local-zone> verify
zonecfg:local-zone> commit
zonecfg:local-zone> exit
The zone is now in the configured state.
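As an optional check (not part of the original procedure), you can list the zones and confirm that local-zone reports a configured status:
global# zoneadm list -cv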
Install the zone:
global# zoneadm -z local-zone install
Log in to the zone console from terminal 1 to set up the zone:
global# zlogin -C local-zone
Boot the zone from another terminal:
global# zoneadm -z local-zone boot
Follow the steps on the zone console on terminal 1 to set up the zone.
See the Oracle documentation for more information about creating a zone.
Add VxVM volumes to the zone configuration:
Check the zone status and halt the zone, if it is running:
global# zoneadm list -cv
  ID NAME        STATUS     PATH               BRAND    IP
   0 global      running    /                  native   shared
   2 local-zone  running    /zones/local-zone  native   shared

global# zoneadm -z local-zone halt
Add the VxVM devices to the zone's configuration:
global# zonecfg -z local-zone
zonecfg:local-zone> add device
zonecfg:local-zone:device> set match=/dev/vxportal
zonecfg:local-zone:device> end
zonecfg:local-zone> add device
zonecfg:local-zone:device> set match=/dev/fdd
zonecfg:local-zone:device> end
zonecfg:local-zone> add device
zonecfg:local-zone:device> set match=/dev/vx/rdsk/data_dg/data_vol
zonecfg:local-zone:device> end
zonecfg:local-zone> add device
zonecfg:local-zone:device> set match=/dev/vx/dsk/data_dg/data_vol
zonecfg:local-zone:device> end
zonecfg:local-zone> add fs
zonecfg:local-zone:fs> set dir=/etc/vx/licenses/lic
zonecfg:local-zone:fs> set special=/etc/vx/licenses/lic
zonecfg:local-zone:fs> set type=lofs
zonecfg:local-zone:fs> end
zonecfg:local-zone> verify
zonecfg:local-zone> commit
zonecfg:local-zone> exit
On Solaris 11, you must set fs-allowed to vxfs and odm in the zone's configuration:
global# zonecfg -z local-zone
zonecfg:local-zone> set fs-allowed=vxfs,odm
zonecfg:local-zone> commit
zonecfg:local-zone> exit
Boot the zone:
global# zoneadm -z local-zone boot
Create a VxFS file system on the volume inside a non-global zone:
Log in to the local-zone:
global# zlogin local-zone
Create a VxFS file system on the block device:
bash-3.00# mkfs -F vxfs /dev/vx/dsk/data_dg/data_vol
Create a mount point inside the zone:
Log in to the local-zone:
global# zlogin local-zone
Create a mount point inside the non-global zone:
bash-3.00# mkdir -p /mydata
Mount the VxFS file system on the mount point:
bash-3.00# mount -F vxfs /dev/vx/dsk/data_dg/data_vol /mydata
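As an optional check (not part of the original procedure), you can confirm that /mydata is mounted with the vxfs file system type:
bash-3.00# mount -v | grep mydata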
- See Configuring a zone resource in a failover service group with the hazonesetup utility.
Configure the zone service group:
On the first node, create the service group and set up password-less communication with the global zone:
global# hazonesetup -g zone_grp -r zone_res -z local-zone \
-p password -s sysA,sysB
Switch the service group from the first node to the second node, and run the hazonesetup command to set up password-less communication from that node.
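For example (a sketch that reuses the node names sysA and sysB and the same hazonesetup arguments as the previous step), the commands might look like this:
global# hagrp -switch zone_grp -to sysB
global# hazonesetup -g zone_grp -r zone_res -z local-zone \
-p password -s sysA,sysB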
Repeat the previous step for all the nodes in the cluster where the zone can go online.
Add Mount, DiskGroup, and Volume resources to the service group:
Add a disk group resource to the service group:
global# hares -add dg_res DiskGroup zone_grp
global# hares -modify dg_res DiskGroup data_dg
global# hares -modify dg_res Enabled 1
Add a volume resource to the service group:
global# hares -add vol_res Volume zone_grp
global# hares -modify vol_res Volume data_vol
global# hares -modify vol_res DiskGroup data_dg
global# hares -modify vol_res Enabled 1
Add a Mount resource to the service group:
global# hares -add mnt_res Mount zone_grp
global# hares -modify mnt_res BlockDevice \
/dev/vx/dsk/data_dg/data_vol
global# hares -modify mnt_res MountPoint /mydata
global# hares -modify mnt_res FSType vxfs
global# hares -modify mnt_res FsckOpt %-y
global# hares -modify mnt_res Enabled 1
Create the resource dependencies between the resources in the service group:
global# hares -link zone_res vol_res
global# hares -link vol_res dg_res
global# hares -link mnt_res zone_res
- For information on overriding resource type static attributes, see the Cluster Server Administrator's Guide.
Set the ContainerOpts attribute for the Mount resource for VxFS direct mount:
Override the ContainerOpts attribute at the resource level for mnt_res:
global# hares -override mnt_res ContainerOpts
Set the value of the RunInContainer key to 1 and the PassCInfo key to 0:
global# hares -modify mnt_res ContainerOpts RunInContainer \
1 PassCInfo 0
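As an optional check (not part of the original procedure), you can display the attribute value to confirm the override:
global# hares -value mnt_res ContainerOpts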
- Here is a sample configuration for the VxFS direct mount service group in the main.cf file:

group zone_grp (
    SystemList = { sysA = 0, sysB = 1 }
    ContainerInfo = { Name = local-zone, Type = Zone, Enabled = 1 }
    Administrators = { z_zoneres_sysA, z_zoneres_sysB }
    )

    Mount mnt_res (
        BlockDevice = "/dev/vx/dsk/data_dg/data_vol"
        MountPoint = "/mydata"
        FSType = vxfs
        FsckOpt = "-y"
        ContainerOpts = { RunInContainer = 1, PassCInfo = 0 }
        )

    DiskGroup dg_res (
        DiskGroup = data_dg
        )

    Volume vol_res (
        Volume = data_vol
        DiskGroup = data_dg
        )

    Zone zone_res (
        )

    zone_res requires vol_res
    vol_res requires dg_res
    mnt_res requires zone_res
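After the configuration is complete, you can optionally verify it and bring the group online. These commands are not part of the original procedure and assume the node name sysA used earlier:
global# hacf -verify /etc/VRTSvcs/conf/config
global# hagrp -online zone_grp -sys sysA
global# hagrp -state zone_grp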