Veritas InfoScale™ 7.4.1 Virtualization Guide - Solaris
- Section I. Overview of Veritas InfoScale Solutions used in Solaris virtualization
- Section II. Zones and Projects
- Storage Foundation and High Availability Solutions support for Solaris Zones
- About Solaris Zones
- About VCS support for zones
- Overview of how VCS works with zones
- About the ContainerInfo service group attribute
- About the ContainerOpts resource type attribute
- About the ResContainerInfo resource type attribute
- Zone-aware resources
- About the Mount agent
- About networking agents
- About the Zone agent
- About configuring failovers among physical and virtual servers
- Configuring VCS in zones
- Prerequisites for configuring VCS in zones
- Deciding on the zone root location
- Performing the initial internal zone configuration
- About installing applications in a zone
- Configuring the service group for the application
- Configuring a zone resource in a failover service group with the hazonesetup utility
- Configuring zone resource in a parallel service group with the hazonesetup utility
- Configuring multiple zone resources using same VCS user for password less communication
- Modifying the service group configuration
- Verifying the zone configuration
- Synchronizing the zone configuration across cluster nodes
- Performing maintenance tasks
- Troubleshooting zones
- Configuring for physical to virtual and virtual to physical failovers - a typical setup
- Adding VxFS file systems to a non-global zone
- Mounting VxFS as lofs into a non-global zone
- Mounting VxFS directly into a non-global zone from global zone
- Mounting VxFS as VxFS inside a non-global zone
- Adding a direct mount to a zone's configuration
- Benefits of a VxFS mount in a non-global zone over VxFS mount from global zone into the non-global zone
- SFCFSHA mounts
- Concurrent I/O access in non-global zones
- Veritas Extension for Oracle Disk Manager
- Exporting VxVM volumes to a non-global zone
- About SF Oracle RAC support for Oracle RAC in a zone environment
- Supported configuration
- Known issues with supporting SF Oracle RAC in a zone environment
- CFS mount agent does not support mounting VxVM devices inside non-global zones
- Issue with VCS agents
- Stopping non-global zones configured with direct-mount file systems from outside VCS causes the corresponding zone resource to fault or go offline
- Error message displayed for PrivNIC resource if zone is not running
- Warning messages displayed when VCS restarts
- The installer log of non-global zone contains warning messages
- Issue with CFS mounts
- Configuring Solaris non-global zones for disaster recovery
- Software limitations of Storage Foundation support of non-global zones
- Administration commands are not supported in non-global zone
- VxFS file system is not supported as the root of a non-global zone
- QIO and CQIO are not supported
- Package installation in non-global zones
- Package removal with non-global zone configurations
- Root volume cannot be added to non-global zones
- Some Veritas Volume Manager operations can cause volume device names to go out of sync
- Storage Foundation and High Availability Solutions support for Solaris Projects
- Section III. Oracle VM Server for SPARC
- Storage Foundation and High Availability Solutions support for Oracle VM Server for SPARC
- About Oracle VM Server for SPARC
- Terminology for Oracle VM Server for SPARC
- Oracle VM Server for SPARC deployment models
- Benefits of deploying Storage Foundation High Availability solutions in Oracle VM server for SPARC
- Features
- Split Storage Foundation stack model
- Guest-based Storage Foundation stack model
- Layered Storage Foundation stack model
- System requirements
- Product release notes
- Product licensing
- Installing Storage Foundation in a Oracle VM Server for SPARC environment
- Exporting a Veritas volume to a guest domain from the control domain
- Provisioning storage for a guest domain
- Using Veritas Volume Manager snapshots for cloning logical domain boot disks
- Support of live migration for Solaris LDOMs with fencing configured in DMP mode
- Configuring Oracle VM Server for SPARC guest domains for disaster recovery
- Software limitations
- Known issues
- Cluster Server support for using CVM with multiple nodes in a Oracle VM Server for SPARC environment
- VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in a Oracle VM Server for SPARC environment
- About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of logical domain
- Cluster Server setup to fail over an Application running inside logical domain on a failure of Application
- Oracle VM Server for SPARC guest domain migration in VCS environment
- Overview of a warm migration
- Overview of a live migration
- Prerequisites before you perform domain migration
- Supported deployment models for Oracle VM Server for SPARC domain migration with VCS
- Migrating Oracle VM guest when VCS is installed in the control domain that manages the guest domain
- Migrating Oracle VM guest when VCS is installed in the control domain and single-node VCS is installed inside the guest domain to monitor applications inside the guest domain
- Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.1 and above
- Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.0
- About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
- Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- A typical setup for a Logical Domain with multiple I/O services
- Identify supported storage and network services
- Determine the number of nodes to form VCS cluster
- Install and configure VCS inside the control domain and alternate I/O domain
- Configuring storage services
- Configure storage service groups
- Configure network service groups
- Configure a service group to monitor services from multiple I/O domains
- Configure the AlternateIO resource
- Configure the service group for a Logical Domain
- Failover scenarios
- Recommendations while configuring VCS and Oracle VM Server for SPARC with multiple I/O domains
- Sample VCS configuration for AlternateIO resource configured as a fail over type
- Configuring VCS on logical domains to manage applications using services from multiple I/O domains
- SF Oracle RAC support for Oracle VM Server for SPARC environments
- About deploying SF Oracle RAC in Oracle VM Server for SPARC environments
- Sample configuration scenarios
- Preparing to deploy SF Oracle RAC in logical domain environments
- SF Oracle RAC with Oracle RAC database on I/O domains of two hosts
- SF Oracle RAC with Oracle RAC database on guest domains of two hosts
- SF Oracle RAC with Oracle RAC database on guest domains of single host
- SF Oracle RAC with Oracle RAC database on I/O domain and guest domain of single host
- Support for live migration in FSS environments
- Section IV. Reference
Using Veritas Volume Manager snapshots for cloning logical domain boot disks
The following procedure highlights the steps to clone the boot disk of an existing logical domain using VxVM third-mirror break-off snapshots.
See Provisioning Veritas Volume Manager volumes as boot disks for guest domains.
Figure: Example of using Veritas Volume Manager snapshots for cloning Logical Domain boot disks
Before this procedure, ldom1 has its boot disk contained in a large volume, /dev/vx/dsk/boot_dg/bootdisk1-vol.
This procedure involves the following steps:
Cloning the logical domain configuration to form a new logical domain configuration.
This step is a Solaris logical domain procedure, and can be achieved using the following commands:
# ldm list-constraints -x
# ldm add-domain -i
Refer to the Oracle documentation for more information about cloning the logical domain configuration to form a new logical domain.
See the Oracle VM Server for SPARC Administration Guide.
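For instance, a minimal sketch of this cloning step, assuming the new domain is to be named ldom2 and the constraints are saved to an illustrative file named ldom1.xml:
primary# ldm list-constraints -x ldom1 > ldom1.xml
Edit ldom1.xml so that the domain name reads ldom2 instead of ldom1, and then create the new domain:
primary# ldm add-domain -i ldom1.xml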
After cloning the configuration, clone the boot disk and provision it to the new logical domain.
To create a new logical domain with a different configuration than that of ldom1, skip this step of cloning the configuration and create the desired logical domain configuration separately.
To clone the boot disk using Veritas Volume Manager snapshots
- Create a snapshot of the source volume bootdisk1-vol. To create the snapshot, you can either take some of the existing ACTIVE plexes in the volume, or you can use the following command to add new snapshot mirrors to the volume:
primary# vxsnap [-b] [-g diskgroup] addmir volume \
    [nmirror=N] [alloc=storage_attributes]
By default, the vxsnap addmir command adds one snapshot mirror to a volume unless you use the nmirror attribute to specify a different number of mirrors. The mirrors remain in the SNAPATT state until they are fully synchronized. The -b option can be used to perform the synchronization in the background. Once synchronized, the mirrors are placed in the SNAPDONE state.
For example, the following command adds two mirrors to the volume, bootdisk1-vol, on disks mydg10 and mydg11:
primary# vxsnap -g boot_dg addmir bootdisk1-vol \
    nmirror=2 alloc=mydg10,mydg11
If you specify the -b option to the vxsnap addmir command, you can use the vxsnap snapwait command to wait for synchronization of the snapshot plexes to complete, as shown in the following example:
primary# vxsnap -g boot_dg snapwait bootdisk1-vol nmirror=2
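Optionally, you can verify that the snapshot mirrors have reached the SNAPDONE state by inspecting the plex records; a minimal check using vxprint:
primary# vxprint -g boot_dg -ht bootdisk1-vol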
- To create a third-mirror break-off snapshot, use the following form of the vxsnap make command.
Caution:
Shut down the guest domain before executing the vxsnap command to take the snapshot.
primary# vxsnap [-g diskgroup] make \
    source=volume[/newvol=snapvol] \
    {/plex=plex1[,plex2,...]|/nmirror=number}
Either of the following attributes may be specified to create the new snapshot volume, snapvol, by breaking off one or more existing plexes in the original volume:
plex
Specifies the plexes in the existing volume that are to be broken off. This attribute can only be used with plexes that are in the ACTIVE state.
nmirror
Specifies how many plexes are to be broken off. This attribute can only be used with plexes that are in the SNAPDONE state. Such plexes could have been added to the volume by using the vxsnap addmir command.
Snapshots that are created from one or more ACTIVE or SNAPDONE plexes in the volume are already synchronized by definition.
For backup purposes, a snapshot volume with one plex should be sufficient.
For example,
primary# vxsnap -g boot_dg make \
    source=bootdisk1-vol/newvol=SNAP-bootdisk1-vol/nmirror=1
Here, bootdisk1-vol is the source volume, SNAP-bootdisk1-vol is the new snapshot volume, and 1 is the nmirror value.
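For comparison, a minimal sketch of the plex= form of the same command, assuming the volume contains an existing ACTIVE plex named bootdisk1-vol-02 (an illustrative plex name):
primary# vxsnap -g boot_dg make \
    source=bootdisk1-vol/newvol=SNAP-bootdisk1-vol/plex=bootdisk1-vol-02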
The block device for the snapshot volume will be /dev/vx/dsk/boot_dg/SNAP-bootdisk1-vol.
- Configure a service by exporting the /dev/vx/dsk/boot_dg/SNAP-bootdisk1-vol file as a virtual disk.
primary# ldm add-vdiskserverdevice \
    /dev/vx/dsk/boot_dg/SNAP-bootdisk1-vol vdisk2@primary-vds0
- Add the exported disk to ldom1 first.
primary# ldm add-vdisk vdisk2 \
    vdisk2@primary-vds0 ldom1
- Start ldom1 and boot ldom1 from its primary boot disk vdisk1.
primary# ldm bind ldom1
primary# ldm start ldom1
- If the new virtual disk device node entries do not show up in the /dev/[r]dsk directories, then run the devfsadm command in the guest domain:
ldom1# devfsadm -C
In this example, vdisk2 is the c0d2s# device:
ldom1# ls /dev/dsk/c0d2s*
/dev/dsk/c0d2s0  /dev/dsk/c0d2s2  /dev/dsk/c0d2s4  /dev/dsk/c0d2s6
/dev/dsk/c0d2s1  /dev/dsk/c0d2s3  /dev/dsk/c0d2s5  /dev/dsk/c0d2s7
- Mount the root file system of c0d2s0 and modify the /etc/vfstab entries so that all c#d#s# entries are changed to c0d0s#, as illustrated in the example below. You must do this because ldom2 is a new logical domain and the first disk in the operating system device tree is always named c0d0s#.
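For example, a minimal sketch of this edit, assuming the snapshot root is temporarily mounted at /mnt (an illustrative mount point) and that the root entry is a UFS file system:
ldom1# mount /dev/dsk/c0d2s0 /mnt
An entry in /mnt/etc/vfstab such as:
/dev/dsk/c0d2s0  /dev/rdsk/c0d2s0  /  ufs  1  no  -
would be changed to:
/dev/dsk/c0d0s0  /dev/rdsk/c0d0s0  /  ufs  1  no  -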
- Stop and unbind ldom1 from its primary boot disk vdisk1.
primary# ldm stop ldom1
primary# ldm unbind ldom1
- After you change the vfstab file, unmount the file system and unbind vdisk2 from ldom1:
primary# ldm remove-vdisk vdisk2 ldom1
- Bind vdisk2 to ldom2 and then start and boot ldom2.
primary# ldm add-vdisk vdisk2 vdisk2@primary-vds0 ldom2
primary# ldm bind ldom2
primary# ldm start ldom2
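Optionally, you can confirm that the new domain is bound and running:
primary# ldm list ldom2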
After ldom2 boots, it appears as ldom1 on the console because the other host-specific parameters, such as the hostname and the IP address, are still those of ldom1.
ldom1 console login:
- To change these parameters, bring ldom2 to single-user mode and run the sys-unconfig command.
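A minimal sketch of this step, assuming a Solaris 10 guest (the exact commands depend on the guest operating system release):
ldom2# init s
ldom2# sys-unconfig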
- Reboot ldom2.
During the reboot, the operating system prompts you to configure the host-specific parameters, such as the hostname and the IP address; enter the values that correspond to ldom2.
- After you have specified all these parameters, ldom2 boots successfully.