Mismatched Storage Foundation package versions are installed on Solaris 10, when using local zones

Article: 100001058
Last Published: 2013-09-15
Product(s): InfoScale & Storage Foundation


In a Solaris 10 environment using Solaris local zones, following the documented steps to upgrade Storage Foundation High Availability products to version 5.1 may lead to different versions of packages being installed in global and local zones.


Veritas Technical Support has discovered an issue in the documented steps to upgrade a Solaris 10 environment from a previous version of Storage Foundation (SF) to SF 5.1. In certain Solaris 10 environments using local zones, the packages in the global zone are upgraded, but the local zones are not upgraded. The result is that the global and local zones do not have the same level of packages after the upgrade.

Affected Environments

Solaris 10 systems are affected in the following cases:

* Solaris 10 systems using Solaris local zones whose zone roots are on Veritas Volume Manager (VxVM) volumes.
* Solaris 10 systems using Solaris local zones which are under Veritas Cluster Server (VCS) control regardless of the storage used for their zone roots.

Other Solaris 10 systems are not affected.

Problem Description

The documented upgrade steps advise that one or more of the following tasks be completed before starting the upgrade to SF 5.1:

* Offline all VCS service groups, if applicable.
* Deport all VxVM disk groups.
* Unmount Veritas File System (VxFS) file systems.
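The pre-upgrade tasks above correspond to commands like the following. The service group, disk group, and mount point names are examples only; substitute the names used in your environment:

```shell
# Offline the VCS service group that manages the zone (name is an example)
hagrp -offline zone_sg -sys node1

# Deport the VxVM disk group holding the zone root (name is an example)
vxdg deport zonedg23

# Unmount the VxFS file system used for the zone root (path is an example)
umount /zone1
```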

Performing any of these operations (offlining the VCS service groups, deporting disk groups, or unmounting the VxFS file system) will leave local zones unavailable for upgrade. The local zones will be left in one of two states before starting the upgrade:

* Shutdown and detached

If the Solaris local zones are under VCS control, offlining the zone resources will shut down and detach the local zone. The local zone will be in the CONFIGURED state.

* Shutdown and attached but with no mounted zone root

If the Solaris local zones have zone roots on VxVM volumes, the corresponding local zones must be shut down and the zone roots manually unmounted before the VxVM disk groups can be deported. Unmounting the VxFS file system likewise requires that the local zone be shut down and its zone root unmounted.

When you upgrade SFHA products, the installation removes and installs various packages within the Solaris 10 global zone. Many of these packages have 'SUNW_ALL_ZONES' set to true and therefore attempt to boot any attached local zone during removal or installation so that the local zones stay synchronized with the global zone.
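Whether an installed package applies to all zones can be checked with the pkgparam command; VRTSvxvm is used below only as an example of an SF package:

```shell
# Query the SUNW_ALL_ZONES attribute of an installed package
# (VRTSvxvm is an example; any SF package name can be substituted)
pkgparam VRTSvxvm SUNW_ALL_ZONES
# A value of "true" means the package is maintained in all zones
```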

In the case of Solaris local zones which have zone roots on VxVM volumes but are not under VCS control, these zones will be attached but will not have their zone roots mounted. In this case, any attempt to boot the local zone will fail, causing the corresponding package installation to fail and ultimately causing the SF 5.1 upgrade to fail unexpectedly.

In the case of Solaris local zones under VCS control, these zones will be detached from the global zone and therefore will not be booted during package installation in the global zone. In this case, the installer continues as normal; however, SF packages installed within the local zone will not be upgraded to the 5.1 release and may be left at an old version. The result is a loss of synchronization between the SF packages installed in the global and local zones.

Workaround for s10u9, or above

The "–u" or "-U" attach arguments can be used to attach and update the local zone that was offline after the Global zone is upgraded. s10u9 introduces the zoneadm "-U" option in addition to the "-u" option.
  • The "-u" argument updates the minimum number of packages within the attached zone to match the higher-revision packages and patches that exist on the new system (Figure 1).
  • The " -U" argument updates all packages in the attached zone that are also installed in the global zone.

Note: The "-U" argument is also available with s10u8 with patch 142909-17 (SPARC) or 142910-17 (x86) installed.
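On s10u8, the presence of the required kernel patch can be confirmed before attempting "attach -U"; the patch ID below is the SPARC one from the note above:

```shell
# Check whether the patch enabling "zoneadm attach -U" is installed on s10u8
# (142909-17 is the SPARC patch; use 142910-17 on x86)
showrev -p | grep 142909-17
```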

Figure 1 - Using zoneadm with the "-u" argument

# zoneadm -z zone-01 attach -u
Getting the list of files to remove
Removing 2620 files
Remove 18 of 18 packages
Installing 4671 files
Add 20 of 20 packages
Updating editable files
The file </var/sadm/system/logs/update_log> within the zone contains a log of the zone update.


Workaround for s10u8, or below

To avoid the issue described in this document, make sure that any local zones are available to be booted if required on the local node (in the case of a standalone system) or on one node in the cluster (in the case of a clustered system) during the upgrade to Storage Foundation 5.1. This requires leaving the local zones attached, with their zone roots mounted, before starting the upgrade to SF 5.1.

Systems where local zones are controlled by VCS:

Offline the zone resources.
Leave online any Mount or DiskGroup resources which are required to be online in order for the zone root to be mounted.
Before starting the upgrade, attach the local zones to the system with the following command:

# zoneadm -z zone_name attach -F
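Offlining only the zone resource while leaving its Mount and DiskGroup resources online might look like the following. The resource and system names are examples and depend on how the service group is configured in your cluster:

```shell
# Offline only the Zone resource (names are examples);
# the Mount and DiskGroup resources beneath it remain online
hares -offline zone_res -sys node1

# Confirm that the dependent Mount resource is still online
hares -state zoneroot_mnt -sys node1
```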

Systems where local zones are outside of the control of VCS but have zone roots on a Volume Manager volume:

Shut down the local zones using the zoneadm or shutdown commands, leaving them attached.
Leave the zone root mounted and the underlying disk group imported.
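Shutting a zone down cleanly while leaving it attached can be done from the global zone; the zone name below is an example:

```shell
# Graceful shutdown from inside the zone via zlogin
zlogin zone1 shutdown -i0 -g0 -y

# Alternatively, halt the zone immediately from the global zone:
# zoneadm -z zone1 halt
```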

In either case, the local zone should be in a state of 'installed' with the zone root mounted before starting the installation.

For example:

# zoneadm list -vci | grep zone1
 - zone1            installed  /zone1                         native   shared

# mount | grep zone1
/zone1 on /dev/vx/dsk/zonedg23/zonevol1 read/write/setuid/devices/delaylog/largefiles/qio/ioerror=mwdisable/dev=50459d8 on Sat Jan 23 10:44:27 2010

Once the local zone is in the installed state, it can be booted as required during package removal and installation, which ensures that the SF packages installed in the local zone stay synchronized with those in the global zone.

In the case of local zones having a zone root on a VxVM volume, the install program reports an error because the VxVM disk groups are imported during the upgrade. The upgrade continues and completes successfully, but you may see the following messages in the installation logs:

1 10:25:24 VRTSvxvm uninstall failed on rdgv240sol23

      Checking for system volumes:
      swap    ...

ERROR: The following volumes are still open:

       zonevol1 zonevol2 <=== THESE ARE VOLUMES USED FOR ZONEROOT

Please stop these volumes before removing this package.
pkgrm: ERROR: preremove script did not complete successfully

Removal of <VRTSvxvm> failed.

These messages are harmless and can be safely ignored. They will not prevent VxVM from being upgraded to the 5.1 release.

Systems already affected by this issue

If a system was upgraded to SF 5.1 while the local zones were detached, the local zones will still have previous versions of SF packages installed while the global zone will be running the latest 5.1 packages. This difference in package versions may cause unexpected failures of SF software running on the system and needs to be addressed as soon as possible.
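A version mismatch can be detected by comparing an SF package's version in the global zone with the version reported inside a local zone. VRTSvxvm and zone1 below are examples:

```shell
# Report the package version in the global zone
pkginfo -l VRTSvxvm | grep VERSION

# Report the same package's version inside the local zone
zlogin zone1 pkginfo -l VRTSvxvm | grep VERSION

# Differing VERSION values indicate the zones are out of sync
```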

To correct this situation, use one of the following methods:

* Downgrade SF in the global zone to the previous release with the local zones detached, so that the SF packages installed in the local zones are once more synchronized with the packages in the global zone. Then repeat the upgrade to SF 5.1 using the steps described in this document to ensure that the local zones are upgraded together with the global zone. Contact Veritas Support for assistance with this downgrade.

* Reinstall the affected local zones, causing each local zone's root file system to be recreated and its package database to be re-initialized using the packages currently installed in the global zone. Note that reinstallation re-creates the zone's root file system, so any post-installation configuration performed in the local zone will be lost and must be repeated once zone installation is complete. Reinstallation can be performed with the zoneadm commands as follows:

First, any affected local zones must be attached to the global zone so that they are in a state of 'installed' (attaching the local zone also verifies that the zone root is mounted):

 # zoneadm list -vci | grep zone1
 - zone1            configured /zone1                         native   shared

# zoneadm -z zone1 attach -F   

# zoneadm list -vci | grep zone1
 - zone1            installed  /zone1                         native   shared

The local zone can now be uninstalled and reinstalled:

 # zoneadm -z zone1 uninstall -F

# zoneadm -z zone1 install
Preparing to install zone <zone1>.



