Feature Category |
Feature |
Details |
Kubernetes Support |
Persistent Volumes |
InfoScale provides three classes of persistent storage (performance, resilient, and secure with encryption) for stateful container applications. The creation of persistent volumes ensures that there is no loss of data when containers fail. Consequently, data is always available no matter where the containers are scheduled in the container ecosystem. |
Kubernetes Support |
Application availability |
InfoScale manages HA and recovery automation for applications running in containers by monitoring critical application processes and resources. InfoScale also provides fencing and arbitration to prevent application data corruption and speed up recovery. |
Kubernetes Support |
Advanced storage management |
InfoScale supports the Kubernetes Container Storage Interface (CSI) plugin to provide high-performance shared storage for Kubernetes clusters by using the fast storage that is directly attached to the Kubernetes cluster nodes. InfoScale Storage provides highly available, persistent storage that conforms to CSI specifications for enterprise applications. It does so by using high-performance parallel storage access on shared storage (SAN) or in Flexible Storage Sharing (FSS) environments. InfoScale also supports Velero, a third-party application, to provide snapshot lifecycle management. |
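A volume of one of these classes is requested through a standard persistent volume claim. The following is a minimal sketch using the Kubernetes Python client; the StorageClass name "infoscale-csi" is a hypothetical placeholder for whichever class names your InfoScale CSI deployment publishes.

```python
# Minimal sketch (not taken from the product docs): requesting an
# InfoScale-backed persistent volume through the Kubernetes Python client.
# The StorageClass name "infoscale-csi" is a hypothetical placeholder.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="infoscale-csi",  # placeholder class name
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```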
Kubernetes Support |
Application migration |
InfoScale supports moving non-containerized applications into a container environment by copying the application data storage volumes onto the Kubernetes cluster nodes. The InfoScale CSI plugin then presents the same data to the applications that are running in the container. |
Security and Governance |
Support for protection against ransomware |
To ensure compliance with the security and governance requirements of critical data, support for the following features is added to the Veritas File System (VxFS) module:
|
Compliance |
Volume encryption keys for compliance |
For the encrypted volumes that are created with disk group version 300 or later, InfoScale uses the FIPS 140-2 standards to validate the wrapping key (KEK) that secures volume encryption. For the encrypted volumes that are created with disk group version 290 or earlier, you must first upgrade to disk group version 300 or later. Thereafter, you can use the re-key operation to ensure that InfoScale uses the FIPS standards to validate the KEK. |
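As a rough illustration of that sequence, the sketch below upgrades a disk group and then re-keys an encrypted volume. The vxdg commands are standard VxVM; the re-key invocation and the object names are assumptions for illustration only, so confirm the exact syntax in the InfoScale administrator's guide for your release.

```python
# Hedged sketch of the upgrade-then-re-key sequence described above.
# "vxdg upgrade" and "vxdg list" are standard VxVM commands; the re-key
# invocation below is ASSUMED syntax for illustration only -- confirm it in
# the InfoScale administrator's guide for your release.
import subprocess

DG = "datadg"      # hypothetical disk group name
VOL = "encvol01"   # hypothetical encrypted volume name

# Raise the disk group version (300 or later is required before the
# FIPS-validated KEK handling applies), then verify it.
subprocess.run(["vxdg", "upgrade", DG], check=True)
subprocess.run(["vxdg", "list", DG], check=True)

# Re-key the encrypted volume so that the KEK is validated per FIPS 140-2.
subprocess.run(["vxencrypt", "-g", DG, "rekey", VOL], check=True)  # assumed command
```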
Communication |
Support for configuring LLT over UDP multiport |
LLT uses UDP sockets for communication among the cluster nodes and creates one UDP socket for each LLT link. In Flexible Storage Sharing (FSS) environments, read-write operations may be performed on remote disks, and one socket per LLT link may not be enough for large data volumes. More sockets are needed to achieve the parallelism and the throughput that is required to meet the needs of applications that generate large volumes of data. |
Cluster Node Upgrades |
Rolling upgrade of cluster nodes with different versions of VCS engine |
The rolling upgrade process minimizes the downtime of a cluster during an upgrade to the amount of time that it takes to fail over a service group. InfoScale 7.4.2 and later versions let you configure clusters with nodes that run different versions of the VCS engine. To support such configurations, the installer provides a rolling upgrade option for InfoScale components that upgrades the kernel RPMs and the VCS agent-related RPMs in the same process. The rolling upgrade may involve some application downtime while a node is upgraded, but there is zero cluster downtime. |
InfoScale - What's New: InfoScale Feature by Release
Feature Category |
Feature |
Details |
Replication |
DCM Logging in DCO |
In InfoScale 7.4.1 and prior releases, when replication is configured, a Data Change Map (DCM) log is associated as a separate log plex with each data volume in the RVG. Starting with version 7.4.2, InfoScale allows you to maintain DCM logs as per-volume Fast Mirror Resync (FMR) maps inside the Data Change Object (DCO) that is associated with the corresponding data volumes. |
Replication |
Adaptive synchronous mode in VVR |
The adaptive synchronous mode in VVR is an enhancement to the existing synchronous override mode. In the adaptive synchronous mode, replication switches from synchronous to asynchronous based on cross-site network latency. This allows replication to take place in synchronous mode when network conditions are good, and automatically switch to asynchronous mode when there is an increase in cross-site network latency. |
Veritas File System |
Changes in VxFS Disk Layout Versions (DLV) |
The following DLV changes are now applicable:
|
Veritas Volume Manager |
Support for disk group level encryption key management and the re-key operation |
InfoScale supports the use of a single KMS key for all the volumes in a disk group. Consequently, you can maintain a common KMS key at the disk group level instead of maintaining an individual KMS key for each volume. When you start an encrypted volume that shares a common KMS key with the disk group, VxVM needs to fetch only one key to enable access to the volume. Thus, a common KMS key reduces the load on the KMS, because VxVM no longer sends one request per volume; a single request to the KMS lets you start all the volumes in one operation. |
Cluster Server Agents |
IMF-aware Mount agent |
IMF for mounts is now supported for VxFS, ext4, XFS, and NFS file system types. |
Cluster Server Agents |
SystemD support for Sybase and SybaseBk agents |
The VCS agents for Sybase and SybaseBk are now supported in SystemD environments, and VCS unit service files are available for the corresponding application services. |
Cluster Server |
Support for configuring LLT over UDP multiport |
LLT uses UDP sockets for communication among the cluster nodes and creates one UDP socket per LLT link. In a Flexible Storage Sharing (FSS) environment, data can be read from and written to remote disks, and one socket per LLT link may not be enough for large read-write operations. More sockets are needed to achieve the parallelism and throughput that such data-intensive applications require. Configuring LLT over UDP multiport enables you to create additional sockets per link; these sockets are reserved only for I/O shipping. |
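To make the parallelism argument concrete, here is a small conceptual Python sketch, not LLT configuration: several UDP sockets per link give large I/O-shipping payloads independent send paths instead of queuing behind one socket. Addresses, ports, and the chunking scheme are placeholders.

```python
# Conceptual sketch only (this is not LLT configuration): several UDP sockets
# per link give large I/O-shipping payloads independent send paths instead of
# queuing behind a single socket. Addresses, ports, and the chunk split are
# placeholders.
import socket

PEER = ("127.0.0.1", 50000)   # placeholder for the remote link address
NUM_SOCKETS = 4               # extra sockets reserved for I/O shipping

sockets = []
for i in range(NUM_SOCKETS):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", 50010 + i))   # one local port per socket
    sockets.append(s)

payload = b"x" * (64 * 1024)  # a large write, split across the sockets
chunks = [payload[i::NUM_SOCKETS] for i in range(NUM_SOCKETS)]
for s, chunk in zip(sockets, chunks):
    s.sendto(chunk, PEER)     # each chunk travels over its own socket
```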
Cluster Server |
Ability to disable CmdServer |
By default, the CmdServer process runs as a daemon. It starts as soon as VCS starts, and you cannot disable the daemon. InfoScale now lets you disable the CmdServer daemon. |
Cluster Server |
Ability to stop VCS without evacuating service groups |
By default, when VCS is stopped as part of a system restart operation, the active service groups on the node are migrated to another cluster node. In some cases, you may not want to evacuate the service groups during a system restart. For example, you may want to avoid administrative intervention during a manual shutdown. InfoScale now lets you choose whether or not to evacuate service groups when VCS is stopped. |
Cluster Server |
Support for starting VCS in a customized environment |
InfoScale provides the following files that let you customize your VCS startup environment and how the VCS engine is started:
|
Cluster Server |
Ability to form a cluster with different versions of the VCS engine |
Starting with version 7.4.2, InfoScale allows for the formation of clusters with nodes that run different versions of the VCS engine. It does so by providing a framework that lets you:
|
Supported Configurations |
Deprecated support for Oracle 11g R2 |
InfoScale no longer supports any configurations with Oracle 11g R2 or earlier. |
Supported Configurations |
Support for Oracle 19c |
InfoScale now supports single-instance configurations with Oracle 19c. |
Security |
Improved password encryption for VCS users and agents |
The VCS component now uses the AES-256 algorithm to encrypt the VCS user and the VCS agent passwords by default, for enhanced security. The vcsencrypt utility and the hauser command generate passwords that are encrypted by using the standard AES-256 algorithm. |
Installation and Upgrades |
Enhanced support for Ansible |
In this release, additional capabilities have been added to the Ansible module provided by Veritas. You can now use Ansible to perform the following operations in an InfoScale environment on Linux:
|
Installation and Upgrades |
Changes in the VRTSperl package |
The VRTSperl 5.30 package is built using the Perl 5.30 source code. Therefore, all the features and fixes of the core Perl 5.30 are available in VRTSperl 5.30. Additionally, the fix for the following issue is now included in VRTSperl 5.30: Unable to set supplementary group IDs #17031: https://github.com/perl/perl5/issues/17031 |
Installation and Upgrades |
Change in upgrade path |
You can upgrade to Veritas InfoScale 7.4.2 only if your currently installed product has one of the following base versions.
|
Feature Category |
Feature |
Details |
Supported Configurations |
Support for CIFS configurations on SUSE 15 |
You may configure CIFS in the user mode, the domain mode, or the ads mode. |
Supported Configurations |
Support for SUSE 15 |
InfoScale now supports SUSE Linux Enterprise Server 15. The installation files for this release are available for download at the same location as the one for the InfoScale 7.4.1 GA release. The file names begin with Veritas_InfoScale_7.4.1_SLES15. All the InfoScale capabilities that are available on the RHEL 7 and SUSE 12 platforms are also available on SUSE 15. The commands that are mentioned in the context of the RHEL platform in the InfoScale documentation also apply to all the supported RHEL- and SUSE-compatible distributions. |
Feature Category |
Feature |
Details |
Cluster Server Agents |
Using SystemD attributes for Sybase and SybaseBk |
SystemD attributes are only applicable on SLES 12, RHEL 7, and supported RHEL-compatible distributions. InfoScale provides the following optional attributes to the Sybase and the SybaseBk agents in SystemD environments. |
Cluster Server Agents |
SystemD support for Sybase and SybaseBk agents |
The VCS agents for Sybase and SybaseBk are now supported in SystemD environments, and VCS unit service files are available for the corresponding application services. |
Cluster Server |
Disabling CmdServer |
By default, the CmdServer process runs as a daemon. It starts as soon as VCS starts, and you cannot disable the daemon. InfoScale now lets you disable the CmdServer daemon. |
Cluster Server |
Stopping VCS without evacuating service groups |
By default, when VCS is stopped as part of a system restart operation, the active service groups on the node are migrated to another cluster node. In some cases, you may not want to evacuate the service groups during a system restart. For example, you may want to avoid administrative intervention during a manual shutdown. InfoScale now lets you choose whether or not to evacuate service groups when VCS is stopped. A new environment variable, NOEVACUATE, is introduced to specify whether or not to evacuate service groups when a node is shut down or restarted. |
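A rough sketch of how you might enable this behavior follows. The NOEVACUATE variable name comes from this release note; the configuration file location and value semantics shown below are assumptions for illustration, so confirm the exact procedure in the Cluster Server documentation.

```python
# Hedged sketch: enabling "do not evacuate on shutdown". The variable name
# NOEVACUATE comes from this release note; the file location
# (/etc/sysconfig/vcs on Linux) and the value semantics are ASSUMPTIONS for
# illustration -- confirm both in the Cluster Server documentation.
from pathlib import Path

conf = Path("/etc/sysconfig/vcs")   # assumed location of VCS start/stop variables
lines = conf.read_text().splitlines() if conf.exists() else []

# Replace any existing NOEVACUATE entry, then append the new setting.
lines = [line for line in lines if not line.startswith("NOEVACUATE=")]
lines.append("NOEVACUATE=1")        # assumed: 1 = do not evacuate service groups

conf.write_text("\n".join(lines) + "\n")
# A subsequent node shutdown or restart then stops VCS without migrating the
# active service groups to another cluster node.
```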
Supported Configurations |
InfoScale support in Nutanix HCI environments |
InfoScale supports Nutanix hyper-converged infrastructure (HCI) architecture. The Nutanix Acropolis Hypervisor (AHV) can co-exist with the existing storage infrastructure and offload workloads from existing storage platforms to improve the performance, capability, and linear scalability for InfoScale. This capability delivers a unified, scale-out, shared-nothing architecture with no single point of failure (SPOF). You can set up InfoScale clusters on virtual machines (VMs) that are hosted on Nutanix AHV. You can create the following high availability (HA) configurations for applications by using InfoScale components on Nutanix VMs:
You can configure applications for disaster recovery (DR) by using the Volume Replicator (VVR) component and the Global Cluster Option (GCO) feature of InfoScale. InfoScale configurations are supported only with Nutanix AOS 5.10.5 and later. |
Feature Category |
Feature |
Details |
Installation and Upgrades |
Ansible Support |
Ansible is a popular configuration management tool that automates various configuration and deployment operations in your environment. Ansible playbooks are YAML files that contain human-readable code defining the operations to perform in your environment. Veritas now provides Ansible modules that can be used in playbooks to install or upgrade Veritas InfoScale, deploy clusters, or configure features such as Flexible Storage Sharing (FSS), Cluster File System (CFS), and Disk Group Volume. For the Ansible modules, playbook templates, and user's guide for using Ansible in an InfoScale environment, visit: |
Installation and Upgrades |
Upgrade Path |
You can upgrade to Veritas InfoScale 7.4.1 only if the base version of your currently installed product is 6.2.1 or later. |
Installation and Upgrades |
Deprecated support for co-existence of Veritas InfoScale products |
Support for co-existence of the following Veritas InfoScale products has been deprecated in 7.4.1:
Veritas no longer supports co-existence of more than one InfoScale product on a system. |
Licensing |
Misc |
Veritas collects licensing and platform-related information from InfoScale products as part of the Veritas Product Improvement Program. The information collected helps identify how customers deploy and use the product, and enables Veritas to manage customer licenses more efficiently. The Veritas Telemetry Collector gathers this information and sends it to an edge server. The Veritas Cloud Receiver (VCR) is a pre-configured, cloud-based edge server deployed by Veritas. While installing or upgrading InfoScale, ensure that you configure the VCR as your edge server. For more information about setting up and configuring telemetry data collection, see the Veritas InfoScale Installation or the Veritas InfoScale Configuration and Upgrade guides. |
Security |
Support for third-party certificate for entity validation in SSL/TLS Server |
InfoScale supports using a third-party certificate for entity validation in the SSL/TLS Server in VxAT on a Linux host. Note: Third-party certificates are not supported on Windows hosts. In prior InfoScale releases, the SSL/TLS Server used a self-signed certificate. This self-signed certificate is not verified by a trusted Certificate Authority, and hence poses a security threat. With the support for third-party trusted certificates, you can now generate a certificate for the SSL/TLS Server by providing the encrypted passphrase to InfoScale. InfoScale then issues a certificate signing request, which is used to generate a certificate for the SSL/TLS Server. For more information, see the Veritas InfoScale Installation Guide - Linux. |
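To make the workflow easier to follow, here is a generic Python illustration of what the certificate-signing-request step produces, using the cryptography package. This is not the InfoScale/VxAT tooling, and the subject names are placeholders; the resulting PEM-encoded CSR is what a trusted CA signs.

```python
# Generic illustration of the certificate-signing-request step, using the
# Python "cryptography" package. This is NOT the InfoScale/VxAT tooling; it
# only shows what a CSR contains. Subject names are placeholders.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "vxat-server.example.com"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Org"),
    ]))
    .sign(key, hashes.SHA256())
)

# The PEM-encoded CSR is what a trusted CA signs to produce the server certificate.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```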
Security |
Discontinuation of SSL/TLS Server support for TLSv1.0 and TLSv1.1 |
To reduce security vulnerabilities, the TLSv1.0 and TLSv1.1 protocols are not supported by default. However, you can enable these protocols by setting the value of the AT_CLIENT_ALLOW_TLSV1 attribute to 1. |
Security |
Discontinued support |
The following features are no longer supported in this release:
|
Security |
openssl 1.0.2o for enhanced security |
The VxAT server now uses openssl 1.0.2o for SSL communication. |
Supported Configurations |
Support for Oracle 18c |
InfoScale now supports single-instance configurations with Oracle 18c. |
Supported Configurations |
Support for Oracle Enterprise Manager 13c |
InfoScale now provides an OEM plugin for Oracle 13c. |
Cloud Environments |
New high availability agents for Google Cloud Platform (GCP) |
InfoScale has introduced the GoogleIP and the GoogleDisk agents for GCP environments. These agents are bundled with the product.
GoogleIP agent
The agent performs the following tasks:
The GoogleIP resource depends on the IP resource.
GoogleDisk agent
The GoogleDisk resource does not depend on any other resources. For more information, see the Cluster Server Bundled Agents Reference Guide - Linux. |
Cloud Environments |
Support for file-level tiering to migrate data using cloud connectors |
InfoScale supports file-level tiering to migrate data using cloud connectors. In file-level tiering, a single file is broken into chunks of a fixed size, and each chunk is stored as a single object. A single file can thus have multiple objects. Relevant metadata is associated with each object, which makes it easy to access the file directly from the cloud. Because a file is broken into individual objects, read-write performance is improved. Also, the large object size facilitates migration of large files with minimal chunking. For details about migrating data using cloud connectors, refer to the InfoScale Solutions in Cloud Environments document. |
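The chunking idea reads roughly like the sketch below: split a file into fixed-size pieces and pair each piece with metadata that records its place in the file. This is a conceptual illustration only, not InfoScale's cloud-connector code, and the 64 MiB chunk size is a placeholder.

```python
# Conceptual sketch of the chunking step in file-level tiering: split a file
# into fixed-size pieces and pair each piece with metadata that records its
# place in the file. This is not InfoScale's cloud-connector code, and the
# 64 MiB chunk size is a placeholder.
import os

CHUNK_SIZE = 64 * 1024 * 1024  # placeholder object size

def chunk_file(path, chunk_size=CHUNK_SIZE):
    """Yield (object_key, metadata, data) for each chunk of the file."""
    file_size = os.path.getsize(path)
    with open(path, "rb") as f:
        index = 0
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            metadata = {
                "source_file": path,
                "chunk_index": index,
                "offset": index * chunk_size,
                "length": len(data),
                "file_size": file_size,
            }
            yield f"{os.path.basename(path)}.part{index:06d}", metadata, data
            index += 1

# Example: list the objects a file would map to.
# for key, meta, _ in chunk_file("/data/archive.bin"):
#     print(key, meta["offset"], meta["length"])
```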
Cloud Environments |
Support for InfoScale configurations in Google Cloud |
InfoScale lets you configure applications for HA and DR in Google Cloud environments. The GoogleIP and GoogleDisk agents are provided to support IP and disk resources in GCP. The following replication configurations are supported:
The following HA and DR configurations are supported:
For details, refer to the InfoScale Solutions in Cloud Environments document. |
Cluster Server Agents |
Support for cloned Application Agent |
The Application agent is used to make applications highly available when an appropriate ISV agent is not available. To make multiple different applications highly available using a cluster, you must create a service group for each application. InfoScale lets you clone the Application agent so that you can configure a different service group for each application. You must then assign the appropriate operator permissions for each service group for it to function as expected. Note: A cloned Application agent is also IMF-aware. For details, see the Cluster Server Bundled Agents Reference Guide for your platform. |
Cluster Server Agents |
IMF-aware SambaShare agent |
The SambaShare agent is now IMF-aware. |
Cluster Server Agents |
New optional attributes in the SambaServer Agent |
The Samba Server Agent now supports the Interfaces and the BindInterfaceOnly attributes. These attributes enable the agent to listen on the network interfaces that are specified for the Samba server. |
Veritas Volume Manager |
Enhanced performance of the vradmind daemon for collecting consolidated statistics |
You can configure VVR to collect statistics of the VVR components. The collected statistics can be used to monitor the system and diagnose problems with the VVR setup. By default, VVR collects the statistics automatically when the vradmind daemon starts. The vradmind daemon is enhanced by making it a multi-threaded process where one thread is reserved specifically for collecting periodic statistics. Note: If the vradmind daemon is not running, VVR stops collecting the statistics. For details, see Veritas InfoScale Replication Administrator's Guide. |
Veritas Volume Manager |
Changes in hot-relocation in FSS environment |
In FSS environments, hot-relocation employs a policy-based mechanism for healing storage failures. Storage failures may include disk media failures or node failures that render storage inaccessible. Earlier, VxVM could not differentiate between disk media failures and node failures, and as a result it used the same value for both the node_reloc_timeout and storage_reloc_timeout tunables. The hot-relocation daemon is now enhanced to differentiate between disk media failures and node failures, so you can set different values for the node_reloc_timeout and storage_reloc_timeout tunables for hot-relocation in FSS environments. The default value of the storage_reloc_timeout tunable is 30 minutes, and the default value of the node_reloc_timeout tunable is 120 minutes. You can modify the tunable values to suit your business needs. |
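A hedged sketch of setting the two tunables to different values follows. vxtune is shown here as the usual interface for VxVM tunables, but the exact invocation for these particular tunables is an assumption; confirm the procedure in the InfoScale administrator's guide.

```python
# Hedged sketch: giving node and storage failures different relocation
# timeouts. vxtune is shown as the usual interface for VxVM tunables, but the
# exact invocation for these tunables is an ASSUMPTION -- confirm it in the
# InfoScale administrator's guide.
import subprocess

# Values are in minutes; 30 and 120 are the defaults quoted above.
subprocess.run(["vxtune", "storage_reloc_timeout", "30"], check=True)
subprocess.run(["vxtune", "node_reloc_timeout", "60"], check=True)

# Query the tunables afterwards to verify the change (assumed query form).
subprocess.run(["vxtune", "storage_reloc_timeout"], check=True)
subprocess.run(["vxtune", "node_reloc_timeout"], check=True)
```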
Veritas File System |
Changes in VxFS Disk Layout Versions (DLV) |
The following DLV changes are now applicable:
With this change, you can create and mount VxFS only on DLV 11 and later. DLV 6 to 10 can be used for local mount only. |
Veritas File System |
Support for SELinux security extended attributes |
The SELinux policy for RHEL 7.6 and later now includes support for the VxFS file system as persistent storage of SELinux security extended attributes. With this support, you can now use SELinux security functionalities and features on VxFS files and directories on RHEL 7.6 and later. |
Replication |
Added support to assign a slave node as a logowner |
In a disaster recovery environment, VVR maintains write-order fidelity for the application I/Os received. When replicating in a shared disk group environment, VVR designates one cluster node as the logowner to maintain the order of writes. By default, VVR designates the master node as the logowner. To optimize the master node workload, VVR now enables you to assign any cluster node (slave node) as the logowner. Note: In the following cases, the change in logowner role is not preserved, and the master node takes over as the logowner.
For more details about assigning a slave node as a logowner, refer to the Veritas InfoScale™ 7.4.1 Replication Administrator's Guide. |
Replication |
Technology preview: Adaptive synchronous mode in VVR |
When the synchronous attribute of the RLINK in VVR is set to override, the system temporarily switches the replication mode from synchronous to asynchronous whenever RLINK is disconnected. The override option allows VVR to continue receiving writes from the application even when RLINK is disconnected. However, in case of high network latency, replication continues to run in synchronous mode with degraded application performance. The adaptive synchronous mode in VVR is an enhancement to the existing synchronous override mode. In the adaptive synchronous mode, replication switches from synchronous to asynchronous based on cross-site network latency. This allows replication to take place in synchronous mode when network conditions are good, and automatically switch to asynchronous mode when there is an increase in cross-site network latency.
You can also set alerts for when the system undergoes prolonged periods of network deterioration. For more information, see the Veritas InfoScale Replication Administrator's Guide - Linux. |
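The decision adaptive synchronous mode makes can be pictured with the conceptual sketch below: sample cross-site latency, replicate synchronously while it stays under a threshold, fall back to asynchronous when it rises, and raise an alert after prolonged deterioration. VVR implements this internally; the threshold, sampling method, and alert shown are placeholders for illustration only.

```python
# Conceptual sketch only: the kind of latency-driven decision that adaptive
# synchronous mode makes. VVR implements this internally; the threshold,
# sampling method, and alert below are placeholders for illustration.
import subprocess
import time

LATENCY_THRESHOLD_MS = 20        # placeholder switch-over threshold
DETERIORATION_ALERT_SECS = 300   # placeholder "prolonged deterioration" window

def cross_site_latency_ms(host="dr-site.example.com"):
    """Approximate round-trip latency to the remote site with one ping."""
    start = time.monotonic()
    subprocess.run(["ping", "-c", "1", "-W", "1", host],
                   stdout=subprocess.DEVNULL, check=False)
    return (time.monotonic() - start) * 1000

degraded_since = None
for _ in range(6):                      # a few sample cycles for illustration
    latency = cross_site_latency_ms()
    if latency > LATENCY_THRESHOLD_MS:
        mode = "asynchronous"
        degraded_since = degraded_since or time.monotonic()
        if time.monotonic() - degraded_since > DETERIORATION_ALERT_SECS:
            print("ALERT: prolonged cross-site network deterioration")
    else:
        mode = "synchronous"
        degraded_since = None
    print(f"latency={latency:.1f} ms -> replicate in {mode} mode")
    time.sleep(10)
```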