Artificial Intelligence – The Good, The Bad, and The Ugly

Veritas Perspectives March 14, 2023

Artificial Intelligence (AI)-powered chatbots have risen to prominence in recent years, with organisations both public and private implementing conversational AI software for a wide variety of purposes, including customer experience and support as well as internal helpdesk and troubleshooting services. These AI solutions are effective at reducing the burden on customer service teams, triaging IT support needs, and lowering call centre costs. Yet many of these solutions are limited in their capabilities and can only address a narrow scope of use cases.

As a result, forward-thinking organisations are exploring more advanced uses of AI by embracing the capabilities of general-purpose large language models (LLMs). The emergence of ChatGPT, an LLM trained by OpenAI and launched on 30 November 2022, has opened the eyes of organisations and individuals to a vast range of applications and use cases. Unlike traditional chatbots, ChatGPT can support a wide variety of purposes, such as writing code, drawing insights from research text, or creating marketing materials such as website copy and product brochures. These services can also be accessed through APIs, which allow organisations to integrate the capabilities of publicly available LLMs into their own apps, products, and in-house services based on their particular needs.
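As an illustration of that kind of API integration, the sketch below shows one way an organisation might call a publicly available LLM from its own tooling, here using OpenAI's chat completions REST endpoint from Python. The model name, prompt, helper function, and environment variable are illustrative assumptions rather than a prescribed integration, and any real deployment would need the governance and security controls discussed later in this post.

```python
import os
import requests

# Minimal sketch: calling a publicly available LLM (OpenAI's chat completions
# REST endpoint) from an in-house service. The model name, prompt, and helper
# are illustrative assumptions, not a recommended or complete setup.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # assumed to be provisioned securely


def draft_product_copy(product_summary: str) -> str:
    """Ask the LLM to draft a short piece of marketing copy."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system", "content": "You write concise marketing copy."},
                {"role": "user", "content": f"Write a two-sentence blurb for: {product_summary}"},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    # The generated text is returned in the first choice's message content.
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(draft_product_copy("a data protection appliance with immutable storage"))
```

Note that anything passed in the prompt leaves the organisation's environment, which is exactly why the data management considerations below matter.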

Adopting tools such as ChatGPT can help organisations streamline their processes, improve efficiency, reduce manual effort, and gain a competitive edge, ultimately increasing revenue. Used effectively, they can also elevate employee capabilities by providing access to resources that were previously unavailable, enhancing an individual's knowledge base and skill set.


Balancing innovation and responsibility: Data management considerations around the use of AI for business

The sheer pace of progress in the AI space is putting added pressure on business decision makers to work out how these advancements fit into their existing data management strategies. As the implementation of AI in business processes becomes increasingly common, it brings a range of considerations around potential risks and blind spots.

It is common to see organisations rush to implement AI technology like ChatGPT so as not to fall behind competitors, only realising its limitations some time later. A similar scenario played out during the COVID-19 pandemic, when organisations moved their data to the cloud to maintain productivity, only to encounter problems around cost, backups, and compliance that they then had to address retroactively.

When integrating AI into business processes, organisations will typically train the AI not only on data from online sources but also on their own data, potentially including sensitive company information and IP. This has significant security implications for organisations that become dependent on these AI-enabled processes without the proper framework in place to keep that information safe.

Any organisation interacting with these services must ensure that data used for AI purposes is subject to the same principles and safeguards around security, privacy, and governance as data used for any other business purpose. Many organisations are already alert to the potential dangers. Take Amazon, for instance, which recently warned its employees about ChatGPT. Employees had been using ChatGPT to support engineering and research work, but a corporate attorney at Amazon warned them against it after seeing the AI produce output that closely mimicked confidential internal Amazon data.

Organisations must also consider how to ensure the integrity of any data processes that leverage AI, and how to secure the data in the event of a data centre outage or a ransomware attack. They must consider the provenance and quality of the data they feed into the AI engine, since not all information produced by AI is accurate. And they must ask themselves how they will protect the data produced by AI, ensuring that it complies with local legislation and regulations, and is not at risk of falling into the wrong hands.

A broader consideration is what these developments in AI mean from a security perspective. These tools will be adopted not only for productive use cases but also by bad actors, who will seek to apply the technology to increase the scale and sophistication of their cyberattacks. It is imperative for organisations to recognise the potential harm that AI can cause to their operations and take the necessary steps to protect themselves from cyberattacks and data breaches.


Safeguarding your data and infrastructure against cyber threats

While the true potential of AI is yet to be discovered, we know that its applications will be highly data-intensive, creating the need for enterprises to manage that data efficiently and responsibly. An organisation's AI strategy will need to be a regular, seamless part of its overall data management strategy.

Veritas, with over 25 years of experience in data management, has been a trusted partner in keeping highly available, mission-critical applications running, minimising downtime with fast failover across data centres and the cloud. Our proactive approach delivers predictable availability, application resilience, and storage efficiency across multi-cloud, virtual, and physical environments.

Furthermore, our comprehensive suite of software and services can play a vital role in ensuring secure and compliant use of AI data, safeguarding organisations against cost risks and cybersecurity threats. Veritas can help protect an organisation's critical assets, namely its data and IT infrastructure, and ensure that every part of its IT environment is backed up to immutable storage. With advanced threat detection capabilities and total visibility across the IT landscape, Veritas can help organisations stay compliant with regulatory requirements and recover quickly from any disruption.

With AI advancing faster than most organisations can keep up with, it is essential to partner with a trusted provider like Veritas to mitigate the risks associated with AI and other emerging technologies. By doing so, organisations can protect themselves from potential harm and unlock the full potential of these technologies to drive growth and innovation.

To learn how Veritas can help your organisation harness the power of AI while protecting against emerging cyber risks, please contact me and my team.

Johnny Karam
Managing Director & VP International Emerging Region