Balancing Predictive Text Technology with Security and Privacy: A CISO’s Essential Guide (Part 1)


As the Chief Information Security Officer at Veritas Technologies, I have observed the remarkable evolution of Artificial Intelligence (AI) with keen interest. Predictive text technology, in particular, has garnered attention due to its widespread application, from emails to code completion tools. However, a responsible CISO needs to ensure that the agility and efficiency it brings don't overshadow security and privacy concerns. In this blog, I'll provide insights on striking that elusive balance between technological innovation and security.

Education and Awareness: The Foundation

Before diving into technology, it's imperative to educate our team and stakeholders on predictive text and its risks and rewards. As Information Security leaders, we need to ensure that everyone understands where perception diverges from reality on AI's security and privacy concerns. Some common examples:



Perception: GPT may compromise data privacy due to its training on sensitive information.

Reality: GPT models are trained on large datasets, including publicly available text from the internet. However, the models themselves do not retain specific details of the training data. The responsibility lies with organizations and researchers to ensure appropriate data anonymization and privacy protection measures are in place during the training and deployment of GPT models.

Perception: GPT poses significant security risks and can be easily exploited by attackers.

Reality: While GPT-based models can be misused for malicious purposes, such as generating convincing phishing emails or automating cyberattacks, the risks can be mitigated with proper security measures and controls. CISOs can implement strategies like data sanitization, access controls, and continuous monitoring to minimize potential security risks.

Perception: GPT models lack transparency, making it difficult to understand their decision-making process.

Reality: GPT models are complex deep learning architectures, and it is challenging to fully comprehend their decision-making processes. While the inner workings of GPT models may not be transparent, efforts are being made to develop explainability techniques that shed light on model outputs. Additionally, CISOs can focus on the inputs and outputs of the model and implement safeguards to ensure responsible and accountable use.

Perception: Predictive text models store and retain user data indefinitely.

Reality: Predictive text models typically do not retain specific user data beyond the immediate context of generating responses. The focus is on the model's architecture and parameters rather than preserving individual user information. However, it is crucial for CISOs to assess and validate the data retention and deletion policies of the specific models and platforms being utilized to ensure compliance with privacy regulations and best practices.

Perception: Predictive text models can compromise sensitive or confidential information.

Reality: Predictive text models generate text based on patterns and examples in the training data. If the training data contains sensitive or confidential information, there is a risk that the model could generate outputs that inadvertently disclose or hint at such information. CISOs must carefully consider the nature of the training data and implement appropriate data anonymization techniques to minimize the exposure of sensitive information.

Perception: Predictive text models are a potential target for data exfiltration.

Reality: The models themselves typically do not store or retain sensitive data. However, CISOs should still be mindful of potential vulnerabilities in the infrastructure supporting the models, such as the storage systems or APIs used for inference. Adequate security controls, such as encryption, network segregation, and intrusion detection, should be in place to protect against data exfiltration attempts targeting the underlying infrastructure.
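To make the data sanitization point above concrete, here is a minimal sketch of redacting sensitive values from text before it ever reaches a predictive text service. The patterns and placeholder labels are illustrative assumptions, not a production-grade PII filter; real deployments would use a dedicated detection service and a much broader pattern set.

```python
import re

# Illustrative patterns only -- a real filter would cover far more PII types.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def sanitize(text: str) -> str:
    """Replace each match of a sensitive pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact jane.doe@example.com, SSN 123-45-6789, about the renewal."
print(sanitize(prompt))
```

Running this kind of check at the boundary, before prompts leave your environment, is what turns "data sanitization" from a policy statement into an enforceable control.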

Predictive text technology offers convenience and efficiency in a wide range of applications, but a CISO must balance that value against robust security and privacy measures. Here are a few suggestions to help get the balance right:

  • Understand the Technology: CISOs must have a thorough understanding of predictive text technology and its underlying mechanisms. This includes paying close attention to the training data, algorithms, and potential biases associated with the model. Awareness of the limitations and risks will help make informed decisions regarding its implementation and proper use.
  • Evaluate Data Privacy and Protection: Data privacy is paramount when utilizing predictive text technology. CISOs should carefully assess the data used to train the model and ensure compliance with relevant industry regulations and pertinent company policies. It is crucial to anonymize or pseudonymize sensitive information and apply stringent access controls to protect user data from unauthorized access or misuse.
  • Secure Model Training and Deployment: Securing the infrastructure used for training and deploying predictive text models is critical. Implement robust security controls, including encryption, secure protocols, and access management, to safeguard the underlying systems and prevent unauthorized modifications or tampering of the models.
  • Implement Ethical Guidelines: Work with your Legal and HR counterparts to establish compliance guidelines for the use of predictive text technology. Proactively address potential issues related to biased or harmful outputs generated by the model. Continuously monitor and evaluate the system's performance and take appropriate actions to rectify biases or mitigate risks to ensure fairness and inclusivity.
  • Strengthen User Awareness and Consent: Educate users about the implications and capabilities of predictive text technology. Obtain informed consent for data collection and usage, clearly communicating the purposes and potential risks involved. Empower users to make informed decisions about opting in or out of predictive text features.
  • Conduct Regular Model Audits and Risk Assessments: Periodically conduct audits and risk assessments of the predictive text models and associated systems. This helps identify vulnerabilities, privacy concerns, or potential risks arising from model updates or changes. Implement necessary safeguards and corrective measures to mitigate emerging risks effectively.
  • Collaborate with Vendors and Researchers: Engage in partnerships with technology vendors and researchers to stay updated on the latest advancements, security patches, and best practices related to predictive text technology. Active collaboration helps address emerging security and privacy challenges more effectively.
  • Develop Incident Response and Continuous Monitoring Muscle: Build an incident response plan specific to predictive text technology. Monitor system logs, user feedback, and anomaly detection mechanisms to identify potential security incidents or privacy breaches promptly. Establish processes to mitigate and respond to such incidents, ensuring minimal impact on users and data.
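As a sketch of the continuous monitoring bullet above, the snippet below reviews logged usage events and flags users whose activity warrants incident-response follow-up. The thresholds, the event shape, and the `review_usage` helper are all hypothetical; real baselines would come from your own telemetry.

```python
import logging
from collections import Counter

logging.basicConfig(level=logging.WARNING, format="%(levelname)s %(message)s")
log = logging.getLogger("predictive-text-monitor")

# Illustrative thresholds -- tune these against your own usage baseline.
MAX_PROMPT_CHARS = 2000   # unusually large prompts may signal bulk data entry
ESCALATION_HITS = 3       # repeated policy hits per user trigger escalation

def review_usage(events):
    """events: iterable of (user, prompt, policy_flagged) tuples from logs.
    Returns the set of users whose activity warrants follow-up."""
    hits = Counter()
    for user, prompt, flagged in events:
        if len(prompt) > MAX_PROMPT_CHARS:
            log.warning("oversized prompt from %s (%d chars)", user, len(prompt))
            hits[user] += 1
        if flagged:
            hits[user] += 1
    return {user for user, count in hits.items() if count >= ESCALATION_HITS}
```

Even a simple pass like this, run on a schedule over inference logs, gives the incident response plan something concrete to trigger on rather than waiting for a breach report.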

Balancing the use of generative AI with security and privacy requires a proactive approach from CISOs. By understanding the technology, protecting user privacy, implementing ethical guidelines, and fostering user awareness, organizations can harness the benefits of predictive text technology while mitigating risks. Regular audits, collaboration with experts, and robust incident response capabilities will ensure ongoing security and privacy in an evolving landscape.

Are you ready to embrace AI while ensuring the security and privacy of your data? Veritas is here to support you on this journey.

Contact us today to learn how our security solutions can empower your organization to innovate securely and confidently.


Note: This is the first part of a two-part series titled "Balancing Predictive Text Technology with Security and Privacy: A CISO’s Essential Guide." Stay tuned for part two where we delve deeper into practical strategies for harmonizing AI innovation with security and privacy.

Christos Tulumba
Chief Information Security Officer