“People may learn from and replicate the skewed perspective of an artificial intelligence algorithm, and they carry this bias beyond their interactions with the AI,” wrote Lauren Leffer, former tech reporting fellow at Scientific American.
Given the scale and pace at which artificial intelligence (AI), and more specifically generative AI (GenAI), is being adopted, perhaps now is the time to call for a pause to reflect on where we are with the technology, what we want to achieve with it, and how we are going about the process.
Consider the case of a Belgian man reported to have taken his own life on the advice of an AI chatbot in an app called Chai.
Today, the continued lack of understanding of the limitations of large language models (LLMs) used by publicly available GenAI platforms should raise concerns about the quality of their output. The term "AI hallucination" has quickly entered the vocabulary, at least in the business community.
Other terms you will hear from practitioners are AI bias, machine learning bias, or algorithmic bias. Bias is embedded in the very data that LLMs are trained on. To borrow an example from outside our topic: a vendor whitepaper will naturally tend to favour the vendor's own offerings.
What makes AI bias more dangerous is the scale at which LLMs, with their potentially biased data pools, are made available. This can lead to misinformed decisions on a scale previously not thought possible. We are nearing the point where we are unable to distinguish between real and fake content on the internet.
In January 2024, the Singapore government outlined what it called a framework aimed at fostering trust around the use and development of GenAI.
FutureCIO contacted David Fairman, chief information officer and chief security officer at Netskope, on the risks associated with the unqualified use of AI.
In which areas of the enterprise does GenAI present the most obvious risks?
David Fairman: The main risk for organisations using GenAI is data protection: employees leaking confidential information through the channel.
"Employees rarely check the terms and conditions, and we are finding that they are inputting significant quantities of sensitive information to GenAI tools already (for a large organisation, our data shows that a piece of sensitive information is leaked through GenAI tools every hour of the working day)."
David Fairman
More complex issues can arise when organisations start embedding GenAI capabilities in their solutions. Many are prioritising speed over security, but there can be serious security implications, including the security of the embedded solution, its vendor and associated supply chain.
Ensuring that those tools behave ethically and are trained on data that does not include any personally identifiable information (PII) is another key issue that should be checked.
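As a concrete illustration of what such a check might look like, below is a minimal sketch in Python that screens a training corpus for common PII patterns (email addresses and phone numbers) before the data is used. The dataset and patterns here are illustrative assumptions, not a reference to any specific vendor tooling; production-grade checks would cover far more PII categories.

```python
import re

# Illustrative PII patterns; a real screening pipeline would use a far
# broader set (names, addresses, national ID formats, etc.).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def find_pii(record: str) -> list[str]:
    """Return the PII categories detected in a single training record."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(record)]

def screen_corpus(records: list[str]) -> list[str]:
    """Drop records containing PII and report what was removed."""
    clean = []
    for record in records:
        hits = find_pii(record)
        if hits:
            print(f"Dropping record containing PII: {hits}")
        else:
            clean.append(record)
    return clean

# Hypothetical usage with toy data
corpus = [
    "Customer asked about upgrading their plan.",
    "Contact jane.doe@example.com or +65 9123 4567 for details.",
]
print(screen_corpus(corpus))
```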
What are the hidden risks that most employees do not think about when using GenAI tools and platforms?
David Fairman: Beyond the risk of leaking sensitive data, they may fail to see ethical and accuracy issues. GenAI tools can deliver false or biased information, yet some employees have implicit trust in GenAI tools, and fail to question outputs that do not seem right, or to wonder how the system reached the answer it gave.
If threat actors manage to contaminate a GenAI’s supply chain or dev environment, they could also influence its output, directing users to inaccurate sources and delivering malware. These are potential risks users are generally not wary of, but they can be mitigated by developing reflexes among users, reminding them to question the reliability and integrity of answers provided by AI.
How should the security team manage GenAI security risks in the enterprise?
David Fairman: Preventing the leak or loss of data and confidential information should be prioritised. At the very least, companies should have guidelines for the safe use of GenAI tools, but guidelines can only go so far in preventing incidents, and security teams could consider Data Loss Prevention (DLP) capabilities.
The best DLP tools allow organisations to inspect interactions between users and popular GenAI tools and enforce policies to automatically warn or block users if they are trying to upload sensitive information.
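To make the warn-or-block behaviour concrete, here is a minimal sketch of how an inline check on an outbound GenAI prompt might work. This is a simplified assumption of the mechanism, not how any particular DLP product (Netskope's included) is implemented; the patterns, policy actions, and the `send_to_llm` call are all hypothetical.

```python
import re

# Hypothetical policy actions; real DLP platforms offer richer options.
ALLOW, WARN, BLOCK = "allow", "warn", "block"

# Illustrative detectors for sensitive content in a prompt.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def evaluate_prompt(prompt: str) -> str:
    """Decide whether a prompt may be sent to an external GenAI tool."""
    hits = [name for name, p in SENSITIVE_PATTERNS.items() if p.search(prompt)]
    if not hits:
        return ALLOW
    # Block hard-coded secrets outright; warn on everything else.
    return BLOCK if "api_key" in hits else WARN

def submit_to_genai(prompt: str) -> None:
    """Gate the outbound request; send_to_llm() is a hypothetical call."""
    action = evaluate_prompt(prompt)
    if action == BLOCK:
        print("Blocked: prompt contains a secret. Not sent.")
    elif action == WARN:
        print("Warning: prompt may contain sensitive data. Confirm before sending.")
    else:
        print("Prompt sent.")  # send_to_llm(prompt) would go here

submit_to_genai("Summarise this report for me.")
submit_to_genai("My card number is 4111 1111 1111 1111.")
```

The point of the sketch is the policy split: a hard block for unambiguous secrets, a warning that keeps the user in the loop for ambiguous matches, which mirrors the warn-or-block enforcement described above.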
What are the key considerations when developing a GenAI usage framework?
David Fairman: A framework for GenAI usage should be devised considering the following aspects:
- Security: keeping people and data safe
- Regulations: complying with the laws and regulations governing GenAI use in the different regions where the business operates, which will affect workers using GenAI-enabled technology (take note of the EU's new AI Act, which will require compliance from organisations operating in the region)
- Accountability and governance: defining ownership and accountability for GenAI
- Ethics: ensuring the ethical and responsible use of GenAI
In developing the framework, the right balance needs to be struck between enabling innovation and respecting ethical principles.
Who needs to be involved in developing this framework?
David Fairman: Inclusiveness is one of the key principles in successfully developing and governing AI. Designing a GenAI usage framework should take into account the views of the various stakeholders and teams within the business; including them in the oversight and design process leads to more holistic approaches and removes blind spots.
This enables the organisation to enjoy the benefits of GenAI to the fullest while keeping everyone safe and informed. This process can also help spread awareness of responsible GenAI usage within the organisation.