Why reliable data is essential for trustworthy AI
In little over two years, generative AI has changed the shape of the technology industry. Now is the time for proper due diligence
After decades where artificial intelligence (AI) was largely confined to research projects, niche applications or even science fiction, it’s now a mainstream business tool.
Driven by applications such as Google’s Bard (now Gemini), Mistral and, especially, ChatGPT, generative AI (GenAI) is already making an impact in the workplace.
Industry analyst Gartner, for example, predicts 95% of workers will routinely use GenAI to complete their day-to-day tasks by 2026.
At the same time, more organisations are using GenAI to power “chatbots” and other services that let the public interact with technology in a more natural way. Large language models (LLMs) allow computers to communicate with users in something resembling human speech, and the models themselves can trawl the vast resources of the internet to find answers to even the most obscure questions. And that’s where the problems can lie.
Unsurprisingly, AI, with its risks and benefits, was a key focus of Gartner’s Data and Analytics Summit and the 2024 Tech.EU summit, both held in London.
GenAI tools stand accused of creating biased results, or even results that are entirely untrue. These hallucinations have forced businesses to compensate customers and have caused reputational damage.
“Governance is even more critical when delivering AI-infused data products,” Gartner’s Alys Woodward told the firm’s Data and Analytics Summit. “With AI, unintended consequences can emerge rapidly. We’ve already seen some examples of successful implementations of GenAI. These organisations deploy the technology with appropriate guardrails and targeted use cases, but we never know when our AI-infused data products will lead us into trouble.”
Firms are already being held liable by regulators and courts for decisions made using AI. The European Union’s (EU’s) AI Act, which starts to come into force from June, will create new obligations as well as impose new penalties. Fines for the most serious breaches of the law will be as high as 7% of global turnover, more than for breaches of the GDPR.
But if the AI Act is a wake-up call for organisations to be more careful and transparent about their use of AI, it will also prompt them to look more closely at how AI models form the conclusions they do.
This, in turn, relies on the quality of data, both for training models and during the inference – or operational – phase of AI. The current large language models rely primarily on public data, gathered from the internet. And, although there are moves afoot to allow firms to use their own data for training as well as inference, the actual algorithms used by the AI models themselves remain opaque.
This “black box” approach by AI suppliers has led to concerns about bias and potential discrimination, both when dealing with customers and in areas such as recruitment. Organisations will also have concerns about whether their proprietary data is being used to train models – the main AI suppliers say they no longer do this – about privacy around the use of sensitive information, and about whether data, including prompts, could leak out of AI tools.
“When organisations start deploying AI capabilities, the questions of trust, risk and compliance become very important,” said Nader Henein, a vice-president analyst at Gartner specialising in privacy.
However, he added that organisations are increasingly exposed to risks via AI tools they bring in from outside.
These include specific AI tools, such as Gemini or ChatGPT, but also AI functionality built into other applications, from desktop tools and browsers to enterprise packages. “Almost everyone out there is using one or more SaaS [software-as-a-service] tool, and many [now] have AI-enabled capabilities within them,” he said. “The AI Act is pointing to that and saying you need to understand, quantify and own that risk.”
Quality of data
The challenge is to identify where and how AI is being used in the enterprise, and to assess the quality of its data – especially the data used to train the models. As Gartner’s Henein suggests, AI suffers from the same data problems as any analytics system: garbage in equals garbage out.
But with AI, we are even more likely to take its outputs at face value, he said. “Humans favour suggestions from automated decision-making systems, often ignoring their own better judgement,” said Henein. “But this new generation of hallucinations, with answers that are very detailed, with references, and are extremely eloquent, pushes that automation bias to new heights.”
Much also depends on the type of decision AI is supporting, with some tools posing a greater risk to the enterprise than others.
“This is one of the hardest things,” said Tharishni Arumugam, global privacy technology and operations director at AON. “A lot of times you have people thinking, ‘I want to know about any little use of AI’. Actually, do you really need to know about a little translation service that your third party is using? Probably not, but you want to know when a third party is using your health information to provide predictive analysis to your employees. And so there is a big misunderstanding right now about what we need to know from a vendor perspective.”
This, she said, links directly into data governance, and organisations with mature data governance policies are less likely to fall foul of AI’s pitfalls.
This covers basic data quality, but also, as Gartner puts it, whether the data are both accurate and diverse enough to produce reliable results that are free from bias and hallucinations. This is sometimes termed “AI-ready data”, and Gartner warns that few organisations can really say they have that type of data – yet.
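In practice, assessing whether data is AI-ready starts with routine checks on the training set itself. The sketch below, written in Python with the pandas library, is purely illustrative – the sample columns and thresholds are hypothetical rather than any Gartner standard – but it shows the kind of completeness, duplication and balance checks a data team might automate:

```python
import pandas as pd

def ai_readiness_report(df: pd.DataFrame, label_col: str) -> dict:
    """Run a few basic 'AI-ready data' checks: completeness,
    duplication and class balance. Thresholds are illustrative."""
    report = {}

    # Completeness: share of missing values in each column
    report["missing_share"] = df.isna().mean().to_dict()

    # Duplication: exact duplicate rows inflate apparent data volume
    report["duplicate_rows"] = int(df.duplicated().sum())

    # Balance: a heavily skewed label is a common source of bias
    counts = df[label_col].value_counts(normalize=True)
    report["label_distribution"] = counts.to_dict()
    report["imbalanced"] = bool(counts.max() > 0.8)  # hypothetical threshold

    return report

if __name__ == "__main__":
    sample = pd.DataFrame({
        "age": [34, 51, None, 29],
        "region": ["UK", "UK", "FR", "UK"],
        "outcome": ["approved", "approved", "approved", "rejected"],
    })
    print(ai_readiness_report(sample, label_col="outcome"))
```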
Loss of trust
The problem is made worse when organisations link AI models together across a decision-making process. As each model feeds into the next, confidence levels in the final conclusions will drop. But this might not be obvious to the user, or consumer.
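The effect compounds quickly. If each of three chained models is right 90% of the time and their errors are independent – a simplifying assumption, not a claim about any particular product – the pipeline as a whole is only right about 73% of the time (0.9 × 0.9 × 0.9 ≈ 0.73). A toy Python sketch makes the point:

```python
from math import prod

def chained_confidence(stage_confidences: list[float]) -> float:
    """Naive estimate of end-to-end confidence when each model's
    output feeds the next, assuming independent error rates."""
    return prod(stage_confidences)

# Three stages at 90% each: the chain is only ~73% reliable overall,
# even though every individual step looks strong.
print(chained_confidence([0.9, 0.9, 0.9]))  # 0.729...
```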
“Very big models have access to insane amounts of data,” said Henein. “A lot of that data is from the internet, and we all know that the internet is not quite as curated from a quality of content perspective as you would like it to be.
“And that’s a fundamental problem,” he said. “It is at the heart of these hallucinations.”
According to Henein, models do not currently give any guidance on their accuracy, either in percentage terms or on a simple scale such as red, amber and green. “If we had that indication of the accuracy of the response, maybe it would assuage some of the concerns around hallucinations,” he said.
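Such an indicator could sit in the application layer rather than in the model itself. The Python sketch below assumes the model, or a separate evaluator, returns a numeric confidence score – which, as Henein notes, today’s models generally do not – and maps it onto the kind of red, amber and green bands he describes; the thresholds are illustrative only:

```python
def confidence_band(score: float) -> str:
    """Map a numeric confidence score (0.0 to 1.0) onto a simple
    red/amber/green indicator. Band thresholds are illustrative."""
    if score >= 0.8:
        return "green"   # high confidence: safe to surface as-is
    if score >= 0.5:
        return "amber"   # medium confidence: flag for human review
    return "red"         # low confidence: withhold or escalate

# Hypothetical model output carrying a confidence score alongside the text
answer = {"text": "The load limit is 40 tonnes.", "confidence": 0.55}
print(answer["text"], "->", confidence_band(answer["confidence"]))
```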
Lineage of data
Confidence also means understanding the lineage of data as it moves between systems.
This includes data that moves from enterprise systems or data warehouses and lakes into AI, as well as – potentially – AI results that are used as inputs into other models, or even to train AI. Gartner predicts that, within two years, three-quarters of companies will use GenAI to create synthetic data that, in turn, could be used to train machine learning models.
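One lightweight way to keep track of that journey is to attach lineage metadata to each dataset as it moves. The Python sketch below is a simplified, hypothetical example – real deployments would more likely lean on a data catalogue or dedicated lineage tooling – but it shows how a training pipeline could at least record where data came from and flag whether any upstream step produced synthetic records:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """One hop in a dataset's journey: where it came from,
    what was done to it, and whether the output is synthetic."""
    source: str
    transformation: str
    synthetic: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# A dataset's lineage as it moves from warehouse to training set
lineage = [
    LineageRecord("sales_warehouse", "deduplicated and anonymised"),
    LineageRecord("genai_synthesiser", "synthetic records generated",
                  synthetic=True),
]

# Before training, check whether any upstream hop introduced synthetic data
uses_synthetic = any(hop.synthetic for hop in lineage)
print(f"Training data includes synthetic records: {uses_synthetic}")
```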
Data scientists also need to build guardrails into AI systems to reduce risk and prevent abuse of the tools.
This could include limiting or restricting the use of personally identifiable data, health information, intellectual property, or even unchecked and unqualified data sources.
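A simple form of such a guardrail screens prompts before they ever reach the model. The Python sketch below is illustrative only: the regular expressions cover a handful of hypothetical patterns, and a production system would rely on a dedicated PII-detection service rather than hand-rolled rules:

```python
import re

# Illustrative patterns only: real PII detection needs far broader coverage
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"\b0\d{9,10}\b"),
    "nhs_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace anything matching a known PII pattern before the
    prompt is sent to an external model."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

raw = "Summarise the case notes for jane.doe@example.com, NHS number 943 476 5919."
print(redact_pii(raw))
```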
“Ultimately, the data that you’re feeding into the model, the data that you’re using to train your models, is extremely important,” said Junaid Saiyed, chief technology officer at data governance and data intelligence firm Alation.
“If you don’t feed in accurate, trusted data, you’re going to get not-so-good recommendations and predictions. Whatever you’re looking to get out of your AI, whatever you’re looking to get out of your models, trusted data leads to trusted AI.
“People are looking for that confidence code,” he added. “It’s not just the final answer. They want to know the confidence along the way. What is your confidence in the data that was fed into the model, and your confidence in the model itself? You might even be okay with a less sophisticated model, if the answer is explainable.”
Building confidence
Unless chief information security officers and chief data officers can build that confidence, users will be reluctant to use AI tools, and customers will be unlikely to trust their advice or recommendations.
“In business to business, you need to provide this level of trust,” said Daniel Gallego Vico, an AI and machine learning researcher, co-founder of PrivateGPT and business AI service Zylon, and a speaker at the Tech.EU summit.
An engineer, for example, will not use an LLM’s recommendation for a design if he or she does not trust the data. “If I’m building a bridge and the bridge collapses, the lawyers will come after me, not the LLM,” he said. “I need to be sure what the LLM is producing is right.”
For Vico, however powerful the AI tool, humans have to remain part of the workflow. “You have to understand what data sources the LLM has used to generate the answer,” he said. “That way, you can double-check.”