15 AI risks businesses must confront and how to address them
Organizations that want to use AI ethically and with as little liability as possible must acknowledge the risks that come with implementing the technology.
Companies have always had to manage risks associated with the technologies they adopt to build their businesses. They must do the same when it comes to implementing artificial intelligence.
Some of the risks with AI are the same as those when deploying any new technology: poor strategic alignment to business goals, a lack of skills to support initiatives and a failure to get buy-in throughout the ranks of the organization.
For such challenges, executives should lean on the best practices that have guided the effective adoption of other technologies, advised management consultants and AI experts. In the case of AI, this includes the following:
- Identifying areas where AI can help their companies meet organizational objectives.
- Developing strategies to ensure they have the expertise to support AI programs.
- Creating strong change management policies to smooth and speed enterprise adoption.
However, executives are finding that AI in the enterprise also comes with unique risks that need to be acknowledged and addressed head-on.
Here are 15 areas of risk that can arise as organizations implement and use AI technologies in the enterprise.
1. A lack of employee trust can shut down AI adoption
Not all workers are ready to embrace AI.
Professional services firm KPMG, in a partnership with the University of Queensland in Australia, found that 61% of respondents to its "Trust in Artificial Intelligence: Global Insights 2023" report are either ambivalent about or unwilling to trust AI. A 2024 Salesforce survey of 6,000 knowledge workers worldwide found that 56% of AI users said it was difficult to get what they wanted from AI, and 54% said they don't trust the data used to train AI systems.
Without that trust, an AI implementation will be unproductive, according to experts.
Consider, for example, an AI system on a factory floor that determines a machine must be shut down for maintenance. Even if the system is nearly always accurate, if workers don't trust the results it produces, that AI system is a failure.
2. AI can have unintentional biases
At its most basic level, AI takes large volumes of data and then, using algorithms, identifies patterns in that data and learns to perform from them.
But when the data is biased or problematic, AI produces faulty results.
Similarly, problematic algorithms -- such as those that reflect the biases of the programmers -- can lead AI systems to produce biased results.
"This is not a hypothetical issue," according to "The Civil Rights Implications of Algorithms," a March 2023 report from the Connecticut Advisory Committee to the U.S. Commission on Civil Rights.
The report explained how certain training data could lead to biased results, noting as an example that "in New York City, police officers stopped and frisked over five million people over the past decade. During that time, Black and Latino people were nine times more likely to be stopped than their White counterparts. As a result, predictive policing algorithms trained on data from that jurisdiction will over predict criminality in neighborhoods with predominantly Black and Latino residents."
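The mechanics behind this kind of skew are easy to illustrate. The following is a minimal sketch with made-up approval data (the groups, rates and "model" are illustrative assumptions, not taken from the report): a model that learns from historically biased outcomes simply reproduces them.

```python
# Minimal sketch with hypothetical data: a model that learns historical
# approval rates per group will reproduce whatever bias the history contains.
import random

random.seed(0)

# Biased historical data: group "A" was approved roughly 80% of the time,
# group "B" only about 40% of the time, for otherwise similar applicants.
history = [("A", random.random() < 0.8) for _ in range(1000)] + \
          [("B", random.random() < 0.4) for _ in range(1000)]

# "Training" here is just estimating the approval rate per group -- the same
# thing a more complex model does implicitly when group membership
# correlates with the label.
rates = {}
for group in ("A", "B"):
    outcomes = [approved for g, approved in history if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

print(rates)  # roughly {'A': 0.8, 'B': 0.4} -- the model inherits the bias
```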
3. Biases, errors greatly magnified by volume of AI transactions
Human workers, of course, have biases and make mistakes, but the consequences of their errors are limited to the volume of work they do before the errors are caught -- which is often not very much. However, the consequences of AI biases or hidden errors can be exponentially larger.
As experts explained, humans might make dozens of mistakes in a day, but a bot handling millions of transactions a day magnifies any single error by millions.
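A quick back-of-the-envelope sketch makes the scale difference concrete. The error rate and volumes below are illustrative assumptions, not figures from the experts cited:

```python
# Illustrative numbers only: the same error rate produces wildly different
# absolute error counts depending on transaction volume.
error_rate = 0.001            # assume 0.1% of decisions are wrong

human_decisions_per_day = 200
bot_decisions_per_day = 5_000_000

print(human_decisions_per_day * error_rate)  # 0.2 -> roughly one mistake every few days
print(bot_decisions_per_day * error_rate)    # 5000.0 -> thousands of bad decisions daily
```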
4. AI might be delusional
Most AI systems are stochastic, or probabilistic: machine learning algorithms, deep learning, predictive analytics and other technologies work together to analyze data and produce the most probable response in each scenario. That's in contrast to deterministic AI environments, in which an algorithm's behavior can be predicted from the input.
Because real-world AI environments are overwhelmingly probabilistic, their outputs are not 100% accurate.
"They return their best guess to what you're prompting," explained Bill Wong, principal research director at Info-Tech Research Group.
In fact, inaccurate results are common enough -- particularly with more and more people using ChatGPT -- that there's a term for the problem: AI hallucinations.
"So, just like you can't believe everything on the internet, you can't believe everything you hear from a chatbot; you have to vet it," Wong advised.
5. AI can create unexplainable results, thereby damaging trust
Explainability, or the ability to determine and articulate how and why an AI system reached its decisions or predictions, is another term frequently used when talking about AI.
Although explainability is critical to validate results and build trust in AI overall, it's not always possible -- particularly when dealing with sophisticated AI systems that are continuously learning as they operate.
For example, Wong said, AI experts often don't know how AI systems reached those faulty conclusions labeled as hallucinations.
Such situations can stymie the adoption of AI, despite the benefits it can bring to many organizations.
But achieving explainable AI is not easy and in itself carries risks, including reduced accuracy and the exposure of proprietary algorithms to bad actors, as noted in this discussion of why businesses need to work at AI transparency.
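One common post hoc technique for producing an explanation is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below uses a synthetic dataset and placeholder feature names as assumptions; it shows the flavor of the output, not a production workflow:

```python
# Hedged sketch of permutation importance on a toy model (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three made-up features
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # label depends mostly on feature 0

model = LogisticRegression().fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["feature_0", "feature_1", "feature_2"],
                            result.importances_mean):
    print(f"{name}: {importance:.3f}")  # feature_0 should dominate the explanation
```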
6. AI can have unintended consequences
Similarly, the use of AI can have consequences that enterprise leaders either fail to consider or were unable to contemplate, Wong said.
The risk of unintended consequences is so significant that U.N. Secretary-General António Guterres called it out in his remarks to the World Economic Forum in Davos, Switzerland, in January 2024, saying "every new iteration of generative AI increases the risk of serious unintended consequences. This technology has enormous potential for sustainable development -- but as the International Monetary Fund has just warned us, it is very likely to worsen inequality in the world. And some powerful tech companies are already pursuing profits with a clear disregard for human rights, personal privacy and social impact."
An April 2024 report to the president from the President's Council of Advisors on Science and Technology also called out the dangers of unintended consequences. The report, "Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges," noted that "Like most technologies, AI is dual use: AI technology can facilitate both beneficial and harmful applications and can cause unintended negative consequences if deployed irresponsibly or without expert and ethical human supervision."
Such concerns are not new. A 2022 report posted by the White House, "The Impact of Artificial Intelligence on the Future of Workforces in the European Union and the United States of America," spoke to this point and cited the findings of Google researchers who studied "how natural-language models interpret discussions of disabilities and mental illness and found that various sentiment models penalized such discussions, creating bias against even positive phrases such as 'I will fight for people with mental illness.'"
7. AI can behave unethically, illegally
Some uses of AI might result in ethical dilemmas for their users, said Jordan Rae Kelly, senior managing director and head of cybersecurity for the Americas at FTI Consulting.
"There is a potential ethical impact to how you use AI that your internal or external stakeholders might have a problem with," she said. Workers, for instance, might find the use of an AI-based monitoring system both an invasion of privacy and corporate overreach, Kelly added.
As generative AI has been embraced by consumers and businesses, concerns about the ethical and legal use of copyrighted material to train large language models have come to the fore. In December 2023, The New York Times sued OpenAI and Microsoft, alleging the tech companies used its copyrighted content without authorization to train AI models. The suit, still in early stages, is considered a test case on copyright protection in the age of AI.
8. Employee use of AI can evade or escape enterprise control
Generative AI is also fueling growth in shadow IT -- i.e., software used by employees to do their jobs without permission or oversight from the IT department. A 2024 research report from Productiv stated that, while the use of unauthorized software has dropped from 2021 to 2023, "ChatGPT has jumped to the top of the shadow IT chart" as employees embraced generative AI apps.
Other studies documented the adoption of shadow AI. The "2024 Work Trend Index Annual Report" from Microsoft and LinkedIn, released in May 2024, found that 78% of AI users are bringing their own AI tools to work -- a trend that's even more common at small and midsize companies, where the figure jumps to 80%.
Info-Tech Research Group's Wong said enterprise leaders are developing a range of policies to govern enterprise use of AI tools, including ChatGPT. However, he said companies that prohibited its use are finding that such restrictions aren't popular or even feasible to enforce. As a result, some are reworking their policies to allow use of such tools in certain cases and with nonproprietary and nonrestricted data.
9. Liability issues are unsettled and undetermined
Legal questions around accountability have emerged as organizations use AI systems to make decisions and embed AI into the products and services they sell. Who is liable when those systems produce bad results remains undetermined.
For example, FTI Consulting's Kelly said it's unclear who -- or what -- would or should be faulted if AI writes a bad piece of computer code that causes problems. That issue leaves executives, along with lawyers, courts and lawmakers, to move forward with AI use cases with a high degree of uncertainty.
Such uncertainty played out in the Canadian judicial system in early 2024 in Moffatt v. Air Canada. In that case, the British Columbia Civil Resolution Tribunal found that Air Canada was liable for the misinformation given to a consumer by an AI chatbot on its website. Air Canada had argued against being held responsible for the chatbot's actions.
10. Enterprise use could run afoul of proposed laws and expected regulations
Governments around the world are looking at whether they should put laws in place to regulate the use of AI and what those laws should be. Legal and AI experts said they expect governments to start passing new rules in the coming years.
Organizations might then need to adjust their AI roadmaps, curtail their planned implementations or even eliminate some of their AI uses if they run afoul of any forthcoming legislation, Kelly said.
Executives could find that challenging, she added, as AI is often embedded in the technologies and services they purchase from vendors. This means enterprise leaders will have to review their internally developed AI initiatives and the AI in the products and services bought from others to ensure they're not breaking any laws.
11. Key skills might be at risk of being eroded by AI
After two plane crashes involving Boeing 737 Max jets, one in late 2018 and one in early 2019, some experts expressed concern that pilots were losing basic flying skills as they relied more and more on automation in the cockpit.
Although those incidents are extreme cases, experts said AI will erode other key skills that enterprises might want to preserve in their human workforce.
"We're going to let go of people who know how to do things without technology," said MIT professor Yossi Sheffi, a global supply chain expert, director of the MIT Center for Transportation & Logistics and author of The Magic Conveyer Belt: Supply Chains, A.I. and the Future of Work.
12. AI could lead to societal unrest
Worker anxiety over being replaced by AI systems or trepidation about how their jobs will be changed by AI automation is not a new phenomenon, but the increasing integration of AI into business processes has made those fears more palpable.
The 2024 Microsoft and LinkedIn report found that 45% of professionals worry that AI will replace their jobs. A May 2023 survey titled "AI, Automation and the Future of Workplaces," from workplace software maker Robin, found that 61% of respondents believe AI-driven tools will make some jobs obsolete.
Sheffi said such technology-driven changes in the labor market in the past have led to labor unrest and could possibly do so again.
Even if such a scenario doesn't happen with AI, Sheffi and others said organizations will need to adjust job responsibilities, as well as help employees learn to use AI tools and accept new ways of working.
13. Poor training data, lack of monitoring can sabotage AI systems
The poster bot for this type of risk is the infamous Tay, released by Microsoft on Twitter back in 2016. Engineers had designed the bot to engage in online interactions and then learn patterns of language so that she -- yes, Tay was designed to mimic the speech of a female teenager -- would sound natural on the internet.
Instead, trolls taught Tay racist, misogynistic and antisemitic language, with her language becoming so hostile and offensive within hours that Microsoft suspended the account. Microsoft's experience highlighted another big risk with building and using AI: It must be taught well to work right.
Google AI Overviews, a new search feature that uses generative AI to deliver short synopses of topics, shows the continued challenges related to creating reliable and safe AI systems. Rolled out to U.S. users in May 2024, the feature has had its share of glitches, including an AI Overview recommendation to use nontoxic glue as pizza sauce to make the cheese adhere better.
14. Hackers can use AI to create more sophisticated attacks
Bad actors are using AI to make their attacks more sophisticated, more effective and more likely to penetrate their victims' defenses.
"AI can speed up the effectiveness of the bad guys," Kelly said.
Experienced hackers aren't the only ones leveraging AI. Wong said AI -- and generative AI in particular -- lets inexperienced would-be hackers develop malicious code with relative ease and speed.
"You can have a dialogue with ChatGPT to find out how to be a hacker," Wong said. "You can just ask ChatGPT to write the code for you. You just have to know how to ask the right questions."
15. Poor decisions around AI use could damage reputations
After a February 2023 shooting at a private Nashville school, Vanderbilt University's Peabody Office of Equity, Diversity and Inclusion responded to the tragic event with an email that included, at its end, a note saying the message had been written using ChatGPT. Students and others quickly criticized the technology's use in such circumstances, leading the university to apologize for "poor judgement."
The incident highlights the risk that organizations face when using AI: How they opt to use the technology could affect how their employees, customers, partners and the public view them.
Organizations that use AI in ways that some believe are biased, invasive, manipulative or unethical might face backlash and reputational harm. "It could change the perception of their brand in a way they don't want it to," Kelly added.
How to manage risks
The risks stemming from or associated with the use of AI can't be eliminated, but they can be managed.
Organizations must first recognize and understand these risks, according to multiple experts in AI and executive leadership. From there, they need to implement policies to help minimize the likelihood of such risks negatively affecting their organizations. Those policies should ensure the use of high-quality data for training and require testing and validation to root out unintended biases.
Policies should also mandate ongoing monitoring to keep biases from creeping into systems, which learn as they work, and to identify any unexpected consequences that arise through use.
And although organizational leaders might not be able to foresee every ethical consideration, experts said enterprises should have frameworks to ensure their AI systems contain the policies and boundaries to create ethical, transparent, fair and unbiased results -- with human employees monitoring these systems to confirm the results meet the organization's established standards.
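What such ongoing monitoring can look like in practice is sketched below. The check, the 80% threshold and the field names are assumptions chosen for illustration, not a standard or a complete fairness audit: the idea is simply to flag a deployed model for human review when outcome rates across groups drift too far apart.

```python
# Minimal sketch of an ongoing monitoring check (threshold and data shape
# are assumptions): flag the model for review if any group's positive-outcome
# rate falls below 80% of the best-served group's rate.
def disparity_check(decisions, threshold=0.8):
    """decisions: list of (group, approved) tuples from recent production traffic."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [approved for g, approved in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)

    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if best and r / best < threshold}
    return rates, flagged

rates, flagged = disparity_check([("A", True), ("A", True), ("A", False),
                                  ("B", True), ("B", False), ("B", False)])
print(rates)    # per-group approval rates
print(flagged)  # groups below the threshold -> trigger human review
```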
Organizations seeking to be successful in such work should involve the board and the C-suite. As Wong said, "This is not just an IT problem, so all executives need to get involved in this."
Mary K. Pratt is an award-winning freelance journalist with a focus on covering enterprise IT and cybersecurity management.