AI is not new, but the speed at which it is forcing organisations to act and react certainly is. In an age when a lie is around the world before the truth has its trousers on, companies are having to pay attention to its potential for both good and ill.
A straw poll of attendees at a special roundtable hosted by The Weber Shandwick Collective (TWSC) at the PRWeek Crisis Communications Summit revealed organisations are at various stages in their AI journeys, from dormant to cutting edge.
“This variation is not uncommon,” said Barnaby Fry, head of crisis and issues EMEA, Weber Shandwick, citing client feedback and TWSC research, ‘UN/PREDICTIONS 2024: Expert Analysis and Stakeholder Insights about Megatrends in Policy, Technology, Media and Culture’. Only 1% of senior executives believe AI-linked misinformation affects them, and just 6% consider it a top priority.
It’s here now
By contrast, three-quarters of consumers are concerned about AI, particularly given the rise of sophisticated deepfakes. Recently, WPP CEO Mark Read fell victim to an attempted AI-enabled scam, and a global engineering company lost £20m to fraud carried out via a fake AI voice call.
“It’s not a case of worrying about AI coming. It’s already here and affecting business,” said Fry. “It’s gone from being funny to being worrying.”
Reputation is increasingly at stake as AI can be weaponised or inadvertently damage businesses.
For instance, Chevrolet’s chatbot recommended competitor Tesla cars to consumers, and an AI customer service app for Air Canada mistakenly offered a bereaved customer free repatriation of a loved one.
AI for good
Fry outlined how TWSC leverages AI to help clients mitigate risk, forecast momentum of a threat and conduct message testing.
“In crisis situations, speed is crucial,” he noted. “In Mexico, we developed the Gallery of Lies for Animal Político, an independent Mexican media outlet. This AI tool verified information during elections plagued by disinformation.”
“We aim to use AI positively,” Fry continued. “For example, we adapted the Gallery of Lies for a fintech client to combat misinformation.”
Testing messages
The company is already using tools that provide adversarial viewpoints to speed up message development.
“It’s a really clever tool that speeds up processes,” said Fry. “In a crisis, it helps you develop statements and ensures your messages are consistent.”
Elizabeth Gladwin, head of analytics and intelligence, EMEA, The Weber Shandwick Collective, noted AI tools can pre-emptively assess risk and highlight potential opportunities within organisations.
This helps with scenario planning and creating a messaging framework; the company has developed AI-powered trackers and dashboards to support this work.
“For a tech company that was getting slammed for supply chain problems, we used it to test what messages would affect reputation. You are able to ground scenarios in actual data,” said Gladwin.
Spotting patterns
AI assists in monitoring multiple platforms and tailoring content accordingly, as demonstrated by TWSC's use of the Blackbird AI tool. It can monitor forums for misinformation, allowing counternarratives to be deployed before false claims reach mainstream media.
Other use cases include tailoring comms activity to particular media brands and journalists, identifying key opinion formers, and distinguishing genuine conversations from those driven by AI bots. “Ultimately, it is about making comms people more relevant in a crisis and giving them more time to think,” said Fry.
Crucially, Fry stressed, AI should be seen as a tool for guiding decisions rather than making them outright. “Human interpretation remains essential,” he concluded, “so we can provide better counsel to the C-suite.”