
Fighting disinformation gets harder, just when it matters most

The Economist

Researchers and governments need to co-ordinate; tech companies need to open up

Researchers studying disinformation have been subjected to lawsuits, attacks from political groups and even death threats. (Image: Pixabay)

In February 2024 America’s State Department revealed that it had uncovered a Russian operation designed to discredit Western-run health programmes in Africa. The operation included spreading rumours that dengue fever, a mosquito-borne illness, was created by an American NGO, and that Africans who received treatment were being used as test subjects by American military researchers. The campaign, based around a Russian-funded news site, was intended to sow division and harm America’s reputation. Discouraging Africans from seeking health care was collateral damage along the way.

The campaign was brought to light through the work of the Global Engagement Centre, an agency in the US State Department. Once a false story is detected, the agency works with local partners, including academics, journalists and civil-society groups, to spread the word about the source—a technique known as “psychological inoculation” or “pre-bunking”. The idea is that if people are made aware that a particular false narrative is in circulation, they are more likely to view it sceptically if they encounter it in social-media posts, news articles or in person.

Pre-bunking is just one of many countermeasures that have been proposed and deployed against deceptive information. But how effective are they? In a study published last year, the International Panel on the Information Environment (IPIE), a non-profit group, drew up a list of 11 categories of proposed countermeasures, based on a meta-analysis of 588 peer-reviewed studies, and evaluated the evidence for their effectiveness. The measures include: blocking or labelling specific users or posts on digital platforms; providing media-literacy education (such as pre-bunking) to enable people to identify misinformation and disinformation; tightening verification requirements on digital platforms; supporting fact-checking organisations and other publishers of corrective information; and so on.

The IPIE analysis found that only four of the 11 countermeasures were widely endorsed in the research literature: content labelling (such as adding tags to accounts or items of content to flag that they are disputed); corrective information (ie, fact-checking and debunking); content moderation (downranking or removing content, and suspending or blocking accounts); and media literacy (educating people to identify deceptive content, for example through pre-bunking). Of these various approaches, the evidence was strongest for content labelling and corrective information.

Such countermeasures are of course already being implemented in different ways around the world. On social platforms, users can report posts for containing “false information” on Facebook and Instagram, and “misinformation” on TikTok, so that warning labels can be applied. X does not have such a category, but allows “Community notes” to be added to problematic posts to provide corrections or context.

Lies, damned lies and social media

In many countries academics, civil-society groups, governments and intelligence agencies flag offending posts to tech platforms, which also have their own in-house efforts. Meta, for example, co-operates with about 100 independent fact-checking outfits in more than 60 languages, all of which are members of the International Fact-Checking Network, established by the Poynter Institute, an American non-profit group. Various organisations and governments work to improve media literacy; Finland is famed for its national training initiative, launched in 2014 in response to Russian disinformation. Media literacy can also be taught through gaming: Tilt Studio, from the Netherlands, has worked with the British government, the European Commission and NATO to create games that help players identify misleading content.

To be able to fight disinformation, academics, platforms and governments must understand it. But research on disinformation is limited in several key respects—studies tend to look only at campaigns in a single language, or on a single subject, for instance. And most glaringly of all, there is still no consensus on the real-life impact of exposure to deceptive content. Some studies find little evidence linking disinformation to the outcomes of elections and referendums. But others find that Kremlin talking points are repeated by right-wing politicians in America and Europe. Opinion polls also find that enough European citizens tend to agree with Russian lines of disinformation to suggest that Russia’s campaign to sow doubt about the truth might be working.

A big obstacle for researchers is the lack of access to data. The best data is not in public hands, but is “sitting in private networks in Silicon Valley,” says Phil Howard, an expert on democracy and technology at Oxford University and a co-founder of the IPIE. And collecting relevant data is becoming more difficult. After Elon Musk bought Twitter (now X) in 2022 the company shut down the free system that let anyone download information on posts and accounts, and began charging thousands of dollars a month for such data access. Meta announced in March that it would be retiring CrowdTangle, its platform-monitoring tool that lets scientists, journalists and civil-society groups access data, though the company says academics can still apply for access to certain datasets.

Such changes have seriously hampered researchers’ ability both to detect disinformation and to understand how it spreads. “Most of our foundational understanding of disinformation has come from having access to huge amounts of Twitter data,” says Rachel Moran of the University of Washington. With this source cut off, researchers worry that they will lose track of how new campaigns are operating, which has wider implications. “The academic community is very, very important in this space,” says an American official.

Regulators are stepping in to try to plug the gap—at least in Europe. The EU’s Digital Services Act (DSA), which came into force in February, requires platforms to make data available to researchers who are working on countering “systemic risk” to society (Britain’s equivalent, the Online Safety Act, has no such provision). Under the new EU rules, researchers can submit proposals to the platforms for review. But so far, few have been successful. Jakob Ohme, a researcher at the Weizenbaum Institute for Networked Society, has been collecting information from colleagues on the outcomes of their requests. Of roughly 21 researchers he knows of who have submitted proposals, only four have received data. According to a European Commission spokesperson, platforms have been asked to supply information to show that they are complying with the act. Both X and TikTok are currently under investigation over whether they have failed to supply data to researchers without undue delay. (Both companies say they comply, or are committed to complying, with the DSA. X withdrew from the EU’s voluntary code to fight disinformation last year.)

In America, however, efforts to fight disinformation have become caught up in the country’s dysfunctional politics. Researchers believe that fighting disinformation requires a co-ordinated effort by tech platforms, academics, government agencies, civil-society groups and media organisations. But in America any co-ordination of this kind has come to be seen, particularly by those on the right, as evidence of a conspiracy between all those groups to suppress particular voices and viewpoints. When false information about elections and covid-19, posted by Donald Trump and Marjorie Taylor Greene, was removed from some tech platforms, they and other Republican politicians complained of censorship. A group of large companies that refused to advertise on right-leaning platforms where disinformation abounds was threatened with antitrust investigations.

Researchers studying disinformation have been subjected to lawsuits, attacks from political groups and even death threats. Funding has also diminished. Faced with these challenges, some researchers say they have stopped alerting platforms to suspicious accounts or posts. An ongoing lawsuit, Murthy v Missouri, has led American federal agencies to suspend their sharing of suspected misinformation with tech platforms—although the FBI has reportedly resumed sending social-media companies briefings in the past few weeks.

All this has had a chilling effect on the field, just as concern is mounting about the potential for disinformation to influence elections around the world. “It is difficult to avoid the realisation that one side of politics—mainly in the US but also elsewhere—appears more threatened by research into misinformation than by the risks to democracy arising from misinformation itself,” wrote researchers recently in Current Opinion in Psychology.

The tide may be turning, however. In the past few weeks, during oral arguments in the Murthy v Missouri case, most of the justices on America’s Supreme Court expressed support for the efforts of governments, researchers and social-media platforms to work together to combat disinformation. America has also announced an international collaboration with intelligence agencies in Canada and Britain to curb foreign influence on social media by “going beyond ‘monitor-and-report’ approaches”, although the details of any new strategies have not been disclosed. And if the EU’s DSA regulations can open the way for tech companies to share data with researchers in Europe, researchers elsewhere may benefit too.

If America has lately provided an illustration of how not to deal with disinformation in the run-up to an election, another country, Taiwan, offers a more inspiring example. “Taiwan is the gold standard,” says Renée DiResta, who studies information flows at the Stanford Internet Observatory. Its model involves close collaboration between civil-society groups, tech platforms, government and the media. When disinformation is spotted by fact-checking organisations, they inform the tech platforms—and where appropriate government ministries also issue rapid rebuttals or corrections. The government also promotes media literacy, for example by including it in the school curriculum. But while this approach may be effective in a small country where there is a high degree of trust in the government and an obvious adversary (Finland and Sweden would be other examples), it may be difficult to make it work elsewhere.

Other countries have taken different approaches. Brazil won plaudits from some observers for its muscular handling of disinformation in the run-up to its elections in October 2022, which involved co-operation between civil-society groups and tech platforms—and the oversight of a Supreme Court judge who ordered the suspension of social-media accounts of politicians and influencers whose posts, in his view, threatened the process. But critics, within Brazil and outside it, felt the judge was too heavy-handed (he is now involved in a legal dispute with Elon Musk, who owns X). Sweden, for its part, created a government agency in 2022 responsible for “psychological defence”.

Global warning

Disinformation is a sprawling problem, requiring co-ordinated action from multiple sectors of society. Unfortunately, the analysis of it tends to be siloed, and there is a lack of agreement even on basic terminology. This makes it hard to join the dots and find lessons that apply more broadly. Dr Howard of the IPIE likens the situation to the early days of climate science: lots of people are trying to tackle the same problem from different perspectives, but it is difficult to see the whole picture. It took decades, he observes, to bring together atmospheric scientists, geologists and oceanographers to form a consensus on what was happening. And there continues to be strong political opposition from those who have an interest in maintaining the status quo. But the UN’s Intergovernmental Panel on Climate Change now provides governments with solid data on which to base policy decisions. The IPIE aims to do the same for the global information environment, says Dr Howard.

The current lack of a joined-up response to disinformation is a problem, but also an opportunity: co-ordinating research and action should lead to better detection and mitigation of deceptive content, because modern disinformation campaigns all work in similar ways. But, as with climate change, cleaning up the world’s information environment presents a daunting, long-term challenge.

© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com

Published: 11 Jul 2024, 06:00 PM IST