Researchers involved in a large peer-reviewed study published Wednesday say that “pre-bunking” is the best method yet developed to stop people from believing nonsense and lies they read or see on the internet. Could the new strategy help keep people from falling for misinformation?
The study saw researchers from the University of Cambridge, the University of Bristol, the University of Western Australia and Google’s Jigsaw conduct a total of seven experiments involving nearly 30,000 participants. The goal was to see whether they could persuade web users to steer clear of the web’s most noxious content.
The experiments used a relatively new concept known as “pre-bunking” or, in researcher parlance, “attitudinal inoculation,” which draws on a field of psychological research called inoculation theory. The theory posits that, through various forms of communication, people can be pre-emptively persuaded to resist other arguments or belief systems. In short, “pre-bunking” is meant to give web users a taste of what online manipulation looks like so that they can identify it and protect themselves from it in the future.
To test this theory, researchers deployed 90-second videos in YouTube’s advert slot to inform viewers about misinformation tactics that they might encounter on the platform. These PSAs weren’t focused on particular kinds of content, but instead tried to teach viewers about different types of manipulative rhetoric that might be used in misinformation campaigns. Specifically, the videos warned viewers about well-known tricks, such as “emotionally manipulative” language, false dichotomies, ad hominem attacks, scapegoating, and incoherence.
After being shown the videos, study participants were shown a variety of social media posts — some with manipulative tactics and others that were “neutral” — and asked to rate them for trustworthiness. According to researchers, the videos seem to have worked well. They claim that participants’ ability to identify manipulative rhetoric rose by an average of 5 per cent after they viewed the videos. The recently published findings note:
“Across seven high-powered preregistered studies including a field experiment on YouTube, with a total of nearly 30,000 participants, we find that watching short inoculation videos improves people’s ability to identify manipulation techniques commonly used in online misinformation, both in a laboratory setting and in a real-world environment where exposure to misinformation is common.”
Jon Roozenbeek, one of the lead researchers involved in the project, said that the inoculation worked for people from all walks of life. “The inoculation effect was consistent across liberals and conservatives. It worked for people with different levels of education, and different personality types. This is the basis of a general inoculation against misinformation,” he said.
A Solution with Scale
Pre-bunking’s supporters say it’s the most effective, scalable method currently available to fight misinformation. Fact-checking, one of the most widely used tools in the fight against online bullshit, is difficult to scale because of the sheer effort required to debunk every single incorrect thing that gets published online. Pre-bunking, by contrast, is supposed to prime web users against entire genres of manipulative tactics or narratives before they ever encounter them in the wild. This means that, regardless of the specifics of a particular viral conspiracy theory, viewers will be mentally armed to fend off that kind of content when it pops up.
Researchers said that their method worked so well that they are now launching new “pre-bunking” campaigns to target specific kinds of content in specific geographic regions. Google’s Jigsaw is “launching a prebunking video campaign to counter anti-refugee narratives in Central and Eastern Europe in partnership with Google, YouTube, and local experts.” The effort aims to discourage web users from engaging with content that demonizes refugees or makes them seem like a noxious influence on their host countries.
“These findings are exciting because they demonstrate that we can scale prebunking far and wide, using ads as a vehicle, and that the pre-bunking videos are effective in an ‘ecologically valid environment’ on social media and outside a controlled lab test,” said Beth Goldberg, Head of Research & Development at Jigsaw, and a co-author of the paper, in a statement to Gizmodo.
As impressive as all this sounds, there are some questions you can’t help but ponder. Think about it for a minute and it’s pretty clear that a lot could go wrong with the whole “pre-bunking” concept.
One question that naturally springs to mind is: who gets to determine what counts as a false or “manipulative” narrative? Is it the government? A corporation like Google? A select panel of academic experts? In short: who gets to be the arbiter of this very important epistemological function? And how do you maintain confidence in that arbiter when so much of the misinformation crisis is driven by public distrust in official narratives?
When you look at recent examples of “pre-bunking,” you can see that it hasn’t always gone so smoothly. One of the most prominent instances of “pre-bunking” occurred during the lead-up to the Russian invasion of Ukraine, when the State Department controversially announced that Russia was planning to distribute a professionally produced propaganda video that involved pyrotechnics and “crisis actors.” The video would be used to blame Ukraine for terroristic attacks on civilians and would help to justify the invasion, the U.S. said. Unfortunately, not everybody bought what the State Department was selling: an Associated Press reporter expressed incredulity at the claims and blatantly called out the government for spreading “Alex Jones” style bunkum.
Even more problematically, the video never materialised. Was it because America’s “pre-bunking” efforts stopped the Russians from releasing their video? Or was it because the video never existed in the first place? Under the circumstances, it’s impossible to say — and, therefore, it’s also impossible to gauge whether the U.S. was being a good-faith “pre-bunker” or was actually spreading its own disinformation.
In the wrong hands, pre-bunking (or, even more creepily, “psychological inoculation”) could be just another way to guide and shape online narratives — to deploy a whole different kind of manipulation that is all the more noxious because it’s distributed by authoritative institutions rather than just some paranoid goons on the internet. Roozenbeek is careful to acknowledge that “pre-bunking” is by no means the only strategy necessary to combat misinformation and that it has to be conducted with care and sensitivity to the audience receiving it.
“The point that we’ve been explicitly trying to make is: we’re not telling people what’s true and what isn’t,” said Roozenbeek.
The algorithms that govern these platforms also have to be looked at, he said. “They [YouTube] have a big problem with people ending up in these spirals of increasingly low-quality content — that’s certainly an issue,” Roozenbeek said, referencing the way in which YouTube tends to send people down toxic content rabbit holes. “It’s commendable that, at least on the surface, they’re trying to do something about that,” he said. “What I don’t think would be good…is if they just said, ‘Well, don’t worry about our algorithms, we’ll just pre-bunk everything.’” Pre-bunking is not the only solution, he stresses — it’s just part of the solution.