Q and A with Jon Roozenbeek, Winner of the 2020 Research Prize in Public Interest Communications
In the following interview, Dr. Jon Roozenbeek, a researcher at the Cambridge Social Decision-Making Lab at the University of Cambridge and co-author of the paper that received the 2020 Research Prize in Public Interest Communications, discusses his research into ways to help prevent the spread of false information online with Annie Neimand, Director of Research at the Center for Public Interest Communications.
Read the winning paper, “Fake News Game Confers Psychological Resistance Against Online Misinformation,” which he co-authored with Sander van der Linden.
frank: Why do you study misinformation and fake news?
Jon Roozenbeek: I think it’s an interesting problem… how do you make sure that people become less vulnerable to misinformation, and that misinformation becomes less of a problem, while working within a lot of constraints? For example, freedom of speech, right? You can’t just say, okay, let’s delete all fake news. Because the moment you do that, that kind of speech becomes illegal, which is absurd, right? So there are a lot of parameters and constraints to this problem that you need to factor in before you can start solving it. And I thought that was interesting.
Second, the internet has been heralded a lot as this advent of… a space for universal free discussion and so on. And it is in many ways, but it also comes with a lot of problems that we are starting to solve collectively as scientists, lawmakers, policy makers, journalists and so on. And we’re noticing that spreading misleading, false information for whatever purpose, political, financial, whatever, is actually surprisingly easy and surprisingly successful. So to be on the side of solving that problem is really exciting.
What challenge does your study take on?
The study takes on the challenge of building individual resistance against misinformation. So I don’t look at algorithms, I don’t look at legislation or fact-checking or anything like that. The study looks at what we can do at the individual level. How can we equip individuals with the skills to know what it even looks like when someone’s trying to deceive you online? How can we make sure that that is scalable?
So part of the question was, how do we get people interested in this? How do we make sure that we develop something that people voluntarily engage with? I don’t want to be a kind of condescending, ivory tower academic telling you how it’s supposed to be done, but instead to involve people in critical thinking in such a way that they actually enjoy it. We’ve been trying to balance those two things: academic rigor, and also fun and a sense of humor, to make sure that whatever we develop ends up being used voluntarily by people.
Can you briefly tell us what you did in this study?
For this study, we first designed an online game that was playable from any browser, really. For this game, we boiled down this whole nebulous concept of online misinformation into several simple concepts… We said, okay, let’s work with six common misinformation techniques. For example, conspiracy theories, using emotional language, polarizing audiences, fake experts, trolling, and so on. Let’s turn that into an online game where people learn about these different techniques while playing. We decided to have people play from the perspective of the villain. The idea behind that was, it’s a little bit daring, a little bit counterintuitive. It’s also a little bit controversial perhaps, but we decided to do that deliberately because we thought, well, it’s much more likely that people will voluntarily engage with a game in which they are the bad guy, where they can do all of the nasty things that you’re not supposed to do online, and in so doing, learn about how misinformation works. So we created that game using insights from social psychology about persuasion and deception, but also from media studies.
So how do websites, or propagandists or political operators or other people, seek to deceive people online? How do they go about doing that? So we created that game… The game takes about 15 minutes to play.
Before you play the game, you answer a couple of questions about your age, your gender and so on. And you’re also shown a couple of headlines, which may or may not be manipulative or fake in some way. And you’re asked, “How reliable is this headline, according to you?” and you give it a number from one to seven. You go through the rest of the game, and then at the end the game asks, “Hey, you participated in the survey at the beginning. Would you like to do that again? Help us answer a couple more questions.” And you’re again asked, “How reliable do you think this headline is?”
After playing, people become much better at recognizing that a headline that is manipulative or deceptive in some way is less reliable or very unreliable.
We launched the game in February 2018, and the website immediately crashed because of all of the visitors, thanks to the press office at Cambridge. Journalists were interested in it, so they wrote about it in The Guardian, CNN, and so on. And that brought a lot of visitors to the website, which was amazing. When the website was back up, we managed to collect about 15,000 responses in a month and a half. That meant we could do really nice statistics on whether or not people become better at spotting misinformation.
We found… the deceptive headlines were seen as much less reliable after playing, whereas the non-deceptive headlines were seen as equally reliable before and after playing by people participating in the survey. We found no real differences across gender or age; education background mattered a little bit. Political affiliation, being left wing or right wing, didn’t really matter all that much, either. So in that sense, we were very happy to see that the game works really well, and not just for liberals or conservatives or for men or for women, but for pretty much everyone who played.
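For readers curious what this kind of pre/post comparison can look like in practice, here is a minimal sketch in Python. The file name, the column names (pre_rating, post_rating, headline_type), and the choice of a paired t-test are illustrative assumptions, not the authors’ actual dataset or analysis pipeline.

```python
# Minimal sketch of a pre/post reliability comparison, assuming a long-format
# CSV with one row per respondent-headline pair. All names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("badnews_responses.csv")  # hypothetical file

for kind in ["deceptive", "non_deceptive"]:
    subset = df[df["headline_type"] == kind]
    # Paired comparison: same respondents rated the same headlines
    # before and after playing, on a 1-7 reliability scale.
    t, p = stats.ttest_rel(subset["pre_rating"], subset["post_rating"])
    print(f"{kind}: mean pre={subset['pre_rating'].mean():.2f}, "
          f"mean post={subset['post_rating'].mean():.2f}, t={t:.2f}, p={p:.3g}")
```

Under this setup, the pattern described above would show up as a drop in mean post-ratings for deceptive headlines and roughly unchanged ratings for non-deceptive ones.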
Your paper was testing this theory called inoculation theory. Can you tell us how your findings add to our understanding of inoculation and ways to counter misinformation?
Inoculation focuses on preventing persuasion. There’s a lot of research out there that looks into changing people’s minds, if you will: how difficult is it to make someone stop believing something that they firmly believe? And it turns out that that’s almost impossible, even if that belief is based on objective misinformation. For example, if someone firmly believes that vaccines cause autism, which is verifiably untrue, and they’ve invested quite a lot into believing that, it’s extremely difficult to convince them that that’s not true. Inoculation, then, asks: how about we flip the switch and focus on the people who haven’t been persuaded yet, and equip them preemptively with the tools to recognize when they are about to be misled? And what happens if you do that?
Our contribution is that it switches the perspective from what is traditionally done in inoculation research, which is inoculating against specific arguments, for example about climate change or, in the original case in the 1960s, about brushing your teeth, and instead focuses on techniques. So not the deceptive argument itself, but rather how that deceptive argument is used, and whether you can, preemptively, before the real persuasion attempt happens, inoculate people against that technique. And we found some pretty good results suggesting that that is the case.
What does this mean for practitioners who are working in the social change sector?
This is one of the first interventions that we know of that actually has the statistical backing behind it to say that it works as intended, and works really well. We’ve tested how long the effect lasts, for example, and it lasts up to eight weeks. We’ve tested whether people’s confidence in their ability to assess what is and what is not misinformation improves after playing. So we’re very confident when we say that we know that this works, and works very well.
So as far as traditional media literacy efforts in schools and so on go, this tool is one of the first to be actually rooted in insights from social psychology and behavioral science that we know has the intended effect. Obviously it’s not the only thing that needs to be done about the problem of misinformation, but it is an important step in knowing that it’s possible to equip people with the tools.
What are you working on next?
There are a few projects that are ongoing. One is on extremism. We built a game called Radicalize where you are tasked with recruiting someone into your extremist organization, which is an absurd one: the Anti-Ice Cream Front. Obviously, we were very careful with this kind of material, because it’s so sensitive. We show people how the persuasion attempts in extremist recruitment work, and we wanted to see whether people actually become better at spotting those after playing that game.
The next step for that is to see whether you can provide that kind of intervention in refugee camps in Lebanon as well. We’re working with an organization in Beirut that has a mobile lab with the capacity to go to refugee camps in Lebanon, and we plan to run some trials there to see if this kind of intervention would be useful in that context. Obviously after adaptation and translation and so on.
The second project is with WhatsApp. A little over a year ago, we got a grant from WhatsApp to study and combat misinformation on direct messaging apps, which is a really big problem in India, Mexico, Brazil, and other places where direct messaging apps such as WhatsApp are used as social media platforms. We built a game for that as well, which specifically focuses on how misinformation spreads on these types of apps. And we’ve run some initial tests with that. It looks good, and the next step is to translate the game into Hindi and to test it in rural India especially. We want to see whether this game can be voluntarily adopted by a lot of people. Maybe the word will spread that this type of game is available, for example, in WhatsApp itself, and people will start playing it, and that will prevent harmful rumors from spreading on WhatsApp to the extent that they do now.