How Will Twitter’s Birdwatch Community Debunking Actually Work? A VP Answers Our Questions.

It’s clear that Twitter vice president of product Keith Coleman genuinely believes in Birdwatch, the platform’s community-based debunking feature that will be expanding to 50% of U.S. users this week. The feature has been met with scepticism by some, who worry bad actors will misuse it and fill the social network with even more misinformation.

Coleman has heard it all before. One of the most common questions he’s received is: Will Birdwatch be exploited?

“Everyone is so used to things on the Internet being trolled or manipulated, and they understandably wonder or worry whether this will be,” Coleman, the Twitter exec who has led the development of Birdwatch, told Gizmodo in an interview on Wednesday. “So, that’s really been a huge part of our focus.”

This has been a big week for Birdwatch, Twitter’s latest effort to address misinformation. Besides unveiling a new quality assurance system, Twitter has also announced that notes from Birdwatch contributors will start showing up in a larger number of users’ timelines in the U.S. Birdwatch relies on its thousands of anonymous unpaid contributors — there are currently 15,000, although Twitter plans to onboard up to 1,000 more each week going forward — to add contextual notes to tweets to stop the spread of misinformation.

In particular, Coleman points out that Birdwatch is able to tackle potentially misleading content that may not be addressed by Twitter’s policies or is in a grey area. For instance, does that tweet really contain the trailer for this new TV show? Is that bat really the size of a human or was the picture taken from a strange angle?

It’s kind of like Wikipedia, but for Twitter. The notes are written “by the people and for the people,” as the company describes it. That doesn’t mean the most popular note wins, though. As explained by Coleman, Birdwatch uses what’s known as a “bridging algorithm,” which selects content rated helpful by people with a range of perspectives who have disagreed in the past. The logic, according to Coleman, is that if content is being recommended by folks who haven’t seen eye to eye, it’s likely to be helpful to a wide group of people.
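Coleman doesn’t spell out the maths, and Twitter’s actual note-scoring code is open source, but a toy sketch can show how a bridging rule differs from a simple vote count. Everything below (the cluster labels, thresholds and data shapes) is invented for illustration and is not Twitter’s implementation:

```python
# A toy illustration of a "bridging" selection rule: instead of counting raw
# votes, require that a note be rated helpful by raters from groups that
# usually disagree. The cluster labels, thresholds and data shapes here are
# invented for illustration; Twitter's open-source scoring code is more
# sophisticated than this sketch.
from collections import defaultdict

def bridging_helpful(ratings, min_ratio=0.6, min_per_cluster=5):
    """ratings: list of (rater_cluster, is_helpful) tuples for one note."""
    helpful = defaultdict(int)
    total = defaultdict(int)
    for cluster, is_helpful in ratings:
        total[cluster] += 1
        helpful[cluster] += int(is_helpful)
    if len(total) < 2:  # need input from more than one perspective
        return False
    # Show the note only if every cluster independently finds it helpful.
    return all(
        total[c] >= min_per_cluster and helpful[c] / total[c] >= min_ratio
        for c in total
    )

# A note rated helpful across both clusters is shown; a note boosted by
# only one cluster is not, no matter how many raw votes it racks up.
print(bridging_helpful([("A", True)] * 8 + [("B", True)] * 6 + [("B", False)] * 2))  # True
print(bridging_helpful([("A", True)] * 20 + [("B", False)] * 6))                     # False
```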

And Twitter believes Birdwatch is helpful. The company says that users are 15-35% less likely to like or retweet content that carries a helpful Birdwatch note, suggesting the notes do slow the spread of flagged tweets. Furthermore, and I thought this was especially promising, users on Twitter were 20-40% less likely to agree with the substance of a potentially misleading tweet after reading a Birdwatch note about it, regardless of whether they identify as Democrats, Republicans, or Independents.

Yet, it’s not easy to control the spread of misinformation on social media. Many platforms, including Twitter, have holes or weaknesses in their approaches. Birdwatch is no different, as the Washington Post reported this week. The outlet cited an internal audit that found that Birdwatch had accepted an “overt” QAnon supporter as a contributor.

The news alarmed misinformation experts who spoke to Gizmodo, some of whom warned that letting unvetted people into Birdwatch could have serious consequences. The criteria to become a contributor are lax, which Twitter says is by design. All you need is a verified phone number from a major U.S. carrier, an account that has not recently broken any of Twitter’s rules, and an account that is more than six months old.
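Those sign-up criteria are simple enough to express as a rule check. The sketch below is purely illustrative; the field names and the way carrier trust is flagged are assumptions for the example, not Twitter’s actual code:

```python
from datetime import datetime, timedelta

def eligible_for_birdwatch(account, now=None):
    """Illustrative check of the stated sign-up criteria. `account` is a
    hypothetical dict; Twitter's internal checks and data model differ."""
    now = now or datetime.utcnow()
    return (
        account.get("phone_verified", False)
        and account.get("carrier_trusted", False)  # major U.S. carrier, not a virtual one
        and not account.get("recent_rule_violation", True)
        and now - account["created_at"] >= timedelta(days=183)  # roughly six months
    )

# An older, verified account with a clean recent record would qualify.
print(eligible_for_birdwatch({
    "phone_verified": True,
    "carrier_trusted": True,
    "recent_rule_violation": False,
    "created_at": datetime(2021, 1, 1),
}))
```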

“Birdwatch is a good idea if and only if participants are properly vetted. That doesn’t mean they should all hold a singular ideological belief system–it’s important that the system be as fair and unbiased as possible–but members should ideally have demonstrable knowledge of how to detect false/misleading information,” Sara Aniano, a disinformation analyst with the Anti-Defamation League, told Gizmodo via email on Thursday. “If that isn’t happening, then this could have serious consequences.”

The fact that an overt QAnon believer was on Birdwatch “is symptomatic of a common ailment we see today: The false claim that conspiracy theories aren’t harmful unless they are widespread,” Aniano explained. Any amount of conspiracy theories in the system can be harmful, she added.

Timothy Caulfield, a professor at the University of Alberta who studies and actively debunks misinformation on Twitter, said it was worrying that someone with extreme views was allowed on Birdwatch. While he said he appreciated that Twitter and other platforms are recognising they need to do more about misinformation, an approach like Birdwatch has potential pitfalls.

“I think that Birdwatch is hoping for a ‘wisdom of the crowds’ solution–that is, if enough people are involved, we’ll get closer to the truth,” Caulfield told Gizmodo. “But when misinformation is so ubiquitous, this strategy might not work. I mean, a huge portion of the population now believes pretty hardcore conspiracy theories.”

Caulfield also pointed out that harmful misinformation and conspiracy theories can sneak into a lot of the places Birdwatch tends to address, such as sports and entertainment. The researcher cited a 2017 study he coauthored that found scientifically inaccurate information about platelet-rich plasma was frequently included in sports-related news stories, which helped to normalize and legitimize it.

In response to the news of the QAnon supporter on Birdwatch, Coleman told Gizmodo that although he had read the reports, he did not know which account they referenced and was not aware of any incident involving a specific account.

“I think one really important thing for people to realise is that if one person could influence the outcome of Birdwatch, it wouldn’t work,” Coleman said.

Time will tell. Birdwatch will now be front and centre for half of Twitter’s users in the U.S. While its long-term success is still uncertain, I’m glad to see the platform trying new things to address misinformation instead of throwing up its hands and saying nothing can be done. Nonetheless, experts raise a valid concern about the lack of more thorough vetting of Birdwatch contributors.

You can read Gizmodo’s full Q&A with Coleman below. The Twitter exec addresses the QAnon user on Birdwatch, comments on criticism that Twitter is outsourcing content moderation, and explains why Twitter does not want to decide which Birdwatch notes are seen.

The interview has been lightly edited for clarity.

Gizmodo: People who are Birdwatch contributors are searching for information. They are spending their time contributing to this product. I wanted to know, are they getting paid in some way, like with a free Twitter Blue subscription or something? They are, after all, helping Twitter be a healthier platform.

KC: Yeah, it’s a really good question. I think it’s worth going back to why we started this project in the first place and why we’re taking this approach. For a while now, we’ve had a number of approaches to misinfo interventions. Those include, for example, adding labels and annotations to tweets that violate misleading info policies that Twitter has. And we’ve been studying those and other approaches. One of the challenges that we hear regularly is that there are plenty of people out there who don’t want a company or any singular institution to decide what is misleading or not and how to annotate it. There are also challenges covering the breadth of potentially misleading info out there.

So obviously we have policies in certain areas around COVID, around civic integrity, around crises, around manipulated media, but there’s a lot of other stuff out there that the typical human living their life in the world would look at and say, like, “Wow, that is kind of misleading.” But it may be hard to craft a policy against those or related to those [tweets]. Certainly, when you’re working in those grey areas, when people are already not necessarily comfortable with a company deciding when to intervene, they may be less so in grey areas. And so the question we were asking was, how could we add context to those tweets, particularly in the wide range of grey areas, in a way that people genuinely thought was trustworthy, informative and helpful? That was the prompt and the challenge that led to the idea of Birdwatch.

The inspiration was looking at Wikipedia and other products like it, where you have a wide range of people who are coming together and collectively creating, putting information out in the world. We thought, well maybe, instead of writing an encyclopaedia or writing how-to docs on the Internet or whatever it is, people could actually add information to tweets. Maybe that would work and maybe that would be more trusted. Maybe it would cover a wide range of topics in a way that people would find helpful. Maybe it would be detailed and well sourced, like a lot of these resources are. That was the idea and that’s what got us to try to pilot this concept.

Our motivation in having people do this is really to try to find a way to add context that is genuinely helpful in a way that is trustworthy. We think that generally the more intrinsic that motivation is for people doing that, the better the outcome will be and the more trustworthy it will seem. So, we’re open to exploring other kinds of recognition for contributors. They are doing a lot of work, great work. It’s obviously having a huge effect. And so we’re open to exploring a wide range of recognition or reward for them. But I would say we started with the intrinsic motivation because we think that’s the most likely to produce a result that is high quality and that people trust. We’ve spoken to advisors and others about this and there are challenges that can come with extrinsic motivation. We started with the intrinsic, open to exploring more, but that’s why we’ve taken this approach.

Gizmodo: Got it.

KC: One more thing related to recognition and reward. These people are doing great work and some of them are doing a lot of it, and we want them to feel the power of that. They are having an impact and we want them to know it.

When we first launched the product, obviously very few people were seeing the notes. Contributors would know that their note had been rated helpful, but they wouldn’t know much beyond that. And now, as we scale up the service, many people are seeing some of these notes. There are hundreds of thousands, nearing a million, people seeing some of the notes already in the pilot, even at our current phase. We wanted the contributors to feel like, “Hey, I wrote this note and 100,000 people saw it,” or “I helped rate this note and 100,000 people saw it.” So, we’ve actually started counting those views and sending [them] to the people who’ve written the notes or helped rate the notes to again fulfil that intrinsic motivation that we know they have.

We do hear again and again that the reason these people are here is because they want to get information out in the world that helps people stay informed, and so we think that’s an avenue to at least satisfy that core motivation that they have.

Gizmodo: In terms of policing misinformation, there were 15,000 people in the Birdwatch pilot, a number that’s now going to change because you all are going to open it up to a lot more people. What would you say to people who critique Twitter and say that it’s outsourcing content moderation?

KC: The key to us here is that this is empowering the people to make the decisions for themselves collectively about what warrants additional context and what that context says. Our focus isn’t on who’s doing the work, it’s on how we get information [on Twitter] in a trustworthy and fair way that people find informative. It’s really about empowering people and handing over the decision. The main focus isn’t about handing over the work, if that makes sense.

[Here’s] another way to look at it. Imagine there was a team of a million employees, so you’ve got unlimited people to [enforce content] policies and apply interventions when a tweet is running counter to a policy. It still wouldn’t achieve the full goals or potential because there are so many topics and so many tweets that those policies don’t cover. And so, the only way we’ve figured out to cover those grey areas in a way that seems fair and trustworthy is to allow the people to do that. That’s really the focus.

Gizmodo: I definitely see your point and agree that there are many types of misinformation. I think the examples you gave of what information Birdwatch can and does address — Is that really the trailer for this new TV show? Is that bat really the size of a human — are really illustrative of what we can find on Twitter and other social media platforms.

KC: I would just add to that and say that Twitter, the company, still does a number of other things with regard to misleading information. This is really additive on top of that because we think it can help cover a broad range.

Gizmodo: Something I’ve been curious about is whether you all have engaged misinformation experts in Birdwatch or even the experts that regularly debunk bad info on Twitter already, many of whom I follow and chat with. Are misinformation experts involved in Birdwatch and if so, who?

KC: That’s a great question. We have a set of advisors who directly advise the product. We have folks from MIT that have studied misinformation and particularly crowdsourcing around misinformation. We have an advisor from the University of Washington who studies areas like digital juries, an advisor from the University of Michigan who studies the design of these online communities and systems, and an advisor from Duke who studies polarization. We’ve also worked with behavioural economists at the University of Chicago to help us design the system. We’ve brought a bunch of experts directly into shaping the system. In terms of the experts on Twitter, the way we’ve approached bringing them into Birdwatch is, first of all, making Birdwatch signups open so anyone can sign up. We want the set of contributors to be people who organically are interested in doing this, so we haven’t specifically added anyone into the contributor base. We just let people sign up and we have the people that signed up.

So, there may be such folks [misinformation experts] in it if they decide to join. We also see that in notes, contributors are regularly citing experts who are on Twitter. It’s not uncommon to see someone write a note that adds a bit of information and then cites a tweet, saying like, “hey, this is from CNN’s Fact Checker” or “this is from Reuters” or “this is from this other person on Twitter who is covering this specific story and has this credential.” Even when those people aren’t necessarily writing the notes, we often see the notes referencing their work on Twitter.

Gizmodo: And how exactly did these experts that you all are working with shape the product?

KC: Those advisors that I was mentioning, they meet with our team as we’re designing the product. We have regular, approximately quarterly, sessions with our advisory group, which contains a bunch of those academic advisors. Others we meet with ad hoc, and we’ll usually give them an update on what we’re learning in the product and what design challenges we’re facing, and they’ll give us feedback on tradeoffs in design decisions we’re making, or other research we should look at to help us answer key questions, or other measures we should be considering in making decisions, for example. So, they’re helping sort of behind the scenes on the design side of the product.

Gizmodo: You’ve mentioned before that Twitter does not want to be taking action on individual Birdwatch notes or deciding which should be shown or not. Yet, you all have taken a stance and acted on misinformation in the past, as is the case with misinformation labels, promoting information in Moments, deleting content, etc. Why is that different with Birdwatch?

KC: Birdwatch has been an experiment with really entrusting and empowering the community and the people on Twitter to do this. We’ve taken a very clear stance on that with Birdwatch, which is we want notes to be written by the people; we want the people to decide which ones are helpful enough to be worth showing; and if there are problems with that, we don’t want Twitter to be taking action on individual notes.

We want to be building a system that consistently, over time, will elevate the notes that are probably going to be found helpful. That decision and that principle really stem from the reason we started this, which is that we know not everyone wants a single company to be making these decisions. And so, we’ve just taken a clear line here, which is we want those decisions to be made by the people who contribute to Birdwatch, the people who Twitter serves.

Gizmodo: I understand that AP and Reuters are collaborating with Twitter on Birdwatch, but how they’re involved isn’t exactly clear to me. Can you talk more about their role and give me an example of how they’re contributing to Birdwatch?

KC: We have three main measures we look at, all around note quality; we want to know that generally notes are effective and high quality. First, we look at things like, “are they subjectively found helpful by people on Twitter?” Second, “do they inform understanding?” So, if you see a tweet versus a tweet with a note, do you come away with a different understanding? If the notes are effective, you should come away with a different understanding. And third, we want to know that the notes are accurate.

The first two, helpfulness and informativeness, we measure with large scale surveys across the Twitter user base. Surveying across the political spectrum across the U.S. on Twitter, we show some people tweets and some people tweets with notes and we get their perspective on helpfulness. They answer some questions to help understand whether the note has informed understanding.

To measure accuracy, we send notes that have been rated helpful in Birdwatch to professional reviewers, these partners like AP and Reuters. They evaluate them on a number of measures, including accuracy, and then we get those evaluations back. We want to see that generally accuracy is high, and if we see that accuracy is low, we investigate. If we see it’s consistently low, we would take a significant action, like we might just turn off display of all notes until we can figure out why there was an issue with accuracy.

We haven’t ever had to do that. We’ve not had those issues with accuracy, but we know that something could always change, so that’s one of our “always on” measures to understand how the service is doing. Importantly, the decision to show a note is based entirely on contributors’ ratings. It’s up to the people. If the note is rated helpful enough, it will be shown. It’s after the fact that we measure accuracy with partners. And then if we see a pattern of issues, we would take action.

Gizmodo: What is the biggest critique you all have received of Birdwatch? How are you addressing it, or have you already addressed it?

KC: Maybe a better way to phrase it is as a question. The biggest question we’ve received is: “Will this be manipulated?” Everyone is so used to things on the Internet being trolled or manipulated, and they understandably wonder or worry whether this will be. So, that’s really been a huge part of our focus.

The concerns could be, “will someone just mess with it in the classic manipulation sense, or just the trolling sense?” or, “Would it be biased in some way based on who is participating?” Our focus has been on making sure it doesn’t have those problems and making sure that it consistently elevates and makes visible notes that are helpful to a wide range of people, across the political spectrum and across different points of view.

We’ve done a number of things to make that possible, and that’s been a lot of what we focused on throughout the pilot. At the basic layer, there are some eligibility criteria that accounts need to meet to join Birdwatch in the first place. You have to have a verified phone number. The phone number has to be from a trusted carrier, so not just one of these virtual carriers where you can get 100 numbers. Your account has to have been on Twitter for at least six months and you have to have had no recent Twitter rule violations. These are intended to be simple, relatively objective criteria; any account that meets them can join. But that already makes it much more difficult to, you know, rate up a bunch of stuff or have a single person operating a bunch of accounts. That’s already providing some strength against potential manipulation.

Then [this week], we’re rolling out this new system where people have to first earn the ability to write notes, effectively by identifying through their ratings the notes that a wide range of people find helpful or unhelpful. So that’s, again, another threshold that accounts need to meet in order to have more influence in the system. To meet that threshold, it’s not up to Twitter or what we think, it’s up to the community. You have to contribute in a way that is found helpful by the community. So again, that process is in the people’s hands.
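For illustration, one way such a rating-based writing threshold could work is sketched below. Coleman doesn’t give the exact formula, so the data shape and cutoff are invented rather than Twitter’s real criteria:

```python
# Hypothetical sketch of an "earn the ability to write" gate: a contributor
# starts out rating-only and unlocks note writing once enough of their
# ratings agree with how the wider community ultimately scored those notes.
# The data shape and the cutoff of 5 are invented for illustration.
def can_write_notes(rating_history, required_matches=5):
    """rating_history: list of (my_rating, community_outcome) pairs,
    e.g. ("helpful", "helpful")."""
    matches = sum(1 for mine, outcome in rating_history if mine == outcome)
    return matches >= required_matches

# A contributor whose ratings repeatedly matched the community's verdict
# would qualify; one who mostly rated against it would not.
print(can_write_notes([("helpful", "helpful")] * 6))       # True
print(can_write_notes([("helpful", "not helpful")] * 6))   # False
```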

On top of all that, and probably the most important [measure], is the way we actually decide which notes to show, which is this bridging-based approach. Birdwatch does not use majority rule; it does not use most-likes-wins or anything like that. It identifies notes that have been found helpful by people who typically disagree, or who have tended to disagree with each other, on the belief that those notes are probably going to be helpful to people from a wide range of views. Those are the ones we show.

Obviously, people may still have questions about whether that’s sufficient, and we will continue to monitor how that’s all working, but the results are really encouraging. When we apply all these systems in the real world, with notes written by people, rated by people and selected entirely through this process, we see that the notes are consistently helpful. They’re informative, and they’re informative independent of party ID, which is amazing. And they’re informing people’s sharing choices. So, just by giving people information that is written by the people and selected by the people, people are choosing to not share these tweets as much. I think that’s kind of amazing and proof that this can work and can overcome what sometimes feels like overwhelming polarization. There is a space, though, that a lot of people can find helpful, one that actually leads people to update their beliefs and take action as a result.

I would think that’s the biggest question we’ve had. I imagine that may continue, but we hope the product shows what’s possible and that people, by experiencing it, come to realise that this really can work.

Gizmodo: Given the timing of the Birdwatch expansion in the U.S., some folks might believe that you all are going to lean on it heavily to monitor and debunk misinformation during the upcoming midterm elections. Is this the case or will we also be seeing other initiatives from Twitter to fight misinformation?

KC: Twitter has a whole set of initiatives around the election. Birdwatch is, as always, very much additive to everything else we’re doing. Our rollout is very much driven by when we think the product is ready, so we don’t set the schedule based on external events. We only want to expand it when we feel like it’s ready, when we feel like the quality is high, and also when either people will get a benefit or we will learn something from the expansion. So now feels like a good time. We’ve already been operating the service through the U.S. midterm primaries, so we already have some experience of how the product performs in elections. We feel ready to be expanding it now.

Gizmodo: Can you share a bit about what you all learned from how it worked in the primaries?

KC: Generally speaking, the main learnings are that the notes have been helpful. The notes are informing understanding, they’re generally accurate, they’re changing sharing behaviour. We’ve seen that be consistent for quite a long time across many different news events, whether that is an election context or a COVID and health context or a Ukraine conflict context. It seems to produce output that people find helpful in many different contexts, which is really encouraging.

Gizmodo: A report in the Washington Post this week talked about a leaked internal audit that revealed that an overt QAnon believer was accepted as a contributor on Birdwatch. Given the criteria you all have set, this is not surprising. What is your response to this news report and to people whose perception of Birdwatch may be tainted, so to speak, over the fact that a QAnon account was on Birdwatch?

KC: I think one really important thing for people to realise is that if one person could influence the outcome of Birdwatch, it wouldn’t work. We intentionally are allowing a wide range of people to sign up and we want people from different points of view to sign up. And amidst that, the system needs to be able to show notes that are found broadly helpful. It has to be able to have people of all different kinds of beliefs and all different kinds of motivations in it if it’s going to work.

We have focused on how to make that true, and it seems to be true with everyone so far. Birdwatch is consistently producing helpful, informative notes. So that’s really our focus. Singular accounts have not been a concern and generally can’t be a concern.

Gizmodo: So, will that QAnon account be kicked out of Birdwatch?

KC: I actually don’t know what account that is referring to or what that incident is referring to. I’ve read about it, but I’m not aware of us having an incident with a specific account.

Twitter does have policies broadly related to what accounts are allowed on the service, and there are some related to coordinated harmful activities. So, if an account is in violation of those, it will not be on Twitter and it won’t be in Birdwatch. We follow the Twitter policies in that regard. Maybe that’s a simpler way to answer that. If the account is allowed on Twitter and it passes the eligibility criteria for joining Birdwatch, then it is allowed in Birdwatch. We think that’s an important thing.

Gizmodo: Do you all still believe that Birdwatch can continue allowing people into the program without doing a more thorough analysis of them? And do you think contributors should still remain anonymous?

KC: We think it’s important that Birdwatch has people from a wide range of views in it and we think it’s important that Twitter is not curating who those people are. We want that. We want people’s ability to participate in Birdwatch and their influence in it to be gained in a fair and objective way. We think that’s really important for people trusting the process and trusting the output.

To achieve that, we focus on making the eligibility criteria simple and understandable and objective. The account has been on Twitter at least six months, verified phone number, no Twitter rule violations, things like that. And then, with updates [this week], we’re adding another layer on top of that where, to gain more capabilities, you have to have demonstrated helpfulness in the product. We think it will be a fairly strong process for ensuring quality is high. We will constantly be monitoring quality, and if we see issues with that, we will evolve the product just like we have with [this week’s] update. We are always open to changing the product, but we think this is a pretty strong start and the results so far have continued to be good in terms of quality.

Gizmodo: Last question. This week is a big week for Birdwatch. You all are expanding to more people in the U.S., and that’s exciting. However, you also have news reports about the leaked audit, which can generate concerns that Birdwatch can be misused by bad actors. What message do you want to send to the public in light of everything that’s happened?

KC: We think this is an exciting new approach. It’s a different way of tackling the problem. We’ve been really careful in the design of it and the rollout of it. We’ve sought a lot of input from the people Twitter serves, the people who are reading these [notes], the people who are writing and contributing to these notes. We’ve also sought input from academic advisers. We’ve run a large number of qualitative research studies with people. We’ve done a large number of quantitative studies about how this is performing, and it seems to work.

The results so far are really positive. It is found broadly helpful. It is informative. It’s informing people’s sharing behaviours, entirely by their choice. It’s just giving people information to make up their own minds. And so, I think the proof is in the pudding. I hope that people will look at the product and see what it’s doing and decide for themselves whether they think it’s helpful. So far, a lot of people have found it helpful. We hope that many will.

On top of that, we’ve wanted to build this in a really transparent way. A lot of people sometimes feel like social media algorithms and systems are black boxes. That’s why we’ve made all of the code that determines which notes to show publicly available as open source on GitHub. All contributions are made publicly available in downloadable data files so people can audit that. If people have questions about how it’s working, or want to investigate or audit it, or want to help us build it and make it better, they can do that, too. I would hope that anyone who has questions or wants to dive in deeper would take advantage of the resources that are out there.
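Coleman’s point about auditability is concrete: anyone can download the contribution files and check the numbers themselves. As a rough sketch (the file name and column names below are placeholders, not the published schema), such an audit could start by tallying ratings per note:

```python
# Minimal sketch of an outside audit of the public Birdwatch data. The file
# name and columns below are placeholders; consult the Birdwatch
# documentation for the actual download location and schema.
import pandas as pd

ratings = pd.read_csv("birdwatch_ratings.tsv", sep="\t")  # hypothetical local file

# Assumed columns: note_id, helpful (0/1); adjust to the real schema.
summary = (
    ratings.groupby("note_id")["helpful"]
    .agg(total="count", helpful_share="mean")
    .sort_values("helpful_share", ascending=False)
)
print(summary.head(10))
```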

