An internal Twitter report found that right-leaning content on the platform enjoyed disproportionate amplification compared to left-leaning content. Twitter decided to release the report publicly, though it doesn’t yet have an explanation for why this occurs.
The study focuses on Twitter’s recommendation algorithms, so if you routinely tell Twitter to sort your feed chronologically (despite it constantly wanting to switch back to “Home”), this won’t affect you. If, however, you deliberately follow people who disagree with you to escape your social media bubble, Twitter’s recommendations on the Home screen can be particularly intolerable.
The study analysed millions of tweets from April 1 to August 15, 2020, capturing the lead-up to the US presidential election, and aimed to answer the following questions:
- How much algorithmic amplification does political content from elected officials receive in Twitter’s algorithmically ranked Home timeline versus in the reverse chronological timeline? Does this amplification vary across political parties or within a political party?
- Are some types of political groups algorithmically amplified more than others? Are these trends consistent across countries?
- Are some news outlets amplified more by algorithms than others? Does news media algorithmic amplification favour one side of the political spectrum more than the other?
The first part of the study examined tweets from elected officials in Canada, France, Germany, Japan, Spain, the UK and the US. No difference was found between algorithmic amplification on the Home screen and in the reverse chronological timeline.
The second part looked at news outlets, and had some more interesting findings. Quoting the report:

> In six out of seven countries — all but Germany — Tweets posted by accounts from the political right receive more algorithmic amplification than the political left when studied as a group.
>
> Right-leaning news outlets, as defined by the independent organisations listed above, see greater algorithmic amplification on Twitter compared to left-leaning news outlets.
Importantly, the content of these tweets wasn’t considered. Twitter looked at the political affiliations of the tweeters, as confirmed by external, public sources, and then measured how amplified those users became.
Another interesting finding was that “group effects did not translate to individual effects”, so despite being within the same factions, two different individuals could experience different levels of amplification.
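As an illustration only (this is not Twitter’s published methodology, and all names and numbers below are invented), amplification can be sketched as the reach a tweet gets in the ranked Home timeline relative to the chronological baseline. Computing the ratio per author and for the pooled group shows how a group-level figure can mask opposite individual experiences:

```python
# Hypothetical impression counts per tweet: (algorithmic_home, chronological).
# Authors and figures are made up for illustration.
tweets_by_author = {
    "official_A": [(1200, 800), (900, 600)],    # individually amplified
    "official_B": [(500, 1000), (300, 600)],    # individually de-amplified
}

def amplification_ratio(pairs):
    """Total reach in the ranked Home timeline divided by chronological reach."""
    algo = sum(a for a, _ in pairs)
    chrono = sum(c for _, c in pairs)
    return algo / chrono

# Per-individual ratios can diverge sharply...
individual = {who: amplification_ratio(p) for who, p in tweets_by_author.items()}

# ...while the group-level ratio tells a single, blended story.
group = amplification_ratio([p for ps in tweets_by_author.values() for p in ps])

print(individual)  # official_A is boosted 1.5x, official_B suppressed to 0.5x
print(group)       # yet the group as a whole sits near parity
```

Here official_A and official_B belong to the same group, yet one is boosted and the other suppressed, which is exactly the “group effects did not translate to individual effects” pattern described above.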
Twitter has said the next step is to identify the root cause of the disproportionate amplification and, if possible, fix the algorithm’s structure to eliminate it. It believes that some amplification is inherent to recommendation algorithms, but that it should be equal across both sides of the political spectrum.
The data will be made available for independent researchers to replicate Twitter’s findings, but Twitter is currently seeking a way to do this without compromising privacy. Even with names and other details hidden, it only takes a few data points to deanonymise someone, especially when the metrics under study are themselves highly identifiable: the number of followers, for example, could be relevant to calculating how much amplification someone received.
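A toy sketch of why this is hard: even with names stripped, a couple of public attributes can single a record out. The records below are invented; the point is that a value like follower count, which is visible on every public profile, acts as a near-unique fingerprint that can be matched back to a real account:

```python
from collections import Counter

# Invented "anonymised" records: no names or handles, just public metrics.
records = [
    {"followers": 152_340,   "country": "US"},
    {"followers": 98,        "country": "US"},
    {"followers": 98,        "country": "UK"},
    {"followers": 2_741_003, "country": "FR"},
]

# How many records share each (followers, country) combination?
counts = Counter((r["followers"], r["country"]) for r in records)
unique = [key for key, n in counts.items() if n == 1]

# Every record here is unique on just two attributes, so anyone could
# look up these follower counts on public profiles and re-identify them.
print(f"{len(unique)} of {len(records)} records are uniquely identifiable")
```

This is why simply redacting names is not enough, and why releasing research data built on public, searchable metrics requires more careful anonymisation techniques.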
Twitter has previously investigated its image cropping system that inadvertently displayed racial bias, and found the underlying reason for it. Hopefully it can do the same thing here.