Researchers with the NATO Strategic Communications Centre of Excellence released a report this week showing that—shocker!—it remains as easy as ever to buy followers and engagement on social media sites like Facebook, Instagram, Twitter and YouTube.
The researchers wrote in the report that they tested the four sites’ defences by purchasing engagement on 105 separate posts through 16 “social media manipulation service providers” (11 of them Russian, the other five operating out of Europe). For just 300 EUR (roughly $330), the team bought 3,530 comments, 25,750 likes, 20,000 views, and 5,100 followers, identifying in the process 18,739 accounts “used to manipulate social media platforms.”
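For a rough sense of scale, those totals imply a blended price of barely half a euro cent per purchased engagement. Below is a minimal back-of-the-envelope sketch using only the figures quoted above; the report itself prices comments, likes, views, and followers separately and per platform, so this average is purely illustrative.

```python
# Back-of-the-envelope arithmetic using only the totals quoted above.
# Note: the NATO StratCom COE report breaks costs down by engagement type
# and platform; the blended average computed here is illustrative only.
purchased = {
    "comments": 3_530,
    "likes": 25_750,
    "views": 20_000,
    "followers": 5_100,
}
spend_eur = 300

total = sum(purchased.values())  # 54,380 engagements in all
print(f"Engagements bought: {total:,}")
print(f"Blended cost: {spend_eur / total * 100:.2f} euro cents per engagement")
```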
The team also found that four weeks later, about 80 per cent of the inauthentic engagements they had purchased were still online. They also flagged a sample of the nearly 19,000 book-cooking accounts to the social networks themselves and found that some 95 per cent remained online three weeks later.
The centre advises NATO and bears its name but is organisationally independent of the military alliance, according to the New York Times.
While most of the tests were run on accounts created for the experiment, the researchers also boosted some posts from verified accounts, such as those of European Union antitrust enforcer Margrethe Vestager and justice commissioner Vera Jourova, choosing only posts that were at least six months old and apolitical in content so as not to affect real conversations. According to the report, the verified accounts did not appear to be policed any better than the artificial ones.
The report states that far from being a “shadowy underworld,” social media manipulation is an “accessible marketplace that most web users can reach with little effort through any search engine.” In many cases, the firms involved in such work openly advertised purchasing fake engagement on the platforms themselves.
Results were delivered in under a day, with the exception of Twitter, where orders sometimes took up to two days to fill. Twitter removed roughly half of the purchased likes and retweets within the timeframe of the study, while Facebook proved best at identifying inauthentic accounts, though not at removing the content they produced. Instagram and YouTube fell far behind, with YouTube removing none of the 100 accounts reported to it and offering no explanation. YouTube was, however, the most expensive network to purchase engagement on, which is perhaps unsurprising given that video views on the site can translate directly into ad revenue.
The results are especially eyebrow-raising given that 2020 is a major election year in the U.S., where the exact impact of widespread social media disinformation campaigns (let alone social media itself) remains a matter of vigorous debate. They also indicate that despite assurances to the contrary, tech firms are doing a terrible job of preventing automated disruption of social media networks.
“We assess that Facebook, Instagram, Twitter and YouTube are still failing to adequately counter inauthentic behaviour on their platforms,” authors Sebastian Bay and Rolf Fredheim wrote in the report. “Self-regulation is not working. The manipulation industry is growing year by year. We see no sign that it is becoming substantially more expensive or more difficult to conduct widespread social media manipulation.”
“Given the low number of accounts removed, it is clear that social media companies are still struggling to remove accounts used for social media manipulation, even when the accounts are reported to them,” the researchers concluded. “… Even if the market is somewhat chaotic, it functions reasonably well and most orders are delivered in a timely and accurate manner. Social media manipulation remains widely available, cheap, and efficient.”
“We spend so much time thinking about how to regulate the social media companies—but not so much about how to regulate the social media manipulation industry,” researcher Sebastian Bay told the Times. “We need to consider if this is something which should be allowed but, perhaps more, to be very aware that this is so widely available.”
“Fake engagement—whether generated by automated or real accounts—can skew the perceived popularity of a candidate or issue,” Oxford Internet Institute researcher Samantha Bradshaw added in remarks to the paper. “If these strategies are used to amplify disinformation, conspiracy and intolerance, social media could exacerbate the polarization and distrust that exist within society.”
Experts told Gizmodo in 2018 that it is practically impossible to quantify how much of social media is made up of real people versus bots and sock puppet accounts, though all agreed that the share of inauthentic accounts was likely in the low double digits.
Companies that specialise in monetising these bot armies largely operate in a legal safe zone, though that is changing. Earlier this year, the New York Attorney General’s office announced a settlement with Devumi LLC, a “follower factory” put on blast by a 2018 Times article, in what it characterised as the first finding by a law enforcement agency that “selling fake social media engagement and using stolen identities to engage in online activity is illegal.”
The office said that the firm’s activities violated existing laws against fraud, false advertising, and (when said bots used real people’s photos) identity theft, as well as consumer protection statutes.