How the War on Terror Set the Stage for Today’s Moderation Wars

This week, the Supreme Court heard two cases that could upend the way we’ve come to understand freedom of speech on the internet. Both Gonzalez v. Google and Twitter v. Taamneh ask the court to reconsider how Section 230, the law that protects companies from legal liability for user-generated content, should be interpreted. Gizmodo is running a series of pieces about the past, present, and future of online speech.

There were surely a number of events that led a quiet middle-class teenager to become one of the most influential terrorist propagandists of all time, but one of the first was that he set up a Blogspot account.

In 2003, Samir Khan, a naturalised U.S. citizen born in Saudi Arabia, was barely 18 years old when he launched “InshallahShaheed,” which translates to “Martyr, God Willing,” a blog in which he poured out his thoughts about why America deserved “hellfire” for the wars in Iraq and Afghanistan. Khan grew up in Queens, ostensibly the product of a normal childhood. At some point, he decided that he hated America and wanted to join a holy war against it. Through his blog, Khan swiftly became something of an icon and used his status to foster ties to various high-ranking al-Qaeda members.

Eventually, Khan would become the editor of Inspire, the terror group’s web magazine dedicated to recruiting western Muslims to violent jihad. The magazine, distributed only as a downloadable PDF, was full of grisly stuff, including articles advocating the murder of U.S. government employees and one infamous piece encouraging would-be terrorists to build bombs in their mum’s kitchen. It would also reportedly serve as the inspiration for numerous real-world terrorist attacks, including the Boston Marathon bombing.

As for Khan, he seemed to relish his role as the mouthpiece for the world’s most feared terror group. “I am proud to be a traitor to America,” he wrote, in one notorious screed. He predicted a future in which America would be overrun by jihadists.

Of course, that future never materialised. Roughly a year after making that post, Khan was silenced permanently. In September of 2011, while living in Yemen, Hellfire missiles from a U.S. Predator drone struck the convoy the 25-year-old blogger was travelling in, killing him. The government said that the primary target of the strike had been Anwar al-Awlaki, another U.S. citizen, a friend of Khan’s, and, through his online videos, one of the most influential radical clerics at the time. The targeted assassination of both men was unprecedented for many reasons, not least of which was that it involved the killing of two U.S. citizens without a trial or even a coherent legal pretext.

Two rights groups, including the ACLU, later sued the U.S. government over the drone strike, arguing that its actions were unconstitutional. Hina Shamsi, director of the ACLU’s National Security Project, characterised the lawsuit as a challenge to “the constitutionality of [the government’s] killing of American citizens without due process, based on vague and constantly changing legal criteria and secret evidence that was never presented to a court.”

“At the time, the government was taking really unprecedented and extraordinary positions. It was claiming the power to use lethal force against its own citizens and arguing that the court should have no role at all to play in reviewing its actions,” Shamsi told Gizmodo. As to Khan’s role as a propagandist, Shamsi noted that Khan was never officially charged with a crime. “The government can’t kill people based on their speech alone [in this country] — that’s pretty fundamental,” she said.

However, whether or not Khan was technically guilty of a crime, the truth was that he had been the mouthpiece for some truly horrendous stuff. Walking a fine line between incitement to violence and a constitutional grey zone where rhetorical ugliness is tolerated, Khan’s online presence was an early example of what has now become the fundamental dilemma of the social media age: how to deal with internet speech that’s considered undesirable.

It’s a dilemma that still plagues us, posing questions with no easy answers: What kind of speech should be allowed? What doesn’t qualify? And what should be done with the speech that crosses the line?

Dealing with Problematic Content (or Not)

This week, the Supreme Court heard two cases that challenge our understanding of Section 230 of the Communications Decency Act, the landmark 1996 law that gives web platforms broad immunity from legal action over the content they host. One case, Gonzalez v. Google, seeks to hold Google and its subsidiary YouTube partially responsible for the ISIS terrorist attacks that took place in Paris in 2015. The lawsuit, filed by the family of one of the victims, argues that Google “aided and abetted” one of the shooters: YouTube had failed to take down ISIS videos, which were later allegedly recommended to him by the platform’s algorithms. The other case, Twitter v. Taamneh, makes a similar argument about Twitter’s past hosting of terrorism-related material.

It’s interesting that these issues continue to haunt social media platforms because, for a very long time, extremist content was a problem the platforms didn’t really want to admit existed. And, thanks to the protections of Section 230, they didn’t really have to worry about it.

The Middle East Media Research Institute, or MEMRI, which tracks the proliferation of Islamist extremist content online, spent years attempting to get major tech companies to take action against extremists. During the early years of the social media industry, it was mostly a lost cause. MEMRI’s executive director, Steven Stalinsky, remembers one particular meeting he and his colleagues had with Google’s senior policy team back in December 2010. According to him, the meeting was most memorable for how much “screaming” it involved.

“We were being yelled at by their lawyers. It went on for a long time,” Stalinsky recalled, in a phone call with Gizmodo. He said that, at that particular meeting, Google’s team was upset about numerous reports MEMRI had put out accusing the tech giant of hosting terrorist content. Indeed, at the time, it wasn’t unusual to find YouTube videos of al-Qaeda adherents proselytising violent jihad. Despite the volume of this kind of content floating around its video site, Google wasn’t very good at taking it down.

Twitter had a similar problem on its hands. In the early days of the microblogging app, radical extremists flocked to the platform and set up shop to spread their gospel. Extremist sheikhs used their accounts to advocate for jihad, with seemingly little awareness of, or action from, Twitter’s management. When ISIS emerged, it too found Twitter incredibly useful. By one count in 2015, the group’s supporters operated tens of thousands of accounts on the platform.

“They didn’t want to deal with it,” Stalinsky said, of the social media platforms. “They were preoccupied with other stuff and I don’t think they saw moderation as a major priority at the time. A lot of these companies were created by pretty young guys who were very good at coding but weren’t really ready for the national security implications of what they’d built,” he added.

It wasn’t until Islamic State fighters began using YouTube and Twitter to host videos of American journalists getting beheaded that the major platforms were finally forced to confront their own inaction. The gruesome killing of American journalist James Foley, in particular, became a flashpoint for change. “That was absolutely the turning point,” said Stalinsky. “There was so much government pressure, so much bad press — it was impossible for them not to do something about it.”

YouTube, for its part, says it has put markedly more effort into content moderation in recent years. When reached for comment by Gizmodo, a company representative said: “With respect to our policies prohibiting violent extremist content, we’ve been very transparent over the last several years about our efforts in this space, and the dramatic increase in investments starting in 2016-2017.” The representative added that, today, the platform uses a combination of “machine learning technology and human review” to catch violent videos, while its Intelligence Desk, a group of specialised analysts, “work to identify potentially violative trends before they spread to our platform.”

Still, not everybody is happy with Big Tech’s efforts to clean itself up. After platforms started paying closer attention to the content they were hosting, community guidelines expanded and account suspensions became routine. It wasn’t just terrorists getting booted from platforms anymore; it was a whole lot of different kinds of people. As a result, complaints from folks who felt they’d been undeservedly “cancelled” or “shadowbanned” rose, and allegations of political bias, of all different stripes, became a staple.

Aaron Terr, director of public advocacy at the free speech organisation FIRE, said the major platforms’ moderation strategies may be well-intentioned, but overall they’re a bit of a mess.

“Right now you have lists of complex and vague rules, enforced without transparency by highly imperfect algorithms and underpaid, overworked staff, who are often reviewing content with little understanding of the cultural context or even the language,” he said, noting that companies like Meta have been known to hire low-paid moderation staff in African countries.

Meanwhile, other firms, like Twitter and YouTube, rely heavily on algorithmic moderation, which is prone to bans that can seem quite arbitrary. “Often, users don’t receive detailed notice about how their content violated any rules, it’ll just be like, ‘You violated our community guidelines,’ or ‘You violated our policy on hate speech,’ but it doesn’t explain how,” Terr said. “There’s a lot they could do to better fulfil the promise that they’re free speech friendly platforms.”

Forever War

The problem with moderation writ large is that getting rid of speech, or a speaker, usually doesn’t change much. For social media companies, de-platforming extremist accounts may solve their own problems with advertisers, but it doesn’t erase the users from the internet. Instead, it just pisses them off and pushes them to other platforms where moderation is even thinner on the ground.

Today, the single biggest platform for distributing terrorist propaganda on the web is Telegram, according to Stalinsky. There, terrorist groups are largely allowed to flourish free of any kind of censorship, similar, in some ways, to the way right-wing trolls proliferate on sites like 4chan and 8kun. The shift started sometime around 2015, when Twitter and other platforms finally got serious about kicking hardline terror groups off their services. Ejected from the bird app, the same sorts of web cretins inevitably set up shop on the semi-encrypted messenger, using its channels for a variety of unsavoury activities, including fundraising, recruitment, and, most disturbingly, the distribution of “kill lists”: contact information and other personal details of people considered undesirable. While the platform has made modest attempts to ban this type of activity, Stalinsky notes that it still runs rampant. Telegram didn’t respond to a request for comment from Gizmodo.

Even if Telegram were to crack down on its most odious users, there’s a bigger problem that nobody knows how to solve: the same people can simply move to other platforms or hosting companies, or, given the right resources, self-host their own content. We’ve seen exactly this trend among the right-wing figures and groups who have been “de-platformed” from major sites. Most prominently, conspiracy theorist Alex Jones was infamously kicked off Twitter and YouTube, and court filings show that he racked up record-setting revenues as he pivoted to self-hosting. Meanwhile, the controversial banning of right-wing accounts on major social media sites has helped give rise to an alt-right social media ecosystem, a segregated industry populated by people jacked up on similar grievances. In short: de-platforming has arguably fuelled the problem of toxic content and helped further polarise the internet.

In Khan’s case, his death didn’t stop Inspire from getting published. The magazine soldiered on for years after the 2011 drone assassination, releasing a number of additional issues, the latest of which appeared in 2021. The only change was that, after Khan’s killing, al-Qaeda made its editor anonymous, adding an extra layer of protection to its operations.

At the same time, Inspire has since been joined by a slew of other extremist rags, all of which mimic Khan’s style. Last summer, the Anti-Defamation League reported that three new Islamist publications, two from al-Qaeda and one from ISIS, were seeing intense online readership. “All three magazines are positioned to fill the void left by the dissolution of Al Qaeda’s notorious Inspire magazine,” the advocacy group reported. Facts like these point to a broader futility at the heart of the forever war on bad internet speech and the virulent ideas that fuel it. Try as it might, the U.S. just can’t seem to stamp out the insurgent voices that wish it harm, at least not permanently; inevitably, new voices, empowered by similar ideas, grow up like weeds in place of the ones that have been cut down.

A final absurdity of this whole mess is that, even if those subversive forces somehow manage to win (and they rarely do), they inevitably become trapped by the same dilemmas that plagued the forces they sought to overthrow. A Business Insider piece published in January found that the Taliban, who spent two decades fighting the U.S. occupation of Afghanistan, seem bored with their recent victory. Having vanquished America’s foreign hordes, many former fighters are now said to be saddled with “desk jobs,” spending their days doom-scrolling Twitter, benumbed by “everyday urban battles like internet addiction and difficult bosses.” One former sniper reportedly said: “The Taliban used to be free of restrictions, but now we sit in one place, behind a desk and a computer 24 hours a day, seven days a week…life’s become so wearisome; you do the same things every day.”

Having thrown off the yoke of what they saw as an oppressive empire, the Taliban’s digital insurgents may soon turn their attention to moderation wars of their own.

