Social media is a content minefield, and Britain’s had it. Today, the British government announced it will appoint its media watchdog Ofcom as a regulator for social media content in a bid to rein in platforms such as Facebook, YouTube, and TikTok, among others.
The UK’s Department for Digital, Culture, Media, and Sport (DCMS) says the move is part of an effort to “protect children and vulnerable people online” and is geared toward cracking down on content involving terrorism and child abuse. Ofcom, which already regulates Britain’s broadcasting and telecom sectors, will be granted “new powers” to force internet giants to comply. That said, the government’s announcement was vague as to what exactly those new powers will be. It’s similarly vague on punishment: all it says is that companies will be held to account if they do not quickly remove what the regulators deem to be harmful content.
“We will give the regulator the powers it needs to lead the fight for an internet that remains vibrant and open but with the protections, accountability, and transparency people deserve,” said DCMS Secretary of State Nicky Morgan in a statement.
The announcement was careful, however, to note that appointing a regulator wouldn’t necessarily interfere with free speech, the press, or businesses with an online presence. Specifically, it stated that new regulations would not prevent adults from “accessing or posting legal content that some may find offensive.” Rather, it puts the onus on platforms to define what is and is not acceptable on their sites in their terms and conditions, something that most social media sites already do. It goes on to say that companies will have to enforce those terms “effectively, consistently, and transparently,” but doesn’t explain what that would look like. The statement does clarify that the regulations are aimed at the bigger social media platforms, mainly sites that enable user-generated content. Businesses that simply have a presence on, say, Instagram or Facebook may not be impacted.
It’ll be interesting to see how social media platforms react. While it seems the British government is laser-focused on child pornography and terrorism, content moderation and harassment have long been thorny issues that tech giants have repeatedly fumbled. Twitter notoriously borked its own platform trying to de-Nazify its ad-targeting program. The company has also bungled most of its attempts to address harassment. Other platforms haven’t fared much better. Facebook’s AI moderation has gaps you could drive a truck through, and its human moderators are miserable. YouTube also has an extremist problem despite vows to curb hate speech, and creators on the platform often complain its policies are inconsistently enforced. Even relative newcomer TikTok, famous for delinquent teen shenanigans, has had a rough go of it.
Gizmodo reached out to several social media platforms for comment but did not immediately receive a response. It did, however, get a statement from the Internet Association, a trade group that counts Facebook, Google, Reddit, Snap, and Twitter among its members.
“Internet companies are committed to working with government to achieve our shared goals of keeping people safe online and ensuring that the internet continues to deliver benefits to the economy and society,” said Daniel Dyball, UK executive director at the Internet Association. “We will continue our constructive engagement on the issues of concern that are still under review (for example, the scope of regulation, treatment of legal but potentially harmful content, and enforcement powers) and look forward to the full consultation response in the spring. These are complex topics, and it is essential that we find a solution that both enhances digital safety and fosters a thriving internet economy.”
According to the DCMS, more details regarding enforcement and Ofcom’s regulatory powers will be decided later this autumn. The department also plans to work on legislation in the meantime.