Facebook, in an attempt to clean up the mess it’s created, has announced yet another set of changes meant to help it be less awful.
Facebook is constantly updating or expanding on its procedures for mitigating the spread of misinformation, fake news, and hate speech on its platform.
In a lengthy blog post, Facebook VP of Integrity Guy Rosen and Head of News Feed Integrity Tessa Lyons on Wednesday laid out Facebook’s latest stab at saving face, notably with a few systems that may actually help curb some of the misinformation that tends to spread like wildfire on Facebook’s own platform and those of its subsidiaries.
As Facebook has overhauled its Community Standards in the past, it has stopped short of outright removing fake or misleading information that did not expressly violate its rules.
Instead, it opted to essentially demote that content by reducing its reach. However, Rosen and Lyons wrote this week that Facebook would begin scaling back the reach of Groups that “repeatedly share misinformation.”
In addition, Facebook will in the coming weeks be “holding the admins of Facebook Groups more accountable for Community Standards violations,” the blog post said. Going forward, the company will gauge the individual in-group actions of moderators and admins—such as post approvals—as a greater indicator of whether a Group violates its standards.
The most significant change is the introduction of a “Click-Gap” signal into user feeds. Click-Gap will essentially examine whether a domain draws far more clicks out of Facebook than it earns in inbound links from the rest of the web, which the blog post said “can be a sign that the domain is succeeding on News Feed in a way that doesn’t reflect the authority they’ve built outside it and is producing low-quality content.”
Think of it a bit like the way Google ranks pages, with rankings based on citations across the broader web rather than purely on Facebook’s internal engagement. And by “engagement,” we mean the link’s ability to enrage Facebook users.
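Facebook hasn’t published the actual formula, data sources, or thresholds behind Click-Gap, but a rough sketch of the idea might look something like the following. All the names, numbers, and the demotion threshold here are illustrative assumptions, not Facebook’s implementation:

```python
# Toy sketch of a Click-Gap-style signal. Illustrative only: Facebook has not
# published its formula, thresholds, or data sources.

def click_gap_score(facebook_outbound_clicks: int, web_inbound_links: int) -> float:
    """Ratio of the traffic a domain gets from Facebook to the authority it
    has built on the wider web. A high ratio suggests the domain thrives on
    News Feed engagement without comparable standing elsewhere."""
    # Add 1 to avoid dividing by zero for domains with no inbound links at all.
    return facebook_outbound_clicks / (web_inbound_links + 1)

def should_demote(domain_stats: dict, threshold: float = 50.0) -> bool:
    """Flag a domain for reduced News Feed distribution if its Click-Gap
    score exceeds an entirely hypothetical threshold."""
    score = click_gap_score(
        domain_stats["facebook_outbound_clicks"],
        domain_stats["web_inbound_links"],
    )
    return score > threshold

# Example: a hypothetical clickbait domain with huge Facebook traffic but
# almost no inbound links from the rest of the web.
example = {"facebook_outbound_clicks": 1_200_000, "web_inbound_links": 40}
print(should_demote(example))  # True under these made-up numbers
```

The point of a ratio like this, rather than raw engagement, is that a site can’t buy its way to authority just by being shareable; it has to have a footprint beyond News Feed.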
Additionally, Facebook said it would be bringing in “outside experts” to weigh in on how to combat its fake news problem, and will expand the role of the Associated Press in its ongoing fact-checking efforts. Facebook will also allow users to scrub their past posts from a group when leaving it.
One thing it won’t be doing, however, is debuting its long-anticipated Clear History tool, which would allow users to clear the information the company has collected on them.
The feature was expected last year, but after ongoing delays, Engadget reported Wednesday that the company is again pushing its release back to fall of this year.
That Facebook is moving more aggressively on misinformation and fake news is certainly commendable, and there are probably worse methods for going about it. Mitigating the spread of such information is no simple task, particularly as the company navigates blowback from all sides about the content that is and is not promoted on its platform with its constantly evolving algorithm and rules.
However, despite these or any other changes, scrubbing problematic content from the site completely is a near-impossible goal, given its more than 2 billion monthly users.
History has demonstrated that Facebook lacks the ability to moderate its product in any way that remotely approaches adequate. Even Mark Zuckerberg has acknowledged that with respect to issues like “election interference or harmful speech, the problems can never fully be solved.”
As for its new system for demoting bad news sources, the company doesn’t make clear whether sites that are widely read off of Facebook but spread fake or misleading news will still be able to thrive on its platform.
[Facebook]