The Digital Industry Group (DIGI) has launched a new voluntary code of practice for tech companies to combat the spread of misinformation and disinformation in Australia. So far it has been adopted by Google, Microsoft, TikTok, Twitter and, ironically, Facebook.
This disinformation code has come at a crucial time
DIGI has been working on the Australian Code of Practice on Misinformation and Disinformation since 2019. Its development was requested by the federal government as a response to the Digital Platforms Inquiry mere months before COVID-19 changed the world.
Since then, the spread of misinformation and disinformation has seemed more pertinent than ever. Conspiracy theories have run rampant over the past 12 months.
These include wildly unproven theories about the origins of COVID-19 (including blaming 5G), dangerous treatment recommendations (some of which were endorsed by former President Donald Trump) and anti-vaxxer misinformation now that vaccines are beginning to roll out.
Misinformation in Australia
Here in Australia we have seen the spread of COVID-19 misinformation from high profile sources.
Former celebrity chef Pete Evans was removed from Facebook for repeatedly posting coronavirus conspiracy theories.
Evans was known to post health disinformation and conspiracy theories as well as anti-mask and anti-vaccine content during 2020.
He recently announced he would be running for a Senate seat in NSW as part of The Great Australian Party.
Federal Liberal MP Craig Kelly has also repeatedly posted misinformation to Facebook and has openly undermined Australia's COVID-19 vaccination rollout.
Facebook finally banned Kelly from its platform in mid-February.
But these are just a few examples in a sea of misinformation on social media. And it took a long time for these figures to be removed from the platform — by then, their disproven messages were already firmly out there.
Similarly, Donald Trump was able to use his platforms to spread misinformation and disinformation for years. He was only removed in the final days of his presidency due to the Capitol riot.
DIGI’s new code has clearly come at a crucial time, particularly considering Facebook’s recent actions in Australia.
What it actually does
“People misleading others, or people being misinformed, are not new problems — but the digital era means that false information can spread faster and wider than before,” DIGI Managing Director, Sunita Bose, said in a statement.
“In this code, we’ve worked to get the balance right with what we think people expect when communicating over the Internet. Companies are committing to robust safeguards against harmful misinformation and disinformation that also protect privacy, freedom of expression and political communication.”
But what does that actually mean?
There are several major objectives of the voluntary code that aim to remove misinformation and disinformation, as well as educate Australians on how to better identify it on tech platforms.
Firstly, the code aims to provide safeguards against Harms that may arise from disinformation and misinformation.
Some examples given in the code include, but aren't limited to, implementing policies and procedures that require human review, labelling content as false or misleading, removing content and suspending accounts.
Another proposed safeguard includes “prioritising credible and trusted news sources that are subject to a published editorial code.” Considering that Facebook just banned news in Australia, we’re guessing it won’t be implementing that one for the time being.
Another objective of the code is to disrupt advertising and monetisation incentives for disinformation. In other words, make it more difficult for people to profit from content pushing disinformation.
The rest of the objectives include:
- Work to ensure the integrity and security of services and products delivered by digital platforms
- Empower consumers to make better informed choices of digital content
- Strengthen public understanding of disinformation and misinformation through support of strategic research
- Improve public awareness of the source of political advertising carried on digital platforms
- Publicise the measures signatories take to combat disinformation and misinformation
The last objective involves companies committing to publishing and implementing policies as well as providing annual transparency reports.
The inclusion of political advertising is particularly interesting, and the code addresses this: “While Political Advertising is not Misinformation or Disinformation for the purposes of the Code, Signatories will develop and implement policies that provide users with greater transparency about the source of Political Advertising carried on digital platforms.”
The government approves of the disinformation code
The code will be overseen by the Australian Communications and Media Authority (ACMA) and has the support of the federal government.
“We’ve all seen the damage that online disinformation can cause, particularly among vulnerable groups. This has been especially apparent during the COVID-19 pandemic,” Paul Fletcher, Minister for Communications, Urban Infrastructure, Cities and the Arts, said in a press release.
“The Morrison Government will be watching carefully to see whether this voluntary code is effective in providing safeguards against the serious harms that arise from the spread of disinformation and misinformation on digital platforms.”
According to Fletcher, ACMA will report to the government on the effectiveness of, and initial compliance with, the code by June 30, 2021.
This news comes less than a week after Facebook removed the sharing of reputable news sources from its platform in Australia. Meanwhile, fake news pages that are known for sharing misinformation and disinformation remain on the platform.