The AI Deepfake Controversy Is Bigger Than Taylor Swift

Over the last week, deeply concerning pornographic AI-generated images of Taylor Swift were shared across X (formerly Twitter) and the wider internet. 

Disturbingly, one of the images was live for 17 hours, racking up 45 million views and hundreds of thousands of likes, reposts and bookmarks before X banned the account. 

The term “Taylor Swift AI” was trending on X across the globe. 

X responded by blocking the search term “Taylor Swift”, taking a long-awaited step down the road of content moderation. However, as of publication, users can search the term once again. 

Reacting to the news, several organisations have voiced their concern, with SAG-AFTRA releasing a statement calling the images “upsetting, harmful, and deeply concerning”. Even the White House has weighed in, calling the situation “alarming”. 

A handful of US senators have also introduced a bill this week to criminalise the spread of non-consensual sexualised images generated by AI. 

Swift hasn’t made a public statement, but sources say she is “furious” and pursuing legal action.

Deepfakes are not a new technology by any means, but the recent surge in generative AI has made this kind of content far easier to create and access online. 

While this may seem like a problem that only affects celebrities, this type of abuse can happen to anyone, and it’s on the rise. 

An identity fraud report from Sumsub released last November showed the APAC region experienced a 1,530 per cent surge in deepfake cases between 2022 and 2023. 

eSafety Commissioner Julie Inman Grant said deepfakes represent one of the most “egregious invasions of privacy”.

“The rapid deployment, increasing sophistication and popular uptake of generative AI means it no longer takes vast amounts of computing power or masses of content to create convincing deepfakes,” she said. 

“As a result, it’s becoming harder and harder to tell the difference between what’s real and what’s fake. And it’s much easier to inflict great harm.”

Mark Van Rijmenam, a strategic futurist and author, told Gizmodo Australia there is “nothing we can do” to stop deepfakes from being created and spread.

He also noted how easy it is to cause harm with deepfakes. 

“It’s very easy to do harm and platforms like X just take way too long to moderate this stuff. Which is not surprising given the state of X at the moment. This is a really big problem,” he said. 

Cracking down on harmful content

While there is little to stop this content from being created, victims do have a way to complain. The eSafety Commission runs an image-based abuse scheme where people can report doctored or real intimate images of themselves that have been shared, or threats to share them. 

Through this scheme, Inman Grant said, the commission is already receiving reports containing deepfakes.

“We are already receiving reports containing synthetic (AI generated) child sexual abuse material, deepfake videos created by teens to bully their peers and of course, deepfaked porn through our image-based abuse scheme,” she said.  

Last November, the eSafety Commission began public consultation on draft industry standards which will require tech companies to do more to tackle seriously harmful content, including online child sexual abuse material and pro-terror content.  

The standards address the production, distribution and storage of “synthetic” child sexual abuse and pro-terror material, created using open-source software and generative AI. 

Can I go to jail for creating or distributing a deepfake?

So, if someone creates and distributes a sexually explicit deepfake of you or someone you know in Australia, can they be legally charged? Yes and no. 

Alec Christie, a partner at Clyde & Co who specialises in AI, data privacy and deepfakes, told Gizmodo Australia that there is no specific national legislation covering the creation and distribution of deepfakes.

However, the particular areas in which deepfakes are used, such as pornography, revenge porn, intimate images and child sexual abuse material, can be covered by other laws. 

“For example, the Online Safety Act prohibits non-consensual sharing of intimate images and specifically includes what they called altered images, i.e. deepfakes,” Christie said. 

Complaining to the eSafety Commissioner could result in bad actors being fined hundreds of thousands of dollars, Christie explained. 

“The important thing is [the eSafety Commissioner] can send a notice to the relevant ISP etc, to say take it down,” he said. 

While current laws do penalise those who distribute deepfakes, worryingly, they don’t punish those who create them. 

“There’s no crime or there’s no civil regulation for me not to create [deepfakes]. There is no prohibition on me for having it. It comes in on the use of the distribution,” Christie said. 

Clearer regulation needed 

Christie has called for a national approach to deepfake regulation in Australia, covering both nefarious and non-nefarious uses. 

“This is the hot topic at the moment, how to run deepfakes, how to manage deepfakes and in my humble opinion, we’ve got to not just have bits and pieces, whether it’s an intimate or whether it’s a consumer issue,” he said. 

“We’ve got to have and I hate saying this because, we’re over regulated, but we’ve got to have a national approach to deepfakes.”

He suggested an approach similar to one in the US, where images would be labelled to indicate whether they have been altered by AI. 

“All of the aspects, the nasty stuff that happened to Taylor, all the way down to the cheapy embarrassing stuff,” he said.

He suggested some factors that any regulation would need to clarify: “When it can be done, when it can’t, what the penalties are and some sort of ability for people to complain without necessarily spending $10 million on legal fees.”

Platforms to step up 

The platforms used to distribute this troubling content also need to take responsibility for its spread. 

Inman Grant said a “greater burden” must fall on the “purveyors and profiteers of AI” to take a robust safety by design approach. 

“So that they are engineering out misuse at the front end. We’re not going to regulate or litigate our way out [of] this – the primary digital safeguards must be embedded at the design phase and throughout the model development and deployment process,” she said. 

Inman Grant highlighted the need for tech platforms to do much more to stop the sharing of these images. 

“And platforms need to be doing much [more] to detect, remove and prevent the spread of this extremely harmful content,” she added. 

Van Rijmenam is of a similar mind, saying there needs to be better moderation on social platforms like X.

“One of the tweets took 17 hours to take down, which is simply way too long. In 17 hours, there can be millions of copies on the internet already,” he said. 

“These big tech companies should be required to do a much better job at finding deepfakes and removing deepfakes. And unfortunately, X is not really an example at the moment of motivation.

“We need to require these big companies to take a bigger step because it can definitely ruin a society. It’s become a [much] bigger problem.”

Image: Getty Images

