Web wipe: combatting hyperreal deepfakes
AI’s latest jaw-on-the-floor moment comes courtesy of Sora, a generative tool from OpenAI that produces stunning videos from text prompts. While there are other video models, Sora leaps far ahead with HD imagery and multi-shot edits. Most impressively, Sora can generate videos up to 60 seconds long, versus 5 seconds for Google’s Lumiere (launched in January – things are moving fast).
This immensely powerful model obviously brings risks: deepfakes, hateful imagery, harassment, and other nefarious uses. OpenAI knows this and is taking safety measures, such as red teaming with help from experts in misinformation, hate and bias. But it also says the system will need ‘learning from the real world’ to be safe. Translation: ‘We’ll probably have a big problem like that Taylor Swift porn crisis at some stage, but we’re going to release and figure out solutions on the fly.’
But the appearance of a model this powerful means practical issues need to be thought through now. How will innocent victims of deepfakery get the content expunged from the internet before it ruins their lives? Luckily, I had a coffee date booked in with Bryce Craig, a lawyer from Gilbert + Tobin, who knows as much about this issue as anyone and was kind enough to explain the current state of play.
The good news is that Australia is somewhat served by existing legislation. The Online Safety Act provides a channel for victims of non-consensual intimate imagery to seek redress. This includes deepfake porn, provided it depicts an Australian citizen. The most common outcome of complaints is that the content is taken down quietly, either by the person who published it or by the relevant online platform. But if the person who published the material refuses to remove it, the eSafety Commissioner can take them to court. Last year Antony Rotondo, owner of a website called Mr Deepfakes, was prosecuted and fined $25,000 in the first case of deepfake pornography to reach the courts in Australia.
There are also laws in Australian states that can apply to deepfake pornography. Rotondo, for example, is also facing obscene publication charges in Queensland. Bryce notes that existing criminal and telecommunications laws also provide options, but queries their practicality: “How often do we see those things enforced nimbly enough to be effective for an everyday person? Often the police will refer victims back to eSafety.”
The pathway out of a non-sexual deepfake nightmare is less clear. Australian consumer law offers some protection where a celebrity appears to endorse a product in a fake video. And where a person is falsely depicted committing a criminal or unsavoury act, defamation law provides a pathway to seek public correction and damages. But defamation proceedings are expensive and drawn out, and while an interim injunction is possible, takedown is not assured. And of course, material shared with any measure of virality is seldom scrubbed from the internet completely.
The biggest gaps concern deepfake content that is neither sexual nor commercial in nature, particularly political content. For example, beyond relying on the platforms, how do you ensure that a deepfake video of a politician contradicting their publicly held positions does not proliferate on the eve of an election? One possible solution is the government’s proposed disinformation laws, but these have been met with scepticism and, in some cases, open derision, and are unlikely to oblige platforms to take down political content, fake or not. See our submission on the draft bill for more on this issue.
Sora’s fidelity starkly illustrates that this is not tomorrow’s problem. Deepfake incidents have already started to plague politics and we’re almost certain to see more during this year’s US election. Australia’s election is coming in 2025 – we need a practical solution, but this is a particularly intractable problem.
Shaun Davis, UTS FASS Masters student