Artificial intelligence is a tool that can magnify creative possibilities, but as experts at a recent SXSW Sydney 2025 panel warned, it can also reproduce society’s existing biases and inequalities.

AI generates millions of images and vast quantities of text every day. Beyond biases in training data, another concern is 'model collapse': when AI models are trained on the outputs of other AI systems, their content becomes increasingly homogeneous.

For Marie Conley, Executive Strategy Director at R/GA Australia, representation in AI became personal when a Chief AI Officer described AI-generated hands as “creepy”. Conley, the mother of a daughter with a limb difference, was furious.

“My daughter has a limb difference. She is not creepy,” Conley said. “After the rage subsided, I realised that as part of the creative innovation industry, I could address representation in AI.”

Conley launched The Santa Clara Project at SXSW Sydney, which aims to build diversity into the creative process and encourage scrutiny of AI-generated content.

“Just as we’ve pushed for better representation in film and TV, let’s pledge to scrutinise AI-generated content. Let’s prompt for equity,” she said.

Awareness, she stressed, is the first step. Bias seeps into AI through the data it learns from: Wikipedia is a key training source, and the majority of its editors are male, she said. The imbalance extends to AI engineering, where male engineers outnumber female engineers four to one.

Sophie Farthing, Head of the Responsible Technology Program at the Human Technology Institute at the University of Technology Sydney, spoke on the panel about the need for safeguards in the development and use of AI.

A human rights lawyer and former policy adviser to the Australian Human Rights Commission, Farthing studies how emerging technologies intersect with human rights, accountability and governance.

She cited research from the Gender Shades project, initiated by Dr Joy Buolamwini at MIT, which found facial recognition technology produces more false positives for women and people with darker skin tones.

“When it’s used in everyday policing, facial recognition technology is disproportionately making false positives for people with darker skin tones,” Farthing said. “If you have a darker skin tone, you’re more likely to be that case of mistaken identity.”

Farthing warned that Australia currently has no regulations governing the use of facial recognition technology.

“I’m a great fan of regulation,” she said, arguing that clear rules, accountability and proper safeguards are essential to protect human rights and ensure AI is trustworthy.

“We regulate food safety… roads and manufacturing. I don’t see why the tech industry should consider itself immune to the same sort of processes, rules and basic humanity that we apply to every other area,” said fellow panellist Andrew Birmingham, editor of CX Magazine.

The panel also discussed how AI can drive inclusion, pointing to ways it has expanded access for marginalised groups and empowered them.

Farthing highlighted Thrive, an HTI program that uses machine learning to understand educational outcomes and inform better policy decisions, as well as a NSW community legal centre project that uses AI to streamline the administration of legal services.

In New York, she said, a chatbot called Roxanne helps renters, particularly those who are disadvantaged, access information about repairs and housing rights.

“When AI is built with inclusive design and proper safeguards,” Farthing said, “it can potentially help address those biases that exist in society.”
