# Humanising AI futures
Late last month, UTS hosted the half-day symposium Humanising AI Futures: Reimagining data and AI for a future worth wanting. Much like generative AI, it mashed together insights from a range of wildly different disciplines. The results were fascinating.
In her keynote, Heather Horst explored how AI and automation don’t spring from nowhere, but from national and corporate contexts. These contexts mean biases are being baked into our tech. One example is the Amazon Echo Look, a device that recommended outfits, revealing that its coders and its users held wildly different values about bodies and clothing. The Echo Look is now defunct.
The CMT’s Michael Davis gave a taste of research he’s conducting with Monica Attard into how journalists use AI; David Lindsay outlined his research with Evana Wright into the implications for news media of regulating genAI; Nicholas Davis from the Human Technology Institute spoke on the corporate governance of AI; and Adam Berry gave an eye-opening presentation revealing how AI often excludes people with disabilities entirely.
To close, literary scholar Michael Falk challenged the audience to replace the word ‘intelligence’ in the phrase ‘artificial intelligence’ with something better. ‘When ChatGPT says, “I cannot do that”, there is no “I”,’ said Falk. ‘We have a tendency to treat AI as a responsible, coherent agent, when really it’s hundreds of thousands of specific people building these systems ... AI will become what we imagine it to be.’
Sacha Molitorisz - Senior Lecturer, Law
This was featured in our Centre's fortnightly newsletter of 11 August - read it in full here and/or subscribe.