Hall of mirrors
Last week, The Washington Post launched a new AI tool called ‘Climate Answers’. The chatbot, which is now available on the outlet’s homepage, its app, and inside its articles, is designed to answer readers’ questions about climate change, the environment, sustainable energy, and related issues. What makes it different from other chatbots is that Climate Answers uses the outlet’s own respected climate reporting to answer these questions.
This is an interesting new model, and one that other newsrooms could replicate. For starters, it is cost-efficient: leveraging existing journalistic content minimises the need for constant updates. Because it draws on the outlet’s own reporting, and is designed not to answer questions its archive cannot support, the probability of misinformation is reduced. And by relying on its own reporting, the Post also reduces its legal exposure, such as copyright claims from external parties over the use of their content.
However, there are serious concerns, and they are not limited to AI’s massive environmental footprint and soaring emissions (a ChatGPT query needs nearly 10 times as much electricity as a Google search).
While a dedicated AI chatbot that draws only on one newsroom’s (or company’s) archive minimises the risk of misinformation, it increases the risk of bias precisely because it relies on a single source. We know that generative AI can be more biased than humans; it can discriminate in harmful ways and perpetuate racial and gender stereotypes. I have previously expressed concerns about the whiteness of AI, and am increasingly troubled by the tendency of these tools to spin up and regurgitate disturbing clichés: Refugees are a burden. Prisoners are black. Political leaders are men. Diversity is tokenistic. China is a threat.
The bias of a particular news outlet will be replicated in its chatbot’s output, while the perceived trustworthiness of chatbots will leave many users convinced that they have received a complete and accurate answer to their question.
What if AI also propagates war-orientated and conflict-escalating narratives in more harmful ways than we see now – if, say, the Post’s model is replicated by partisan newsrooms and applied to broader topics such as politics, society, migration, and war and conflict?
The answer to this bias would seem to lie in training these models on broader sources, but that again increases the risk of inaccuracy and will not eliminate the systemic bias shared across different outlets.
For instance, asked whether China is a threat to Australia, a proprietary chatbot – whether trained on the output of one Australian newsroom or of many – would provide a response that reflects the dominant narrative: that China is a threat to Australia.
Or, asked about religion and politics, its response would likely reflect what Crikey has described as a lack of religious literacy among the majority of journalists in a country rich with religious diversity.
The bias in reporting of international wars has also been quite apparent. When coverage of an attack on one hospital runs under the headline ‘Israeli military says its forces have entered Gaza hospital in a “precise and targeted operation”’, while an attack on another reads ‘“No words for this”: horror over Russian bombing of Kyiv children’s hospital’, it doesn’t take a rocket scientist to understand what the problem is and where it lies.
AI-powered chatbots learn everything from existing content, and narrowing the sources an AI draws on will not solve the problem of bias. Instead, the chatbot will pick up the racist undertones and implicit linguistic bias in journalistic content and replicate them in its responses to user queries. What journalism desperately needs is a genuine investment in accurate and representative reporting, particularly in how the ‘other’ is reported. Unless that is taken seriously, the journalism and other content reproduced by AI tools and chatbots will become a hall of mirrors in which the public continues to learn from our worst journalistic impulses, their view blocked of what is actually happening outside the mirror world.
Ayesha Jehangir, CMT Postdoctoral Fellow