• Posted on 26 Mar 2026
  • 3-minute read

The latest eSafety transparency report found that 79% of Australian children aged 10 to 17 have used an AI companion or AI assistant, while 8% (or around 200,000 children) have used an AI companion specifically. Unlike general-purpose chatbots, AI companions anthropomorphise interaction through pre-made or user-generated characters – friends, romantic partners, anime or zoomorphic personas – that embody distinct personalities, tend to be highly affirming and sycophantic, and foster parasocial relationships that resemble genuine human connection.

Among children who have used AI companions or assistants, 1 in 5 reported daily interactions, seeking advice about physical health, feelings, life challenges, mental health, and wellbeing. Yet the four AI companions examined by eSafety – Character.AI, Nomi, Chai, and Chub AI – were found to lack meaningful age assurance, to fail to redirect self-harm and suicide-related prompts to real-time human support services, and to inadequately protect children against sexually explicit content.

Three years ago, when I unexpectedly found myself working in a market research agency’s cultural forecasting team, stories about users marrying their AI companions in virtual weddings or creating digital avatars of the dead felt surreal – the sort of material that belonged in Greg Egan’s Permutation City, Spike Jonze’s Her, or Victor Pelevin’s Transhumanism Inc. These fictions imagined worst-case scenarios of digital consciousness and simulated selves. Yet what once looked like speculative fiction now appears in lawsuits.

People across a wide age range have been caught up in these cases. In Canada, the family whose daughter was critically wounded in the Tumbler Ridge school shooting is suing OpenAI, alleging that the company had "specific knowledge" that the 17-year-old shooter was using ChatGPT to plan a mass casualty attack. The suspect’s account was banned in June 2025, but the shooter later created a second account.

People experiencing mental illness also seem particularly susceptible to the illusory realism of chatbots. In the US, a man has sued Google, alleging that the Gemini chatbot deepened his 36-year-old son’s paranoia and delusions and ultimately encouraged his suicide. While Google said Gemini had provided crisis referrals, the complaint detailed that the chatbot called itself his “wife,” referred to him as “my king,” sent him on a “mission” to rescue a humanoid robot supposedly trapped in a warehouse near Miami’s airport, and even created a countdown clock for his suicide.

Another case, settled in January 2026, points to the same broader pattern. A Florida mother alleged that her 14-year-old son became deeply emotionally attached to a Character.AI chatbot modelled on Daenerys Targaryen from Game of Thrones before his death by suicide.

The UK offers a useful example of how governments are beginning to address these harms through new laws. In February 2026, the government announced that it would "shut a legal loophole" and force all AI chatbot providers to abide by illegal content duties in the Online Safety Act, alongside social media and gaming platforms. On 2 March, the Department for Science, Innovation and Technology launched its Growing up in the online world national consultation on children’s digital wellbeing, explicitly considering issues unique to AI chatbots and whether new restrictions, stronger age assurance, and limits on risky features may be needed.

In the Australian context, eSafety’s March 2026 report has thrown the ball back into the tech giants’ court. Under the Online Safety Act and the Age-Restricted Material Codes that took effect on 9 March 2026, eSafety is now explicitly monitoring whether AI companions and other gen AI services comply with requirements around safety-by-design, age assurance, moderation, and protection against potentially harmful age-inappropriate material. CMT will be watching this space closely.

Author

Alena Radina

CMT Postdoctoral Fellow
