• Posted on 23 Oct 2025
  • 4 mins read

Australia is still in the midst of deciding which social media platforms the country’s youth will be banned from accessing come December 10, fuelling criticism that the entire approach is a rushed, knee-jerk intervention.

Case in point: recent comments from the eSafety Commissioner indicated that notorious cesspit of the internet – 4chan – will likely not fall under the ban because it is an “image board”. There are a few points in this comment to unpack.

First, 4chan doesn’t require a user to sign up before posting or interacting. In that sense, 4chan isn’t even captured by the legislation, which requires age-restricted social media platforms to prevent under-16s from creating or maintaining accounts. This account-based limitation undermines the logic of protecting kids from harmful content: if the risk sits in the feed, removing accounts simply shifts kids into feeds without protections.

While accounts may underpin other risks, such as hyper-targeted algorithms, which have given rise to concerns about addiction and echo chambers, banning accounts does not protect against harmful content. It may, in fact, make it worse. A recent Guardian Australia test found that simply accessing YouTube Shorts or TikTok in a logged‑out state on a fresh device quickly surfaced gambling promos, violent incidents and far‑right content. eSafety has confirmed the logged‑out experience isn’t subject to the ban, even while urging platforms not to undermine the intent of the law. That’s the core loophole.

Google, the owner of YouTube, reiterated this point when it told a Senate inquiry that not only will the ban be “extremely difficult to enforce”, but that forcing teens into a logged‑out experience will strip away the safety features parents rely on: autoplay restrictions, break reminders, teen modes and personalised filters. Meta is already testing AI to identify suspected under‑18 Instagram users and automatically shift them into teen settings. Under this law, those protections will vanish if the user is simply logged out.

Meanwhile, definitional hair‑splitting is doing a lot of work. Setting aside the account issue, the eSafety Commissioner’s framing of 4chan as an “image board” is being used to justify an exemption from the ban, while YouTube’s argument that it is a “video‑streaming platform” rather than social media hasn’t carried any weight.

If you’re looking for any coherence on what counts as social media, the list of companies eSafety approached to prompt a self-assessment of whether their platforms fall under the law doesn’t help. Alongside the expected Facebook, Instagram, TikTok and X, eSafety reached out to GitHub – a developer collaboration platform – which speaks to how elastic the definition of “social media” has become. Contrast this with LinkedIn, which, despite being a full-blown social network, has argued its way out of the ban by being too boring for children. Does the government consider it unlikely that kids are on LinkedIn, thereby sparing Microsoft (LinkedIn’s owner) the onerous burden of implementing age verification, or does it think it is valuable for kids to start professional networking early? Probably the former, given recent comments that eSafety would focus on platforms with a large number of young Australian users. If that reasoning also explains the approach to GitHub, the future of open-source development in Australia looks bright – provided, of course, GitHub manages to escape the ban.

Back to 4chan. Officials said that 4chan would instead be captured by other Online Safety Act codes (designated internet services) and could face penalties up to $49.5m, but also acknowledged eSafety hasn’t formally assessed it and is taking a risk‑based approach focused on “mainstream” platforms. This is the heart of the trust issue: leaving a well‑documented dark‑web‑adjacent platform outside the flagship initiative to address youth online harm sends precisely the wrong signal to families about where the risk actually lives. It also ignores the practical reality that 4chan reportedly refused to pay a proposed UK fine – so even if captured by codes later, compliance is hardly guaranteed.

The ban both over‑reaches and under‑reaches: it dampens incentives to build age‑appropriate safeguards, yet limits restrictions to the logged‑in experience only. If Australia were serious about protecting children, it would either rethink its approach or ban access to harmful sites outright, account or not.

Author

Kieran Lindsay

Research Officer
