Australian eSafety Report Exposes Tech Giants' Failures in Child Protection

Australia's online safety regulator has issued a stark warning to the technology industry, revealing that eight major platforms are falling short in their efforts to protect children from exploitation and abuse. The latest transparency report from the eSafety Commissioner highlights critical gaps in how companies handle harmful content, including AI-generated material and livestreamed abuse.

Major Platforms Under Scrutiny

The eSafety Commissioner, Julie Inman Grant, has called out Apple, Discord, Google, Meta, Microsoft, Skype, Snapchat, and WhatsApp for inadequate measures to detect and prevent child sexual exploitation. The companies were specifically questioned about the steps they are taking to tackle real and AI-generated abusive material, grooming, sexual extortion, and livestreamed abuse.

In a concerning development, the regulator has also ordered four providers of artificial intelligence "companions" to explain how they are safeguarding children from sexually explicit conversations and discussions of self-harm. This move underscores the evolving challenges posed by emerging technologies in the online safety landscape.

Persistent Safety Gaps Identified

Commissioner Inman Grant expressed disappointment that, despite a decade of engagement, some companies still lack proper measures to detect and remove newly created abusive material. She pointed to specific deficiencies, including insufficient language analysis to identify sexual extortion and inadequate detection of both new material and livestreamed exploitation.

"These companies have the resources and technical capability to make their services safer, not just for children but for all users," Inman Grant stated. "It beggars belief that some have not yet deployed the tools and technology to detect live child sexual abuse occurring over popular video calling services."

The eSafety Intelligence and Investigations team has provided companies with sexual extortion language indicators, common scripts, kill chains, and frequently used fake imagery. However, the commissioner noted that companies have not adequately deployed these resources to combat the organised criminal gangs targeting Australian youth.

Mixed Progress and Industry Response

While acknowledging some improvements, including better response times to reported content and enhanced detection of resurfaced known abuse material, Inman Grant described these advances as "more incremental than monumental." She emphasised that the platforms have demonstrated they can improve when it comes to protecting society's most vulnerable members.

John Livingstone, UNICEF Australia's head of digital policy, echoed these concerns, stating that the report highlights the urgent need for a Digital Duty of Care. He criticised the current "patchwork" approach to policing and preventing child sex abuse online, calling for more consistent and robust safety measures.

"Protecting children from online sexual exploitation and abuse should be built in from the start through safety-by-design approaches and robust guardrails to prevent misuse," Livingstone said. "Top of the list is legislating a new duty of care on tech platforms in Australia to ensure safety from the start."

AI Presents New Challenges and Opportunities

Livingstone also addressed the growing risks associated with deepfakes and AI-generated content, noting that while AI creates new dangers for children, it can also be part of the solution. He suggested that responsible use of generative AI could help detect and remove abuse material at scale, potentially making Australia the safest place in the world for children to go online.

The platforms face significant consequences for non-compliance, including potential fines of up to $825,000 per day if they fail to meet the requirements of the mandatory transparency notices. Companies must report again in March and August this year, and eSafety plans to publish those findings in the coming months.

This report comes as the Federal Government continues work on implementing stronger online safety measures, following Australia's recent under-16s social media ban. The findings underscore the ongoing tension between technological innovation and child protection, highlighting the need for more effective regulatory frameworks in the digital age.