Experts have issued a stark warning about the Chai AI chatbot application, claiming it poses significant risks to Australian children. The app, which allows users to interact with artificial intelligence-driven characters, has raised concerns among child safety advocates and cybersecurity experts.
Key Concerns Raised by Experts
According to Rhianna Mitchell from The West Australian, the Chai AI app exposes young users to potentially harmful content, including explicit language, sexual themes, and violent scenarios. Unlike many other chatbots, Chai AI enables users to create and customize their own AI characters, which can lead to inappropriate interactions.
Lack of Effective Safeguards
Experts highlight that the app lacks robust age verification and parental controls, making it easy for children to access mature content. The chatbot's responses are generated by AI models that may not reliably filter out harmful or dangerous advice, meaning children could receive guidance on self-harm, eating disorders, or other sensitive topics.
Privacy and Data Security
Another major concern is the app's data collection practices. Chai AI may gather personal information from users, including location, device data, and conversation logs. Without strict privacy protections, this data could be misused or fall into the wrong hands.
Call for Action
Child safety organizations are urging Australian parents to monitor their children's app usage and are calling for stronger regulation of AI-based platforms. The eSafety Commissioner has also been asked to investigate the app and enforce stricter guidelines. In response, a spokesperson for Chai AI stated that the company is committed to user safety and continuously updates its content moderation systems. Experts, however, argue that more proactive measures are needed to protect vulnerable users.
This warning comes amid growing global scrutiny of AI chatbots and their impact on young people. As technology evolves, so too must the safeguards that protect children online.