ChatGPT Health: The 'Dr Google on Steroids' That Could Undermine Australian Healthcare
The pitfalls of relying on artificial intelligence for consequential decision-making, particularly in health matters, have been recognised for decades. As IBM famously declared back in 1979, "A computer can never be held accountable, therefore a computer must never make a management decision." This timeless warning resonates powerfully today as AI juggernaut OpenAI launches its official foray into healthcare with ChatGPT Health.
A Pandora's Box for Modern Medicine
This sentiment was strongly echoed in guidance published last August by the Australian Medical Association, which cautioned clinicians that AI must never replace clinical judgment and that final decisions must always rest with medical practitioners. A patient-facing, unregulated AI tool that, from a patient's perspective at least, could easily stand in for a general practitioner opens a Pandora's box for healthcare as we know it.
OpenAI describes the feature as designed to help people understand their test results, track trends, prepare for doctor visits, and support health-related questions—not to diagnose or replace professional medical care. However, ChatGPT Health is poised to become what experts are calling 'Dr Google' or 'WebMD' on steroids, creating a potential nightmare for patients, doctors, and hospitals across Australia and beyond.
The Accountability Gap in AI Healthcare
When a medical professional makes an error during treatment or care, professional bodies like AHPRA (the Australian Health Practitioner Regulation Agency) review and investigate. They can make recommendations to prevent future mistakes or sanction professionals who commit malpractice. But who bears responsibility when ChatGPT hallucinates and provides dangerous advice? Will OpenAI executives be held to the same rigorous standards as our doctors and nurses?
Trust in the medical profession, already under pressure from online misinformation, will only further erode when people are tempted to bypass credentialed medical expertise in favour of a text generator. In this age of social media-induced self-diagnoses, there is little doubt that patients will "symptom-shop" using AI tools until they receive the answers they're seeking.
Flawed Technology Meets Healthcare Realities
AI is already notorious for its sycophantic tendencies, telling people what they want to hear and affirming thoughts and behaviours that may be harmful. This creates significant potential for individuals to receive inappropriate or inadequate medical treatment, with potentially life-threatening consequences. The deep flaws inherent in AI systems are fundamentally at odds with delivering accurate, evidence-led medical advice.
Combine these flaws with the tech industry's notorious "move fast and break things" mentality, and a chatbot dispensing medical guidance becomes a formula for widespread misdiagnosis. The very quality that makes AI so powerful for creative industries, its ability to use statistics and vast computational power to generate new ideas, becomes dangerously problematic in healthcare contexts where factual accuracy is paramount.
Data Privacy: The Hidden Healthcare Crisis
Users of these services must also remain hyper-aware of who else could access their sensitive health information, including cybercriminals and state actors. OpenAI has already confirmed that a breach at a third-party provider exposed users' names and physical locations, exactly the kind of information that makes them prime targets for malicious entities. But the concerns extend beyond security breaches.
Users need to understand that their health data represents a valuable asset for any technology company. When genetic testing company 23andMe filed for bankruptcy last year, the DNA data of millions of users was sold to a pharmaceutical company. If OpenAI is sold, which is hardly unimaginable given that HSBC estimates the company will not turn a profit until at least 2030, that health data could easily flow to health profiteers.
The Regulatory Imperative for Australia
As OpenAI hunts for "alternative revenue sources" (a tech industry euphemism for hyper-targeted advertising and sales brokering), private healthcare providers and insurers will likely rush to access its highly sensitive data troves. A collection of sensitive health information, including biometrics from wearables, test results, MRI scans, and medical records, may well end up on the auction block.
The power of generative AI should not be underestimated, and Australia should continue supporting ethical AI projects that can add billions to the economy. But if markets fail to adequately manage these risks, governments and regulators must step in to protect our health data. Responsibility for healthcare must rest firmly with those who provide it.
If AI companies intend to deploy robo-doctors, they must be held fully accountable for any harm caused, without hiding behind the convenient excuse of "hallucinations." We would never permit a doctor who admits to hallucinating diagnoses to treat patients, and we should not accept this from artificial intelligence systems either.
We trust governments to safeguard sensitive data in the public interest, but private companies ultimately answer to their investors, making strong regulation of health AI absolutely essential. This represents the only way to ensure health data remains protected and is never treated as a mere commodity.