By Stefanie Schappert
AI health assistants are here to stay, and they may provide real value in helping people interpret complicated medical information, but consumers should understand exactly what that means before inviting those tools into their most sensitive digital lives. What are the data risks consumers need to know before plunging headfirst into this new era of healthcare?
ChatGPT Health: Insight vs Exposure in AI-Driven Healthcare
Health data is already among the most sensitive personal information people have, and with the introduction of ChatGPT Health last week, users will undoubtedly pour their medical data into the AI chatbot with the same verve they have shown since ChatGPT first launched in November 2022.
But should they?
The amount of sensitive information users freely and regularly post into ChatGPT (and other popular AI chatbots) is astounding.
A study last January found that nearly one in ten workers regularly exposed their own companies' sensitive data when using AI.
And when thousands of ChatGPT conversations were leaked via search engines last August, the takeaway was clear: people share just about everything with AI.
So when OpenAI introduced ChatGPT Health to the public, tech and health experts began sounding warning bells about privacy and security issues, as well as the limits of AI's accuracy.
This makes it crucial to understand where information is going and how it’s being used, especially when the data in question includes deeply sensitive details such as medical history or chronic conditions.
"Designed to Support, Not Replace, Medical Care"
OpenAI touts ChatGPT Health as a “dedicated experience” intended to help people understand lab results, prepare for doctor visits, track fitness and wellness trends, or compare insurance options, marking a significant shift in how consumers interact with AI.
“Health is already one of the most common ways people use ChatGPT,” OpenAI said in the announcement, noting that 230 million people worldwide ask the bot health and wellness questions every week.
Users can now upload medical records and connect Health to wellness apps – such as Apple Health, Function, and MyFitnessPal – creating a complete individual health profile, the likes of which we have never seen before.
Traditionally, health data has been scattered across many devices and platforms – a hospital portal here, a fitness tracker there, a PDF of bloodwork in your inbox.
But now, health data will be woven together into new AI-generated interpretations and summaries, all stored within a single system.
Health will not just store medical records; it will aggregate and interpret them, generating narratives, patterns, and insights – a fundamental departure from how most people think about their medical data.
This matters because the value of health data isn’t just in its raw form; it’s what can be inferred and contextualized from it.
Derived insights, health trends over time, connections between symptoms and test results, and personalized explanations can prove more revealing than the “data points” themselves.
People may also consent to sharing individual data points, for example, a symptom or lab result, without understanding the new meaning that emerges once those data points are combined.
AI algorithms built on aggregated data have already shown that, in the wrong hands, such profiles can fuel algorithmic bias and workplace or societal discrimination, affecting everything from individual treatment plans to health insurance premiums.
Understanding the Privacy Tradeoffs
On the technical side, OpenAI says ChatGPT Health builds on its existing security architecture with additional, layered protections, including purpose-built encryption and isolation to keep health conversations protected and compartmentalized.
Users can also enable multi-factor authentication, review or delete Health memories, and revoke access to connected apps at any time, according to OpenAI.
The company further states that health conversations are protected with layered, end-to-end encryption, kept isolated, and never used to train its models.
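To make "compartmentalized" encryption concrete: one common pattern is envelope encryption, where each conversation is sealed under its own data key, so a single compromised key exposes only one conversation, and deleting a key renders its record unrecoverable. The Python sketch below illustrates that general technique only – OpenAI has not published ChatGPT Health's actual design, and the class and method names here are hypothetical.

```python
# Minimal sketch of per-conversation ("compartmentalized") encryption.
# Illustrative only; this is NOT OpenAI's implementation, which is not public.
from cryptography.fernet import Fernet


class CompartmentalizedStore:
    """Each conversation gets its own key: one leaked key exposes one
    conversation, and discarding a key makes its record unrecoverable."""

    def __init__(self) -> None:
        self._keys: dict[str, bytes] = {}   # per-conversation data keys
        self._blobs: dict[str, bytes] = {}  # ciphertext only, never plaintext

    def write(self, conversation_id: str, plaintext: str) -> None:
        # Generate a fresh data key the first time a conversation is seen.
        key = self._keys.setdefault(conversation_id, Fernet.generate_key())
        self._blobs[conversation_id] = Fernet(key).encrypt(plaintext.encode())

    def read(self, conversation_id: str) -> str:
        key = self._keys[conversation_id]  # KeyError if the key was shredded
        return Fernet(key).decrypt(self._blobs[conversation_id]).decode()

    def delete(self, conversation_id: str) -> None:
        # "Crypto-shredding": dropping the key leaves any remaining
        # ciphertext (including copies in backups) permanently unreadable.
        self._keys.pop(conversation_id, None)
        self._blobs.pop(conversation_id, None)


store = CompartmentalizedStore()
store.write("visit-2025-01", "LDL 162 mg/dL – discuss statins with my doctor")
print(store.read("visit-2025-01"))
store.delete("visit-2025-01")  # after this, the record cannot be decrypted
```

Crypto-shredding of per-record keys is one way a provider can make a user-facing delete control meaningful even when encrypted copies linger in backups – which is why the delete and revoke controls OpenAI describes matter as much as the encryption itself.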
Still, privacy critics have pointed out that when users upload medical records into an AI service – even one with promises of encryption and compartmentalization – they may effectively remove traditional privacy protections that would otherwise apply in regulated healthcare settings.
One expert recently told The Record that giving an AI access to electronic medical records can strip those records of the legal safeguards they enjoy under rules like HIPAA, which lays out how Protected Health Information (PHI) is processed, stored, transmitted, and secured.
“ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time,” explained Sara Geoghegan, senior counsel at the Electronic Privacy Information Center.
Because health data remains among the most valuable targets for hackers, any system that aggregates medical records, wellness data, and AI-generated health insights – especially on a single platform – can significantly increase the amount of data exposed in the event of a breach.
From a cybersecurity perspective, aggregation also concentrates value, making AI health platforms especially attractive targets for attackers seeking high-impact data rather than isolated records.
One thing is certain: the tradeoff between insight and exposure is no longer theoretical – it is the defining question of AI-driven healthcare.
ABOUT THE AUTHOR
Stefanie Schappert, a senior journalist at Cybernews, is an accomplished writer with an M.S. in cybersecurity, immersed in the security world since 2019. She has more than a decade of experience in America’s #1 news market, working for Fox News, Gannett, Blaze Media, Verizon Fios1, and NY1 News. With a strong focus on national security, data breaches, trending threats, hacker groups, global issues, and women in tech, she is also a commentator for live panels, podcasts, radio, and TV. She earned the ISC2 Certified in Cybersecurity (CC) certification as part of the initial CC pilot program, has participated in numerous Capture-the-Flag (CTF) competitions, and took third place in Temple University's International Social Engineering Pen Testing Competition, sponsored by Google. She is a member of the Women’s Society of Cyberjutsu (WSC) and of Upsilon Pi Epsilon (UPE), the International Honor Society for Computing and Information Disciplines.
ABOUT CYBERNEWS
Cybernews is a globally recognized independent media outlet where journalists and security experts debunk cyber myths through research, testing, and data. Founded in 2019 in response to rising concerns about online security, the site covers breaking news, conducts original investigations, and offers unique perspectives on the evolving digital security landscape. Through white-hat investigative techniques, the Cybernews research team identifies and safely discloses cybersecurity threats and vulnerabilities, while the editorial team provides cybersecurity-related news, analysis, and opinions by industry insiders with complete independence. For more, visit www.cybernews.com.
Cybernews has earned worldwide attention for its high-impact research and discoveries, which have uncovered some of the internet’s most significant security exposures and data leaks. Notable ones include:
Cybernews researchers discovered multiple open datasets comprising 16 billion login credentials from infostealer malware, social media, developer portals, and corporate networks – highlighting the unprecedented risks of account takeovers, phishing, and business email compromise.
Cybernews researchers analyzed 156,080 randomly selected iOS apps – around 8% of the apps present on the App Store – and uncovered a massive oversight: 71% of them expose sensitive data.
Bob Dyachenko, a cybersecurity researcher and owner of SecurityDiscovery.com, and the Cybernews security research team discovered an unprotected Elasticsearch index, which contained a wide range of sensitive personal details related to the entire population of Georgia.
The team analyzed the new Pixel 9 Pro XL smartphone’s web traffic, and found that Google's latest flagship smartphone frequently transmits private user data to the tech giant before any app is installed.
The team revealed that a massive data leak at MC2 Data, a background check firm, affects one-third of the US population.
The Cybernews security research team discovered that the 50 most popular Android apps require 11 dangerous permissions on average.
They revealed that two online PDF makers leaked tens of thousands of user documents, including passports, driving licenses, certificates, and other personal information uploaded by users.
An analysis by Cybernews researchers uncovered over a million publicly exposed secrets in the environment (.env) files of more than 58,000 websites.
The team revealed that Australia’s football governing body, Football Australia, had leaked secret keys potentially opening access to 127 buckets of data, including ticket buyers’ personal data and players’ contracts and documents.
The Cybernews research team, in collaboration with cybersecurity researcher Bob Dyachenko, discovered a massive data leak containing information from numerous past breaches, comprising 12 terabytes of data and spanning over 26 billion records.