While healthcare data breaches fell slightly in 2024 compared to 2023, the number of individuals affected by data breaches skyrocketed, driven by the February 2024 ransomware attack on Change Healthcare that compromised the data of approximately 190 million people.
As technologies such as AI and EHR systems develop at a rapid pace, many are wondering how best to protect patients’ privacy in an increasingly cloud-based healthcare industry.
Here are five notes on the state of patient privacy and the rise of AI:
1. “Shadow AI,” or the use of unauthorized or unmonitored AI tools within organizations, is an increasing concern among healthcare leaders.
This phenomenon arises when vendors or employees deploy AI tools without the knowledge or approval of an organization’s IT or compliance departments.
“Many applications now include AI in some form,” Jason Adams, MD, director of data and analytics strategy at University of California Davis Health, told Becker’s. “Often, even the individuals requesting the technology aren’t aware that AI is embedded in the product. It’s an ongoing challenge.”
2. A study by Prompt Security found that companies typically have 67 generative AI tools operating within their systems — 90% of which lack proper licensing or approval. In response, UC Davis Health has built structured pathways to identify and evaluate most AI-enabled tools before they go live.
“We have a fairly mature program called the Analytics Oversight Committee. It functions almost entirely as our AI oversight committee,” Dr. Adams said.
3. CMS administrator Mehmet Oz, MD, recently promoted the use of AI avatars during his first all-staff CMS meeting, according to an April 9 report by Wired magazine. Sources told the publication that during the April 7 meeting, Dr. Oz discussed possibly prioritizing AI avatars over front-line healthcare workers, adding that the technologies can help scale up “good ideas” quickly and affordably.
4. In January, President Donald Trump signed an executive order aimed at eliminating AI policies from the Biden administration. The order reverses a Biden-era policy that President Trump claims imposed excessive government regulations on AI development and stifled private sector innovation. The executive order states that the development of AI within the U.S. must be “free from ideological bias” and makes that principle a core commitment of U.S. technological advancement.
5. In an October 2024 blog post published by Medical Economics, Sarah Worthy, CEO of DoorSpace, a healthcare-oriented human resources consulting firm, specifically cautioned smaller or independent practices against scaling up their AI implementation too quickly. She said that, while AI systems are able to process large amounts of data, independent practices may struggle to equip themselves with data security systems strong enough to protect the large amounts of sensitive health data an AI may use. This can have a cyclical effect, as such practices become more vulnerable to cyberattacks and data breaches, which can have further consequences for patient privacy, practice reputation and financial stability.