Key Takeaways
- A Wolters Kluwer Health survey found that more than 40% of health care workers are aware of colleagues using unauthorized "shadow" artificial intelligence (AI) tools.
- Nearly 20% of respondents reported personally using unapproved AI, citing faster workflows and better functionality.
- Experts warn shadow AI poses patient safety, accuracy, and data privacy risks for hospitals and health systems.
Health care workers across the US are increasingly using AI tools that have not been approved by their organizations, raising concerns about patient safety and data security. According to a new survey published by Wolters Kluwer Health, the use of so-called “shadow AI” is common among both clinicians and administrators, often without formal governance or risk assessment by health systems.
Survey Findings Highlight Scope of Shadow AI Use
The survey, fielded by the information services and software firm, included more than 500 respondents from hospitals and health systems. Results showed that more than 40% of medical workers and administrators were aware of colleagues using unauthorized AI tools. Nearly 1 in 5 respondents—about 20%—said they had personally used an AI product that was not approved by their organization.
Dr Peter Bonis, chief medical officer at Wolters Kluwer, said the appeal of these tools is understandable but potentially dangerous. “The issue is, what is their safety? What is their efficacy, and what are the risks associated with that?” Bonis said. “And are those adequately recognized by the users themselves?”
Respondents cited several reasons for turning to shadow AI. More than 50% of administrators and 45% of care providers said unapproved tools offered a faster workflow. Nearly 40% of administrators and 27% of providers said they used them because of better functionality or the absence of approved alternatives. Curiosity and experimentation also played a role, with more than 25% of providers and 10% of administrators citing those motivations.
Patient Safety and Cybersecurity Concerns
Shadow AI presents well-documented security risks across industries, but the stakes are higher in health care. Because these tools operate outside official oversight, IT and security teams often lack visibility into how data are used or stored, creating opportunities for cyberattacks and breaches.
Health care organizations are already frequent targets for cybercriminals due to the value of clinical and personal data and the urgency of care delivery. In the survey, about one-quarter of providers and administrators ranked patient safety as their top concern related to AI use.
Accuracy is another major issue. Bonis warned that AI tools can produce misleading or incorrect information that may not be caught before reaching the point of care. "There's a whole variety of ways in which—even though the intention is for humans to be in that loop at some point or another—these tools misfire," he said.
Implications for Health System Leaders
Despite the risks, AI remains one of the most promising technologies in health care, with potential to streamline documentation, accelerate administrative tasks, and help clinicians find medical information. The survey suggests that unmet demand for these capabilities is driving shadow AI adoption.
The findings also point to gaps in AI governance and communication. Many health care workers reported limited awareness of their organization's AI policies. Although administrators are more likely to participate in policy development, just 17% said they were aware of their organization's main AI policies, compared with 29% of providers.
Bonis noted that familiarity with AI scribes—tools that record patient encounters and draft clinical notes—may create a false sense of policy awareness. Providers may recognize rules around specific tools without understanding broader AI governance frameworks.
Conclusion
The Wolters Kluwer Health survey highlights a growing disconnect between AI demand and organizational oversight in US health care. As shadow AI use expands, health systems face mounting pressure to establish clear policies, vetted tools, and education to protect patients, data, and clinical integrity.
Reference
Olsen E. ‘Shadow AI’ use is widespread in healthcare: survey. Healthcare Dive. Published January 22, 2026. Accessed January 26, 2026. https://www.healthcaredive.com/news/shadow-unauthorized-ai-/810191/