
Health Care AI Adoption Requires Transparency, Policy Upgrades, and Cybersecurity

Key Clinical Summary

  • Scope: Review of English-language literature (2000–2024) on artificial intelligence (AI) integration across US and global health care settings.
  • Core finding: AI can enhance diagnostics, surgery, research, and clinical decision-making—but raises medical liability, cybersecurity, and health equity concerns.
  • Action items: Prioritize transparency about algorithm training, strengthen digital policies and safeguards, and diversify datasets to improve generalizability.

A comprehensive narrative review of publications from SCOPUS, PubMed, and Google Scholar (2000–2024) evaluates the feasibility of deploying machine learning and deep learning tools in health care. The analysis details opportunities in large-scale data analysis, imaging, surgical robotics, and decision support, while warning that liability ambiguity, cyber risk, and entrenched disparities could undermine safe adoption in the United States and beyond.

Study Findings

The review highlights broad AI use cases: imaging support for MRI, X-ray, and CT; pathology pattern recognition; ophthalmic screening; and oncology platforms (eg, HALO™, Oncotopix®, DeepLens) designed to accelerate detection and diagnosis. Surgical applications include robotic-assisted procedures (eg, knee arthroplasty and minimally invasive cardiac surgery) associated with lower revision rates, fewer complications, and faster recovery.

AI-enabled chatbots (eg, Woebot, ChatPal) and AIoT wearables extend access and continuous monitoring, while predictive analytics drive genomics, drug discovery, outbreak modeling, and contact tracing. The review also notes AI’s dual role in cybersecurity—bolstering defenses against phishing and malware but simultaneously expanding attack surfaces as models handle sensitive health data.

Policy analysis underscores gaps in legacy frameworks (Health Insurance Portability and Accountability Act [HIPAA], General Data Protection Regulation [GDPR]) for modern AI risks such as re-identification, opaque models, and automated decision-making. Proposed solutions include clearer liability allocation among clinicians, health systems, and developers; interoperable data exchange; and explicit requirements for model explainability, training disclosure, and bias audits. Recent incidents (eg, major US cyberattacks and software outages) illustrate operational fragility and the financial stakes for providers and patients.

Clinical Implications

For clinicians and health leaders, the review argues that trustworthy AI hinges on 3 pillars:

  1. Transparency—clear documentation of training data, methods, and performance to guide fit-for-purpose use;
  2. Security—robust monitoring, least-necessary data access, de-identification, and workforce cybersecurity training; and
  3. Equity—diverse, representative datasets and routine bias evaluation to prevent algorithmic harm, particularly in rural and minority-serving settings.

Operationally, organizations should map data flows, limit model access to protected health information, implement incident response playbooks, and audit vendor claims. Policymakers should modernize privacy statutes for AI-era risks, define accountability, and incentivize knowledge-sharing on threats and best practices.

Conclusion

AI can meaningfully improve diagnostics, surgery, monitoring, and research, but safe scale-up requires aligned policy, clear liability rules, rigorous cybersecurity, and intentional equity. Health systems and regulators that operationalize transparency, protection, and inclusivity will be best positioned to realize AI’s benefits while safeguarding patients and clinicians.

Reference

Virk A, Alasmari S, Patel D, Allison K. Digital health policy and cybersecurity regulations regarding artificial intelligence (AI) implementation in healthcare. Cureus. 2025;17(3):e80676. doi:10.7759/cureus.80676