SAWC Fall Panel Demystifies Artificial Intelligence for Wound Care: Promise, Pitfalls, and Practical Steps
A session at SAWC Fall examined how artificial intelligence (AI) can be incorporated into wound care without compromising safety, equity, or clinician judgment. The moderator, Jeanine Maguire, PhD, MS, MPT, FAAC, CWS, framed the intent as patient-centered and collaborative: “I’m not here to do anything. I’m here to help you,” before outlining a plan to define AI modalities, map use cases across clinical and administrative workflows, and surface legal and ethical considerations specific to health care.
Defining the tools
Panelists differentiated among machine learning systems that learn patterns from data, rule-based “engines” that apply explicit logic, and generative models that produce text, images, and other media. They noted that clinicians already encounter analogous technologies in daily life, such as voice tools, facial recognition, and navigation, and that similar components are moving into documentation, imaging, and operational support. The emphasis was augmentation over replacement, particularly for documentation and other burdensome tasks.
Trust, governance, and liability
Trust, the panelists argued, hinges on data quality, provenance, and explainability. Clinicians should expect clear traceability for any AI-generated recommendation: “There needs to be … permission … allowing [clinicians] to understand where recommendations are coming from and where you will trace back the evidence.” The discussion highlighted that software incorporating AI may fall under existing medical device frameworks and evolving regulatory guidance, with additional exposure from state laws and potential class actions when tools do not perform as intended. Practices and developers were urged to define change control for models that update over time, maintain documentation of oversight, and build processes for issue reporting and remediation.
Equity and bias
Existing inequities, such as differences in cardiovascular care and trial representation, were cited as reminders that skewed datasets can propagate harm. The panel encouraged representative data collection, explicit bias testing, and corrective approaches when local populations differ from training cohorts. Transparency about training data and model limits was described as essential to clinician and patient trust.
Operational impact and value-based care
Beyond bedside decision support, the panel described operational advantages relevant to multidisciplinary wound programs. Predictive analytics may help align resources to patient risk; documentation automation could reduce administrative burden; and cost/outcome modeling can support value-based care across teams and care settings. The use of high-quality historical and synthetic datasets may streamline research and trials when privacy safeguards and disclosure are in place.
Practical guidance
The session closed with a pragmatic orientation: proceed, but deliberately. AI can accelerate documentation, surface patterns in imaging and risk profiles, and improve coordination, provided clinicians retain judgment, demand explainability, and insist on equity throughout design and deployment.
Key takeaways for practice
• Start with augmentation. Prioritize documentation, coding, discharge planning, and imaging triage to reduce burden while preserving clinician oversight.
• Require transparency. Select tools that disclose data sources, reasoning, and version/change history.
• Build governance. Define validation, update approval, and issue escalation; document clinician supervision.
• Test for bias locally. Evaluate performance across sex, age, comorbidities (eg, diabetes, vascular disease, obesity), and care settings; apply corrective strategies when gaps appear.
• Align with value-based care. Use AI for risk stratification, pathway optimization, and outcome/cost modeling across the multidisciplinary wound team.
• Protect data rights. Ensure privacy safeguards, clear disclosure around synthetic data, and adherence to applicable requirements.