Advancing AI in IO: Dr Dania Daye on Improving Research Quality, Clinical Outcomes, and Patient Care
Key Summary
- The iCARE checklist was developed to address heterogeneity and incomplete reporting in artificial intelligence (AI) studies in interventional radiology by outlining 7 domains to standardize methodology and improve transparency and reproducibility.
- The authors hope journals will adopt the checklist as a reporting standard for AI research in interventional radiology, similar to the CLAIM checklist used in diagnostic radiology.
- Although few FDA-cleared AI tools exist in interventional oncology, the field is expected to shift toward integrated procedural intelligence, though better datasets, validation, and prospective trials are needed.
© 2026 HMP Global. All Rights Reserved.
Any views and opinions expressed are those of the author(s) and/or participants and do not necessarily reflect the views, policy, or position of Vascular Disease Management or HMP Global, their employees, and affiliates.
VASCULAR DISEASE MANAGEMENT. 2026;23(3):E27-E28
Interview by Laura Simson, MA
University of Wisconsin-Madison
Dania Daye, MD, PhD, an interventional radiologist at the University of Wisconsin–Madison and director of the Center for High Value Imaging, helped develop the iCARE checklist to improve the quality, transparency, and reproducibility of artificial intelligence (AI) research in interventional radiology. In this interview, she discusses the need for standardized reporting in AI studies and reflects on key themes from her recent keynote at the Society of Interventional Oncology (SIO) conference, where she explored the evolving role of AI in interventional oncology (IO) and its future impact on clinical practice and patient care.
What were you seeing in research and practice that prompted the creation of the iCARE checklist?
When we look at the research that's currently available on AI in the interventional space, we see that there is huge heterogeneity in what's being published, and the quality of some of the studies is really lacking. We have to acknowledge that there's huge heterogeneity in the datasets that are available, how big the datasets are, etc. And we see that when the authors are reporting what they did and the results, a lot of the detail is missing. This really prompted us to think through how we can standardize what is being reported in these studies to ensure the highest quality possible and, frankly, to ensure that what is being reported is reproducible by other groups.
With this, we came up with the 7 different domains that we describe in the iCARE checklist to ensure the highest level of quality.
How do you see the recommendations being implemented, and where is room for improvement?
One of the things we are starting to look into is having some of the journals recommend using this checklist when they receive submissions in the AI and interventional space. If we look across different specialties (for example, diagnostic radiology), there are already a number of checklists available, including the CLAIM checklist that was recently updated. We realized very quickly that, in interventional radiology, no such checklist is available. So, we moved forward with putting together this checklist, and it is our hope that some of the publishers will adopt it as a recommendation for authors submitting papers in the AI space to their journals.
Based on the advancements in AI since the inception of your checklist, what would you add?
When we published this checklist, it was right at the beginning of large language models, and we tried to incorporate some of those details in the way we structured the checklist. However, we are now much more advanced than we were last year—as you know, the field is moving so quickly. For example, there are a number of new areas that we're starting to see publications in, including agentic AI, vision-language models, and larger foundation models, and I think there are certain things that can be added to the checklist around these new architectures that can be beneficial to ensure that we're a little bit more inclusive.
You gave the keynote on AI at SIO. What were some of the key takeaways that you hope to impart to clinicians as far as the involvement of AI in their practice?
Thanks for asking that. My take on AI in IO in general is that, if we look at our current state today, there are many, many point applications in the literature, or some on the market, and very small studies, but there are really very few FDA-cleared AI algorithms that we're using day-to-day in IO. However, the future, I think, is very bright. AI in IO is definitely expected to move from point tools to integrated procedural intelligence, and I feel very strongly that this is where the future is going to be for us. It's going to give us real-time decision support throughout the IO care continuum.
I think it's important to acknowledge that there are many challenges that remain, including that, in IO specifically, we have many small, heterogeneous datasets. Label scarcity is a very, very big problem—who's going to label the data? There is a lack of generalizability and external validation datasets, given how hard it is to share datasets across different sites. And right now, a lot of the way we're evaluating AI tends to be on process measures, not really on clinical utility endpoints. And this is where the impact of these tools really is. So, shifting how we report the impact of these tools to measure clinical utility endpoints, I think, is going to be very, very important. Moving forward, we really need to have prospective IO registries and pragmatic AI clinical trials to start looking at time gains, margins, retreatments, complications, and outcomes, where we can really start moving the needle in terms of having AI improve the care of IO patients.
Is there anything else you'd like to share with our audience?
I think the future is very exciting. AI is a field that's advancing very quickly, and we are going to get to a point where people who are using AI will surpass people who are not using AI. I'm a big believer that AI will definitely help us improve patient care with all the advances that we're seeing today.