Conference Coverage

Transforming Oncology Pathology With Artificial Intelligence


In part 2 of this interview, Brian Anderson, MD, president and CEO of the Coalition for Health AI (CHAI), discusses opportunities and challenges associated with integrating artificial intelligence (AI) into pathology and oncology practice. He emphasizes opportunities in diagnostic support and clinical decision-making, alongside challenges with data quality and infrastructure.

Dr Anderson gave the keynote presentation on this topic at the 2025 College of American Pathologists (CAP) Meeting in Orlando, Florida. 

Transcript:

I'm Brian Anderson, president and CEO of the Coalition for Health AI, or CHAI.

What are some realistic first steps pathology/oncology departments can take to integrate digital pathology, without overwhelming existing systems?

Any new technology is going to bring some level of disruption, so I think it's fair to say that if you're going to enter this space, you should expect some. The challenge is how to use this tool, because AI is a tool, in a way that empowers the providers, the pathologists, the surgeons, the oncologists, and the patients.

Several things are important to think through. One, as I mentioned, having technical infrastructure that supports this is going to be critical. A lot of health systems don't have the technical infrastructure to upload and store hundreds or thousands of high-resolution, high-quality images, or to take those images and present them to an AI model. So there are real infrastructure costs, both in purchasing a tool and in the technical infrastructure around it. In addition, there are all the challenges around image resolution, whether the image is blurry or high quality, whether the lumens are at the right level; there's all of that surrounding infrastructure to think about.
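
To put the infrastructure question in rough numbers, the following is a back-of-envelope sketch (an editorial illustration, not from the talk). The per-slide size, daily volume, and retention period are all assumptions; compressed whole-slide images commonly run on the order of a gigabyte or more each, but sizes vary widely by scanner, magnification, and compression.

```python
# Back-of-envelope storage estimate for a digital pathology archive.
# All inputs below are assumptions for illustration only.

AVG_SLIDE_GB = 1.5      # assumed average compressed whole-slide image size
SLIDES_PER_DAY = 200    # assumed lab volume
RETENTION_YEARS = 10    # assumed retention policy

total_slides = SLIDES_PER_DAY * 365 * RETENTION_YEARS
total_tb = total_slides * AVG_SLIDE_GB / 1024

print(f"{total_slides:,} slides -> ~{total_tb:,.0f} TB of primary storage")
# With these assumptions: 730,000 slides -> ~1,069 TB, i.e. roughly a
# petabyte before replication, backups, or derived tiles for inference.
```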

Additionally, in deploying the tool, you need to be sure you're training your providers to use it appropriately, because it is a tool. You wouldn't just hand a GI doc an endoscope without training them on how to use it if they'd never used an endoscope before; that would be malpractice, that would be dangerous. As we think about deploying these AI tools, it's not as simple as turning one on so that a little box pops up on the provider's screen with a recommendation and the provider decides what to do with it. That's not how this should be done. There needs to be onboarding and robust training, possibly even a certification, on how providers should use these tools: process, people, change management.

The other part is around monitoring, and this is something I see a lot of health systems struggle with. Everyone's very excited to purchase, procure, and deploy these tools and get their providers using them. But even if you've done everything I've described up to now perfectly, you have the infrastructure, you've trained your providers, there is a third and important part: monitoring.

I mentioned earlier that if you have a model trained on pristine, ideal data but then deploy it in the real world, in a local setting where the data is not pristine and the image quality is not similar to the training dataset, the performance of that model in that local environment is going to be suboptimal. And if you don't know what that model's performance is, you're putting your patients and your providers at risk: one, because providers are the party that is ultimately legally liable in using any kind of tool to deliver care, and two, because if you think you bought a model with an AUC of 0.98, but when it's deployed locally and you actually start monitoring it, it has an AUC of 0.48, worse than a coin flip, that should give you significant pause. You should take that model offline and not use it until you understand the root cause and can retrain it, tune it, or do something different about how the images you're presenting to the model are being created. So monitoring models in an ongoing way is critical, and having the infrastructure in place to do that monitoring is important.
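
As a rough illustration of the ongoing monitoring Dr Anderson describes, the sketch below (an editorial example, not from the talk) compares a vendor's advertised AUC against the AUC observed on locally labeled cases. The alert margin and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

VENDOR_CLAIMED_AUC = 0.98   # AUC advertised by the model developer
ALERT_MARGIN = 0.10         # assumed governance threshold for escalation

def monitor_local_auc(labels, scores):
    """Compare locally observed AUC against the vendor's claim.

    labels: ground-truth outcomes (0/1) from pathologist sign-out
    scores: the model's predicted probabilities on the same cases
    """
    local_auc = roc_auc_score(labels, scores)
    if local_auc < VENDOR_CLAIMED_AUC - ALERT_MARGIN:
        print(f"ALERT: local AUC {local_auc:.2f} is well below the "
              f"claimed {VENDOR_CLAIMED_AUC:.2f}; consider taking the "
              f"model offline pending root-cause review.")
    else:
        print(f"Local AUC {local_auc:.2f} is within the expected range.")
    return local_auc

# Synthetic data standing in for a month of signed-out cases: an
# uninformative model scores near AUC 0.5, so the alert fires.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)
scores = rng.random(500)
monitor_local_auc(labels, scores)
```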

We've seen a lot of health systems begin to struggle with this and navigate it. Those that are navigating it successfully are often looking to third-party technology companies rather than building all of the infrastructure, dashboards, and technical environments themselves: finding a technology partner that already has those tools built, that has that expertise, and that can surface the insights to the decision makers in the health system. If we bought a model thinking it was 0.98 but it turns out to be something else, maybe not as bad as 0.48, maybe 0.8 or 0.85, the point is to have at least the information in place to make an informed decision as you monitor and govern this AI. So those are some of the things to consider.

When it comes to data quality and standardization, how are institutions overcoming variability (slide preparation, scanning, annotation) to ensure AI models are clinically useful? 

The strategies they're taking, I think, address everything I've said. When it comes to image quality, it's having devices that can capture images at high quality. When it comes to stains, it's having robust processes and certifications for the technical approaches involved in performing a stain, having the kind of infrastructure to support lots of models, and having the ability to transmit those models. I think all of those are important things to consider.

In terms of standardization, one of the things that I'm excited about is that the CLIA [Clinical Laboratory Improvement Amendments] approach makes sense. For a lot of these health systems, where you have a CLIA-certified lab, that certification stands out as an opportunity: the technical challenges I've mentioned, any non-standardization or non-optimal lighting, image capture, et cetera, can be addressed with a certification. I think there's an opportunity for pathology labs to build this kind of AI certification into their portfolio, meaning all of the steps that lead to optimal data being presented to the model, and the workflow optimization that makes sense, where the providers have been trained and are using the model in their cognitive decision-making in the optimal way, within the intended use for the model. And if you can certify all those steps and ensure rigorous adherence to them, I think you'll reduce the variance, improve the standardization, and ideally improve performance and outcomes for patients.
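
As one concrete example of a certifiable pre-analytic step like those described above, a lab might gate slide images on a simple sharpness metric before they ever reach a model. The sketch below is an editorial illustration, not from the talk; it uses the variance of the Laplacian, a common blur heuristic, and the threshold is an assumption that would need local calibration per scanner.

```python
import cv2  # OpenCV

# Assumed threshold; in practice this would be calibrated per scanner
# against slides a pathologist has judged acceptable.
BLUR_THRESHOLD = 100.0

def passes_sharpness_check(image_path: str) -> bool:
    """Reject blurry slide images before they are presented to a model.

    Uses the variance of the Laplacian, a standard focus heuristic:
    sharp images have strong edges and therefore high variance.
    """
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(f"Could not read image: {image_path}")
    sharpness = cv2.Laplacian(image, cv2.CV_64F).var()
    return sharpness >= BLUR_THRESHOLD

# Example usage: only forward tiles that pass the gate.
# if passes_sharpness_check("slide_tile_001.png"):
#     run_model("slide_tile_001.png")  # hypothetical downstream call
```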

As concerns grow about bias in AI models, particularly those trained on limited datasets, what safeguards are being developed to keep AI tools in pathology equitable and trustworthy across diverse patient populations? 

So, [there are] 2 answers to this. One is that we just need to make more datasets available. By that I mean that, oftentimes, the data used to train a lot of these models comes from specific geographies or specific academic medical centers. It's great to have that kind of high-quality data to train models on, don't get me wrong, but the challenge is not having a heterogeneous set of data coming from different geographies. I'll be honest, a lot of this data comes from urban environments and [academic medical centers] that are not representative of rural America, and that's important because if you don't have data that comes from those specific populations of people, a model deployed and used in those settings will likely not perform as well.

When we think about the intended use of a model, it is important for a model developer to be as transparent as possible in informing the customer, the end users, the providers, and the patients: How was this model trained? Where did the data it was trained on come from? What types of patients, and what types of images, was the model trained on? Then, secondly and importantly, what is the intended use? Say I have a model trained to identify breast cancer in middle-aged women of African American descent, and as a developer I publish and advertise its AUC at 0.98, as an example, but the model's described intended use is all middle-aged women. The risk is that if that model is then used on non-African American women, it may perform really poorly, but you don't know that because the intended use wasn't clear. Being very clear about the intended use is important.
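
A straightforward way to surface exactly this kind of gap is to evaluate a model by subgroup rather than only in aggregate. The sketch below is an editorial illustration, not from the talk, using hypothetical validation data; the column names and values are assumptions.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical local validation results: one row per case, with the
# pathologist-confirmed label, the model's score, and a subgroup field.
df = pd.DataFrame({
    "label":    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "score":    [0.9, 0.1, 0.8, 0.2, 0.95, 0.15,
                 0.6, 0.55, 0.4, 0.5, 0.45, 0.6],
    "subgroup": ["A"] * 6 + ["B"] * 6,
})

# An aggregate AUC can hide a subgroup where the model barely works.
print(f"Overall AUC: {roc_auc_score(df['label'], df['score']):.2f}")
for name, grp in df.groupby("subgroup"):
    auc = roc_auc_score(grp["label"], grp["score"])
    print(f"Subgroup {name}: AUC {auc:.2f} (n={len(grp)})")
```

With these toy numbers, the overall AUC of about 0.82 masks a subgroup where the model performs worse than chance, which is precisely the intended-use risk described above.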

Then, on the other side of this, it is important for providers to use models for their intended use and not outside of it. This is similar to using a therapeutic, a drug, on-label versus off-label. Generally, the only time providers today use a drug off-label is when there is actual evidence and signs of benefit, but we don't have those signs yet for AI; we haven't generated that evidence, because AI is so new. And this gets to the final point, which is that we need to be able to monitor how these models are performing. If, as a health system, you're not actually testing, evaluating, and ultimately monitoring how these models perform, you're putting yourself at risk. It's really important to have a robust ecosystem of testing and evaluating models to inform health systems before they make a procurement decision: do I want to buy this model or that model? Because as a provider, at the end of the day, I want to know how this model is going to perform on patients like the one I have in front of me. The health system, in the procurement decision, is in the best position to go to the developer and say, hey, I'd like to have this model tested, maybe on patients that come from my community, or maybe by working with another health system that has data similar to mine, because I'd like to know how your model performs. And then, as I mentioned earlier, once you've made that procurement decision with that data, being able to monitor the model and maintain that high-quality performance is important too.

If you could give one piece of advice to oncologists preparing for the digital pathology era, what would it be?

It is important for pathologists, just as for any specialty, not to overlook the challenges of how to integrate these tools into our workflows, and I think it's important for the provider voice to be present in that. Oftentimes, I don't hear that voice in the conversations we have with technology companies or with health system executives. Having that end-user voice is important for a multitude of reasons, starting with the design of a product. High-quality technology companies design their products with the end user in mind, so how do we get the pathology voice into that conversation at the development of these models in the first place? And then, once a big health system purchases a model, how do we ensure that the end user's voice, the pathologist's voice, maybe the chair of pathology, is there when they think about implementing and configuring the tool, to ensure, again, that in these complex clinical workflows these tools are being used in the optimal, intended way in which they were developed? Having the voice of the pathologist at those steps is critical.

And then finally, making sure the pathologist ultimately knows how this model is performing, and performing on patients like the one they have in front of them. If you're examining a patient's image and you don't know, as a pathologist, how the model is performing on patients like that one, you're putting yourself at risk from a malpractice standpoint, from a legal standpoint. So pathologists need to be empowered with that kind of transparency around how these models perform. Advocating for and calling attention to these things as pathologists is really important.


Source: 

Anderson B. The perils and promise of AI and digital pathology — Myths, realities, and what it all means for pathology practice. Presented at CAP25. September 13-16, 2025; Orlando, FL. Keynote Address.