
Artificial Intelligence in ePCR Writing: To AI or Not To AI?

December 2025

To best answer the fundamental question about the use of artificial intelligence (AI), it is important to fully understand the different types of AI that exist and are at the disposal of your crewmembers. Some e-PCR products use a type of internal “generative AI” writing to assist the crewmember in creating a narrative. More on these nuances in a minute. Alternatively (and outside the confines of the e-PCR capabilities directly), crewmembers can use external AI features (such as ChatGPT) to solicit assistance in writing the narrative. 

Let’s start with the basic concept, whereby some version of AI is directly integrated into the e-PCR software itself. In most cases this is a “closed” model, meaning the software assists in writing the narrative simply by pulling information from other data elements already populated in the e-PCR.

The benefit of such an approach is that it does not create information, but simply writes the narrative from information that already exists within the confines of the e-PCR. This, of course, means the result is constrained by the completeness and accuracy of those other data elements.

For instance, the auto-generate feature may simply place information (such as dispatch condition, patient age and sex, patient complaint, response time, response mode, etc.) into sentence and paragraph form. When any of those data elements is not entered, the resulting auto-generated narrative is incomplete. That is, if patient age, sex, and complaint are missing, the narrative may read something like this: “Responded immediately to 123 Main Street for dispatched condition of difficulty breathing. On arrival, found [blank] year old [blank] with chief complaint of [blank].” This odd-looking sentence is a telltale sign that some degree of AI was used.
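To make the mechanics concrete, here is a minimal sketch (in Python) of how such a “closed,” template-based generator behaves. The field names and template are hypothetical, not taken from any actual e-PCR product; the point is that the tool only rearranges data already in the record, so any missing field surfaces as a blank.

# Minimal sketch of a "closed," template-based narrative generator.
# Field names and the template are hypothetical; real e-PCR software
# uses its own schema. Nothing is invented: the tool only rearranges
# data already in the record, and missing fields surface as blanks.

TEMPLATE = (
    "Responded {response_mode} to {incident_address} for dispatched "
    "condition of {dispatch_condition}. On arrival, found {patient_age} "
    "year old {patient_sex} with chief complaint of {chief_complaint}."
)

FIELDS = (
    "response_mode", "incident_address", "dispatch_condition",
    "patient_age", "patient_sex", "chief_complaint",
)

def generate_narrative(record: dict) -> str:
    # Substitute each field; anything not documented becomes "[blank]".
    filled = {field: record.get(field) or "[blank]" for field in FIELDS}
    return TEMPLATE.format(**filled)

# Age, sex, and complaint were never entered, so the output reads
# exactly like the odd-looking sentence quoted above.
print(generate_narrative({
    "response_mode": "immediately",
    "incident_address": "123 Main Street",
    "dispatch_condition": "difficulty breathing",
}))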

A more complex scenario arises when the AI tools (even those embedded into the software) rely on “open,” internet-connected features. This means the software can take steps to “fill” any holes in the pertinent information based on similar situations: rather than drawing on this patient’s record, the model reproduces patterns learned from vast amounts of outside text (typical interventions, trends, or vital signs associated with a given patient complaint).

With such “open” AI narrative writing, there may no longer be blanks or holes in the e-PCR narrative, but the narrative might instead be filled with incorrect or inconsistent information. That is, the AI software inserts information that it predicts is (or should have been) appropriate, regardless of its accuracy.

An equally complex (and sometimes scary) proposition also arises when outside software is used. For instance, a crewmember could realistically enter a prompt into software such as ChatGPT along the lines of “write me an ambulance narrative for an 87-year-old female patient who experienced multiple falls and has decreased mentation.”

The resulting text will likely be thorough and include all sorts of information that may or may not be accurate. The ChatGPT software can (and will) generate fabricated data such as vital signs, a pain score, interventions performed, complaints, a GCS score, and a wealth of other information. This fabricated information reflects what is typically done (or should have been done) in similar situations—and has little to do with what really happened.

In essence, the ChatGPT tool draws on the enormous body of text it was trained on to reproduce what similar situations look like, effectively borrowing clinical details typical of other patient encounters. The problem, of course, is that such information may not be an accurate depiction of what really occurred during this patient encounter.

Please keep in mind that the shortcomings highlighted above are not intended as a criticism of AI or ChatGPT. Indeed, many millennial and Gen Z crewmembers are well-versed in using AI for a variety of purposes—from finding low-carb recipes to learning how to tie a necktie. AI is (fortunately or unfortunately) part of our culture and is (likely) not going away. It is imperative to recognize that AI will, no doubt, be used by crewmembers, so it becomes important to figure out how it can be used properly.

First things first: AI should be treated as nothing more than a writing tool, akin to a mnemonic such as SOAP or CHART. AI is never a substitute for human involvement, review, and editing. Indeed, some e-PCR software companies tout the AI and generative report-writing capabilities embedded in their software, claiming that such features help save time.

While that is certainly possible (and true—especially for crewmembers who might be poor writers), the most critical part of using such AI features is that the resulting narrative be human-reviewed for accuracy and completeness. If AI is simply used to create the narrative, with no subsequent human review or involvement, the resulting PCR can be grossly inaccurate. This can lead to liability problems and/or allegations of malpractice—not to mention compliance risks and reimbursement issues.

Another concern involves HIPAA. When “open,” internet-connected resources are used, the information a crewmember enters does not stay local: the text of the prompt is transmitted to an outside server, and some services retain those inputs or use them to train future models.

Obviously, releasing PHI about a particular patient encounter to the internet is a textbook violation of the Privacy Rule. The problem is that this can occur unwittingly: the crewmember never intended, maliciously or otherwise, to release PHI about a patient encounter into the ether of the web.

Since the internet is a two-way street, when crewmembers seek information from the internet, the software might also be sharing information with it. Today’s PCR becomes tomorrow’s guidepost for AI generating future PCR narratives. To the extent patient identifiers are released to the open internet, HIPAA is violated.
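For agencies that do permit outside tools, one practical safeguard is a local scrub of obvious identifiers before any text leaves the agency’s systems. The short Python sketch below illustrates the idea; the patterns and the sample prompt are hypothetical, and a simple regex pass is not, by itself, sufficient for HIPAA de-identification (the Safe Harbor method requires removal of all 18 identifier categories, and the alternative is expert determination).

import re

# Illustrative patterns only; a regex pass catches obvious identifiers
# but does NOT, by itself, satisfy HIPAA de-identification standards.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b\d+\s+[A-Z][a-z]+\s+(?:St|Ave|Rd|Blvd|Dr)\b"), "[ADDRESS]"),
]

def scrub(text: str) -> str:
    # Replace each matched identifier with a neutral placeholder so the
    # outbound prompt carries no direct identifiers.
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Pt found at 123 Main St, DOB 4/12/1938, callback 555-867-5309."
print(scrub(prompt))
# -> Pt found at [ADDRESS], DOB [DATE], callback [PHONE].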

In summary, we are not yet ready to pronounce whether AI is “good” or “bad” for EMS. While it has its merits, it also has its drawbacks. The best way to use AI effectively is to not let it become a crutch, and to never rely solely on what it spits out. AI should be an assistive tool in the narrative-writing process, one that produces an initial draft which is then carefully reviewed and revised by the crewmembers for accuracy.

So long as that degree of review and revision occurs, AI can have a place in EMS. However, once that level of human involvement is lost, and the e-PCR narrative is no longer an accurate depiction of the encounter, billing, reimbursement, compliance, legal, moral, and ethical issues arise. It is up to your organization to decide what level and degree of AI tooling will become part of the workplace. Those requirements, constraints, and limitations must become part of a written policy shared with all crewmembers.

If you choose to ban the use of AI completely, be prepared for some backlash. If you choose to allow AI, ensure that sharing with “open,” internet-connected services is kept to a minimum, that PHI does not travel from your systems to the internet, and that any AI-generated language is thoroughly reviewed and revised by humans so that the resulting e-PCR narrative is thorough, accurate, and honest.


About the Author

Daniel Pedersen is a partner with Werfel, Moore & Kelly, LLP. The WMK Law Group is a law firm dedicated to providers of EMS, ambulance, mobile integrated healthcare, non-emergency medical transportation (NEMT), and the software and billing companies that support these industries. This article is not intended as legal advice. For more information, he can be reached at dpedersen@wmklawgroup.com.