AI in the Front Office: What You Need to Know
© 2025 HMP Global. All Rights Reserved.
Any views and opinions expressed are those of the author(s) and/or participants and do not necessarily reflect the views, policy, or position of Podiatry Today or HMP Global, their employees, and affiliates.
Key Clinical Operations Insights
- Podiatry practices and AI adoption: Early 2025 data show 22% of healthcare organizations use AI tools (health systems 27%, outpatient facilities 18%, medical groups 19% for chatbots/virtual assistants). Adoption remains cautious due to concerns about patient safety, accuracy, HIPAA compliance, and maintaining human interaction.
- Front-office use cases: Useful applications include AI voice agents for 24/7 phone answering/scheduling, AI-powered appointment reminders with two-way rescheduling, automated intake/forms, and limited symptom guidance (non-diagnostic; requires clear disclaimers).
- Implementation guidance: Start with HIPAA-compliant reminder systems (can reduce no-shows by 20–30%), integrate with EHR/PMS, pilot in small steps, use an AI-first/human-backup workflow, ensure transparency, and avoid automating tasks requiring empathy, clinical judgment, emergency triage, billing complexity, or long-term patient relationships.
Transcript
Jennifer Spector, DPM: Hello, everybody, and welcome back again to Podiatry Today Podcasts. Here we bring you, as always, the latest in foot and ankle medicine from leaders in the field. In this episode, we're bringing back yet again, Dr. Jim McDannald. He's the founder and owner of Podiatry Growth. And he shares with us a lot about the cutting edge of new information on practice management, especially in the digital world.
So today we're going to start talking by getting into a little bit about AI tools, but at the front desk. We hear about AI tools all of the time. But I think this is a really interesting lens that I can't wait to hear about the real-world use and maybe some of the limitations from somebody with his expertise. So welcome back again, Jim.
Jim McDannald, DPM: Thanks. I'm happy to be back.
Jennifer Spector, DPM: So we've talked before about AI and podiatric practice management, but today we're diving into that specific component that we haven't yet talked about, the front office. What's your sense of the podiatric community's outlook on this? And before we dive into the details, what do you think they're feeling? Do you think they're excited about opportunities here, skeptical, confused?
Jim McDannald, DPM: I think it's a real mix, to be honest. I'd say the podiatric community is cautiously curious with a healthy dose of skepticism, which frankly is exactly where they should be right now. From the conversations I've had with colleagues, there's definitely excitement about the potential. Podiatrists are running small businesses while trying to provide excellent patient care, and anything that promises to reduce administrative burden gets people's attention. The idea that an AI assistant can handle appointment scheduling while they're in surgery is an exciting promise; whether it's real or not is another question.
There's also confusion, though, about what AI actually is versus what it's marketed as. Every software company is slapping "AI-powered" on their products, and it's hard to know what's genuinely useful versus what's just a buzzword. And then there's legitimate skepticism: podiatrists are thinking about patient safety, HIPAA compliance, and the quality of patient interactions. They're worried about whether a tool will give patients bad information, whether a voice agent will understand their patients, and whether it will create more problems than it solves.
The broader healthcare landscape is interesting, though. Research shows that 22% of healthcare organizations have implemented AI tools, about a sevenfold increase over just last year. Healthcare is actually ahead of most other industries in AI adoption. Health systems are leading at 27% adoption, while outpatient facilities are at about 18%. But here's the big gap: only 19% of medical group practices had integrated chatbots or virtual assistants for patient communication as of early 2025, which is still relatively small. So podiatrists are watching and waiting to see what actually works, and they're smart to approach things thoughtfully rather than jumping on every new technology that comes along.
Jennifer Spector, DPM: Well, speaking of the new technology, could you share some of the AI tools that might be of use for a podiatric front office?
Jim McDannald, DPM: Yeah, absolutely. Let me focus on the practical applications that I think make the most sense for podiatry practices. First and foremost, I think the most impactful are AI voice agents for phone answering and appointment scheduling. This is a big one. Instead of staff members having to answer every call, an AI voice agent can handle incoming calls 24/7, answering common questions and actually scheduling appointments.
Here's why this matters. People don't like leaving voicemails, and they don't want to wait for a callback. When someone has heel pain that's bad enough to call for help, they want to get on the schedule right away. An AI phone system can do that, even if it's 10 p.m. and your office is closed. And the AI isn't just pulling answers from the internet or out of thin air; it's trained on your clinic's own protocols. It takes time to build that information into the agent, but it can be very beneficial, not only for the practice but for the patients.
I'd say second is automated appointment reminders and confirmations. This isn't brand-new technology, but modern AI-powered systems can do more than just send out an automated text. They can have two-way conversations with patients: asking whether a patient can make a 2 p.m. appointment tomorrow and, if not, inviting them to reply with a better time. The AI handles that back-and-forth, which reduces no-shows, frees up staff time, and can book patients into some of the slow times in a clinic's calendar.
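As a rough illustration of that two-way back-and-forth, a reminder system has to sort a patient's free-text reply into "confirmed," "wants to reschedule," or "a human should look at this." This is only a sketch under assumptions; all names here (`classify_reply`, `ReminderAction`) are hypothetical, not any real vendor's API, and a production system would use far more robust intent detection:

```python
# Hypothetical sketch of triaging a patient's text reply to an
# appointment reminder. Function and enum names are illustrative.
from enum import Enum

class ReminderAction(Enum):
    CONFIRMED = "confirmed"          # keep the slot as booked
    OFFER_RESCHEDULE = "reschedule"  # open the rescheduling dialog
    HAND_TO_STAFF = "staff"          # reply is unclear; a human follows up

def classify_reply(text: str) -> ReminderAction:
    """Very rough keyword triage of a free-text reply."""
    msg = text.strip().lower()
    if msg in {"yes", "y", "see you then"} or "confirm" in msg:
        return ReminderAction.CONFIRMED
    if any(w in msg for w in ("reschedule", "can't", "cannot", "another time")):
        return ReminderAction.OFFER_RESCHEDULE
    return ReminderAction.HAND_TO_STAFF
```

The important design point is the third branch: anything the system can't confidently classify falls through to staff rather than being guessed at.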
I'd say a third is intake automation. AI can help patients fill out forms before they arrive, ask follow-up questions if something's unclear, and flag anything that needs staff attention. This speeds up check-in and gives your team better information to work with. Right now there are a lot of online forms, and those can be onerous. Maybe it's my grandmother, who's in her 90s and isn't going to fill out her paperwork on a laptop or a phone. If an AI were to call her and ask for her updated medical history, or call someone who simply doesn't want to type everything in, it can use transcription to fill in the patient's chart or capture more information about why the patient is coming in for a visit.
And fourth, I'd say basic triage and symptom guidance is more of a future promise. This is one where people have to be really, really careful right now. AI can help patients understand whether they have an urgent need for care or whether a regular appointment is sufficient, and it can provide general information about common foot conditions. But it should never be diagnosing, and there should be very clear disclaimers.
Those are the four areas where I think people are most interested in AI from a front-desk perspective, and where it can take some of the burden off your staff in answering phones and handling paperwork. The key with all of these is that they're handling routine, repetitive tasks. They're not replacing clinical judgment. They're freeing up your staff to focus on the things that actually require a human touch, like helping an anxious patient, handling a complex insurance question, or managing a scheduling conflict that needs critical thinking.
Jennifer Spector, DPM: Do you think there’s anything else that the audience should know about how this realistically fits into workflow and how patients typically respond to these things?
Jim McDannald, DPM: I'd say the integration is actually the tricky part. This is where a lot of practices struggle. The most reliable approach is to start small and layer in AI tools gradually. Don't try to automate everything all at once; that's a recipe for disaster.
Here's how I think about it. Identify the pain points first. Is it phone calls that go unanswered during lunch? Start with an AI voice agent that handles calls during those specific hours, whether that's lunch or after hours; it doesn't have to run while your staff is still in the clinic. Is it no-shows? Then smarter appointment reminders might make sense.
For AI to work well, it also needs to integrate with your existing systems. An AI tool is no good as a standalone product that doesn't connect to your practice management software and your EHR; it will create more work, not less. Your AI appointment scheduler needs access to your actual schedule in real time. Otherwise you'll be double-booking, you'll have misplaced and missed appointments, and it will be a nightmare for your staff. Making sure these tools are well integrated is really, really important.
One pattern I've seen work is an AI-first, human-backup model. The AI handles the routine stuff: scheduling, basic questions, reminders. But there's always a clear path to a real person when needed. The AI should be smart enough to recognize when it's out of its depth and hand things back to your staff, and there are ways to make that happen.
For a practical example, a patient calls asking whether they need to see a podiatrist for their heel pain. The AI can gather information: how long have they had the pain, is it worse in the morning, have they tried anything for it. Then it can schedule an appointment at the appropriate time, or say something like, "Based on what you're describing, I'd like to connect you with one of our staff members who can help you better than I can." That handoff should be seamless.
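The escalation logic behind that AI-first, human-backup pattern can be sketched in a few lines. This is a minimal illustration under assumptions: the keyword list, confidence threshold, and function name are all hypothetical, and real voice-agent platforms implement this with much richer signals than substring matching:

```python
# Hypothetical sketch of AI-first routing with a human-backup escape hatch.
# Keywords, threshold, and names are illustrative assumptions.
URGENT_FLAGS = ("severe pain", "infection", "fever", "numb", "open wound")

def route_call(transcript: str, intent_confidence: float) -> str:
    """Return 'ai' to continue the automated flow, or 'staff' to hand off."""
    text = transcript.lower()
    # Escalate anything that sounds urgent or clinically concerning.
    if any(flag in text for flag in URGENT_FLAGS):
        return "staff"
    # Escalate when the agent is not confident it understood the request.
    if intent_confidence < 0.75:
        return "staff"
    return "ai"
```

The design choice worth noting is that both failure modes (urgent content and low confidence) route to a person, so the default behavior when anything is uncertain is a handoff, never a guess.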
The workflows that fail are the ones that create friction, like I mentioned. If patients get stuck in an AI loop and can't reach a human, you've lost them. If the AI gives information that contradicts what your staff says, you've eroded trust, not only with the patient but in your staff. If it takes longer to use the AI system than to do things manually, your staff will abandon it. This is why I recommend a real phased-in approach. Pick one workflow, implement AI there, get it working smoothly, make sure your staff is trained up, get feedback from patients, and adjust as needed. Like anything else, there will be some friction with the change, but if you can move to a more efficient workflow, your staff and your patients will benefit.
Jennifer Spector, DPM: You mentioned AI tools not replacing personal service. How do you think practices can ensure that these AI tools are indeed enhancing the services they provide to patients, and not just replacing them and making them too automated?
Jim McDannald, DPM: Yeah, like you mentioned, too much automation can be a real problem. Let me be direct about what should never be automated: anything involving a patient's emotional well-being or complex clinical decision-making. That's the staff's and the doctor's job. A patient calls clearly distressed; maybe they think they have an infection, maybe there's severe pain, maybe they're worried about losing their job because they can't stand for their shift. That needs a human interaction immediately. No AI system, no matter how sophisticated, should be handling that conversation.
Here's a rule of thumb: if the interaction requires empathy, clinical judgment, or the ability to read between the lines, it needs a human. Let me give a few examples of things that should never be fully automated.
First is handling patient complaints or concerns about care. If someone's unhappy with their experience, they need to talk to a real person who can understand the nuance, apologize genuinely and appropriately, and make things right. An AI definitely can't do that.
Second, complex insurance or billing questions. Maybe AI can handle this at some point in the future, and right now it might be able to answer something like, "Do you take Blue Cross Blue Shield?" But when someone's asking about coverage for a specific procedure, why their claim was denied, or how their deductible works, that requires human expertise and the ability to navigate a complicated scenario.
Third is emergency triage. If someone calls with an urgent medical situation, you need a trained human making the assessment about whether they need to come in immediately, go to the ER, or can wait. I know there are AI triage tools out there, but this is where the stakes are too high to automate.
And fourth is building relationships with established patients. When Mrs. Johnson, who's been your patient for 15 years, calls, she should be greeted by someone who knows her name. That personal connection is valuable, so you don't want to automate everything away.
The practices that get this right are the ones that use AI strategically for specific pain points but maintain strong human touch points throughout the patient experience.
Jennifer Spector, DPM: And you talked a lot about what should never be automated and really focused on some great points there. Are there any other compliance or ethical considerations that you've observed when it comes to AI tools in the front office?
Jim McDannald, DPM: Yeah, absolutely. This is crucial, and honestly it's where I think a lot of practices could get into trouble if they're not careful. First and foremost is HIPAA compliance. This is non-negotiable. If your AI tools have access to patient information, and most of them will, even if it's just names and appointment times, you need a business associate agreement with that vendor. The AI system needs to be HIPAA compliant, which means proper encryption, secure data storage, and clear policies about how patient information is used and protected.
One thing that concerns me is that some tools are cloud-based and learning from the data they process. So you need to understand: is your patient data being used to train an AI model? Is it being shared with third parties? What happens if there's a data breach? These questions need clear answers from your AI vendor.
Second is accuracy and liability. Your AI might give patients incorrect information: wrong office hours, wrong instructions about prep for a procedure, or, worse, medical advice, which it shouldn't be giving at all. The AI is working on behalf of your practice, so any mistakes are ultimately your responsibility. This is why AI needs to be extremely well scripted and thoroughly tested, and it needs clear disclaimers, like, "I am an AI assistant providing general information. For medical advice, please consult with our podiatry team."
Third is consent and transparency. I think practices should be upfront about using AI. Don't try to trick people into thinking it's a human receptionist. Patients deserve to know they're interacting with AI. Most people are fine with it for routine tasks; they just want transparency.
Last but not least, the fourth thing is data privacy. Where are the AI agent's call recordings stored? Where are the chat logs kept? Patients have a right to know how their information is being used and stored, so this is really important from an ethical standpoint.
Your obligation is to use AI in ways that improve patient care and access, not to prioritize cost savings at the expense of quality or safety. If you're using AI to avoid hiring adequate staff and patients are suffering because of it, that becomes an ethical problem. The practices doing things right are using AI as a tool to enhance their service, not as a replacement for human experience or compassion. They're monitoring outcomes, gathering feedback, and adjusting their approach based on what's working for patients.
Jennifer Spector, DPM: Well, and as we close out this episode today, if a practice wants to dip their toes into AI at the front desk, based on everything we've chatted about today, what do you feel is the safest or the first step they might take?
Jim McDannald, DPM: Yeah, this is a great question. If I'm advising a podiatry practice that wants to start experimenting with AI, here's exactly what I'd recommend: start with automated patient reminders and confirmations. This is the lowest-risk, highest-impact place to begin. Here's why it's the safest option. It's a clearly defined task with low stakes; you're not asking AI to make clinical judgments or handle complex patient interactions, just to send reminders and collect responses. The technology is mature and proven, it integrates relatively easily with most practice management systems, and there's minimal compliance risk if you choose a HIPAA-compliant tool.
Here's why it's impactful: no-shows are expensive. Every empty appointment slot is lost revenue for your practice, and a good reminder system can reduce no-shows by 20 to 30 percent. That adds up quickly. Plus, modern AI-powered reminders can do more than old-school robocalls or text messages. They can have real two-way conversations. Like I said, they can ask whether you can make your appointment at 2 p.m. and, if not, reschedule it or fill an open slot. So here's how I'd implement it.
First, pick a HIPAA-compliant vendor. Don't go with the cheapest option. Look for established companies that work specifically with healthcare practices, and make sure they're willing to sign a business associate agreement and are familiar with these requirements.
Second, start with a small pilot. Don't roll it out to the entire practice immediately. Pick one provider's schedule, maybe one day out of the week, and test it. Work out any kinks, get staff feedback, and see how patients respond.
Third, monitor results. Track your no-show rate before and after. Are you seeing an improvement? Are patients responding positively? Are there any glitches, like appointments being confirmed that shouldn't be, scheduling conflicts, or patients being confused by the system?
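The before-and-after comparison described here is simple arithmetic, but it's worth being precise about. A minimal sketch, with the specific counts chosen purely as an example:

```python
# Hypothetical before/after no-show tracking for a reminder pilot.
# The slot and no-show counts below are made-up example numbers.

def no_show_rate(no_shows: int, booked: int) -> float:
    """Fraction of booked slots that ended as no-shows."""
    return no_shows / booked if booked else 0.0

def relative_reduction(before: float, after: float) -> float:
    """Fraction by which the rate dropped (0.25 == a 25% reduction)."""
    return (before - after) / before if before else 0.0

before = no_show_rate(30, 200)  # 15% no-show rate before the pilot
after = no_show_rate(22, 200)   # 11% during the pilot
print(f"relative reduction: {relative_reduction(before, after):.0%}")
```

With these example numbers the relative reduction works out to about 27%, which is the kind of figure to compare against the 20 to 30 percent range mentioned above; note the distinction between a 4-point absolute drop and a 27% relative reduction, since vendors often quote the latter.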
Fourth, train your staff. They need to understand how the system works, what to do when patients call with questions about the reminders, and how to override or adjust things. Once you've got appointment reminders working smoothly, then you can think about the next step. For most practices, that would be an AI-powered voice assistant, something that lets patients book appointments 24/7 through your website, by phone, or by text. That can be a really helpful thing.
But here's the thing that's really, really critical: don't rush it. People get a lot of AI FOMO these days; they feel like they're behind and have to do everything at once, and that's the wrong approach. The practices getting into trouble with AI are the ones that try to implement too much, too fast, without proper testing or training. So take your time and do it right. Make sure each tool is actually solving a problem you have, not adding another one.
And I would say AI is supposed to make things easier, not harder. If a tool is creating more work for your staff, causing more patient complaints, or not delivering on the promised benefits, don't be afraid to pull back and reassess. The goal isn't to have the most AI-powered practice. The goal is to run an efficient practice that provides excellent patient care. AI is just a tool to help you get there. Use it wisely, implement it carefully, and keep the focus on what matters: taking great care of patients and building a successful practice.
Jennifer Spector, DPM: Definitely. I think that's what everyone is really hoping to do is to be able to use these tools to be able to focus on what really matters the most, and that's the patient care. And thank you so much for sharing your insights with us today. We're so grateful to have you with us as always. And thank you to our audience for joining us again today. You can find this and other episodes of Podiatry Today Podcasts on our website, podiatrytoday.com, and on your favorite podcast platforms like Apple, Spotify, and SoundCloud. We'll see you next time.


