How Health Care Leaders Can Get Ahead of the Next Wave of Identity Fraud
Criminals are exploiting outdated security systems in health care with deepfakes, font manipulation, and synthetic identities—Ashwin Sugavanam, VP of AI and Identity Analytics at Jumio, reveals how the sector can fight back with artificial intelligence (AI)-driven defenses and smarter identity verification strategies.
What makes the health care sector uniquely vulnerable to the next wave of identity fraud, and why are bad actors increasingly targeting it now?
Ashwin Sugavanam: Health care institutions store vast amounts of personally identifiable information (PII), from Social Security numbers to insurance data, making them a prime target for identity fraud. Medical records, for example, fetch a higher price on the dark web than credit card numbers. This is because they enable various long-term fraud schemes, including insurance, Medicare, Medicaid, and even organ transplant fraud.
On an international scale, forged identities can be used to climb transplant waitlists, bypassing legitimate patients in urgent need. The sector's legacy systems and fragmented identity verification processes further compound these vulnerabilities.
How do injected selfies and deepfake attacks successfully bypass traditional facial recognition systems, and what should health care security teams do differently to defend against them?
Sugavanam: Injected selfies and deepfakes are becoming go-to tools for fraudsters because they can sneak right past traditional facial recognition systems. These attacks bypass the camera entirely, using virtual software to feed pre-recorded or artificial intelligence (AI)-generated faces into the identity check. Without a live person in front of the camera, even robust facial recognition can be fooled.
That’s where liveness detection comes in. Advanced systems today can analyze subtle cues, such as light reflections, facial micromovements, or reaction to prompts, to determine whether a real human is present. The most effective approach uses multimodal liveness detection, layering in visual and motion-based signals to create a much harder test for fraudsters to beat.
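The layering described above can be illustrated with a minimal sketch. The detector names, weights, and thresholds here are hypothetical, not Jumio's actual implementation; the point is that signals are fused so that an injected feed must defeat every modality at once.

```python
# Hypothetical sketch of multimodal liveness fusion: each detector
# returns a confidence in [0, 1]; signals are layered so a fraudster
# must defeat all of them simultaneously. Names, weights, and
# thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class LivenessSignals:
    light_reflection: float   # consistency of light reflections on skin
    micromovement: float      # involuntary facial micromovements detected
    prompt_response: float    # reaction to an on-screen prompt (e.g., a color flash)

def fuse_liveness(signals: LivenessSignals,
                  weights=(0.3, 0.3, 0.4),
                  threshold: float = 0.7) -> bool:
    """Weighted fusion: pass only if the combined score clears the bar
    AND no single modality is near zero -- an injected feed typically
    fails at least one check outright."""
    scores = (signals.light_reflection,
              signals.micromovement,
              signals.prompt_response)
    if min(scores) < 0.2:          # hard floor per modality
        return False
    combined = sum(w * s for w, s in zip(weights, scores))
    return combined >= threshold

# A pre-recorded injected video may mimic movement convincingly but
# cannot react to a live prompt, so prompt_response collapses:
live_result = fuse_liveness(LivenessSignals(0.9, 0.8, 0.85))      # passes
injected_result = fuse_liveness(LivenessSignals(0.9, 0.8, 0.05))  # rejected
```

The per-modality floor is the key design choice: a high average score cannot compensate for one failed signal, which is what makes the layered test harder to beat than any single check.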
Can you walk us through how font manipulation and background cloning tactics are used to fabricate IDs that can fool both human reviewers and AI systems?
Sugavanam: Fraud perpetrators are getting creative with how they forge identity documents, and one technique is especially tricky to catch: PII font manipulation.
Font manipulation involves making tiny changes to text on an ID. This includes switching out one character set for another that looks almost identical to the naked eye. Traditional AI systems, especially those using template matching, may not catch these subtle discrepancies.
The fix? Adopting document liveness models that can detect PII manipulation. These models verify security features such as holograms and flag photocopies, digital screen captures, and superimposed photo areas to establish document validity.
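The character-set swap described above can be illustrated at the text level. This is a simplified sketch, not a document-analysis model: real font manipulation happens in the rendered pixels of an ID, but the same homoglyph idea (a Cyrillic "О" standing in for a Latin "O") can be caught whenever extracted PII text mixes scripts.

```python
# Illustrative homoglyph check on extracted PII text: a legitimate ID
# field uses a single script, so mixed scripts suggest that one
# character set has been swapped for a near-identical lookalike.
# Real-world detection operates on document images, not strings.
import unicodedata

def scripts_used(text: str) -> set[str]:
    """Collect the Unicode script of each letter, e.g. LATIN, CYRILLIC."""
    scripts = set()
    for ch in text:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            scripts.add(name.split(" ")[0])
    return scripts

def looks_spoofed(text: str) -> bool:
    """Flag fields whose letters come from more than one script."""
    return len(scripts_used(text)) > 1

looks_spoofed("JOHNSON")        # all Latin -> not flagged
looks_spoofed("JOHNS\u041eN")   # Cyrillic О swapped in -> flagged
```

The two strings above look identical to the naked eye, which is exactly why template-matching systems and human reviewers miss the substitution.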
Advanced tools can also detect background cloning, a telltale sign of organized fraud rings. Dozens of fraudulent IDs may be captured in the same room, with the same lighting and background. If a system only reviews one ID at a time, those visual similarities go unnoticed.
Large fraud rings can be discovered by incorporating document liveness checks and network-based AI models that spot patterns across multiple submissions. These tools can flag repeated elements, uncover hidden relationships, and help reveal larger fraud networks operating under the radar.
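A toy sketch of that cross-submission idea, under stated assumptions: compute a tiny perceptual "average hash" of each ID photo's background region and group submissions whose hashes nearly match. The pixel data and submission IDs here are synthetic, and production systems use far richer image features.

```python
# Hypothetical network-level check: hash each submission's background
# pixels and flag pairs whose hashes nearly match -- the repeated-room
# signature of a fraud ring. Data is synthetic and illustrative.
from itertools import combinations

def average_hash(pixels: list[int]) -> int:
    """Toy perceptual hash: bit i is 1 if pixel i is above the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def flag_clusters(submissions: dict[str, list[int]], max_dist: int = 2):
    """Return pairs of submission IDs whose backgrounds nearly match."""
    hashes = {sid: average_hash(px) for sid, px in submissions.items()}
    return [(a, b) for a, b in combinations(hashes, 2)
            if hamming(hashes[a], hashes[b]) <= max_dist]

# Three IDs: two shot against the same wall, one genuinely different.
subs = {
    "id_001": [10, 12, 200, 198, 11, 13, 201, 199],
    "id_002": [11, 12, 199, 197, 10, 14, 200, 198],  # same background
    "id_003": [90, 10, 250, 5, 120, 60, 30, 220],
}
flag_clusters(subs)  # -> [("id_001", "id_002")]
```

Reviewing each ID in isolation, all three would pass; only the cross-submission comparison exposes the shared background.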
What role do AI-powered tools like liveness detection play in real-time fraud prevention, and how mature are these solutions today?
Sugavanam: Liveness detection has quickly become one of the most important tools in modern fraud prevention, and it’s no longer a work in progress. Today’s AI-powered systems don’t just ask a user to blink or turn their head. They go much deeper, analyzing how light plays on skin, measuring involuntary muscle twitches, or using colored lights to test real-time responsiveness.
These systems are mature, scalable, and battle-tested, especially in high-risk sectors like health care. By integrating liveness detection at key moments during the patient or provider journey, organizations can stop identity fraud without adding unnecessary friction for real users.
Looking ahead, what proactive steps should health care organizations take over the next 12 to 18 months to strengthen their identity verification strategies in light of these evolving threats?
Sugavanam: The digitization of health care has been a major focus in the last decade. However, this “scale now, secure later” mindset is resulting in major consequences for the security and safety of digital identities. Over the next 12 to 18 months, health care organizations should focus on scaling securely.
The first step involves moving from one-time checks to continuous adaptive trust. That means monitoring user behavior over time, flagging anomalies such as logins from new locations or suspicious activity, and applying just the right amount of friction when needed.
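The "right amount of friction" idea can be sketched as a simple risk-scoring policy. The anomaly names, weights, and thresholds below are invented for illustration; real deployments tune these against observed fraud patterns.

```python
# Minimal sketch of continuous adaptive trust (all names and weights
# are illustrative): session anomalies accumulate into a risk score,
# and the score decides how much friction to apply -- none, a step-up
# challenge, or a block pending review.

RISK_WEIGHTS = {
    "new_location": 30,
    "new_device": 25,
    "impossible_travel": 50,
    "failed_mfa": 40,
}

def risk_score(anomalies: list[str]) -> int:
    """Sum the weights of every observed anomaly; unknown ones score 0."""
    return sum(RISK_WEIGHTS.get(a, 0) for a in anomalies)

def required_friction(anomalies: list[str]) -> str:
    """Map the risk score to a graduated response."""
    score = risk_score(anomalies)
    if score >= 70:
        return "block_and_review"
    if score >= 30:
        return "step_up_verification"   # e.g., re-run a liveness check
    return "allow"

required_friction([])                                   # "allow"
required_friction(["new_location"])                     # "step_up_verification"
required_friction(["new_device", "impossible_travel"])  # "block_and_review"
```

The graduated response is what keeps friction low for legitimate users: most sessions see no challenge at all, and only genuinely anomalous behavior triggers a step-up.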
Organizations should also rethink their onboarding experience, especially for patients, staff, and telehealth users. Strong identity proofing at the front door, powered by biometrics, liveness detection, and cross-checks against verified data sources, can drastically reduce exposure to synthetic identities and fraud rings.
Advanced AI is changing the game for both health care organizations and the bad actors trying to scam them. With the right mix of AI-driven identity intelligence tools and continuous verification, health care providers can build a safer, more resilient digital ecosystem for everyone.
© 2025 HMP Global. All Rights Reserved.
Any views and opinions expressed are those of the author(s) and/or participants and do not necessarily reflect the views, policy, or position of Integrated Healthcare Executive or HMP Global, their employees, and affiliates.