AHLA's Speaking of Health Law
The American Health Law Association (AHLA) is the largest nonprofit, nonpartisan educational organization devoted to legal issues in the health care field. AHLA's Speaking of Health Law podcasts offer thoughtful analysis and insightful commentary on the legal and policy issues affecting the American health care system.
Health Care’s AI Transformation: Issues Related to Delivery and Accountability
As health care organizations rapidly adopt advanced technologies, including artificial intelligence (AI), they face complex challenges around health care delivery and accountability. Christi Grimm, Managing Director, BDO, and Julie Taitsman, Managing Director, BDO, discuss how AI is showing up in clinical care and the business of health care, from helping physicians manage information to transforming the revenue cycle process, and how technology is supporting government efforts to protect public funds, detect risks, and promote transparency. Christi is the former Inspector General, U.S. Department of Health and Human Services (HHS), and Julie is the former Chief Medical Officer, HHS Office of the Inspector General. Sponsored by BDO.
Watch this episode: https://www.youtube.com/watch?v=oOHMEoTTvGk
Learn more about BDO: https://www.bdo.com/
Essential Legal Updates, Now in Audio
AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.
Stay At the Forefront of Health Legal Education
Learn more about AHLA and the educational resources available to the health law community at https://www.americanhealthlaw.org/.
This episode of AHLA Speaking of Health Law is sponsored by BDO. For more information, visit BDO.com.
SPEAKER_02:Hello, I'm Julie Taitsman, a managing director in the healthcare forensics group at BDO. I recently retired from the United States Department of Health and Human Services, Office of Inspector General, where I was Chief Medical Officer. And I am here today with former Inspector General, the Honorable Christi Grimm. Christi.
SPEAKER_01:Thank you, Julie. Yes, I am Christi Grimm, and I am also a managing director at BDO. Most recently, until January, I was the Inspector General for the United States Department of Health and Human Services, and I ran the OIG for a grand total of five years. I want to thank everyone for joining us. Today we are going to talk about artificial intelligence in healthcare and how it is already reshaping how care is delivered and how accountability works behind the scenes. In the first half of the conversation, Julie is going to walk us through how AI is showing up in clinical care and in the business of healthcare, from helping physicians manage information to transforming the revenue cycle process. Then we'll turn to the oversight side of the story: how technology, including artificial intelligence, is supporting government efforts to protect public funds, to detect risk, and to promote transparency. And to be clear, this isn't a world where algorithms are running audits on their own. Across both halves of the conversation, Julie and I are going to keep coming back to the same themes: how technology can improve performance, why human judgment still matters, and what good governance looks like in this new environment. And so with that, Dr. Taitsman, I turn it to you.
SPEAKER_02:Thank you, Christi. And thank you to AHLA and to BDO for making this podcast happen. It is such a timely moment for us to be doing a podcast on AI and healthcare. Christi and I are recording this in November of 2025. And just last month in the New England Journal of Medicine, the Case Records of the Massachusetts General Hospital had an AI chatbot be the guest diagnostician for the first time. Now, this is a special section of the New England Journal that was founded by Dr. Cabot over a hundred years ago. How it works is a physician is invited to assess a mystery case, and you know you've arrived in medicine when you're invited to participate in this special section. In October of 2025, they invited Dr. CABOT, capital C-A, capital B-O-T, to be the guest diagnostician. And not only did Dr. CABOT arrive at the correct diagnosis, but the reasoning that Dr. CABOT presented was so clear and so rationally organized to reach the diagnosis that it was truly a milestone. So, Christi, before we dive into AI and healthcare, let me just ask you: how do you use AI?
SPEAKER_01:I use it more now than I did even as recently as last November. I use it as an assistant. Before the most recent updates to AI and things like ChatGPT-5, I was worried about accuracy issues, and I used it really for basic things: scheduling, proofreading something that I'd already written, winnowing options for a choice that I needed to make. But now, through testing and feeding and teaching, I'm using it in more comprehensive ways to collect, to analyze, to synthesize topic-specific information for me to review and to potentially use. And it's great. I've shared with people that it's like having a work partner. I still do a lot of checking to make sure the information is correct, though, and kept in the right context. But I use it daily, throughout the day. Julie, how do you use AI?
SPEAKER_02:I have to bifurcate that answer, because I use it very differently in my personal life versus my professional life. In my personal life, I don't really worry about my data. I use the public systems, I use it mostly passively, and I accept every cookie. I love that the machine learning is tailoring what ads to send to me, that my Google feed is going to show me pictures of dogs being adopted, or the next period drama I might like because I enjoy Downton Abbey. I enjoy that it's learning from my past use, and those algorithms are working really well to show me content that I want to see. In my professional life, it's completely different. I am much more protective of the data. I only use private AI systems. As you know, at BDO, instead of ChatGPT we have Chat BDO, an internal system, because I'm very, very conscious that the data that we use have to be protected. And there's always a fear that if you're using a public system, it can aggregate that information. It can gather information that can move the market, or, by aggregating a whole bunch of public information, reverse engineer its way into proprietary or confidential or even classified information. So in my private life, I use it more passively and for more fun things. And in my professional life, I'm very, very guarded and protective of data security.
SPEAKER_01:All good points. Julie, bring out your clinician side, your physician side. Tell us how AI is being used in healthcare delivery and in the business behind healthcare.
SPEAKER_02:Sure. We can go into more details in a bit, but I would lump it into two big buckets, or maybe three, or two and a half buckets. First is patient care, in terms of making a plan, a diagnosis, and a treatment. The second is revenue cycle management. That's the business behind healthcare: the front-end scheduling, the prior authorization, submitting bills, appealing denials. And then the third, or overlapping, bucket is documentation, because documentation of the clinical encounter is important both in terms of delivering quality patient care and also making sure that providers get paid accurately and payers pay accurately in the revenue cycle management aspect.
SPEAKER_01:Julie, AI has really been around for quite a long time. When you think about some of its early, rudimentary uses in healthcare, what is AI already good at?
SPEAKER_02:Sure, there are some things that AI is already super good at, possibly even better than an average clinician. AI is great at assimilating huge volumes of information quickly. So think of taking a voluminous medical record, reviewing that chart instantaneously, identifying all of the medications that the patient is taking or being prescribed, and identifying drug-drug interactions and contraindications. Another thing AI is really good at is pattern recognition. And that can be applied to visual-type patterns: for radiology, reading an MRI, or pathology, reading a tissue sample. AI has been proven to be pretty accurate on that. And again, this is an enduring podcast, so when folks listen to this a year or two from now, AI will just keep improving in that regard. One thing I will note is that AI cannot replace physicians' medical judgment. It can be a tool to enhance medical judgment. The AMA, the American Medical Association, is trying to rebrand AI in a way: instead of having AI stand for artificial intelligence, they're promoting that it should stand for augmented intelligence, to make the point that AI is a tool that human physicians use, not a tool that will replace human physicians. And we see that in other fields as well, trying to rebrand AI to make it friendlier, more accessible. I stopped reading the Washington Post, the paper version, and now I listen to it while I walk my dog. It used to be one voice saying, this story is read by an AI-generated voice. Then, just last week, the AI voices started introducing themselves. Now it's, I'm Josh, the AI voice, and I'll be reading your story. I'm Chloe, the AI voice. And we see that in marketing in general, to make things seem more virtuous, less scary. Like, I go in the morning to what I used to call Dunkin' Donuts, which is now rebranded as Dunkin'.
Either way, I'm still getting a French cruller and a Boston cream, but I feel more virtuous about it, going to Dunkin', or going to KFC instead of Kentucky Fried Chicken. So I think there is a movement to try and make AI seem a little bit more accessible and less frightening. And then one particular thing I want to talk about, in terms of how some clinicians are using AI, is ambient listening technology. Basically, how that works is you can turn on this ambient listening system when a physician or nurse (I apologize, I know physicians and nurses don't like being called providers, but sometimes I will say provider to try and be more inclusive of all types of healthcare professionals), so when providers are meeting with a patient, the ambient listening can create a record of that encounter to be used for the medical record documentation. But it can also go beyond that and start working on a differential diagnosis and a treatment plan.
SPEAKER_01:So I imagine that ambient listening can save a lot of time, can't it?
SPEAKER_02:Well, I would say maybe. A lot of times we make the connection between electronic health records and AI, as if the past is prologue. Electronic health records promised to save time and improve care. Well, how's that working? I mean, most physicians now say they spend more time on documentation, more time on paperwork, more time on record keeping than ever before. So I'm optimistic that someday the promise of time saving will occur, but maybe not quite yet. But there are some other positives. I remember when we first rolled out electronic health records, the provider had to turn their back to the patient, pull out a keyboard, hunch over, and start typing, before we had the cart so that you could at least still maintain eye contact with your patient. So even if it doesn't save time, some potential might be that you can focus on talking to the patient, make eye contact with the patient, because you're not turning your back to type at a monitor. What is super important, though, is to remember that the physician, the provider, has full responsibility for what goes into the record and an obligation to review and verify what the ambient listening generated, to make sure it's accurate. That is a non-delegable responsibility: to ensure that the note, the record that you sign and certify, is accurate. Then, with electronic health records, we worried about copy and paste and note bloat, that the records could balloon. You could see that potential happening with ambient listening as well. And one other thing to think about with note bloat is that the provider is responsible for everything that's in the record.
And I remember back when I was a medical student in the 1990s, we had paper records, and sometimes the medical students would spend so long writing their history and physical that instead of writing it directly in the chart, you'd take the pages out, walk around with a clipboard, write the note, and then insert it later. And I had one attending physician who said, you know what, I'd rather you didn't put that in the record. You can just write your extensive history and physical and keep it for educational purposes, because you don't need a medical student history and physical in the chart. And his reasoning was that at the end of a note for a history and physical, you write your differential diagnosis, and you consider every possible diagnosis that the patient could have. You start with the most likely, and then you work your way down to the really uncommon things, what in medicine we call zebras. And he had had a malpractice case where he hadn't read the medical student's note, and number nine on their differential diagnosis ended up being what the patient actually had. And he was held responsible: well, why didn't you consider that condition? He said, because it was really rare. But there it was: his medical student had considered it. So you could see the same thing happening where the artificial intelligence creates a differential diagnosis that is so much more extensive than what a busy clinician would actually include. And then my last point on saving time: there's also the question of whether you have to disclose to the patient that you're using it. For true informed consent, it might take a little while to explain to certain patients exactly what the ambient listening is and get approval to do it.
SPEAKER_01:Interesting point about seeing the forest for all of the trees with ambient listening and all of the additional data that is potentially being added. So, Julie, it does seem like the primary use cases and advancements for physicians' use of AI are as an assistant. As I described at the beginning, I use it like an assistant, and it can help physicians manage all the data that's available to them. So let's get into how AI can help with the administrative side of things, like revenue cycle management.
SPEAKER_02:Sure. It might be a little bit more exciting and futuristic to talk about how AI can be used for direct patient care, and that potential keeps growing. But for the here and now, I would say that the more immediate use is for administrative purposes, and it can be used on both sides: by the payer, who's processing claims, authorizing care, denying claims (we're seeing a lot more denials), and by the healthcare providers, the hospitals, the physicians. Some estimates say that about half of providers right now are using AI in some way, maybe to help with appeals, maybe to help with documentation. But again, it's so important to realize that AI is just a tool, and you always need to keep a person involved and rely on the person's professional judgment. With the rise of managed care, utilization controls are even more important. We're seeing more prior authorization requirements. We're seeing payers increasingly using denials as a tool, maybe even using denials that will ultimately get overturned if the provider appeals. And I've seen some estimates that providers are spending about $20 billion a year now trying to overturn denials. So there's a huge opportunity to use AI to reduce administrative costs and maybe even get better outcomes.
SPEAKER_01:So do you see the potential for AI use more on the provider side, the payer side, or both?
SPEAKER_02:Definitely both. And think back to one of my favorite movies from the 1980s, Real Genius, with a very young Val Kilmer. There's that classic scene at the university where the students have stopped coming to class because it's not worth their time. So the students have set up tape recorders on each of their desks, and the teaching assistant walks in, sets up the projector, hits play, and the professor's lecture is beamed out to the students' recorders. And I can see both sides using AI in a way like that, where the provider is using AI to submit the claim, and the payer is using AI to deny the claim, and then the provider is using AI to resubmit the claim, and neither is programmed to end the conversation. So the cycle could run to infinity. The Washington Post just had a data article on how much energy AI uses: apparently, when you say thank you to your AI and it says you're welcome, that exchange can be measured in terms of how many disposable bottles of water it's equivalent to. So we need to make sure that the AI is set to end the cycle at some point.
SPEAKER_01:Well, talk about the bright side of both insurers and providers using AI.
SPEAKER_02:Sure. I would love to see AI used for real-time prior authorization approvals. I mean, reasonable people can disagree about what kind of administrative burden prior authorization entails and when it's appropriate and when it's not. But for a procedure that requires prior authorization, if we could do that in real time, it would help everyone. In medicine, we call it loss to follow-up: when you recommend something, but the person has to come back at another time, and it just doesn't happen for whatever reason. So think about the example of a medicine that needs to be injected. There are many situations, like with an arthritis medicine, where the patient is there, the doctor's there, ready to do it, and the patient says, okay, can you just do it now? And the doctor says, no, we need to get prior authorization from your insurance company, which is a process that takes a couple weeks. The patient leaves, maybe they get approved, maybe they take another day off work to come back. If that could be done right away, you wouldn't lose the patient to follow-up. It would cost less for the insurance company, and it would free up that future appointment, which the physician could give to a different patient. So it's win-win-win for that patient, the payer, the provider, and the other patients who would have better access to care.
SPEAKER_01:Okay, so Julie, talk about how AI can help with staffing issues and administrative burden, what role AI plays there.
SPEAKER_02:Sure. Providers now say that they have more administrative burden than ever. There are staffing issues, tighter margins, more use of prior authorization. And it's not just managed care; other, more traditional types of insurance are also increasingly using prior authorization and increasing denials. So there's great potential to use AI to streamline. On the provider side, you can use AI to proactively identify which services the insurers are targeting for denials, and then proactively ensure that the documentation is good and hits the buzzwords and the requirements before you submit, so you get it right the first time, as opposed to on the back end. And the AI can gather the data from the chart that's needed to support the request, and of course not input it without oversight: gather the data, and then the provider would have to certify that the data is correct and accurate. With machine learning, it can even tailor the request to what that particular payer requires and what that particular payer is focusing on. So maybe your staff can focus more on spending time with patients, and the AI can continue learning and keep getting better.
SPEAKER_01:So tell us, though, about some of the vulnerabilities associated with using AI in that context.
SPEAKER_02:Sure. Well, machine learning is only as good as the inputs, as the data going in. So it's important to be careful about who's programming the AI, and also to worry about bias, both intentional bias and unintentional bias. First, coming from the OIG perspective, we always said that if you make a system that's giving out money, especially government money, someone out there will figure out how to cheat that system. And industry players may be looking to increase sales. Now, not in the healthcare context, but I don't know if you saw that earlier this week there was a lawsuit against Spotify, alleging that someone was messing with the algorithm to suggest that people are listening to more Drake. The stakes on that are pretty low, right? Who cares if the algorithm is going to give you more Drake? How about when the algorithm is going to give you more opioids? Now, if you recall, you and I, with our colleague Andrew Van Landingham, published an article in JAMA Internal Medicine back in 2020 where we discussed a case where the clinical decision support, the CDS, in the electronic health record was manipulated by a drug company that manufactured opioids. And it was manipulated in such a way as to encourage physicians to prescribe more opioids, more of this particular kind, expanding the indications to more patients, at higher doses, more frequently. So the stakes are really high there, if the algorithms are being influenced by someone with an agenda to push a particular kind of treatment.
SPEAKER_01:Yes. Clinical decision support is okay when it's reminding someone that it's flu season, not so great when it's pushing a particular product.
SPEAKER_02:Right, like what we used to call academic detailing versus commercial detailing, where you're promoting certain kinds of prescribing for proven good reasons versus certain kinds of prescribing for financial gain.
SPEAKER_01:So, Julie, tell us about unintentional bias. The machine learning depends on what's being fed in.
SPEAKER_02:So there's the organic input, what cases it's seeing. And then there are also efforts to train the AI with paid reviewers. On LinkedIn, all the time I get advertisements asking if I want to get paid to review medical cases and train the AI. I think they pay you in Amazon gift cards or something. I've never tried it. A friend of mine does it, and he says it's kind of fun, but it reminds me of a job I had back in college. I grew up near where the Educational Testing Service, ETS, is. So if you remember taking your GRE on a computer, well, you're welcome. I was in the group that helped them transition from paper tests to a computerized version. But what's curious about that is, I thought it was the best job in the world. You got $50 a day and a catered lunch to show up and take practice tests. And who did they hire to do it? It was college students who happened to be on break somewhere near Princeton, New Jersey, and whose mother maybe saw the ad in the Princeton Packet or the Town Topics. And then, lo and behold, you find out the test is biased. Of course it's biased, when that's exactly who trained it. So I think about the transition to artificial intelligence a lot like the transition from paper to digital. It's a tool with a lot of promise, but we have to be careful how we use it. And let me just give one important warning, a real practical tip, to folks listening who might be physicians or hospital administrators establishing AI tools for the medical records. Once you put information in the medical record, it's very hard to get that information out. And it's also important to remember that you need to set up your systems in a way that sets your providers up to succeed, not sets them up to fail.
And we've seen with electronic health records, with the autofill, the automatically created review of systems, and carry forward, that we get documentation in the medical record of things that didn't happen. And you'll remember this from one of our OIG audits, where we had a patient who came to the emergency room unconscious and never regained consciousness, but the box was checked in the medical record that the provider had counseled that patient about smoking cessation. Now, it's great to have an encouraging protocol, a no-wrong-door approach where every time a patient touches the healthcare system you talk about smoking cessation. But that was probably an autofill, where the system required the provider to uncheck the box when they didn't do it. When you set it up that way, you're going to have false things in the medical record that the healthcare providers are responsible for. So AI is a powerful tool, but it must be used carefully.
SPEAKER_01:We would be remiss not to talk about cybersecurity and AI in the healthcare context.
SPEAKER_02:Yes, the stakes are so high now. When we used to do cybersecurity audits, we'd park a van in the hospital parking lot and see if we could drop onto the system. We were only worried about gathering the data, about privacy: hacking into the system to gather the data. But now, with the Internet of Things and the robotic surgeries and the connected IV drips and the little robots that deliver the medicines, there are so many possibilities beyond just gathering information: hacking in, taking over, and causing bad things to happen. The stakes are much higher than listening to a little extra Drake.
SPEAKER_01:So, you talked earlier about bias. Now, AI and equity: how can it be used to promote equity?
SPEAKER_02:Thanks. I love that question because, from the OIG perspective, you're always highlighting vulnerabilities, and now I finally get a chance to have a little optimism and some faith in the future, for how some of these technologies can be used to promote equity. So I'll mention two. First of all, scheduling. One of the most important functions of front-end revenue cycle management is patient scheduling. Except for the emergency room, where you can walk in, you can't access care without getting an appointment. And we've talked about this: the bane of my existence is calling the pediatrician to get an appointment. You wait on hold. Sometimes their workflow is, tell us what you need and then we'll call you back. Now, if you're working in a warehouse somewhere, finding a private time to be on hold for 15 minutes and then take a call back is hard; it's hard to get an appointment. It is much more equitable if you can privately, on your own time, access the appointments electronically. The other is translation services. I remember as a student, before the CLAS standards, there was a sometimes awful, awful practice: you might ask a child who's with the patient to translate for you. AI has made translation so much easier. And then, even for people who speak English, we physicians often use words and language that are hard to understand. Some patients apparently turn to an AI summary to provide the plan in digestible, accessible language.
SPEAKER_01:So, Julie, what are some key takeaways to mitigate risk?
SPEAKER_02:I would say always keep a human in the loop: augmented intelligence, not replacing human intelligence. And then two risks that we should mention are hallucinations and sycophancy. Hallucination is when the AI makes things up. And in government, at the highest levels, people have fallen for that and included hallucinated citations in government reports. Sycophancy is a related problem, where the AI gives you more of what you want; it confirms what you're already saying. So sycophancy is great for those AI friends you see advertised. Sycophancy is super dangerous for an AI therapist, right? If you're asking the AI friend, should I go to a happy hour, great, give me a sycophantic answer. If you're asking the AI therapist, should I harm myself, that's incredibly dangerous. The next thing I'll mention is the importance of complying with laws. It's a dynamic field, and there are multiple authorities that apply, federal and state: the FDA regulating AI as a medical device, HIPAA for data privacy, even the Federal Trade Commission for deceptive claims and unfair practices. And then states may have their own laws; on the cutting edge are California, Colorado, and Utah. But again, I would just say: keep humans in the loop, keep humans accountable, and make efforts to minimize bias. It's not enough to just be neutral; make efforts to keep the technology improving and minimize the bias.
SPEAKER_01:One more question, Dr. Taitsman. What are a few key questions to ask if you're thinking about adopting a particular AI function?
SPEAKER_02:Sure. I would say the most basic question is, does it work? Then I would think about: do I have to disclose that I'm using it, and to whom? Do I tell the patients? Do I tell the payers? Can I bill for it? If I'm billing, how? Are my data protected? And then finally, is the technology biased? Is it promoting equity? And how can we make it better? That could be a whole other podcast on its own.
SPEAKER_01:A lot to unpack in that.
SPEAKER_02:Yes. And now, if I could ask you a few questions, let's turn to the government oversight angle. Could you talk a bit about how technology, including AI, is being used for government program oversight?
SPEAKER_01:I'm really glad, Julie, that you phrased that very broadly, because oversight right now, in contrast to what's happening in clinical settings, isn't at that same point in terms of deployment. And so it isn't so much about the use of AI as it is about the broader use of technology and data in oversight. While AI is moving very, very fast in clinical environments, government is really still in the early stages of AI adoption. The most meaningful advancements have come from analytics platforms: systems that bring together data, tools, services, and people to support investigative, audit, and monitoring oversight work, and that generate leads by spotting trends in billing and payments, or in other things like grant drawdowns, that could indicate improper payments or some funny business that might be happening. So think of it more as early detection: when something looks off, the system flags it for deeper review by a person. It really comes down to data helping humans decide where to look next. A good, and very current, example of an analytics platform is the Pandemic Analytics Center of Excellence, or PACE. That was developed during COVID, and it came out of the CARES Act. It brought together oversight professionals from across government, across different departments: auditors, investigators, data scientists. And it provided a shared environment for them to analyze data in near real time. It let them link information, and that's the most important piece, honestly, across programs like the provider relief funds coming out of HHS, unemployment insurance out of Labor, and small business loans coming out of SBA.
And with that kind of shared platform and shared data, you could see patterns that wouldn't have been visible within any single data set. You could see if somebody was applying for all of these different programs simultaneously, and then have a trigger to monitor what was going on there. That was an enormous step forward in oversight capability, and its importance really can't be overstated. Analytics platforms like PACE are a model for what's possible: when you combine the right data, the right technology, and the right expertise, oversight can keep pace, pun intended, with fast-moving programs. The next big way government oversight is using technology is risk modeling. Some oversight organizations, including HHS OIG, have built systems, in place for quite a long time, frankly, that don't just look backward at what went wrong but try to estimate where the next vulnerabilities are likely to occur and how big those problems could be. When I was Inspector General, as you know, I consistently spoke about the importance of making data-driven decisions and how we used a wide range of data to zero in on the highest risk, because we couldn't possibly audit or evaluate everything under the HHS umbrella. At times there was over $2 trillion in payments coming out of HHS and hundreds of programs; you couldn't audit, evaluate, or investigate everything. So certainly for audits and evaluations, risk models helped us target areas that signaled risk and offered the greatest potential for positive impact.
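The cross-program linkage described here can be sketched in a few lines. This is a hypothetical illustration only: the program names, tax IDs, and matching rule are invented, and a real platform like PACE would match on far richer identity data.

```python
# Hypothetical sketch of cross-program linkage: find applicants who show up
# in more than one relief program at once. All data here is invented.
from collections import defaultdict

def find_cross_program_applicants(programs):
    """programs: dict mapping program name -> list of applicant tax IDs.
    Returns tax IDs appearing in more than one program, with those programs."""
    seen = defaultdict(set)
    for program, applicants in programs.items():
        for tax_id in applicants:
            seen[tax_id].add(program)
    return {tid: sorted(progs) for tid, progs in seen.items() if len(progs) > 1}

programs = {
    "provider_relief": ["11-111", "22-222", "33-333"],
    "unemployment":    ["22-222", "44-444"],
    "sba_loans":       ["22-222", "33-333"],
}
flags = find_cross_program_applicants(programs)
# "22-222" appears in all three programs; "33-333" appears in two.
```

As in the episode's description, a hit like this is a trigger for human review, not a finding in itself.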
SPEAKER_02:Do you mean like trying to anticipate what this season's flu strain will look like?
SPEAKER_01:Exactly. I do think of it as the oversight version of population health. For both analytics platforms and risk modeling, the value comes from connecting the data, not just collecting it: bringing together data points that are typically isolated to see if there's a pattern, to triangulate something. To do that, you need data, and HHS OIG has a lot of it: enrollment data, encounter data, prescription and laboratory data, even enforcement and compliance data. Because of that richness, OIG can and does model what normal looks like across millions of providers and flag anomalies and outliers that might signal a problem. Signal, as in you still need to go behind it and verify whether there actually is one. It can detect things like rapid upticks in billing, where a provider's claims volume suddenly spikes in ways that don't match patient volume; unusually high billing for a service type in one geographic area that doesn't seem to exist anywhere else; or provider-to-beneficiary ratios that seem unrealistic. If there is one hospice for every four beneficiaries in a county in California, that's a clue. You want to follow up on that. Data-driven risk modeling really does, and did, help inform oversight priorities in deciding where follow-up action may be needed: audits, evaluations, or inspections. Anytime you crack open the OIG work plan, or more likely pull it up on the website, whatever you see in the public work plan is grounded in some sort of risk analysis or risk modeling.
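One of the anomaly flags mentioned here, claims volume that doesn't match patient volume, reduces to a simple ratio check. A toy sketch follows; the provider data and the 5-claims-per-patient cutoff are invented for illustration, and OIG's actual models are of course far richer.

```python
# Toy version of an anomaly flag: claims volume out of line with patient
# volume. Data and cutoff are invented; this is not OIG's actual method.
def flag_claims_spikes(providers, ratio_cutoff=5.0):
    """providers: dict of provider ID -> (monthly claims, monthly patients).
    Returns provider IDs whose claims-per-patient ratio exceeds the cutoff."""
    return sorted(pid for pid, (claims, patients) in providers.items()
                  if patients > 0 and claims / patients > ratio_cutoff)

providers = {
    "NPI-1": (500, 400),   # 1.25 claims per patient: plausible
    "NPI-2": (4800, 300),  # 16 claims per patient: flag for review
    "NPI-3": (90, 120),    # 0.75 claims per patient: plausible
}
flagged = flag_claims_spikes(providers)
```

Exactly as the conversation stresses, a flag like this only tells a human reviewer where to look next.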
SPEAKER_02:Yeah, so it sounds like you can use AI to identify the pattern and flag the anomaly, the deviation from the pattern, but then it takes boots on the ground. Right. So where else is technology making an oversight difference?
SPEAKER_01:An area I want to zero in on is automation and robotic process automation, because it doesn't grab a lot of headlines, but healthcare oversight includes automating routine monitoring tasks, the kinds of things that used to require teams of people running manual checks daily, monthly, or on some other regular basis. Some of this automation has been around for a long time, like automated edits. One example listeners will be familiar with is rules built into claims processing systems that automatically flag or deny payments that don't meet policy parameters: duplicate claims, mismatched procedure and diagnosis codes, billing for deceased beneficiaries. Another area is IT and cyber oversight. You talked about that from a risk perspective on the clinician side, but from a program oversight perspective, there is continuous vulnerability scanning of program systems to detect new weaknesses or configuration changes that expose a system to risk. That is an ongoing process. And then there's compliance monitoring: continuous scanning to identify where there might be compliance risk. I'll give a hypothetical example of where this could happen, although some of this is more idea than current practice. When OIG resolves a fraud or kickback case with a healthcare entity, it sometimes offers what's called a corporate integrity agreement, or CIA. That is essentially a deal: the company stays eligible to bill Medicare and Medicaid by avoiding exclusion from federal healthcare programs, but it agrees to strict compliance monitoring, independent reviews, board accountability, training, and reporting to OIG.
But every so often a company refuses to enter a CIA, and OIG publicly deems that healthcare entity high risk and commits to enhanced scrutiny of it. This is where automation could really make a difference. Imagine a set of robotic tools that quietly keep tabs on that high-risk company without a team having to do it manually. It could monitor claims and enrollment: bots could automatically flag new billing activity from any newly registered affiliate under common ownership. It could do corporate link analysis, where algorithms cross-reference business registrations and NPI ownership data to see if the company reappears under a different name or tax ID. It could even look across public records, with automated searches watching for new enforcement actions, bankruptcy filings, or mergers involving that company. In theory, all of that could happen passively, and the same type of automation could alert OIG if there is a pattern shift. A lot of potential there.
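The corporate link analysis imagined here can be sketched as a cross-reference of ownership records. This is a minimal hypothetical, matching only on shared owner names and differing tax IDs; the entities, fields, and matching rule are all invented, and real link analysis would use much broader identity signals.

```python
# Hypothetical sketch of corporate link analysis: does a high-risk entity
# reappear under a different name or tax ID? All records are invented.
def find_possible_reappearances(high_risk, registrations):
    """high_risk: dict with 'tax_id' and 'owners' (a set of owner names).
    registrations: list of dicts with 'name', 'tax_id', 'owners'.
    Flags registrations sharing an owner but filed under a different tax ID."""
    hits = []
    for reg in registrations:
        shared = high_risk["owners"] & reg["owners"]
        if shared and reg["tax_id"] != high_risk["tax_id"]:
            hits.append((reg["name"], sorted(shared)))
    return hits

high_risk = {"tax_id": "11-111", "owners": {"A. Smith", "B. Jones"}}
registrations = [
    {"name": "Acme Health LLC", "tax_id": "11-111", "owners": {"A. Smith"}},
    {"name": "Apex Care Inc",   "tax_id": "99-999", "owners": {"B. Jones"}},
    {"name": "Other Co",        "tax_id": "55-555", "owners": {"C. Lee"}},
]
hits = find_possible_reappearances(high_risk, registrations)
# Apex Care Inc shares an owner with the high-risk entity under a new tax ID.
```

As with the other flags discussed, a match like this would prompt review, not an automatic conclusion.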
SPEAKER_02:Now, talk a little more about AI itself. From a healthcare oversight perspective, how much AI do you think is really in use?
SPEAKER_01:That's a great question and one I get a lot. The honest answer is that AI in government oversight is growing, but unevenly. Most of what's in use would still fall under the umbrella of what I just described, advanced analytics or automation, rather than what we'd consider true AI. Having said that, there are some real signals of change. For example, the One Big Beautiful Bill Act specifically directs HHS to invest in AI tools aimed at reducing and recovering improper payments. That's a big statement: Congress is saying it expects technology to help us be better stewards of taxpayer funds. And CMS, the Centers for Medicare & Medicaid Services, announced that it will use machine learning for its risk adjustment data validation audits, its RADV audits, for Medicare Advantage, essentially using AI to help determine whether the diagnoses that drive payment are actually supported by the medical record. That is a truly significant application of AI inside the regulatory audit process.
SPEAKER_02:Now, when you were the HHS Inspector General, what were some of the considerations or questions you had about AI use in program oversight?
SPEAKER_01:Certainly. Start with the enormous potential for the use of AI, looking at it from the fraud angle alone; I think that's where the most potential exists. Agencies like HHS OIG have decades of enforcement data that can be used to teach algorithms what red flags look like, and because that information spans providers, geographies, and program types, it's a really rich source for AI deployment. But I also thought about how oversight had to walk before it could run. When I was IG, the conversation always kept purpose and transparency at the fore: not just whether AI could flag something, but why it did, thinking about some of those inherent biases you were talking about earlier. Could we explain the reasoning? Could we validate the output? And, critically important, were the underlying data complete, accurate, and timely? If you look at OIG reports from five or ten years ago, a huge focus was simply whether the data were there at all in areas like Medicare Advantage. Those seemingly simple questions were essential to maintaining fairness and public trust, and asking them is the most responsible thing to do. And of course, as you've talked about so much, there's the human element. Oversight work always depends on professional judgment, and that's not specific to automation, risk modeling, or AI. When a pattern is identified, we still need to test it, contextualize it, and decide whether there's actually fire behind that smoke. It's no different for AI; the human element still needs to be there.
And I realize much of what I'm describing is a governance framework. Building it will be, I think, an ongoing, iterative process for oversight entities as they deploy AI.
SPEAKER_02:And what can you tell us about OIG's current work plan? Is OIG zeroing in on the use of AI specifically yet?
SPEAKER_01:On AI specifically, not much, and that's telling. If you look at the plan, you won't see many projects focused on AI, and that makes perfect sense to me, because the rules around how AI can or should be used in Medicare and Medicaid just don't exist yet. As far as I know, there aren't formal, broadly applied standards that define what constitutes a clinically valid algorithm in a coverage, payment, or prior authorization context. Are you aware of any formal standards? I'm not. And OIG needs a law, a rule, or guidance against which it can compare something. When that doesn't exist, you're really at the point of describing things, doing descriptive work. Without documentation requirements specific to algorithmic decision making, something that says how a payer or provider should record the use of AI when making or justifying a decision, oversight has to take that into account. We don't yet have guardrails on the fundamentals, and until those things exist or are better defined, oversight sits in a sort of gray space. So from OIG, I would expect, maybe as they're collecting data, questions about whether AI was used in clinical settings and how, and how to explain the algorithm. I wouldn't expect them to come out with improper payment findings simply because AI was used, because those rules just aren't there yet.
SPEAKER_02:Very interesting. And could you close us out with maybe three key takeaways for healthcare entities?
SPEAKER_01:For healthcare entities: first, build governance before you need it. Even though regulations on AI and advanced analytics are still forming, governance shouldn't wait. Organizations that already document how they design, test, validate, and monitor AI will be ahead of the game when oversight catches up. Second, transparency builds trust. Program administrators and oversight entities will be asking you to show your work, whether it's AI-assisted documentation, billing automation, or predictive analytics, so anticipate questions about how those tools reached their conclusions and what actions were taken. And third, it can't be said enough, right, Dr. Taitsman? Oversight needs to keep people in the loop. So keep people in the loop.
SPEAKER_02:That is terrific advice. Thank you for sharing those words of wisdom, Inspector General Grimm. And thank you, Dr. Taitsman. This was a delightful conversation. Thank you all for listening through to the end, and thank you, BDO and AHLA, for making this podcast possible.
SPEAKER_00:If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. For more information about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org, and stay updated on breaking healthcare industry news from the major media outlets with AHLA's Health Law Daily Podcast, exclusively for AHLA Premium members. To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.