
AHLA's Speaking of Health Law
The American Health Law Association (AHLA) is the largest nonprofit, nonpartisan educational organization devoted to legal issues in the health care field, with nearly 14,000 members. As part of its educational mission, AHLA's Speaking of Health Law podcasts offer thoughtful analysis and insightful commentary on the legal and policy issues affecting the health care system. AHLA is committed to ensuring equitable access to our educational content. We are continually improving the user experience for everyone and applying the relevant accessibility standards. If you experience accessibility issues, please contact accessibility@americanhealthlaw.org.
Top Ten 2025: Medical Malpractice in the Age of AI
Based on AHLA's annual Health Law Connections article, this special series brings together thought leaders from across the health law field to discuss the top ten issues of 2025. In the eighth episode, Shalyn Watkins, Associate, Holland & Knight LLP, speaks with Anjali B. Dooley, Senior Partner, DBM Legal Services LLC, about the key litigation risks that are coming to the forefront regarding the use of artificial intelligence. They discuss issues related to accountability, strict liability versus negligence, data lineage and bias, and validation/reliability. From AHLA’s Health Care Liability and Litigation Practice Group.
AHLA's Health Law Daily Podcast Is Here!
AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this new podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.
Speaker 2:AHLA is pleased to present this special series highlighting the top 10 health law issues of 2025, where we bring together thought leaders from across the health law field to discuss the major trends and developments of the year. To stay updated on all the major health law news, subscribe to AHLA's new Health Law Daily podcast, available exclusively for Premium members at americanhealthlaw.org/dailypodcast.
Speaker 3:Hi everyone, and welcome to today's podcast, where we're talking about the top 10 issues in health care that AHLA sees coming in 2025. Today we're talking about number eight, which is "Who Do You Sue? Medical Malpractice in the Age of AI," and I'm speaking with one of the authors of the article, Anjali Dooley from DBM Legal Services. My name is Shalyn Watkins, and I'm an attorney at Holland & Knight, where I work in our healthcare regulatory practice. I'm also Vice Chair of Education for the AHLA Health Care Liability and Litigation Practice Group, which is hosting today's podcast. All right, Anjali, would you like to introduce yourself and tell us a little bit about you?
Speaker 4:Yeah, sure. Hi, my name is Anjali Dooley. I am a healthcare regulatory and corporate transactional attorney. I work for Duncan Bergman and Mandel; we call ourselves DBM Legal Services. They're based out of New York and New Jersey, but we're across the nation; we're basically a fully remote law firm doing corporate transactions. I've been in healthcare regulatory work for almost 20 years now, and I've watched a lot of things change; it's constantly changing now. I was at a party and this topic came to my attention. I had already been asked to do an article, and I was talking to a doctor friend of mine who works for Mayo, and she gave me some insight on what was going on. So it was one of those things that happened naturally, and it was an interesting piece, so I ended up writing about it.
Speaker 3:Yeah, I think that's the trend I've seen in healthcare right now. Everyone's talking about AI, and there's a lot of panic and scares happening in every corner of healthcare, right? Be it the provider side, the developers, or the fact that regulations are not necessarily catching up with the fast pace of the technology. So there's a lot of questions out there.
Speaker 4:Right, right. There are a lot of questions. Some felt like definitely new questions, because like you said, AI and technology are moving so fast. But some are like, okay, are we answering the same old question and just using artificial intelligence as another tool? Is it a tool? I think we sometimes treat artificial intelligence as a whole new person in the conversation <laugh>, like it's an actual human being <laugh>. It's very hard to separate at some points. And I think that's where doctors are scared. They're like, oh, are we gonna be replaced by AI? But I say no, and we'll get into why I don't think that's gonna happen, based on negligence, strict liability, all of those questions that we're gonna talk about.
Speaker 3:Definitely. As a side note, I often think that some of the fear we're experiencing comes from the fact that we've been watching robot movies for decades now, <laugh>, and we worry that the robots can do everything and will do everything. But I think that's the beauty of your article: it talks about the nuances here and how there are different layers of liability that come in with the implementation of AI in healthcare. So let's dig into it. You made your article really easy to read because you sectioned it off, so I wanna ask you questions about each section. First is your section on accountability for AI decisions. When I read it, I was thinking a lot about the fact that a lot of state medical boards are starting to weave the AI conversation into their unprofessional conduct concepts, and even the FDA has had its good machine learning practices and standards for the development of AI. I wonder, will either of these areas drive up litigation risk for the two groups, the providers or the developers? For example, there's this negligence per se concept: if the regulation says you're supposed to be doing X, does that mean that a provider or developer who doesn't comply with these regulations or this guidance would be negligent by default?
Speaker 4:You know, again, I think there are layers to that question. Let's take the provider, and I always go with the physician or clinician side of it. When you're talking about state medical boards and what they're implementing in unprofessional conduct, here's where I think AI should be separated from the clinical: ultimately it's the physician that's making the clinical decision. Now we have Google. As a mother, I do it all the time: is Robitussin <laugh> okay with whatever medication I'm giving? Google gives me an answer, and I kind of rely on that answer. But I come from a family of docs, both my parents are physicians, so I call one of them up and say, hey, Google told me this, and they're like, yeah, actually you can, right? But ultimately, after 50 years of medical decision making, or 10 years, or five years, you're trained on the human body. AI is trained on the inputs of people and code, right? AI is not trained on this human body and what a doctor touches and feels and looks at and observes, all of those things. The Robitussin one is a simplistic example, but there are some things that are really, really complex. So I think medical boards are saying, hey, it's kind of like digital health and telemedicine, right? You haven't seen this person in person, and you're relying on AI to give you this answer. Why'd you even go to medical school then? I think medical boards are trying to implement safeguards and saying, hey, clinicians, you are ultimately responsible, still, if you're only relying on the AI. Now take that another step. You have the company that developed a product or an instrument that is AI driven, and they put in all of this data. The physician has used it multiple times with no issues. Well, the data that was inputted into this device is wrong, right? The physician didn't solely rely on it, but he's used it many times, and this time it didn't produce the right outcome. That can happen. Who's liable in that situation is very different from simply saying, I didn't ask this patient any questions, he said he had a cold, I relied on my auto thing on my phone and dispensed him this medication, and he had a side effect and a reaction. I don't think any doctor is that stupid to do that, that I know of anyway <laugh>, but there might be. And I know what's coming out. I forget what company is developing this, but they're doing a doc in a box, where a person walks into a room that can diagnose what's wrong with you. Well, who's gonna be liable for that? It's gonna be the company that developed that box that you walk into. It's kind of like an all-over x-ray or something, I don't know. So these are new things that are coming up.
So it's gonna be interesting, as we get into more and more of this and we litigate, how this plays out and what law is developed. But again, malpractice is local, right? It depends on where you live; the standard of care is local. What is the standard of care in that jurisdiction? These are all questions that are gonna come up. We don't have any standards; that's the problem. So this is all gonna come down to how medical boards treat each situation case by case, and how the cases are argued in court case by case. In Georgia it could be one thing, in Illinois it could be another. We don't know.
Speaker 3:Yeah, I think that's right. It's actually terrifying to me to hear of the doc in a box. I'm like, <laugh> it's not gonna get a medical license. I don't know, like, who's even able to sit for a medical board exam? You've given me a million more questions.
Speaker 4:<laugh>
Speaker 3:Moving to the second part of your article, we talk about strict liability versus negligence, which I found very interesting. In your experience, have there been any examples from past diagnostic tools that might demonstrate who would win in this fight, of whether there will be strict liability or whether this would just be a negligence issue? Considering that the tool would have been created by a developer that has marketed its product as something that could be relied upon by physicians.
Speaker 4:Right. So obviously with strict liability, I think it would be a product liability case, right? I think AI would be the product in this situation. In strict liability, it was defective, that's all you really have to prove, and it was unreasonably dangerous, and that defect caused the plaintiff's injury. I think there's past case law on point; I quickly looked something up <laugh>, and I think there's a Medtronic case, Bedo v. Medtronic, from 1993, right? Strict liability and a product, a pacemaker. So I think we're going down that path: is artificial intelligence just the diagnostic tool, or is it the decision maker? That's where we are, right? Who's the decision maker here? Who's trained on this tool? I don't think every physician should be able to use AI. Maybe it's used for diagnosis, but ultimately, who's the ultimate decision maker? Is it that tool, or is it that doc?
Speaker 3:Right. I mean, you also bring up the point that if you're thinking about the training of physicians, maybe the medical schools right now could be implementing AI as part of the training of young doctors. But imagine someone who's been practicing for 50, 60 years trying to implement AI in their practice. It might give extremely different results. I think we're seeing that even in the practice of law now as we try to use AI as part of our tools, right? The younger lawyers are just a little bit better at using it than us older lawyers.
Speaker 4:Right, that's a good point. So in medical school, they should implement using very well-tested AI diagnostic tools, I would think. But how can you have years of testing if we don't even have years of AI? <laugh> Also, what we're finding, and my parents are in their seventies and still working, is, I can guarantee they don't use it. My dad refuses to; he doesn't even wanna get on a Zoom, and he doesn't do telemedicine either <laugh>. But he's still working, and he's like, well, physicians went to medical school, so you have to use your mind, your skills of detection, your eyes. Maybe we use AI as a diagnostic to find rare things that we wouldn't have discovered before, right? But what if you have medical school students who are relying on AI? We see it in high school, and we're gonna see it across the board in college: they're literally not thinking for themselves. They're relying on tools that are helping them, and they're not thinking for themselves. The good lawyers who use AI think, okay, here are all these questions that I have, but we're gonna go review this again, we're gonna make sure the case law is accurate, and we're not gonna submit something that isn't double- and triple-checked. But we need a starting point, and I think AI is a starting point right now in healthcare, versus let's just rely on it.
Speaker 3:Yeah, right.
Speaker 4:Right. Don't you think?
Speaker 3:I think that's right, yeah. I think that actually brings us to the third point in your article, and something you were talking about a little earlier, which is that the data being input is also an extremely important part <laugh> of this analysis, right? You talk about data lineage and how there's a balance that needs to happen between algorithmic design and integrity. So how do you ensure that the data being input is being analyzed by a non-biased AI? Does the existence of bias itself create another litigation risk of some sort for developers?
Speaker 4:Oh yeah, absolutely. I think we're seeing it, and I think that's what the Mayos and the Geisingers and the Kaisers are all struggling with, right? People are getting sued over this. There's bias. Let's take policing: AI can be used in policing, or by the TSA, or whatever. I'm flying out of the country this weekend, so I'm just like, what are they looking at? There's bias in that, right? There's racial bias, there are all sorts of things coming into play. And like I said in my article, garbage in, garbage out. If you don't input the right data and figure out what the ground truth is, there are going to be bad outcomes and misdiagnoses from using AI. For example, as an Indian person, and as women, we have a higher rate of heart disease, right? Based on my age, my weight, and all that kind of stuff, it might not be able to diagnose that properly, because it doesn't know my racial background. And a lot of physicians don't put that racial background in or take it into consideration; a lot of them are not trained on it. So if they're not trained on it, how is the AI tool gonna be trained on it?
Speaker 3:Yeah.
Speaker 4:Right. Yeah.
Speaker 3:It's almost interesting, because the question is, can you teach the AI all the biases to weed out? Because bias is very human, and health equity concerns, even before AI, have been bringing up these questions of the implicit bias that occurs in the practice of medicine all the time.
Speaker 4:Right. So, like, kidney disease is prevalent in the Black community, and based on just pure data, the AI might not be able to calculate that, because you have to input data points like Black male at this age; you have to input those other data points to make the diagnosis correct. Not all things are based on height, weight, and things like that; it's genetics, it's all of these different things that people don't understand. So I think we're very far from getting really accurate diagnoses. I think it's a starting point; it's basically my Robitussin example <laugh>. That's my question every single time, because I forget the answer: can you take Sudafed and Robitussin together? Because I don't want <laugh> anything to happen to my kid. I think that's the level we're at, and we're not quite at the level that we should be, which means there are still so many questions to be answered. I can give another example with dermatology. Darker-complected people still get skin cancer, right? We still need to wear sun protection, but AI doesn't necessarily know that, because it's looking just at the camera that's taking the picture, and maybe it says okay when there is a risk. So there are certain things I just don't think we're quite ready for, and there's still so much to do, and again, no standards for this.
Speaker 3:Right. Well, I think that comes to the piece on validation, and I may be creating more issues by asking the question, but there are some people who would say that this bias is gonna exist whether or not AI is used, right? Because there are also gonna be practitioners who are not considering things because of implicit biases they may have. So is it even possible to ensure that the AI is gonna be reliable? And I take your Robitussin question, right? If I've gotten very comfortable googling my symptoms and expecting the internet to tell me what's wrong, is there really much difference in the provider doing the same thing? Maybe I've created this spiral of conspiracies within my head that I need you to talk me down from.
Speaker 4:Right. So, doctors hate it when those of us who are non-physicians come in and say, well, I googled all my symptoms and I read on WebMD that... My dad, absolutely, when I start off a conversation that way, he's like, stop reading <laugh>, stop reading, that's my job. Right? But I think what has evolved, and this is society in general, I can take it from healthcare to just personal relationships, is that we've relied on the use of technology and not the human: the mind, the touch, the seeing a person. For example, in this situation, what if we were using AI that automatically updated, just like our iPhones automatically update, and that update had a bug in it? Let's go down that path <laugh>: that bug makes diagnostic accuracy worse instead of better. Then who is responsible in that situation? How do we ensure that reliability? What I'm getting at is that we're relying too much on technology and AI, and we're not relying on our human capabilities and what we've been trained to do. Whether we're a lawyer or a physician or whatever, we've been trained to think. As lawyers, if we don't put the question into ChatGPT <laugh> right, I'm gonna look at that answer going, what the heck? This is not the right answer. So again, like you said, human decision making has to be first, and then the use of AI is second, and I think we're just not there, and I don't know if we'll ever be there. And there are gonna be countries, I think Australia, and I know India's doing it, where kids can't use AI; it's banned right off of phones and things like that. Because again, people aren't using their minds to think. AI is coding, it's doing zeros and ones; it's not really trained on humanity. I saw, and this has nothing to do with this article, but kind of does, that the Nvidia CEO said, who do I hire? I hire people trained in sociology, psychology, history, all of those social areas, versus a person that can code, because I can tell AI to code for me. So you need the thinker. And I think that's what I got from the party I was at, where the doctor who works for Mayo, and she's not a clinical physician, but she's doing a ton of AI work for Mayo, said these are questions they're asking. There are actually doctors out there thinking, we wanna use it, we think it can be accurate, but how do we use it in an ethical, moral, intellectually right way? There are people thinking about this, and that's why this came up for me. And again, there are no standards around it. Just like we have compliance, there has to be that compliance officer at a hospital, let's just say, who is continuously monitoring AI use, and maybe that's their only job, because it's so prevalent. Are we using the right testing models? Are things getting updated?
Are they auditing in real time how AI is being used and the tools that are being used? Is there gonna be a certification process for this? There has to be. I think that if you don't know how to use it, like that 60-year-old doctor that doesn't know how to use it, don't let him use it, right?
Speaker 3:Yeah, for sure. Unless...
Speaker 4:...he's trained and gets the certificate on it. Yeah.
Speaker 3:Yeah, I think that's right. So what would you say are the key takeaways of this topic, for what we're gonna expect in this new year, 2025?
Speaker 4:2025? Well, <laugh> let's go down the efficiency path for a minute. To gain efficiency, let's use AI for the things we can make efficient. Is it gonna be efficient for scribing? Is it gonna be efficient for administrative tasks? Sure, let's use it for that. I think you're gonna see a lot more of that in 2025, physicians using it to make their lives a little bit easier. Hopefully; maybe it'll make it harder. They thought the EHR was gonna make things easier, but it hasn't really; there's more burnout among physicians using EHRs, right? So are physicians gonna be replaced? I highly doubt it. Or clinicians in general, nurses, doctors, whatever, are we gonna be replaced? I think there are certain areas we can make efficient. Are lawyers gonna be replaced? I highly doubt it. Maybe what's gonna be replaced is a legal assistant, possibly, that one layer of administration. So we're gonna see a lot more efficiency, and people that are trained as docs, trained as nurses or PAs, the good trained ones, are gonna rise up. But we're gonna see some of these administrative tasks get a lot easier and faster in 2025. I think we have a long way to go before a doctor is replaced. I really do.
Speaker 3:Yeah. To take it back to my robot movie analysis from earlier, since we've all been watching those robot movies, we want to avoid them taking over the world. So we're always gonna have that stopgap at the end of the story: there's gotta be a human who can turn everything off.
Speaker 4:Yeah, I don't know. I'm watching Paradise right now, <laugh>
Speaker 3:Yeah, I'm literally watching the same show, so
Speaker 4:I'm like, oh my God, this can happen, you know, with the replacement of whole cities and things like that. But I really think it's just like any other tool. What I predict is gonna happen is that AI is going to be solely a diagnostic tool. There are gonna be some regulations surrounding AI use and how we're gonna use it, and medical boards are gonna have to catch up everywhere. Every medical board is going to have to put in ethical standards and unprofessional conduct standards. It's gonna be like that lawyer, I think it was in New York, who submitted a brief that used bad case law because the AI hallucinated the case law. Who doesn't check their case law before submitting a brief, right? I thought that was just silly; they just submitted it. So I think there's going to be some additional work to double check, but there are also gonna be some efficiencies, just like in anything else that we do. It might make a clinician's life a little bit easier. It might make a pharmacist's life a little bit easier; pharmacy is where I think it's gonna be very, very important. And again, clinical trials, they're getting faster and faster for new drugs, we didn't even go there, and they're using smaller populations, right? So is there gonna be a bias in drugs, in drug interactions? I think we're gonna see a lot more litigation in this too. There are some plaintiff's attorneys who have already figured it out, and I don't have case law in front of me, but I think there's gonna be a lot more litigation in this as well, to get those rules and regulations solidified in writing on how things have to operate with AI. And the only way that's done, and policies are changed, is if somebody has to pay a lot of money because they solely relied <laugh> on it. Right? There's gonna be litigation surrounding it.
Speaker 3:Yeah. Well, thanks so much, Anjali. I really had fun talking to you about this and going down our own death spiral
Speaker 4:<laugh>.
Speaker 3:I just wanna encourage our listeners to go back and read the top 10 issues in health law article that is out on the AHLA website. And just as a reminder, we were talking today about number eight, "Who Do You Sue? Medical Malpractice in the Age of AI."
Speaker 4:All right. Thanks, Shalyn.
Speaker 2:Thank you for listening. If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.