AHLA's Speaking of Health Law

Navigating Health AI and Nondiscrimination Compliance

American Health Law Association

Andrew Mahler, Vice President of Privacy and Compliance Services, Clearwater, speaks with Drew Stevens, Of Counsel, Parker Hudson Rainer & Dobbs LLP, about the intersection of artificial intelligence (AI) in health care and the evolving landscape of nondiscrimination regulations. They discuss the significance of the final rule on Section 1557 and nondiscrimination in the use of patient care decision support tools, legal frameworks that apply to the use of health AI, how the “deliberate indifference” standard might be applied, how hospitals and health systems can demonstrate they are not being deliberately indifferent to potential discrimination risks in their AI tools, and enforcement trends. Drew recently authored an article for AHLA’s Health Law Weekly about this topic. Sponsored by Clearwater.

AHLA's Health Law Daily Podcast Is Here!

AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this new podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.

Speaker 2:

Support for AHLA comes from Clearwater. As the healthcare industry's largest pure-play provider of cybersecurity and compliance solutions, Clearwater helps organizations across the healthcare ecosystem move to a more secure, compliant, and resilient state so they can achieve their mission. The company provides a deep pool of experts across a broad range of cybersecurity, privacy, and compliance domains; purpose-built software that enables efficient identification and management of cybersecurity and compliance risks; and a tech-enabled 24/7/365 security operations center with managed threat detection and response capabilities. For more information, visit clearwatersecurity.com.

Speaker 3:

Welcome, everyone. This is Andrew Mahler, Vice President of Privacy and Compliance Services at Clearwater, where we help healthcare organizations move to a more secure, compliant, and resilient state. Welcome to the AHLA podcast on health AI and nondiscrimination compliance. In today's episode, I'll be speaking with Drew, exploring the critical intersection of AI and healthcare and the evolving landscape of nondiscrimination regs. Our guest today, Drew Stevens, is of counsel at Parker Hudson Rainer & Dobbs in Atlanta, Georgia. Drew brings a real wealth of experience in complex litigation, with a particular focus on healthcare law. His practice involves counseling hospitals and health systems on compliance with federal nondiscrimination laws, including Title III of the ADA, Section 1557, and Title VI of the Civil Rights Act of 1964, and he represents healthcare providers in litigation under these statutes and in civil rights investigations conducted by the DOJ and the Office for Civil Rights within HHS. Drew recently authored, and hopefully you all had a chance to take a look at this, a really insightful article on the implications of potential changes to Section 1557 and their impact on health AI nondiscrimination compliance. Today we'll dive into this topic, exploring some of the challenges and considerations that providers face in ensuring nondiscrimination in their use of AI tools. So, with all that said, Drew, welcome to the podcast. Really excited to have you here to share your expertise.

Speaker 4:

Thank you, Andrew, for that excellent introduction, and thank you to Clearwater and AHLA for having me.

Speaker 3:

Great. Well, excited to have you here, so let's just go ahead and dive in. Again, your article, I thought, was very insightful, and I don't know if the podcast will link to the article, but it's certainly something that folks in this area should take a look at and read. But let's start at the beginning. If you wouldn't mind, Drew, could you talk a bit about the significance of the Biden administration's standard under 1557 on nondiscrimination in the use of patient care decision support tools, and how it affects decisions related to AI in the provider context?

Speaker 4:

Absolutely. So, taking a step back, many listeners will recall the original Obama-era regulation under Section 1557, which did not directly address the use of AI in healthcare. Then the first Trump administration revised that regulation. And then the Biden administration, when it came into office, issued a proposed regulation that included a generic prohibition on discrimination in the use of what it called clinical algorithms. That was back before the ChatGPT revolution began; it was that long ago. The Biden administration took quite a while to finalize its final regulation under Section 1557, which is, of course, the nondiscrimination provision of the Affordable Care Act. And when the final regulation came out, gone was any reference to clinical algorithms, and in its place was the term "patient care decision support tools." The final regulation took so long to come out in part because of the emerging importance of AI, and the watershed significance of the standard on nondiscrimination in the use of patient care decision support tools, which we can refer to in shorthand as health AI, although it's technically broader than that, really can't be overstated. The Biden administration's final rule imposed an ongoing duty to make reasonable efforts to identify uses of health AI that use variables measuring race, color, national origin, sex, age, or disability, the classic protected classes. So it created an ongoing due diligence standard to undertake reasonable efforts to identify those uses, and then, for any such use that you identify, it imposed an ongoing responsibility to take reasonable efforts to mitigate the risk of discrimination in those health AI tools. So truly a watershed regulatory standard was imposed there, and it is still on the books. It is not yet in effect; under the regulation, the effective date was delayed until May 1st of this year, so in a little less than two months this regulatory standard is set to go into effect. The purpose of the article, of course, is to answer the burning question: if the Trump administration does away with this standard in whole or in part, what will be left in its place? We can discuss that next, but we could also spend some time elaborating on how HHS thought a healthcare provider should take those reasonable efforts, which we can circle back to, because they have continuing relevance even if the regulation is rescinded. So that's a long way of answering your question and saying that the Biden administration's regulatory standard created a whole new domain of AI due diligence in healthcare, especially as it relates to nondiscrimination.

Speaker 3:

No, I think that makes a lot of sense. And we're going to continue to dive into this, but on the point you were touching on a minute ago: even if we see some pullback of enforcement, or pullback of some of the rules and regulations under the new administration, there is still this ongoing conversation about what this looks like in the litigation realm and in the broader legal risk realm when organizations decide to use these types of tools in decision making. We really want to make sure that our clients, and organizations more broadly in this context, have the right controls in place and the kind of thoughtfulness that is really going to be required moving forward as we think about these tools. With all that said, I'm interested to hear from you about the legal frameworks that might apply; your article explores this a bit. If the Trump administration does renounce or pull back some of these regulations, could you talk about some of the main points, for listeners who may not have read the article yet, about the legal frameworks and constructs that may still apply even if we see 1557 pulled back or whittled down?

Speaker 4:

Certainly, yeah. And to your point, there are certainly going to be state laws that may apply to a healthcare entity's use of health AI, and every healthcare institution is going to do its due diligence from a safety, quality, and equity perspective. So that's all outside the scope of the discussion I'm about to have, which is focused on your question: if the Trump administration does renounce this Biden regulatory standard under Section 1557, what then? This is something of a legal point, but it's very important for in-house counsel and compliance professionals to understand as it relates to nondiscrimination. It's a basic legal proposition that if a regulation is rescinded or renounced, the statutory authority that Congress passed into law continues to apply. So Section 1557 of the Affordable Care Act will continue to govern healthcare actors' use of technology, which would include health AI. There's not really a serious debate about that point. It's not as though the use of a particular technology would fall outside the purview of preexisting federal civil rights statutes, and Section 1557, of course, incorporates preexisting civil rights statutes: Title VI of the Civil Rights Act, Title IX, the Age Discrimination Act, and Section 504 of the Rehabilitation Act. So these federal statutory authorities, and these nondiscrimination principles, would continue to apply. What I elaborated on in my article is that litigants and regulators are very likely to argue, for example, the deliberate indifference standard for intentional discrimination under these statutes. Before I dive into that, I should take a step back and explain, for listeners who may be unfamiliar with this, that you can divide these types of discrimination claims in healthcare, and even in the employment context, into two big categories. The first category is intentional discrimination, which I'll talk more about shortly. The second category is unintentional discrimination claims, often referred to as disparate impact discrimination claims. What's significant about these claims in general is that they are based on a facially neutral policy or practice that disproportionately affects a protected class. So you can see how a hospital or health system's use of health AI would lend itself to this type of disparate impact claim. The trouble with that framework is that the Supreme Court, back in 2001, greatly limited the ability of an individual, a private party, to assert those types of disparate impact claims of discrimination on the basis of race, color, or national origin. It's the same under Title IX for sex discrimination. So this has left a void: there is very little disparate impact discrimination litigation in healthcare or elsewhere in federally funded programs, because the federal government has not been active in pursuing these claims. So in the context of health AI, if we assume no one is using it to intentionally discriminate against anyone or to intentionally leave someone out of the benefits of health AI, then any discrimination that happens simply as a result falls under this unintentional discrimination type, the disparate impact framework.
But what if the Trump administration, as expected, continues not to police those types of claims? That is a big if, by the way: we are, of course, seeing the emergence of a fundamentally new and transformative technology, and it is possible that regulators could take an interest in policing disparate impact in the use of AI. So we shouldn't gloss over that too quickly; we can't say for sure that there won't be any disparate impact enforcement in the use of health AI. But if we assume there is not going to be much enforcement, which is the historic trend in that area, what you could be left with is the framework under these preexisting statutes, which impose a standard on institutions requiring them to take action to mitigate the known risks or known instances of discrimination. And so that is what this article really gets at: how that type of deliberate indifference standard would apply in the context of health AI, which we can discuss more in a moment. But I'll turn it back over to you and see if all those frameworks make sense as a general proposition.

Speaker 3:

No, it really does. And I was going to say it's a creative argument, but it's actually something that I think is very straightforward and makes a lot of sense. I also appreciate what you were saying about being a bit cautious when we try to make predictions about which way the wind may be blowing, because we are seeing some enforcement that continues. And your point about 1557's reliance on other civil rights laws is an important note for people out there thinking, well, if 1557 goes away, then maybe we don't have to have policies or practices, or maybe we can reshape our thinking around some of these rules. The point you're making, that the answer is, well, not really, is an important one, because there are still a lot of risks here. It's also worth underscoring the point you made that, at least in my experience, and probably yours too, we're not seeing malicious actors out there standing up organizations to design tools that intentionally discriminate against patients. These things are most likely going to happen in the ways you described, in unintentional ways, which is still significant, right? But I think that's an important note, and it leads into the bulk of your article on the deliberate indifference standard. I know you've just shared quite a bit of background, but before we start to think about best practices and operational questions, is there anything else worth sharing about how the standard could be applied to cases involving discrimination that results from the use of health AI?

Speaker 4:

Certainly, certainly. So a great example came out of a recent Fourth Circuit Court of Appeals opinion. I just mentioned how private individuals, under that 2001 Supreme Court case, have been unable to bring disparate impact, unintentional discrimination claims on the basis of race, color, or national origin; for decades now, that has been the law of the land. But a recent Fourth Circuit opinion demonstrated very clearly how an intentional discrimination claim in healthcare, under a deliberate indifference standard, can still be available to an individual claimant under these statutes. The case, in a nutshell, involved allegations that a hospital retaliated against a patient for complaining of discrimination and terminated her as a patient. Ordinarily, for a hospital or health system to be liable for the actions of its agents, you need to show some sort of institutional knowledge of the event in order to have a claim against the larger institution or health system itself. In this case, the retaliation was allegedly occurring at the level of the patient's interactions with hospital staff, but she was able to allege that a manager and a supervisor were aware and took no steps to correct the alleged discrimination or retaliation. And that allegation, that no corrective steps were taken by a manager or supervisor, allowed this individual to state a claim for intentional discrimination against the hospital itself for deliberate indifference. So this recent case drives home the deliberate indifference standard, which requires a deliberate or conscious choice to ignore something, and it drives home how, in the context of health AI, you can imagine scenarios in which individuals, even on a class action basis, or public interest groups, or public regulators, or state agencies, could allege that a hospital or health system knew about certain disparities or discrimination in the use of health AI and failed to take corrective action, such that it amounts to intentional discrimination. The point I was trying to make in the article is that, while no court has applied this framework to health AI specifically, if this type of framework is applied, you have a scenario in which hospitals and health systems should continue to be proactive in taking steps to mitigate known instances of discrimination in healthcare. So it's almost as if, paradoxically, we're coming back to a portion of the regulation that imposed this duty to mitigate known risks of discrimination. It drives home the point that even if the Trump administration renounces this regulatory standard, the risks remain at the statutory level under the deliberate indifference standard. And therefore hospitals and health systems would be wise to continue to do their due diligence, vet these products, monitor them, and have a process for responding to complaints. If they do not, they run the risk of a deliberate indifference type claim being alleged against them.

Speaker 3:

Yeah, and this case was decided really just last month, right? So for many reasons, I think your article and perspective are really timely. That's important for those who may not be as familiar with this case or some of the background here: this isn't a case that was decided four years ago; it's something fairly new. And again, that's why your perspective is so important. I think a really good segue here is to talk a bit about the operational and practical steps. You've done a fantastic job walking me through the background. But for those who are saying, okay, so what do we do now? We've got this Fourth Circuit opinion. We've got a lot of nebulous gray area around this administration and 1557 and even AI. What are some of your recommendations and thoughts around practical steps that hospitals, health systems, and providers can take to demonstrate that they're not being deliberately indifferent to potential discrimination risks in AI tools?

Speaker 4:

Yeah, absolutely. Great question; it's the question. At bottom, under this framework that I've been discussing, a hospital or health system should be prepared to demonstrate that it took reasonable steps to mitigate the known instances or known risks of discrimination. At a bare minimum, you would want to be able to demonstrate that you had a process to receive, review, and evaluate complaints about the use of health AI, and that when complaints or risks of discrimination, disparities, or inequities were surfaced through that process, or even outside the established process, there was a reasonable effort to investigate, to review, and to gather the facts and the data. Then, of course, the real hard work begins of deciding whether the complaint is legitimate, what the extent of it is, how prevalent it is, and how much exposure is being created. And again, I don't want to overlook that this raises safety, quality, and equity concerns anytime we're talking about discrimination. So while I often speak in terms of legal risk, that's maybe the third or fourth consideration for a hospital or health system when we're talking about tools resulting in clinical disparities or access disparities. So evaluating it from a safety, quality, equity, and then legal risk perspective, and deciding how to address it, will really be the hard work. But this process of receiving complaints is not unlike the grievance procedures that hospitals and health systems have had to have in place for some time. You can think of it as a similar type of obligation: receiving complaints, evaluating them, and responding appropriately is really what the deliberate indifference standard would impose. Training staff and clinicians on that, creating those processes, having the governance structures around health AI, which are already being stood up across the nation, and building this type of process into those frameworks will be critical to showing that a hospital or health system was not deliberately indifferent. Providing the subject matter expertise and training on these tools will be key as well, so that the individuals tasked with responding, reviewing, and acting can do so effectively. The task is very complex, and it's much easier said than done; it requires tremendous interdisciplinary collaboration between chief AI officers, chief medical officers, frontline staff, patient quality and safety, and, of course, in-house counsel and compliance professionals as well.
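
To make the monitoring piece of this a bit more concrete, here is one minimal, hypothetical sketch of the kind of group-level disparity check an AI governance team might run over a decision support tool's output logs. It is purely illustrative: the field names, the toy data, and the 0.8 ratio threshold (loosely modeled on the familiar four-fifths rule of thumb from the employment context) are assumptions, not anything Section 1557, the final rule, or the speakers prescribe.

```python
# Hypothetical sketch: flag group-level disparities in an AI tool's decision log.
# Field names, toy data, and the 0.8 threshold are illustrative assumptions only.
from collections import defaultdict


def group_rates(decisions, group_field="race", outcome_field="approved"):
    """Compute the favorable-outcome rate for each group in the decision log."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for record in decisions:
        group = record[group_field]
        totals[group] += 1
        favorable[group] += 1 if record[outcome_field] else 0
    return {g: favorable[g] / totals[g] for g in totals}


def flag_disparities(rates, ratio_threshold=0.8):
    """Flag groups whose rate falls below ratio_threshold times the highest group rate."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and r / best < ratio_threshold}


if __name__ == "__main__":
    # Toy log standing in for the output of a patient care decision support tool.
    log = [
        {"race": "A", "approved": True}, {"race": "A", "approved": True},
        {"race": "A", "approved": True}, {"race": "A", "approved": False},
        {"race": "B", "approved": True}, {"race": "B", "approved": False},
        {"race": "B", "approved": False}, {"race": "B", "approved": False},
    ]
    rates = group_rates(log)
    print("Rates by group:", rates)
    print("Groups flagged for review:", flag_disparities(rates))  # feeds the complaint/review process
```

A flag from a check like this is only the start of the review Drew describes; whether a disparity is clinically justified, and what mitigation is reasonable, remains an interdisciplinary judgment call rather than something the code can decide.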

Speaker 3:

Yeah. You know, having helped organizations walk through assessing AI risk, I think your point about interdisciplinary collaboration can't be emphasized enough. When you look at frameworks like the NIST AI Risk Management Framework, which, for those of you who haven't looked at it yet, I encourage you to do so, NIST has published a playbook that accompanies the framework, you start to get a picture of the way these kinds of assessments around ethical use of AI, nondiscrimination in AI, and bias in AI really require everybody to be talking together, and that's true whether you're using NIST or other types of frameworks. You mentioned the patient grievance process, and those are people we often speak to when we're doing these types of assessments, because that feedback loop, what they're hearing from patients about clinical decision making, is incredibly important. Granted, the patient may or may not be aware of how AI is being used, but patients will often have a sense that something felt different about their treatment. So, just elaborating on your point, it's one thing to have the governance in place, the policies in place, and maybe even the training in place, but it's another question whether the chief compliance officer is actually meeting with clinical IT leads, AI development leads, and the data scientists who are helping to develop and manage these tools in clinical settings. These systems are being developed very quickly, and in many ways for good reason; there's a lot of benefit that comes from them. But with fast development and fast deployment come risks. If you're not monitoring the quality, if you don't have the right audit mechanisms and the right committees in place to have these meaningful discussions, it can just make things more challenging. That's what I've seen, and I don't know if that resonates with you. I know I've talked a bit all over the place, but, yeah.

Speaker 4:

It does, it does. I think that, in some ways, what we're discussing in terms of nondiscrimination is the larger challenge of vetting, deploying, and monitoring these tools in general. And this is just one more domain that has continuing relevance, notwithstanding the change in the regulatory standard that we're all expecting.

Speaker 3:

Yep. No, I think that's a great point, a fantastic point. Well, maybe one way we can wind down the discussion is this: I know you don't have a crystal ball, but for those folks who are listening and asking, where is this going? What are your thoughts, your own personal opinion, on where you see enforcement of nondiscrimination in AI changing under this administration, particularly around disparate impact claims? And maybe a tied question, if you want to speak to it: any other thoughts on the challenging aspects of ensuring compliance in this rapidly evolving field?

Speaker 4:

Certainly. So, as I mentioned, it's safe to expect no large shift in disparate impact policing or enforcement at the federal level. I hesitate to say that because, again, we are seeing the emergence of such a significant, transformative technology that it's not outside the realm of possibility that regulators would see the need to be present in this space, offer guidance to the industry, and seek voluntary corrective compliance before a matter is turned over to the Department of Justice for enforcement proceedings, for example. So it's important to keep that in mind and monitor developments in that area. I do anticipate private individuals, public interest groups, perhaps on a class action basis, or certain states alleging similar claims under these deliberate indifference type frameworks. I think it cannot be ignored. And the real purpose of an article like this, and a discussion like this, is to put this on the radar of in-house counsel and compliance professionals, so they understand that the legal risks continue even despite the expected change in the regulatory standard. To answer your second question, about challenges in compliance, apart from everything we've already discussed, what I would note is that, in one sense, this discussion suggests that hospitals and health systems, their AI governance committees, and everyone involved in that effort would be wise to continue doing everything they're already doing, perhaps what they were doing with an eye toward the regulatory standard. What do I mean by that? It would make no sense not to vet AI tools and do your due diligence on the front end from a nondiscrimination standpoint, only to discover at a later date that a particular tool results in a lot of drift, results in disparities, and creates all kinds of problems down the road. If you know this legal framework is hanging over the use of health AI, it should incentivize hospitals and health systems to continue doing their due diligence on the front end and their due diligence in monitoring, so they can get ahead of and minimize these types of claims. And then, of course, when actual issues arise, as we discussed, reasonable steps should be taken to mitigate and address them. So I just wanted to note that, in a way, this framework provides the same incentives HHS was aiming for with its regulation. Now, as a technical matter, it's likely that the regulation under the Biden administration is not a perfect fit with the statutory standard we're discussing as a legal matter; I don't mean to suggest that it is. What I'm saying is that the incentives to do your due diligence, to monitor these tools, and then to address issues are the same incentives provided by this deliberate indifference framework that we've been talking about.
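
One way to read Drew's point about drift alongside the front-end vetting he recommends: if a governance committee documents how a tool performed across groups when it was vetted, ongoing monitoring can simply compare later numbers against that baseline. The sketch below is hypothetical; the baseline figures, group labels, and 0.10 tolerance are made-up assumptions for illustration only, not a standard drawn from the rule or the discussion.

```python
# Hypothetical sketch: compare a tool's current group-level outcome rates against the
# rates documented during front-end vetting, and flag drift beyond a chosen tolerance.
# Baseline numbers, group labels, and the 0.10 tolerance are illustrative assumptions.

def drift_report(baseline_rates, current_rates, tolerance=0.10):
    """Return groups whose favorable-outcome rate moved more than `tolerance`
    (in absolute terms) from the rate documented at vetting time."""
    report = {}
    for group, base in baseline_rates.items():
        current = current_rates.get(group)
        if current is None:
            report[group] = "no current data"  # itself worth investigating
        elif abs(current - base) > tolerance:
            report[group] = f"drifted from {base:.2f} to {current:.2f}"
    return report


if __name__ == "__main__":
    baseline = {"A": 0.74, "B": 0.71, "C": 0.73}  # documented during pre-deployment review
    current = {"A": 0.72, "B": 0.55}              # latest monitoring window
    print(drift_report(baseline, current))
    # {'B': 'drifted from 0.71 to 0.55', 'C': 'no current data'}
```

Anything a report like this flags would then feed the same complaint and review process discussed earlier, helping build the documented record of reasonable, responsive steps that cuts against a deliberate indifference claim.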

Speaker 3:

Yeah, I think, again, I can't underscore enough how important the points are that you've made today and that you make in your article, which, just as a quick plug, was published in AHLA's Health Law Weekly on February 28th of this year. For those looking to pull it up, Drew's article is there, and I encourage everybody to take a look, dive a bit deeper, and reach out with any questions. Drew, it's been really great talking with you today. I appreciate your perspective: a very timely, really interesting, and unique take on all of these conversations swirling around the use of AI, and particularly the use of AI in clinical settings. So, really appreciate your time. Thanks so much for taking the time to talk today, and I wish you the best. Thanks.

Speaker 4:

Yep. Thank you, Andrew, and thank you, Clearwater and AHLA, for the terrific discussion.

Speaker 2:

Thank you for listening. If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.