AHLA's Speaking of Health Law

Top Ten 2026: Automation Accountability—AI Governance, Liability, and Oversight

American Health Law Association

Based on AHLA's annual Health Law Connections article, this special ten-part series brings together thought leaders from across the health law field to discuss the top ten issues of 2026. In the second episode, Zi Peng, Managing Director, StoneTurn, speaks with Alya Sulaiman, Chief Compliance and Privacy Officer & SVP of Regulatory Affairs, Datavant, about some of the primary issues around artificial intelligence (AI) in health care as organizations plan for 2026. They discuss the use of AI in prior authorization and utilization management, tensions between the federal government and the states over AI regulation, and actionable steps health care organizations should take to mitigate risk and ensure responsible AI deployment. Sponsored by StoneTurn.

Watch this episode: https://www.youtube.com/watch?v=LCJ5w4CN-Qo

Read AHLA's Top Ten 2026 article: https://www.americanhealthlaw.org/content-library/connections-magazine/article/a879dda5-35f9-46fb-ad45-1b0799343d74/Health-Law-Forecast-2026

Access all episodes in AHLA's Top Ten 2026 podcast series: https://www.americanhealthlaw.org/education-events/speaking-of-health-law-podcasts/top-ten-issues-in-health-law-podcast-series

Learn more about StoneTurn: https://stoneturn.com/ 

Essential Legal Updates, Now in Audio

AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Comprehensive members. Get all your health law news from the major media outlets on this podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.

Stay At the Forefront of Health Legal Education

Learn more about AHLA and the educational resources available to the health law community at https://www.americanhealthlaw.org/.

SPEAKER_01:

This episode of AHLA's annual Top 10 series, discussing the major health law trends and developments of 2026, is sponsored by StoneTurn. Learn more at StoneTurn.com.

SPEAKER_00:

Hi there. Welcome to the American Health Law Association's Top 10 Podcast Series for 2026. I'm Zi Peng, a Managing Director at StoneTurn, a global professional services firm. My practice focuses on providing economic consulting support, including expert analysis in labor and employment disputes, securities fraud and financial misconduct cases, as well as antitrust and consumer class actions. Joining me today is Alya Sulaiman, the Chief Compliance and Privacy Officer and SVP of Regulatory Affairs at Datavant. Alya is one of the authors of the Health Law Connections article we'll be diving into today. Alya, thank you so much for joining me. For our listeners, please introduce yourself, your work, and your role at Datavant.

SPEAKER_02:

Thank you, Zi. I'm very happy to be here. I'm Alya, and as you mentioned, I manage regulatory affairs and also serve as Chief Compliance and Privacy Officer at Datavant. Datavant is a really unique company in that it is a data collaboration platform trusted by a wide range of stakeholders in healthcare. We work with 80,000 provider organizations, all major US health plans, and the top 20 life sciences companies, as well as authorized requesters of medical records and data. Our thesis is all about taking data connectivity and driving it into action, and purpose-built AI solutions are thankfully a part of that. I've been product and privacy counsel for health information technology, including AI and machine learning initiatives, my whole career, including as a law firm partner and as in-house counsel at Epic. So this topic is near and dear to me, and I'm excited to dig in.

SPEAKER_00:

Thank you so much, Alya. There was probably no buzzier topic over the last year than the impact of AI on the world, and healthcare is an important part of that story. There has been a lot of discussion about how AI can revolutionize the way we work and live, but questions remain about governance, oversight, and liability, which we'll tackle in today's discussion. To get us started, Alya, can you give us an overview of some of the primary issues you're seeing around AI in healthcare as organizations and their counsel embark on their 2026 plans?

SPEAKER_02:

Yeah, I really frame this around three interconnected challenges that I think health law professionals are likely to spend some time on this year. First, there's a bit of a liability vacuum. We don't have formal accountability frameworks across the folks who develop AI, who configure it, who deploy it, and who use it. We see AI developers contractually disclaiming liability and narrowing their lane of responsibility when it comes to AI use, which typically leaves healthcare organizations, or other entities deploying AI, holding the bag for their configuration decisions, the context of deployment and use, and the decisions they make or don't make because of the AI tools they've introduced into the workflow. And there's really no clear standard of care yet, especially for AI-assisted clinical decision making. So the risk allocation conversation seems to be happening deal by deal, use case by use case, which is frankly inefficient and is going to create lots of inconsistencies across our industry. The second of these three interconnected challenges is what I call our high-risk definition problem. Anyone who's talked to me about AI in the last three to four years knows I feel very strongly that this is a major issue: we do not have consensus as an industry, nor as a society, around what constitutes high-risk AI across both clinical and administrative domains. Not having a general consensus definition of what truly is high risk makes it very difficult to scale and flex governance to address those issues. So this second issue, of high-risk definitions being all over the place, and of AI terminology meaning different things to different people, is a big driver of challenges when it comes to scaling governance. It also makes it challenging for policymakers to draft regulation that scales with risk and impact. We really do not have that right now. We have a framework in the HHS Assistant Secretary for Technology Policy's AI regulatory framework, which is currently, under a proposed rule, on the chopping block to be removed from the health IT certification program. And we certainly have some guidance from FDA, but after recent updates that guidance applies to an even narrower set of applications and tools. The third issue I would highlight is the practical reality for everyone in healthcare right now: we have a serious supply and demand problem that is only getting worse. When I was in private practice, in law firm life, if you asked any hospital executive what keeps them up at night, it was how to do more, safely, with fewer resources. Between staffing shortages, rising costs, evolving reimbursement models, and increasing patient demand, we are seeing the patient experience degrade. It's really interesting to see direct-to-consumer offerings in the healthcare space from major AI companies like OpenAI and Anthropic step in to potentially democratize access to care and help us figure out how to optimize capacity, but we have to deploy these sorts of tools responsibly and, again, make a decision as a society about how we want those direct-to-consumer offerings to complement healthcare delivery.
So I do think the tension among those three things, the liability gaps, the definitional ambiguity, and the operational and demand needs, is what I see organizations continuing to wrestle with in 2026. And layered on top of all of that is the federal-state tension, which I know we will get into a little later.

SPEAKER_00:

Thank you. Diving into your article, a crucial discussion you bring up is the use of AI in prior authorization and utilization management, which is a key focus for regulators. What specific concerns have prompted states like Maryland and California to mandate that AI-based coverage decisions be grounded in individual patient circumstances? And how might these state-level requirements influence the broader adoption and governance of AI in health plans?

SPEAKER_02:

Yeah, it's a really great question, because it hits on both of the challenges we were just walking through: definitional ambiguity, liability, and who's really responsible. The concerns here are really about access, delay, and fairness. There's a general concern that when AI drives utilization management decisions, prior auth, and post-claims adjudication, those decisions could have major implications for whether patients actually get the care they need, when they get it, and whether they're treated fairly by the process they're following. In recent years we've seen investigations and litigation suggesting that reviewers may be spending seconds per case, resulting in denial rates that would be essentially statistically impossible if individual circumstances were actually being evaluated. We even have class action lawsuits now alleging that plans are using algorithms in ways that result in those really high denial rates. And look, whether those specific allegations are proven or not, I do think they point to a generalized anxiety over a structural AI risk when it comes to AI used by health plans: is there a risk that AI so optimized for efficiency will systemically deprioritize individual patient circumstances? We've seen this play out along the fairness dimension as well, where there's been research and writing about algorithms trained on historical claims data that can encode existing disparities, both in the care that was delivered and the care that was possible to access. For example, if a certain population has historically been denied care at higher rates, whether due to socioeconomic factors, geographic access issues, or simply bias in human decision making, there's a fear that an AI model trained on that data will replicate or potentially amplify those patterns. So I do think the worry here is a legitimate concern that we as an industry should evaluate and have answers to. And to your point, some states have already looked at this and said it's not acceptable. You mentioned Maryland and California. Maryland's law, which took effect in, I believe, October, explicitly requires that AI tools base coverage determinations on individual clinical history and circumstances, not solely on group data sets, and that carriers conduct quarterly internal reviews of AI system performance and outcomes, which really requires codification of a governance framework in more formal terms than most other state laws in this space. California has some interesting laws operating here as well, and they're sometimes conflated: there are laws more focused on transparency about when AI is used in communications and consequential decisions, and then there's a law specific to the use of AI in utilization management that prohibits AI from being the sole basis for medical necessity determinations and puts licensed physicians in a priority position when it comes to making those calls. So I do think the use cases and the opportunity are huge, and I hope that health plans, and frankly companies like Datavant who serve them, continue to invest in building and expanding those tools.
I think the practical implication is that we are all learning that you can't just build one algorithm, deploy it universally, and expect the performance and outputs to be the same, or fair, or broadly appropriate. You need governance that accounts for human oversight, individual patient analysis, and the realities of what documentation you have. It is interesting to see how these state frameworks evolve, because they are pushing the industry toward more, and potentially better, documentation of what it looks like for these algorithms to work successfully and in a way that doesn't disproportionately harm one part of a patient population versus another.

SPEAKER_00:

Thank you. Following up on our previous discussion, the Trump administration, as noted in your article, has emphasized deregulation and AI innovation incentives. How does this federal stance complicate the efforts of states trying to establish robust accountability frameworks? And what strategies can health law professionals employ to navigate potential preemption conflicts while ensuring patient safety and ethical AI use?

SPEAKER_02:

A very loaded question. I probably could have written my entire article on federal-state tensions here, and I think this is really going to be the central tension for 2026. I want to be really precise about what's happening, because the details matter and this is relatively new. The executive order signed in early December, titled Ensuring a National Policy Framework for Artificial Intelligence, is very explicit about its purpose. The administration views state AI laws as creating a patchwork that is stifling innovation, and in some cases they're revealing their hand that they think some of these laws may even be unconstitutional. There's a whole section of that executive order that establishes an AI litigation task force with a mandate to challenge state AI laws that are inconsistent with federal policy and could be preempted by federal regulations. It remains to be seen what exactly comes out of that task force and which state AI laws, either thematically or by subject matter, are targeted for challenge. The other thing I found interesting in the executive order was the idea of publishing an evaluation to identify onerous state AI laws. There's a focus on bias testing and fairness requirements, and really on any laws that may require AI models to alter outputs through filtering or configuration decisions. Again, that's an interesting focus area for an executive order, and we'll see which AI laws clash with what is published as part of that evaluation framework. But the most important thing health law practitioners need to understand about this legal landscape is that an executive order cannot directly preempt state laws. That would require either successful litigation or some form of congressional action. So there's a lot of really interesting and frankly novel stuff in that executive order, but state laws remain enforceable until successfully challenged. We talked about Maryland and California; there's also Colorado, Arizona, and Nebraska. There are so many states with AI laws on the books that are either in effect or taking effect on schedule, and it is critical for health law professionals to continue paying attention to them. So what do you do with all this uncertainty? First, don't underinvest in compliance on the assumption that federal preemption and that executive order will bail you out of having to implement a state law. I do not think that is a safe assumption to make in the near term. Second, all of this tension and uncertainty makes it even more important to document your AI governance decisions thoroughly, whether through your AI governance committees, how you negotiate responsibilities and transparency requirements in contracts, or the internal policies you have and hopefully follow before going live with different AI use cases. It is in everyone's interest to create a paper trail showing that you evaluated risks, considered applicable requirements, and, frankly, made reasonable choices about how AI is used in your organization.
And then, in addition to that, if you don't have enough to do, engage in the policy conversation, whether at the state level or the federal level, because there's real value in advocating for governance frameworks and standards that balance compliance and meaningful accountability with the ability to move fast and achieve the high-impact, low-risk possibilities that some AI models really present.

SPEAKER_00:

Thank you, yeah. Based on what we have discussed so far, and given the current uncertainty and fast-moving nature of AI governance, what immediate, actionable steps should healthcare organizations be taking in 2026 to mitigate risk and ensure responsible AI deployment, particularly in areas like internal governance policies, training, and robust documentation of AI decisions?

SPEAKER_02:

Yeah, I will give folks four concrete things their organizations could be doing right now, informed by some lived experience across different organizations. First, establish or strengthen your AI governance committee. It should not be a rubber stamp. We have an incredible AI task force here at Datavant where we really actively debate AI use cases and initiatives that percolate up from across the organization. We are the important gateway that asks: what problem is this going to solve? Is that problem crisply defined, including with respect to what success looks like? What are the risks? What is the human oversight model? How will we actually monitor post-deployment success or impact? And who is accountable if something goes wrong? Those are sets of questions that transcend the existing committees your organization may have, whether that's ethics or data use or privacy, you name it. It is really worthwhile to have a dedicated, cross-functional set of folks with the authority to approve, reject, or condition AI use cases and initiatives based on that holistic assessment. So that's number one: establish or strengthen your AI governance committee. Number two is implementing a tiered governance approach based on risk. Not every AI tool needs the same level of scrutiny. An algorithm that predicts no-shows is very different from one that influences clinical dosing decisions. This is an important thing to build into your governance approach, because you could spin your wheels all day sending hundred-line questionnaires for every AI model or third-party AI vendor that comes to your door, but the work and effort may not be worth the reward when it comes to risk management, because a handful of those use cases are very unlikely to create meaningful risk for your organization. In some cases, the consequence of an AI model or tool not working is simply that an administrative process in your organization moves slower. So differentiating based on risk, I think, is key. I also think that aligning on how you talk about and share information about the AI in use within your organization is very important. At Datavant, one of the ways we align on risk and intended use is through artificial intelligence and machine learning model facts labels, like nutrition facts labels. This is a really well publicized concept in health AI, thanks to some great work by my friend Mark Sendak, who used to be with Duke and is now at Vega Health, and others. Essentially, these are model cards, facts labels that document key information about AI capabilities or tools: what they do, what data they use, how they're validated, what the expected intended use is, and what the known limitations and warnings are, meaning the contexts in which they shouldn't be used. This kind of documentation has been critical to scaling governance proportionally and creating a defensible record of what we did and how we intended a tool to be used, which has helped us with that overall tiered governance approach based on risk.
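[Editor's note: For readers who want a concrete picture of the model facts label idea discussed above, here is a minimal sketch of what such a label might look like as structured data. The field names, risk tiers, and example values are illustrative assumptions for this sketch only, not Datavant's format or the Model Facts template referenced by the speaker.]

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only: field names, risk tiers, and example values below
# are assumptions, not any organization's actual model facts label format.

@dataclass
class ModelFactsLabel:
    name: str                      # what the tool is
    intended_use: str              # the context it was built and validated for
    training_data: str             # what data it was trained on
    validation_summary: str        # how, and on what population, it was validated
    known_limitations: List[str]   # contexts in which it should not be used
    human_oversight: str           # who reviews or can override outputs
    risk_tier: str                 # e.g., "low" / "moderate" / "high"; drives review depth

# Hypothetical example: a low-risk administrative tool whose failure mode is
# that a scheduling process moves slower, so it gets a lighter-touch review
# than a tool influencing clinical decisions.
no_show_model = ModelFactsLabel(
    name="Appointment no-show predictor",
    intended_use="Flag appointments at high risk of no-show for outreach staff",
    training_data="Two years of de-identified scheduling records (hypothetical)",
    validation_summary="Retrospective validation on a held-out quarter (hypothetical)",
    known_limitations=["Not validated for pediatric clinics", "Not for coverage decisions"],
    human_oversight="Outreach coordinators decide whether and how to contact patients",
    risk_tier="low",
)
```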
The third thing I would say is investing in training. That training should extend not only to your technical teams who work with these models day in and day out, but also to the people who simply interact with AI outputs, to help them discern when a tool is working well versus when an output may need additional review or careful handling before it's plugged into a workflow. For every new tool you deploy, people need to understand what the tool does, what it doesn't do, when to trust it, when to override it, and how to document their decisions when they make those calls. That sort of informed, human-in-the-loop oversight is only possible through education and training. The fourth and final thing I would suggest is to define success metrics and actually monitor them. Don't just turn something on, deploy AI, and walk away. Decide up front what you're measuring and how you're going to know whether an AI deployment is successful. Is it accuracy? Is it some sort of efficiency gain? Is it turnaround times for a workflow being reduced? Is it error rates going down? Decide what your metrics are, confirm that you can measure them, and then actually review them periodically. The people using these tools daily have insight into what success looks like, so it's almost undervalued to simply ask your end users: what does it look like when this works? How does your workflow change? How does your day change? And work your way backwards from that feedback to metrics. Across all four of these concrete steps around governance, the through line is documentation. Until the regulatory and liability landscape stabilizes, everyone's best protection is a record showing that you governed AI thoughtfully in your organization, trained your people appropriately, and monitored outcomes rather than just setting it and forgetting it. Folks who do that are positioned to adapt regardless of how the regulatory landscape evolves.
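[Editor's note: As one hedged illustration of the fourth step, defining success metrics up front and reviewing them periodically, here is a small sketch of how a team might encode agreed-upon metrics with targets and compare observed results against them after deployment. The metric names, targets, and example values are hypothetical, chosen only to show the pattern, not recommended thresholds.]

```python
from dataclasses import dataclass
from typing import Dict

# Illustrative sketch: metric names and targets are hypothetical examples of
# deciding success criteria before go-live and reviewing them afterward.

@dataclass
class SuccessMetric:
    description: str
    target: float
    higher_is_better: bool = True  # direction of "good" for this metric

def review_deployment(metrics: Dict[str, SuccessMetric],
                      observed: Dict[str, float]) -> Dict[str, bool]:
    """Compare observed values against the targets agreed on before go-live."""
    results = {}
    for name, metric in metrics.items():
        value = observed.get(name)
        if value is None:
            results[name] = False  # can't measure it, so flag for follow-up
        elif metric.higher_is_better:
            results[name] = value >= metric.target
        else:
            results[name] = value <= metric.target
    return results

# Hypothetical go-live agreement for a documentation-assistance tool.
agreed_metrics = {
    "override_rate": SuccessMetric("Share of outputs overridden by reviewers", 0.10, higher_is_better=False),
    "turnaround_hours": SuccessMetric("Median case turnaround time (hours)", 24.0, higher_is_better=False),
    "reviewer_satisfaction": SuccessMetric("End-user satisfaction score (1-5)", 4.0),
}

quarterly_observed = {"override_rate": 0.07, "turnaround_hours": 30.0, "reviewer_satisfaction": 4.2}
print(review_deployment(agreed_metrics, quarterly_observed))
```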

SPEAKER_00:

Thank you, Alya. As we come to the end of today's podcast, thank you so much again for joining me and providing valuable insights for our listeners. I look forward to speaking with you again soon. And for our listeners, we'll see you in our next episode of the podcast series for 2026. Thank you.

SPEAKER_02:

Thank you, Zi. Great conversation.

SPEAKER_01:

If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. For more information about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org, and stay updated on breaking healthcare industry news from the major media outlets with AHLA's Health Law Daily Podcast, exclusively for AHLA Comprehensive members. To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.