AHLA's Speaking of Health Law

Top Ten 2024: Health Care's AI Transformation—Managing Risks and Rewards in an Evolving Landscape

January 12, 2024 AHLA Podcasts
AHLA's Speaking of Health Law
Top Ten 2024: Health Care's AI Transformation—Managing Risks and Rewards in an Evolving Landscape
Show Notes Transcript

Based on AHLA’s annual Health Law Connections article, this special series brings together thought leaders from across the health law field to discuss the top ten issues of 2024. In the first episode, Barry Mathis, Principal, PYA, speaks with Katherine Snow, Privacy Counsel, Hinge Health Inc., and Alya Sulaiman, Partner, McDermott Will & Emery LLP, about how artificial intelligence (AI) continues to transform health care and the various legal and compliance considerations. They discuss how health care organizations should manage governance of AI; the legislative and regulatory environment; and concerns related to intellectual property, privacy, and bias. Sponsored by PYA.

Watch the conversation here.

To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.

Speaker 1:

A HLA is pleased to present this special series highlighting the top 10 health law issues of 2024, where we bring together thought leaders from across the health law field to discuss the major trends and developments of the year. Support for A HLA in this series is provided by PYA, which helps clients find value in the complex challenges related to mergers and acquisitions, clinical integrations, regulatory compliance, business valuations , and fair market value assessments, and tax and assurance. For more information, visit pya pc.com.

Speaker 2:

Thank you very much for that introduction. We are pleased once again , um, to facilitate and host on this podcast , uh, this one , uh, all about ai, which is in the news everywhere you go. We've got it downloaded on our phones. We're writing songs with it, we're doing all kinds of things with artificial intelligence now that it's in the public mainstream. But today we've got a couple of experts and we're gonna talk about healthcare's AI transformation, and managing the risk and rewards evolving landscape. That's what today's about. I have with me two of those subject matter experts. We've got Katie Snow, and we've got , um, Alia Solomon . All right . I'm gonna let you introduce yourselves. Give a little bit about your background, and then , uh, we're gonna get started. Go ahead, Katie .

Speaker 3:

Hi, I am Katie Snow. I am currently serving as privacy counsel for Hinge Health. Uh, we are in the digital health space. I have also , uh, served in a similar role for a health plan and also a health system. I'm happy to be here today.

Speaker 4:

And, hi, I'm Ollie Soleman . I'm a partner with the law firm McDermott, mul and Emery. I'm based in their Los Angeles office. I work on all things , uh, digital health , uh, product counseling, which these days includes a lot of , uh, counseling on AI enabled technologies. Before joining McDermott, I spent my entire career in-house , um, including at a provider organization, a California statewide HIE , and then most recently at Epic, the EHR company. So, really happy to be here with you today.

Speaker 2:

Thank you. And we are pleased to have both of you in, in this discussion. You, you wrote a, a , a, a great article. I might mention you've got another colleague in there , uh, Lisa, a Monte , who is also an author who couldn't be with us here today, but we want to give her some shout out . But read through the article again, great article. I think if you're, if you're looking for either some next step information or where to start information, this is a great article around , uh, AI transformation in healthcare and the risk and concerns there. So what I want to do is start with the start, and that is, if, if I'm joining this podcast or, or maybe I'm wondering what I should be doing, where should I start? If it's, I know it's coming to my organization, I know my colleagues are talking about it, but if it's my responsibility to try to add, and I think what your article eventually gets to is the governance around this. Where do I start with managing that governance?

Speaker 4:

I , I'm happy. Yeah, I'm happy to get us started. So I think at, at a high level, you know, 2023 was really the, the year where ai, you know, dominated headlines, dominated , um, boardroom conversations and dominated, you know, internal leadership meetings at healthcare organizations and healthcare adjacent organizations. And, and one of the, one of the key questions that everyone seemed to try to answer is, what's our AI strategy? Um, you know, how, what problems do we have , uh, that AI can solve? And, and, you know, I think all, you know , all three of us, you know me, Katie , Lisa, really agree on taking a use case driven approach to any ai , um, you know, kind of initiative. And what does that really mean? It means having a defined problem that you're trying to solve and making sure that you can translate that problem into questions that , um, an AI model can hopefully answer for you, or, you know, activities that an AI model can streamline for you. A lot of, you know, time spent on the problem formulation , uh, part of the, of the lifecycle can pay dividends later on when you are trying to identify risks, which, you know, really come down to AI operating outside the intended use case. You know, that intended problem statement that you defined upfront .

Speaker 3:

Very good . I could add to that a little bit , um, to , to kind of get to the governance piece. I think you're also trying to get at , um, to do that analysis, you have to know what the use cases are, but to know what the use cases are, you have to figure out a way to get that information to the people who are tasked with governance for your organization. So figuring out how those use cases can come and then applying your already existing governance frameworks to them, whether it be privacy, security, or otherwise , um, is , is a good way to start that analysis.

Speaker 2:

Very good. And , and Katie , this is , I'm gonna do a follow up question because you're, you're a bit of a privacy expert, my understanding. Yeah . <laugh>. So , uh, one of the things that we use to affect our governance is, is policies and procedures. Mm-Hmm. <affirmative> . And , and we've all been through the analysis paralysis over the years with the different frameworks coming in, unfunded mandates with hipaa, meaningful use, mips and all this. So we're all well indoctrinated on assessing and evaluating and building our policy procedure base. Uh, are we doing anything different than a , does AI create a special needs or is it really kind of fit within those?

Speaker 3:

Uh, I think it's a little bit of both. So I think your current existing policy and procedure framework will apply to use of AI because at the end of the day, you're essentially using AI to address already existing problems, which are already addressed by your current policies and procedures. So AI, as we all know, is very data-driven. So obviously our policy , your privacy po and security policies and procedures will apply. Um, but in addition to that , uh, I, I think what is helpful is creating an AI governance framework that kind of wraps around your , or your already existing , um, policies and procedures, frameworks that you already have set up in your organization. So it's kind of a layers on top of it, but pulls in those various different pieces that might be at issue.

Speaker 2:

Okay. So it sounds more like we're going to amend some of our existings to account for the procedures and things around ai, but not necessarily reinvent things because we've already got good policies and procedures likely in place over the years. We just have to do some augmenting.

Speaker 3:

Yes . That's kind of like more of like layer, how do you get the AI piece to you so you can evaluate it.

Speaker 2:

Very good. Um, so one of the questions that that, that I've had asked me a couple of times is , would there be additional legislation around this? And I know there's other bodies that are talking about this and , and other federal agencies that are discussing it. And I have to imagine that if I deploy an AI tool and that, let's say it's encoding and that AI tool winds up producing 300 erroneous bills , uh, for Medicare , uh, am I gonna be able to raise my hand and say, oops, sorry, that wasn't me, that was my ai, sorry about that. Uh, or am I still held the same standards ?

Speaker 4:

It's a great, it's, so I think this is where we can continue the theme that started with Katie's answer, which is that just like there are existing policies and procedures within your organization that likely apply to your AI use cases, there are plenty of existing laws on the books that apply to the use of AI in particular context. So from a fraud and abuse false claims perspective, Barry, you're absolutely right that, that, you know, healthcare organizations are going to be held accountable for activities, actions, decisions made , um, even if they, in the same way, even if they were , um, you know , sort of motivated or, or partially driven by AI enabled technologies. We're also seeing, you know, a real kind of evolution in the, in the concepts around sort of shared responsibility between developers of these tools, the institutions deploying these tools, the end users and sometimes clinicians , um, that are, you know, actually wielding these tools in workflow. And we are seeing these relationship dynamics , um, you know, kind of evolve over time without a , a ton to go off of , of when it comes to regulation or even legislation that, you know, specifically put certain responsibilities during the AI lifecycle onto specific entities. That's starting to change. I mean, just , um, you know, this week, you know, on , uh, January 9th, the Georgia , uh, legislature just introduced a bill that specifically prohibits healthcare professionals from making decisions or taking actions solely on the basis of ai , uh, you know, generated results. Uh, and also requires the state medical board to create rules, regulation standards that'll be applicable to clinicians. So in the absence of sort of industry and inter and stakeholders sort of deciding kind of what the shared responsibility model will be among themselves, I think we are gonna continue to see state legislatures , federal , uh, you know, regulators start to impose particular obligations or responsibilities on different, you know, folks , um, you know, in , in that developer deployer or end user category.

Speaker 2:

You know, and I think that's important, especially again for those that are, you know, within the sound of our voice here on this podcast. Um, you talk about the state that the , the Georgia , uh, mandate just came out. Uh , other states are looking at similar, but I know at the federal level, there's 2, 2, 2 things that you mentioned in your article. I think it's in the second paragraph , uh, around the multi-layer issue , uh, conversation. You talk about the White House blueprint that's out there, and then also an executive order that came down towards the end of last year , uh, from President Biden. Can you speak just a little bit to those and how, as someone who's just starting to define what the landscape may be for me and my institution, how those might help us understand what's coming down latest ?

Speaker 4:

Sure . I'm, I'm happy to start here. And I think the, the key thing to note is that, you know, we talked about how existing policies, procedures, you know, concepts apply. I think what the White House blueprint for an AI bill of rights and the executive order that came out in October are pointing out is that there are some unique risks and challenges associated with the deployment of ai, especially in healthcare, that require us to fold in principles, you know, guiding principles to sort of inform all of our decision making and action. So what are those guiding principles? You know, we, there's a huge emphasis on safety and security in those documents. There's huge emphasis on fairness and avoid avoiding biased outcomes or , um, you know, or , or, you know , kind of , uh, using data that's sort of unrepresented of the populations that you are trying to reach with AI products. You know, these are principles that aren't, you know, there are analogs reflected in some laws. You know, there are , there are anti-discrimination laws that exist at both the federal and state level, but I see the White House , uh, documents is really trying to put those concepts into focus with ai, particularly, and with the executive order, it is worth noting that healthcare specific use of AI is a huge area of focus. We have, we are now , um, you know, we are now a couple months into a wide range of directives with , uh, deadlines , uh, ex you know, from , uh, from now until the end of this year that are all specific to creating a , a kind of a, a more robust policy framework specifically to regulate , uh, health AI use cases. So that this is one for our listeners to, to keep track of for sure.

Speaker 2:

Very good. As, as a former C-I-O-C-T-O and IT compliance officer , uh, we have our primary EMR , you spoke about Epic, and of course there's Cerner and Allscripts, and, you know , even smaller ones like CPSI , everybody's moving towards something with ai, it's, it's almost, if it's not in your conversation, you're not up to speed with what's going on in the industry. But one of the things that I'm curious about, and I've used this analogy in the past, if I ask , Hey, have you been a in , in a jacuzzi or rid a jet ski in the last 10 years? Everybody would likely raise their hand, but then I might follow up and say, are you sure? Because Jet Ski iss a brand and jacuzzi's a brand, maybe you were on a watercraft or a hot tub. So when I hear somebody say, Hey, we're deploying ai, two questions come to mind. One, is it really ai? And two, do you have the rights to use whatever you're using in that? To pass it on to me, is there intellectual property issues? So talk just for a minute about what the risks may be around that in healthcare with these modules that are coming out.

Speaker 4:

Yeah, I think the, you Katie mentioned the how important data is to , uh, to these tools and you , there's a lot of data , um, and content and healthcare that sometimes folks label as proprietary. And when you are taking something that you consider confidential or proprietary and feeding it into , um, and AI model, it is worth understanding how exactly that AI model will process that information and how, what the output might look like, you know, to what extent the output might actually fold those inputs, you know, your proprietary information or content into , um, into sort of, you know, an outcome. On the other side, I think the, the challenge with really understanding how , uh, confidential or proprietary content may be used or transformed through some of these models is exacerbated by the fact that there's not really a , uh, well, actually I should say that until December of last year, there really wasn't an established standard for transparency into how these tools operate. We have the , uh, we have a , a new framework from the office of the national coordinator for health IT from December of last year that applies specifically to those EHRs that you mentioned that , uh, have certified health IT products. And that framework does require them by the end of this year to provide transparency into sort of the design configuration, you know, key features of the , of , um, what the ONC defines as predictive decision support interventions, which is technology that supports decision making used based on algorithms or, or models that are driving relationships from training data. So I think, you know , one of the, some of these IP questions about, you know, what's, how exactly is the output being generated? And to what extent can we draw a one-to-one relationship between the output and proprietary input? Some of that may be illuminated by what we learn about predictive DSI models as , uh, EHRs implement the , uh, new , uh, regulatory requirements from ONC.

Speaker 3:

And if I can add something else, and from like an ownership perspective, it's a little outside of the IP realm , uh, but given my privacy bent , um, a big issue is actually the ownership of the data itself as well. So depending on what type of hat you're wearing with respect to the data, you have to also evaluate whether you can even use it for the particular use case that you wanna use it for.

Speaker 2:

So it sounds to me that the old adage, buyer beware still applies here. So , uh, there , there should be some type of vetting and diligence when someone is saying, we have this new AI module that does either predictive or intervention medicine using ai, or it just makes it more efficient into using it. There , there ought to be some digging in. And what I would say is probably ought to have counsel involved in that to make sure that you're, you're, you're not, you know, getting one thing from somebody but violating the rights of somebody else, be it the patient from a privacy or from another vendor, from an intellectual property. So, yeah , uh, very good. AI's not perfect. We, we've all read and heard the stories, the , the poor attorney who used it to , to actually file a briefing and it made up a case that didn't exist. Uh, I think now they're, they're calling that mirage is that AI can actually make a lie and call it a mirage, which scares a lot of people who says , if , if AI's coming into healthcare are there not also risks around it being biased, reliable fairness. So talk just a minute, 'cause this is something you covered in your article and I found it fascinating. Talk a little bit about how that's impacting in healthcare specifically.

Speaker 4:

Yeah, I'm happy to get us started on this one too. So I think the, this, this really while absolutely health healthcare organizations , uh, digital health companies, you know, undoubtedly have commitments against, you know, perpetuating bias and, you know, commitments to promote fairness. But a broad overarching commitment or principle might not be enough when it comes to making sure that you're deploying AI in a way that actually lives up to that standard. And this is where we go back to the importance of how you formulate the problem you're trying to solve. You know, I think you, when you are thinking about what your task is that you wanna accomplish with way ai, what your target is to predict or influence or streamline, you've also gotta think about what you are encoding about, you know, from society, about society , uh, when as you're structuring, you know , your model and your deployment. So what does that really mean? Bias could look very different, you know, AI solution to AI solution, right? If you are talking about discrimination with algorithms, it's, it's kind of hard to pin down unless you really have a crisp definition of what it looks like for your AI model to work and what it looks like for your AI model to fail. So if you've got a clinical decision support tool that is producing different outcomes for two people with the same , uh, same symptoms , same comorbidities, and the only difference between those people are there , you know, is there ethnicity, you know, there might be something at , at issue there that you wanna investigate and add into how you define bias for that particular AI tool. But the, the, the, the tough answer here is that it isn't , um, you know, everyone gets out angry at the algorithms, but the algorithm is doing exactly what we told it to do. It's just that, you know, the algorithm happens to be mirroring a bias or gaps that may exist in underlying data. Um, you know, the, the actual code itself and what it , what the algorithm algorithm was told is predictive or not predictive. I don't know , Katie , if you have other, other reflections on this one .

Speaker 3:

Um, I think what Alia said is exactly spot on . And I think one of the ways that you have to address that and account for it is in how you create, like, kind of like your SDLC process when you're testing new tools that you're creating to make sure that the outcomes you're getting from it are what you expect it to be. And if not, like what kind of changes can you make to the, the infrastructure of the tool, if you will, like the prompts that you're putting, the, the directives you're giving to the tool, speaking strictly from a generative AI perspective. Um, so having a robust process where you're testing before going live is really critical and being really creative in how you test it as well. Because if you're cre , if you're testing it just as if you are wanting to perform the way you want it to perform, you're not gonna evaluate all the potential risks that your tool could be creating.

Speaker 2:

Excellent. I think that's great advice from both of you. Um, we've got just a couple minutes left and I want to end it this way. Um , all of our podcast listeners, I'm gonna give them an empty backpack. And in that backpack I'm gonna put your article, 'cause I think it's worth having in that backpack, but each of you, Katie , we'll start with you. What else would you give them as they start their journey on , um, managing risk and rewards around the AI and healthcare? What would you put in that backpack? What's the advice that you would give them to put in there?

Speaker 3:

Uh , to go back to the beginning of our conversation, I would say the first is a centralized way to figure out how your organization wants to use ai. So you have that regular stream of input so you can do the evaluation on the front end. Uh, and then I also think really helpful is great partnerships, whether it be with your product or your engineering or your frontline staff, depending on what type of organization you work with. Um, because the better your partnership you have, the more robust your evaluation can be.

Speaker 2:

Okay. Thank you. Aaliyah , same question to you.

Speaker 4:

Yeah, I, I think having that, you know, having a a , a true cross-functional committee to in to review AI use cases to evaluate the risks is critical. I think, you know, having your, your group of people , um, you know, anointed with the responsibility to think about these issues and weigh 'em for the organization is, is number one. Number two is , uh, the training for that group to make sure that they are, that they're not just coming in and speaking their own language with respect to their area of subject matter expertise, but that you upskill the entire group to understand each other's domains of expertise. And it may seem really impractical to require, you know, the , um, software developers to learn the privacy rules and to require the privacy folks to understand the technology deeper. But I really believe that the best work I've seen with AI models and AI technologies , um, ha has that upskilled cross-functional committee that speaks, you know, that speaks a similar language to one another as really a common element.

Speaker 2:

Outstanding. Thank you. Um, this has been fascinating for me. Again, I really enjoyed reading the article. I've enjoyed, you know , uh, talking with you on this. I hope our listeners have as well. Uh , we've been talking with , uh, Aliyah Soloman , uh, with McDermott, will and Emery and Catherine Snow with Hinge Health , um, and their colleague who's absent but not forgotten. Uh, Lisa Monty , uh, with Athena , uh, contributing to this article. It's the first article. If you go out and look at the American Health Lawyer Association's Top 10 podcast, it's number one on the list there. It's healthcare's AI transformation, managing risk and rewards in an evolving landscape. We encourage you to go read it and reach out to these folks if you have following questions. We want to thank you for attending today. We look forward to hearing from you and and on the next podcast.

Speaker 5:

<silence>

Speaker 1:

Thank you for listening. If you enjoyed this episode, be sure to subscribe to a HLA speaking of health law wherever you get your podcasts. To learn more about a HLA and the educational resources available to the health law community, visit American health law.org.