AHLA's Speaking of Health Law

Digital Health and Artificial Intelligence: Latest Trends and Developments

May 03, 2024 | AHLA Podcasts

John Howlett, Senior Vice President and Chief Marketing Officer, Clearwater, speaks with Carolyn V. Metnick, Partner, Sheppard Mullin Richter & Hampton LLP, and Vanessa K. Burrows, Partner, Simpson Thacher & Bartlett LLP, about the current regulatory and legal landscape of artificial intelligence (AI) in health care. They discuss some of the most compelling ways health care organizations are using AI; the legal, ethical, privacy, and data security considerations around AI; enforcement activity and consumer lawsuits; and what health care organizations should be doing when getting ready to deploy AI. Carolyn and Vanessa spoke about this topic at AHLA’s 2024 Advising Providers: Legal Strategies for AMCs, Physicians, and Hospitals, in New Orleans, LA. Sponsored by Clearwater.

To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.


Speaker 1:

Support for AHLA comes from Clearwater. As the healthcare industry's largest pure-play provider of cybersecurity and compliance solutions, Clearwater helps organizations across the healthcare ecosystem move to a more secure, compliant, and resilient state so they can achieve their mission. The company provides a deep pool of experts across a broad range of cybersecurity, privacy, and compliance domains; purpose-built software that enables efficient identification and management of cybersecurity and compliance risks; and a tech-enabled, 24/7/365 security operations center with managed threat detection and response capabilities. For more information, visit clearwatersecurity.com.

Speaker 2:

Good day. This is John Howlett with Clearwater, the healthcare cybersecurity and compliance company. From reducing clinician burnout by making manual tasks less cumbersome to more efficiently matching potential participants in clinical trials, every day it seems we're learning about new ways that artificial intelligence is improving care delivery and management. As AI applications become more and more prevalent in healthcare, it's critical for stakeholders on both the buy side and the sell side to have a strong foundation in the regulations that govern the use of the technology. Joining me for a discussion of the AI landscape and applicable laws are two experts in the field: Vanessa Burrows, a partner with Simpson Thacher & Bartlett, and Carolyn Metnick, a partner with Sheppard Mullin. Vanessa and Carolyn presented on this subject at AHLA's Advising Providers conference earlier this year, and I'm excited to have the opportunity to speak with you both today and discuss the latest developments in this exciting and rapidly evolving arena. Let's dive in. Vanessa, tell me, what are some of the most compelling ways that you've seen healthcare provider organizations using AI?

Speaker 3:

Generally speaking, healthcare providers have indicated that they want to use generative AI in ways that would ease the administrative burdens on physicians. So, for example, using AI for clinical documentation, or to automate tasks such as creating a list of resources for discharged patients. Providers are also looking at drafting letters to appeal denials based on certain criteria, and health and hospital systems are using AI to help track surgeries and emergency room operations. But I think there are three particularly compelling ways that healthcare providers are using AI. First is to improve early diagnoses and prevent a variety of conditions. Where we've seen this in practice is with mammography AI algorithms that are potentially outperforming clinical risk models for predicting breast cancer risk at zero to five years. There was a study published last year in the journal Radiology where the mammography AI algorithms outperformed the Breast Cancer Surveillance Consortium clinical risk model. That BCSC risk model is a standard-of-care model that predicts breast cancer risk; it uses self-reported information from the patient and other factors such as age or family history of breast cancer. The retrospective study used negative screening mammograms, and the five AI algorithms generated a continuous risk score and predicted cancer risk rather than finding cancer. So that's one of the ways AI can be used to improve early diagnoses, or potentially create a better understanding of a cancer risk score. The second way would be to enhance imaging accuracy and efficiency. Radiologists can use AI to provide initial image interpretations and then perform subsequent secondary reads. They can also use AI to detect coronary artery disease or avoid overlooked fractures on x-rays. And AI is also, hopefully, going to help detect early-stage Alzheimer's disease or dementia by analyzing brain scans and identifying changes in brain structure and volume. The third way would be to streamline and expand access to clinical trials. AI can be used to assist with patient recruitment and selection of clinical trial sites, and to identify areas of the country where a disease is more prevalent and there might be more patients who could participate in a clinical trial. AI can also be used to analyze medical records for potential trial opportunities and to parse through existing information about a particular disease and cross-reference it with the clinical trial parameters to generate eligibility criteria for patients.
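To make the idea of a "continuous risk score" concrete, here is a minimal, purely illustrative Python sketch of how such a score might be mapped to follow-up recommendations. The thresholds, labels, and function name are assumptions for illustration only, not anything from the study Vanessa describes, and any real deployment would keep a clinician in the loop.

# Hypothetical sketch: mapping a continuous 0-to-1 AI risk score
# (produced elsewhere by a model, not shown) to a recommendation.
# Thresholds and labels are invented for illustration.

def triage_risk_score(score: float) -> str:
    """Map a continuous cancer risk score to a screening recommendation."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("risk score must be in [0, 1]")
    if score >= 0.80:
        return "refer for diagnostic workup (clinician review required)"
    if score >= 0.40:
        return "shortened screening interval (clinician review required)"
    return "routine screening interval"

print(triage_risk_score(0.91))  # refer for diagnostic workup (...)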

Speaker 2:

Great examples, Vanessa; very exciting innovation that's occurring. Carolyn, what do you see as the biggest legal risks surrounding the use of AI?

Speaker 4:

Oh, gosh, John, well, there are a number; three primary risks come to mind. First, the potential for error, inaccurate, or incomplete responses, and which stakeholder is going to be liable and how you're going to apportion liability. This potential for error is exacerbated by the belief that the AI functions perfectly, or very well, if you will, an assumption, studies have shown, that people make: that the AI works and is accurate, without validation. We know that validation is important, and regulators are starting to require it. It's important to be able to validate the data and have an understanding of what went into the training of the model. So, potential for error and liability. Second is the potential for the misuse of data and the risks to privacy and security. As we know, AI is data driven; it uses vast amounts of data. And so the risks with respect to this kind of technology are magnified compared to other digital health solutions. A presentation at the FTC's PrivacyCon 2024, which I observed, included a segment on AI, and during that segment there was a focus on concerns about LLMs and how these platforms are offering plugin ecosystems to expand the use of third-party applications. While this theoretically enhances functionality, due to ambiguities in the coding language between the two, these services are going out to the public without addressing systemic security and privacy issues. That presentation was based on a study given at PrivacyCon last month. So privacy and security clearly need to be considered when these platforms are created. With any digital solution there are inherent privacy and security risks, but again, these are just magnified given the amount of data that is being used with AI. And then finally, I would add that there are risks of collecting data without awareness and meaningful consent. Transparency is key. I don't know that all organizations are thinking about transparency and making sure that there is documented consent, and that individuals who are perhaps contributing to the training of AI, or for whom AI is being used to provide a result, may not be aware of and may not have consented to that experience, which could come about in a number of different ways. So I think transparency, awareness, and consent are very important considerations, and not having transparency, not obtaining consent, would really put an organization at risk. So those are the three that primarily come to mind, John, but we could spend a whole hour on the legal risks.

Speaker 2:

Absolutely, a lot of ground to cover there, but I appreciate your thoughts. I want to talk a bit about ethics as well; I know that's another key issue surrounding AI. So I'd like to get your perspective on what ethical considerations must be taken into account when using AI in healthcare. What are your thoughts on that, Vanessa?

Speaker 3:

Sure, John. I'd say, obviously, AI is introducing new risks, as Carolyn mentioned, because of the autonomous nature of the technology, and in some cases it's being used to perform tasks, inform decisions, automate decisions, and make predictions. The ability to easily scale AI-enabled decision making can introduce new risks, and these are obviously most significant in sensitive areas. If a provider is using AI to augment and automate decision making and processes in a clinical trial, or for medical decision making, then the patient might be at risk if the AI hasn't been properly validated. What we've seen is that providers and AI companies are examining ways to use AI responsibly, and they're proposing transparent and public principles to which they will adhere. Twenty-eight healthcare provider and payer organizations have made voluntary commitments to develop AI solutions that optimize healthcare delivery and payment by advancing health equity, expanding access, making healthcare more affordable, and improving outcomes and patient experiences. And they are working to ensure that these outcomes are aligned with the FAVES principles: fair, appropriate, valid, effective, and safe. There's also the deployment of trust mechanisms and ensuring that a risk management framework has been put in place. A little bit later, when we talk about health equity, I can get into some of the National Institute of Standards and Technology publications and some of their recommendations about responsible development of AI and ensuring that biases are not posing harms to humans.

Speaker 2:

Great, thanks. Let's talk further about health equity now, if you don't mind. For organizations that are implementing AI, how can they best address issues of health equity, from your perspective, Vanessa?

Speaker 3:

Sure. I think the first thing is to be aware that AI can amplify biased and discriminatory decision making. AI could perform reliably, or it could malfunction; it can generate insights that are difficult to interpret or explain; and there's the potential to cause direct harm to individuals and groups. So those who are implementing AI need to consider certain standards or recommendations. For example, NIST has put out Special Publication 1270, which is aimed at identifying and managing biases in AI. This publication discusses systemic biases, for example, a historical bias or a societal or institutional bias. Being aware of these publications, and using the mechanisms they've set forth, I think will go a long way toward ensuring that health equity is considered when AI is implemented and that biases and errors are not posing harms to humans. NIST recommends that providers and other entities use a broad lens to look for the sources of their biases, beyond just the processes and data used to train AI. So in addition to systemic biases, we have human biases, like groupthink, and then we have statistical and computational biases, such as errors that occur if a sample is not representative. If you have a sample that doesn't reflect the true distribution of the population, and it's being used as a training dataset or for the development of AI, you're more likely to have a biased outcome because of that dataset that's been used to train the AI. The NIST publication also points out that we need to recognize social biases, such as inequities that can result in suboptimal outcomes for certain groups. There's a really great Health Affairs article that describes this in detail, where people were trying to understand how certain outcomes could be improved, but the actual outcomes that were recommended were the opposite of what would've resulted in improvements. So, as you're using AI to try to address health equity, make sure you're considering biases in all the forms that are going into the AI.
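To illustrate the statistical and computational bias Vanessa describes, here is a tiny, self-contained Python sketch. All numbers are invented; the point is only to show how a training sample that under-represents one group yields a model whose estimates are systematically wrong for that group.

# Hypothetical sketch of bias from an unrepresentative sample.
# Groups A and B each make up half the real population but have
# different baseline risks; the training sample nearly omits group B.

from statistics import mean

group_a = [0.20] * 50   # baseline risk 0.20
group_b = [0.60] * 50   # baseline risk 0.60

training_sample = group_a + group_b[:5]   # group B under-represented

model_estimate = mean(training_sample)    # stand-in for a trained model
print(f"learned baseline risk:       {model_estimate:.2f}")  # ~0.24
print(f"true baseline risk, group B: {mean(group_b):.2f}")   # 0.60

# The model systematically understates risk for group B - the kind of
# suboptimal outcome for an under-represented group that NIST SP 1270 flags.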

Speaker 2:

Great, thank you, Vanessa. Carolyn, you touched on data privacy and security issues a bit earlier. I'd like to come back to that topic and ask you what measures must be taken, specifically privacy and data security measures, when evaluating, creating, using, or training AI solutions. What are your thoughts there?

Speaker 4:

Well, there's a lot there. First, organizations need to comply with the applicable privacy laws, and figuring out which privacy laws apply can be quite an undertaking, right? If we are dealing with PHI, HIPAA will be implicated. If we're dealing with EU residents, we may have to deal with GDPR. And of course, in the last year or two, we've seen so many states roll out different privacy laws; the privacy landscape is constantly evolving and getting more complex. So complying with privacy laws and understanding which laws are actually applicable, I would say, is one thing. Secondly, you need to have a robust security infrastructure and make sure that you're also complying with any applicable security laws. Again, as I previously stated, digital health solutions, including AI, involve massive amounts of data, and therefore there are so many risks relating to the privacy and security of the information that's collected. And AI in particular does not easily reconcile with longstanding data protection principles that are incorporated into certain laws, including GDPR. Now we have the new EU AI Act, which providers and other organizations and businesses need to consider if they are going to be operating, or using an AI product, in the EU. And I think we're going to see a lot more to come in this area. We had the President's executive order that came out in October of last year, and as part of that order, he directed certain federal agencies to implement various directives, many of which involve healthcare. For example, NIST was directed to establish guidelines and best practices within 270 days of the order, and I think those guidelines and best practices will be informative and worth taking a look at for those who are going to be operating in this space. HHS was also required to develop an AI task force tasked with developing a plan that includes policies, frameworks, and potential regulatory action on the deployment of AI and AI-enabled technology. As part of that, the task force is going to be looking at the incorporation of safety, privacy, and security standards to protect PII. So there's a lot here to consider and stay on top of. But privacy and security are really important areas, and just like with any digital health solution, it's important to have a very tight and robust security infrastructure.

Speaker 2:

Absolutely. Thanks for your thoughts. Curious, which entities have you found to be the most active in enforcing AI regulations to date?

Speaker 4:

Sure. And I guess I would say there are few regulations, fortunately or unfortunately, at this stage. The legal framework here in the US is in its infancy, although it is starting to take off, in the sense that there's just a lot to keep up with. There are some states rolling out legislation. For example, as you both probably know, Utah's governor signed the AI Policy Act into law last month, and it requires the disclosure of AI use to consumers in certain situations. Interestingly, it requires that physicians and other licensed professionals clearly disclose the use of AI in advance. So it's calling for two different behaviors depending on your category: if you're a licensed professional, you must disclose the use of AI in advance, whereas for another category of professionals, you only have to disclose upon inquiry. It's kind of an interesting approach. Other states are starting to propose legislation, and some have legislation on very specific areas that don't tie back nicely to health law. But in terms of active agencies, I would say the FTC has been super active. It launched an investigation into OpenAI, as we're all aware, looking into whether OpenAI engaged in unfair or deceptive practices. The FTC enforces Section 5 of the FTC Act, which prohibits unfair or deceptive trade practices, and it has brought actions under Section 5 for violations of consumer privacy rights, for misleading consumers, and for failing to protect their information. It has also been very active in enforcing Section 5 against companies with inadequate security controls, as well as those who've misrepresented their privacy practices. So there's a lot here in the AI space. The FTC has also been following and monitoring AI for years. It has come out and said that AI that is unfair violates the FTC Act, and a practice is unfair if it causes more harm than good. The FTC has also suggested that AI that is biased or discriminatory would be unfair, and it's noted that certain generative tools that steer emotions and beliefs can pose risks to people, and people may be susceptible to persuasion. So this has been an area of great interest to the FTC, and I think we'll continue to see more; that agency is super active. Providers in the healthcare space should be aware of Section 5 of the FTC Act and that it could be implicated, and they should take steps to protect privacy and security and avoid misleading consumers and patients. Which goes back to my prior comment about transparency being key, and awareness: stakeholders should be aware that regulators like the FTC may hold AI developers, manufacturers, and providers responsible for making unsubstantiated claims. Vanessa, I don't know if you have thoughts about the FDA or other agencies?

Speaker 3:

Yeah, I think you hit the nail on the head, Carolyn, with the FTC being the most active entity in terms of enforcing AI regulations. But I would also flag FDA, because there are a number of medical devices that the FDA has approved or cleared that have AI capabilities embedded in them. FDA determines whether a product is a medical device based on the product's claims and how it's indicated for use, and FDA also conducts inspections of medical device manufacturers, including software-as-a-medical-device manufacturers. Some of these manufacturers might make claims that are exaggerated, false, or misleading, so FDA would have authority under the Federal Food, Drug, and Cosmetic Act to take action against those manufacturers or other entities making those claims. And FDA has issued warning letters to certain medical device manufacturers that have claimed to have AI code or capabilities in their software. For example, there was a warning letter FDA issued to a manufacturer that had a fairly typical medical device with electrodes as well as ear clips, but also desktop software. The device itself was intended to collect, analyze, and interpret data to aid in the diagnosis of certain neurological conditions or potential neurological conditions. FDA said, of course, this is a device. So if you don't have clearance or approval for this device, and you're making AI claims, and you have these particular indications for use, such as diagnosis of a medical condition, then you need to fit within FDA's framework. You can't just introduce this device into interstate commerce without FDA reviewing the device and clearing it or approving it for distribution into interstate commerce. So FDA issued a warning letter to the company and cited a number of medical device regulations as violations, as well as noting that there were no quality audits; the quality system regulations that applied to medical devices were not being followed. And as a result of this failure to validate the AI processes and the software, and to maintain certain device history records, the device itself would be misbranded under FDA's regulatory scheme. So FDA, of course, has the ability to take actions against medical devices that are making particular claims about the diagnosis or treatment of medical conditions.

Speaker 2:

Thank you both. Curious, have you seen any particular trends emerging with regard to enforcement actions that you can comment on?

Speaker 3:

Sure, I'll take this one. There have been a number of FTC actions that Carolyn already highlighted, but I think we've also seen a number of news reports in major media outlets about inquiries from DOJ, FTC, and others into the use of AI in electronic medical record systems and elsewhere. For example, there have been a number of reported civil investigative demands asking questions about algorithms and prompts in electronic medical record systems. We're not really sure what DOJ is exploring at this point in time, but companies are reportedly receiving subpoenas about generative AI's role in facilitating federal Anti-Kickback Statute violations or federal False Claims Act violations. The AI might be prompting notes or care that's medically unnecessary, or notes that document things that didn't actually occur between a physician and a patient. If care is provided in excess of what would otherwise have been rendered, or what would otherwise have been medically necessary, that could result in false claims. So I think we'll see a number of inquiries in the future, and maybe potential settlements for criminal and civil False Claims Act or Anti-Kickback Statute violations, if the AI is prompting physicians to do things that don't follow current medical standards of care, or if the AI is resulting in additional prescriptions that would not otherwise have been issued and federal healthcare programs such as Medicare, Medicaid, or TRICARE are paying for those additional prescriptions, for example.

Speaker 2:

So what about consumer lawsuits? Anything developing in that area that healthcare organizations should be factoring in or thinking about with AI?

Speaker 3:

Yes, there have been a number of allegations, and I think we're in the early stages of some of these consumer lawsuits. There have been allegations that healthcare facilities are using algorithms to guide staffing decisions. For any healthcare provider that's using AI to make certain staffing decisions, if those decisions are resulting in understaffing of a particular facility, or in care that's not meeting certain standards, there should be parameters put around any AI used, to ensure adequate staffing at that facility. There have also been allegations that staff who complained about algorithms used for staffing or for medical care were the subject of retaliation or were terminated. So there's a potential for whistleblowers at companies if they're using AI and don't have the appropriate parameters around it, or if AI is resulting in decision making that doesn't have proper human oversight. I think we'll see a number of qui tam suits and a number of False Claims Act actions. Now, whether the allegations will be proven true, or whether there is actually adequate staffing, I think will be determined in court. But there are a number of allegations as well about denials of medically necessary care as a result of algorithms powered by AI, or allegations that physician decision making is being overridden by AI. Again, it's unclear whether any of these allegations have been proven at this point, but I think we'll see a number of actions in the future there. And healthcare organizations should be aware of the potential for these types of lawsuits to be brought if they are using AI in decision making in a medical capacity.

Speaker 2:

Thanks, Vanessa. As a final topic, Carolyn, I'd like to hear your thoughts on what provider organizations should be doing to get ready to deploy AI.

Speaker 4:

Well, there are a number of things. Healthcare providers considering the deployment of AI should think about establishing an AI compliance or AI governance committee and evaluating which stakeholders are the appropriate participants to serve on that committee. There should be some thought given to that, and some structure, and there needs to be expert involvement; these people need to be at the table, overseeing this structure within the organization. Providers should also perform an inventory of existing AI tools. I can't tell you how many times I've heard stories about clients not being aware of what's being used within their organization. They may think there's no AI being used, and then they find out that the doctors are using an AI scribe, or that different departments within the organization have procured tools; it's like the left hand doesn't know what the right hand's doing. So it's critical to do an inventory across the organization to really get a grasp on what is currently being used. It can be surprising and eye-opening for many clients; again, they're just not all aware of what's being used in their organization. And then they can also figure out: do we have needs where we can start to deploy, or even look at, AI? But you need to have an infrastructure in place in order to do this. It's important that any AI is vetted by legal and that there's appropriate documentation, of course, and consents. Without an infrastructure in place, it's easy for these things to just appear without the necessary guardrails, if you will, and without the documentation. So an inventory throughout the organization will help the business understand what is being deployed, and manage it going forward; a minimal sketch of such an inventory record follows below. Provider organizations should establish policies and procedures around the consideration, exploration, and deployment of AI, as well as oversight tools, standards of conduct, rules around procurement, and documentation. There should be a process for acquiring AI and deploying it that includes, again, review by legal and procurement and many different stakeholders; the organization needs to figure out who the appropriate stakeholders are, and there probably needs to be representation from those departments on the AI committee. Security, I think, would also be critical, and IT, depending. Providers should also audit and monitor the use of AI, and there should be training and education. And then finally, organizations need to develop clear and transparent documentation. This goes back to what I was just saying, and what I've said consistently throughout this program: documentation is key, and transparency and awareness are very important. So disclosures and consents, where appropriate, really go a long way toward protecting the organization.
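As a concrete illustration of the inventory Carolyn recommends, here is a hypothetical Python sketch of the kind of record an organization-wide AI inventory might track. The fields, review gates, and example values are assumptions for illustration, not a prescribed schema.

# Hypothetical sketch of one entry in an organization-wide AI inventory.
# Field names and review gates are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    name: str                    # e.g., an ambient documentation scribe
    department: str              # who procured / uses it
    use_case: str                # what it is used for
    handles_phi: bool            # flags the need for HIPAA review
    vetted_by_legal: bool = False
    vetted_by_security: bool = False
    patient_consent_documented: bool = False
    notes: list = field(default_factory=list)

    def deployment_gaps(self):
        """List governance steps still missing before deployment."""
        gaps = []
        if not self.vetted_by_legal:
            gaps.append("legal review")
        if not self.vetted_by_security:
            gaps.append("security review")
        if self.handles_phi and not self.patient_consent_documented:
            gaps.append("documented patient consent")
        return gaps

tool = AIToolRecord("ambient scribe", "cardiology",
                    "draft clinical notes", handles_phi=True)
print(tool.deployment_gaps())
# ['legal review', 'security review', 'documented patient consent']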

Speaker 2:

Thank you, Carolyn and Vanessa, for the excellent insights you both shared. And thanks to our audience for listening. I hope you have a great day.

Speaker 4:

Thanks, John .


Speaker 1:

Thank you for listening. If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.
