AHLA's Speaking of Health Law

Health Care Data Governance: How to Build a Culture of Compliance

May 23, 2023 AHLA Podcasts

Hal Porter, Director of Consulting Services, Health IT and Digital Health, Clearwater, speaks with Nia M. Jenkins, Vice President, Deputy General Counsel & Head of Commercial Legal, Surescripts LLC, and Alaap B. Shah, Partner, Epstein Becker & Green PC, about how health care organizations can build a culture of compliance around data governance. They discuss some of the latest trends and developments in health care data privacy, best practices for creating a data governance program, and considerations for vendor management and artificial intelligence. Alaap and Nia spoke on an AHLA webinar last year related to this topic. From AHLA’s Health Information and Technology Practice Group. Sponsored by Clearwater.

To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.

Speaker 1:

Support for AHLA comes from Clearwater, the leading provider of enterprise cyber risk management and HIPAA compliance software and services for healthcare organizations, including health systems, physician groups, and health IT companies. Their solutions include their proprietary software-as-a-service-based platform, IRM Pro, which helps organizations manage cyber risk and HIPAA compliance across the enterprise, and advisory support from their deep team of information security experts. For more information, visit clearwatercompliance.com.

Speaker 2:

Hello and welcome everyone to the podcast. My name is Hal Porter, and I'm the Director of Consulting Services for the Health IT and Digital Health vertical segment at Clearwater Security. Clearwater Security's mission is a commitment to customer success: leading with accountability, integrity, and collaboration; excellence in all that we do; advancing colleague success; and respect and transparency. Joining me today for the podcast, I'm happy to introduce Nia Jenkins, who is Vice President, Deputy General Counsel, and Head of Commercial Legal at Surescripts LLC, and also Alaap Shah, who is a partner with Epstein Becker & Green. Nia, if you would introduce yourself, and then Alaap.

Speaker 3:

Yes. So, once again, Nia Jenkins. I have been with Surescripts now for almost two years. Prior to my role as Deputy GC and Head of Commercial Legal at Surescripts, I was with UnitedHealth Group, specifically Optum. And prior to that, I worked at law firms and other health tech corporations, always focusing primarily on transactions, deal work, strategic partnerships, M&A deals, as well as procurement.

Speaker 2:

Thank you very much. Alaap?

Speaker 4:

Yeah, hi Hal. Hi Nia. Thanks for having me on the podcast. So by way of background, I'm a partner in the healthcare and life sciences group at Epstein Becker & Green in their DC office, and I co-chair their privacy, cybersecurity, and data asset management team. Most of my day is spent counseling clients on proactive and reactive solutions to help them navigate the increasingly complicated privacy waters that we're in, as well as manage risk around cybersecurity with respect to their data and their technology. I also do a fair bit of transactional work, just like Nia, with a lot of due diligence work on the buying and selling of companies, as well as contracting around complex data sharing arrangements. So happy to be here and happy to dig into any of those issues.

Speaker 2:

Well, thank you both very much for joining us today. So last year, on May 18th of 2022, you both participated in the Health Data Considerations for Digital Health and Technology Transactions webinar, moderated at the time by Heather Deixler, partner at Latham & Watkins. You discussed several topics, including the legal framework, information blocking, vendor management, data governance, artificial intelligence, use of data, and data ownership, as well as several other areas. Over the last year since that webinar, we've seen both the legal and threat landscapes continue to change, mature, and evolve. So I'm really hoping to tap into your experience to see whether there are any noteworthy changes or shifts you've seen since that webinar, and whether there's anything you could identify or elaborate on specifically to get us started. You know, we've seen a lot of momentum starting to gain and build around AI, both good and bad, in several different areas, so I wanted to get your thoughts in that area.

Speaker 4:

The one thing that's ever constant in this space is that things change all the time. So the state of the world when we last spoke on this topic has been changing pretty rapidly. In particular, from my viewpoint, one thing that's happened is that the appetite for data has increased and increased across the ecosystem. Not only are pharma companies continuing to look for data assets to be able to use, to innovate and create efficiencies in what they do, but we have lots of other players in this space that are interested in getting access to data. And we increasingly have data sources coming into play with ideas to monetize their data in various ways. But that's not, you know, unfettered. I think there are lots of changes happening with respect to the privacy landscape. For example, since we last spoke, there are now six comprehensive state privacy laws that have come into view, some of which are effective now, some of which will come into effect later this year, and some that will come into effect in subsequent years. Among those we have California, which led the way, followed by Virginia, Connecticut, Colorado, and Utah, and most recently Washington State passed a consumer privacy law focused on healthcare data, which is the first one of its kind. So we're seeing increasing focus on consumer protection in the privacy space, at the state level in particular, notably with a view to protecting health data as a very sensitive class of data. I think there are other changes too. The Office of the National Coordinator for Health IT at HHS has been proposing new rules to address interoperability requirements. We've seen the landmark Supreme Court holding in the Dobbs case around reproductive health make some waves with respect to what can be done with reproductive health data, and there are states on both sides of that equation. And then there's always this ever-present viral thing happening: artificial intelligence and things like ChatGPT are popping up. That's just the tip of the iceberg, but it's really calling into question how we should be leveraging data and sharing data, especially in the healthcare space, to power these really innovative technologies. It keeps me on my toes, keeps me really busy at work, but keeps me challenged.

Speaker 3:

Yeah, and I completely agree with you, Alaap. I would say, you know, the FTC certainly has been playing a prominent role in terms of some recent fines. Also TEFCA. Very much in line with all of the things you just mentioned, that's become much more of a focal point, especially with the more recent announcement of those who are now QHINs, and of course the application process is still running. So those are some of the latest things happening that I know many digital health companies are looking at. And more specifically, all of those state-based regulations as well. That's a constantly evolving area, where many of us who practice within these particular states need to be thoughtful about what is occurring and how it really applies to some of your products and services, as you're building them or as they already exist.

Speaker 2:

Excellent. So, as we go through this landscape and the way it's changing, there are potentially FDA and CMS involved in several different aspects, in addition to HIPAA. Can you talk a little about some of the overlap there, or some of the considerations for governance that organizations in this space would want to weigh as they try to navigate these waters?

Speaker 4:

Sure. So HIPAA is the bread and butter of healthcare privacy and security. It's been around for a long time and it's pretty mature, although there are some modifications being proposed and in the wings; I don't think we'll see anything issued from the Office for Civil Rights until 2024, based on some recent comments they made at the HIMSS event. But what I'd like to focus on is that there are a lot of different things happening at the federal level and at the state level. The reality is that navigating all of that often overlapping law, sometimes inconsistent law, and sometimes law that's just lacking, calls into focus the need to really build a culture of data governance, to be able to intentionally navigate this ever-changing climate. To be able to effectively monetize data, you need to first know what your data is. You need to do some data mapping, and your governance group needs to spearhead those efforts. You then need a clear viewpoint about what kinds of data uses you are willing to support with that data. Does it align with your mission? Are the partners coming to you for data aligned with that mission? And you want to be able to scale based on that formality and that intentional way of managing data, because if you're doing it haphazardly, you're really not going to unlock the maximum utility the data can have, or the maximal value it can have in the market. And where there's law, great, let's leverage that law, whether it's HIPAA as our baseline starting point for how to share data or, as data moves out of the HIPAA-regulated space, state law like these consumer privacy laws we were talking about, or just general consumer protection sentiment from the FTC or otherwise. And sometimes, when none of that's really clear, the data governance body needs to look to ethics about how we should be dealing with data, and to fair information principles with respect to how we can be good data stewards. Those are the things that would guide us as well. So again, if you want to unlock data in your organization, you need to be thinking about creating a culture of data governance and then formalizing a data governance body to handle these kinds of issues. That could be a multidisciplinary group, including legal and compliance folks, business folks, and others as well. So I think the takeaway today is that if you're really looking to get into the data game, you should be thinking about data governance holistically and maturing that function.
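
To make the data-mapping idea concrete, here is a minimal sketch of what one entry in a governance team's data inventory might look like in code. The structure, field names, and uses below are hypothetical, not any particular product or standard; the point is simply that each dataset carries its permitted uses with it, so a proposed use can be checked before it happens.

```python
from dataclasses import dataclass, field

# Hypothetical data-map entry: every dataset the organization holds is
# cataloged with its source, applicable regimes, and the uses that the
# underlying agreements and laws actually permit.
@dataclass
class DataAsset:
    name: str
    source: str                        # where the data came from
    regulated_under: list[str]         # e.g., ["HIPAA"] or ["state privacy law"]
    permitted_uses: set[str] = field(default_factory=set)

def can_use(asset: DataAsset, proposed_use: str) -> bool:
    """Gate a proposed use against the rights recorded in the data map."""
    return proposed_use in asset.permitted_uses

claims = DataAsset(
    name="claims_2023",
    source="health plan partner",
    regulated_under=["HIPAA"],
    permitted_uses={"treatment", "payment", "operations"},
)

# A new analytics product wants this data for model training; the map says
# no, so the request goes to the governance body instead of being self-approved.
print(can_use(claims, "model_training"))  # False -> escalate for review
```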

Speaker 3:

And I completely agree in terms of thinking about setting up a data governance board, or group, that has cross-functional representation. Being in an in-house role, I know that's not necessarily something some organizations have top of mind, but it is so important to what Alaap was just mentioning. You have to think about your use limitations. So as you set up your data mapping, you want to be thoughtful about what rights you have from your data provider, and then how those map throughout. And then when you get requests for data, you know, you have information blocking and other things coming out. Even as you're thinking about standing up new products, you want to be thoughtful about how you can utilize the data you have. You may have mapped what the current structure is, but what about future use cases? What about expansions of the current products? And you want to be consistent in how you think about all of that, which is why having a board is so important, and really necessary to be methodical and consistent in your approach. The other piece I would mention and highlight is that information blocking concern. It's obviously an ever-evolving area of law, and it's new for many organizations to have to think about, but you really want to be thoughtful about similarly situated organizations and how you're treating them. So really sitting down and starting to do that hard work, and really training up and informing your board so that they can be strong data stewards and be very thoughtful about what the guardrails are, what the framework is, and what is a go and what is not, is imperative to having some level of nimbleness and, I will say, consistency across your organization.

Speaker 4:

Yeah, Nia, you raised some really great points. You know, once you do your data mapping and associate your data rights with that map, and you start to really look at how to unlock that data from a use case perspective that aligns with your mission, there are definitely other issues you're going to need to face too, in terms of whether the data you want to leverage is identified. There's a set of rules and requirements with respect to that, and risk associated with that, versus de-identified data. What are your processes to get data from an identified state to a de-identified state? And then, what are the data security controls you want to have in place? What kind of data protection do you want to have in place, from a technical perspective as well as from a contractual risk management perspective? And also, what kind of controls do you want around what happens with that data, and who owns what happens with that data? There are IP rights with respect to the data itself. There are IP rights that might be generated around derivative products created from the data. There could be data analytics created based on the data. So there are a lot of considerations that go even further than just the, hey, do we have data and can we use it, that need to be considered as part of this overall governance program.
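
As one concrete example of the identified-to-de-identified process mentioned above, HIPAA's Safe Harbor method requires removing eighteen categories of identifiers. The sketch below is illustrative only and assumes a simple flat record: it strips a handful of those fields by name and generalizes a date to year, whereas a real pipeline would have to cover all eighteen categories, free text included.

```python
# Illustrative only: a tiny slice of HIPAA Safe Harbor de-identification.
# A production pipeline must handle all 18 identifier categories, nested
# structures, and identifiers buried in free-text notes.
SAFE_HARBOR_FIELDS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "ip_address", "full_face_photo",
}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and generalize dates to year only."""
    clean = {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}
    if "birth_date" in clean:                  # dates must be reduced to year
        clean["birth_year"] = clean.pop("birth_date")[:4]
    return clean

record = {"name": "Jane Doe", "birth_date": "1980-06-01", "dx_code": "E11.9"}
print(deidentify(record))   # {'dx_code': 'E11.9', 'birth_year': '1980'}
```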

Speaker 3:

Yeah, I completely agree with that as well. And thank you for pointing out the piece about making sure your data is secure and being thoughtful about that. I would say in-house, something to always keep top of mind is your contracts, especially with your data providers. They oftentimes dictate certain obligations you have as the holder of that data provider's data. Some of these are things you're going to have to be thoughtful about, and you may have to operate at the highest, most protective level required by the agreements and contractual obligations you have. The other side of that is flowing through those obligations: being aware of what they are, but also making sure that those who gain access to the data you hold are being responsible on the other end of that.

Speaker 4:

Nia raises some really great points, and it really starts to open this conversation about vendor management, because a lot of data use and unlocking of data is not done by the entity that's the data source. In fact, we have lots of intermediaries in the ecosystem that are looking to collect data, aggregate data, manage that data, and ultimately unlock it and create value. But that's another link in the chain of custody, and we want to make sure that as data flows through that chain, we are being intentional and thoughtful about managing risk along that chain, and allocating risk along that chain as well. So let's take an example, Nia: Surescripts is the data source and has a downstream vendor that's helping Surescripts unlock that data. We want to make sure that Surescripts is on the protected side with respect to what might happen with that data downstream with the vendor; has rights with respect to control of that information; has oversight over integration with systems as data flows from one system to another; and has remedies available to it to the extent the vendor doesn't operate consistently with expectations, or runs into some problem like a data breach. So there are a lot of considerations that go into contracting and managing risk for your organization, depending on where you sit in the ecosystem.

Speaker 2:

Absolutely. And one of the things we see through that, too, is that there's more and more data sharing happening between regulated and unregulated organizations. So there are potentially going to be some gaps there, based on the business model each is following and the laws and regulations they're required to follow as either a regulated or unregulated organization. Any thoughts on some considerations in that scenario?

Speaker 4:

The first question to really ask yourself as any organization is: what laws apply to me? And it isn't always clear. Even if you think one set of laws, for example HIPAA, applies to you because you're a healthcare provider, which is a very classic scenario, increasingly there's lots of data being collected by HIPAA-regulated entities that isn't actually protected health information. And therefore the question becomes: what is regulating that information? It may be a state law, for example, or it may be no law at all, and just a consumer protection issue. So we have to get savvy as organizations, whether we're classically regulated by mature laws like HIPAA or we're innovative companies operating in the gray space, and really take a hard look at what laws apply to us, and then maybe make decisions about how our business model should operate depending on what those compliance burdens are. But I think one of the things that really gets complicated in these data transactions is when you have a heavily regulated entity, one that knows how it's regulated and has built compliance structures over long periods of time, trying to do business with companies that don't really have a regulatory regime to operate within. Then you have to bring these two parties together to negotiate something that feels good to both sides. And I think that's where Nia's and my work comes in: to really start to understand and appreciate the applicable regulatory regimes, figure out how to navigate the conversation, and negotiate something that works for both parties, to give both parties the assurances they need to actually have that data flow. And it's really about trust, ultimately. So it may be that one party is contracting with another that is not really that regulated, in which case you start to look to the private market to fill that assurance need, like getting third-party certifications and completed audits and evidence of that, or cybersecurity insurance, or other similar things you can point to and say, okay, I'm comfortable dealing with this entity and sharing my data with them, because they've gone through some scrutiny. It may not be regulatory in nature, but it's there, and they've done a good job of showcasing that they're being thoughtful and intentional with respect to their data stewardship in that chain.

Speaker 3:

And I would just add to what you shared: a vendor questionnaire. From the in-house perspective, that's really where you want to think through how you vet your vendors, because you're right, they may not necessarily have the same regulatory obligations, and you want to be thoughtful about who you're sharing your data with. Is this a new company? What is their structure? Where is this data going? And to your point, are there any third-party attestations and other things they can provide? All of that can help give you more comfort. But also, to the extent that your data provider has an audit provision, you want to be thoughtful about your diligence and what you are doing, and be able to set up and demonstrate a thoughtful vendor assessment structure so that you have that confidence. Because when you talk about risks, one of the biggest risks for many organizations is reputational harm, and that comes, unfortunately, from not being a good data steward. So you really want to be as diligent as you possibly can, no matter how excited some of our tech teams get about some of these companies that are standing up pretty quickly. It's great, it's exciting, you want to be in the boat with them, but at the same time you want to be thoughtful. There is a lot more risk with newer companies that are not as established, especially if they are having to take time to get up to speed with some regulatory requirements.
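
One lightweight way to operationalize that kind of vendor questionnaire is to score the answers and route low scorers to deeper diligence before any data is shared. The questions, weights, and threshold below are entirely hypothetical; this is a sketch of the mechanic, not a vetted rubric.

```python
# Hypothetical vendor-vetting sketch: weight questionnaire answers and
# route low scorers to enhanced diligence before any data is shared.
QUESTIONS = {
    "soc2_or_hitrust_attestation": 3,   # weights are illustrative only
    "cyber_insurance_in_force": 2,
    "breach_notification_sla": 2,
    "subcontractor_flow_down_terms": 2,
    "data_deletion_on_termination": 1,
}

def vendor_risk_score(answers: dict[str, bool]) -> int:
    """Sum the weights of every control the vendor attests to having."""
    return sum(w for q, w in QUESTIONS.items() if answers.get(q))

answers = {"soc2_or_hitrust_attestation": True, "cyber_insurance_in_force": True}
score = vendor_risk_score(answers)   # 5 out of a possible 10
print("approve" if score >= 8 else "escalate for enhanced diligence")
```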

Speaker 2:

So what could be some potential downstream consequences of errors, or of incorrect, inaccurate, or biased data that gets introduced, from a vendor management perspective? As you mentioned, there are a lot of organizations in this space, in digital health and health IT, that are early-stage companies. They're newer to the program, and they may not have a lot of governance around how they're structuring their data and how they're ingesting and putting out data. So there may be the potential for some bias getting introduced into their results that was unintended or not expected. Within a vendor management program, is there a way you would recommend, or some considerations an organization could take, to try to either identify or flag those potential issues?

Speaker 3:

Yeah, you know, I'd definitely be interested in hearing Alaap's experience here as well, because he would have a more expanded view across different companies; I, of course, am speaking from my own exposure. So when I think about this, when you're getting data in, I think many are very aware that the output is only as good as the data coming in. In essence, if the data itself, generated as a healthcare organization operates, has some level of bias in it, that bias is going to flow through to what you are building, whether it's informing a product or you're utilizing it to inform AI, machine learning, et cetera. When that information is coming in, you have to be thoughtful about it. The best way many have talked about to minimize the biases that exist is recognizing them and calling them out. What that allows is that you can actually go in and start to train the model to operate in a more unbiased fashion. So you really have to account for it. And I think many of us, in general, probably don't necessarily want to run toward the bias, but when you are dealing with AI, you have to be curious, and you have to be willing to say, you know what, there are some flaws here, and how do we correct for them? Oftentimes, there are a lot of different articles that talk about having a more diverse team working with the data, because that helps others raise questions, whether it's about age or gender or what have you, so that people start saying, is that really true, and pushing back a bit. That really helps open the door to better understanding what underlies this data, how it might be applied, and whether, if you apply it without correcting for some of these biases, you will continue to perpetuate said bias. So those are definitely some of my experiences, and articles and other things I've read, that I think really help to combat what you're referring to, Hal.
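
A simple first step toward the "recognize it, call it out" approach described here is to compare the demographic mix of a training set against the population the model will serve. The sketch below assumes hypothetical record fields and target shares; it only surfaces representation gaps for a governance team to review, and says nothing about the harder work of correcting them.

```python
from collections import Counter

# Illustrative bias check: compare the demographic mix of a training set
# against the expected share in the served population. Field names and
# expected shares are hypothetical.
def representation_gaps(records, field, expected):
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected_share in expected.items():
        actual = counts.get(group, 0) / total
        gaps[group] = actual - expected_share   # negative = underrepresented
    return gaps

train = [{"sex": "F"}] * 300 + [{"sex": "M"}] * 700
print(representation_gaps(train, "sex", {"F": 0.5, "M": 0.5}))
# {'F': -0.2, 'M': 0.2} -> women underrepresented; flag before training
```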

Speaker 4:

Yeah, I think those are great points, Nia. And just to build on that a little bit, it's the problem of garbage in, garbage out in an AI model. If that's really the focus area the world is going toward, especially in healthcare, we really want to make sure that we're thoughtful about having a data governance program that manages that risk. And there are a lot of different risks that can occur. Let's take, for example, something that hits home: we're all patients to some degree. We go to the doctor, we go to the hospital; there are lots of different types of care we may seek in our lifetime. But increasingly, in the digital world we live in, and the electronic health record world that doctors operate in, there's artificial intelligence built into workflows, informing doctors about what decisions to make and recommending things, and there are clinical decision support algorithms. Any one of those things could go awry if the data used to develop and train that model was inaccurate or biased in some way, and that could lead to harm to any one of us. So that's really at the core of the risk calculus when it comes to the healthcare ecosystem. But with that aside, there are lots of other issues that can impact a company. For example, let's say you want to get your really innovative tool, a digital health app, onto the market, and you have to go through FDA clearance because your device is designated as software as a medical device. And let's just say you used inaccurate or erroneous data in creating it. It may be that you don't get approved, because the FDA says it doesn't work properly, or it's unsafe or ineffective; or perhaps it gets approved but in fact creates some sort of harm to the community you're serving. There are consumer protection issues as well. The FTC certainly cares about this stuff greatly. There have been a few cases, in the privacy context in particular, where the FTC has said: hey, AI company, you've created something that was predicated on data you never had rights to use in the first place, and because your AI is the fruit of that, it's spoiled fruit, essentially. And the FTC ordered those AI companies to destroy those algorithms, which was potentially an existential threat for companies that had spent so much time and resource building them. And I would say, even moving forward, the question's a really good one, Hal, because we're increasingly seeing, in various places in the regulatory ecosystem at the state and federal level, legislators and regulators looking to build guardrails for AI. It's still very early days, but there are a few things where the writing is on the wall. For example, discrimination seems to be a key topic, a hot issue, and we've seen discrimination addressed in a couple of places so far. In the healthcare space, the Department of Health and Human Services proposed a rule not too long ago that essentially says, to the extent that a provider organization uses AI in its workflow and in its clinical care, the provider is responsible to the extent the AI produces bias that leads to discrimination in that care. That would be a violation of Section 1557, under OCR's purview. So that's one example.
We also see AI being used, not necessarily in healthcare, but in the hiring and firing space for employment law. There's a law in New York that similarly says, to the extent that you have an AI making decisions about hiring and firing, you need to make sure it's not doing so in a biased way. All this to say, we're recognizing this issue, and we're certainly trying to position our clients to be successful in this space. One of the things we counsel clients to do is conduct audits, just like Nia mentioned. Make sure you do a bias audit. That's sort of a nascent field right now, but our firm has certainly started that capability, and partnered with some outside consulting firms as well, to give clients a way to audit their systems and also give them the legal and regulatory lens to interpret whatever those audit results are. So it's a really exciting space, but there are a lot of things to consider.
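
The bias audits mentioned here often center on impact-ratio math: compare each group's rate of favorable outcomes against the best-performing group's rate, as in the classic four-fifths rule from employment law. A minimal sketch, with made-up counts:

```python
# Sketch of the impact-ratio calculation commonly used in bias audits:
# each group's selection rate divided by the highest group's rate.
def impact_ratios(selected: dict[str, int], total: dict[str, int]) -> dict[str, float]:
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = impact_ratios(selected={"A": 40, "B": 20}, total={"A": 100, "B": 100})
print(ratios)  # {'A': 1.0, 'B': 0.5} -> B falls below 0.8; investigate the model
```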

Speaker 2:

Definitely. And I think that brings up a really good question for organizations like Clearwater, which help other organizations with risk analysis and risk remediation efforts. As we go in and look at these systems with a view toward HIPAA compliance and the other regulatory requirements and compliance guidance that organizations face, where does AI fit into that? Where should we be taking considerations around AI as part of that risk analysis? We're still very early in that process, so I just wanted to see if either of you had any insight into what some of the considerations should be, because we're talking about the systems that house the algorithms as well as the data itself.

Speaker 4:

Yeah, I go back to a similar point I made, and I think Nia made too: this is about trust for your organization in the markets it operates in. And maintaining that trust, whether it be with respect to a patient you serve, or a beneficiary, or a customer; trust in this ecosystem is so critical with respect to the products and services you're providing to it. I think that to build trust, you need to showcase to people that you take these issues seriously, that you are taking a hard look at yourself with respect to the risk that might be generated from the products, services, and technology you're using, and give people comfort about that. That ultimately serves not only to build trust on the positive side, but to manage the risk around reputation. Because any misstep, whether it be use of AI in an inappropriate way or a biased way, or use of any other technology or data in a way that's unsavory, can land you on the front page of the New York Times or the Washington Post or the LA Times. That's what we want to keep people away from, <laugh>, that kind of scrutiny. We want to make sure they are positioned well to have a defensible program if someone comes asking about it, and that they have technology that manages risk effectively. But it all comes back to the same point: you need a culture of governance around these things. You need a culture of data governance, and for companies that are heavy into AI, we actually say you need an AI governance body, and a policy and process by which to manage the AI you're putting out into the world, in whatever ways you do. So again, think about it holistically in terms of who are the people that need to come to the table to talk about these issues, intentionally and consistently, to manage that risk. And that grows over time, because these new technologies are popping up left and right, and people are using them in all these different applications that no one even imagined just a few months back. So if anyone takes anything away from this discussion, it's: ask that hard question. What do we do to manage risk around this, to stay off the front page of a major newspaper, whether it's with respect to AI or data or technology?

Speaker 3:

Yeah, I really can't say it any better than how you just said it, Alaap. The only piece I would add to your question, Hal, is: where is the underlying data that's informing the AI coming from, and what guidelines are being used? As we know, especially in healthcare, guidelines change all the time. So how quickly is that being updated, and updated within the tools? And on top of that, how is it being challenged? Are they looking at other sources to potentially challenge that information? That, I think, is one of the other important pieces I've worked with my teams on: okay, we're here, but how are we updating this? Oftentimes, that's not necessarily at the forefront when everyone's so excited. So really think about what that looks like and build it into your process, and, as you mentioned, Alaap, have that be a part of the AI governance board. It really speaks, once again, to that reputational harm. Things change, and are you able to get your tools to change as quickly as things are changing out in the world? You really want to make sure that your practices, and your AI tools, products, and offerings, are at the forefront of what's going on, which is likely why you chose AI in the first place. So you need to make sure those products stay with the changing times.

Speaker 2:

Yes, it really does go back to that culture of governance, one that's meaningful, effective, and trustworthy. And, you know, Nia, to your point, to challenge some of the assumptions in the underlying data or in the algorithms, and to make sure the assumptions are accurate, and that the data being collected and the data being manipulated by AI is handled in the manner that's intended. All right, any other thoughts or considerations?

Speaker 4:

Yeah, I'll just add one more thought, since we didn't touch on it before. I deal with a lot of transactional matters where, in the past, data was not really an important issue, one company buying another company, for example. But now data is front and center in pretty much every deal we're involved in, because people have really started to understand and appreciate the value of data to operate whatever business they're operating. One thing that has also become abundantly clear to me in this process is that, historically, people were not doing this work in terms of governance around data. So when the transaction comes up, company A tries to buy company B thinking that, if I get all this data, I'll be able to monetize it, I'll be able to unlock it, I'll be able to aggregate the data of this company I'm buying today with the other companies I bought yesterday, and start to use that data in really valuable ways. But there's sometimes a fallacy in that assumption: it may be that company B never took a hard look at their contracts to make sure they had the rights they needed to use the data, or they're operating in a regulatory regime that's too restrictive to use the data the way the acquiring company wants. And that creates a lot of wrinkles in the way people are transacting. So be very thoughtful and intentional when you're trying to engage with other partners or looking for strategic acquisition targets, and make sure you ask that question early and often: can I actually use the data the way I want to, in a way that fits my strategy? Because a lot of deals end up dying; the value's just not in the deal, because the data rights weren't there in the first place. So, just one random thought.

Speaker 2:

That's an excellent point, Alaap. Thank you very much. I want to thank Alaap and Nia both.

Speaker 4:

Thank you, Hal. Really great chatting with you, and thanks to AHLA for having us.

Speaker 3:

Yes, the same. Thank you so much for organizing this. Really appreciated the conversation.

Speaker 1:

Thank you for listening. If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.