AHLA's Speaking of Health Law

Health Care Corporate Governance: Board Oversight of AI, Part Two—What Does a Framework Look Like?

American Health Law Association

In this special two-part series, Rob Gerberry, Senior Vice President and Chief Legal Officer, Summa Health, speaks with Michael Peregrine, Partner, McDermott Will & Schulte, about the health care corporate governance oversight of artificial intelligence (AI). In Part Two, they discuss what an AI governance framework might look like, the board/management dynamic, the role of an AI subcommittee, oversight of workforce issues, and whether AI can support board functions.

Watch this episode: https://www.youtube.com/watch?v=frFnd8VMT1g

Essential Legal Updates, Now in Audio

AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.

Stay At the Forefront of Health Legal Education

Learn more about AHLA and the educational resources available to the health law community at https://www.americanhealthlaw.org/.

SPEAKER_00:

This episode of AHLA Speaking of Health Law is brought to you by AHLA members and donors like you. For more information, visit americanhealthlaw.org.

SPEAKER_02:

Hello, everyone. This is Rob Gerberry. I'm the Chief Legal Officer of Summa Health and the president-elect designate of the American Health Law Association. I'd like to welcome you to the latest in our continuing series of podcasts on corporate governance issues affecting healthcare organizations. Today's topic is the second part of our two-part series on a critically important governance responsibility: the oversight of artificial intelligence as deployed by healthcare organizations. As we underscored in our first episode, there can be few issues as pressing to the healthcare industry as the use of artificial intelligence to support operations, administration, and patient care services. AI developments offer the potential for blazing new trails in how healthcare is managed and delivered. AI also creates certain risks with its deployment that need to be monitored closely. The rapidity of change and innovation in this field places great focus on the possible role of the board of directors in connection with its oversight and decision-making responsibilities. And it's a role that has not previously been well defined, but rather has been evolving. In our first episode, we covered the basics of the board's relationship to the governance of AI, the factors prompting it, the barriers to implementing it, the fundamental rationale for it, and a related approach to establishing a governance framework. In this second episode, we're going to focus on what the governance framework would look like, as well as how AI may play out in assisting the functions of the board itself, and new issues associated with AI deployment in the board's oversight of human capital. And as always, I'm joined by our AHLA colleague Michael Peregrine, who's also an AHLA Fellow and a fellow of the American College of Governance Counsel. So, Michael, when we left off in our last episode, the cliffhanger episode, you described the NACD's report and its recommended practices for enhancing board integration with AI. Can you now share with our audience some of the nuts and bolts of what a governance structure might look like?

SPEAKER_01:

Sure, but Rob, first let me make a point that I think needs to be explained before we do that. It's a point which kind of underscores some of the complexity we're dealing with here, and it really also reflects the dueling interests of management and the board. Over the last several years, we've seen the rise of a concept or organizational function referred to as AI governance. It's critical, it's vital, it's essential. From my perspective, it's an internal operational framework that's specifically tasked with establishing and implementing policies, procedures, and standards for the proper development and management of AI, and with monitoring risk, compliance, and related functions to make sure things are working: the right hand, the left hand, essentially the whole operational ball of wax. It's like a management-level executive team. It's a totally laudable arrangement. It works, it's critical, but it shouldn't be confused with, or viewed as a substitute for, actual governance by the organization's board of directors. I wish we could use another phrase, because you know I'll get questions: Why do we need a focused governing board effort? We've already got AI governance. Isn't that enough? It surely helps, and the board should ordinarily be expected to rely on much of the work of AI governance. Absolutely, it's a tremendously focused, coordinated effort. But the board also should have the right to exercise oversight over its portfolio. So again, let's be careful when we use the term AI governance, because in many organizations it means something completely different from the board oversight and governance of AI.

SPEAKER_02:

So I must admit, sometimes there are people who get confused by that reference. As you think, though, about clarifying that and making sure our members know the proper path to building a structure, what would you suggest?

SPEAKER_01:

I think you go to the leaders of the AI governance group and say, what's your definition? And then you go back and say, here's what we think. It goes back to what we talked about, Rob, in our first episode. I think the most critical step is the internal sales job: why there needs to be board oversight of AI. We're dealing oftentimes with people at the management level who are not used to working with the board function, and who don't necessarily see the value of the board function the way the board or senior management does. So there's got to be a communication. We go to them and say, here's how we view our role as a board. Tell us more about what you believe AI governance entails. Where does this stuff overlap? How can we work together? If the board sees you as some type of super compliance committee, a compliance committee on steroids, which is fabulous, great, how do we work together? And are you listening to us and understanding that we have a role too? A lot of conversation, which sometimes needs to be facilitated, has to happen.

SPEAKER_02:

So as we move past some of that conversation in theory and on to some of the practical impact: when a chief legal officer calls you and says, Michael, help me draft some duties in my charter around an AI governance committee, what do you typically think of?

SPEAKER_01:

Besides "I want $100,000 up front, right away, in cash"? No. I think we start with the basic role here, and that is an understanding that what we're talking about is the application of the board's basic fiduciary duties in support of the research, development, acquisition, and application of AI by the organization. We're taking the basic duties of care, loyalty, and candor, as we talked about in one of our last podcasts, and we're applying them to this function and strategy within the organization. Supplemental to this would be to support management in the coordination of the various internal touch points, as we just talked about, the right-hand, left-hand thing. So when we're looking at organizational structure, we start off by saying we're not bringing in some new wild system. We're applying the traditional rules and the traditional approach to a non-traditional business function and strategy, and we're coordinating it with those folks who are not necessarily used to working with the board. And we're clarifying: this is what we're doing, this is what you're doing, here's how we integrate. We're not trying to pull a fast one. We're simply applying our traditional expectations to this function.

SPEAKER_02:

So if the board takes all that on, what's left for management?

SPEAKER_01:

Well, let's talk about that. What are the typical duties exercised by the board in this regard? I think they need to be narrow and focused, but they're traditional. Working with management on the development of organizational strategy: the board's not developing the strategy, the board is asking management to develop the strategy. The board then exercises oversight of its implementation. And again, that goes back to the proficiency thing, Rob. We have to keep coming back to that. Another task is monitoring the use of AI within the organization. That goes to keeping a finger on the pulse of what we're doing in AI across the health system or across the healthcare company. And then, and this is where the tie-in to AI governance comes in, it absolutely has to mean assuring the development of effective compliance and risk management controls over AI, and basic policy formation and decision making as to deal making. The board needs to have a decision tree, I think, set forth as to deals, innovation projects, ventures, and investment strategies that relate to AI, because, as you know, a lot of big money is involved in that. And so the board says, we've got to know about it and we have to have an approval process for it. But we're not second-guessing management and we're not micromanaging. We're simply saying: basic kinds of oversight structures, and then, when do AI-related issues come to us for a decision? Management essentially takes everything else. They're developing the strategy, they're implementing it under board oversight, they're monitoring the application of organizational controls, they're managing human resources, they're hiring and firing people, they're doing all the things that management would ordinarily do. All we're saying is that, as it relates to the development and deployment of AI within the organization, nothing is too complex, nothing is too fancy, nothing is too new not to be subject to some level of board oversight. Where it gets a little sensitive, Rob, is where the board is exercising oversight over issues of conflicts of interest, compensation, and consistency with corporate values as those arise in the AI function, because they can be a little different there. But my message to management would be: this is nothing different from the way the board ordinarily extends its control or supervision over operations. It's just that this is a pretty funky kind of operation, and we all need to work together more closely because we're not used to it.

SPEAKER_02:

So that takes us to expertise in the subject matter. I noticed in the NACD report, and in some of the national media coverage, there's been a lot of noise about how you find board member expertise in this space. And for those of us in the chief legal officer role, sometimes we play the chief recruiter role. We know where to find an audit and compliance committee member or an investment committee member and where that expertise sits. But where do we find the AI governance committee members?

SPEAKER_01:

Not easy. And this comes back to a point I think we're going to talk about later, about the use of committees to oversee AI. You talk about a seller's market. Holy cow. Every corporation in America is looking for AI-specific tech directors. How do you find them? I think we're going to have to go to search firms, especially. You're going to have to really be patient in your search for them. You're going to need external help. This will increase the pressure on existing board members to have some level of AI proficiency. Again, it all comes back to that. There's just no excuse for evading it. But the other thing that's important, and I'll get on my soapbox for a little bit: as you know, I've done this now for 46 years, and I have long been a proponent of compensating board members, even nonprofit board members. I just don't get the analysis that says, oh, you're a charity, you shouldn't be compensating board members. The law allows billion-dollar charities; you ought to be able to hire and compensate directors nevertheless. And you're going to have to do that. As you and I were joking, it's kind of like NIL in college football. You're going to have to do it if you're going to get the right talent. And why should some archaic rule about being unable to compensate board members restrict you from getting top talent to exercise oversight over a function that meets people's needs? I don't get it. But again, off the soapbox: proficiency, recruitment, spend money. Those are the big three there.

SPEAKER_02:

So, Michael, back to structure. When we think about where this function best reports, should management be reporting to the full board? Should it be reporting to a subset of the board? How do you see the calibration there?

SPEAKER_01:

I think that's a real challenge, and it's really a facts-and-circumstances situation. When we're talking about information and reporting flow, I think we consider the Caremark standard and what its expectation is in some of the case law. Now, I have not seen any cases emerging at this point, and I may have missed them, that speak specifically to the application of Caremark to AI, but I think it really depends upon the circumstances. And going to your question: how many board members do we have who are really, truly proficient, and do we have enough? I look at it as a combination of things. Whether you report to the full board or to a tech committee: in the perfect world, you're reporting to the full board. I think, though, that because of scarcity of resources, it probably makes sense, and NACD clearly recommends, that you establish a tech committee that carries much of the weight in terms of the oversight and then in turn reports to the full board. That's a totally appropriate approach. It doesn't excuse all board members from gaining AI proficiency; it just acknowledges that you're going to have some board members who are more proficient than others. And it really comes down to what works best in that particular governance circumstance. Now the real question, I think, takes it the next step. Does that committee have board-designated authority to make decisions, or is it simply an advisory committee? That's a tough one. And I'm not sure; that's going to be, again, the byproduct of the level of expertise and experience that's involved. I'd be a little hesitant. I mean, I see arguments for both. I guess I'd start off with the suggestion that, for a new, challenging, exciting technology with risks and rewards, I would probably be less excited about delegating the board's authority to a committee. But again, I think that's something that just depends on a conversation between the board and its committees and its advisors.

SPEAKER_02:

So, Michael, you mentioned a technology committee. Are you a fan of a technology committee handling this work, or do you think they need a separate AI committee, given the volume of adoption of AI tools?

SPEAKER_01:

I don't care what you call it. As I said before, I think ultimately there should be some kind of board-designated committee with authority with respect to technology, data, AI; I don't care what it is. What I don't want to see, Rob, is that it's filled with board members who are serving on three or four other committees. I think in this situation you put people on the committee, you give really careful thought to what its charter is, and, as we said, whether it's advisory or non-advisory and what the scope of its review is. And I think you leave those people alone, in the sense that I'm not going to ask you, Rob Gerberry, to serve on the AI committee and also the audit committee. That's nuts. So the decision on staffing the committee, as well as the creation of the committee, has to depend upon what our resources are; how big, how broad, how detailed our application of AI is within the organization; how big the board is; and what the board members' capabilities are for getting engaged in complex committee activities. It's going to depend. Some organizations, very heavily research-oriented AMCs, for instance, are probably going to want a very detailed, sophisticated AI committee. One thing I will say, Rob, in my experience, is that it's the place to be. I think board members really like to sit on these committees. Some like it because they can learn from it; for others, it's clearly where the action is. It's an exciting area; you're cutting edge and you're really at the forefront of where healthcare is going. So in one respect, to your question, a lot of sitting board members are going to want to be on that committee. I don't think you're going to want for people ready to serve. The question is where you get the true experts. And one other point on that, Rob: if you are limited under state law in what you can compensate a board member for, bring them in as a special advisor to the committee, an outside advisor; however you need to get that expertise. It doesn't have to be a board member if that person is working for the board.

SPEAKER_02:

That's great advice. So you mentioned in our last episode AI and the workforce, the human capital element of this. If the board undertakes that oversight responsibility, how does it balance not micromanaging as it learns more about AI adoption and its impact on the workforce?

SPEAKER_01:

Well, I think this is another one of those areas where the board needs to work with management and say, we have a seat at this particular table. We've all seen the news reports over the last days, weeks, and months about huge layoffs at companies across industry sectors that are arising from AI implementation. And if you talk to the outplacement firms and the consulting firms like Challenger, Gray & Christmas, they'll say, look, it's not all job loss. A lot of the AI deployment is resulting in job creation. But I think management needs to understand, and it's the board's responsibility to help them understand, that the board has always had a fiduciary obligation for oversight of human capital. Whether it's exercised or not, I don't know. But it's always been there. Workforce culture is a corporate asset that has to be protected and nurtured, and the board's responsibility is to do so. What you're seeing now is an interesting combination of law, regulation, best practice, and corporate social responsibility. Let me explain. I think the board's first question is: are we obligated to get involved in these AI situations when they involve people losing their jobs? Well, we all know that there are some federal laws, SEC reporting requirements and the WARN Act, that basically require notice if you're going to be terminating a lot of employees. There are even some state regulations; I think New York even has its own state WARN Act. Again, though, there's no obligation or law that says, board, you must monitor AI deployment when it affects the workforce. But there is a best practice, and again, it arises out of the obligation to oversee human capital. Many interesting statements of thought leadership, including from NACD and others like it, would say that extends to the employment issues affected in a technology transition, not just AI. When you are implementing technology decisions and strategies, and that is necessarily going to cause workers to be replaced by machines, the board needs to be involved. And I don't think that's an issue at all. It simply has to have a seat at the table, not to block management's decisions, but to be part of them, and frankly, to make sure the corporate values are supported. And the final thing: people might say, we don't need to hear from our lawyers about moral responsibility, about corporate responsibility. But boards and management need to be aware that there is a lot of emerging discussion about whether or not there's a moral obligation for the board and the corporation. We know that Pope Leo, and I'm Episcopalian, I'm not Catholic, so I'm not obligated to do what he says, but Pope Leo has been very open about concerns about where AI is going and the importance of protecting the dignity of the workers in this area. You even have people like the former Chief Justice of the Delaware Supreme Court, Chief Justice Strine, speaking out very, very authoritatively on the moral obligations with respect to supporting the rights and dignity of workers as they face displacement by AI.

So that's a long-winded answer, but if I were going to predict, Rob, today, November 12, what issue is going to be at the forefront of AI discussions over the next year, it's going to be this one. And it's going to be an issue that I think management is going to push back on, understandably so. So again, it comes back to a discussion of why the board needs to be involved in this. There are expectations, not necessarily legal ones, but governance principles and corporate social responsibility principles, as well as the principles of fiduciary duty, that say we need to be involved and work with you to make sure that this is not abusive, that we are respectful of the workforce, because that's what our corporate mission statement says we do.

SPEAKER_02:

Well, Michael, I'm a good Catholic, and you made my day by citing the Pope as one of our authorities on our podcast.

SPEAKER_01:

Well, it's interesting, and this is something I'm sure will fascinate all of our listeners. The Pope, who is a White Sox fan, I would point out, said he took the name Leo because the last Pope named Leo was very active in speaking out during the first Industrial Revolution, which we'll all remember from our junior-year high school history classes. He wrote an encyclical on the whole question of the dignity of the workforce. So there's some thought here. This isn't just willy-nilly stuff. And there will be people on the board who will be motivated by, or influenced by, these issues, rightly or wrongly. It's just not an issue, Rob, that I think is going to go away.

SPEAKER_02:

So, Michael, one last question before we leave this topic. I know my board chair and my peers will ask: doesn't this properly reside with the human resources committee? Why should it potentially sit with the AI committee?

SPEAKER_01:

Well, I think there has to be coordination. That's a terrific idea. The reason that I would ask the AI committee to take the lead, and again, this is right-hand, left-hand stuff, is that the ability to evaluate the pros and cons of AI deployment that might affect the workforce needs to involve board members who understand the strategy. Are those members on the AI committee or the human capital committee? I don't know. I would also note that a lot of the emerging principles around this aspect of fiduciary duty expect the board to hold management accountable for its decisions: you'd better be right on this. If we're going to go ahead and remove a lot of workers, we're going to really hold you to making sure that we've achieved these benefits that you predict. And so is the human capital committee the right one to do that? I think it has to be a combination of working together, of reaching out. And there's nothing wrong with that. They just need to integrate their work. And I think the general counsel is the key person to say, hey, wait a minute, maybe we have combined meetings of these committees to address these issues. No need to be siloed.

SPEAKER_02:

So before we wrap up our two-part podcast series, Michael, I've got to ask: AI is pushing all aspects of a healthcare organization to be more efficient. How about the corporate governance function? Can you see ways that AI could support us in our board functions?

SPEAKER_01:

You know, Rob, I'm a get-off-my-lawn guy. I don't have warm and fuzzy thoughts about this. I think it's too early. I've seen AI introduced in the boardroom in areas like information flow, actual governance operations, which I think is crazy, risk evaluation, and, of course, minute taking. I know a lot of really knowledgeable consultants are pushing the pedal on these and other applications. Good for them. At some point it may work out. But right now I'm not seeing the evidence that they really materially improve the quality and efficiency of governance. I've seen how they, especially with information flow, can actually overwhelm the board and paralyze it with information and options and data. There are just huge trust issues with essentially delegating boardroom duties to AI. I know that there are knowledgeable people out there right now who are breaking their pencils and throwing their computers against the wall and saying, what does this idiot think he's talking about? He doesn't know what he's talking about. On this I do, I'm sorry. And I'd also point out that AI can't take minutes like a board secretary. Rob, if you're taking minutes of a board meeting, you sense what's going on, you sense the flow of the meeting, you understand what points are emphasized and what points are not, you understand the tenor of comments. AI can't do that. It cannot be alert to the emotion. Maybe it can, I don't know, but I'd rather trust Rob Gerberry than Robbie the Robot to be alert to the emotions of various board members and officers. But let's wait a while, let's give it a chance. I don't close the door totally. I'm just saying right now, I don't see the evidence of a positive. I see the evidence that it overwhelms board members and actually works against what we're trying to do.

SPEAKER_02:

Well, Michael, we've thrown a lot at our listeners over these last two podcasts. I want to thank you for continuing to be a thought leader in the corporate governance space, particularly on an emerging issue like AI. So thank you for that. Thank you to our loyal listeners also for hanging in and being a part of this series with us.

SPEAKER_01:

And can we let our AI alter ego do that presentation for us?

SPEAKER_02:

Only if you're finally finished with your CD player and allow it to happen.

SPEAKER_01:

There you go. All right, well, thank you, Rob, very much.

SPEAKER_02:

Thank you.

SPEAKER_00:

If you enjoyed this episode, be sure to subscribe to AHLA Speaking of Health Law wherever you get your podcasts. For more information about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org, and stay updated on breaking healthcare industry news from the major media outlets with AHLA's Health Law Daily Podcast, exclusively for AHLA Premium members. To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.