AHLA's Speaking of Health Law

Professionalism in Modern Times: The Ethical Considerations of Lawyering with Artificial Intelligence and Social Media

American Health Law Association

Elle Karimkhani, Chief Legal Officer, Marshall Medical, Faith Driscoll, Chief Legal Officer, United Health Centers of the San Joaquin Valley, and Caitlin Forsyth, Partner, Davis Wright Tremaine LLP, discuss how they are using artificial intelligence and social media in their practices and the ethical considerations surrounding the use of these technologies. They share strategies and practical tips for navigating these technologies in the legal field. Elle, Faith, and Caitlin spoke about this topic at AHLA's 2024 Annual Meeting in Washington, DC.

Learn more about the 2025 Annual Meeting in San Diego, CA here.

AHLA's Health Law Daily Podcast Is Here!

AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this new podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.

Speaker 2:

AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this new podcast. To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.

Speaker 3:

Hi everyone, my name is Elle Karimkhani. I'm Chief Legal Officer at Marshall Medical Center, which is a hospital-based system in Northern California. And I am here joined by my two fellow speakers. We all spoke at the 2024 Annual Meeting this past year in Washington, DC on professionalism in modern times: the ethical considerations of lawyering with social media, AI, text messaging, and other modern technologies, and how to use these technologies to advance your career.

Speaker 4:

Hi, my name is Faith Driscoll. I am Chief Legal Officer at United Health Centers of the San Joaquin Valley, which is located in Fresno, California.

Speaker 5:

I'm Caitlin Forsyth. I am a partner at Davis Wright Tremaine in our Portland office, and I work with hospitals and health systems and clinical laboratories as well. So we're here today to talk a little bit about where we've gone after giving our presentation back in June in Washington, DC. During our presentation, we had two different tracks. We talked about how we use these technologies in our practices, with the idea of letting others see how we use them in case they might not be using them in that way, and inspiring them to use these technologies. The ones that I'm really thinking of are social media, really LinkedIn, and text messaging. We briefly talked about some of the benefits and drawbacks of using text messaging in your practice. And then, everybody's favorite topic, we talked about artificial intelligence, AI. So we talked a little bit about how we use these technologies in our practice, but more importantly, we talked about the rules of professional conduct to have in mind when we are using new technologies. Not as a way to scare anybody away from using them, but just to give us all some guardrails and guideposts to keep in mind when using these technologies. So with that, we gave an hour-long presentation in DC on this really interesting topic, and we now wanted to not talk about everything we talked about before, but talk about where we've gone from there. So with that, I'm curious, Elle and Faith, how has your use of either social media or AI changed following our talk?

Speaker 4:

Well, for me, this is Faith. I definitely use AI a lot more now. We've incorporated it with our legal research as well as our contracts management, and so I am using AI at least on a weekly basis, if not more. And then my favorite, ChatGPT, using it to assist me with writing the first draft of many different types of documents. I've continued to use it, and I think I am relying on it a little bit more, as my specialized login has become a lot more helpful and useful in those first drafts as the technology is starting to get to know me more and my role as Chief Legal Officer. So I've really enjoyed starting to use it even more, and I've incorporated social media a little bit more as well, sharing some of the good news of United Health Centers' growth as well as legal topics. And as the California legislature went through a session, I've been highlighting different bills, or now new laws for 2025 that were passed, that affect employers in California as well as healthcare in particular.

Speaker 3:

Yeah, absolutely. This is Elle, and it's the same type of use for me too. I have definitely been using ChatGPT, like we talked about during our talk in DC, to write those initial drafts, but also delving into some of the AI offerings from the legal services side. There are a couple of really prominent legal research companies that are offering AI functionality, and I have found it particularly helpful because Faith and I are both in-house attorneys. There's an aspect of being a generalist, and sometimes you're not sure where to start when something comes up. So to have a functionality built into these types of services where you can put in that initial question, and it thinks about the different things that are implicated and shows you some of the different cites, regulations, and case law, is just a really nice way to get the research started.

Speaker 5:

Yeah, I'll say for me, this is Caitlin, I really had not used AI at all before our talk, primarily because I just had a sense of being nervous about it. So I think one of the really great things for me, coming out of spending the time both learning about what these tools can do and then taking some time to think through the ethical issues with the use of AI, is that it actually made me less scared to use it. I realized that there are some really great ways to institute checks and balances, if you will, on my use of AI, so that it doesn't have to be the scary thing that might land me in hot water ethically. So I encourage anybody who's scared to use it, who was in the same position that I was, to just start getting your feet wet and keep in mind some of the ethical principles that we're going to talk about as well.

Speaker 3:

Yeah, absolutely. Caitlin, on that point about the fear of using AI and making sure that there are appropriate guardrails, it's so interesting how that completely aligns with what the regulators are struggling with right now, which is: how do we balance not stifling this innovative thing that has so much potential, while also building in the appropriate guardrails? And as practicing attorneys, I'm sure every single one of us is trying to make sure that we hit the right balance as well.

Speaker 4:

Yeah. And speaking about the legislative changes over the last year, there were actually a few bills pending in California that would have regulated the use of automatic decision tools, such as in recruiting and early employment evaluation of applicants. Those bills did not pass. However, California's Civil Rights Department is passing regulations of its own. I've been following the drafts of those regulations as the department furthers discussions about fine-tuning what they will end up looking like. It's been really interesting to see that, although this first popped up through legislative action, it ended up more in the regulatory realm, under the purview of the Civil Rights Department, with respect to employment as well as housing and the use of tools to assist with making those decisions. So, moving on to some of the ethical principles, those guardrails we've mentioned: how do you both continue to keep them in mind as you use or consider using social media or AI in your practice?

Speaker 5:

Yeah, for me, particularly if I'm using something like ChatGPT, I really have the duty of competence in mind. It is always the case that if I'm using AI to get going on something that might be a complicated research issue, maybe state law research, it is really just the first draft. It is never, "Oh, this is the answer, I'm done," whatever output I get from ChatGPT on a question like, "Does the state restrict health plans from giving discounts on non-covered services?" That's an example of a real inquiry I put in recently. And it certainly spit out some valuable starting points, laws that might or might not be implicated by the practice, that were really helpful to think about as I tried to run down in legal research whether that was really an issue. So yeah, definitely that duty of competence, that this is really just the very first round of research, and there's certainly more research and analysis to come. I also have in mind our duty of confidentiality, both with respect to my use of AI and social media. In AI, any prompt that I put in of course has no client names and very often has no other identifiers. If it's not really germane to the question, I'm not even identifying where the particular healthcare provider is located or what type of healthcare provider or facility they are. So really, I'm not inputting any identifying information, whether it's the name or enough detail together that could identify a particular client. Same thing on social media. I, for one, have personally decided not to ever post about my clients, whether it's naming them or describing them in general terms. It's just an area that I'm not comfortable dabbling in.
I think there are ways to do it, but it's just not what I do. And particularly as a regulatory, day-to-day advice lawyer and not a litigator, there's not really a lot that I can make public anyway. So those are the two principles, the duty of competence and the duty of confidentiality, that really guide my conduct in this activity.

Speaker 3:

Yeah, absolutely. Those are definitely foremost in my mind as well while using AI in my practice, confidentiality in particular. So, a disclosure to everyone listening to this podcast: as in-house counsel in my role for another client, I was involved in a large-scale data breach. So I feel like my paranoia about confidentiality may be a little more than others', but it is absolutely foremost in my mind. I probably take it a step further, especially when it comes to the use of ChatGPT, et cetera. Not only do I anonymize data and make sure that the data cannot be traced back to any of the specific things that I am working on, but I also routinely delete all the threads and make sure, in the settings on the back end, that ChatGPT cannot use my inputs or my questions into the generative AI to formulate responses to others. That is a functionality that the legal services companies that have built out AI functionalities also tout as already being a part of their systems, but it's definitely something to keep an eye on to make sure it's actually happening. For my legal practice, again, I'm very conscious of the things that Caitlin just mentioned, but I'm in-house counsel, so we also have the amazing, innovative AI technologies that are coming into the healthcare space, and I need to make sure that I guide our team of amazing healthcare professionals in their use of AI. For me, two of the things that I'm particularly conscious about are the algorithmic, discriminatory types of biases that are coming out of AI, being very conscious of those and trying to mitigate the associated risks.
And then one of the things that we mentioned during the Annual Meeting session is AI hallucinations: AI will actually sometimes just make things up and present them as facts, and those may put patients at risk and my organization at risk.

Speaker 4:

Mm-hmm <affirmative>. Very good. Now, moving on to using social media. Caitlin, how do you navigate the fine line between sharing general legal information and giving specific legal advice in social media posts?

Speaker 5:

Yeah, this is a fine line. I think for me, the key thing I have in mind is that if I'm making a LinkedIn post, if you will, I certainly want to make it helpful to whoever's reading it. So I want to have some piece of insight that is not just, "Here's a recent settlement agreement, read it to go find out what happened," or even, "Here's a recent settlement agreement around a common business practice; here's a summary of the conduct." I try to take it one step further and point out things that are notable about the settlement, so that I'm showing my expertise in that way, that I am more than just a reader and summarizer of enforcement actions, but I analyze how other business practices might implicate the same kind of enforcement scenario. So I highlight or point things out in that way, but I am careful not to say, "Do this, don't do this," because that to me seems like giving advice. The other thing that I have in mind, with respect to making sure that I'm not giving specific legal advice through my use of LinkedIn, is that I use LinkedIn really as a place where I can introduce myself to others and others can introduce themselves to me. So really, the second somebody might start to engage with one of my posts, whether it's a follow-up question or sending me a message via the messaging function, I try to very quickly take that off the social media platform and connect through the more formal ways that I talk with clients. So very quickly moving it from LinkedIn to either email or a phone call, and then working to determine whether what we're talking about might need a formal client engagement.
I think the thing that I'm sensitive to is that I don't want there to be comments or messages in LinkedIn from, for example, a health plan, when I am very often adverse to health plans on behalf of my providers. And I would hate to set myself up, really just by nature of being on LinkedIn and offering my thoughts on legal developments, by creating a situation where I end up conflicting myself out from being able to do work for my clients because I've gotten too far down the road in some back-and-forth discourse with a representative of a health plan, for example.

Speaker 4:

I love that. That's very interesting, because those attorney-client relationships can develop before we, as the attorney, even think that they might. It's really from the perspective of that potential client, who believes that they've started to engage in a relationship with an attorney, is actively trying from their side to seek advice, and feels like they're getting some from a lawyer. That can be a real pickle, but it sounds like you've been very careful to formalize those types of relationships, even if they start out very informally on social media or LinkedIn. One of the things I wanted to share about that fine line is how I tend to manage it in articles that I write or posts that I share, because sometimes I share longer writings summarizing or going over recent cases or things that impact employers or healthcare entities. I try to use language that, although it highlights the mistakes or missteps of another entity, says, "Consider the risks" if you engage in behavior similar to the party that didn't end up faring so well in whatever case or situation I'm discussing. I try to make sure to point out those concerning behaviors, but also with the caveat of "seek counsel from your favorite attorney, whoever that is" before you make changes or try to implement any of the feedback that the court or regulatory agency has shared that would be useful and worthwhile to consider.

Speaker 3:

Absolutely. I can't speak too much on this question because, as my fellow speakers know, I do not use social media, although many have tried to convince me otherwise. But I am holding firm. So to Caitlin and Faith: when you're using social media, how could this type of activity create conflicts of interest, particularly considering current or potential clients? You both touched on this, but can you delve a little more into the conflict of interest aspects?

Speaker 5:

Yeah, definitely. I did wade into this a little bit earlier, getting ahead of myself <laugh>. But it means being really mindful whenever I am engaging with anyone on LinkedIn, and I consider engaging to mean not making a post, but more directed engagement: messaging, commenting, or maybe even liking or reacting to something, if you will. So I have in my mind where positional conflicts might arise, so not even legal conflicts of interest, but positional conflicts. I gave the example earlier that in my work I am very often adverse to health plans in payment disputes with providers. Because of that, I am not going out of my way to engage with, whether that's liking or commenting on, any payer representative's post. Not that they couldn't be making very interesting or valid points, but because this is a public reflection of my activity, I choose not to engage with a health plan that I might be adverse to someday, even if I'm not currently adverse. So that's one of the things I have in mind. And then, really, like I said before, if somebody does the engaging with me and I believe there very well may be a conflict, really anybody who's not a current client and whom I don't know for certain is not conflicted out, I'm trying to take that offline pretty quickly and get them through our firm's conflict check system, just to make sure that any discussions I have with them aren't going to put me in a bind.

Speaker 4:

I totally agree. I think one of the things that we discussed during our preparation for our session at the Annual Meeting was running that conflict check sometimes even before you post or repost something. If you want to share something that talks about the misfortune of a healthcare entity that found itself in hot water, sometimes it's about someone that is a client of the firm, even if you didn't know that. Thinking back to my private practice days, it was impossible to keep track of who was working with different entities on other matters that I wasn't involved in. So keeping that conflict check as one of your steps, before you start sharing things that you find interesting, or that may have an unflattering story about a firm client, can really keep you out of hot water as far as creating a conflict of interest. That was something where I realized, "Oh yeah, that's a very good idea," when it comes to maintaining those boundaries and avoiding any appearance of a conflict of interest.

Speaker 3:

Absolutely. And, you know, social media has been around for a while, so we're really fine-tuning our use of it in our practices. But generative AI definitely exploded onto the scene in 2022 with ChatGPT, and we're all waiting with bated breath on how this patchwork of regulations is going to develop and where we're going to go. This is a relatively new piece of our practices. So Faith and Caitlin, how do you see AI changing the practice of law in the next five years?

Speaker 4:

Well, using my little crystal ball, I see that the practice of law in general is going to become more and more reliant on AI as people become more familiar with it and as AI begins to permeate the technology. Even now, at this early stage, although AI has been around for a little while, I think it's just now starting to become more heavily used in some of those common services. The legal research companies, the bigger guys, are starting to use it now too, so it's getting to a point where you can't avoid it. And as people can't avoid it any longer, and also to maintain their duty of competence, they have to learn how to use these tools, just like we had to learn how to do electronic discovery. You're going to have to know how to use AI. And once you know how, I think that's just going to lead to further reliance on AI in the practice of law in general.

Speaker 5:

Yeah. One of the things that I have very recently been grappling with, if you will, is that at a law firm we have this hierarchy of attorneys for every project. Many of my projects are, for example, a simple billing question, "Can we bill using this modifier? Is it appropriate for what we're doing?" or, "Can we use registered nurses in this way for these services in this type of setting?", some examples of issues I might encounter in my practice. Before AI, I would have relied on a junior associate to just get the ball rolling and see what's out there: well, what legal issues might this implicate? Is this going to implicate a Medicare payment rule? Is it going to implicate state law from a relationship perspective? So where I would have used a fairly junior associate to kick-start the research, I am now starting to use ChatGPT to kick-start the research in that way. And I think there are two ways to look at it. One is, "Oh my gosh, what are junior associates going to do now if ChatGPT is replacing them?" But I actually think there's another way of looking at it that is really promising, which is that we are going to start relying on junior associates to do something one step above what we used to. We're going to start training them to do the harder work earlier, instead of relying on them to do the work that ChatGPT is now doing, that initial look.
So I really have AI in mind with respect to how I am going to continue to keep our younger attorneys busy when we have this new technology. Again, I don't think AI is replacing or duplicating the work of a younger attorney, but it's certainly part of the way there. So I think AI is going to change how we at law firms use younger attorneys. But I don't think it has to be a change for the bad. It can be a change for: let's get younger attorneys doing more challenging work from the get-go.

Speaker 4:

That's a really interesting perspective, because I guess I didn't really think about that since I've been in-house. You know, I am a legal department of one, and so it's me. But now, thinking back to private practice days, absolutely, you were staffing your projects or cases with junior attorneys to do some of that beginning-level research and beginning-level writing. That's a really great point, that we are able to replace some of that lower-level work on the larger project by using AI to get the ball rolling in the way that junior associates previously served in that capacity. So I love that. I love that for them, getting the opportunity to do work that's a little bit higher level than maybe they would have had the opportunity to do previously.

Speaker 3:

Faith, I think that's such an astute concept for in-house counsel too, because one of the things that in-house counsel, or I, have struggled with in the past is: am I working at the top of my license? Being in-house, versus being at a firm in private practice, there is that push and pull of when you're getting too much into operations versus really providing your client the top of your license, that top legal practice. So I think that, in that respect, it can also help a lot of us make sure that we're working at the top of our license for our exclusive clients, for those of us who are in-house <laugh>.

Speaker 5:

Yeah, I think there's certainly a lot that can be gained by using AI in practice, but as we've seen from some of the horror stories now, it can sometimes produce inaccurate or biased results. Elle mentioned the hallucinations that it can produce. I will say, too, the difficult thing about AI sometimes is, and AI is not a person, so to say "confidence" seems to impute a personal quality to it, but the confidence with which it sometimes spits out completely wrong information is misleading as well, because you think, "Well, it's so confident, it has to be right." But it's not always right. So, you know, I'm curious, Faith and Elle, how you work to verify AI-generated information before you implement it into your practice.

Speaker 4:

Well, I am constantly double-checking, triple-checking any output that I receive. I am looking for other sources that have a similar perspective. I'm looking at sources that have a different perspective, because I want to cross-reference and see what the other side is saying on the issue. I truly agree that AI can be so confident. You know, as confident as I am, I'm not always right. I try to be always right in my job, but at home, in my personal life, certainly my husband will attest to me being wrong, and confidently wrong. So I think it's really important to verify: go to the regulation, go to the text of the statute if you really want to get down to the nitty-gritty of it, to make sure that it says what you are being told it says, and make sure that the case actually holds for the proposition you're being told it does. There's really no shortcut to doing that. And even if you find an article, I've found many articles that say things where I think, "Oh yeah, this is exactly what I was looking for," but then when I dig a little deeper, the writer got it wrong. The writer of the article, not AI, an actual person, got it wrong, or misunderstood, or misinterpreted, or read it in a way that I didn't agree with. So I think it's really critical to continue to dig deep into where the AI is getting that from. And don't be afraid to ask the AI, "What's your citation for that?" or, "Where are you getting that information from?" because it will answer that question as well. Then you can look to that specific source and make sure that it is exactly what you are understanding, or being told, that it is.

Speaker 3:

Yeah, absolutely. It's a very similar approach in my practice, as Faith just mentioned: double-checking, et cetera. The citation thing is interesting, though, because AI will make up citations. So I would say, no matter the citation, just go ahead and check it. It is incredibly confident when it does speak, and Faith and Caitlin have both mentioned that. That comes from these algorithmic hallucinations, which are very common in generative AI and not restricted to ChatGPT; any type of generative AI can come up with these types of "facts," et cetera. And for me, it's not just in my own legal practice. One thing I mentioned is that I have to be incredibly conscious of the AI capabilities that are implemented throughout our hospital system. There are cases coming up now where you actually see generative AI used in the healthcare space coming up with diagnoses, or writing things in the notes that did not happen during an encounter, et cetera. So how do you mitigate those risks and really make sure that we're approaching this thoughtfully? I think the thoughtful approach is foremost in my practice, in how I come toward AI, and in how I try to advise my client on it. AI is wonderful, and, as was discussed before, Faith, I completely agree, we cannot avoid it. In a few years it's going to be akin to avoiding a computer. Mm-hmm <affirmative>. There are moments and technologies that happen and we have to adapt, but it should be adapted from a very thoughtful place. And I feel that that is really the push and pull we're seeing, not just in legal practice, but also in healthcare. One of the other things I also briefly mentioned before is algorithmic discrimination, the biases that AI has and how those might come into play.
So being very conscious of AI's limitations, I think, is going to be incredibly helpful while we try to delve into this space. And of course, keep an eye on the regulations that are hitting and trying to address both the biases and the hallucinations that we know are very common and can happen with AI.

Speaker 5:

Yeah, we also wanted to talk a little bit about how we make sure that we are still providing high-quality representation when we're using AI. I think some of the things that you both have mentioned already, your checks and balances, if you will, really go to that. But I'm curious if there are other things you have in mind as you're using AI to make sure that the quality of your services is not dropping because you're using, if you will, a non-lawyer to help you in this way. How are you making sure that you continue to provide high-quality representation and legal services?

Speaker 4:

One of the things that I do, and I talked about it a little earlier as far as avoiding inaccurate results, is make sure that you're only using AI and those tools as a complement to your already robust, competent practice. You can certainly use AI to reduce costs, but you also have to keep at the top of your mind that you need to review everything very carefully. Don't trust it blindly. That does take time, and attorneys are paid based on the time that it takes to complete the task. But it's really important in order for us to maintain those high-quality services and to meet our burden, the requirements of our licenses, to provide competent work and quality services, which is not just a personal goal, but also a very high expectation on our clients' side. So we want to make sure that we're using AI as a complement, not letting it replace a lot of the higher-level work that we need to do to meet our goals.

Speaker 3:

Absolutely. And, you know, thinking about reducing costs really comes down to the ethical duty to represent a client for reasonable costs and fees. So it begs the question of whether not using AI will actually cause your client to incur fees that they shouldn't be incurring. And it comes back also to that idea of really working at the top of your license. If AI is used in an appropriate and efficient manner, not only is it helping your client get where they're going, hopefully a bit cheaper than it would otherwise, but I think it also gives us each the space to really grow and keep elevating our practice, to hit those places and spaces that maybe we haven't been able to hit as much because we've been bogged down by the first draft of some kind of communication, which, I don't know about all of you, but sometimes gets me. So for those things, I think AI can be a wonderful tool, but definitely something to stay tuned in on. It's going to be really interesting to see how this patchwork of legislation develops and whether or not the federal government is going to step in with some regulations of its own. As we close up here, first I want to give a big thank you to Caitlin and Faith. Thank you for joining me on this podcast; it has been so great spending this time with you, and thank you for agreeing to get this band together. For all of you out there listening, thank you for listening in, and we hope to see you at the 2025 Annual Meeting in San Diego.

Speaker 2:

Thank you for listening. If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.