Article

Generative AI: Adoption and ethical considerations for legal departments

Sterling Miller
Senior Counsel for Hilgers Graben PLLC

By now, you know that generative AI (GenAI) adoption is spreading at warp speed in the legal world. Lawyers, both in-house and outside counsel, are finding new and innovative ways to capture the opportunities generative AI offers to reduce cost and increase productivity. While all lawyers can benefit from the many uses of AI, in-house lawyers, in particular, are highly incentivized to lead the way.

Much like email and smartphones dramatically changed how we do business daily, generative AI will soon become ubiquitous — an indispensable assistant to practically every legal professional. Lawyers who do not adapt and embrace the change will get left behind. Those who do embrace it will ultimately be freed up to concentrate on the two things there always seems to be too little time for: thinking and advising.

Welcome to the final article of the update of my 2017 series on artificial intelligence and its impact on in-house legal departments. In 2017, I was focused on the promise of the early intersection of artificial intelligence and legal work. It was more “this could happen” than “this is happening.” But, in 2023 — with the advent of ChatGPT and other AI-powered chatbots — “could” has quickly become “is,” and the changes wrought by generative AI tools are powerful. They’re also potentially disruptive in both good and not-so-good ways. 

Because of this explosion in practical day-to-day use of GenAI, I have been updating my 2017 series to discuss what’s happening, why it’s happening, and what in-house lawyers can and should be doing in response. In Part 1, I discussed what ChatGPT and generative AI are and how they work. In Part 2, we looked at the question: Will generative AI replace lawyers? In Part 3, we took a tour of the incredible array of practical uses for AI. Here, in my final installment of the 2023 refresh, I discuss some of the ethical dilemmas, the shortcomings, and what you, as an in-house lawyer, should be doing next.

What’s all the hubbub, bub?

As we saw in Part 3 of this series, there is a wide range of uses — and potential uses — for generative AI in the provision of legal services. The most common tasks include drafting emails, memoranda, presentations, and contracts; summarizing meetings and documents; and translation. Chatbots, checklists, due diligence, and contract redlines and negotiation are also on the table. All of these are powerful tools in the hands of in-house counsel, as they potentially solve two huge problems: lack of budget and lack of manpower.

The value is immediately apparent if generative AI can provide less-expensive legal services — either internally or purchased through a law firm. If AI can free up current staff from spending time on transactional tasks, the value is exponentially greater than merely paying less for legal services.

Freeing up time gives you and your business clients better access to high-level legal services — attorneys who can dedicate more time to thinking through problems and advising clients. It’s also a huge morale boost: attorneys freed from drudge work become more valuable generally and more satisfied with their jobs. That, in turn, cuts down on lawyer attrition and burnout, the hidden productivity killers in most legal departments. The combination of these benefits will allow in-house legal departments to deliver on the old CEO/CFO demand of “doing more with less,” which is making the rounds once again in 2023. Historically, that demand has meant everybody works harder while some things fall by the wayside or get done in a less-than-ideal manner.

AI opens up the possibility of actually increasing service while spending less money and less time — with the added possibility of real client self-service. These are more than enough reasons for in-house counsel to welcome the arrival of GenAI in all its forms. 

So, what’s the catch?

Of course, it’s not that easy. While these tools are indeed powerful, several pitfalls must be overcome. I will focus on the shortcomings of ChatGPT here; many of these problems apply to the other flavors of generative AI, but not all do. This is one area where the pace of change is so dramatic that what is a problem today may not be a problem next week.

Here is a partial list of the hurdles we will all face as we adopt generative AI tools into our in-house practice:

  • Limits on information. Currently, ChatGPT relies solely on information available through September 2021 and does not connect to the internet when searching. That’s a problem. However, I understand that an update is underway and that connecting to the internet via plugins is possible.
  • Memory limits. ChatGPT-3.5 can accept a prompt of roughly 3,000 words; ChatGPT-4 can handle roughly 12,300 words. That is good, but not enough for hard-core legal work, especially when analyzing briefs and motions.
  • It makes stuff up. ChatGPT sometimes makes up answers. These are called “hallucinations,” which is not something a lawyer wants to hear about their assistant, real or virtual. Knowing what to expect from ChatGPT and understanding its limitations is essential. That is why it’s critical for lawyers to double-check any material they get from generative AI tools, especially if they are using them for any type of analysis or legal writing.
  • IP infringement. ChatGPT does not care what sources of information it uses or whether that information is protected by copyright or trademark law. It will give you whatever it thinks best answers your prompt — or make stuff up. A big question is whether GenAI tools can ingest everything and call it fair use under U.S. copyright law. If not, what does that mean for the future of these tools — and their cost? Generally, no citations or footnotes are given to attribute source material. You can ask for them, but even then — as I have found out — they can be 100% wrong. Unless you are using the tool in a closed loop of information, IP infringement is a real risk.
  • No confidentiality. Unlike Las Vegas, what happens in your ChatGPT prompt doesn’t always stay in your ChatGPT prompt. Whatever information you enter into ChatGPT can find its way into the public domain or be used to train the AI. Fixes are underway, but until they arrive, be careful how you word your prompts and use “dummy” or fake names (a simple sketch of this approach appears after this list).
  • Zero accountability. Generative AI is not rational, has no ethical boundaries, and doesn’t care what sources it uses — or if it must make up the answer. In other words, there is zero accountability. That’s okay, so long as you know that you are accountable. 
  • Privacy laws. Personally identifiable information you put into ChatGPT may be protected by various privacy laws, such as the CCPA and the GDPR. If so, how does ChatGPT allow you to comply?
  • Lack of policies and regulations. Many companies and legal departments have been caught off guard by ChatGPT and its popularity. We will need to put policies and procedures in place to protect the business and ensure that employees are using the tools properly — if at all. Likewise, regulators are scrambling to enact legislation that covers generative AI, which will inevitably lead to unintended consequences.
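
To make the confidentiality and privacy points above concrete, here is a minimal sketch, in Python, of the kind of “dummy name” substitution a legal team could run before pasting text into a public chatbot. Everything in it (the party names, the replacement map, and the redact helper) is a hypothetical illustration, not a feature of ChatGPT or any other product.

    import re

    def redact(text: str, replacements: dict[str, str]) -> str:
        """Swap real names and identifiers for neutral placeholders before prompting."""
        for real, placeholder in replacements.items():
            # Whole-word, case-insensitive replacement of each real name.
            text = re.sub(rf"\b{re.escape(real)}\b", placeholder, text, flags=re.IGNORECASE)
        return text

    # Hypothetical matter details an in-house lawyer would not want to expose.
    replacements = {
        "Acme Widgets": "Party A",
        "Jane Doe": "Employee 1",
    }

    contract_excerpt = (
        "Acme Widgets shall indemnify Jane Doe against any third-party claims "
        "arising out of the Services."
    )

    prompt = (
        "Summarize the indemnification clause below and flag anything unusual:\n"
        + redact(contract_excerpt, replacements)
    )
    print(prompt)  # Only the placeholders leave the building.

The mechanics matter less than the habit: decide up front what never goes into a public tool, and strip it out before you hit enter.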

While these are all scary shortcomings, that does not mean you should just ignore ChatGPT and hope it goes away. It won’t. So, like it or not, welcome to the brave new world and keep your guard up — along with your willingness to answer questions and solve problems — because that’s what lawyers do.

Ethical dilemmas and AI

Let’s cut to the chase. First, GenAI lacks ethics. It’s a machine with no ability to discern, apply context, recognize when it is making things up, or deal with or express emotion; it’s just a potentially helpful tool. Second, under the rules of professional responsibility here in the United States, lawyers have many ethical obligations to watch out for and comply with when using generative AI. Here are some of the most important under the ABA Model Rules:

  • Rule 1.1 – The duty of technical competence for lawyers. All lawyers must stay current on technological developments that impact the practice of law. 
  • Rule 1.4 – Communications. Specifically, Rule 1.4(a)(2) and the need to inform the client that you will be using AI to assist with providing your services. 
  • Rule 1.6 – The duty of confidentiality. When using GenAI tools, you must ensure that any client information you enter is not confidential or, if it is, that the tool you are using will protect that confidentiality. 
  • Rule 5.1 – The duty to supervise. In particular, you must ensure that the lawyers you supervise know the applicable ethical rules and comply with them.
  • Rule 5.3 – The duty to supervise non-lawyers. Lawyers cannot outsource their work to non-lawyers, like ChatGPT. They must stay involved. 

Several other ethical rules may apply as well. Similarly, while it is unlikely, in-house lawyers are not immune from malpractice lawsuits. If you use generative AI for legal work, you must know what it is doing, review the work product, and not take anything you get from ChatGPT at face value. Alternatively, a better plan might be to adopt the three laws of robotics from Isaac Asimov — which he outlined in his science fiction classic, I, Robot — and apply them to generative AI tools:


“1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

What should I do next regarding generative AI?

Lawyers are generally slow to adopt new technology. We are naturally skeptical and tend to see the problems with something new before we see the benefits. But, given what I said above, ignoring GenAI is not an option. Here is what the legal department needs to do next:

  • Embrace AI — it’s here to stay, but act with restraint and caution. Verify everything, trust nothing.  
  • Develop legal department and company policies regarding the use of generative AI tools. 
  • Start small with free AI products and low-risk tasks — understanding the risks is imperative — and get your feet wet. Then, move to a paid GenAI solution, preferably one with built-in legal guardrails. Finally, look to established companies offering generative AI products, such as Thomson Reuters, and use those to truly establish a foundation for AI use in the legal department.
  • Do it as a team — figure out how best to make ChatGPT work for everyone in the department. Consider appointing someone to become the ChatGPT guru on the legal team. 
  • Keep data privacy and confidentiality concerns top of mind. 
  • Learn how to draft prompts that work for in-house legal research and needs (see the sketch after this list).
  • Stay up to date. Things are happening fast, the ground keeps shifting, and you need to keep current on legal trends.
  • Understand your state’s ethical obligations around the use of ChatGPT. 
  • Take the lead — or join the team — looking at ChatGPT for use by the company as a whole. The legal team should be leading from the front. 
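
On the prompt-drafting point above, a reusable template helps keep prompts consistent across the team. The sketch below, in Python, is illustrative only; the wording, the fields, and the build_prompt helper are my own assumptions rather than a prescribed format.

    def build_prompt(task: str, context: str, constraints: list[str]) -> str:
        """Assemble a structured prompt: role, task, context, and explicit limits."""
        lines = [
            "You are assisting an in-house legal team. Do not invent facts or citations.",
            f"Task: {task}",
            f"Context: {context}",
            "Constraints:",
        ]
        lines += [f"- {c}" for c in constraints]
        lines.append("If you are uncertain, say so rather than guessing.")
        return "\n".join(lines)

    print(build_prompt(
        task="Draft a short email to a vendor requesting a limitation-of-liability cap.",
        context="SaaS agreement renewal; no confidential names included.",
        constraints=["Plain English", "Under 200 words", "Flag any assumptions you make"],
    ))

Whatever format you settle on, the structure forces the drafter to spell out what the tool should and should not do, which is where most bad prompts fall down.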

Conclusion

This discussion of shortcomings, ethical dilemmas, and next steps concludes our series on ChatGPT and generative AI in legal departments. While I’m not an expert by any means, I hope you now have a solid understanding of GenAI, how you can benefit from its use in your legal department, and what steps to take to capitalize on this exciting development. It will likely take a long time for ChatGPT to live up to all the hype regarding legal work, but it will get there.

This is truly one of those game-changing moments in history, especially regarding the practice of law. You have a front-row seat and a part to play. As in-house lawyers, now is not the time to shy away from generative AI; it’s time to run to the fire. When you get there, be smart and thoughtful about how you, the department, and the company use these tools. Not only can your new 24/7 assistant do wonders for your bottom line and free up attorney time, but it can also allow you to carve out a niche as a leader and innovator at your company and — more importantly — boldly go where no legal department has gone before. As Spock from Star Trek said: live long and prosper. 

About the author

Sterling Miller is currently CEO and Senior Counsel at Hilgers Graben PLLC. He is a three-time General Counsel who spent almost 25 years in house. He has published five books and writes the award-winning legal blog, Ten Things You Need to Know as In-House Counsel. Sterling is a regular contributor to Thomson Reuters as well as a sought-after speaker. He regularly consults with legal departments and coaches in-house lawyers. Sterling received his J.D., with honors, from Washington University in St. Louis.