Your clients are using AI; you need to be ready

This article, written by attorney Bob Lucic, was originally published by New England Biz Law Update.

Since the launch of ChatGPT on November 30, 2022, lawyers' reactions to artificial intelligence have ranged from scorn and denial to fear. Lawyers have famously gotten into trouble for failing to double-check citations generated by AI. Some courts have issued standing orders on the use of AI, including orders requiring disclosure of AI-generated content. Lawyers have struggled to understand the ethical issues raised by their own use of AI, especially client confidentiality and quality control of work product. Lawyers, however, like generals, always prepare to fight the last war. Instead of getting ahead of the issues posed by this fast-changing technology, they are reacting to old news. Many lawyers' response is to discount AI and go about their practices as they always have.

That reaction is understandable, but it misses the larger point. Even if a lawyer personally does not want to embrace this technology, clients are using AI, and they will begin to get themselves into trouble, if they haven't already. Whether or not they use AI themselves, lawyers need to understand this developing legal minefield so they can advise clients on the potential risks and protect them.

A jarring example of how quickly the legal landscape is changing in this area came just before the New Hampshire presidential primary. Robocalls using an AI-generated version of President Biden's voice urged potential voters not to vote in the primary and instead to save their votes for November. The reaction of the Federal Communications Commission was swift: it issued a Declaratory Ruling stating that the use of an "artificial or prerecorded voice" falls within the restrictions of the Telephone Consumer Protection Act (TCPA), 47 U.S.C. §227. The TCPA requires the prior express consent of the called party before such a call can be initiated. The FCC stated, "[i]n every case where the artificial or prerecorded voice message includes or introduces an advertisement or constitutes telemarketing, it must also offer specified opt-out methods for the called party to make a request to stop calling that telephone number."

In Pakistan's recent election, Imran Khan, who is currently in prison, won in part on the strength of AI-generated speeches that his supporters put together and broadcast. Even his victory speech was generated by AI. In a very short time, people will be able to communicate in virtually every language using AI. The technology has enormous potential for good. But it also has enormous potential for harm.

For now, an AI-generated voice may still be discernible to the human ear. That is no longer the case with images: the human eye cannot tell the difference between deep-fake and real images. Given the speed at which AI is developing, our ability to distinguish a real (or recorded) human voice from an AI-generated one will likely disappear soon.

This leaves clients exposed to fraud on a scale that is almost incomprehensible. Anyone with even a minimal online presence can be subjected to deep-fake fraud. Individuals and companies with limited resources, unlike a presidential campaign or Taylor Swift, may find themselves unable to determine what is real and what is fake, or may find themselves misrepresented on the internet or in the media. Regulation will always be playing catch-up.

The flip side is that clients are, or soon will be, using AI themselves: in advertising, in their work product, in hiring, and in research and development. They will also be doing their own legal research. Given the ease of use and ever-falling cost of AI platforms, clients are going to get themselves in trouble if they are not guided on the risks.

Clients will not necessarily be sensitive to confidentiality issues, especially if they are using open platforms such as ChatGPT. Lawyers will be offered closed-network platforms for legal research that reduce the risk of inadvertent disclosure of confidential information, but those platforms will come at a cost. Clients will likely gravitate toward low-cost, open platforms that do not treat information submitted to them as confidential. In doing so, they risk running afoul of privacy laws such as the European Union's General Data Protection Regulation (GDPR), especially if they do business internationally.

Another area of potential liability is copyright. Content creators are using "watermarks" (essentially embedded bits of code) to demonstrate that their content has been improperly copied. Intentionally using copyrighted images in publications or advertising could subject the user to substantial damages and even criminal prosecution.

Lawyers will not be given a pass for failing to understand the legal implications of their clients' use of AI. It may seem overwhelming given the speed at which the technology is changing, but there is no shortcut. We all need to know what our clients are doing, and we should take it as an opportunity to stay on top of the risks they face.