What the Legal Industry Needs to Know about (Quickly) Advancing Technologies

Richard Finkelman and Craig Freeman

New breakthroughs—and related ethical concerns—are emerging, and smart lawyers are working to get ahead of the curve

The legal industry, despite its somewhat stodgy reputation, actually has been out front in adopting automation technologies like word processing and electronic discovery systems. But for many lawyers, artificial intelligence (AI) has seemed too complicated and too transformative to embrace fully, leaving some of its most useful tools for lawyers untapped.

But that appears to be changing.

Baylor University’s Executive LL.M. program in litigation management is focused partly on how advanced technologies, specifically AI, are changing the law and how lawyers can deliver high-quality results in affordable and cost-effective ways. The 14-month program, which started in 2018, has attracted an array of students, including in-house counsel from major corporations and partners from high-profile law firms.

“One of the things we preach throughout all of our courses is just how much technology can help lawyers through the litigation process,” said Stephen Rispoli, assistant dean of student affairs and pro bono programs at the Baylor University School of Law. “It’s only been in the last 5 or 10 years where the technology in litigation management has advanced enough to live up to its promise.” 

BRG professionals work closely with the Baylor program. The authors have been guest speakers on campus and have developed a series of videos and distance-learning content that focuses on innovations like machine learning and natural language processing, which fall under the broader AI umbrella. Our interactions fill us with optimism about how lawyers—even those who are busy and successful after several years of practice—have a strong appetite for learning about these new technologies.

While Baylor’s program is unique in its focus on litigation management, other LL.M. programs at the intersection of science and law also highlight the rising importance of advanced technologies to the practice of law. However, there is still a lot to learn.

In this article, we will detail how AI is changing the practice of law and, notably, litigation. We’ll discuss exciting emerging technologies and important ethical questions.

The Future of Litigation Management

AI Technologies to Know About

AI is not a single technology. It spans multiple branches and fields of study, encompassing everything from the robotics in self-driving cars to the recommendation engines on Netflix and Amazon. For lawyers, the most important AI technologies come from the fields of machine learning and natural language processing. Many lawyers are familiar with predictive coding technology, also known as technology-assisted review (TAR), as its use has been prevalent for several years. Now, new and exciting AI technologies promise to provide breakthroughs in competitive advantage through the use of predictive analytics.

These technologies will bring a competitive edge and deliver lower costs, faster processes and new insights that can lead to better results for clients. But like any new technology, they come with risks—and in the legal profession, some of the biggest challenges will be ethical (as we’ll detail below). Turning over parts of a lawyer’s work and decisions to machines creates inherent challenges for a system historically built on human judgment.

Among the new technologies beginning to emerge is linguistic clustering, a technique that groups similar documents together using mathematical measures of textual similarity, leading to more efficient and productive reviews and greater subject knowledge. The technique can improve review speeds by as much as 30 percent, which can be a game changer in cases involving millions of documents. A simplified sketch of the idea appears below.
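To make the clustering idea concrete, here is a minimal illustrative sketch using open-source Python tools (scikit-learn’s TF-IDF vectorizer and k-means). The sample documents are invented, and commercial review platforms use far more sophisticated linguistic models, but the underlying grouping step works the same way.

```python
# Minimal sketch: group similar documents so reviewers can work
# cluster by cluster. Sample documents are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "Please find attached the executed supply agreement.",
    "The supply agreement is attached for your signature.",
    "Lunch on Friday? The usual place at noon.",
    "Are we still on for lunch Friday at noon?",
]

# Convert each document to a numeric vector weighted by term importance.
vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)

# Group similar documents into clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for doc, label in zip(documents, labels):
    print(label, doc[:50])
```

Reviewing cluster by cluster means similar documents are seen together rather than scattered across the review, which is where the speed gains come from.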

There also are new ways to identify potentially privileged documents without relying on keywords. Attorneys have distinctive writing styles, and natural language processing can be used to flag documents that may be privileged because they were likely written by lawyers. One approach employs semi-supervised learning: lawyers and paralegals label a small subset of documents, which trains the system to predict which remaining documents are most likely to be relevant or privileged. Along the way, the system grows smarter and smarter, as the sketch below illustrates.
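The following rough sketch, again in Python with scikit-learn, shows one iteration of the loop described above: a small set of reviewer-labeled documents trains a simple classifier, which then ranks the unlabeled pool so the documents most likely to be privileged surface first. The documents, labels and model choice here are illustrative assumptions, not a description of any particular vendor’s tool.

```python
# One pass of the label-train-rank loop; in practice reviewers label
# the top-ranked documents and the model is retrained repeatedly.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_docs = [
    "Counsel advises that the indemnity clause creates litigation risk.",
    "Per our attorney's advice, do not respond until we assess privilege.",
    "Shipping update: the order left the warehouse this morning.",
    "Reminder: the team offsite is rescheduled to next Thursday.",
]
labels = [1, 1, 0, 0]  # 1 = potentially privileged, 0 = not

unlabeled_docs = [
    "Our lawyers recommend asserting work-product protection here.",
    "The cafeteria menu for next week is attached.",
]

vectorizer = TfidfVectorizer()
X_labeled = vectorizer.fit_transform(labeled_docs)
X_unlabeled = vectorizer.transform(unlabeled_docs)

model = LogisticRegression().fit(X_labeled, labels)

# Rank unlabeled documents by predicted probability of privilege.
scores = model.predict_proba(X_unlabeled)[:, 1]
for doc, score in sorted(zip(unlabeled_docs, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc[:60]}")
```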

AI technology also is being used to analyze the sentiment of language, particularly in email communications. Analytics can identify positive, neutral and negative sentiment and help lawyers quickly find the time periods and parties involved in outlier communications: exchanges that are far more frequent, or far more positive or negative, than the baseline. This can help lawyers understand fact patterns and witness involvement in a case. The short example below shows the core scoring step.
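As a rough illustration of that scoring step, the sketch below uses NLTK’s VADER sentiment analyzer to score a few invented emails and flag the strongly positive or negative ones. Production tools layer time-period and custodian-level aggregation on top of this kind of per-message score.

```python
# Score each message from -1 (very negative) to +1 (very positive)
# and flag the outliers. Sample emails are hypothetical.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

emails = [
    "Great work on the filing; the client is thrilled with the result.",
    "This is completely unacceptable. Fix it today or we escalate.",
    "Attached is the agenda for Tuesday's status call.",
]

for text in emails:
    score = analyzer.polarity_scores(text)["compound"]
    flag = "OUTLIER" if abs(score) > 0.6 else ""
    print(f"{score:+.2f} {flag:8} {text[:55]}")
```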

In entity extraction, words and phrases of interest are pulled from documents and assigned mathematical scores that predict their importance. Before documents are ever reviewed, this can surface candidate search terms that lawyers may never have thought to use: words and phrases that are important but do not occur frequently. A brief illustration follows.
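As an illustration, the sketch below uses spaCy’s small English model to extract named entities from a few invented documents and count how often each appears, so rare but potentially important names and phrases can be surfaced as candidate search terms. The simple frequency count here stands in for the more elaborate importance scores that real tools compute.

```python
# Extract named entities and surface the rare ones as candidate
# search terms. Requires: python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

documents = [
    "Acme Corp wired $2.4 million to Meridian Holdings on March 3.",
    "The Meridian Holdings transfer was approved by the audit committee.",
    "Quarterly results for Acme Corp exceeded expectations.",
]

entity_counts = Counter()
for doc in nlp.pipe(documents):
    for ent in doc.ents:
        entity_counts[(ent.text, ent.label_)] += 1

# Low-count entities may be search terms a reviewer would not have
# thought to propose up front.
for (text, label), count in sorted(entity_counts.items(), key=lambda x: x[1]):
    print(f"{count:2d}  {label:10} {text}")
```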

Last, the holy grail of litigation is the possibility of predicting settlements and, with great precision, their dollar values. This could lead to a wholesale reduction in corporate legal departments’ spending and to profound changes in the operation of law firms. But the technology is still in a proof-of-concept phase, with significant rollout expected in the next couple of years.

This raises important ethical considerations for lawyers planning to use any of these technologies in their practices. Even lawyers who are not should know the technologies might appear in matters they are working on. The legal industry isn’t the only one at a crossroads on these evolving technologies, and ethical guidelines are coalescing around an important set of principles, as we’ll discuss below.

Legal Ethics in the Age of AI

Organizations, notably Microsoft and the European Commission, are developing initiatives to guide AI’s ethical development, deployment and governance. Most of the guidelines are built around four core principles: transparency; justice and fairness; responsibility and accountability; and privacy.

The most common principle, transparency, is about providing visibility into how applications interpret information, communicate and make decisions. That is achieved by disclosing data usage, user interactions and automated decisions. The benefits include verifying predictions, identifying flaws and biases, ensuring compliance with regulations, fostering trust and adoption, and facilitating future research.

AI applications reflect the backgrounds and biases of the people and data behind them, which has led to problems (e.g., facial recognition technology performing poorly for racial minorities). Justice and fairness is about developing technical methods that identify and remedy bias, and about acquiring more accurate and diverse data to train systems.

Responsibility and accountability considerations pertain to questions such as who is responsible and legally liable when a self-driving car operating with human-driver interaction is involved in a crash. In some contexts it has even been proposed that AI be treated as a legal entity; the EU has debated whether certain AI systems should be granted a form of legal personhood.

With regard to privacy, we need technical solutions (e.g., secure cloud architecture, encryption, firewalls and differential privacy) to ensure data security and governance. These must follow guidelines set forth under the US Health Insurance Portability and Accountability Act, EU General Data Protection Regulation and California Consumer Privacy Act on matters like access control, anonymization and data minimization.

Lawyers’ own proficiency in adopting AI, counseling clients about it and understanding the related ethical issues, including how prevailing ethical thinking is progressing, could be the difference in securing future client work and gaining an advantage in the courtroom.

What’s Next

Lawyers need to understand these issues to ready themselves for a world where data scientists are called as witnesses to impugn or defend AI technologies. Sometimes these will be technologies used in litigation, but increasingly the arguments will be about AI technologies that are part of the dispute itself. This trend is significant: if lawyers want to be effective advocates, they will need to understand AI well enough to grasp how their clients use it in products and services.

“Businesses are using these advanced technologies to make products and services better,” said Kyle Dreyer, executive program coordinator for Baylor’s LL.M. program. “Increasingly, the expectation in the business community is that their lawyers should be able to do the same thing.”

We share a belief that lawyers, like consultants, are paid to give their clients the best business advice they have in the context of the problem or issue being faced. With the increasing use of AI technologies in the legal industry, it is paramount that attorneys educate themselves on how things work and what ethical issues exist for products and services that use AI. Once jury trials are back up and running, the need for business acumen in this area will become increasingly obvious.