Technology associate Phillippa Stubbs considers whether the rapid evolution of large language models means the end of legal careers
The year is 2013, 10 years ago. The UK is still part of the EU; emojis only depict one race; Manchester United have won another Premier League title; and Siri is demonstrating its new “voice”. I am enjoying raising one child and contemplating the possibility of training to become a lawyer: an unassailable profession with a solid, scarcely changing career path.
Today, I am a qualified commercial contracts lawyer. Daily tasks include: asking Siri to remind me of all the important things I need to do today; consoling one child after Man United lose yet another game; monitoring the other child’s social media use; and working with amazing clients who continue to develop and use revolutionary technology.
All this naturally leads to reflection on how the world has changed in the last 10 years, and some anxious speculation about how the next 10 may unfold.
A career in law has always been presented as a “safe”, long-term choice. The skills acquired during qualification will serve a variety of vocations, and everyone will need a lawyer at some point. Right?
However, technological advancements over the last 10 years, particularly the recent spurt of AI innovations, are making that “safe” career a little less certain.
Like many practising professionals, I cannot avoid becoming more and more reliant upon technology. I communicate via instant message, video conferencing and email. I access and save documents in a cloud-based system. I use specially developed software for drafting, research, filing and billing.
There is no denying that AI is improving at a rapid rate. It has been shown to increase productivity and efficiency within the legal industry, and it is often used in document generation and bundling, legal analysis and legal research.
But lawyers do a lot more than just bundle documents. Using AI effectively will decrease costs for clients and free up more time for lawyers to do what they do best. It will allow us more time to act as sounding boards, market readers, friends and confidants. We will have more opportunities to deliver specialist advice and provide better client care.
But will AI soon be able to perform these human and more personable functions too?
AI is becoming so intuitive it can be used to support legal services, but what would it need to do to provide legal advice by itself? And what does this mean for aspiring junior lawyers?
Beyond interpreting legislation and maintaining an up-to-date library of case law, would it be able to correctly interpret and apply the law to a client’s needs and provide practical or commercial advice?
In the future, it may be possible to create a predictive algorithm for complex contract negotiation. We could teach AI a number of variables including, at the very least: what each party wants; what they are willing to concede; how critical the contract is to each party; and their “walk away” point.
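Purely as an illustration of what those variables might look like, here is a toy sketch in Python. Nothing in it reflects any real legal AI product; the NegotiationPosition class, the numbers and the scoring rule are all invented for this example.

```python
from dataclasses import dataclass


@dataclass
class NegotiationPosition:
    """One party's stance on a contract negotiation (illustrative only)."""
    desired_outcome: str      # what the party wants
    concessions: list[str]    # points the party is willing to give up
    criticality: float        # how critical the deal is, 0.0 (low) to 1.0 (must-have)
    walk_away_point: float    # minimum acceptable offer score before the party exits


def likely_to_accept(position: NegotiationPosition, offer_score: float) -> bool:
    """Crude, made-up prediction: the more critical the deal is to the party,
    the further below their stated walk-away point they may still accept."""
    effective_threshold = position.walk_away_point * (1 - 0.5 * position.criticality)
    return offer_score >= effective_threshold


# Example usage with entirely fictional figures
supplier = NegotiationPosition(
    desired_outcome="cap liability at 150% of annual fees",
    concessions=["longer payment terms", "extended warranty"],
    criticality=0.8,
    walk_away_point=0.6,
)
print(likely_to_accept(supplier, offer_score=0.45))  # True: the deal is critical, so the threshold drops
```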
But these variables are often bespoke to the scenario and the parties involved. There would be an almost limitless number of outcomes, and even if a client could input such variables, the AI would be unlikely to take into account the human interactions which often support (or hinder!) successful negotiations.
As intelligent and efficient as AI is, there have been well-documented cases of the risks posed by relying on it completely.
In the otherwise uninteresting personal injury case of Mata v Avianca, Inc., a senior legal professional used ChatGPT to carry out his research, including sourcing precedents that supported the claimant’s case. The cases “found” by ChatGPT were made up, and when the AI was asked where it had found them, it provided further false responses.
This is just one example showing that AI, while an excellent support tool, still requires human intervention and supervision.
AI’s current development could be compared to the cognitive development of a young child, which I now know quite a lot about. Humans instinctively learn by watching and copying others, and even begin to learn how to manipulate a situation by saying things that aren’t true.
While using AI carries a number of risks and there is a degree of uncertainty surrounding its place in the world, within the legal industry it seems to be having a positive effect on my role as a legal advisor. My job appears to be secure, for now.
In the next 10 years, I expect to see AI forming part of legal training courses, including how to draft an AI prompt to achieve a desired outcome. These skills will then start to appear as expected entries on lawyers’ CVs.
To continue to provide effective support and advice to my clients, it will be important to educate myself and to fully understand how to use AI.
AI should be used to support elements of our work and to make us better at our jobs. Technology will continue to be a wonderful, ever-changing commodity that, with the right education, can create a world of possibilities.
Phillippa Stubbs is a technology and outsourcing associate at Fieldfisher.