The many use cases of Artificial Intelligence (AI)

AI
07 May 2024

Artificial Intelligence (AI) – we just cannot seem to get away from it. It’s everywhere.

It automates our documents, automates our driving, listens to and carries out our commands, and even cooks our dinners.

It’s the handy little helper we always knew we needed but didn’t know to what extent it could be used. Before we know it, we will be nothing more than couch potatoes, shouting commands this way and that, not even needing to lift a finger to change the channel on the TV.

Our houses will be cleaned by the automated house elf that does everything, including folding our clothes; our groceries will be bought by the intelligently automated butler that also cooks nutritious, low-calorie meals. Even our dogs will be walked by an automated dog walker of some kind. Every labour-intensive task you can think of will be taken care of. All that will be left is the work that requires specialist skill, leisure activities and fitness, and chillaxing on the couch.

This is how we see a world that is dominated by everything AI.

One that operates at the highest level, where things run smoothly, where work is top-notch and employees have low stress, where people are fitter (because they have time to work out), where people sleep better (because their minds are no longer plagued by the usual demands of the day), and where there are happy families (because we can actually spend quality time together). The world is a well-run AI machine.

It works. In an ideal AI world.

But this is the real world

In the real world, things that could be used for good and for the better are often misused, often abused and often the cause of disastrous events.

Sure, we’re talking about AI here and not the atomic bomb (have you watched The Bomb and the Cold War yet?). But the underlying concept is true, nevertheless.

We’ll give you an example of what we mean. In an article titled Civilian AI Is Already Being Misused by the Bad Guys, the following chilling paragraph is set out –

In March 2022, “a group of researchers made headlines by revealing that they had developed an artificial-intelligence (AI) tool that could invent potential new chemical weapons. What’s more, it could do so at an incredible speed: It took only six hours for the AI tool to suggest 40,000 of them.

The most worrying part of the story, however, was how easy it was to develop that AI tool. The researchers simply adapted a machine-learning model normally used to check for toxicity in new medical drugs. Rather than predicting whether the components of a new drug could be dangerous, they made it design new toxic molecules using a generative model and a toxicity data set.”

See what we mean? A seemingly innocuous tool that was used for something exceptionally bad. But perhaps that’s too simplistic an explanation. Because just as the old US adage goes – “Guns don’t kill people. People kill people” – the same thinking can be applied here.

You see, the accepted concept here – as set out by the Prompt Engineering & AI Institute – is that “AI’s capabilities are predominantly dependent on the information it has been sent. AI doesn’t discern between factual or false data – it merely processes what it’s given.”

From where we sit, AI seems to be a double-edged sword. Yes, it can improve our lives and make work tasks easier. It can change many aspects of society and human life, but it’s the information we feed the AI that we need to be cautious of. As set out in the article Civilian AI Is Already Being Misused by the Bad Guys –

“like many cutting-edge technologies it can also create real problems, depending on how it is developed and used. These problems include job losses, algorithmic discrimination, and a host of other possibilities. Over the last decade, the AI community has grown increasingly aware of the need to innovate more responsibly.”

Both the development and, most importantly, the use of AI need to be approached with extreme caution.

But how does that affect the legal profession?

We have written many articles on the use cases of AI. We have even commented on how AI is revolutionising the way lawyers practise law – the easiest example being lawyers accessing their files and billing for their work, all from their mobile devices, using AJS Mobile.

But it turns out that lawyers have been using AI for a lot more than just document automation, and we can understand why – simply put, it makes their work so much easier. AI is an amazing tool for e-discovery, for document management (and, obviously, automation), for conducting due diligence – specifically the review of large numbers of documents, such as contracts – for litigation analysis and, last but certainly not least, for legal research.

And it’s within the ambit of legal research that things take a bit of an unexpected turn. You see, first and foremost, legal practitioners owe their clients – and their profession – a duty of care. A client will assume that their legal representative has done the necessary research and has undertaken their own due diligence to ensure that the precedents and/or case law and/or legislation relied upon is relevant, has application and is real – in other words, that it does in fact exist.

In the UK, in the matter of Harber v Commissioners for His Majesty’s Revenue and Customs [2023] UKFTT 1007 (TC) – where the offending research was provided not by a solicitor directly but by “someone in a solicitor’s office” – “In a written document (“the Response”) Mrs Harber provided the Tribunal with the names, dates and summaries of nine First-tier Tribunal (“FTT”) decisions in which the appellant had been successful in showing that a reasonable excuse existed. However, none of those authorities were genuine; they had instead been generated by artificial intelligence (“AI”). … We accepted that Mrs Harber had been unaware that the AI cases were not genuine and that she did not know how to check their validity by using the FTT website or other legal websites” (Lexology).

In Vancouver, Canada, “Instead of savouring courtroom victory, the Vancouver lawyer for a millionaire embroiled in an acrimonious split has been told to personally compensate her client’s ex-wife’s lawyers for the time it took them to learn the cases she hoped to cite were conjured up by ChatGPT. In a decision released Monday, a B.C. Supreme Court judge reprimanded lawyer Chong Ke for including two AI “hallucinations” in an application filed last December. The cases never made it into Ke’s arguments; they were withdrawn once she learned they were non-existent. Justice David Masuhara said he didn’t think the lawyer intended to deceive the court — but he was troubled all the same.” (CBC).

And in the US there was the 2023 matter of Mata v Avianca, “in which lawyers submitted a brief containing fake extracts and case citations to a New York court. The brief was researched using ChatGPT. The lawyers, unaware that ChatGPT can hallucinate, failed to check that the cases actually existed. The consequences were disastrous. Once the error was uncovered, the court dismissed their client’s case, sanctioned the lawyers for acting in bad faith, fined them and their firm, and exposed their actions to public scrutiny” (The Conversation).

Wait. What’s an AI hallucination?

According to IBM –

“AI hallucination is a phenomenon wherein a large language model (LLM) – often a generative AI chatbot or computer vision tool – perceives patterns or objects that are non-existent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”
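
In practical terms, a hallucinated authority looks plausible but cannot be verified anywhere. For the technically curious, the sketch below illustrates the discipline the cases above demand – treat every AI-suggested citation as unverified until it has been confirmed against an authoritative source. This is a minimal, hypothetical Python illustration only: exists_in_official_register is a placeholder we have invented for whatever check applies in your jurisdiction (a court registry search, an official law report database or a manual check), not a real API.

```python
# Minimal sketch (hypothetical, for illustration only): AI-suggested
# authorities start life as "unverified" and are only kept once confirmed
# against an authoritative source.

from dataclasses import dataclass

@dataclass
class Citation:
    case_name: str
    reference: str  # e.g. "[2023] UKFTT 1007 (TC)"

def exists_in_official_register(citation: Citation) -> bool:
    # Placeholder for a real verification step (court registry, official
    # law reports, or a manual check). Failing closed is the safe default:
    # until confirmed, the authority is presumed not to exist.
    return False

def vet_ai_citations(suggested: list[Citation]) -> list[Citation]:
    """Keep only authorities that pass verification; flag the rest."""
    verified = []
    for c in suggested:
        if exists_in_official_register(c):
            verified.append(c)
        else:
            print(f"UNVERIFIED: {c.case_name} {c.reference} - "
                  "possible AI hallucination, do not cite.")
    return verified

if __name__ == "__main__":
    # Two citations as a chatbot might produce them; until checked,
    # neither should make it into a filing.
    drafts = [
        Citation("Mata v Avianca", "(S.D.N.Y. 2023)"),
        Citation("Fictional v Example", "[2021] EWHC 999 (QB)"),
    ]
    safe = vet_ai_citations(drafts)
    print(f"{len(safe)} of {len(drafts)} citations verified.")
```

The point of the fail-closed default is the lesson of Harber, Ke and Mata: the burden of proving that a case exists sits with the human lawyer, never with the tool that suggested it.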

With the above situations in mind, we believe the following extract from the article titled Addressing the Abuses of Generative AI in the Legal Profession to be key –

“One of the most significant concerns is the potential for AI-generated content to perpetuate bias and inequality. AI models are trained on data from the past, and if that data contains biases, the AI can inadvertently replicate and amplify them. In the legal context, this means that AI-generated legal documents or advice could perpetuate historical injustices, creating a legal system that is unfair and inequitable.

Moreover, there is a risk that the use of generative AI may diminish the critical human element in the practice of law. While AI can streamline processes and improve efficiency, it cannot replace the nuanced judgment, empathy, and ethical considerations that lawyers bring to their work. Over-reliance on AI could erode the fundamental values that underpin the legal profession.

Transparency and accountability are essential in addressing these concerns. Legal professionals must have a clear understanding of how AI systems make decisions and generate content. They must also be vigilant in scrutinizing and auditing AI-generated work to ensure it aligns with ethical standards and legal principles.

Regulation and oversight are equally vital. As AI becomes increasingly integrated into the legal landscape, regulatory bodies and professional organizations must establish guidelines and standards for its responsible use. This includes setting boundaries on the roles AI can play in legal practice and ensuring that AI-driven decisions are subject to human review and accountability.”

What’s the takeaway?

We think it’s fairly simple.

While AI has many use cases and promises to make both life and work easier, it must be used with extreme caution.

As lawyers, you are held to a higher standard of accountability. You are the custodians of the law and must therefore practise it with care, with caution and with finesse – remembering all the while that when you misuse, misdirect or misquote the law, something within our legal system dies just a little. So handle the law – the practice, the use and the wielding thereof – with care. And know that while AI can make your workload that much lighter, there is no replacement for the human spirit, the human touch and the human intellect.

To find out how to incorporate a new tool into your existing accounting and practice management suite, or how to get started with legal tech, feel free to get in touch with AJS – we have the right combination of systems, resources and business partnerships to assist you in incorporating supportive legal technology into your practice. Effortlessly.

AJS is always here to help you, wherever and whenever possible!

(Sources used and to whom we owe thanks: IEEE Spectrum; Prompt Engineering & AI Institute; Lexology; IBM; LexisNexis; Law.com; The Daily Record; The Conversation; CBC.)

(This article is provided for informational purposes only and not for the purpose of providing legal advice. For more information on the topic, please contact the author/s or the relevant provider.)
Alicia Koch

Alicia Koch is an admitted attorney with over 10 years PQE. She has worked in law firms, has had her own legal consulting company and has been an in-house legal...
