Understanding the Potential Benefits and Risks of Using ChatGPT in the Legal Profession
You may have used a chatbot in the past and your experience might have looked something like this:
Bot: Hi! How can I help you?
You: Can I get a refund?
Bot: Here is a link to our policy on refunds.
You: I need a refund. Where can I submit a request?
Bot: You can contact customer service through our hotline. Alternatively, here is a link to our policy on refunds. Is there anything else I can help you with?
You: I need a refund. I couldn’t get through to customer service and the refund policy has no clear instructions!
Bot: Sorry, I don’t understand your request. Is there anything else I can help you with?
This hypothetical exchange demonstrates how rudimentary (and often unhelpful) the typical chatbot is. That is, until recently. Perhaps the best-known example of recent advancement in chatbots is the release in late 2022 of OpenAI’s Chat Generative Pre-Trained Transformer (or ChatGPT for short). Unlike previous chatbots, ChatGPT has a nuanced understanding of language and can produce and summarise information on almost any topic within seconds.
It is only natural that many professionals, including lawyers, might ask themselves – can I use ChatGPT to do my job?
In a field where research and drafting are fundamental, the potential of a tool like ChatGPT for lawyers is obvious. For example, it could reduce time spent researching and summarising the law and could even be used to draft legal advice or documents.
But lawyers (and bush lawyers) should beware. ChatGPT still has a long way to go before it can match a lawyer’s expertise and its use poses a number of risks in a legal setting. That said, there is scope to use ChatGPT productively and its capabilities, and those of systems like it, will only improve over time.
How does ChatGPT work?
ChatGPT is powered by artificial intelligence and is (currently) accessible for free. The user can type in any question or request, press enter and receive an answer almost immediately.
ChatGPT’s ability to understand text commands comes from its underlying large language model, GPT-4. The model gives ChatGPT its conversational feel: it uses natural language processing to interpret the user’s input and then produce a response that mimics human conversation.
It is undoubtedly ground-breaking. But is it intelligent enough to do legal work?
What could ChatGPT do for lawyers?
Here are just a few examples of tasks that ChatGPT can do in a legal context:
- identify and summarise case law, legislation and legal commentary;
- provide answers to simple, and even some more difficult, legal questions;
- provide suggested wording for drafting of contracts; and
- assist in drafting correspondence and advice or court materials.
Using ChatGPT for these tasks could increase efficiency: answers arrive almost instantly, freeing lawyers to spend more time on complex work. In fact, ChatGPT is already being used in some firms.1
What could go wrong?
Lawyers who rely on ChatGPT could be putting client information at considerable risk. Any information submitted to ChatGPT is collected by OpenAI’s systems and may be stored on third-party servers. OpenAI may access that information and use it to improve the performance of its products. And if any storage servers used by OpenAI are compromised, information given to ChatGPT in the course of a request could be accessed by third parties. Each of these possibilities increases the risk of a breach of a lawyer’s duty of confidentiality to the client, loss of legal professional privilege, or a data breach involving personal or market-sensitive information.
ChatGPT is not all-knowing. It sometimes gives inaccurate answers. There are several reasons for this.
ChatGPT was trained on a specific dataset which has only been updated to September 2021 (at the time of writing). A lot can happen in 18 months, which means that any recent developments in legislation or case law are not factored into ChatGPT’s answers. When asked for its sources for an output, ChatGPT occasionally provides links to webpages or refers to legislative instruments which are no longer in existence or in force. Sometimes it will go on what can only be described as a digital acid trip and will simply make up output or sources, in a phenomenon AI developers call “hallucination”. Even so, the output can still appear convincing at face value.
ChatGPT’s creators have admitted that the chatbot may provide “plausible-sounding but incorrect or nonsensical answers”.2 The risk of this occurring may be heightened with niche questions of law.
It is also difficult for AI to understand the intricacies of a client’s circumstances in the same way that people can and to provide tailored advice.
There is nothing wrong with the appropriate use of AI by lawyers to provide efficient legal services. However, lawyers have obligations to exercise skill and independent judgement, so they must properly scrutinise whatever the AI produces.
Infringement of copyright
ChatGPT’s use of its training dataset raises questions around intellectual property. Who owns copyright in the materials ChatGPT uses to formulate its output? Does ChatGPT’s output infringe copyright, and what are the implications if we then use that material without appropriate permissions? What steps need to be taken to avoid intellectual property infringement?
Bias in the training data
The training dataset used by ChatGPT is made up of human-created content (that is, content drawn from the internet), meaning that it inherently contains biases. As a result, ChatGPT could provide irrelevant or even inappropriate answers, which should be avoided within a workplace or where ChatGPT’s output is used in work that is ultimately provided to a client.
Managing the risks
Here are some steps that lawyers should consider taking, if they choose to use ChatGPT.
Use ChatGPT as a starting point and always verify and tailor
For research tasks, ChatGPT could be used as a starting point by identifying key issues or the most relevant legislation and case law. Once you have this starting point, carry out your own research to verify the information provided by ChatGPT and ensure it is up to date. Never copy and paste outputs unthinkingly and always assume that verification is necessary. In addition to ensuring accuracy, this will have the added benefit of keeping your legal research skills sharp.
For drafting tasks, ChatGPT could be used as a starting point for the wording of a clause or to provide a general structure or layout of a document. If you do use ChatGPT for a drafting task, watch out for generic drafting and make sure that the wording is appropriate to the matter at hand and the relevant jurisdiction.
Keep it general
Using ChatGPT as a starting point means that there is no need to divulge client- or transaction-specific information when submitting a request. Questions should be phrased generally and the output can be tailored outside of the ChatGPT platform, as necessary.
Set ground rules
Ideally, policies and procedures should be put in place to manage and mitigate these risks. These could include:
- explaining how AI can be used productively;
- pointing out the potential risks;
- prohibiting the submission of client information;
- setting procedures for verifying output and other quality controls; and
- specifying consequences for misuse.
It certainly is a brave new world, and lawyers need to make their peace with the fact that legal AI is here to stay – preferably as a collaborator rather than a threat.
Note: ChatGPT did not write this article. Well, not much of it.
1 For example, see: Sara Merken, ‘OpenAI-backed startup brings chatbot technology to first major law firm’, Reuters (Article, 17 February 2023) https://www.reuters.com/legal/transactional/openai-backed-startup-brings-chatbot-technology-first-major-law-firm-2023-02-15/; Michael Pelly, ‘Law firms say ChatGPT an “opportunity, not a threat”’, Australian Financial Review (Article, 9 February 2023) https://www.afr.com/companies/professional-services/law-firms-say-chatgpt-an-opportunity-not-a-threat-20230208-p5cj2j.
2 See ‘Limitations’ at https://openai.com/blog/chatgpt.