Crafting Perfect Prompts for AI Legal Research
Lots of articles have been written about what not to do with AI. By now, every litigator should know not to let AI-generated work go out without reviewing it, and not to input confidential client information without checking who can read the data. There are also lots of articles about which AI tools you should or should not be using, and how they compare.
This article is different. It assumes that you are going to use generative AI to do legal research and have chosen which tool to use. It proposes five ways to refine your prompts to get better answers, regardless of what tools you are using or what types of questions you are researching.
Tip 1: Set the Temperature
“Temperature” is a term of art in AI research. Technically, it controls how much randomness the model uses when choosing its words; practically, you can think of it as a measure of how eager to please the AI will be. It is best understood with an example. Suppose you ask an AI tool, “Find me a case that says X,” but no such case exists and, in fact, the law is the opposite.
- If the temperature is zero, it may say, “You are wrong. There is no case that says X.” It doesn’t care about pleasing you and does little more than find identical text.
- If the temperature is higher, it may say, “There is no case that says X, but here is a case that says Y, which is similar to X.” Y may or may not actually be similar to X.
- If the temperature is even higher, it may say, “Here is a case that says X,” without telling you that the case does not exist or actually says the opposite.
Some AI tools let you set the temperature explicitly, for example with a slider. But even if your tool does not have one, try adding “set temperature to zero,” “set temperature low,” or “set temperature high” to your prompts. Many tools are sophisticated enough to understand the instruction and adjust accordingly.
It should be obvious why you might want low temperature: fewer hallucinations. Note that I did not say “no” hallucinations; even at low temperature, AI can misinterpret a case or miss that it has been overturned. But most of the time with legal research, low temperature is the right choice.
You may want to try setting the temperature higher when the case law is against you and you need a creative argument, or when you are working in an area with so little case law that the only arguments will come by analogy. Just be aware that when you set the temperature high, you have to be extra careful to double-check that the AI’s interpretation is accurate.
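For readers whose tool exposes an API rather than a slider, temperature is usually a single numeric parameter on each request. The sketch below is only an illustration: it assumes the OpenAI Python client, and the model name and prompt are placeholders; your own research tool may expose the setting differently, or not at all.

```python
# Minimal sketch: setting temperature explicitly on an API-based tool.
# Assumes the OpenAI Python client; the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",   # placeholder model name
    temperature=0,    # 0 = most literal; higher values = more creative (and riskier)
    messages=[
        {"role": "user", "content": "Find me a case that says X."},
    ],
)

print(response.choices[0].message.content)
```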
Tip 2: Ask for Keywords or Prompts
As you develop expertise in an area of law, you develop an intuitive sense of the terms of art you care about. For example, where a law student might ask, “When can a bankrupt debtor sell future income they may not get?”, a bankruptcy expert might instead ask, “Does the anti-deprivation rule apply to property to be vested only on satisfaction of a condition precedent?” There is a case squarely on point. You are more likely to find it with the second prompt.
So get the AI tool to build the second prompt for you. If all you know is the first prompt, type it in, but instead of asking for an answer, ask what terms of art or legal doctrines might be relevant. When the AI tool tells you about the anti-deprivation rule and conditions precedent, you can turn that into the second prompt and use it to get your answer.
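If you work through an API rather than a chat window, the same two-step approach can be scripted. The sketch below makes the same assumptions as the earlier example (OpenAI Python client, placeholder model name): the first call asks only for vocabulary, and the second folds that vocabulary into the real research prompt.

```python
# Minimal sketch of the two-step workflow: ask for terms of art first,
# then use them to build a sharper research prompt. Same assumptions as above.
from openai import OpenAI

client = OpenAI()
plain_question = "When can a bankrupt debtor sell future income they may not get?"

# Step 1: ask for vocabulary, not an answer.
keywords = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,
    messages=[{
        "role": "user",
        "content": ("Do not answer this question yet. List the legal terms of art "
                    f"and doctrines relevant to it: {plain_question}"),
    }],
).choices[0].message.content

# Step 2: fold the terms of art into the real research prompt.
refined_prompt = (f"Using these doctrines as a guide:\n{keywords}\n\n"
                  f"Now answer, with case citations: {plain_question}")
answer = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,
    messages=[{"role": "user", "content": refined_prompt}],
).choices[0].message.content

print(answer)
```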
Tip 3: Ask for the Chain of Thought
Modern legal AI tools do not simply plug your prompt into their model and spit out whatever the model says. They usually go through their own multi-step process of refining the question first. It is often a good idea to ask the tool to show you those steps by asking for its “chain of thought.”
The chain of thought will identify the terms of art that the tool came up with, which you can use to refine your next prompt. If the tool went totally off course, that is usually visible in the chain of thought, so you can try the prompt again but expressly tell it not to veer off in that direction. For example, if you were looking for what a reasonable expectation of privacy means under privacy legislation, but the chain of thought shows that the model started looking at criminal cases, you can prompt it again saying “Focus on cases under privacy statutes or tort law, and only refer to criminal law to the extent that it is endorsed in those cases”.
Tip 4: Refine with Follow-Up Questions
When you are looking for cases with an AI tool, you can expect to get some that are not relevant or are easily distinguishable. Many people stop there, concluding either that the AI tool does not understand or that there is no case on point. But sometimes all it takes is identifying why a result is wrong and asking the AI tool to try again. Consider follow-up prompts like “Try again but limit it to cases about [specific cause of action],” “Try again but exclude cases in [specific area of law],” or “Try again but exclude interlocutory decisions.”
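One technical detail worth knowing: a follow-up only works as a follow-up if the tool still has the earlier exchange in front of it. In a chat window that happens automatically; over an API you carry the history forward yourself. The sketch below is illustrative only and makes the same assumptions as the earlier examples.

```python
# Minimal sketch: sending a follow-up in the same conversation so the tool
# keeps the context of its first answer. Same assumptions as the earlier sketches.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Find cases holding X in [jurisdiction]."}]

first = client.chat.completions.create(model="gpt-4o", temperature=0, messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Suppose the first batch included irrelevant interlocutory decisions: refine and retry.
messages.append({"role": "user",
                 "content": "Try again, but exclude interlocutory decisions."})
second = client.chat.completions.create(model="gpt-4o", temperature=0, messages=messages)

print(second.choices[0].message.content)
```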
Tip 5: If You Find Something Good, Look for Something Better
Sometimes, the AI tool gives you a great case. The natural inclination is to stop there. But there might be an even better case out there. Consider prompts like “Is there an appellate decision that says the same thing?”, “Is there a case from [your jurisdiction] that stands for the same proposition?”, or “Is there a case with the same holding on facts like [your client’s]?”
None of the tips in this article will work on every tool or on every question. But you would be amazed how often they do. And you lose nothing by giving them a try. Happy prompting!
About the Author
Adil Abdulla, Sotos LLP
Adil is a litigation lawyer at Sotos LLP, where his practice focuses on class actions, complex IP and competition disputes, and emerging issues at the intersection of law and technology. He regularly advises clients on the practical implications of legal developments. Adil can be reached by email at aabdulla@sotos.ca or by phone at 416.572.7325.
