Pleading Your Case to the AI Judge

In a 2013 legal case in the U.S., a man named Paul Zilly was convicted of stealing a lawnmower.

Initially, he accepted a plea deal of one year in jail followed by a supervision order. But an early artificial intelligence tool assessed him as being at high risk of reoffending, and the sentence was lengthened to two years.

In 2016, the non-profit investigative site ProPublica analyzed the records of around 10,000 criminal defendants in Florida. It found that African American defendants were more likely than white defendants to be incorrectly flagged by the software as high risk, raising the possibility that, had Zilly been white, the original sentence would have been allowed to stand.

The case is one example given in a study on the use of AI in the legal system released this month by the Australian Institute for Judicial Administration (AIJA), UNSW Law & Justice, the UNSW Allens Hub for Technology, Law and Innovation, and the Law Society of NSW's Future of Law and Innovation in the Profession stream (FLIP Stream).

The report, AI Decision Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators, identified examples of the use of AI in Australia and overseas, from computer-based dispute-resolution software to 'AI judges' built on rules-driven computer code and deployed to help clear case backlogs.

In the case of the U.S. software, called COMPAS, the tool is intended to augment the judicial process by conducting a risk assessment on the likelihood that an offender will break the law again.

COMPAS combines the responses to a 137-item questionnaire, ranging from the directly relevant ("How many times has this person been arrested before, as an adult or juvenile?") to the far more general ("Do you feel discouraged at times?").

The code and processes underlying COMPAS are secret, unknown to the prosecution, defense, and judge alike, yet they can have real consequences, as the Zilly case illustrates.

Clearing the backlog

The Estonian Ministry of Justice says it will seek to clear a backlog of cases using 100 so-called 'AI judges', the intention being to free human judges to deal with more complex disputes.

Reportedly, the project could adjudicate small-claims disputes of under EUR7,000. In concept, the two parties would upload documents and other relevant information, and the AI system would issue a decision that could be appealed to a human judge.

“Artificial intelligence, as a concept and as practice, is becoming increasingly popular in courts and tribunals internationally. There can be both immense benefits as well as concerns about compatibility with fundamental values,” said one of the report’s authors, Professor Lyria Bennett Moses from the University of New South Wales. 


“AI in courts extends from administrative matters, such as automated e-filing, to the use of data-driven inferences about particular defendants in the context of sentencing. Judges, tribunal members, and court administrators need to understand the technologies sufficiently well to be in a position to ask the right questions about the use of AI systems.”

Professor Bennett Moses suggested that using some AI tools is “in conflict with important legal values.” 

“There are tools, frequently deployed in the United States, that ‘score’ defendants on how likely they are going to reoffend. This is not based on an individual psychological profile but rather on the analysis of data. If people ‘like’ you have reoffended in the past, then you are going to be rated as likely to reoffend,” she said.

“The variables used in this analysis include matters such as whether parents are separated (and, if so, one’s age when that occurred) – the kinds of things that might statistically correlate with offending behavior but are outside one’s own control. The tool is also biased (on some fairness metrics) against certain racial groups.”
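Professor Bennett Moses's point, that such scores reflect the behavior of statistically similar people rather than the individual, can be sketched with a toy nearest-neighbour model. This is a minimal illustration only: COMPAS's actual model is proprietary, and the feature names, weights, and data below are invented for the example.

```python
# Toy "risk score": the fraction of the k most similar past cases that
# reoffended. The defendant's own conduct plays no direct role; only
# resemblance to past offenders matters. All data here is invented.

def risk_score(defendant, past_cases, k=3):
    """Score a defendant by the outcomes of the k most similar past cases."""
    def distance(a, b):
        # Squared Euclidean distance over the shared feature names.
        return sum((a[f] - b[f]) ** 2 for f in a)

    ranked = sorted(past_cases, key=lambda c: distance(defendant, c["features"]))
    nearest = ranked[:k]
    return sum(c["reoffended"] for c in nearest) / k

past_cases = [
    {"features": {"prior_arrests": 5, "age_at_first_offence": 16}, "reoffended": 1},
    {"features": {"prior_arrests": 4, "age_at_first_offence": 17}, "reoffended": 1},
    {"features": {"prior_arrests": 0, "age_at_first_offence": 30}, "reoffended": 0},
    {"features": {"prior_arrests": 1, "age_at_first_offence": 28}, "reoffended": 0},
    {"features": {"prior_arrests": 6, "age_at_first_offence": 15}, "reoffended": 1},
]

# A defendant whose data resembles past reoffenders is rated high risk,
# regardless of any individual psychological assessment.
print(risk_score({"prior_arrests": 5, "age_at_first_offence": 16}, past_cases))
# prints 1.0
```

The sketch also shows why such tools inherit bias: if a feature correlates with a group that has historically been arrested more often, members of that group score as "similar" to past offenders whether or not they would reoffend.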

Breaking down language barriers

Not all applications of AI in the legal system are harmful. 

Professor Bennett Moses said language barriers were one key area where AI could be of enormous value.

One practical and uncontroversial example is the use of natural language processing to convert courtroom audio, whether from judges, witnesses, or counsel, into text.

This can make access to court transcripts faster and easier, particularly for those with hearing impairments. In China, some trials are captured ‘in real time’ in Mandarin and English text.

“I’ve always believed that interesting legal questions lie on the technological frontier, whether that relates to AI or other new contexts to which the law is called to respond,” Professor Bennett Moses said.  

“My main advice is to tread carefully, to seek to understand how things work before drawing conclusions on what the law should do about it. But we need people to ask the right questions and help society answer them.” 

Lachlan Colquhoun is the Australia and New Zealand correspondent for CDOTrends and the NextGenConnectivity editor. He remains fascinated with how businesses reinvent themselves through digital technology to solve existing issues and change their entire business models. You can reach him at [email protected].

Image credit: iStockphoto/style-photography
