Technology-assisted Courtroom Judgements: What do you think?

Scales of Justice (Unsplash)

Two corporate DTPC participants have discussed the emergent relationship between artificial intelligence (AI) and the courtroom. This blog post shares and discusses their perspectives alongside selected literature.

AI can be defined as the capacity of a machine to imitate human behaviour (adapted from Fiechuk 2019, p. 139). As a rule, courtroom AI systems are designed around ‘narrow AI’ principles, which refer to:

…the process by which a program learns how to perform a specific, or narrow, task. The program gathers many data points—all of which relate to the relevant task—and processes the data to more accurately perform the task. 

(Elrod 2020, p.1087)

Participant A argues for narrow AI systems in courtrooms because human judgement is fallible: 

People like judges making decisions. You’ve heard if you have the judge after lunch, you’ll get more leniency than if it’s before lunch.

Participant A

Based on this assumption, Participant A argues that courtroom AI promises objective ‘fairness’ between one judgement and the next:

If you’re looking at AI to do what they hope, which is to take out the human influence, that can be very empowering and can actually mean there is more fairness than there is with human judgement. This idea that there is something great about human judgement and something bad about digital judgement is wrong. Human judgement is incredibly fallible and there is a possibility that digital judgement could possibly be better.

Participant A

Participant B adds that AI systems can make higher quality judgements, and ones that are impossible for humans to compute: 

…there are a huge number of data points which could impact on the likelihood of someone committing a crime after they have returned to society. Too many data points for any one human being to consume realistically and then generate an answer to.

Participant B

Participant B then explains that narrow AI systems can assess whether a person has fraternised with gang members, which location they are likely to return to, and what the crime rate in that area is.
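Participant B’s reasoning can be sketched as a toy scoring function. Everything below — the feature names, the weights, the cap and the clamp — is invented purely for illustration and does not describe any real courtroom system, which would learn its weights from historical data rather than hard-code them:

```python
def recidivism_risk(gang_affiliated: bool,
                    area_crime_rate: float,
                    prior_offences: int) -> float:
    """Toy narrow-AI risk score combining a handful of data points.

    All weights are hypothetical illustrations, not values from any
    deployed system.
    """
    score = (0.40 * float(gang_affiliated)     # known gang association
             + 0.40 * area_crime_rate          # crime rate of likely area of return (0..1)
             + 0.05 * min(prior_offences, 4))  # prior offences, capped at 4
    return min(score, 1.0)                     # clamp to the 0..1 range

# A profile with more risk factors scores higher than one with fewer.
high = recidivism_risk(gang_affiliated=True, area_crime_rate=0.8, prior_offences=2)
low = recidivism_risk(gang_affiliated=False, area_crime_rate=0.2, prior_offences=0)
```

The point of the sketch is only that a machine can mechanically combine far more such data points than Participant B’s “any one human being” could weigh at once.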

Despite the promises cited here, concerns remain about the explainability of courtroom AI. Gless remarks that: 

Triers of fact will have to decide whether to trust an AI-generated statement that can only partially be explained by experts. 

(Gless 2020, p.207)

Trust remains challenging because, typically, the ‘how’ and ‘why’ behind an AI-generated statement cannot be articulated. Consequently, using AI to inform court judgements can be fraught with danger for those whose lives become defined by its decision-making. Advocates of the technology respond that these systems use machine learning to improve their accuracy over time:

…machine learning is improving; machine learning is the science behind computer programs and their ability to learn from experience and therefore, performance improves over time, increasing the effectiveness of AI. 

(Fiechuk 2019, p.140)
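Fiechuk’s claim — that performance improves with experience — can be shown with a minimal sketch: a one-dimensional classifier that learns a decision threshold from labelled examples, where more training data yields a better threshold. The data, class centres and thresholds here are synthetic assumptions for illustration only:

```python
import random

def train_threshold(examples):
    """Learn a decision threshold as the midpoint of the two class means."""
    class0 = [x for x, label in examples if label == 0]
    class1 = [x for x, label in examples if label == 1]
    return (sum(class0) / len(class0) + sum(class1) / len(class1)) / 2

def accuracy(threshold, test_set):
    """Fraction of examples the learned threshold classifies correctly."""
    hits = sum((x > threshold) == bool(label) for x, label in test_set)
    return hits / len(test_set)

random.seed(0)  # deterministic for the example

def sample(n):
    # Synthetic data: class 0 centred at 0.3, class 1 at 0.7, with noisy overlap.
    return ([(random.gauss(0.3, 0.15), 0) for _ in range(n)]
            + [(random.gauss(0.7, 0.15), 1) for _ in range(n)])

test_set = sample(1000)
threshold_small = train_threshold(sample(5))     # little "experience"
threshold_large = train_threshold(sample(5000))  # far more experience
```

With thousands of examples the learned threshold settles near the optimal midpoint (0.5 in this synthetic setup): this is the narrow sense in which a program “learns from experience and therefore performance improves over time”.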

Given growing populations worldwide, and knowing that human judgements can be slow and arduous, machine-learning AI might help courtrooms scale to meet the demands of the contemporary age.


Elrod, J. W. (2020). Trial by Siri: AI comes to the courtroom. Houston Law Review 57: 1085–1100.


Gless, S. (2020). AI in the courtroom: A comparative analysis of machine evidence in criminal trials. Georgetown Journal of International Law 51: 195+.
