February 24, 2020
Legal robots & the human X factor


Let us begin with the curious case of Wisconsin vs. Loomis.
In 2013, Eric Loomis was accused of participating in a drive-by shooting in Wisconsin. Loomis denied involvement but pleaded guilty to a number of other offences. Before sentencing, an investigatory report that included an actuarial risk assessment was prepared. Part of the report was based on an algorithm known as the COMPAS system - a highly controversial technology that has also been criticised for racial bias. Unfortunately for Loomis, the algorithm concluded that he was likely to commit new offences, and the court placed emphasis on the report's conclusion that COMPAS had identified him as a threat to the community. He was then sentenced to six years of imprisonment and five years of extended supervision. An algorithm had sealed his fate.
The sentence was challenged on grounds of lack of due process under the US Constitution. The defence argued that the algorithm was closed source, which had prevented Loomis from seeing how the risk assessment was made or from challenging its accuracy. Little did it help: the appeal was rejected, the court holding that the defendant had had full visibility of both the input and the output.
The case was an eye-opener for Niels Christian Ellegaard, who has now written about it in his newly published book Robots Entering the Legal Profession:
“The case triggers so many considerations and questions when it comes to the use of robots in the legal industry: there are questions of transparency, but also, do I have the right to insist on being judged by a human being?” asks Ellegaard, who is a partner at the Danish top-five law firm Plesner.
“The message of the book is that we need to discuss the use of robots in the legal profession before they enter it for real. There will be a smooth and seamless transition from today, when humans are in control of the legal industry, to a time when technological solutions take over. But there will be no way back once we have reached that point,” Ellegaard tells Legal Tech Weekly.
The book is an attempt to figure out how much of our legal work we can leave to robots and, even more interestingly, where a human touch is required. While such issues are more often discussed in technological accounts of the nature of artificial intelligence, or in softer, more ethical debates about trolley dilemmas, Ellegaard approaches the subject from the rarely used perspective of classic legal philosophy.
The human X factor in law
Central to Ellegaard's project is finding the right balance when using robots in the legal profession. What makes the book an interesting read is that it acknowledges the difficulties and takes a responsible stance in favour of the technologies while avoiding tech hostility. And in a legal tech industry so dominated by marketing speak and self-indulgence, there is something refreshing about his insistence on not giving any easy answers.
Ellegaard sees technology as a valuable supplement to legal practice but is, essentially, in search of the human touch - that undefinable X factor where robots fall short: “I realised that you have to understand what it means to practise law in order to understand how much of our legal work we can leave to robots. It seems obvious, but in fact it is not that simple, because there are parts of the legal profession where you cannot establish a common formula. I call this the X factor, but we could also call it common sense.”
The X factor in law? What is that?
“In reality, it does not exist. It is just a term I use to give non-lawyers a clue that there are certain things you cannot put into a formula. From time to time, you will meet problems that force you to apply your common sense. It is very difficult because it is neither subjective nor objective. You can identify whether someone has acted negligently in a liability case. That is a judgment you learn to make in law school - whether you have acted, or failed to act, in a way that the bonus pater familias [a reasonable person, ed.] would not. But there can be more than one opinion about it because it is based on an assessment,” explains Ellegaard, and then stops to think of an example.
“Take a parking lot where you are allowed to park for half an hour between 9 am and 5 pm. But then imagine there are no limits for electric cars, and you drive a hybrid, which can run on both electricity and gasoline. What are the rules then? That depends on an interpretation where you look at the purpose of the law and apply your common sense, which is a private, mental process,” Ellegaard elaborates.
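The gap Ellegaard describes is easy to make concrete. The sketch below - our illustration, not an example from the book - encodes the parking rule exactly as written; for the hybrid, the formula is simply silent, and only interpretation can fill the gap:

```python
# A minimal sketch (not from the book): the parking rule encoded as written.
from datetime import time

def parking_limit_minutes(vehicle_type, at):
    """Return the limit in minutes, or None when parking is unlimited."""
    if not (time(9, 0) <= at <= time(17, 0)):
        return None                      # no restriction outside 9 am - 5 pm
    if vehicle_type == "electric":
        return None                      # electric cars: no limit
    if vehicle_type == "gasoline":
        return 30                        # ordinary cars: half an hour
    # A hybrid is a case the rule never anticipated: the formula is silent,
    # and a lawyer must fall back on the purpose of the rule - the X factor.
    raise ValueError(f"the rule does not decide vehicle type: {vehicle_type!r}")

print(parking_limit_minutes("gasoline", time(10, 0)))   # 30
print(parking_limit_minutes("electric", time(10, 0)))   # None
try:
    parking_limit_minutes("hybrid", time(10, 0))
except ValueError as err:
    print(err)   # the code alone cannot decide; interpretation is required
```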
But I guess the robot would be more transparent then?
“Yes, but if we want the robot to act in our image, then we face a problem finding out what ultimately drives us and determines our decisions. You know the law and the relevant legal facts. Then something goes on inside your mind - a legal decision-making process - and that results in an output. We do not know what is happening in our minds because that is a private event. So, if a robot takes that job, then we will have a robot doing something we cannot even explain to ourselves.
“I have been a legal professional since 1996 and I still cannot explain it. We live in societies that are based on trust: trust in officials, in judges. If you replace them, fully or partially, with robots, then we must ask ourselves whether we can trust the technologies and those who own them, and whether we can be sure they are not biased. But how will I know that it does what it is supposed to do when I do not know what we would like it to mirror?” says Ellegaard.
But a robot is coded so someone must know what is going on?
“I am not an expert in computer science, but I do know that algorithms do not have to be that complicated before even their developers are unable to explain what is happening. The robots become too complex for the human brain to understand, and then you cannot make an audit trail - especially when you have algorithms based on machine learning that learn from their own previous results.”
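To see why the audit trail breaks down, consider a toy sketch of our own, far simpler than anything deployed in practice. Even for the simplest learned model, the “explanation” of a decision is nothing but a vector of fitted numbers rather than a chain of legal reasoning - and the opacity only deepens with larger models that keep retraining on their own output:

```python
# Our toy illustration: even a tiny learned model offers no human-readable
# audit trail - its "reasoning" is just fitted numbers.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 200 past cases, 5 numeric features, a binary outcome.
X = rng.normal(size=(200, 5))
y = (X @ np.array([1.5, -2.0, 0.7, 0.0, 3.1]) + rng.normal(size=200) > 0).astype(float)

# Logistic regression fitted by plain gradient descent.
w = np.zeros(5)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)      # gradient step

new_case = rng.normal(size=5)
print("decision:", 1.0 / (1.0 + np.exp(-(new_case @ w))) > 0.5)
print("the 'explanation':", np.round(w, 2))  # five floats, no legal rationale
```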
So what to do then?
“If I were to sum up my conclusion, I would advocate an interactive approach where we find the right balance. It is not necessarily static, but we need balance on three parameters: assistance, quality assurance and integrity.”
The first two parameters are straightforward. Since we are able to create algorithms so complicated that we cannot explain how they reach their results, Ellegaard believes we need reference cases to check and measure whether the robots reach the desired outcomes - to test and verify the robots' results and so assure some level of quality. In some cases, the robots should only deliver suggestions that the lawyer can choose between, to avoid “lazy lawyers”, and in some situations robots should be disallowed altogether.
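What such a reference-case check could look like in practice might be something like the sketch below; the ReferenceCase type, the toy_robot stand-in and the 95% threshold are our own assumptions, not the book's:

```python
# A hypothetical sketch of quality assurance by reference cases: measure an
# opaque legal "robot" against outcomes human lawyers have already settled.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReferenceCase:
    facts: dict          # the legally relevant facts of a decided case
    outcome: str         # the outcome human lawyers agreed on

def accuracy(robot: Callable[[dict], str], cases: list) -> float:
    """Share of reference cases where the robot matches the human outcome."""
    hits = sum(robot(case.facts) == case.outcome for case in cases)
    return hits / len(cases)

# Hypothetical stand-in for an opaque model we cannot inspect directly.
def toy_robot(facts: dict) -> str:
    return "liable" if facts.get("negligent") else "not liable"

references = [
    ReferenceCase({"negligent": True}, "liable"),
    ReferenceCase({"negligent": False}, "not liable"),
    ReferenceCase({"negligent": False}, "liable"),   # outcome the formula misses
]

score = accuracy(toy_robot, references)
print(f"reference-case accuracy: {score:.0%}")       # 67% - flag for review
if score < 0.95:
    print("below threshold: robot may only suggest, not decide")
```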
“If I were involved in a divorce and someone had to decide a custody case involving my children, I would not feel comfortable if a machine made the decision. I would expect a human being to make an empathetic decision. Integrity is a difficult concept because it involves a human need - not because of a lack of quality, but because we need human contact and a human will,” says Ellegaard, and points to the Wisconsin vs. Loomis case, where the so-called veil of ignorance was controversially allowed.
Ellegaard's book is written with a thoroughness that only a risk-averse lawyer can master. It deals with subjects such as iterative approaches and autonomy in legal decision-making. At one point, Ellegaard even asks the question: What is law?
But even though it is, first and foremost, a book by a lawyer for lawyers, Ellegaard also makes an effort to describe the practical context, and he even includes a chapter with advice for legal tech providers about designing solutions that create the right interface for human/robot interactions. In other words, it is a book that both lawyers and tech providers can benefit from reading.