AI and the Future of Human-Judges: Theorising the Role of AI in Legal Interpretation

Posted on December 23, 2020

Authored by Aditya Krishna*

[Image: Would You Accept Being Judged by AI in a Court of Law? (Source: The Sociable)]

Introduction

With the advent of the 21st century, technology began to develop in leaps and bounds, leading to the creation of a plethora of artificial intelligence (AI) applications and solutions. A 2016 study by Stanford University forecast that over the coming decades AI is bound to drastically alter no fewer than eight broad sectors, including the legal sector.[1] With the question now being more of ‘when’ than ‘if’, it becomes extremely important to analyse and contextualise the role AI could play in the legal system, its capability to replace human-judges in the near future, and its ability to perform legal interpretation.

When it comes to legal interpretation, two tasks usually need to be fulfilled: first, describing the law and predicting how it would be interpreted by others; and second, in cases of legal indeterminacy, selecting a specific interpretation from the repertoire of existing interpretations. Hence, for an AI-system to take over the role of a judge, the algorithm would need to perform both tasks effectively. While we may envision AI taking over legal interpretation to the extent of providing descriptions or prognostications of the law, the real conundrum lies in such AI-systems engaging in moral judgements on the law. From a jurisprudence and policy perspective, this raises two important questions: what is the role of morality in legal interpretation, and would AI be capable of performing the role of judges? This paper aims to conceptualise the use of AI as judges through a jurisprudential analysis of law and morality.

AI and The Judicial Process

While AI in the judiciary may seem like a pipe dream, such systems have already found application in countries around the world. One notable example is China, which established three special AI Internet Courts beginning in 2017. These courts adjudicate disputes concerning online transactions of goods and services, copyright and trademark disputes, trade disputes, e-commerce product-liability claims and other such claims. Notably, the courts operate 7 days a week, 24 hours a day, and have an average case disposal time of around 38 days.[2] They have also been reported to have adjudicated over 3.1 million cases from March to October of 2019 alone![3]

Beyond this, the past decade has also seen a worldwide trend towards the use of AI in the adjudication process. While AI is currently used only in the digitisation of court filings and processes, in judgment-prediction, as algorithmic tools in criminal court decisions and in online dispute resolution mechanisms, it has been argued that the same could be easily adapted to allow AI-systems to step into the role of judges in the future.[4] Take, for example, its current application in judgment-prediction. Considering AI is already able to predict judgments delivered by the courts with considerable accuracy,[5] it raises the question of what really prevents such AI-systems from assuming the role of judges. Another pertinent example is that of risk-assessment algorithms, which are already being used extensively in the US to aid judicial decision-making in criminal cases on issues of bail, parole, and sentencing.[6] Such systems are believed to only become better and more accurate with time, and hence AI functioning as judges is a lot closer than one might imagine.
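
To make the idea of judgment-prediction a little more concrete, the sketch below frames it as a simple text-classification problem: past case summaries paired with their outcomes train a model that then estimates the likely outcome of a new dispute. This is purely an illustration; the case texts, the outcome labels, and the scikit-learn pipeline are the author's assumptions for demonstration and bear no relation to the systems used in the studies cited above.

```python
# A minimal, illustrative sketch of judgment-prediction framed as text
# classification. All case texts and outcome labels below are hypothetical
# placeholders; real systems use far richer features and far larger datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: short case summaries and their outcomes.
case_summaries = [
    "Appellant challenges conviction citing lack of procedural fairness",
    "Plaintiff seeks damages for breach of an e-commerce sales contract",
    "Defendant disputes trademark infringement over a similar logo",
    "Petitioner appeals denial of bail pending trial",
]
outcomes = ["allowed", "allowed", "dismissed", "dismissed"]  # placeholder labels

# TF-IDF text features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(case_summaries, outcomes)

# Predict the likely outcome of an unseen (hypothetical) dispute.
new_case = ["Buyer alleges defective goods delivered under an online sale"]
print(model.predict(new_case))        # predicted outcome label
print(model.predict_proba(new_case))  # associated probabilities
```

The point of the sketch is only that such a model describes and predicts: it estimates how courts have decided similar disputes, without making any judgment about how they ought to be decided.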

The question that may now crop up is how one could test whether an AI-system could accurately perform a judge’s role. A glimpse at a possible solution is provided in Prof. Eugene Volokh’s recent paper, ‘Chief Justice Robots’.[7] In his paper, Volokh argues that the substitution of human-judges by AI would be contingent on the AI passing what he calls the “Modified John Henry Test”. The test is a judgment/opinion-writing test wherein the AI-system’s performance is compared to the performance of ten average performers in the field. If the AI-system performs at least as well as the average performer, it is said to pass the test and can serve as an adequate substitute for a human.[8] Whether the system passes the test is decided by a panel of ten human-judges (experts on the subject) who evaluate the performance of the participants without knowing which is human and which is an AI-system. The determining factor in such a test would be subjective and based on the persuasiveness of the judgments pronounced by the AI-systems. But what could this persuasiveness be based on? This discussion, in the author’s opinion, motivates an inquiry into jurisprudence on the role of morality in interpreting the law.[9]
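
The paragraph above describes, in essence, a blind comparison. A minimal sketch of that logic is given below; everything in it, including the persuasiveness scale, the random stand-in for the panel's ratings, and the names, is a hypothetical illustration of the test's structure, not an implementation proposed by Volokh.

```python
# Illustrative sketch of the blind evaluation at the heart of the
# "Modified John Henry Test" described above: a panel scores anonymised
# opinions without knowing their authorship, and the AI passes if its
# average persuasiveness score is at least that of the average human
# performer. Names, scores, and the 1-10 scale are all hypothetical.
import random
import statistics

# One AI-authored opinion alongside opinions by ten "average performers".
opinions = [{"author": "ai", "text": "..."}] + [
    {"author": f"human_{i}", "text": "..."} for i in range(10)
]
random.shuffle(opinions)  # the panel must not infer authorship from ordering

def panel_score(opinion_text: str) -> float:
    """Stand-in for a ten-judge panel rating persuasiveness on a 1-10 scale."""
    return statistics.mean(random.uniform(1, 10) for _ in range(10))

scores = {o["author"]: panel_score(o["text"]) for o in opinions}

ai_score = scores["ai"]
human_avg = statistics.mean(v for k, v in scores.items() if k.startswith("human"))

print(f"AI score: {ai_score:.2f}, average human score: {human_avg:.2f}")
print("Passes the test" if ai_score >= human_avg else "Fails the test")
```

What the sketch cannot capture is precisely what the rest of this paper is concerned with: on what basis the panel finds an opinion persuasive, and whether that basis involves moral judgment.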

Jurisprudential Inquiry

A study of the current jurisprudential inquiry helps elucidate what is expected of a judge pronouncing a judgment in terms of integrating morality into their legal interpretation, and helps us theorise the role of AI based on different schools of thought.

In answering the question of the role of morality in legal interpretation, one can discern the extent to which AI could replace human-judges in the future. What is quite ironic is that, even after all this time, to answer a so-called “modern technological conundrum” one finds the need to return to the traditional and pivotal debate in jurisprudence on the role of morality in legal interpretation. This debate stems mainly from the long-standing rift between legal positivism and natural law.[10]

While multiple branches exist under legal positivism, as a school of thought it broadly draws a distinction between law and morality.[11] Exclusive legal positivism recognises that morality can and should have a role in the law-creating process, both when laws are enacted by the legislature and when new laws are made by judges, but distinguishes this from the task of ascertaining what the law is, wherein it holds that morality has no role.[12] Inclusive legal positivism diverges from this and allows for a contingent role of morality in law.[13] Inclusive legal positivism is usually defined by its analysis of social facts: it postulates that the substance of the law is essentially dependent on social facts, which in turn may make morality pertinent to legal judgments, hence integrating morality into the law.

Natural law theorists, on the other hand, tend to oppose this distinction between law and morality, holding that moral facts play an ultimate role in identifying the content of the law.[14] Moral judgment is viewed as being necessary, in some instances, to render legal judgment. Natural law theorists like Ronald Dworkin advocated an “interpretivist” view that incorporated “substantive moral judgments with descriptive judgments in legal interpretation.”[15] Theorists like Lon Fuller shared a comparable view in some regards, though Fuller emphasised a “special class of moral values that arise distinctively in legal systems”, which he called law’s “inner morality”.[16] Hence, it can be seen that natural law by and large requires that judges employ aspects of morality while interpreting the law.

A third, newer school of thought, theorised by Prof. Joshua P. Davis as a solution to the long-standing debate between positivists and natural lawyers, is legal dualism.[17] It postulates that the nature of law is not monistic but dualistic. As Shapiro recognised in his book Legality,[18] a main impediment for positivism lies in offering an account of the nature of law when it bears moral obligation; natural law, conversely, struggles to do the same when the law “lacks moral legitimacy”. Legal dualism is a school of jurisprudence that essentially explores a two-state solution, with “natural law and positivism each assigned to its appropriate terrain”.[19]

Proceeding on the general assumption that AI-systems are capable of undertaking descriptive and predictive legal judgment[20] (and may even outperform humans in this regard) but are incapable of providing substantive moral judgments,[21] one’s perspective on the role of morality in law would largely determine the extent to which AI may replace human-judges in the future.

AI as Judges?

If exclusive legal positivism is the prevailing jurisprudential consensus, AI-systems would most likely be able to surpass and largely replace humans by making more accurate and purely positive legal judgments (it is important to note that the author refrains from accepting that AI can completely replace human-judges at this point, given that many exclusive legal positivists, like Scott Shapiro, concede that morality does play a role in adjudication, especially in the context of the law-making role of judges).[22] This may be inferred because, on this view, the law necessitates only an appraisal of social facts, which AI is more than capable of. Assigning such a role to AI would also lead to a more precise, efficient, and consistent interpretation of the law. In this scenario AI judges could take over legal interpretation, but human-judges would still be needed for judicial law-making.

In contrast, if inclusive legal positivism were the dominant theoretical understanding, it could lead to the preservation of a larger human role in adjudication. Since this school of thought accepts the possible integration of moral judgments with the law, humans would play a role in saying what the law is to that extent. Nonetheless, much as under exclusive legal positivism, AI would still have a large role in legal interpretation where it does not necessitate substantive moral judgments.

What is interesting to note is that even if we assume exclusive legal positivism is correct on the nature of law and morality, and even if it is the prevailing notion, human beings would still not be completely replaceable and would continue to play a role, though in a limited capacity, as judges. Even where AI-systems are able to make the purely predictive and descriptive judgments required to interpret the law, human-judges would still be needed to fill the gaps and render moral judgments, at least in the judicial law-making process.

As per legal dualism, moral judgment in legal interpretation is only required when the law serves as a “source of moral guidance”.[23] If one were to believe that judges do not derive moral guidance from the law, in line with the thinking of Oliver Wendell Holmes, Jr. (“the only concerns to which law gives rise are prudential, not moral”),[24] then even under dualism AI would be able to completely take over the role of human beings in interpreting the law.

If one were instead to believe that judges do derive moral guidance from the law, as does Prof. Joshua P. Davis, who conceptualised dualism, it would mean that judges, in interpreting the law, are obligated to exercise moral judgment. This would curtail AI’s role as judges owing to its inability to make the substantive moral judgments required.

Another way to look at legal dualism, to which the current author subscribes, is to look at the intention of the interpreter of the law. As per legal dualism, “Natural law provides the best account of law’s nature when a legal interpreter seeks moral guidance from the law and legal positivism provides the best account when a legal interpreter seeks merely to describe the law or to predict how others will interpret it.” On this interpretation of the theory, AI could assume the role of judge when adjudication merely requires describing the law or predicting how others will interpret it (much as in the small causes or lower courts), and humans could assume the role of judge when the judge is required to seek moral guidance from the law or to engage in judicial law-making (as in the higher courts). Such a system could decrease case disposal time in the lower courts and reduce the burden on the judiciary. In this manner legal dualism, in the author’s opinion, can be used to implement the best aspects of the individual theories (while also overcoming their shortcomings)[25] and to downplay its own criticism,[26] by striking a balance between the use of AI and human-judges that yields the most efficient solution.

Conclusion

AI-systems may very well replace human-judges in providing descriptive and predictive judgments in the near future, but it is uncertain whether such systems can ever be programmed to make the substantive moral judgments (in light of the limitations created by Polanyi’s Paradox)[27] needed to completely replace human-judges. Clearly, even with the technology developing exponentially, AI-systems are still likely to hit an insurmountable impasse and will not completely replace human-judges at any point in the foreseeable future. Such systems may, however, be able to replace them to a limited extent.

In creating this distinction between what can be adjudicated upon by humans and by AI, we create a new understanding of natural law, which can be characterised as requiring legal interpretation to be undertaken by natural, and not artificial, intelligence. Morality could likewise be divided into substantive or natural morality (judgment on what morality really entails, and not merely a prediction of how others would evaluate morality in the case) and descriptive/predictive or artificial morality (the act of describing, as a matter of social fact, the dominant moral beliefs of the people in order to foresee the moral judgments people would most likely make in the situation). Assuming AI-systems can extend their capabilities to passing even purely descriptive and predictive moral judgments in the future, such a jurisprudential understanding of morality would be fruitful in developing the nature of law, and accordingly the role of AI as judges, reconceptualising and changing the traditional jurisprudential landscape.


*Aditya Krishna is a third-year law student from Jindal Global Law School and is currently pursuing his B.A. LL.B. (Hons.) degree. He has a keen interest in Intellectual Property Law, Technology Law and Constitutional Law. He currently serves as a Contributing Editor at IntellecTech Law.

[1] Stanford University, Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence, (2016). Available at: http://ai100.stanford.edu/2016-report (last visited on 15th Nov. 2020)

[2] Guodong Du, China Establishes Three Internet Courts to Try Internet-Related Cases Online: Inside China’s Internet Courts Series -01, China Justice Observer, (16th December 2018) https://www.chinajusticeobserver.com/insights/china-establishes-three-internet-courts-to-try-internet-related-cases-online.html

[3] Yan, China reforms judicial courts using internet technologies: white paper, Xinhuanet, (5th December 2019) http://www.xinhuanet.com/english/2019-12/05/c_138605955.htm

[4] Cary Coglianese and Lavi M. Ben Dor, AI in Adjudication and Administration, Faculty Scholarship at Penn Law 2118 (2020). https://scholarship.law.upenn.edu/faculty_scholarship/2118

[5] As seen in a recent study from 2017, in which a machine-learning system was found to accurately predict the outcomes of over 70% of 28,000 U.S. Supreme Court decisions. See: Matthew Hutson, Artificial Intelligence Prevails at Predicting Supreme Court Decisions, Science (May 2, 2017), https://www.sciencemag.org/news/2017/05/artificial-intelligence-prevails-predicting-supreme-court-decisions.

[6] Andrew C. Michaels, Artificial Intelligence, Legal Change, and Separation of Powers, 88 U. Cin. L. Rev. 1083 (2020), https://scholarship.law.uc.edu/uclr/vol88/iss4/4; Arnold Ventures, Public Safety Assessment FAQs (“PSA 101”), Arnold Ventures (Mar. 18, 2019), https://craftmediabucket.s3.amazonaws.com/uploads/Public-Safety-Assessment101_190319_140124.pdf.

[7] Eugene Volokh, Chief Justice Robots, 68 Duke L.J. 1135, 1138 (2019).

[8] ibid., 1318-1319.

[9] Considering AI would not be able to enter into substantial moral judgment and can only enter into predictive and descriptive judgments, the persuasiveness would depend on the role the judges of the test ascribe to morality in legal interpretation and hence a study of the role ascribed to morality would help in predicting the possible role AI could have as judges.

[10] Joshua P. Davis, Legality, Morality, Duality, 2014 Utah L. Rev. 55, 61–63 (2014).

[11] SCOTT SHAPIRO, THE “HART-DWORKIN” DEBATE: A SHORT GUIDE FOR THE PERPLEXED, IN RONALD DWORKIN 22 (Arthur Ripstein ed., 2007)

[12] SCOTT SHAPIRO, LEGALITY, (Harvard University Press, 2011)

[13] SCOTT SHAPIRO, THE “HART-DWORKIN” DEBATE: A SHORT GUIDE FOR THE PERPLEXED, IN RONALD DWORKIN 22 (Arthur Ripstein ed., 2007)

[14] RONALD DWORKIN, JUSTICE FOR HEDGEHOGS 405 (2011).

[15] RONALD DWORKIN, JUSTICE FOR HEDGEHOGS (2011). Dworkin argued that legal interpretation involved two kinds of judgment, namely “fit” and “justification”. Fit was descriptive and included any “non-normative claims relevant to legal interpretation”, like the possible definitions of the terms in a statute, “possible rules that might make sense of binding precedents, or accounts of the workings of political institutions in a jurisdiction.” Justification on the other hand was prescriptive and involved “moral claims pertaining to legal interpretation”, this included which definitions of the laws, which rule applied, and “which accounts of political institutions would make the law most just.” Dworkin believed that legal interpretation involved both fit and justification hence necessitating the presence of morality or substantive moral judgment in legal interpretation.

[16] LON L. FULLER, THE MORALITY OF LAW (1964).

[17] Joshua P. Davis, Legality, Morality, Duality, 2014 Utah L. Rev. 55, 61–63 (2014).

[18] SCOTT SHAPIRO, LEGALITY, (Harvard University Press, 2011)

[19] Joshua P. Davis, Legality, Morality, Duality, 2014 Utah L. Rev. 55, 61–63 (2014).

[20] AI systems are already largely able to perform these tasks, as stated earlier; see the discussion of AI and the judicial process above.

[21] The term ‘substantive morality’ is stressed here since it is the type of morality contemplated by theorists when formulating these theories. The impact on our understanding of morality, in light of developing technologies and AI assuming the role of judges, is addressed briefly in the Conclusion.

[22] SCOTT SHAPIRO, LEGALITY, (Harvard University Press, 2011)

[23] Joshua P. Davis, Legality, Morality, Duality, 2014 Utah L. Rev. 55, 61–63 (2014).

[24] Oliver W. Holmes, Jr., The Path of the Law, 10 Harv. L. Rev. 457 (1897). Oliver Wendell Holmes, Jr.  characterised the law as “prophecies of what courts will do in fact, and nothing more pretentious.” In the context of his “Bad Man” theory of the law, he suggested that law gives rise only to prudential and not moral concerns.

[25] Joshua P. Davis, Legality, Morality, Duality, 2014 Utah L. Rev. 55, 61–63 (2014). Davis provides a compelling account of how employing such a dualistic theoretical framework resolves the main challenges Shapiro identifies for legal positivism and natural law: Evil Law would not serve as a hurdle to natural law, and Hume’s Law would not impede legal positivism. Each theory of the nature of law works within its own domain. While any theory may be adopted in the future, the current author prefers the application of a dualist framework, as explained above, under the interpretation that factors in the intention of the legal interpreter. Such a form of dualism would keep the best aspects of positivism and natural law while addressing their shortcomings.

“As Shapiro recognizes, a main difficulty for legal positivism lies in providing an account of the nature of law when it gives rise to moral obligations. Legal positivism then struggles to explain how legal interpretation can proceed without moral judgment. On the other hand, a primary challenge for natural law is to offer an understanding of the nature of law when it lacks moral legitimacy. Evil Law appears to leave no room for moral judgment. … Shapiro’s insight about core challenges to legal positivism and natural law suggests a possible natural boundary in the jurisprudential landscape. Natural law may provide the best account of law when it imposes moral obligations and legal positivism when it does not. The content of the law may vary depending on which of these approaches is appropriate for a particular act of legal interpretation. … Different legal interpreters—by virtue of their roles within the legal system—should use different interpretive methodologies. … Perhaps law does not have the same nature for all purposes but instead consists of two (or more) complementary understandings of the nature of law.”

[26] While the proposed dualistic structure could be criticised for conflicting with the principle of Occam’s Razor, the current author believes that the theory’s contributions in this context outweigh the possible criticism on those grounds.

[27] David Autor, Polanyi’s Paradox and the Shape of Employment Growth (September 2014) (NBER Working Paper No. w20485), Available at: https://economics.mit.edu/files/9835.

Polanyi’s Paradox was devised by the Hungarian-British polymath Michael Polanyi, who, in his book The Tacit Dimension, explored the concept of ‘tacit knowing’ in human knowledge. He argues that our knowledge and capabilities often exceed our understanding and cognition, because we learn many tasks through experience that we cannot explain. He uses the example of recognising a familiar face without being able to say how we do it. This forms the basis of the paradox, which in essence states that “we can know more than we can tell.”

The paradox thus constrains the range of functions and tasks we can codify and teach an AI system to perform.
