
AI Judges: The Question of AI’s Role in Indian Judicial Decision-Making

Krishna Ravishankar & Parul Anand*


There is a debate about ethics in AI. But for the justice system, there is no debate.

– Professor Michael Legg, UNSW Law, Australia.


Indian constitutional courts have certainly been outliers in incorporating technology into their day-to-day functioning. Be it the introduction of livestreaming of court proceedings, or of AI-enabled transcribers to record oral arguments made in court, they have set an example of technology as a boon. However, the judges did not stop there. A recent bail order passed by Justice Anoop Chitkara of the Punjab and Haryana High Court has raised some eyebrows, both in legal and technology policy circles. The use of OpenAI’s much-talked-about platform, “ChatGPT”, in a bail order would make Justice Chitkara the first Indian judicial officer to take the aid of an AI platform in arriving at a sensitive judicial decision. Though the court added the caveat that the AI platform was not used to decide the merits of the bail plea, the very notion that a constitutional court’s functionary referred to artificial intelligence [“AI”] raises questions from both a technological and a fundamental rights perspective, both of which this article seeks to address.


The Interactions between AI and Fundamental Rights


AI may simply be defined as systems that act like humans: machines that can reason well enough to display intelligence, such as solving problems and arriving at conclusions, are said to be artificially intelligent. With a development trend that keeps accelerating, it is no surprise that AI is foraying into the judicial setup; the Punjab and Haryana High Court’s order is but a nod towards the same. Given AI’s astonishing speed and apparent accuracy, such an introduction is predictable. However, it ought to be looked at cautiously, especially because law and ethics seem unable to keep pace with technology.


AI’s inclusion in the judicial system raises the ‘centaur’s dilemma’, which concerns the extent of human control over AI. This terminology, borrowed from defence circles, refers to the conundrum of exercising less control over AI, allowing it to do what it does best without undue interference, or exercising more control over it to ensure that the results produced are just and reasonable.[i] In the judicial system, this becomes a trade-off between swift judgments, owing to the speed of AI, and the undeniably human element of ‘fairness’. Any attempt to involve AI in legal processes would invariably turn on how this complex and critical calculation is struck.


While envisioning the use of AI systems in judicial proceedings, it is incumbent to address the repercussions of such inclusion on the fundamental rights of the parties involved. According to Cesare Beccaria, there are four core values that any modern constitutional democracy subscribes to with regard to criminal trials – due process, equal treatment, fairness and transparency – all of which are encapsulated under Articles 14 and 21 of the Indian Constitution.[ii] This is along the lines envisioned by the Supreme Court in Zahira Habibullah Sheikh and Ors. v. State of Gujarat and Ors., wherein it held that every stakeholder has an inherent right to be dealt with fairly, which includes an impartial judge, a fair prosecutor and a fair trial, with any bias or prejudice for or against the accused eliminated.[iii] The use of AI-enabled technologies therefore ought to be evaluated through the lens of these values.


AI Technologies and Bias: The Damaging Effect on Article 14’s Vision


Just how much human control ought to be exercised over AI, while still enabling it to perform to its full capacity, is a pressing inquiry, and it is complicated by elements like the anchoring effect, which can render human supervision essentially futile. The anchoring effect refers to people’s bias towards computer-generated numbers and data – a tendency already shown to be harmful in navigational contexts. Research has established how difficult it is for a judicial officer to evaluate algorithmic outputs objectively and to exercise discernment in accepting or rejecting them.[iv] For instance, when confronted with a high recidivism prediction, a judge may increase the sentence without independently considering the need for such an increase. Essentially, the risk is that judges treat the AI’s output as an anchor that clouds their rational judgment. Judges, who ought to act as gatekeepers and standard-bearers of equality and neutrality, can themselves fall victim to a favourable bias towards the AI’s output. When that output is itself biased, and is advanced unchecked or in some cases encouraged owing to technological anchoring, judicial protection against bias can be rendered insufficient.


Apart from this inherent bias in the judicial mind due to the “anchoring effect”, the “garbage-in, garbage-out” principle renders the decisions themselves more biased and unpalatable. In machine learning circles, this principle holds that the quality of the results is determined by the quality of the data. Since an AI system must be fed pre-existing datasets of case law and other judicially relevant information to produce pertinent answers, historical biases, discrimination, flawed judgments and flawed approaches stand to continue uninhibited, and, due to the ‘black box’ (a phenomenon rendering the inner workings of algorithmic decision-making opaque), entirely uninspected. Historically disadvantaged groups stand to lose the most from such an inclusion. For instance, an AI system used for predictive recidivism was found to be heavily biased against African Americans – a historically disadvantaged group. Therefore, unless such datasets are intentionally made more just, which would itself be an incredibly contentious endeavour, AI decision-making suffers from an inherent injustice that ought not to be overlooked. Bias thus enters at two levels: from the judiciary as a class, through the anchoring effect, and from the data, via the social profiling of convicts, under the garbage-in, garbage-out principle. Together they impinge on the values of fairness and equal protection in the criminal justice system.
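For readers unfamiliar with the mechanics, a minimal sketch may help illustrate the garbage-in, garbage-out principle. The snippet below uses entirely hypothetical data and a deliberately naive “model” that simply learns each group’s historical flag rate; it is not how real risk tools are built, but the failure mode is the same: any skew baked into the historical records is replayed as a “risk score”.

```python
# A minimal sketch (hypothetical data) of "garbage in, garbage out":
# a model fitted to historically biased records reproduces that bias.

from collections import defaultdict

# Hypothetical historical records: (group, was_flagged_high_risk).
# Group "B" was historically over-flagged relative to group "A".
historical_records = [
    ("A", 0), ("A", 0), ("A", 1), ("A", 0),
    ("B", 1), ("B", 1), ("B", 0), ("B", 1),
]

# "Training": the naive model merely memorises each group's flag rate.
counts = defaultdict(lambda: [0, 0])  # group -> [flags, total]
for group, flagged in historical_records:
    counts[group][0] += flagged
    counts[group][1] += 1

def predicted_risk(group):
    """The model's risk score is just the biased historical rate, replayed."""
    flags, total = counts[group]
    return flags / total

# Two otherwise identical individuals receive different scores purely
# because of the skew in the training data.
print(predicted_risk("A"))  # 0.25
print(predicted_risk("B"))  # 0.75
```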


Through the “garbage-in, garbage-out” principle, AI-based predictive models used in judicial decision-making import social prejudices into criminal law. A noted criticism of the COMPAS system, an AI-based predictive model used for sentencing and validated by the Wisconsin Supreme Court in State v. Loomis, was the judicial institutionalisation of racial bias.[v] Such racial bias squarely violates the equal protection guarantees of the Fifth and Fourteenth Amendments to the US Constitution, and directly contradicts the principle underlined by the United States Supreme Court in Buck v. Davis: “Criminal law [should punish] people for what they do, not who they are,”[vi] a reiteration of the unbiased approach the criminal justice system should take. The Loomis precedent thus runs contrary to principles of fairness. In State v. Skaff, the Wisconsin Court of Appeals held that an accused must be sentenced only on the basis of accurate information; biased datasets may be neither reliable nor accurate for this purpose.[vii] Other foreign jurisdictions have also questioned the bias in AI-based predictive models. For instance, the Supreme Court of Western Australia in Director of Public Prosecutions for Western Australia v. Mangolamara expressed similar reservations regarding bias with respect to the indigenous communities of Australia.[viii] It remarked that there was no evidence to show that these tools are foolproof, and further observed that such predictive models rely on datasets built on social profiling that reflect existing prejudices against these communities, highlighting the need for additional safeguards. It is not merely in the US or Australia that such an inclusion raises concerns about bias and discrimination; the Indian context has its own issues.


In the Indian context, Article 14 of the Constitution as a genus, read with Articles 15, 16 and 17 as its species, concerns the mitigation of bias and discrimination and the promotion of equality and fairness. These articles were incorporated into the Indian Constitution to make society more egalitarian and individualistic.[ix] The damaging bias consistently observed in AI-based systems runs directly contrary to this vision of Indian constitutionalism.[x] When such bias creeps into the criminal justice system through the incorporation of AI, as in the aforementioned bail order, certain repercussions for Article 14 come into the picture.


The Rajasthan High Court in Shyam Singh v. State of Rajasthan remarked that judicial bias in criminal cases violates Article 14 of the Indian Constitution and goes against the principles of fairness and equal treatment of the accused enshrined therein.[xi] Such an implication is highly concerning in a country like India, which is afflicted with deep-rooted societal prejudices. A study conducted by the Centre for the Advanced Study of India at the University of Pennsylvania shows high in-group bias amongst both law enforcement agencies and the subordinate magistracy dealing with criminal matters, especially when the accused are from a Scheduled Caste or Scheduled Tribe background.[xii] Moreover, a Pew Research study shows that even today, one in five members of Scheduled Caste communities faces discrimination due to caste bias. Against such a sensitive background, India has much to lose from this inclusion. As the criminal justice system already struggles with systemic prejudices, incorporating an AI-based system rife with biases – as these systems usually are, owing to the garbage-in, garbage-out principle – would violate the fundamental rights of Indian people. Moreover, the anchoring effect produces a further bias, under which judges cease to evaluate the case before them independently, undermining the fair trial. This two-layered bias accompanying AI’s inclusion in the criminal justice system, or the justice system in general, has chilling repercussions for the fundamental right to equal protection, amongst others.


Concerningly, when judges are already biased towards the technology from the outset owing to the anchoring effect, and the decisions produced are themselves biased due to the garbage-in, garbage-out principle, the intermediate check that a prudent evaluation of the AI’s actual output would provide is also absent.


The “Black Box” Phenomenon of AI: A Black Hole for Criminal Due Process under Article 21


AI decisions appear to be made in a “black box”, which has serious implications for the rule of law and justice.[xiii] “Black box” AI refers to the phenomenon where the true decision-making process of an AI system is hidden from the user. Assuming AI’s inclusion in judicial decision-making, such an absence of critical information about the grounds of reasoning behind a legal decision is a concern not only for the judge but for all parties involved.
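A minimal sketch, again with entirely hypothetical numbers, illustrates the problem: the system can report what it decided, but its internal parameters offer no legally meaningful account of why.

```python
# A toy "risk model" with hypothetical learned weights. In a deployed system
# these would be thousands or millions of opaque, often proprietary, numbers.
weights = [0.73, -1.42, 0.05, 2.11]

def risk_score(features):
    """Returns a score: the arithmetic is visible, the rationale is not."""
    return sum(w * x for w, x in zip(weights, features))

# Encoded attributes of a (hypothetical) defendant; what each number "means"
# is buried in how the model was trained, not stated anywhere.
defendant = [1.0, 0.0, 3.0, 1.0]

print(risk_score(defendant))  # ~2.99 -- a number, not a reason
# The only available "explanation" is the raw parameters themselves,
# which are inscrutable to a litigant, counsel, or judge:
print(weights)
```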


The black box phenomenon strikes at the heart of transparency and due process. The Loomis judgment legally validated the use of the COMPAS system, which utilises an algorithm to determine the sentence of a convict but whose working is largely unknown to the litigants in a criminal trial. One of the arguments made by the defence in that case concerned this non-transparency: the COMPAS predictive model did not explain the breakdown of each variable, its relevant weighting and their correlations, which violates procedural due process in criminal trials. As held in Townsend v. Burke, the due process clause gives the accused the right to know the basis on which his conviction takes place.[xiv] The validation of the COMPAS model in Loomis went against the precedent of Brady v. Maryland, which laid down the principle of fair disclosure, holding that non-disclosure of information is to be construed as a violation of the accused’s right to due process.[xv] Moreover, as held in Crawford v. Washington, the confrontation clause of the Sixth Amendment, which allows the accused to cross-examine witnesses and complainants, becomes muddled in the case of AI-based predictive models,[xvi] because no objective standard exists for cross-examining a non-human intelligence. These due process concerns, arising from the absence of information and of cross-examination, are similarly relevant in the Indian context.


From the Indian perspective, ever since the scope of Article 21 was enlarged in Maneka Gandhi v. Union of India, these due process rights have been recognised as key cornerstones of Indian criminal and constitutional jurisprudence.[xvii] While the principle of fair disclosure is unsettled in Indian law, the Supreme Court in Shiv Kumar v. Hukam Chand emphatically remarked that the expected attitude of the Public Prosecutor is one of fairness not only to the court but also to the accused, and that it is important for the prosecution to be transparent throughout the trial process, from conviction to sentencing.[xviii]


This precedent is viewed as a high-water mark for transparency regarding the basis of an accused’s conviction.


With regard to the confrontation clause, the Supreme Court reaffirmed it as an intrinsic principle of natural justice under Article 21 in State of Madhya Pradesh v. Chintaman Sadashiva Vaishampayan, wherein it held that:


“The rules of natural justice require that a party must be given the opportunity to adduce all relevant evidence upon which he relies, and further that, the evidence of the opposite party should be taken in his presence, and that he should be given the opportunity of cross-examining the witnesses examined by that party.”[xix]


The use of AI can therefore blur important due process rights of transparency and natural justice enshrined under Article 21, which are integral to a fair criminal trial.


Possible Caveats: Inspiration from the GDPR Jurisprudence and Way Forward


Having considered the critiques of AI-based technologies from a rights perspective, certain caveats, if added, might address these infirmities. One possible solution is to adopt a model similar to the mechanism under Article 22(1) of the European Union’s General Data Protection Regulation (“GDPR”).[xx] This would give the accused an opt-out right, allowing them to refuse the use of AI-enabled technologies in matters such as bail, sentencing and pretrial release. Such an opt-out protects informed consent, an integral part of Article 21 of the Indian Constitution. Similarly, for the accuracy of the datasets used, a verification mechanism can be set up along the lines of Article 5 of the GDPR. A rigorous verification mechanism, operating at every stage of profiling as well as decision-making, protects the principles of fairness and equal treatment of convicts and the accused under Article 14. Beyond a verification mechanism, the Supreme Court must issue guidelines ensuring human oversight to mitigate AI’s biases, so that there is a check-and-balance mechanism between human and non-human intelligence.


Though the use of AI technology in Indian jurisprudence is still nascent, this Punjab and Haryana High Court order being a first, these are some basic caveats and precautions drawn from the lessons learnt in other jurisdictions. Even with such caveats, however, the incorporation of AI into judicial decision-making seems precarious. Keeping in mind the critical rights violations that such an inclusion seemingly entails, it becomes hard to argue for it. Notwithstanding the speed offered by such systems, and the resultant benefits for an overburdened Indian judiciary, AI’s insertion into judicial decision-making regardless opens a Pandora’s box of concerns. Utilising this emergent technology may well be necessary to move forward, but the movement ought to be measured steps, not a headlong leap.

_______________________________

* Krishna Ravishankar & Parul Anand are 3rd-year students at the National Law University, Jodhpur.


The views expressed above are the authors' alone and do not represent the beliefs of Pith & Substance: The CCAL Blog.


[i] James E. Baker, The Centaur’s Dilemma: National Security Law for the Coming AI Revolution 22 (1st ed., Brookings Institution, 2020).

[ii] Bernard E. Harcourt, Beccaria’s On Crimes and Punishments: A Mirror on the History of the Foundations of Modern Criminal Law, in Foundational Texts in Modern Criminal Law 59 (1st ed., Oxford University Press, 2014).

[iii] Zahira Habibullah Sheikh and Ors. v. State of Gujarat and Ors., (2004) 5 SCC 353.

[iv] Vitor Conceicao, The Anchoring Effect of Technology in Navigation Teams, in Advances in Human Aspects of Transportation (2018).

[v] Comment on State v. Loomis, 130 Harv. L. Rev. 1530 (2017).

[vi] Buck v. Davis, 137 S. Ct. 759 (2017).

[vii] State v. Skaff, 152 Wis. 2d 48 (Ct. App. 1989).

[viii] Hannah McGlade & Vickie Hovane, The Mangolamara Case: Improving Aboriginal Community Safety and Healing, 6(27) Indig. L. Bull. 29 (2007).

[ix] Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.

[x] Uwe Peters, Algorithmic Political Bias in Artificial Intelligence Systems, 35 Phil. & Tech. 25 (2022).

[xi] Shyam Singh v. State of Rajasthan and Anr., 1973 Cri LJ 441.

[xii] Paul Novosad, In-Group Bias in the Indian Judiciary: Evidence from 5 Million Criminal Cases (Nov. 4, 2021), https://casi.sas.upenn.edu/events/paulnovosad.

[xiii] S. Greenstein, Preserving the Rule of Law in the Era of Artificial Intelligence (AI), 30 Artificial Intelligence & Law 291 (2022).

[xiv] Townsend v. Burke, 334 U.S. 736 (1948).

[xv] Brady v. Maryland, 373 U.S. 83 (1963).

[xvi] Crawford v. Washington, 541 U.S. 36 (2003).

[xvii] Maneka Gandhi v. Union of India, AIR 1978 SC 597.

[xviii] Shiv Kumar v. Hukam Chand, (1999) 7 SCC 467, ¶ 18.

[xix] State of M.P. v. Chintaman Sadashiva Vaishampayan, AIR 1961 SC 1623, ¶ 11.

[xx] Regulation (EU) 2016/679 (General Data Protection Regulation), art. 22(1).

